How to trace your logs using SLF4J MDC

How do you search your logs for a problem request? For instance, you got a problematic response, with all its headers and body, and now you need to find the corresponding log entries. How would you do that? Personally, I found it useful to write a few words about MDC, the Mapped Diagnostic Context. In short, it is a concept for attaching request-specific information to your logs.

Usage

We will configure MDC in a Spring Boot application, using SLF4J on top of the Logback implementation. Putting it all together, we will create a unique requestId for each request in our system.

Components

We will use 4 components here: Spring Boot, Slf4j, Logback and Java.

Spring Boot

Spring Boot will be used for managing dependency injection and registering a pure Java Filter.

Slf4j

The Simple Logging Facade is used to follow abstraction principles. Additionally, the MDC class lives inside the slf4j dependency. Similar classes exist inside the log4j and logback dependencies.

Logback

Logback is used as one of the logging providers.

Pure Java

Java is used for writing a simple Java Filter.

Affected files

MDCFilter

    package org.startup.queue.filter;

    import org.slf4j.MDC;
    import org.springframework.stereotype.Component;

    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;
    import java.util.UUID;

    @Component
    public class MDCFilter implements Filter {

        @Override
        public void init(FilterConfig filterConfig) throws ServletException...
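The excerpt cuts off inside the class, so here is a minimal self-contained sketch of how such a filter might look; the "requestId" key name is my assumption for illustration, not necessarily the post's exact code.

    package org.startup.queue.filter;

    import org.slf4j.MDC;
    import org.springframework.stereotype.Component;

    import javax.servlet.*;
    import java.io.IOException;
    import java.util.UUID;

    // Minimal sketch of an MDC-populating filter; the "requestId" key is an
    // assumption for illustration, not necessarily the post's exact code.
    @Component
    public class MDCFilter implements Filter {

        private static final String REQUEST_ID = "requestId";

        @Override
        public void init(FilterConfig filterConfig) throws ServletException {
            // nothing to initialize
        }

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            // Tag everything logged on this thread with a unique request id
            MDC.put(REQUEST_ID, UUID.randomUUID().toString());
            try {
                chain.doFilter(request, response);
            } finally {
                // Clean up: servlet container threads are pooled and reused
                MDC.remove(REQUEST_ID);
            }
        }

        @Override
        public void destroy() {
            // nothing to clean up
        }
    }

With the filter registered, the id appears in every log line once the Logback pattern includes %X{requestId}, e.g. %d{HH:mm:ss} [%X{requestId}] %-5level %logger - %msg%n.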

Trying Amazon API Gateway, Lambda and DynamoDB

At the beginning of June 2016, I visited the JavaDay conference in Lviv, where I listened to a talk about serverless architecture. Today I’d like to try serverless. According to Martin Fowler, serverless is an architecture which uses third-party services or custom code on ephemeral containers, best known as Amazon Lambda. So, I understand it as the idea of running your backend code on demand: there won’t be any real servers like EC2; the Amazon infrastructure will start a container which will run your Lambda function. Consider reading this article for explanations. I’d like to try this infrastructure, so I decided to use Amazon API Gateway and Amazon Lambda for analyzing GitHub repositories: after each push to the repository I will examine all .md files and store them in Amazon DynamoDB. More generally, I’ll create the backend part of a self-written blogging platform. Who knows, maybe I will write the frontend part soon.

General plan

The approach isn’t clear enough yet, so let’s clarify what we need (a sketch of the storage step follows this list):

- We want to save articles on GitHub
- We need to have an API for our blog
- Since we need good latency for the API, reading articles from GitHub is not...
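To make the plan concrete, here is a minimal sketch of the kind of Lambda handler the storage step could end with, assuming an "Articles" DynamoDB table and a hypothetical Article input type (both my naming, not the post's code):

    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.document.DynamoDB;
    import com.amazonaws.services.dynamodbv2.document.Item;
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    // Hedged sketch of a Lambda handler storing an article into DynamoDB.
    // The "Articles" table, its attributes, and the Article input class are
    // assumptions for illustration, not the post's actual implementation.
    public class StoreArticleHandler implements RequestHandler<StoreArticleHandler.Article, String> {

        public static class Article {
            public String path;    // e.g. "posts/serverless.md"
            public String content; // raw Markdown pulled from the push event
        }

        private final DynamoDB dynamoDb =
                new DynamoDB(AmazonDynamoDBClientBuilder.defaultClient());

        @Override
        public String handleRequest(Article article, Context context) {
            // Store each pushed .md file under its repository path
            dynamoDb.getTable("Articles")
                    .putItem(new Item()
                            .withPrimaryKey("path", article.path)
                            .withString("content", article.content));
            return "stored " + article.path;
        }
    }

API Gateway would then map the incoming webhook payload onto the input type; the exact mapping depends on how the gateway integration is configured.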

11 short stories about creating good pull requests

Once upon a time there was a good developer. He produced good code, had good relations with his teammates and never broke the master branch. This guy followed 11 rules and lived a long, long life. So I’m posting them here, in case someone has a good reason to improve the way they work on pull requests.

Make them small

If you plan to change your source code, don’t make significant changes with dozens of classes involved. Make granular, little changes touching fewer than 10 files. It’s important because your teammates will do a review, and in the case of a big pull request it takes time to understand what you wrote, why you wrote it, and to find possible bugs. By making smaller pull requests, you let your teammates review your code more precisely and find possible mistakes there.

Give an initial point

Whether or not you manage to keep your pull requests small, always help your reviewers with a starting point. Where should they start? Which unit of logic should they review first? It won’t be a problem for you to give this information, and your teammates will review your PRs faster.

Write a couple of comments

If needed, support...

How to limit the number of requests

Did you ever need to limit the requests coming to your endpoints? Say your maximum design capacity is near 10k requests/second, and you don’t want to probe your service at higher rates. I chose to use RateLimiter, a small class from Guava. The init process isn’t huge, just a couple of lines:

    RateLimiter rateLimiter = RateLimiter.create(10); // 10 requests/second

Then place the rateLimiter wherever you need to limit requests:

    boolean isAcquired = rateLimiter.tryAcquire();
    if (!isAcquired) {
        throw new NotSoFastBuddyException(...);
    }

Links

Guava
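For completeness, here is a self-contained sketch of the same idea guarding a Spring MVC endpoint; the controller, the path and the 429 response are my illustrative choices, not the post's code:

    import com.google.common.util.concurrent.RateLimiter;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    // Illustrative sketch: the endpoint and 429 response are assumptions,
    // only the RateLimiter calls come from the post itself.
    @RestController
    public class LimitedController {

        // Allow at most 10 permits (requests) per second across all callers
        private final RateLimiter rateLimiter = RateLimiter.create(10);

        @GetMapping("/limited")
        public ResponseEntity<String> limited() {
            // tryAcquire() returns immediately instead of blocking for a permit
            if (!rateLimiter.tryAcquire()) {
                return ResponseEntity.status(HttpStatus.TOO_MANY_REQUESTS)
                        .body("Not so fast, buddy");
            }
            return ResponseEntity.ok("OK");
        }
    }

Returning 429 Too Many Requests instead of throwing lets well-behaved clients back off and retry.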

Better application deployment. Monitoring your application with Graphite/Grafana

Intro

In the previous part I explained how to use Terraform, Ansible, Docker and Spring Boot to deploy applications in the cloud. Today I’d like to introduce something which will work as a monitoring tool inside our infrastructure. If you follow my blog posts, you should remember the post about the Spring and Dropwizard module, where I explained how you can get meaningful metrics from your app. But wait, why should you even do monitoring, and can you skip this part? Well, when I first came to the project, where a wide variety of metrics was present in each of the microservices in the ecosystem, I had a feeling that this was something I wouldn’t use in the future. I was right, and I didn’t use them...until my first incident, during which I had to understand what was going on. I started looking for explanations and found that our service was sending many 500 statuses. Then I found out that one dependent service, which we use to get some part of the response, was broken, and the problem was not on our side. Since that period I have introduced a couple of custom dashboards, and during incidents/crashes I can answer most...
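As a reminder of what that setup looks like, here is a hedged sketch of wiring a Dropwizard MetricRegistry to a Graphite reporter; the host, prefix and reporting interval below are placeholder assumptions, not this post's actual configuration:

    import com.codahale.metrics.MetricRegistry;
    import com.codahale.metrics.graphite.Graphite;
    import com.codahale.metrics.graphite.GraphiteReporter;

    import java.net.InetSocketAddress;
    import java.util.concurrent.TimeUnit;

    // Hedged sketch: host, port, prefix and interval are placeholder assumptions.
    public class MetricsConfig {

        public static GraphiteReporter startReporting(MetricRegistry registry) {
            // 2003 is Graphite's plaintext protocol port; the hostname is made up
            Graphite graphite = new Graphite(new InetSocketAddress("graphite.local", 2003));
            GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                    .prefixedWith("my-service")           // namespace metrics per service
                    .convertRatesTo(TimeUnit.SECONDS)
                    .convertDurationsTo(TimeUnit.MILLISECONDS)
                    .build(graphite);
            reporter.start(10, TimeUnit.SECONDS);         // push every 10 seconds
            return reporter;
        }
    }

Once metrics flow into Graphite, Grafana dashboards can be built on top of them for exactly the kind of incident triage described above.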