How to log requests and their payloads in Spring

From time to time we may need to log our requests in order to get more information about them. Personally, I'm writing this short article because we needed to find out why Jackson was returning a 400 error status. Luckily, it's very easy to log your requests. Spring has a class, AbstractRequestLoggingFilter, which has three concrete subclasses you can potentially use:

- ServletContextRequestLoggingFilter
- Log4jNestedDiagnosticContextFilter
- CommonsRequestLoggingFilter

The last one is the guy we need. Configuring this class is pretty straightforward: just declare it in your context:

```java
@Bean
public CommonsRequestLoggingFilter requestLoggingFilter() {
    CommonsRequestLoggingFilter loggingFilter = new CommonsRequestLoggingFilter();
    loggingFilter.setIncludeClientInfo(true);
    loggingFilter.setIncludeQueryString(true);
    loggingFilter.setIncludePayload(true);
    return loggingFilter;
}
```

And yes, don't forget to add one more line to your application.properties file:

```
logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG
```

And voilà, your requests are being logged:

```
2016-10-24 21:50:45.520 DEBUG 83061 --- [nio-8080-exec-3] o.s.w.f.CommonsRequestLoggingFilter : Before request [uri=/api/requests;client=0:0:0:0:0:0:0:1]
... // My logs :)
2016-10-24 21:50:45.526 DEBUG 83061 --- [nio-8080-exec-3] o.s.w.f.CommonsRequestLoggingFilter : After request [uri=/api/requests;client=0:0:0:0:0:0:0:1;payload={ "title": "..."}]
```
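To make the before/after logging pattern concrete without pulling in Spring, here is a minimal stdlib-only sketch of what such a filter does: log a "Before request" line, let the request proceed while the payload is captured, then log an "After request" line that includes the payload. All class and method names here are illustrative, not Spring's actual internals.

```java
import java.util.ArrayList;
import java.util.List;

// Dependency-free sketch of the before/after request logging pattern.
public class RequestLoggingSketch {
    static final List<String> LOG = new ArrayList<>();

    static String handle(String uri, String client, String payload) {
        LOG.add("Before request [uri=" + uri + ";client=" + client + "]");
        // The real filter chain (your controllers) would run here.
        String response = "handled:" + payload;
        LOG.add("After request [uri=" + uri + ";client=" + client + ";payload=" + payload + "]");
        return response;
    }

    public static void main(String[] args) {
        handle("/api/requests", "0:0:0:0:0:0:0:1", "{ \"title\": \"...\"}");
        LOG.forEach(System.out::println);
    }
}
```

The real CommonsRequestLoggingFilter has to buffer the request body to be able to log it after the request, which is why payload logging is opt-in via `setIncludePayload(true)`.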

Monitoring your Spring application using Dropwizard metrics module

Yes, it's true that Spring is better than Dropwizard. I've worked with both frameworks and can truly say that Dropwizard has a weak Guice-based dependency injection setup, Jersey (which I don't like at all), and other shortcomings. But there's one thing I like in Dropwizard, and which I'd be happy to see in the Spring Framework - the Dropwizard Metrics module. It has a very rich set of instruments which can help you understand your application's behaviour: different gauges, timers, counters, histograms, and healthchecks. The question is how to import Dropwizard Metrics into Spring. Basically, you need to add three dependencies: one as an adapter for Dropwizard Metrics, another for JVM metrics, and the last one for the Filter you will use to fetch these metrics over HTTP:

- com.ryantenney.metrics:metrics-spring:3.1.2
- io.dropwizard.metrics:metrics-jvm:3.1.2
- io.dropwizard.metrics:metrics-servlets:3.1.2

Include those libraries with whatever build tool your application uses. I use Gradle, so my build.gradle looks like the following: Create Metrics Listener One thing you need to know about the Dropwizard Metrics module is that it requires the MetricRegistry and HealthCheckRegistry classes to be instantiated under the ServletContext. That's why we'll take them from the constructor and initialise the context. We will create a separate bean for this listener soon. Creating...
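The three coordinates above can be expressed as a Gradle dependencies block. This is a sketch using the 2016-era `compile` configuration; on newer Gradle versions you would use `implementation` instead:

```groovy
dependencies {
    compile 'com.ryantenney.metrics:metrics-spring:3.1.2'
    compile 'io.dropwizard.metrics:metrics-jvm:3.1.2'
    compile 'io.dropwizard.metrics:metrics-servlets:3.1.2'
}
```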

Trying new JUnit 5 - let's extend everything

General idea Recently, a new JUnit version has been released. I found many useful things in it. Besides, there are lots of less useful features; at least, I think some of them won't see much use. By the way, here is the new version - JUnit 5. JUnit Goal It's obvious that JUnit decided to make their product more open - by releasing instruments which allow JUnit users to create lots of extensions. Restrictions JUnit 5 runs on Java 1.8 or newer. JUnit 5 has lots of new features which do not work in JUnit 3 or 4. However, there is a project, JUnit Vintage, which allows tests written for older versions to run on JUnit 5. Story about another package The new version has a new package - org.junit.jupiter.*. This was done mostly to separate the new version from the previous versions, from which it completely differs. Transformations All core annotations are located under org.junit.jupiter.api:

- @Test - now this annotation comes from a completely new package
- @TestFactory - compared to the older approach, @TestFactory is a replacement for parameterized tests
- @BeforeEach - new version of @Before. I wonder why it was renamed.
- @AfterEach - new version of @After.
- @BeforeAll - replacement for @BeforeClass. Must be static, as usual.
- @AfterAll -...
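The @TestFactory idea is that tests are generated at runtime rather than declared one by one. Here is a dependency-free sketch of that concept - build a named check per input value and execute them all. This is only an illustration; real JUnit 5 code would return a `Stream<DynamicTest>` from a `@TestFactory` method instead.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual sketch of dynamic (factory-generated) tests, without JUnit itself.
public class DynamicTestSketch {
    public static Map<String, Boolean> runAll() {
        Map<String, Boolean> results = new LinkedHashMap<>();
        int[] inputs = {1, 2, 3, 4};
        for (int n : inputs) {
            // One "test" generated per input, analogous to
            // DynamicTest.dynamicTest(displayName, executable) in JUnit 5.
            results.put("square of " + n + " is non-negative", n * n >= 0);
        }
        return results;
    }

    public static void main(String[] args) {
        runAll().forEach((name, passed) ->
                System.out.println(name + " -> " + (passed ? "OK" : "FAIL")));
    }
}
```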

Do we really know our application performance behaviour?

Why do performance testing? Are you asking this question? Maybe you are asking "are we ready to go live?". You have enough functional tests, they are working well, your business logic is well tested, and you are sure you won't have any trouble on your production servers. On the other side, you have lots of infrastructure work which is not covered by your tests. Let's say you have a few applications, a couple of databases, a cache layer, and of course a load balancer layer. What about failover - is your load balancer working correctly? And by the way, what happens if we run our load test for a long period of time? Will you notice some performance degradation afterwards? Another thing: your application was successful enough to double its transactions - will its performance behave the same after that? Performance testing can answer these questions, and if you don't know the answers before going to production, then, eventually, your customers will answer them for you. Testing is all about risk - you have a choice: do it, or skip it. And if you are lucky enough to properly write your application without writing a single test - then you...

Kafka differences: topics, partitions, publish/subscribe and queue.

When I first started to look at Kafka as an event publishing system, my first question was 'How do I create pub/sub and queue types of events?'. Unfortunately, there was no quick answer, because of the way Kafka works. My previous article was about Kafka basics, which you can, of course, read to get more information about this cool commit log. In this article I'll try to explain why Kafka is different from other similar systems, how it differs, and answer all the interesting questions I had in the beginning. Why is Kafka a commit log? Simply because Kafka works differently from other pub/sub systems. It's a commit log, to which new messages are constantly appended. Each message has its own unique id, which is called the offset. Okay, and how do you consume this so-called 'commit log'? A consumer stores one thing - its offset - and it is responsible for reading messages. Consider this console consumer:

```
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
```

As you can see, this consumer will read the whole log from the beginning. Are messages ever deleted? Yes, after some time - there's a retention policy. So, how do you create the pub/sub model? Every consumer should...
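The offset idea above can be shown with a toy model: a topic as an append-only log where a message's offset is simply its position, and each consumer group tracks its own offset. Two groups reading the same log gives pub/sub behaviour (each group sees every message); consumers sharing one group's offset gives queue behaviour (messages are split between them). This is purely illustrative and is not Kafka's actual API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy commit log: append-only message list plus one stored offset per group.
public class CommitLogSketch {
    private final List<String> log = new ArrayList<>();          // the topic (single partition)
    private final Map<String, Integer> groupOffsets = new HashMap<>();

    public void publish(String message) { log.add(message); }    // append-only, like Kafka

    // Reads the next unread message for the given consumer group, or null.
    public String poll(String group) {
        int offset = groupOffsets.getOrDefault(group, 0);
        if (offset >= log.size()) return null;
        groupOffsets.put(group, offset + 1);
        return log.get(offset);
    }

    public static void main(String[] args) {
        CommitLogSketch topic = new CommitLogSketch();
        topic.publish("m1");
        topic.publish("m2");
        // pub/sub: two independent groups each see every message
        System.out.println(topic.poll("groupA"));  // m1
        System.out.println(topic.poll("groupB"));  // m1
        // queue: further polls within one group continue from its own offset
        System.out.println(topic.poll("groupA"));  // m2
    }
}
```

Note that consuming does not delete anything - the log is untouched, only the group's offset moves, which is why retention is a time-based policy rather than a consequence of reading.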