### General idea

Recently, a new JUnit version was released. I found many useful things in it. That said, there are also lots of features which, I think, won't be used much in the new version. By the way, here is the new version - JUnit 5.

### JUnit Goal

It's obvious that JUnit decided to make their product more open - by releasing instruments which allow JUnit users to create lots of extensions.

### Restrictions

JUnit 5 runs on Java 1.8 and above. It has lots of new features which do not work in JUnit 3 or 4. However, there is a project, JUnit Vintage, which allows tests written for older versions to run on JUnit 5.

### Story about another package

The new version has a new package - org.junit.jupiter.*. This was done mostly to separate the new version from the previous ones, from which it differs completely.

### Transformations

All core annotations are located under org.junit.jupiter.api:

- @Test - this annotation now comes from a completely new package.
- @TestFactory - compared to the older releases, @TestFactory is a replacement for parameterized tests.
- @BeforeEach - the new version of @Before. I wonder why it was renamed.
- @AfterEach - the new version of @After.
- @BeforeAll - a replacement for @BeforeClass. Must be static, as usual.
- @AfterAll -...
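A minimal sketch of how these lifecycle annotations fit together in JUnit 5 (the class and method names are mine, for illustration):

```java
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class LifecycleExampleTest {

    @BeforeAll
    static void beforeAll() {
        // runs once before all tests; must be static, like the old @BeforeClass
    }

    @BeforeEach
    void beforeEach() {
        // runs before every test - the JUnit 5 name for @Before
    }

    @Test
    void simpleTest() {
        // @Test now lives in org.junit.jupiter.api, not org.junit
    }

    @AfterEach
    void afterEach() {
        // runs after every test - the JUnit 5 name for @After
    }

    @AfterAll
    static void afterAll() {
        // runs once after all tests, replacing @AfterClass
    }
}
```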
Why do performance testing? Are you asking this question? Maybe you are asking "are we ready to go live?". You have enough functional tests, they work well, your business logic is well tested, and you are sure you won't have any trouble on your production servers. On the other side, you have lots of infrastructure work which is not covered by your tests. Let's say you have a few applications, a couple of databases, a cache layer and, of course, a load balancer layer. What about failover - are your load balancers working correctly? Oh, by the way, what if we run our load test for a long period of time - what will happen? Will you notice some performance degradation afterwards? Another thing: your application was successful enough to double its transactions - will its performance stay the same after that? Performance testing can answer these questions, and if you don't know the answers before running on production, then, eventually, your customers will answer them for you. Testing is all about risk: you have a choice - do it, or skip it. And if you are lucky enough to properly write your application without writing a single test - then you...
When I first started to look at Kafka as an event publishing system, my first question was "How do I create pub/sub and queue types of events?". Unfortunately, there was no quick answer for that, because of the way Kafka works. My previous article was about Kafka basics, which you can, of course, read to get more information about this cool commit log. In this article I'll try to explain why Kafka is different from other similar systems, how it differs, and try to answer all the interesting questions I had in the beginning.

Why is Kafka a commit log? Simply because Kafka works differently from other pub/sub systems. It's a commit log, where new messages are constantly being appended. Each message has its own unique id, which is called the offset.

Okay, and how do we consume this so-called "commit log"? A consumer stores one thing - the offset - and it is responsible for reading messages. Consider this console consumer:

```
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
```

As you see, this consumer will read the whole log from the beginning. Are messages ever deleted? Yes, after some time - there's a retention policy. So, how do we create a pub/sub model? Every consumer should...
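The excerpt cuts off here, but to make the offset idea concrete, here is a minimal Java sketch of a consumer, assuming the standard kafka-clients API (a recent version with `poll(Duration)`); the topic name and group id are made up for illustration:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OffsetConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // consumers sharing a group.id split the log between them (queue semantics);
        // consumers with distinct group.ids each see every message (pub/sub semantics)
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // start from the earliest offset, like --from-beginning in the console consumer
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // the offset is the message's unique position in the log
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```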
### Preface

Today we live in a world where services have no fixed IP addresses and everything changes minute by minute. As the number of services increases every day, we run into new problems.

### Story

Just imagine that you have a monolithic application which you want to rewrite into a microservices architecture. You start with a single service which, let's say, maintains profile functionality. You use MongoDB as a database. At this step you don't have any trouble, because it's a single service and it doesn't interact with the outside world. You develop the required endpoints, and everything works fine. Then imagine that your next step is to build another service, let's say a billing service, which uses PostgreSQL for storing transactions. Besides that, you need to know when the profile service receives a new PUT request and updates some profile. So you have two options - develop an external API for your case, or work with the PostgreSQL db directly. Say you chose to work with the db, and your problems begin at that point: you have coupled the profile and billing services together. You are signing a contract that, from now on, you need to think about two databases. And this sucks. Here's why: after some time you receive...
A few days ago I faced an issue with using a Java InputStream in parallel. Imagine the following situation: you have an InputStream which you need to use in parallel. First thing - you can't use it in parallel, because an InputStream keeps a pointer which stores the current position in the stream. A more realistic scenario is to make the first call asynchronous and leave the second as it is. But again, if we are working with streams, after we read one fully, there shouldn't be anything left to read, right? So, this article is about the problem of parallel reads and how to fix it. Watch this example to understand why a parallel stream read is a bad idea. The output will be similar to:

```
main thread line: Number1 Number2 Number3 Number4 Number5 Number6 Number7 Number8 Number9 Number...
t1 line: Number831 Number832 Number833 Number834 Number835 Number836 Number837 Number838 Number8...
```

As we see, some of the numbers end up in the first line, and some in the second. The org.ivanursul.ghost.Main thread could also be executed first. In that case the result is even funnier:

```
main thread line: Number1 Number2 Number3 Number4 Numbe
t1 line:
```

Because the main thread read everything first, there was nothing left for the t1 thread to read.

### TeeInputStream

The idea is...
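The excerpt cuts off here, but a rough sketch of the TeeInputStream idea (using Apache Commons IO; the names and data are mine, for illustration) looks like this: every byte the first reader pulls from the stream is copied into a branch, so a second reader can replay that copy afterwards instead of re-reading the exhausted stream.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.apache.commons.io.IOUtils;
import org.apache.commons.io.input.TeeInputStream;

public class TeeExample {

    public static void main(String[] args) throws IOException {
        InputStream source = new ByteArrayInputStream(
                "Number1 Number2 Number3".getBytes(StandardCharsets.UTF_8));

        // every byte read from 'tee' is also written to 'branch'
        ByteArrayOutputStream branch = new ByteArrayOutputStream();
        InputStream tee = new TeeInputStream(source, branch);

        // the first reader consumes the stream fully...
        String first = IOUtils.toString(tee, StandardCharsets.UTF_8);

        // ...and the second reader replays the copy captured in 'branch'
        InputStream replay = new ByteArrayInputStream(branch.toByteArray());
        String second = IOUtils.toString(replay, StandardCharsets.UTF_8);

        System.out.println("first : " + first);
        System.out.println("second: " + second);
    }
}
```

The trade-off of this approach is that the branch buffers everything in memory, so it only suits streams of a manageable size.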