## Motivation

The reason for writing this article is a gap in my understanding of Java concurrency. That's why I decided to create a post about some structures which I implemented myself. You may well agree or disagree with me; I encourage you to post your thoughts in the comments so we can discuss them there.

## List of structures

Implemented:

- Publish-Subscribe Queue

Will implement later:

- Fixed Thread Pool
- Cached Thread Pool
- CountDownLatch
- CyclicBarrier
- Phaser
- Semaphore
- Exchanger
- Rate Limiter
- Lock, ReentrantLock, ReentrantReadWriteLock
- ConcurrentHashMap
- AtomicInteger (for fun)

## Publish/Subscribe

What is Publish-Subscribe, in a nutshell? There is someone who has something to share with others and wants to be sure that everyone receives the message. The idea of the following structure is to have a publish-subscribe mechanism which acts asynchronously, without using high-level classes from the java.util.concurrent package:

- Consumer - an entity which consumes messages from the main thread.
- PubSubModel - the main thread, which sends messages to the consumers. Formally, you can treat it as a producer.

Code can be found here. Let's go over this code and look at all the details (a minimal sketch follows below): First of all, there will be one main thread which will read lines from the console and act as a producer, so...
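As a reference point, here is a minimal sketch of such a mechanism, built only on `synchronized`/`wait`/`notify` with nothing from `java.util.concurrent`. The class names `Consumer` and `PubSubModel` come from the article; everything else (field and method names, the number of consumers) is my own illustration, not the original code:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;
import java.util.Scanner;

class Consumer implements Runnable {
    // Each subscriber owns a private queue guarded by its own monitor.
    private final Queue<String> queue = new LinkedList<>();

    // Called by the publisher thread to hand a message to this consumer.
    void publish(String message) {
        synchronized (queue) {
            queue.add(message);
            queue.notify(); // wake the consumer if it is waiting
        }
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            String message;
            synchronized (queue) {
                while (queue.isEmpty()) {
                    try {
                        queue.wait(); // block until a message arrives
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                message = queue.poll();
            }
            System.out.println(Thread.currentThread().getName() + " got: " + message);
        }
    }
}

public class PubSubModel {
    public static void main(String[] args) {
        List<Consumer> consumers = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            Consumer consumer = new Consumer();
            consumers.add(consumer);
            new Thread(consumer, "consumer-" + i).start();
        }
        // The main thread acts as the producer: every line read from the
        // console is delivered to every subscriber, which is the essence
        // of publish-subscribe.
        try (Scanner in = new Scanner(System.in)) {
            while (in.hasNextLine()) {
                String line = in.nextLine();
                for (Consumer consumer : consumers) {
                    consumer.publish(line);
                }
            }
        }
    }
}
```

The key design point is that delivery is asynchronous: `publish` only enqueues and returns, so a slow consumer never blocks the producer or the other subscribers.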
When it comes to the problem of migrating database structure, some of you may think of relational databases: there is a strict schema, and to remove something (a field, table, index, etc.) you need to take action: execute an SQL statement. When you work with schema-less databases, it may look like you don't need those migrations. But to be honest, are schema-less databases really schemaless? In fact, you have more freedom in column- and document-based databases, but sooner or later you will have to modify some of the results of your work: remove an index, transform a column format, etc. That's why in this article I would like to review the available tools for Mongo migrations.

## Mongobee

If you use Spring in your project, then Mongobee should be the most suitable tool for you. The idea is that you write Java methods (changesets) which describe what changes need to be done. Each method annotated with @ChangeSet is picked up and applied to the database, and Mongobee stores the changeset history in the dbchangelog collection (see the sketch below). If you are a Spring person and prefer Java config, then you should choose this tool. You have two options for how to run Mongobee - inside a Spring container...
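A minimal sketch of a changelog class, following the @ChangeLog/@ChangeSet annotations described above; the changeset id, collection name, and index are my own illustration, and I assume Spring Data's MongoTemplate is available as a changeset parameter:

```java
import com.github.mongobee.changeset.ChangeLog;
import com.github.mongobee.changeset.ChangeSet;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.index.Index;

@ChangeLog
public class DatabaseChangelog {

    // Each @ChangeSet runs exactly once; Mongobee records its id, author
    // and order in the dbchangelog collection so it is never re-applied.
    @ChangeSet(order = "001", id = "addArticleSlugIndex", author = "demo")
    public void addArticleSlugIndex(MongoTemplate mongoTemplate) {
        mongoTemplate.indexOps("articles")
                     .ensureIndex(new Index("slug", Sort.Direction.ASC).unique());
    }
}
```

Mongobee itself is then typically registered as a bean built from the Mongo URI, with setChangeLogsScanPackage(...) pointing at the package that holds such changelog classes, so migrations run on application startup.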
Have you ever had a need to use some values from application.properties or application.yml? How did you get them out? Personally, I always used the @Value annotation:

```java
@Value("${graphite.host}")
private String graphiteHost;
```

It wasn't the best way to work with my properties; however, I didn't know a better approach. Then I found @ConfigurationProperties - an annotation from the Boot package which has everything you need to map your properties.

## Given

Let's say your application.yml looks like the following:

```yaml
graphite:
  enabled: true
  host: localhost
  port: 2003
  amountOfTimeBetweenPolls: 20000
```

## When

You need to create a bunch of classes which you will be autowiring in all parts of your code. I'm using Project Lombok to skip Java formalities; if you are not, then create getters and setters for your classes.

```java
package org.rngr.properties;

import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;

import javax.validation.constraints.NotNull;

@ConfigurationProperties(ignoreUnknownFields = true)
@Data
public class ApplicationProperties {
    @NotNull
    private GraphiteProperties graphite;
}
```

Pay attention to the @ConfigurationProperties annotation; it plays a key role here. Don't forget about the nested class:

```java
package org.rngr.properties;

import lombok.Data;

@Data
public class GraphiteProperties {
    private boolean enabled;
    private String host;
    private int port;
    private int amountOfTimeBetweenPolls;
}
```

In the end, you need to enable configuration properties:

```java
package org.rngr;

import lombok.extern.slf4j.Slf4j;
import org.rngr.config.*;
import org.rngr.properties.ApplicationProperties;
import org.springframework.beans.factory.annotation.Autowired;
import ...
```
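For reference, here is a minimal sketch of the enabling step, assuming the standard @EnableConfigurationProperties approach; all names except ApplicationProperties are illustrative, not the original code:

```java
package org.rngr;

import org.rngr.properties.ApplicationProperties;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.properties.EnableConfigurationProperties;

@SpringBootApplication
// Registers ApplicationProperties as a bean and binds the graphite.*
// section of application.yml onto it at startup.
@EnableConfigurationProperties(ApplicationProperties.class)
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```

After that, any bean can autowire ApplicationProperties and read, say, getGraphite().getHost() instead of scattering @Value("${graphite.host}") expressions across the codebase.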
How do you use your logs to search for problem requests? For instance, you get a problem response with all headers and the response body, and you need to find the appropriate logs. How would you do that? Personally, I found it useful to write some words about MDC - Mapped Diagnostic Context. In short, it is a concept of mapping request-specific information.

## Usage

We will configure MDC in a Spring Boot application. We will use SLF4J on top of the Logback implementation. Putting it all together, we will create a unique requestId for each request in our system.

## Components

We will use 4 components here: Spring Boot, Slf4j, Logback and Java.

### Spring Boot

Spring Boot will be used for managing dependency injection and registering a pure Java Filter.

### Slf4j

The Simple Logging Facade is used to follow abstraction principles. Additionally, the MDC class is located inside the slf4j dependency. Similar classes are inside the log4j and logback dependencies.

### Logback

Logback is used as one of the logging providers.

### Pure Java

Java is used for writing a simple Java Filter.

## Affected files

### MDCFilter

```java
package org.startup.queue.filter;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;

import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.UUID;

@Component
public class MDCFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException ...
```
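A hedged sketch of the interesting part of such a filter, the doFilter body; this is my own illustration of the technique, using the requestId key the article sets out to create:

```java
// Sketch only: generate one UUID per request and expose it to every
// log statement made on this thread while the request is processed.
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
        throws IOException, ServletException {
    try {
        MDC.put("requestId", UUID.randomUUID().toString());
        chain.doFilter(request, response);
    } finally {
        // Servlet threads are pooled and reused, so always clean up the MDC.
        MDC.remove("requestId");
    }
}
```

On the Logback side, the stored value can be printed with the %X{requestId} conversion word in the encoder pattern, so every log line of a request carries the same id and can be grepped in one go.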
At the beginning of June 2016, I visited the JavaDay conference in Lviv, where I listened to a talk about serverless architecture. Today I'd like to try serverless. According to Martin Fowler, serverless is an architecture which uses third-party services or custom code on ephemeral containers, best known from Amazon Lambda. So I understand it as an idea to run your backend code on demand: there won't be any real servers like EC2; the Amazon infrastructure will start a container which will run your Lambda function. Consider reading this article for explanations.

I'd like to try this infrastructure. I decided to use Amazon API Gateway and Amazon Lambda for analyzing GitHub repositories: after each push to the repository I will examine all .md files and store them in Amazon DynamoDB. More generally, I'll create the backend part of a self-written blogging platform. Who knows, maybe I will write the frontend part soon.

## General plan

The approach isn't clear enough yet, so let's clarify what we need (a minimal handler sketch follows after this list):

- We want to save articles on GitHub
- We need to have an API for our blog
- Since we need to have good latency for the API, reading articles from GitHub is not...
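To make the moving parts concrete, here is a minimal sketch of a Lambda handler for the push webhook described above, assuming the aws-lambda-java-core and DynamoDB SDK dependencies; the table, key, and class names are my own illustration, not the article's code:

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

public class PushEventHandler implements RequestHandler<Map<String, Object>, String> {

    private final DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient());

    @Override
    public String handleRequest(Map<String, Object> pushEvent, Context context) {
        // A real GitHub push payload lists the changed files per commit;
        // here we only sketch storing a single .md article document.
        Table articles = dynamoDB.getTable("articles");
        articles.putItem(new Item()
                .withPrimaryKey("path", "posts/example.md")
                .withString("content", "...fetched from GitHub..."));
        return "OK";
    }
}
```

API Gateway would expose this handler as the webhook endpoint GitHub calls on each push, so no server runs between deliveries.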