Once upon a time there was a good developer. He produced good code, had good relations with his teammates, and never broke the master branch. This guy followed 11 rules and lived a long, long life. So I'm posting them here, in case someone has a good reason to improve the way they work on pull requests.
Make them small
If you plan to change your source code, don't make significant changes with dozens of classes involved. Make granular, small changes containing fewer than 10 files. It's important because your teammates will do a review, and in the case of a big pull request it takes time to understand what you wrote, why you wrote it, and to find possible bugs. By making smaller pull requests, you let your teammates review your code more precisely and catch possible mistakes.
Give an initial point
Whether or not you managed to keep your pull request small, always help your reviewers with a starting point. Where should they start? Which unit of logic should they review first? It won't be a problem for you to give this information, and your teammates will review your PRs faster.
Write a couple of comments
If needed, support...
Have you ever needed to limit the requests coming to your endpoints? Say your maximum design capacity is around 10k requests/second, and you don't want to stress your service at higher rates.
I chose to use RateLimiter, a small class from Guava.
The init process isn't complicated, just a couple of lines:
RateLimiter rateLimiter = RateLimiter.create(10); // 10 requests/second
Then just place the rateLimiter wherever you need to limit requests:
boolean isAcquired = rateLimiter.tryAcquire();
if (!isAcquired) {
    throw new NotSoFastBuddyException(...);
}
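To put the two snippets together, here's a minimal, self-contained sketch. The class name, the handler method and the exception class are just placeholders I made up for illustration; the only real dependency is Guava's RateLimiter.

import com.google.common.util.concurrent.RateLimiter;

// Placeholder standing in for the NotSoFastBuddyException used above
class NotSoFastBuddyException extends RuntimeException {
    NotSoFastBuddyException(String message) {
        super(message);
    }
}

public class ThrottledEndpoint {

    // 10 permits per second, same as RateLimiter.create(10) above
    private final RateLimiter rateLimiter = RateLimiter.create(10);

    public String handleRequest(String payload) {
        // tryAcquire() returns immediately: true only if a permit is available right now
        if (!rateLimiter.tryAcquire()) {
            throw new NotSoFastBuddyException("Rate limit of 10 requests/second exceeded");
        }
        return "processed: " + payload;
    }
}

One thing to keep in mind: tryAcquire() never blocks. If you'd rather make the caller wait for a permit instead of rejecting the request, use acquire(), which blocks until a permit becomes available.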
Links
Guava
Intro
In the previous part I explained how to use Terraform, Ansible, Docker and Spring Boot to deploy applications in the cloud. Today I'd like to introduce something that will work as a monitoring tool inside our infrastructure. If you follow my blog posts, you should remember a post about Spring and the Dropwizard module - there I explained how you can get meaningful metrics from your app. But wait, why should you even do monitoring, and can you skip this part? Well, when I first came to the project, where a wide variety of metrics was present in each of the microservices in the ecosystem, I had a feeling that this was something I wouldn't use in the future. I was right, and I didn't use them…until my first incident, during which I had to understand what was going on. I started looking for an explanation and found that our service was sending many 500 statuses. Then I found out that one dependent service, which we use to get part of the response, was broken, and the problem was not on our side. Since then I have introduced a couple of custom dashboards, and during incidents/crashes I can answer most...
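Just as a refresher on what those metrics look like in code, here's a minimal sketch using Dropwizard's metrics-core library; the class, metric names and console reporter here are only placeholders for illustration, not the setup from my actual project.

import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

import java.util.concurrent.TimeUnit;

public class MetricsSketch {

    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();

        // A meter tracks throughput: a total count plus 1/5/15-minute moving rates
        Meter requests = registry.meter("http.requests");

        // Dump all registered metrics to stdout every 5 seconds;
        // in a real setup you'd ship them to a dashboard instead
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .build();
        reporter.start(5, TimeUnit.SECONDS);

        // Simulate some incoming traffic
        for (int i = 0; i < 200; i++) {
            requests.mark();
            Thread.sleep(25);
        }
    }
}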
I use ImageMagick for resizing images in one of my projects, and I needed a command to monitor changes in a temporary folder. So I found watch:
watch -d -n 0.3 'ls -l | grep ivanursul'
This command runs ls in the current folder every 0.3 seconds and lists the files created by the user ivanursul; the -d flag highlights the differences between consecutive runs.
Links
Linux Watch
Just creating instances in the cloud is an intermediate result. Yes, you no longer need to create them manually, but that's not the end goal. We need to configure them somehow, deploy logic, restart them, etc. That's why this article describes how to work with Terraform instances using the Ansible tool.
What do you need to do before starting
Create a Terraform instance - here's how
Install Ansible: brew install ansible
How Ansible knows about Terraform
Let's take a close look at how Ansible will recognize Terraform instances. Normally, this is done by specifying instances in an inventory file - the idea is to have a file which contains all the IP addresses, organized by group. Let's say you have one load balancer and three instances of an application; then you need the following inventory file:
[lb]
lb.example.com
[app]
one.example.com
two.example.com
three.example.com
Because we are creating instances on the fly, we don't have a predefined set of IP addresses to use. That's why we should use a dynamic inventory. As we're using Terraform, we need a tool which knows how to read instances from the terraform.tfstate file and present them to Ansible. Personally, I found it useful to use...