How we wrote a Telegram chat bot to keep track of co-working expenses

I guess everyone knows what chat bots are. So do I. I had been looking for a chat bot idea for many months, and finally I ended up with one for my co-working space, which I founded together with my friends and where I currently spend most of my life. During the month we collect a list of expenses, then sum them up, divide the total by the number of people working in the office and, finally, reach out to every individual and ask them to pay their share of the general expenses. We were tracking expenses in a Google Spreadsheet and, in general, it worked fine. However, I noticed that in most cases people are too lazy to open the spreadsheet and enter their expenses there. So my obsession with staying up to date with modern technologies met a tiny problem with office expenses: I invested 3-4 hours of my time in writing a chat bot, and now we are using it. I decided to stick with Telegram, a messenger which has become very popular over the last few years, and I used Java as the programming language. Telegram has an API for chat bots, so I did some research...
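The article itself walks through the actual bot code; as a taste of the approach, below is a minimal sketch of a long-polling expense bot built with the org.telegram:telegrambots library. The bot name, the /spent command, and the reply-only handling are illustrative assumptions (package paths also vary between library versions), not the exact code from the post.

import org.telegram.telegrambots.bots.TelegramLongPollingBot;
import org.telegram.telegrambots.meta.api.methods.send.SendMessage;
import org.telegram.telegrambots.meta.api.objects.Update;
import org.telegram.telegrambots.meta.exceptions.TelegramApiException;

// Minimal sketch: a long-polling bot that acknowledges "/spent <amount>" messages.
public class ExpenseBot extends TelegramLongPollingBot {

    @Override
    public void onUpdateReceived(Update update) {
        if (update.hasMessage() && update.getMessage().hasText()) {
            String text = update.getMessage().getText();
            if (text.startsWith("/spent ")) {
                double amount = Double.parseDouble(text.substring("/spent ".length()));
                // The real bot would persist the expense somewhere; here we only reply.
                SendMessage reply = new SendMessage();
                reply.setChatId(update.getMessage().getChatId().toString());
                reply.setText("Recorded expense: " + amount);
                try {
                    execute(reply);
                } catch (TelegramApiException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    @Override
    public String getBotUsername() { return "coworking_expense_bot"; } // hypothetical name

    @Override
    public String getBotToken() { return System.getenv("BOT_TOKEN"); }
}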

How we wrote a chicken egg counter on a Raspberry Pi

How it started
Besides my main work on Upwork, I quite often pick up different projects. This time I found a project where I had to write a program that recognizes chicken eggs on a factory production line. The customer wanted to install the application on a computer with a web camera, mount the camera above the production line, and have the application count the eggs and send the counts to the database. He also wanted to run this program on a cheap computer. The quality of the network in the factory isn't stable, so the program had to be resilient enough to withstand network issues. There were enough challenges for me, so I decided to take on this project. The biggest one was that I had no serious experience with OpenCV and image recognition, so I wanted to test whether I could dive deep into an unknown field and come back with a successful result. The customer wanted a 99% recognition rate. This whole post is the story of how the application was designed and written, and what problems I faced during development. I will try to explain each architectural decision, from the beginning to the end of the...
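The post walks through the pipeline step by step; for a rough idea of the core technique, here is a minimal sketch of counting egg-like circular shapes in a single frame with OpenCV's Java bindings. The blur and Hough transform parameters below are illustrative guesses, not the values tuned for the factory setup.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

// Minimal sketch: count roughly circular blobs (eggs) in one frame
// using a median blur followed by a Hough circle transform.
public class EggCounter {
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // native OpenCV must be installed
    }

    public static int countEggs(Mat frame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.medianBlur(gray, gray, 5); // suppress noise before circle detection

        Mat circles = new Mat();
        Imgproc.HoughCircles(
                gray, circles, Imgproc.HOUGH_GRADIENT,
                1.0,   // inverse accumulator resolution
                30.0,  // min distance between egg centers (assumed)
                100.0, // Canny high threshold (assumed)
                30.0,  // accumulator threshold (assumed)
                10, 50 // min/max egg radius in pixels (assumed)
        );
        return circles.cols(); // one column per detected circle
    }

    public static void main(String[] args) {
        Mat frame = Imgcodecs.imread("frame.jpg"); // hypothetical captured frame
        System.out.println("Eggs detected: " + countEggs(frame));
    }
}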

What I learned from AWS Lambda

For the past month I had a chance to work with AWS Lambda. During this period I collected a lot of thoughts about the technology and would like to share them with you.
Getting started
So, if you don't know anything about AWS, I recommend starting with the official docs: Amazon has very rich documentation which explains all the details about Lambda. If you don't want to read the whole thing: Lambda is a technology which allows you to deploy your code as so-called Lambda functions, containers running somewhere inside the AWS infrastructure. This gives a lot of benefits: you pay only when your Lambda is actually invoked. The pricing is relatively low and, as usual, AWS has a free tier, which includes 1M free requests per month and 400,000 GB-seconds of compute time per month. The free tier description is a bit confusing; I recommend using this table:

Memory (MB)   Free tier seconds per month   Price per 100ms ($)
128           3,200,000                     0.000000208
…             …                             …
512           800,000                       0.000000834
…             …                             …
1024          400,000                       0.000001667
…             …                             …
2048          200,000                       0.000003334
…             …                             …
3008          136,170                       0.000004897

Basically, for each particular...
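For a sense of what "deploying your code as a Lambda function" means in practice, here is a minimal sketch of a Java handler using the official aws-lambda-java-core library. The String input and output types and the handler name are an illustrative choice; any POJO or stream-based handler works.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Minimal sketch of a Lambda handler: AWS instantiates this class inside a
// managed container and calls handleRequest for every invocation.
public class GreetingHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        // Billed duration depends on how long this method runs and on the
        // memory size configured for the function (see the pricing table above).
        context.getLogger().log("Invoked with: " + name);
        return "Hello, " + name;
    }
}

You then package this as a jar and point the function's handler setting at the class, and AWS takes care of provisioning and scaling the containers.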

Are you sure a microservices architecture is for you?

Today marks the start of the fourth year of my journey with microservices. I began with purely theoretical knowledge about this architecture and have ended up with much deeper, practical experience. While I believe I can still find new problems in microservices, I have prepared an article with the problems I had a chance to face in my own work, told as short stories. If you don't agree with them and think they could have been identified and fixed earlier, that's okay: I believe you can't find two microservices setups with identical problems. Every organization has its own path and its own problems, so things that failed in one microservices architecture can be avoided in another.
Isolated messaging layer
This story is about the messaging layer. You know the story: every cool microservices architecture has to have its own messaging layer, the idea being that you have an asynchronous way of communication between your services (a minimal sketch of the pattern follows below). I spent some time explaining this in the Microservices interaction at scale using Apache Kafka article. So, the perfect scenario assumes that you have a bunch of microservices and some...
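To make the messaging-layer idea concrete, here is a minimal sketch of the asynchronous pattern with a Kafka producer. The topic name, key, and payload are hypothetical; the point is that the publishing service fires the event and moves on, while whichever service subscribes to the topic processes it on its own schedule.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Minimal sketch: one microservice publishes an event asynchronously;
// a consumer in another service picks it up from the "orders" topic later.
public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Fire-and-forget: the caller is never blocked on the consumer side.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"status\":\"created\"}"));
        }
    }
}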

Kafka Consumer memory usage

I have been working with Kafka for more than two years, and I was never sure whether a Kafka consumer eats more RAM when its topic has more partitions. I couldn't find any useful information on the internet, so I decided to measure everything myself.
Inputs
I started with 1 broker, since I am interested in the actual memory consumption for topics with 1 and 1000 partitions. I know that launching Kafka as a real cluster can differ, because of replication, acknowledgments and other cluster concerns, but let's skip that for now. Two basic commands launch a single-node Kafka:

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties

I created two topics: topic1, with 1 partition, and topic2, with 1000 partitions. I believe this difference in partition count is enough to understand memory consumption.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic1
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1000 --topic topic2

It's good that Kafka provides us with kafka-producer-perf-test.sh, a performance script which lets us load test Kafka:

bin/kafka-producer-perf-test.sh --topic topic1 --num-records 99999999999999 --throughput 1 --producer-props bootstrap.servers=localhost:9092 key.serializer=org.apache.kafka.common.serialization.StringSerializer value.serializer=org.apache.kafka.common.serialization.StringSerializer --record-size 100

So I consecutively launched load tests inserting data into the two topics at throughputs of 1, 200, 500 and 1000 messages/second. I collected all...
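On the consumer side, a minimal probe like the one below can be used to compare the two topics: subscribe, keep polling, and log JVM heap usage. The topic names match the setup above, but the heap-logging approach is an illustrative assumption, not necessarily the exact measurement method used for the article.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Minimal sketch: poll a topic and periodically print JVM heap usage,
// to compare the consumer's memory footprint between topic1 and topic2.
public class ConsumerMemoryProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "memory-probe");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList(args.length > 0 ? args[0] : "topic1"));
            Runtime rt = Runtime.getRuntime();
            while (true) {
                consumer.poll(Duration.ofSeconds(1));
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                System.out.println("Used heap: " + usedMb + " MB");
            }
        }
    }
}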