This is the 3rd post in a small mini series that I will be doing using Apache Kafka + Avro. The programming language will be Scala. As such, the following prerequisites need to be obtained should you wish to run the code that goes along with each post. The other point to note is that I am mainly a Windows user, so the instructions and scripts will have a Windows bias to them. If you are not a Windows user you will need to find the equivalent instructions for your OS of choice.

Prerequisites

- Download the open source Confluent Platform: https://www.confluent.io/download/?utm_medium=ppc&utm_source=adwords&utm_campaign=Branded&utm_content=https://www.confluent.io/download/&utm_term=%2Bconfluent%20%2Bio%20%2Bdownload&b&gclid=CjwKCAjw9qfZBRA5EiwAiq0AbR4M47Cvwr5bXA8z5LbGdsvz7eQYhAs0CovCqiuHNHtF1EE4xhNf8RoCCnQQAvD_BwE#popup_form_1905
- IntelliJ IDEA Community Edition (as the Scala IDE); you should also enable the SBT plugin in it
- Java 1.8 SDK
- SBT (Scala build tool)

So go and grab that lot if you want to follow along.

Last time we talked about how to create a Kafka producer/consumer that uses the KafkaAvroSerializer when working with Avro data of a specific type.
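As a quick refresher, a specific-record producer looks roughly like the sketch below. This is only a minimal sketch, not the exact code from the previous post: the `User` class is assumed to be a code-generated Avro SpecificRecord (e.g. produced by sbt-avro or avro-tools from a .avsc file), and the topic name, broker address and Schema Registry address are placeholder assumptions.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

// `User` is assumed to be an Avro SpecificRecord class generated from a .avsc schema;
// it is not part of any library and the field names here are illustrative only.
object SpecificAvroProducerSketch extends App {

  val props = new Properties()
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringSerializer")
  // The Confluent Avro serializer registers/looks up the record schema in the Schema Registry
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
    "io.confluent.kafka.serializers.KafkaAvroSerializer")
  props.put("schema.registry.url", "http://localhost:8081")

  val producer = new KafkaProducer[String, User](props)

  // Build a strongly typed Avro record and send it to an (assumed) topic
  val user = User.newBuilder().setName("Sacha").setAge(22).build()
  producer.send(new ProducerRecord[String, User]("users-topic", user.getName.toString, user))

  producer.flush()
  producer.close()
}
```

The consumer side is symmetrical: it uses io.confluent.kafka.serializers.KafkaAvroDeserializer for the value and sets specific.avro.reader to true, so that records come back as the generated User type rather than as a GenericRecord.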

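If you want to compile something like the sketch above in your own project, a build.sbt along these lines should be enough. The version numbers are illustrative assumptions on my part (pick whatever matches the Confluent Platform you downloaded), and the Confluent resolver is needed because the Avro serializer artifacts are not on Maven Central.

```scala
// Hypothetical build.sbt for following along; versions are guesses, not taken from the post.
name := "kafka-avro-specific-example"

scalaVersion := "2.12.6"

// Confluent artifacts (e.g. kafka-avro-serializer) live in Confluent's own Maven repository
resolvers += "Confluent" at "https://packages.confluent.io/maven/"

libraryDependencies ++= Seq(
  "org.apache.kafka" % "kafka-clients"          % "1.1.0",
  "org.apache.avro"  % "avro"                   % "1.8.2",
  "io.confluent"     % "kafka-avro-serializer"  % "4.1.1"
)
```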


