A technical analysis of Kong, an open-source API Gateway built to secure and manage your cloud services.
In software, you have hundreds of choices when it comes to the tools you use for your next project. Today, we are going to cover one of the many options, this time in the realm of API gateways. We have talked about how cloud services are dominating the industry, and Kong is one way to get on board.
Kong is the world’s most popular open-source API gateway for multi-cloud and hybrid-cloud systems, optimized for microservice and distributed-system architectures. It is built on top of a lightweight NGINX implementation to deliver low latency, high performance, and scalability.
It also gives you granular, extensible control of your traffic through plugins, which provide a large share of Kong’s features. Authentication, rate limiting, transformation, logging, and more can all be added declaratively to a route, a service, or the entire system. Plugins are natively written in Lua, completely open source, and customizable, and you can create your own to fit your business needs exactly.
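As a sketch of what that looks like in practice, the following hypothetical declarative snippet attaches the stock rate-limiting plugin to a single route and a logging plugin globally; the service, route, and file names are placeholders:

```yaml
# Illustrative kong.yml fragment; orders-service and /orders are placeholders.
_format_version: "2.1"

services:
  - name: orders-service
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders
        plugins:
          - name: rate-limiting     # applies only to this route
            config:
              minute: 60            # at most 60 requests per minute
              policy: local

plugins:
  - name: file-log                  # a top-level plugin applies to the whole system
    config:
      path: /tmp/kong-access.log
```

The same plugin entry moved up one level, under the service, would govern every route on that service — the route/service/system scoping described above.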
“For many years, API Management and the adoption of API gateways was the primary technology used to implement modern API use cases both inside and outside the data center. API gateway technology has evolved a lot in the past decade, capturing bigger and more comprehensive use cases in what the industry calls “full lifecycle API management.” It’s not just the runtime that connects, secures and governs our API traffic on the data plane of our requests but also a series of functionalities that enable the creation, testing, documentation, monetization, monitoring, and overall exposure of our APIs in a much broader context — and target a wider set of user personas from start to finish. That is, there is a full lifecycle of creating and offering APIs as a product to users and customers, not just the management of the network runtime that allows us to expose and consume the APIs.” - Marco Palladino, Co-Founder and CTO of Kong
Kong and all of its plugins comply with industry standards for HTTP and JSON. It has been tested with a wide range of tools and in many environments. The company has boldly stated, “Because Kong operates at the application level and adheres to industry standards, it is broadly compatible with all leading web technologies and orchestration, log management, continuous deployment, and microservice development tools.”
First and foremost, Kong seems to consider itself an API Management (APIM) tool. What are its predecessors? What is the competition? And why is Kong the leader? If you look back about 4 or 5 years, you are going to see APIM focusing heavily on the distribution of products—getting APIs into the hands of consumers, along with documentation. APIs have seen a major evolution in DevOps integration for both frontend and backend development, as well as in the maturity of APIM tools to complement the full product’s agile lifecycle. This progression can be seen in most of these tools: WSO2, Red Hat’s legacy APIM (which was abandoned in favor of 3scale), IBM API Connect (previously known as IBM API Management), Mashape (Kong before it was “Kong”), and many others.
Kong is built on NGINX and is scalable to thousands of nodes with enterprise-worthy management. It’s open source—just download and go. My personal favorite is that the documentation is immaculate! In my history as a developer, I’ve seen few projects with such well-organized and accessible documentation. Price, community, flexibility, low complexity, and a plethora of API management features all contribute to the widespread belief that “Kong is King.” One of the most impactful reasons to choose Kong, in my opinion, is its flexibility. You can integrate Kong, build your plugins to meet every business need that you have, save the configuration to a Docker container, deploy it anywhere, and scale it instantly.
The API gateway is a pattern that leverages the same design as the host operating system’s network stack to manage many different services. The need for it has grown with the rise of microservice architectures. More services mean more orchestration, and that creates several problems. How do you abstract producer information from consumers? Different devices frequently need different data sets. How do you handle different protocols? How do you centralize logging or security?
This is where Kong comes in. At its heart, it is an application layer for a distributed system. Kong addresses many of these problems in its base installation and the rest through its plugin library. You can reroute requests, modify protocols, log interactions and responses, integrate custom security plugins with your authorization workflow, and aggregate and transform results. For the Node developers out there, it is “Express” with all of its middleware pre-built and ready for deployment through declarative configuration.
There are two common ways to run Kong: a DB-less deployment driven by a declarative YAML configuration, and a database-backed deployment typical of enterprise installations. A simple example of declarative configuration follows, using the Docker Kong distribution.
The majority of your instance and container configuration happens at startup and is declared in a separate .conf file; if you’re using Docker Kong, the container is ready to go out of the box. The declarative file, by contrast, is what wires your services together, and it is the only thing necessary to start your service mesh.
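As a sketch, a minimal declarative file (commonly kong.yml) might look like the following; the service, route, consumer names, and key are placeholders. With the official Docker image you would point Kong at the file in DB-less mode (for example, KONG_DATABASE=off and KONG_DECLARATIVE_CONFIG set to its path):

```yaml
# Hypothetical kong.yml; names and the API key are illustrative only.
_format_version: "2.1"

services:
  - name: example-service           # name Kong uses to reference the service
    url: http://example.internal:3000
    routes:
      - name: example-route
        paths:
          - /api                    # requests to /api are proxied to the service
    plugins:
      - name: key-auth              # require an API key for this service

consumers:
  - username: example-app           # the application that will call the API
    keyauth_credentials:
      - key: example-secret-key
```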
At the top, you have some basic metadata; the only required field is _format_version. Services are where you declare the routes for redirection: you give Kong a name to reference the service, the host where the upstream can be reached, any plugins you want associated with that route, a name for the route you’ve declared, and any Kong paths you would like forwarded to that service. A consumer is usually an application: you register an application that will interact with your routes and plugins, defining the interface a UI or another application uses along with any plugins previously defined in your services. You can find full guides to this and other implementations at https://docs.konghq.com.
NGINX is considered the gold standard of performance in API gateways. NGINX has its own APIM suite of tools, which it has benchmarked against Kong. The four metrics NGINX used to quantify the performance of these tools are single-request latency, calls per second, API calls per second with JWTs, and CPU usage.
“It’s important to note that these figures are generated using next to zero configuration on either side.”
As I stated previously, Kong is built on top of NGINX. Those numbers may seem damning and are worth considering, but they are ultimately not why you would choose Kong over NGINX’s suite of API management tools. They also do not take into account that you can configure NGINX inside Kong, tailoring it to your business needs.
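For instance, Kong exposes NGINX tuning through its own configuration: ordinary kong.conf properties cover common settings, and properties prefixed with nginx_http_, nginx_proxy_, and so on inject directives straight into the NGINX configuration Kong generates. A minimal sketch, with placeholder values:

```
# kong.conf fragment — illustrative values only
nginx_worker_processes = auto         # ordinary kong.conf property
nginx_http_client_max_body_size = 8m  # injected into the http {} block
nginx_proxy_proxy_buffer_size = 16k   # injected into the proxy server {} block
```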
There are a lot of options on the market. You choose Kong for its extensible plugin library, its platform-agnostic flexibility, the fact that it is open-source, and its ease of use. Kong is a great tool that solves a lot of big, but also common, problems in the microservice world. We have the technology. Hopefully, I’ve given enough information to help get your feet wet or at least interested.
And of course, reach out if you have more questions; we at Verys keep up to date and want to help you make the right choice for your business.