When architecting microservice solutions, you'll often find yourself struggling with cross-cutting concerns. Think security, rate limiting, access control, monitoring, location-aware routing… Things can quickly become a nightmare. The API Gateway pattern can help you solve such problems in an elegant and uniform way. Using Kong, an open source product, you can get started today. In this session we'll look at the why and how of this approach.
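Getting started with Kong typically means registering a backend service and a route through its Admin API. A minimal Python sketch of that flow, assuming Kong's default Admin API port 8001 (the service name, upstream URL, and path here are illustrative):

```python
import json
import urllib.request

# Assumption: Kong's Admin API is listening on its default port 8001.
ADMIN_URL = "http://localhost:8001"

def admin_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a POST request against the Kong Admin API (not yet sent)."""
    return urllib.request.Request(
        f"{ADMIN_URL}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# Register an upstream service, then expose it on a path:
svc = admin_request("/services", {"name": "orders", "url": "http://orders.internal:8080"})
route = admin_request("/services/orders/routes", {"paths": ["/orders"]})
# urllib.request.urlopen(svc) would submit the call against a running Kong.
```

Once the route exists, cross-cutting concerns like rate limiting or key authentication can be layered on by POSTing plugin configurations to the same Admin API, without touching the services behind the gateway.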
In this WebHack talk I shared my experience with microservices, Docker, Kubernetes, and Kong, an API gateway by Mashape. Since the slides are based on a real working system, they focus mainly on how to build the whole thing up rather than on the detailed internal implementation, although I have included some details and references to make them more comprehensive.
Talk given at OpenResty Con 2017 in Beijing. Kong (https://getkong.org) is a widely-adopted open source API Gateway built with OpenResty. It aims to help secure, manage, and extend microservices-based architectures with minimal effort from the user, while ensuring platform agnosticism. In this talk, we will explore the challenges we encountered developing such an OpenResty application, and how we overcame many of them by way of libraries and contributions back to the OpenResty community. We will cover topics such as clustering OpenResty nodes, inter-worker communication, DNS resolution, typical pitfalls OpenResty developers should avoid, and much more.
Kong is a lightweight, cloud-native API solution that makes it easier and faster than ever to connect APIs and microservices in today’s hybrid, multi-cloud environments. With its agnostic, flexible deployment approach, Kong can be used in today’s heterogeneous IT system landscapes to integrate a wide variety of data and systems – even across company boundaries – using APIs. In addition to REST APIs, Kong also offers support for gRPC and GraphQL, which broadens the possibilities to implement modern application architectures. In this presentation, we will discuss deployment patterns and use cases for Kong to demonstrate the flexibility of the platform. Using a practical example, aspects of the API development and deployment process as well as the integration in existing software development processes will be discussed.
This document discusses using NGINX as an API gateway for microservices architectures. It describes how NGINX can provide essential API gateway functions like API routing, authentication, overload protection, and request tracing in a lightweight and efficient manner. The document advocates for separating the roles of a secure proxy and API gateway to handle north-south and east-west traffic respectively. Key API gateway capabilities of NGINX like API routing, authentication using API keys or JWT, and request tracing are demonstrated with code examples.
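The API-key pattern summarized above can be sketched in a few lines of NGINX configuration. This is an illustrative fragment rather than the document's exact example (addresses, the key value, and names are made up), and it assumes the usual `http` context around the `map` and `upstream` blocks:

```nginx
# Map the client-supplied "apikey" header to a client name; unknown
# keys map to the empty string and are rejected below.
map $http_apikey $api_client_name {
    default                 "";
    "client-one-demo-key"   "client_one";
}

upstream warehouse_service {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 443 ssl;

    location /api/warehouse/ {
        if ($api_client_name = "") {
            return 401;  # no valid API key presented
        }
        proxy_pass http://warehouse_service;  # route to the backend pool
    }
}
```

The same routing blocks can then be extended with `limit_req` for overload protection or `auth_jwt` (in NGINX Plus) for JWT validation, which is the separation of concerns the document argues for.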
In this talk we discuss how you can deploy both NGINX and NGINX Plus within Kubernetes as an Ingress Controller.
A short introductory talk given as part of the April 2018 Kong meetup "Introducing Kubernetes Ingress Controller for Kong". This talk covers the new features and improvements made to Kong from 2017 to 2018, including the groundwork conducted by Kong Inc. and open source contributors that allowed for the development of the Kong Ingress Controller for Kubernetes. The Kong Ingress Controller for Kubernetes was then announced during the meetup: https://github.com/Kong/kubernetes-ingress-controller
This document discusses using Kong as a Kubernetes ingress controller to provide advanced traffic management capabilities. It introduces Kubernetes ingress and describes how Kong can act as a single point of entry to handle authentication, logging, caching, load balancing, rate limiting and more for Kubernetes applications. The document demonstrates configuring Kong plugins through custom resources and annotations to apply policies to ingress routes. It also highlights Kong's support for features like TLS termination, gRPC and integrations with Prometheus and cert-manager.
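The custom-resource approach described here pairs a `KongPlugin` object with an annotation on the Ingress it should govern. A rough sketch, with an illustrative plugin configuration and service names (the annotation key has varied across Kong Ingress Controller versions):

```yaml
# Hypothetical KongPlugin enabling rate limiting for one Ingress.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute
plugin: rate-limiting
config:
  minute: 5
  policy: local
---
# Attach the plugin to an Ingress via an annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-api
  annotations:
    konghq.com/plugins: rate-limit-5-per-minute
spec:
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80
```

Because the policy lives in Kubernetes resources rather than in the gateway's own database, it is versioned and reviewed like any other manifest.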
On-Demand Recording: https://www.nginx.com/resources/webinars/whats-new-nginx-ingress-controller-kubernetes-version-150/ Kubernetes is the leading orchestration platform for deploying, scaling, and managing containerized applications. Infrastructure operators face new application delivery requirements as they adopt Kubernetes for production workloads. The NGINX Ingress controller is the most popular ingress load balancer for Kubernetes, providing a complete and supported solution for delivering your containerized applications to clients. Attend this webinar to learn about the latest developments in NGINX Ingress Controller for Kubernetes Release 1.5.0.
Ambassador is an open source API gateway and L7 proxy built by Datawire on top of Lyft's Envoy proxy and designed to run on Kubernetes. It provides a Kubernetes-native API gateway that uses annotations for declarative and decentralized configuration. Ambassador simplifies architecture by removing the need for a database, and it can scale automatically via HPA. It also supports features like gRPC, HTTP/2, rate limiting, timeouts, canary releases, and traffic shadowing through the Envoy proxy.
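Ambassador's annotation-driven, decentralized configuration looks roughly like the sketch below, using the early `getambassador.io/config` annotation format; the service name and route prefix are illustrative:

```yaml
# Hypothetical example: an Ambassador Mapping declared as an annotation
# on a plain Kubernetes Service, so the team owning the service also
# owns its route.
apiVersion: v1
kind: Service
metadata:
  name: quote
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: quote_mapping
      prefix: /quote/
      service: quote
spec:
  selector:
    app: quote
  ports:
  - port: 80
    targetPort: 8080
```

Ambassador watches the Kubernetes API for these annotations and translates them into Envoy configuration, which is why no separate configuration database is needed.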
This document discusses lua-cassandra, a Cassandra driver written in pure Lua for the Lua and OpenResty communities. The driver supports features like cluster awareness, load-balancing policies, retries, and SSL connections. It started as a fork of an earlier driver, lua-resty-cassandra, adding support for Cassandra 3.x and improving interoperability with OpenResty. The driver is well tested and documented to make it easy for others to use.
This document discusses microservices and how to build them using Go. It describes the benefits of microservices over monolithic architectures, such as improved scalability, resilience, and ease of deployment. Some key aspects of building microservices with Go that are covered include making services autonomous and focused, using a domain-driven design, implementing service discovery, API gateways, and messaging between services using events. The document also provides guidance on important operational concerns like security, monitoring, and testing when building microservices applications.
This webinar gets you started using the Kubernetes Ingress controllers for NGINX and NGINX Plus to load balance, route, and secure Kubernetes applications. Join this webinar to learn:
- The benefits of using Kubernetes and why it's become the de facto container scheduler
- About the Kubernetes Ingress resource and Ingress controllers
- How to use the NGINX and NGINX Plus Ingress controllers to load balance, route traffic to, and secure applications on Kubernetes
- How to monitor the NGINX Plus Ingress controller with Prometheus
TADSummit Dangerous Demo: Oracle. Presented by Doug Tait, Oracle, at TADSummit Lisbon, 18th November 2015. A WebRTC client connects to an HTML application deployed on OCSG over HTTP(S). The app uses the OCSG Authentication REST API, the oneAPI SMS REST service exposed by OCSG to send SMS, and the WebRTC API SDK deployed on WSC. Once connected, the WebRTC endpoint creates a conference room and then opens a websocket connection to WSC using the WSC SDK, can send an SMS to a mobile device with a link to the conference by leveraging the SMS API, and uses the WSC API to call a mobile user or another WebRTC endpoint. Chat messages are sent via DataChannel, and the RTP stream goes through WSC.
This document summarizes Squarespace's transition from a monolithic architecture to a microservices architecture and their implementation of a service mesh using Envoy proxy. It describes how Squarespace grew from fewer than 50 engineers in 2013 to over 200 engineers in 2017, necessitating the move to microservices for scalability. It outlines their initial use of Consul for service discovery and Netflix OSS libraries. It then introduces the concept of a service mesh and how Envoy proxy deployed as a sidecar can provide advanced control, observability, and support for multiple languages. It details how Envoy uses Consul and its xDS APIs for dynamic service discovery and configuration. Finally, it discusses future work, including integrating orchestration and abstracting common service functionality.
Webinar recording: nginx.com/resources/webinars/microservices-container-management-nginx-plus-mesosphere-dcos NGINX and NGINX Plus are emerging as the standard for connecting, securing, caching, and scaling microservices. We hope you found it valuable to learn how to use Mesosphere DC/OS and containers, such as Docker containers, to create and run microservices applications in an NGINX Plus environment.
Getting traffic into a Kubernetes cluster should be simple, but it’s not. Richard Li explains how software architectures have evolved to take advantage of Kubernetes and discusses the implications that these changes have on ingress. Richard then covers some of the nuances of modern ingress, including authentication, resilience, and observability at the edge, explores how Kubernetes handles ingress today, with NodePorts, LoadBalancers, and ingress controllers, and shares his experience and lessons learned from using several real-world implementations of ingress on Kubernetes.
On-Demand Link: https://www.nginx.com/resources/webinars/analyzing-nginx-logs-datadog/ About the Webinar: Datadog is a SaaS-based monitoring and analytics platform for cloud-scale organizations. The company is an industry leader in monitoring and observability; with more than 350 vendor-supported integrations, Datadog seamlessly correlates metrics, traces, and logs across the full DevOps stack. With Datadog's Log Management solution, you can cost-effectively collect, analyze, and archive all your logs with an easy-to-use, intuitive interface. Attend this webinar to learn how to analyze NGINX logs using Datadog to achieve business outcomes including SEO optimization, improved website performance, and detection of DDoS attacks.
This set of slides showcases how you can interact with PowerVC via its OpenStack-based REST APIs. It also demonstrates how to mimic the REST API calls made from your web browser in familiar command line utilities like curl.
This document provides an overview of service mesh and serverless technologies. It discusses the evolution of microservices and how service mesh addresses needs like service discovery, routing and monitoring. It introduces concepts like sidecars and shows the architecture of Istio service mesh. It then defines serverless computing and discusses how the Knative project implements a serverless platform on Kubernetes. It shows examples of using Knative to deploy serverless applications on OpenShift and highlights the roadmap for integrating technologies like Tekton.
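The Knative deployment model discussed above reduces a serverless workload to a single short manifest. A minimal sketch, with an illustrative name and the sample image from the Knative tutorials:

```yaml
# Hypothetical Knative Service: Knative derives the Deployment, Route,
# and request-driven autoscaling (including scale-to-zero) from this
# one resource.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "World"
```

Applying this with `kubectl apply` on a cluster running Knative Serving yields an HTTP-addressable service that scales with traffic, which is the serverless-on-Kubernetes model the slides illustrate on OpenShift.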
SpaceONE is an open-source multi-cloud management platform consisting of microservices including frontend, backend, and plugins. The backend and plugins have a common software framework and use gRPC APIs and python-core libraries. SpaceONE uses a microservices architecture with components like identity, inventory, monitoring, and billing that can scale independently. It also has a plugin mechanism to extend the capabilities of core services like inventory to support multiple cloud providers.
The document discusses an introduction to the CloudStack API. It covers topics like API documentation, clients that interface with the API, exploring the API by examining HTTP calls from the UI, making authenticated and unauthenticated API calls, asynchronous calls, error handling, and includes an exercise on building a REST interface to CloudStack using Flask.
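Authenticated CloudStack calls hinge on its request-signing scheme. The sketch below follows the documented approach (URL-encode values, sort and lowercase the serialized parameters, HMAC-SHA1 with the secret key, then base64- and URL-encode the digest); parameter names and keys are illustrative, and edge cases of the real encoding rules are glossed over:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign(params: dict, secret: str) -> str:
    """Return a signed CloudStack API query string (illustrative sketch)."""
    # URL-encode every value first.
    encoded = {k: urllib.parse.quote(str(v), safe="") for k, v in params.items()}
    # Sort the lowercased pairs and serialize them for signing.
    pairs = sorted((k.lower(), v.lower()) for k, v in encoded.items())
    to_sign = "&".join(f"{k}={v}" for k, v in pairs)
    # HMAC-SHA1 with the secret key, then base64- and URL-encode the digest.
    digest = hmac.new(secret.encode(), to_sign.encode(), hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    # The request itself keeps the original case; the signature is appended.
    query = "&".join(f"{k}={v}" for k, v in sorted(encoded.items()))
    return f"{query}&signature={signature}"
```

The resulting string is appended to the API endpoint URL; unauthenticated calls simply omit the `apiKey`/`signature` pair, which matches the authenticated-versus-unauthenticated distinction the document draws.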
The document discusses using JHipster, an open source tool, to generate Angular and Spring Boot applications, demonstrating both monolithic and microservices applications. Key features of JHipster covered include generating entities, internationalization, and deployment options, along with running and developing applications in both development and production modes. Examples are provided of generating sample applications using JHipster's online generator and a locally installed generator, including a review of the generated code and application structure.
At Adobe, APIs are powering the next generation of Creative applications. Mesos makes it very easy and fun to deploy and run robust and scalable microservices in the cloud. Today's technologies offer simple solutions for creating RESTful services, while Mesos brings them to life faster. As the number of microservices increases and the intercommunication between them becomes more complicated, we soon realize we have new questions awaiting our answers: How do microservices authenticate? How do we monitor who's using the APIs they expose? How do we protect them from attacks? How do we set throttling and rate-limiting rules across a cluster of microservices? How do we control which service allows public access and which one we want to keep private? How about Mesos APIs and its frameworks? Can they benefit from these features as well? Come and learn a scalable architecture for managing microservices in Mesos by integrating an API management layer inside your Mesos clusters. This presentation will show you what an API management layer is, what it's composed of, and how it can help you expose microservices in a secure, managed, and highly available way, even in multi-Mesos-cluster setups. During this session you will also have the opportunity to learn how Adobe's API Platform solved this problem, where it is today, and what it envisions doing with Mesos further. If you're working with microservices already or you're creating new ones, then this presentation is for you. Come and learn how Mesos together with an API management layer will make you a microservices hero in your organisation.
Plack provides a common interface called PSGI (Perl Web Server Gateway Interface) that allows Perl web applications to run on different web servers. It includes tools like plackup for running PSGI applications from the command line and middleware for adding functionality. Plack has adapters that allow many existing Perl web frameworks to run under PSGI. It also provides high performance PSGI servers and utilities for building and testing PSGI applications.
Nuwan discusses how you can expose microservices as managed APIs in Kubernetes with the API Operator, so that you can create an end-to-end solution for your entire business functionality from microservices and APIs, to end-user applications. You can watch the on-demand webinar "Cloud Native APIs: The API Operator for Kubernetes" here: https://wso2.com/library/webinars/2019/11/cloud-native-apis-the-api-operator-for-kubernetes/
This document provides an overview of REST APIs and automated API documentation solutions. It discusses REST architecture and best practices for documenting REST APIs. It also covers popular automated documentation solutions like Swagger and RAML that can generate reference documentation from API specifications. The document demonstrates how to use Swagger and RAML specifications to automatically generate API documentation websites and interactive consoles. It compares the pros and cons of Swagger versus RAML and provides examples of professionally designed API documentation websites.
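Spec-driven generators of the kind compared here work from machine-readable API descriptions. A minimal OpenAPI (Swagger) sketch from which tools can render reference docs and interactive consoles (the API title and path are illustrative):

```yaml
openapi: 3.0.0
info:
  title: Pet Store API   # illustrative service name
  version: "1.0.0"
paths:
  /pets:
    get:
      summary: List all pets
      responses:
        "200":
          description: A JSON array of pet objects
```

RAML expresses the same information in its own YAML dialect; the pros-and-cons comparison in the document largely comes down to tooling around each specification rather than expressiveness.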