While service meshes may be the next "big thing" in microservices, the concept isn't new. Classical SOA attempted to implement similar technology for abstracting and managing all aspects of service-to-service communication, often realized as the much-maligned Enterprise Service Bus (ESB). Several years ago similar technology emerged from the microservice innovators, including Airbnb (SmartStack for service discovery), Netflix (Prana integration sidecars), and Twitter (Finagle for extensible RPC), and these technologies have now converged into the service meshes being deployed today. In this webcast, Daniel Bryant shows you what service meshes are, why they're well-suited to microservice deployments, and how best to use a service mesh when you're deploying microservices. The webcast begins with a brief history of the development of service meshes. From there, you'll learn about some of the currently available implementations targeting microservice deployments, such as Istio (Envoy), Linkerd, NGINX Plus, and Traefik. Attendees will walk away with a high-level overview of the concept, tools for deciding when best to use a service mesh, and a getting-started guide if they decide this technology is the right fit for their organization.
Istio is a service mesh: a modernized service networking layer that provides a transparent and language-independent way to flexibly and easily automate application network functions. Istio is designed to run in a variety of environments: on-premises, cloud-hosted, and in Kubernetes containers.
As more applications are developed as sets of microservices, containers and platforms such as Kubernetes make many things much easier, but they still leave untouched many operational issues, such as traffic management and visibility, service authentication, security, and policy. Istio is a new service mesh that attempts to address many of these issues. We will discuss the architecture of Istio and the benefits it may offer to new microservice-based systems in a multicloud world.
Microservices are everywhere, and they help solve business problems. But they also introduce complexity. The Istio service mesh will help you manage that complexity.
A presentation on how to build a great microservice architecture using the Istio service mesh.
This document provides an overview of communication amongst microservices using Kubernetes, Istio, and Spring Cloud. It discusses how Kubernetes is a container orchestrator that allows developers to run applications across infrastructure, and how Pivotal Container Service (PKS) provides managed Kubernetes. Istio is introduced as a platform that connects, secures, and observes microservices, utilizing sidecar proxies. Spring Cloud services are also discussed as providing abstractions for common patterns in distributed systems. The presentation explores how Istio and Kubernetes can work together to provide capabilities like retries, load balancing and mutual TLS for microservices, and compares this to features provided by Spring Cloud.
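As a small illustrative sketch of the retry and mutual-TLS capabilities described above, an Istio configuration fragment might look like the following (the `reviews` service name and all values are invented for illustration, not taken from the presentation):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3            # sidecar retries transparently, per request
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL     # sidecars exchange certificates for mutual TLS
```

The application code needs no changes for either feature: the Envoy sidecar applies the retry policy and terminates the mutual TLS handshake, which is the "transparent and language-independent" quality the abstracts emphasize.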
With microservices and containers becoming mainstream, container orchestrators provide much of what the cluster (nodes and containers) needs. With container orchestrators' core focus on scheduling, discovery, and health at an infrastructure level, microservices are left with unmet, service-level needs, such as:
- Traffic management, routing, and resilient and secure communication between services
- Policy enforcement, rate limiting, and circuit breaking
- Visibility and monitoring with metrics, logs, and traces
- Load balancing and rollout/canary deployment support
Service meshes provide for these needs. In this session, we will dive into Istio: its components, capabilities, and extensibility. Istio envelops and integrates with other open source projects to deliver a full service mesh. We'll explore these integrations and Istio's extensibility in terms of choice of proxies and adapters, such as nginMesh.
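To make the circuit-breaking need above concrete, here is a minimal in-process sketch in Python (the class and parameter names are invented for illustration). A service mesh moves exactly this kind of cross-cutting logic out of every application and into the sidecar proxy:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open the circuit after N consecutive
    failures, then fail fast until a cool-down period has elapsed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # time the circuit opened, or None if closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        else:
            self.failures = 0  # a success closes the circuit again
            return result
```

Writing (and tuning) this per service and per language is the pain point; a mesh applies one consistent policy at the proxy layer for all services.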
#Codemotion Rome 2018 - Containers provide a consistent environment in which to run services, and Kubernetes helps us manage and scale our container cluster. That's a good start for a loosely coupled microservices architecture, but it's not enough. How do you control the flow of traffic and enforce policies between services? How do you visualize service dependencies and identify issues? How can you provide verifiable service identities and test for failures? You can implement your own custom solutions, or you can rely on Istio, an open platform to connect, manage, and secure microservices.
Learn the differences between Envoy, Istio, Conduit, Linkerd, and other service meshes and their components. Watch the recording, including a demo, at: https://info.mirantis.com/service-mesh-webinar
SpringOne Platform 2017, Ramiro Salas, Pivotal. The concept of a service mesh represents a paradigm shift in application connectivity for distributed systems, with wide implications for analytics, policy, and extensibility. In this talk, we will explain what a service mesh is, the power it brings to microservices, and its impact on Cloud Foundry and K8s, both separately and together. We will also discuss the implications for traditional network infrastructure, the shifting of responsibilities from L3/4 to L7, and our current thinking on using Istio to integrate all of these abstractions.
Kubernetes users need to allow traffic to flow into and within the cluster. Treating application traffic separately from the business logic presents new possibilities for how service-to-service traffic is served, controlled, and observed, and provides a transition to intra-cluster networking such as a service mesh. With microservices, there is a concept of both north/south traffic (incoming requests from end users to the cluster) and east/west traffic (intra-cluster communication between services). In this talk we will explain how Envoy Proxy works in Kubernetes as a proxy for both of these traffic directions and how it can be leveraged to do things like traffic shaping and security, and to integrate north/south with east/west behavior. Christian Posta (@christianposta) is Global Field CTO at Solo.io, former Chief Architect at Red Hat, and well known in the community as an author (Istio in Action, Manning; Istio Service Mesh, O'Reilly 2018; Microservices for Java Developers, O'Reilly 2016), frequent blogger, speaker, open source enthusiast, and committer on various open source projects including Istio and Kubernetes. Christian has spent time at both enterprises and web-scale companies and now helps organizations create and deploy large-scale, resilient, cloud-native distributed architectures. He enjoys mentoring, training, and leading teams to be successful with distributed systems concepts, microservices, DevOps, and cloud-native application design.
Microservice 4.0 Journey - From Spring Netflix OSS to Istio Service Mesh and Serverless, at Open Source Summit Japan
The exploration of service mesh for any organization comes with some serious questions. What data plane should I use? How does this tie in with my existing API infrastructure? What kind of overhead do sidecar proxies demand? As I've seen in my work with various organizations over the years, "if you have a successful microservices deployment, then you have a service mesh, whether it's explicitly optimized as one or not." In this talk, we seek to understand the role of the data plane and how to pick the right component for the problem context. We start off by establishing the spectrum of data-plane components, from shared gateways to in-code libraries, with service proxies along that spectrum. We clearly identify which scenarios would benefit from which part of the data-plane spectrum and show how modern service meshes, including Istio, Linkerd, and Consul, enable these optimizations.
Service-mesh technology promises to deliver a lot of value to a cloud-native application, but it doesn't come without some hype. In this talk, we'll look at what is a "service mesh", how it compares to similar technology (Netflix OSS, API Management, ESBs, etc) and what options for service mesh exist today.
A service mesh is a necessary tool in your cloud native infrastructure. The era of service meshes ushers in a new layer of intelligent network services that is changing the architecture of modern applications and the confidence with which they are delivered. Istio, one of many service meshes but one with a vast set of features and capabilities, needs an end-to-end guide.
The document discusses common DevOps challenges related to rolling out new versions of microservices and testing them. It introduces Istio as a solution for addressing these challenges through intelligent routing, resiliency features, traffic controls, telemetry collection, and other capabilities. Istio uses the Envoy proxy and control-plane components such as Pilot and Mixer to provide reliable traffic management between services, including advanced routing rules for canary releases, fault injection for testing resiliency, and policy enforcement across the mesh.
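As an illustrative sketch of the canary-release and fault-injection features mentioned above, Istio routing rules of this shape split traffic by weight and inject artificial delays (the `recommendations` service, subset names, and percentages are invented for illustration):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendations
spec:
  hosts:
  - recommendations
  http:
  - fault:
      delay:
        percentage:
          value: 10.0        # delay 10% of requests to test resiliency
        fixedDelay: 5s
    route:
    - destination:
        host: recommendations
        subset: v1
      weight: 90             # 90% of traffic stays on the stable version
    - destination:
        host: recommendations
        subset: v2
      weight: 10             # 10% canary traffic goes to the new version
```

Shifting the canary forward is then just an edit to the weights, with no redeployment of either service version.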
A talk by Shay Naeh, Senior Architect in the Cloudify CTO Office, from Open Networking Summit Europe 2018, covering open source edge networking, federated Kubernetes, and cloud native stacks, and how to truly achieve an open edge stack.
Presentation at the IBM Cloud Meetup of Toronto: https://www.meetup.com/IBM-Cloud-Toronto/events/253903913/
While service meshes may be the next "big thing" in microservices, the concept isn't new. Classical SOA attempted to implement similar technology for abstracting and managing all aspects of service-to-service communication, and this was often realized as the much-maligned Enterprise Service Bus (ESB). Several years ago similar technology emerged from the microservice innovators, including Airbnb (SmartStack for service discovery), Netflix (Prana integration sidecars), and Twitter (Finagle for extensible RPC), and these technologies have now converged into the service meshes we are currently seeing being deployed. In this talk, Daniel Bryant will share with you what service meshes are, why they are (and sometimes are not) well-suited for microservice deployments, and how best to use a service mesh when you're deploying microservices. This presentation begins with a brief history of the development of service meshes, and the motivations of the unicorn organisations that developed them. From there, you'll learn about some of the currently available implementations that are targeting microservice deployments, such as Istio/Envoy, Linkerd, and NGINX Plus.
What is a Service Mesh? And Do I Need One When Developing Cloud Native Microservices? By Daniel Bryant, Micro Xchg 2018
All is not completely rosy in microservice-land. It’s often a sign of an architectural approach’s maturity that anti-patterns begin to be identified and classified alongside well-established principles and practices. Daniel Bryant introduces seven deadly sins from real projects, which left unchecked could easily ruin your next microservices project. Daniel offers an updated tour of some of the nastiest anti-patterns in microservices from several real-world projects he’s encountered as a consultant, providing a series of anti-pattern “smells” to watch out for and exploring the tools and techniques you need to avoid or mitigate the potential damage. Topics include:
- Pride: the admission of the challenges with testing in a distributed system
- Envy: introducing inappropriate intimacy within services by creating a shared “canonical” domain model
- Wrath: failing to deal with the inevitable bad things that occur when operating new technologies, both from the people and technical aspects
- Sloth: composing services in a lazy fashion, which ultimately leads to the creation of a “distributed monolith”
- Lust: embracing the latest and greatest technology without evaluating the operational impact incurred by these choices
This document discusses continuous delivery patterns for modern architectures and Java. It covers topics such as moving from complicated to complex systems, how architecture is becoming more about technical leadership, and encoding all requirements into a continuous delivery pipeline. It also discusses challenges with modern app architectures, such as multiple services and pipelines, independent service deployment, and evolving architecture. Continuous delivery, testing microservice integration, contract testing, and measuring what matters are also covered.
Building microservices for the Cloud is easy, right?... Perhaps, but if you want to build effective and reliable services that not only work correctly within the Cloud, but also take advantage of running within this unique environment, then you might be in for a surprise. This talk will introduce lessons learnt over the past several years of designing and implementing successful Cloud-based Java applications which we have codified into our Cloud development ‘DHARMA' principles; Documented (just enough); Highly cohesive / lowly coupled (all the way down); Automated from commit to cloud; Resource aware; Monitored thoroughly; and Antifragile. We will look at these lessons from both a theoretic and practical perspective using several real-world case studies involving a move from monolithic applications deployed into a data center on a 'big bang' schedule, to a platform of JVM-based loosely-coupled components, all being continuously deployed into the Cloud. Topics discussed will include API contracts and documentation, architecture, build and deployment pipelines, Cloud fabric properties, monitoring in a distributed environment, and fault-tolerant design patterns. This presentation was delivered at muCon 2015 on 27/11/14, the microservice conference. The video can be seen here: https://skillsmatter.com/skillscasts/5938-developing-java-services-for-the-cloud
VJUG24 SESSION: CONTINUOUS DELIVERY PATTERNS FOR THE MODERN JAVA DEVELOPER (I.E. ALL OF US!) Modern software architecture is evolving towards fully component-based systems, but there can be many challenges in delivering these applications in a continuous, safe and rapid fashion. This talk presents a series of patterns that will help developers implement continuous delivery solutions.
The document discusses continuous delivery patterns for contemporary architecture. It notes that systems are moving from complicated to complex, requiring architecture to focus more on technical leadership. All requirements must be encoded in continuous delivery pipelines to test both functional and non-functional requirements. Architectural fundamentals like loose coupling and high cohesion are important to consider in design, testing, deployment and observability in continuous delivery.
Modern software development architecture has almost completed its evolution towards being properly component-based: this can be seen in the mainstream embrace of Self-Contained Systems (SCS), microservices, and serverless. We all know the benefits these styles can bring, but there can be many challenges in delivering applications built using them in a continuous, safe, and rapid fashion. This talk presents a series of patterns based on real-world experience, which will help architects identify and implement solutions for continuous delivery of contemporary architectures. Key topics and takeaways include:
- Core stages in the component delivery lifecycle: develop, test, deploy, operate, and observe
- How contemporary architectures impact continuous delivery
- Modifying the build pipeline for testability and deployability of components (with a hat tip to Jez Humble and Dave Farley’s seminal work)
- Commonality between delivery of SCS, microservices, and serverless components
- Continuous delivery, service contracts, and end-to-end validation: the good, bad, and ugly
- Lessons learned in the trenches
Building applications for the IaaS Cloud is easy, right? "Sure, no problem - just lift and shift!" all the Cloud vendors shout in unison. However, the reality of building and deploying Cloud applications can often be different. This talk will introduce lessons learnt from the trenches during two years of designing and implementing cloud-based Java applications, which we have codified into our Cloud developer’s 'DHARMA' rules: Documented (just enough); Highly cohesive/loosely coupled (all the way down); Automated from code commit to cloud; Resource aware; Monitored thoroughly; and Antifragile. We will look at these lessons from both a theoretic and practical perspective using a real-world case study from Instant Access Technologies (IAT) Ltd. IAT recently evolved their epoints.com (http://epoints.com/) customer loyalty platform from a monolithic Java application deployed into a data centre on a 'big bang' schedule, to a platform of loosely-coupled JVM-based components, all being continuously deployed into the AWS IaaS Cloud.
Last year at this conference we learned from Mark Richards that modern software has almost completed its evolution toward component-based architectures—seen in the mainstream embrace of self-contained systems (SCS), microservices, and serverless architecture. We all know the benefits of component-based architectures, but there are also many challenges to delivering such applications in a continuous, safe, and rapid fashion. Daniel Bryant shares a series of patterns to help you identify and implement solutions for continuous delivery of contemporary service-based architectures. Topics include:
- The core stages in the component delivery lifecycle: develop, test, deploy, operate, and observe
- How contemporary architectures impact continuous delivery and how to ensure that this is factored into the design
- Modifying the build pipeline to support testability and deployability of components (with a hat tip to Jez Humble’s and Dave Farley’s seminal work)
- Commonality between delivery of SCS, microservices, and serverless components
- Continuous delivery, service contracts, and end-to-end validation: the good, the bad, and the ugly
- Validating NFRs within a service pipeline
- Lessons learned in the trenches
(Updated for Sept 2016, and Java-themed as this talk was presented as part of the 'Virtual JUG' vJUG24 event on 27th Sept) There is trouble brewing in the land of microservices – today’s shiny technology is tomorrow’s legacy, and there is concern that we will all be dealing with spaghetti services in 2018… It is often a sign of an architectural approach’s maturity that, in addition to the emergence of well-established principles and practices, anti-patterns also begin to be identified and classified. In this talk we introduce the 2016 edition of the seven deadly sins that if left unchecked could easily ruin your next microservices project… This talk will feature as a session in vJUG24, the first 24 hour virtual Java Conference in the World. More information is available at http://virtualjug.com/vJUG24/
Modern software has almost completed its evolution toward component-based architectures—seen in the mainstream embrace of self-contained systems (SCS), microservices, and serverless architecture. We all know the benefits of component-based architectures, but there are also many challenges to delivering such applications in a continuous, safe, and rapid fashion. Daniel Bryant shares a series of patterns to help you identify and implement solutions for continuous delivery of contemporary service-based architectures. Learning outcomes:
- Identify core stages in the component delivery lifecycle: develop, test, deploy, operate, and observe
- How contemporary architectures impact continuous delivery and how to ensure that this is factored into the design
- Modifying the build pipeline to support testability and deployability of components (with a hat tip to Jez Humble’s and Dave Farley’s seminal work)
- Commonality between delivery of SCS, microservices, and serverless components
- Continuous delivery, service contracts, and end-to-end validation: the good, the bad, and the ugly
- Validating NFRs within a service pipeline
- Lessons learned in the trenches
Independent of the source of the data, the integration of event streams into an enterprise architecture is becoming more and more important in a world of sensors, social media streams, and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analysed, often with many consumers or systems interested in all or a subset of the events. Depending on the size and quantity of such events, this can quickly reach the range of big data. How can we efficiently collect and transmit these events? How can we make sure that we can always report on historical events? How can these new events be integrated into the traditional infrastructure and application landscape? Starting with a product- and technology-neutral reference architecture, we will then present different solutions using open source frameworks.
This document discusses iRobot's adoption of serverless architecture and the reasons for choosing it. Some key benefits identified are lower latency and cost compared to monolithic or microservices architectures. Specific challenges addressed by serverless include deployment, service discovery, and security. While serverless addresses many issues, the document notes there is still room for improvement from cloud providers in areas like deployment models and integration testing.
All is not completely rosy in microservice-land. It’s often a sign of an architectural approach’s maturity that anti-patterns begin to be identified and classified alongside well-established principles and practices. Daniel Bryant introduces seven deadly sins from real projects, which left unchecked could easily ruin your next microservices project. Daniel offers an updated tour for 2016 of some of the nastiest anti-patterns in microservices from several real-world projects he’s encountered as a consultant, providing a series of anti-pattern “smells” you can sniff out and exploring the tools and techniques you need to avoid or mitigate the potential damage. Topics include:
- Pride: Selfishly building the wrong thing, such as the “Inter-Domain-Enterprise-Application-Service-Bus” or a fully bespoke infrastructure platform
- Envy: Introducing inappropriate intimacy within services by creating a shared “canonical” domain model
- Wrath: Failing to deal with the inevitable bad things that occur within a distributed system
- Sloth: Composing services in a lazy fashion, which ultimately leads to the creation of a “distributed monolith”
- Lust: Embracing the latest and greatest technology without evaluating the operational impact incurred by these choices
A quick overview of application networking and microservice resilience and how a service mesh like Istio.io can help alleviate some of this pain.
The document discusses the evolution of application networking from individual microservices libraries to shared proxies like Envoy and service meshes like Istio. It notes that as applications adopt microservices architectures, many common concerns around distributed systems must be addressed, such as service discovery, load balancing, and fault tolerance. Initially, different frameworks offered individual libraries to handle these issues, but this led to inconsistencies and increased complexity. Envoy proxy and the Istio service mesh aim to provide a standardized and shared way to address these cross-cutting distributed system concerns for all services regardless of language or framework.
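To illustrate the library era the summary describes, here is a minimal in-process retry helper in Python (the function and parameter names are invented for illustration). Frameworks of that era shipped per-language versions of exactly this kind of logic, which a shared proxy like Envoy now handles uniformly for every service:

```python
import time

def retry(fn, attempts=3, backoff=0.1):
    """Call fn, retrying on failure with simple exponential backoff.
    This is the cross-cutting concern a sidecar proxy takes over."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(backoff * (2 ** attempt))  # wait longer each retry
```

The inconsistency the summary mentions arises because every language and framework reimplements this differently; moving it into the sidecar gives one behavior, one configuration surface, and one set of metrics for all services.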
WebXR allows accessing virtual and augmented reality devices from the web. With 5G networks promising low latency and high speeds, WebXR combined with 5G could enable new immersive experiences on the web. Benefits may include improved discoverability of content, increased reach of experiences across devices, and more immediate and social experiences due to higher bandwidth and lower latency. The W3C is exploring how to leverage 5G innovations through the open web platform.
This document provides an introduction to microservices, including: - The benefits of microservices compared to monolithic architecture like independent deployability and scalability. - Microservices are small, independently deployable services that work together and are modeled around business domains. - Implementing microservices requires automation, high cohesion, loose coupling, and stable APIs. - Potential downsides include increased complexity in testing, monitoring, and operations. Microservices are best suited to problems of scale.
API gateways are certainly not a new technology, but the way in which they are being deployed, configured, and operated within modern platforms is forcing many of us to rethink our approach. Can we simply lift and shift our existing gateway into the cloud? Is our API gateway GitOps-friendly (and does it need to be)? And what about service meshes, CNI, eBPF, and... Join this talk for a whistle-stop tour of modern API gateways, with a focus on deploying and managing this technology within Kubernetes (on which many modern platforms are built):
- Understand why platform engineers should care about API gateways today
- Learn about API gateways, options, and requirements for modern platforms
- Identify key considerations for migrating to the cloud or building a new platform on Kubernetes
- Understand how cloud native workflows impact the user/developer experience (UX/DX) of an API gateway
- Explore the components of a complete "edge stack" that supports end-to-end development flows
When enterprise organizations adopt microservices, containers, and cloud native development, the technologies and architectures may change, but the fact remains that we all still add the occasional bug to our code. The main challenge you now face is how to perform integration or end-to-end testing without spinning up all of your microservices locally and driving your laptop fans into high speed! Join me for a tour of your microservices testing options using a series of Java-friendly tools.
- Explore challenges with scaling container-based application development (you can only run so many microservices locally before minikube melts your laptop)
- Learn about effective unit testing with mocks, using Testcontainers for dependency testing, and using Telepresence to extend your local testing environment into the cloud
- Understand when to use each type of test and tooling based on your use case and requirements for realism, speed, and practicality
- See how Telepresence can "intercept" or reroute traffic from a specified service in a remote K8s cluster to your local dev machine
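As a small sketch of the "unit testing with mocks" option above (the talk itself uses Java-friendly tools; this shows the same idea in Python, and the service path, function, and data are all hypothetical), the remote dependency is replaced by an injected mock so no microservice has to run locally:

```python
from unittest import mock

def get_order_total(order_id, http_get):
    """Fetch an order from a (remote) order service and sum its line items.
    The transport function `http_get` is injected so tests can mock it."""
    order = http_get(f"/orders/{order_id}")
    return sum(item["price"] * item["qty"] for item in order["items"])

# In a unit test, the remote order service is replaced by a mock,
# keeping the test fast and fully local:
fake_get = mock.Mock(return_value={
    "items": [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}],
})
assert get_order_total("o-123", fake_get) == 25.0
fake_get.assert_called_once_with("/orders/o-123")
```

Mocks cover the fast inner loop; tools like Testcontainers and Telepresence then add realism for dependencies that can't sensibly be faked.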