Utilising messaging in cloud deployments isn't straightforward, particularly if you want to take advantage of auto-scaling. This talk covers the general problems of scaling cloud deployments and how messaging enables faster inter-service communication between microservices.
Nodeconf Barcelona 2015 presentation exploring several ways of building microservices asynchronously. It presents the concept of a broker as an alternative to an architecture of multiple point-to-point connections.
This document discusses how load balancing needs have changed with the rise of containers and microservices. Traditional load balancers are not well-suited for dynamic container environments where services and topology frequently change. New approaches are needed like integrating load balancers with container orchestration platforms, using client-side load balancing, and leveraging load balancers for advanced patterns like zero-downtime deployments, circuit breaking, visibility and security segmentation. Load balancers are playing an increasingly important role in cloud native architectures.
Video and slides synchronized, mp3 and slide download available at http://bit.ly/29ZQmIx. Adrian Cockcroft discusses success/failure stories of adopting microservices, overviews what’s next with microservices and presents some of the techniques that have led to successful deployments. Filmed at qconnewyork.com. Adrian Cockcroft works at Battery where he advises the firm and its portfolio companies about technology issues and also assists with deal sourcing and due diligence. He was a founding member of eBay Research Labs, developing advanced mobile applications and even building his own homebrew phone, years before iPhone and Android launched.
This document discusses best practices for running Apache Kafka on Docker containers. It describes how to design Kafka deployments using Docker to provide portability, elasticity, and multi-tenancy. Key considerations include allocating appropriate resources to different roles like brokers and ZooKeeper nodes, enabling auto-configuration, and providing security and network isolation. The document outlines an implementation using Dockerfiles, configurations, and orchestration to deploy multi-node Kafka clusters across multiple hosts.
Jelastic provides an advanced DevOps PaaS with Docker containers support, easy cloud management and flexible quotas system to help service providers to unleash the full potential of containers.
This document discusses using microservices with Kafka. It describes how Kafka can be used to connect microservices for asynchronous communication. It outlines various features of Kafka like high throughput, replication, partitioning, and how it can provide reliability. Examples are given of how microservices could use Kafka for logging, filtering messages, and dispatching to different topics. Performance benefits of Kafka are highlighted like scalability and ability to handle high volumes of messages.
To manage the ever-increasing volume and velocity of data within your company, you have successfully made the transition from single machines and one-off solutions to large distributed stream infrastructures in your data center, powered by Apache Kafka. But what if one data center is not enough? I will describe building resilient data pipelines with Apache Kafka that span multiple data centers and points of presence, and provide an overview of best practices and common patterns while covering key areas such as architecture guidelines, data replication, and mirroring as well as disaster scenarios and failure handling.
DevOps with Containers in Virtual Private Cloud and Hybrid Cloud. A new opportunity for hosting providers to attract Enterprise customers.
This document discusses microservices and OSGi services running with Apache Karaf. It covers some of the operational overhead and complexity of microservices compared to using OSGi microservices (μServices) with Apache Karaf. Key points include reduced operational overhead and skills requirements, as well as built-in support for versioning and distributed capabilities with OSGi μServices in Apache Karaf. Continuous delivery techniques like using Jolokia for deployment and Apache Karaf Cellar for clustering are also mentioned.
This document provides an agenda and overview for a presentation on Apache Camel essential components. The presentation is given by Christian Posta from Red Hat on January 23, 2013. The agenda includes an introduction to Camel, a discussion of components, and time for questions. An overview of FuseSource/Red Hat is given, noting the acquisition of FuseSource by Red Hat in 2012. Details are provided on the speaker and their background. The document focuses on introducing some of the most widely used and essential Camel components, including File, Bean, Log, JMS, CXF, and Mock. Configuration options and examples of using each component are summarized.
Kafka's basic terminology, its architecture, its protocol and how it works. Kafka at scale, its caveats and guarantees, and the use cases it supports. How we use it @ZaprMediaLabs.
NFV workloads pose challenges for IaaS providers. Learn how hardware performance enhancements (DPA&EPA) by Intel, integrated with virtualization providers, can be an NFV enabler, and how advanced orchestration by TOSCA and Cloudify can put the right VNF on the right hardware and coordinate complex deployments.
In this webinar, we review the benefits of deploying a microservices architecture with Cassandra as your backbone in order to ensure your applications become incredibly reliable. We discuss in detail: - How to create microservices in Node.js with ExpressJs and Seneca - Tuning the Node.js driver for Cassandra: error handling, load balancing and degrees of parallelism - Additional best practices to ensure your systems are highly performant and available The sample service is available on GitHub: https://github.com/jorgebay/killr-service
Presented at IBM InterConnect 2015. Is your next enterprise application ready for the cloud? Do you know how to build the kind of low-latency, highly available, highly scalable, omni-channel, modern-day microservice application that customers expect? This introductory presentation will cover what it takes to build such an application using the multiple language runtimes and composing services offered on IBM Bluemix cloud.
The document provides an introduction and overview of Apache Kafka presented by Jeff Holoman. It begins with an agenda and background on the presenter. It then covers basic Kafka concepts like topics, partitions, producers, consumers and consumer groups. It discusses efficiency and delivery guarantees. Finally, it presents some use cases for Kafka and positioning around when it may or may not be a good fit compared to other technologies.
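The abstract above mentions Kafka's core concepts of topics, partitions, producers and consumers. The sketch below is a toy illustration (not the Kafka client API) of one of those ideas: a producer's key-based partitioner assigns each record to a partition, so all records with the same key land in the same partition and keep their order. The topic name, keys, values, and partition count are invented, and crc32 stands in for the murmur2 hash Kafka's Java client actually uses.

```python
# Illustrative sketch of Kafka-style key-based partitioning; names and the
# hash function are stand-ins, not Kafka's real implementation.
from collections import defaultdict
import zlib

NUM_PARTITIONS = 3

def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    # Kafka hashes the record key (murmur2 in the Java client); crc32 here.
    return zlib.crc32(key) % num_partitions

topic = defaultdict(list)  # partition id -> ordered log of records

for key, value in [(b"user-1", "login"), (b"user-2", "click"),
                   (b"user-1", "logout")]:
    topic[partition_for(key)].append((key, value))

# All records for the same key land in the same partition, preserving order.
p = partition_for(b"user-1")
assert [v for k, v in topic[p] if k == b"user-1"] == ["login", "logout"]
```

This per-key ordering guarantee is what lets a consumer group scale out — each consumer owns a subset of partitions — without reordering any single key's event stream.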
Service meshes are all the buzz in the cloud-native world. How come only yesterday we didn't know such a thing existed and now everybody seems to want one? If you're already running a microservice-based system or only starting out with one — you may be asking yourself: "Do I also need a mesh?" In this session we'll try to answer what the mesh is good for, what problems it solves, what new questions it poses. More specifically we will: explore the SMI Spec; understand why everybody wants a mesh; see how the mesh helps with progressive delivery; discuss if it's time for you to get into the mesh.
This document discusses Equinix's plans to implement a microservices architecture and edge caching capabilities for their DCIM platform. The new architecture aims to address challenges from Equinix's large, global, and heterogeneous infrastructure by breaking the application into independent, containerized microservices. This will improve scalability, performance, fault tolerance, and deployments. The architecture will leverage edge processing by caching data and services at local data centers to reduce latency and enable offline functionality.
Presentation from WJAX 2015 with Oliver Gierke. Compares REST and Messaging as an integration approach for Microservices.
Message queuing connects systems and components loosely by allowing them to exchange data and information asynchronously without being directly connected or aware of each other. It is ideal for cloud applications as it provides loose coupling between systems, making individual systems less likely to suffer outages when another system changes or has issues. The document discusses the history and benefits of message queuing including AMQP, which provides an open standard to enable interoperability between messaging systems.
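The decoupling described above can be sketched in a few lines using Python's standard library in place of a real broker: the producer knows only the queue, never the consumer, so either side can be replaced or restarted independently. The `orders` queue and the message shape are invented for illustration.

```python
# Minimal sketch of queue-based loose coupling: producer and consumer share
# only the queue, not each other's interfaces. Names are illustrative.
import queue
import threading

orders = queue.Queue()   # the only contract shared by both sides
processed = []

def producer():
    for i in range(3):
        orders.put({"order_id": i})   # fire-and-forget; no direct call
    orders.put(None)                  # sentinel: no more messages

def consumer():
    while (msg := orders.get()) is not None:
        processed.append(msg["order_id"])

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()

assert processed == [0, 1, 2]
```

A real AMQP broker adds durability, acknowledgements, and network transparency on top of this basic shape, but the coupling property — producer outages and consumer outages don't cascade into each other — is the same.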
Cloud computing provides on-demand access to shared computing resources like networks, servers, storage, applications and services available over the internet. It allows users and businesses to access these resources as needed without having to manage physical infrastructure themselves. The average internet user was one of the early major adopters of cloud computing without realizing it through services like email, photo storage, and social media. Cloud computing is here to stay due to benefits like agility, scalability, lower costs and simplifying software deployment for businesses of all sizes.
Do you also have a gut feeling that something is off when people talk about microservices? As if you are obliged to build services of at most a few lines of code that are explicitly autonomous and independently deployable, and must always communicate via REST? Everyone says you need to have your deployment well in order, because otherwise it becomes a nightmare, but nobody offers a solution. In this session we give you a different perspective on microservices and show how you can also implement them within your existing architecture.
CQS and CNS are message bus services for the cloud that are API compatible with AWS SQS and SNS. They use Cassandra for persistence and Redis for caching to provide scalability, high availability, and low latency. Extensive testing showed CQS can scale linearly to over 15,000 messages per second and CNS can handle thousands of messages per second with end-to-end latencies under 100ms. The services are in production use at Comcast to power applications like their X1 Sports app.
This document provides an introduction and overview of Akka, an open-source toolkit for building concurrent, distributed, and fault-tolerant applications on the JVM. It discusses the benefits of the actor model for concurrency, key Akka concepts including actors, messages, dispatchers, and supervision. It provides examples of actor definitions and message passing. The document aims to explain when and why Akka would be useful for building reactive and scalable applications.
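The actor model the Akka overview describes can be illustrated without the JVM: each actor owns a mailbox and processes one message at a time, so its internal state needs no locks. The sketch below is a deliberately tiny Python stand-in (not Akka's API); the `Counter` actor and its messages are invented for illustration.

```python
# Toy actor sketch: a private mailbox plus single-threaded message handling
# gives lock-free state, the core idea behind Akka actors. Not Akka's API.
import queue
import threading

class Counter:
    def __init__(self):
        self.count = 0
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def tell(self, msg):              # asynchronous send, like Akka's tell
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()  # messages handled strictly one at a time
            if msg == "stop":
                break
            if msg == "increment":
                self.count += 1       # no lock needed: only this thread mutates
            self.mailbox.task_done()

counter = Counter()
for _ in range(100):
    counter.tell("increment")
counter.mailbox.join()  # wait until all queued messages are handled
assert counter.count == 100
```

Akka adds the parts this sketch omits — supervision hierarchies for fault tolerance, configurable dispatchers, and location transparency for distribution — but the mailbox-per-actor discipline is the foundation.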
CQS and CNS are open source alternatives to Amazon SQS and SNS developed by Comcast to meet their requirements of compatibility with AWS, active-active multi-datacenter support, horizontal scalability, guaranteed delivery, and very low latency. CQS uses Cassandra for persistence and Redis for caching queue metadata and payloads to achieve high performance. CNS uses CQS and scales by distributing publishing and delivery work across multiple servers. The services have been open sourced and Comcast is seeking feedback on integrating them with OpenStack.
Apache Camel journey with Microservices, lessons learned and utilisation of Fabric8 to make Docker, Kubernetes and OpenShift easy for developers to use
At Comcast Silicon Valley we have developed a general purpose message bus for the cloud. The service is API compatible with Amazon’s SQS/SNS and is built on Cassandra and Redis with the goal of linear horizontal scalability. In this Webinar we will explore the architecture of the system and how we employ Cassandra as a central component to meet key requirements. We will also take a look at the latest performance numbers.
Red Hat JBoss Fuse integration services delivers cloud-based integration based on OpenShift by Red Hat to deliver continuous delivery of tested, production-ready integration solutions. Utilizing a drag and drop, code-free UI and combining that with the integration power of Apache Camel, Fuse integration services is the next generation iPaaS. In this session, we'll walk you through why iPaaS is important, the current Fuse integration services roadmap, and the innovation happening in open source community projects to make this a reality.
iPaaS (Integration Platform as a Service) is a cloud service that enables integration between applications both on-premises and in the cloud without having to write code. It provides pre-built connectors, data mapping capabilities, integration flow orchestration, and tools for managing the integration lifecycle. While iPaaS can help with integration challenges, issues around security, vendor lock-in, and regulatory compliance still need to be addressed. WSO2 envisions their iPaaS providing multi-tenancy, a connector catalog, an IDE, and integration with their AppFactory for application lifecycle management.
Build a Cloud Day presentation about Fuse Fabric technology in the cloud and how integration projects / architectures can be designed on top of CloudStack, OpenStack, Amazon, ...
One of the most fundamental challenges of CI/CD is the ability to balance between Quality, Time, and Cost. Amazon EC2 Container Service (ECS), along with Docker and Amazon EC2 Container Registry (ECR), has changed the game for many by making resource management very simple. For Okta, it has enabled the Continuous Integration team to maximize throughput while minimizing cost. In this session we will show you how Okta has created a flexible CI system with ECS, Docker, ECR, AWS Lambda, AWS CloudFormation, Amazon RDS, and Amazon SQS. Okta runs 30,000 tests with each developer commit, and releases 10,000 new lines of code each week to production. The CI system, built 100% on AWS, must be able to handle load while keeping cost under control. This talk is oriented toward developers looking to achieve efficient resource and cost management without compromising speed or quality.
The document discusses the challenges of handling massive online traffic for a company's quarterly sales promotions, including spikes up to 80,000 concurrent users per second. It proposes an elastic infrastructure solution on Amazon Web Services that uses load balancers, auto-scaling web and application servers, RabbitMQ queues to moderate traffic, and Redis for caching to dynamically expand capacity as needed during promotions and contract it afterwards. The architecture aims to efficiently queue and process high volumes of traffic while providing a scalable and available system.
"Containers, DevOps, Microservices and Kafka: Tools used by our Monolith wrecking crew." Speakers: Jonathan Owens, Senior Site Reliability Engineer and Jose Fernandez, Lead Software Engineer, New Relic
The document discusses integration architecture in a microservices world. It begins by defining integration architecture as how data and functions are shared between applications. It then discusses challenges with large enterprise landscapes that have undergone mergers and acquisitions. The document outlines different types of integration architectures like external, enterprise, batch-based, and event-based integration. It also discusses common misconceptions around microservices, such as thinking microservices refer to exposed APIs rather than application components. The summary concludes by noting debates around the differences between microservices and service-oriented architecture (SOA).
The document discusses iPaaS and cloud integration platforms. It describes Xactly's use of SnapLogic for integrating various SaaS applications like Salesforce, Workday and NetSuite OpenAir. Xactly needed a platform to create a single view of customers across different systems and apps. SnapLogic provided ease of use, flexibility and a cloud-based architecture. The presentation also demonstrates SnapLogic's elastic integration capabilities and how it can integrate big data in Hadoop.
There is a renaissance underway in the messaging space. Due to the demands of IoT networks, cloud-native apps, and microservices, developers are looking for simple, fast messaging systems. This is a sharp contrast to how traditional messaging was done. This webinar will cover: - The basics of messaging patterns - What makes NATS unique - Using a demo inspired by Pokemon Go as an example
Talk given at the Apache Kafka NYC Meetup, October 20, 2015. http://www.meetup.com/Apache-Kafka-NYC/events/225697500/ Kafka has emerged as a clear choice for a high-throughput, low latency messaging system that addresses the needs of high-performance streaming applications. The Spring Framework has been, in the last decade, the de-facto standard for developing enterprise Java applications, providing a simple and powerful programming model that allows developers to focus on the business needs, leaving the boilerplate and middleware integration to the framework itself. In fact, it has evolved into a rich and powerful ecosystem, with projects focusing on specific aspects of enterprise software development - like Spring Boot, Spring Data, Spring Integration, Spring XD, Spring Cloud Stream/Data Flow to name just a few. In this presentation, Marius Bogoevici from the Spring team will take the perspective of the Kafka user, and show, with live demos, how the various projects in the Spring ecosystem address their needs: - how to build simple data integration applications using Spring Integration Kafka; - how to build sophisticated data pipelines with Spring XD and Kafka; - how to build cloud native message-driven microservices using Spring Cloud Stream and Kafka, and how to orchestrate them using Spring Cloud Data Flow;
How do Google, Twitter, and Instagram ensure fast application performance at scale? One technique is asynchronous messaging using RabbitMQ to prevent application bottlenecks. In this session, we’ll cover common asynchronous messaging patterns and how to implement them in RabbitMQ, common pitfalls to avoid, and how to cluster RabbitMQ for increased scalability and reliability.
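One of the patterns such a session typically demonstrates is the work queue (competing consumers): several workers pull from one queue, so a slow task never blocks the whole application. The sketch below uses only the standard library in place of RabbitMQ/pika so it runs standalone; worker names and job payloads are invented.

```python
# Competing-consumers sketch: two workers drain one shared queue. This mirrors
# a RabbitMQ work queue in shape only; no broker or pika involved.
import queue
import threading

tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker(name: str):
    while True:
        job = tasks.get()
        if job is None:          # sentinel tells this worker to exit
            break
        with lock:
            results.append((name, job))

workers = [threading.Thread(target=worker, args=(f"w{i}",)) for i in range(2)]
for t in workers:
    t.start()
for job in range(6):
    tasks.put(job)
for _ in workers:
    tasks.put(None)              # one sentinel per worker
for t in workers:
    t.join()

# Every job processed exactly once, by whichever worker was free.
assert sorted(job for _, job in results) == [0, 1, 2, 3, 4, 5]
```

In real RabbitMQ the same effect comes from multiple consumers on one queue with per-message acknowledgements, which also covers the failure case this sketch ignores: an unacked message is redelivered if its worker dies.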
This document discusses various message queue technologies including RabbitMQ, ZeroMQ, cloud-based options like Azure Service Bus and Amazon SQS/SNS, and the lightweight NATS system. It provides overviews of each technology, highlighting key features, protocols, and use cases. Examples and code demos are shown for RabbitMQ and ZeroMQ. The document aims to help readers understand different message queue options and pick the most suitable one based on their distributed system and cloud hosting needs.
Scale changes everything. The number of connections and destinations has gone from dozens to thousands, and the number of messages has increased by orders of magnitude. What once was quite adequate for enterprise messaging can't scale to support the "Internet of Things". We need new protocols, patterns and architectures to support this new world. This session will start with a basic introduction to the concept of the Internet of Things. Next it will discuss the general technical challenges involved with the concept and explain why it is becoming mainstream now. Now we're ready to start talking about solutions. We will introduce some messaging patterns (like telemetry and command/control) and protocols (such as MQTT and AMQP) used in these scenarios. Finally we will see how Apache ActiveMQ is gearing up for this race. We will show tips for horizontal and vertical scaling of the broker, related projects that can help with deployments and what the future development road map looks like.
In this webinar HiveMQ CTO Dominik Obermaier will cover everything you need to know about creating a lightweight and scalable IoT message architecture. He will discuss the open source projects you need to deploy and manage an MQTT-based IoT architecture. Don't miss your chance to learn about HiveMQ and the concept of MQTT! The recording of this webinar is available on YouTube.
MQTT is the de-facto protocol for the Internet of Things (IoT). This webinar covers everything you need to know about scalable pub/sub communication with MQTT for up to millions of devices and shows the available software options in the (open source) ecosystem. About the Speaker. Dominik Obermaier is CTO and co-founder of HiveMQ. He is a member of the OASIS Technical Committee and is part of the standardization committee for MQTT 3.1.1 and MQTT 5. He is the co-author of the book 'The Technical Foundations of IoT' and a frequent speaker on IoT, MQTT, and messaging. To watch the webinar recording: https://www.hivemq.com/webinars/lightweight-and-scalable-iot-messaging-with-mqtt/
MQTT is by far the most popular Internet of Things protocol used in the largest professional IoT deployments worldwide. The protocol is so simple and versatile that it can be used for private home automation projects as well as ultra-secure and highly scalable enterprise installations. This talk will show how a pure Java and open source technology stack can be used for IoT devices as well as backend applications for building next-generation IoT projects.
MQTT is a lightweight publish/subscribe messaging protocol that is ideal for constrained environments like sensors and mobile devices. It was invented in 1999 by Andy Stanford-Clark of IBM and Arlen Nipper of Arcom. MQTT uses a broker-based messaging model with a publish/subscribe pattern, and supports three quality-of-service levels. It has been widely adopted in applications involving sensors, mobile devices, and the Internet of Things.
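The publish/subscribe pattern mentioned above rests on MQTT's hierarchical topic names, where subscribers use `+` to match exactly one level and `#` to match the remainder of the tree. The following is a simplified, broker-free sketch of that matching rule (it skips spec edge cases such as `$`-prefixed topics); the topic names are invented examples.

```python
# Simplified sketch of MQTT topic-filter matching: '+' matches one level,
# '#' matches everything below. Not a full MQTT 3.1.1 implementation.
def topic_matches(filter_: str, topic: str) -> bool:
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # multi-level wildcard: match the rest
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:  # '+' matches any single level
            return False
    return len(f_parts) == len(t_parts)

assert topic_matches("home/+/temperature", "home/kitchen/temperature")
assert topic_matches("home/#", "home/kitchen/humidity")
assert not topic_matches("home/+/temperature", "home/kitchen/door/temperature")
```

This topic structure is why one constrained sensor can publish to a single name while many backends subscribe with wildcards, without the sensor knowing who listens.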
This document discusses software-based networking and network function virtualization (NFV). It introduces NetVM, an NFV platform developed by the author that provides high performance packet delivery across virtual machines using DPDK for zero-copy networking. NetVM enables complex network services to be distributed across multiple VMs while maintaining high throughput. The author also discusses OpenNetVM, an open source version of NetVM, and contributions like Flurries that enable unique network functions to run per flow for improved scalability. NFVnice, a userspace framework for scheduling NFV chains, is also introduced to improve throughput, fairness and CPU utilization.