This document provides an overview and agenda for a presentation on Apache ActiveMQ 5.9.x and Apache Apollo. The presentation will cover new features in ActiveMQ 5.9.x including AMQP 1.0 support, REST management, a new default file-based store using LevelDB, and high availability replication of the store. It will also introduce Apache Apollo and allow for a question and discussion period.
This document provides an agenda and overview for a presentation on Apache Camel essential components. The presentation is given by Christian Posta from Red Hat on January 23, 2013. The agenda includes an introduction to Camel, a discussion of components, and time for questions. An overview of FuseSource/Red Hat is given, noting the acquisition of FuseSource by Red Hat in 2012. Details are provided on the speaker and their background. The document focuses on introducing some of the most widely used and essential Camel components, including File, Bean, Log, JMS, CXF, and Mock. Configuration options and examples of using each component are summarized.
This document discusses integration in the age of DevOps. It describes how microservices help solve the problem of decoupling services and teams to move quickly at scale. Apache Camel is presented as a solution for integration that allows for reliable and distributed integration through mechanisms like messaging. Kubernetes and Docker are discussed as platforms that help develop and run microservices locally and at scale by providing automation, configuration, isolation and service discovery capabilities.
This document provides an overview of a presentation given at CamelOne 2013 in Boston on June 10-11, 2013 about the internals of Apache ActiveMQ. The presentation covered the major subcomponents of ActiveMQ including transports, the broker core, persistence adapters, and networking brokers. It provided details on architecture, configuration, and implementation of these different aspects of ActiveMQ.
This document provides an overview of Apache ActiveMQ, an open-source messaging server. It discusses ActiveMQ's features such as high performance, high availability, multiple protocols and transports. It also covers tools for benchmarking and performance tuning ActiveMQ brokers, including the ActiveMQ Performance Module, jms-benchmark, JMSTester, JMeter and OS monitoring tools. The document is intended to help understand how to approach performance tuning of ActiveMQ brokers.
Messaging is the backbone of many top enterprises. It affords reliable, asynchronous data passing to achieve loosely coupled, highly scalable distributed systems. As enterprises large and small become more interconnected, demand for remote and limited devices to be integrated with enterprise systems is surging. Come see how the most widely used, open-source messaging broker, Apache ActiveMQ, fits nicely and how it supports polyglot messaging.
Microservices with Apache Camel, Docker and Fabric8 v2 (Christian Posta)
My talk from Red Hat Summit 2015 about the pros/cons of microservices, how integration is a strong requirement for doing distributed systems designs, and how open source projects like Apache Camel, Docker, Kubernetes, OpenShift and Fabric8 can help simplify and manage microservice environments
The document discusses Apache Camel, an open-source integration library that can be used to integrate disparate systems that use different protocols and data formats. It provides an overview of what integration is, describes how Camel works using a domain-specific language and components, and demonstrates how to define simple routes using Java or XML. The presentation concludes with information on management and tooling support for Camel.
Moving Gigantic Files Into and Out of the Alfresco Repository (Jeff Potts)
This talk is a technical case study showing how Metaversant solved a problem for one of their clients, Noble Research Institute. Researchers at Noble deal with very large files which are often difficult to move into and out of the Alfresco repository.
This document discusses achieving horizontal scaling for enterprise messaging using Fabric8. It provides an introduction to Fabric8 and enterprise messaging concepts. It then describes how Fabric8MQ, which is built on Vert.x, provides horizontal scaling and load balancing for ActiveMQ by implementing features like protocol conversion, Camel routing, API management, multiplexing, and destination sharding across Kubernetes pods and nodes. The document concludes with a demo of Fabric8MQ's capabilities.
The Proxy Wars - MySQL Router, ProxySQL, MariaDB MaxScale (Colin Charles)
This document discusses MySQL proxy technologies including MySQL Router, ProxySQL, and MariaDB MaxScale. It provides an overview of each technology, including when they were released, key features, and comparisons between them. ProxySQL is highlighted as a popular option currently with integration with Percona tools, while MySQL Router may become more widely used due to its support for MySQL InnoDB Cluster. MariaDB MaxScale is noted for its binlog routing capabilities. Overall the document aims to help people understand and choose between the different MySQL proxy options.
Extreme performance with Oracle SOA Suite 12.2, Coherence and Exalogic can be achieved by configuring the platform to take advantage of the Exalogic infrastructure and optimizing SOA Suite and Coherence settings. Key aspects include using Coherence caching to minimize database transactions, configuring optimal WebLogic and JDBC settings for InfiniBand networking, and tuning SOA Suite dehydration and caching properties. This provides significant performance gains over a traditional architecture.
Michel Schildmeijer gave a keynote at the Oracle Middleware Summit on January 9th, 2019. He discussed the history and evolution of Oracle Fusion Middleware from traditional middleware to more modern, cloud-native approaches. He outlined Oracle's focus on containers, Kubernetes, and microservices and how WebLogic and other FMW products are adapting to these trends, including new options like Helidon for developing microservices. Schildmeijer concluded that WebLogic will still be foundational but the focus is shifting to hybrid cloud-native solutions.
Cloud Development with Camel and Amazon Web Services (Robin Howlett)
This presentation will demonstrate how to rapidly prototype and develop distributed, scalable applications with Apache Camel, its AWS Components and the AWS Java SDK.
Robin Howlett is Senior Architect at Silver Chalice, a Chicago White Sox affiliated start-up, based in Boulder, CO, with a portfolio of high-value digital-based businesses in the fields of sports, media and entertainment. In 2011, he built the Advanced Media Platform, a proprietary cloud-based platform that services millions of requests per day across dozens of mobile application products, heavily utilizing the Apache Camel framework.
WebLogic 12.2 introduces new multitenancy features including:
- Improved high-density deployment features through microcontainers and partitions that allow for increased isolation between tenant applications and resources.
- Enhanced multitenancy capabilities including live partition migration to move running partitions between clusters with zero downtime.
- Continuous availability features such as automated data center setup and failover, cross-domain transaction recovery, and multitenant live partition migration.
Messaging For the Cloud and Microservices (Rob Davies)
Utilising messaging in cloud deployments isn't straightforward, particularly if you want to take advantage of auto scaling. This talk covers the general problems of scaling for cloud deployments, and messaging for faster inter-service communication in microservices.
My talk at ScaleConf 2017 in Cape Town on some tips and tactics for scaling WordPress, with reference to WordPress.com and the container-based VIP Go platform.
Video of my talk is here: https://www.youtube.com/watch?v=cs0DcY80spw
MariaDB Server & MySQL Security Essentials 2016 (Colin Charles)
This document summarizes a presentation on MariaDB/MySQL security essentials. The presentation covered historically insecure default configurations, privilege escalation vulnerabilities, access control best practices like limiting privileges to only what users need and removing unnecessary accounts. It also discussed authentication methods like SSL, PAM, Kerberos and audit plugins. Encryption at the table, tablespace and binary log level was explained as well. Preventing SQL injections and available security assessment tools were also mentioned.
Zarafa SummerCamp 2012 - Steve Hardy Friday Keynote (Zarafa)
The document summarizes updates to the Zarafa development in 2012, including:
1) Expanded development teams with new offices in Ukraine and additions in India and Delft, and adopting the Scrum methodology with releases every two weeks.
2) Improvements to tracking tickets in JIRA and assessing tickets within 1 day with a focus on fixing bugs over features.
3) Expanded the set of supported platforms, now including Windows, and reduced build time for all distributions from over 6 hours to under 1 hour.
4) Continued work on the WebApp, plugins, Z-Admin, integration with other services like Spreed, and contributions from an expanded international team.
Scaling out a web application involves adding redundancy, separating application tiers across multiple servers, implementing load balancing, caching content, and monitoring performance. Key aspects include mirroring disks for redundancy, moving services to separate application servers, using load balancing schemes like DNS round-robin or load balancers, solving session state issues through sticky routing or database storage, and caching dynamic content to improve performance. Monitoring the environment is also important to detect failures or bottlenecks as the infrastructure scales out.
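The load-balancing and session-affinity ideas above can be sketched as a reverse-proxy configuration. This is a minimal illustration, not from the talk itself; the upstream name and server addresses are placeholders.

```nginx
# Pool of separate application servers behind one proxy.
# ip_hash provides "sticky" routing: requests from the same
# client IP keep landing on the same backend, which sidesteps
# the session-state problem without a shared session store.
upstream app_servers {
    ip_hash;                        # session affinity by client IP
    server app1.example.com:8080;
    server app2.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```

Dropping the `ip_hash` line falls back to plain round-robin, which then requires sessions to be stored in a database or cache shared by all backends.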
Zarafa SummerCamp 2012 - Exchange Web Services, technical information (Zarafa)
Exchange Web Services (EWS) is an XML-based protocol used to access Exchange servers. It was introduced in Exchange 2007 and uses SOAP over HTTP. EWS supports synchronization of folders and items, as well as live access via methods like GetItem and FindItem. Authentication can occur via NTLM, Kerberos, or basic authentication. Notifications can be handled through polling, pulling, or pushing events to clients. The EWS protocol has evolved over time to support additional Exchange features.
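The live-access methods mentioned above are plain SOAP calls over HTTP. As a rough sketch, a `GetItem` request looks like the following; the item identifier is a truncated placeholder, not a real ID.

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types"
               xmlns:m="http://schemas.microsoft.com/exchange/services/2006/messages">
  <soap:Body>
    <!-- Fetch a single item; BaseShape controls how many
         properties the server returns (IdOnly/Default/AllProperties) -->
    <m:GetItem>
      <m:ItemShape>
        <t:BaseShape>Default</t:BaseShape>
      </m:ItemShape>
      <m:ItemIds>
        <t:ItemId Id="AAMkAD-placeholder"/>
      </m:ItemIds>
    </m:GetItem>
  </soap:Body>
</soap:Envelope>
```

The response carries the item as typed XML elements in the same `t:` namespace, which is what makes EWS easier to consume than the older binary MAPI protocols.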
Microservices architecture has many benefits. But it comes at a cost. Running microservices and monitoring what’s going on is tedious. That’s why MicroProfile adopts monitoring as a first-class concept. In this session, learn how MicroProfile runtimes collect metrics and how to seamlessly collect them with tools like Prometheus and Grafana. Learn how MicroProfile makes it easy to connect information about interrelated service calls, how to gather the information and analyze system bottlenecks, how to deploy and scale MicroProfile applications with Kubernetes and how to react to their health status to detect and automatically recover from failures.
The Complete MariaDB Server Tutorial - Percona Live 2015 (Colin Charles)
The document provides an overview of the Complete MariaDB Server Tutorial presentation. It introduces MariaDB and discusses what it is, its goals of being compatible with MySQL and having stable releases. It also covers MariaDB architecture, installation, utilities, and storage engines.
Meet MariaDB 10.1 at the Bulgaria Web Summit (Colin Charles)
Meet MariaDB 10.1 at the Bulgaria Web Summit, held in Sofia in February 2016. Learn all about MariaDB Server, and the new features like encryption, audit plugins, and more.
Apache ActiveMQ - Enterprise messaging in action (dejanb)
This document provides an overview of Apache ActiveMQ, an open source messaging platform. It discusses key ActiveMQ concepts like topics, queues, and messaging protocols. It also covers ActiveMQ enterprise features such as high availability, clustering, security, and monitoring. The document concludes by discussing ActiveMQ performance tuning, scaling, and future plans.
Apache ActiveMQ is an open-source messaging and integration pattern server that allows for message throttling, redelivery, and delay. This document discusses how to install and configure ActiveMQ, including setting up dead letter queues and clustering multiple ActiveMQ instances. The key steps are: 1) Installing ActiveMQ on each node, 2) Configuring dead letter queues by setting redelivery policies in activemq.xml, and 3) Configuring clustering by giving each broker a unique name, connecting them to a shared SQL database, and starting one as the master node.
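Step 2 above can be sketched in `activemq.xml` roughly as follows; the broker name and the `DLQ.` prefix are illustrative choices, not prescribed by the document.

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker-1">

  <!-- Redelivery: retry a failed message a few times with a delay
       before giving up and routing it to the dead letter queue -->
  <plugins>
    <redeliveryPlugin fallbackToDeadLetter="true"
                      sendToDlqIfMaxRetriesExceeded="true">
      <redeliveryPolicyMap>
        <redeliveryPolicyMap>
          <defaultEntry>
            <redeliveryPolicy maximumRedeliveries="4"
                              initialRedeliveryDelay="5000"
                              redeliveryDelay="10000"/>
          </defaultEntry>
        </redeliveryPolicyMap>
      </redeliveryPolicyMap>
    </redeliveryPlugin>
  </plugins>

  <!-- Per-destination DLQs instead of the single shared ActiveMQ.DLQ -->
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry queue=">">
          <deadLetterStrategy>
            <individualDeadLetterStrategy queuePrefix="DLQ."
                                          useQueueForQueueMessages="true"/>
          </deadLetterStrategy>
        </policyEntry>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>
```

For step 3, each broker gets a unique `brokerName` and a `jdbcPersistenceAdapter` pointing at the shared SQL database; whichever broker grabs the database lock first becomes the master.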
Messaging for Web and Mobile with Apache ActiveMQ (dejanb)
This document summarizes a presentation on messaging for web and mobile applications using Apache ActiveMQ. The presentation covered challenges with HTTP messaging, advantages of STOMP and MQTT protocols, and examples of using STOMP over WebSocket for browser messaging and MQTT for mobile apps. It also provided an overview of Apache ActiveMQ's support for STOMP, including client examples in Java.
This document provides an overview of Apache ActiveMQ and messaging with JMS. It discusses what JMS is and how it abstracts message brokers. It then describes what ActiveMQ is and its goals as open source message-oriented middleware. The document outlines examples, configurations, transports, topologies and high availability options for ActiveMQ. It also discusses security, monitoring, visualization and integration with Apache Camel.
This document summarizes common problems and solutions when using ActiveMQ. It addresses questions about creating JMS clients from scratch, efficiently managing connections, consuming only certain messages, reasons for locking/freezing, when a network of brokers is needed, and using a master/slave configuration. Spring JMS and selectors are recommended over building clients from scratch. Connection pooling and caching are advised for efficiency. Selectors and proper design can filter messages. Memory, prefetch limits, and cursors impact performance and need configuration. Networked brokers improve availability while master/slave configurations provide high availability.
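The prefetch and memory settings mentioned above live in two places: per-destination limits in `activemq.xml`, and consumer prefetch on the connection URI. A minimal sketch, with illustrative values:

```xml
<!-- Cap per-queue memory and spool pending messages to the store
     via a cursor instead of holding them all in RAM -->
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" memoryLimit="64mb">
        <pendingQueuePolicy>
          <storeCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```

On the client side, a slow consumer can be given a smaller prefetch so the broker does not flood it, e.g. `tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=100`, and a selector such as `JMSType = 'order'` filters messages at the broker rather than in client code.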
This document provides an overview of integrating microservices with Apache Camel and JBoss Fuse. It introduces Apache Camel as a lightweight integration library that uses enterprise integration patterns and domain-specific languages to define integration "flows" and "routes". It describes how Camel supports features like dynamic routing, REST APIs, backpressure, load balancing, and circuit breakers that are useful for building microservices. The document also introduces JBoss Fuse as a development and runtime platform for microservices that provides tooling, frameworks, management capabilities and container support using technologies like Apache Camel, CXF, ActiveMQ and Karaf.
Microservices architecture is a very powerful way to build scalable systems optimized for speed of change. To do this, we need to build independent, autonomous services which by definition tend to minimize dependencies on other systems. One of the tenets of microservices, and a way to minimize dependencies, is “a service should own its own database”. Unfortunately this is a lot easier said than done. Why? Because: your data.
We’ve been dealing with data in information systems for 5 decades, so isn’t this a solved problem? Yes and no. A lot of the lessons learned are still very relevant. Traditionally, we application developers have accepted the practice of using relational databases and relying on all of their safety guarantees without question. But as we build services architectures that span more than one database (by design, as with microservices), things get harder. If data about a customer changes in one database, how do we reconcile that with other databases (especially where the data storage may be heterogeneous)?
For developers focused on the traditional enterprise, not only do we have to try to build fast-changing systems that are surrounded by legacy systems, the domains (finance, insurance, retail, etc) are incredibly complicated. Just copying what Netflix does for microservices may or may not be useful. So how do we develop and reason about the boundaries in our system to reduce complexity in the domain?
In this talk, we’ll explore these problems and see how Domain Driven Design helps grapple with the domain complexity. We’ll see how DDD concepts like Entities and Aggregates help reason about boundaries based on use cases and how transactions are affected. Once we can identify our transactional boundaries we can more carefully adjust our needs from the CAP theorem to scale out and achieve truly autonomous systems with strictly ordered eventual consistency. We’ll see how technologies like Apache Kafka, Apache Camel and Debezium.io can help build the backbone for these types of systems. We’ll even explore the details of a working example that brings all of this together.
This document provides an overview of Apache ActiveMQ, an open source messaging system. It discusses what ActiveMQ is, its basics like topics and queues, techniques for scaling such as vertical, horizontal and hybrid approaches, ensuring high availability, and its future direction with ActiveMQ Apollo. The presentation aims to explain how ActiveMQ works and how to configure it for different deployment needs.
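The horizontal-scaling approach mentioned above is typically a network of brokers, where brokers forward messages to peers that have demand for them. A minimal sketch of one side of such a bridge in `activemq.xml` (the peer hostname is a placeholder):

```xml
<!-- Store-and-forward bridge to a second broker; duplex="true"
     lets messages flow in both directions over one connection -->
<networkConnectors>
  <networkConnector name="bridge-to-broker2"
                    uri="static:(tcp://broker2:61616)"
                    duplex="true"
                    networkTTL="2"/>
</networkConnectors>
```

`networkTTL` bounds how many broker hops a message may take, which matters once the network grows beyond a simple pair.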
Connecting Applications Everywhere with ActiveMQ (Rob Davies)
This document summarizes a presentation given by Rob Davies at the CamelOne 2013 conference in Boston, MA on June 10-11, 2013. The presentation introduced Apache ActiveMQ, an open-source message broker, and discussed its features including messaging protocols, management tools, high availability, and integration with Apache Camel. It also covered challenges of deploying and maintaining large ActiveMQ clusters and how Red Hat Fuse Fabric can help address these challenges.
This document provides an agenda and summaries of key points from a presentation on integrating systems using Apache Camel. The presentation discusses how Apache Camel is an open-source integration library that uses enterprise integration patterns to connect disparate systems. It highlights features of Camel including components, data formats, and testing frameworks. Customer examples are presented that demonstrate large returns on investment and cost savings from using Camel for integration projects. The presenters argue that Camel provides flexibility, reusability and rapid development of integrations.
Using Apache Camel for microservices and integration, then deploying and managing on Docker and Kubernetes. When we need to make changes to our app, we can use Fabric8 continuous delivery built on top of Kubernetes and OpenShift.
Real-world #microservices with Apache Camel, Fabric8, and OpenShift (Christian Posta)
What are and aren't microservices?
Microservices are a validation of the open-source approach to integration and service implementation, and a rebuff of the committee-driven SOA approach.
Enterprise Integration Patterns with ActiveMQ (Rob Davies)
This document discusses enterprise integration patterns and deployments using Apache ActiveMQ. It provides an overview of key integration concepts like message channels, routing, types of messages, push and pull integration models, request/reply patterns, and job processing. It also covers deployment patterns such as hub and spoke and failover between data centers. Finally, it introduces Apache Camel as a powerful integration framework that supports these patterns and can be used with ActiveMQ.
JavaOne 2016: Kubernetes introduction for Java Developers (Rafael Benevides)
This document provides an introduction to Kubernetes and summarizes some of its key concepts. It describes how Kubernetes can manage containers across multiple machines and help address challenges of scaling, port conflicts, and high availability. Core Kubernetes concepts discussed include pods, replication controllers, labels, services, and persistent volumes. It also provides an overview of a sample application that will be used in an accompanying Kubernetes lab.
The document discusses continuous delivery of integration applications using JBoss Fuse and OpenShift. It covers the cost of change in software development, how JBoss Fuse can help with integration challenges, and how OpenShift enables continuous delivery through automation and a developer self-service platform as a service model. The presentation demonstrates how to build a continuous delivery pipeline using tools like Git, Jenkins, Fabric8, and OpenShift to deploy and test applications.
Microservices with Apache Camel, DDD, and Kubernetes - Christian Posta
Building microservices requires more than just infrastructure, but infrastructure does have a role. In this talk we look at microservices from an enterprise perspective and talk about DDD, Docker, Kubernetes, and how established open-source projects in the integration space fit a microservices architecture.
ActiveMQ is an open source message broker that implements the Java Message Service (JMS) API. It allows applications written in different languages to communicate asynchronously. Apache Camel is an open source integration framework that can be used to build messaging routes between different transports and APIs using a simple domain-specific language. It provides mediation capabilities and supports common integration patterns. Both ActiveMQ and Camel aim to simplify integration between disparate systems through message-based communication.
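Camel's domain-specific language reads like from(source).filter(...).to(sink). The toy class below mimics that fluent style to show how a mediation route composes filtering and transformation steps; it is an illustrative sketch in Python, not the actual (JVM-based) Camel API.

```python
class Route:
    """Toy fluent routing DSL in the spirit of Camel's from(...).to(...)."""
    def __init__(self, source):
        self.source = source
        self.steps = []
        self.sink = None
    def filter(self, predicate):
        self.steps.append(("filter", predicate))
        return self
    def transform(self, fn):
        self.steps.append(("transform", fn))
        return self
    def to(self, sink):
        self.sink = sink
        return self
    def run(self, messages):
        """Push messages through the route, applying steps in order."""
        delivered = []
        for msg in messages:
            keep = True
            for kind, fn in self.steps:
                if kind == "filter" and not fn(msg):
                    keep = False  # message dropped by the filter step
                    break
                if kind == "transform":
                    msg = fn(msg)
            if keep:
                delivered.append((self.sink, msg))
        return delivered

route = (Route("file:inbox")
         .filter(lambda m: m.endswith(".xml"))
         .transform(str.upper)
         .to("jms:orders"))
out = route.run(["a.xml", "b.txt", "c.xml"])
# out == [("jms:orders", "A.XML"), ("jms:orders", "C.XML")]
```

The endpoint URIs ("file:inbox", "jms:orders") follow Camel's naming convention but are inert strings here.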
Atlanta JUG - Integrating Spring Batch and Spring Integration - Gunnar Hillert
This document provides an overview and introduction to Spring Batch, Spring Integration, and Spring XD. It discusses key concepts and features of Spring Batch for batch processing and Spring Integration for enterprise integration. It also demonstrates how Spring Batch and Spring Integration can be used together for batch integration use cases. Finally, it introduces Spring XD for unified data ingestion, analytics, and export capabilities using existing Spring projects. The presentation includes code samples and links to documentation and GitHub repositories for further information.
How does Apache Pegasus (incubating) community develop at SensorsData - acelyc1112009
A presentation in ApacheCon Asia 2022 from Dan Wang and Yingchun Lai.
Apache Pegasus is a horizontally scalable, strongly consistent and high-performance key-value store.
Learn more about Pegasus at https://pegasus.apache.org and https://github.com/apache/incubator-pegasus
Getting started with Riak in the Cloud involves provisioning a Riak cluster on Engine Yard and optimizing it for performance. Key steps include choosing instance types like m1.large or m1.xlarge that are EBS-optimized, having at least 5 nodes, setting the ring size to 256, disabling swap, using the Bitcask backend, enabling kernel optimizations, and monitoring and backing up the cluster. Benchmarks show best performance from high I/O instance types like hi1.4xlarge that use SSDs rather than EBS storage.
MariaDB 10.1 what's new and what's coming in 10.2 - Tokyo MariaDB Meetup - Colin Charles
Presented at the Tokyo MariaDB Server meetup in July 2016, this is an overview of what you can see and use in MariaDB Server 10.1, but more importantly what is planned to arrive in 10.2
MariaDB started life in 2009 as a database to host the Maria storage engine. Not long after its inception, the MySQL community went through yet another change in ownership, and it was decided that MariaDB would be a complete database branch developed to extend MySQL, with constant merging of upstream changes.
The goal of the MariaDB project is to ensure that everyone is part of the community, including employees of the major steering companies. MariaDB also ships enhanced features, some of which it has in common with Percona Server. Most importantly, MariaDB is a drop-in replacement, completely backward compatible with MySQL. In 2010, MariaDB released 5.1 in February and 5.2 in November – two major releases in a single calendar year, quite a feat!
DBAs and developers alike will gain an introduction to MariaDB, what is different from MySQL, how to make use of the feature enhancements, and more.
OSDC 2018 | Scaling & High Availability MySQL learnings from the past decade+... - NETWAYS
The MySQL world is full of tradeoffs, and choosing a High Availability (HA) solution is no exception. This session aims to look at all of the alternatives in an unbiased manner. While the landscape will be covered, including but not limited to MySQL replication, MHA, DRBD, Galera Cluster, etc., the focus of the talk will be what is recommended for today and what to look out for. This will include extensive deep-dive coverage of ProxySQL, semi-sync replication, Orchestrator, MySQL Router, and Galera Cluster variants like Percona XtraDB Cluster and MariaDB Galera Cluster. I will also touch on group replication.
Learn how we do this for our 4,000+ customers!
Sina Weibo is the most popular microblogging platform in China. It has more than 100 million users and tens of millions of daily updates. This slide deck explains the performance challenges on the Weibo platform.
Performance Tuning RocksDB for Kafka Streams’ State Stores - confluent
Performance Tuning RocksDB for Kafka Streams’ State Stores, Bruno Cadonna, Contributor to Apache Kafka & Software Developer at Confluent and Dhruba Borthakur, CTO & Co-founder Rockset
Meetup link: https://www.meetup.com/Berlin-Apache-Kafka-Meetup-by-Confluent/events/273823025/
Technical overview of three of the most representative key-value stores: Cassandra, Redis, and CouchDB. Focused on Ruby and Ruby on Rails development, this talk shows how to solve common problems, the most popular libraries, benchmarking, and the best use case for each one of them.
This talk was part of the Conferencia Rails 2009, Madrid, Spain.
http://app.conferenciarails.org/talks/43-key-value-stores-conviertete-en-un-jedi-master
Performance Tuning RocksDB for Kafka Streams' State Stores (Dhruba Borthakur,... - confluent
RocksDB is the default state store for Kafka Streams. In this talk, we will discuss how to improve single node performance of the state store by tuning RocksDB and how to efficiently identify issues in the setup. We start with a short description of the RocksDB architecture. We discuss how Kafka Streams restores the state stores from Kafka by leveraging RocksDB features for bulk loading of data. We give examples of hand-tuning the RocksDB state stores based on Kafka Streams metrics and RocksDB’s metrics. At the end, we dive into a few RocksDB command line utilities that allow you to debug your setup and dump data from a state store. We illustrate the usage of the utilities with a few real-life use cases. The key takeaway from the session is the ability to understand the internal details of the default state store in Kafka Streams so that engineers can fine-tune their performance for different varieties of workloads and operate the state stores in a more robust manner.
Companies with batch and stream processing pipelines need to serve the insights they glean back to their users, an often-overlooked problem that can be hard to achieve reliably and at scale. Felix GV and Yan Yan offer an overview of Venice, a new data store capable of ingesting data from Hadoop and Kafka, merging it together, replicating it globally, and serving it online at low latency.
Venice was designed to be the next-generation replacement of the Voldemort Read-Only system, with the intent to provide a broader feature set, better availability characteristics, and a more efficient architecture. Venice is designed for high-throughput ingestion from Hadoop and Kafka, and these data sources can be merged at ingestion time in order to provide semantics similar to those of a lambda architecture but with a simpler, faster, and more available read path. Robustness is a primary architectural concern and, as such, Venice provides highly available reads and writes, self-healing, stringent data validation guarantees, and the ability to roll back entire datasets in cases where bad data is pushed.
Today you can use hosted MySQL/MariaDB/Percona Server through several cloud providers as a service, a database as a service (DBaaS). You can also use hosted PostgreSQL and MongoDB through various service providers. Learn the differences, the access methods, and the level of control you have for the various public cloud offerings:
- Amazon RDS for MySQL and PostgreSQL
- Google Cloud SQL
- Rackspace OpenStack DBaaS
- The likes of compose.io, MongoLab and Rackspace's offerings around MongoDB
The administration tools and ideologies behind it are completely different, and you are in a "locked-down" environment. Some considerations include:
* Different backup strategies
* Planning for multiple data centres for availability
* Where do you host your application?
* How do you get the most performance out of the solution?
* What does this all cost?
Growth topics include:
* How do you move from one DBaaS to another?
* How do you move all this from DBaaS to your own hosted platform?
Questions like these will be demystified in the talk. This talk will benefit experienced database administrators (DBAs) who now also have to deal with cloud deployments, as well as application developers in startups that have to rely on "managed services" without access to a DBA.
The Evolution of Open Source Databases - Ivan Zoratti
The document provides an overview of the evolution of open source databases from the past to present and future. It discusses the early days of navigational and hierarchical databases. It then covers the development of relational databases and SQL. It outlines the rise of open source databases like MySQL, PostgreSQL, and SQLite. It also summarizes the emergence of NoSQL databases and NewSQL systems to handle big data and cloud computing. The document predicts continued development and blending of features between SQL, NoSQL, and NewSQL databases.
Presented at Percona Live Amsterdam 2016, this is an in-depth look at MariaDB Server right up to MariaDB Server 10.1. Learn the differences. See what's already in MySQL. And so on.
The Stack Exchange infrastructure supports 560 million page views and 34 TB of data transferred per month, at an average of 1,665 requests per second. To ensure high performance, Stack Exchange uses load balancers, web servers, caching, and databases in a redundant configuration. Careful performance monitoring and optimization has resulted in homepage and question-page render times of 52 ms and 33 ms respectively.
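A quick back-of-the-envelope check on those figures (assuming a 30-day month):

```python
# Sanity-check the stated Stack Exchange throughput figures.
SECONDS_PER_MONTH = 30 * 24 * 3600          # 2,592,000 s in a 30-day month
page_views = 560_000_000                    # page views per month
data_tb = 34                                # TB transferred per month

views_per_sec = page_views / SECONDS_PER_MONTH          # ~216 page views/s
mb_per_sec = data_tb * 1e12 / SECONDS_PER_MONTH / 1e6   # ~13.1 MB/s average
```

The roughly 216 page views per second sits well below the stated 1,665 requests per second, which presumably counts every HTTP request (assets, API calls, redirects) rather than rendered pages only.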
OSDC 2016 - Tuning Linux for your Database by Colin Charles - NETWAYS
Many operations folk know that performance varies depending on which of the many Linux filesystems you use, like EXT4 or XFS. They also know the available schedulers, and they have seen the OOM killer in action. However, appropriate configuration is necessary when you're running your databases at scale.
Learn best practices for Linux performance tuning for MariaDB/MySQL (where MyISAM uses the operating system cache, and InnoDB maintains its own aggressive buffer pool), as well as PostgreSQL and MongoDB (more dependent on the operating system). Topics that will be covered include: filesystems, swap and memory management, I/O scheduler settings, using and understanding the tools available (like iostat/vmstat/etc), practical kernel configuration, profiling your database, and using RAID and LVM.
There is a focus on bare metal as well as on configuring your cloud instances.
Learn from practical examples from the trenches.
This document summarizes DreamObjects, an object storage platform powered by Ceph. It discusses the hardware used in storage and support nodes, including Intel and AMD processors, RAM, disks, and networking components. The document also provides details on Ceph configuration including replication, CRUSH mapping, OSD configuration, and application tuning. Monitoring tools discussed include Chef, pdsh, Sensu, collectd, graphite, logstash, Jenkins and future plans.
OpenStack is an open source cloud computing platform that can manage large networks of virtual machines and physical servers. It uses a distributed architecture with components like Nova (compute), Swift (object storage), Cinder (block storage), and Quantum (networking). OpenStack has been successful due to its scalability, support for multiple hypervisors including Hyper-V, and compatibility with popular programming languages like Python. While OpenStack is best suited for large public and private clouds, its complex installation and lack of unified deployment tools can present challenges, especially for small to mid-sized clouds.
Cost Effectively Run Multiple Oracle Database Copies at Scale - NetApp
Scaling multiple databases with a single legacy storage system works well from a cost perspective, but workload conflicts and hardware contention make these solutions an unattractive choice for anything but low-performance applications.
Move Auth, Policy, and Resilience to the Platform - Christian Posta
Developers' time is the most crucial resource in an enterprise IT organization. Too much time is spent on undifferentiated heavy lifting, and in the world of APIs and microservices much of that is spent on non-functional, cross-cutting networking requirements like security, observability, and resilience.
As organizations reconcile their DevOps practices into Platform Engineering, tools like Istio help alleviate developer pain. In this talk we dig into what that pain looks like, how much it costs, and how Istio has solved these concerns by examining three real-life use cases. As this space continues to emerge, and innovation has not slowed, we will also discuss the recently announced Istio sidecar-less mode which significantly reduces the hurdles to adopt Istio within Kubernetes or outside Kubernetes.
Comparing Sidecar-less Service Mesh from Cilium and Istio - Christian Posta
Service mesh is a powerful pattern for implementing strong zero-trust networking practices, introducing better network observability, and allowing for more fine-grained traffic control. Up until now, the sidecar pattern was used to implement service-mesh capability but as the technology matures, a new pattern has emerged: sidecarless service mesh. Two prominent open-source networking projects, Cilium and Istio, have implemented a sidecar-free approach to service mesh but they both make interesting design decisions and tradeoffs. In this talk we review the architecture of both, focusing on the pros and cons of implementations such as mutual authentication, ingress, and observability.
Understanding Wireguard, TLS and Workload Identity - Christian Posta
Zero Trust Networking has become a standard marketing buzzword, but the underlying principles are critical for modern microservice-style architectures. Authentication, authorization, policy, etc. can be difficult to implement between services in a maintainable way. Google invented its own transparent encryption and authorization protocol called "ALTS" back in 2007 to serve the application layer of Google's Borg workload scheduler, but we don't see it used outside Google.
In this webinar we look at existing technology like TLS and newcomer Wireguard and see how these technologies come together to provide a secure foundation for workload identity and modern service-to-service networking.
Istio ambient mesh uses a sidecar-less data plane that focuses on ease of operations, incremental adoption, and separation of security boundaries for applications and mesh infrastructure.
In this webinar, we'll explore:
- The forces of modernization and compliance pressures,
- How Zero Trust Architecture (ZTA) can help, and
- How Istio ambient mesh lowers the barrier for establishing the properties necessary to achieve Zero Trust and compliance
The document discusses Cilium and Istio with Gloo Mesh. It provides an overview of Gloo Mesh, an enterprise service mesh for multi-cluster, cross-cluster and hybrid environments based on upstream Istio. Gloo Mesh focuses on ease of use, powerful best practices built in, security, and extensibility. It allows for consistent API for multi-cluster north-south and east-west policy, team tenancy with service mesh as a service, and driving everything through GitOps.
This document discusses service mesh patterns for connecting microservices across multiple clusters. It describes using Envoy proxy to provide service discovery, load balancing, security and resiliency. Patterns are presented for connecting services across clusters with flat, controlled or separate networks. Managing connectivity across clusters can increase operator burden. Gloo Mesh is presented as a way to simplify management across multiple clusters with a centralized control plane.
Multicluster Kubernetes and Service Mesh Patterns - Christian Posta
Building applications for cloud-native infrastructure that are resilient, scalable, secure, and meet compliance and IT objectives gets complicated. Another wrinkle for the organizations with which we work is the fact they need to run across a hybrid deployment footprint, not just Kubernetes. At Solo.io, we build application networking technology on Envoy Proxy that helps solve difficult multi-deployment, multi-cluster, and even multi-mesh problems.
In this webinar, we’re going to explore different options and patterns for building secure, scalable, resilient applications using technology like Kubernetes and Service Mesh without leaving behind existing IT investments. We’ll see why and when to use multi-cluster topologies, how to build for high availability and team autonomy, and solve for things like service discovery, identity federation, traffic routing, and access control.
Cloud-Native Application Debugging with Envoy and Service Mesh - Christian Posta
Microservices have been great for accelerating the software innovation and delivery, but they also present new challenges, especially as abstractions and automated orchestration at every layer make pinpointing the issue seem like walking around a maze with a blindfold. Existing tools weren’t designed for distributed environments, and the new tools need to consider how to leverage these abstraction layers to better observe, test, and troubleshoot issues.
Christian Posta walks you through Envoy Proxy and service mesh architecture for L7 data plane, the key features in Envoy that can help in debugging and troubleshooting, chaos engineering as a testing methodology for microservices, how to approach a testing and debugging framework for microservices, and new open source tools that address these areas. You’ll explore a workflow to discover and resolve microservices issues, including injecting experiments for stress testing the applications, gathering requests in flight, recording and replaying them, and debugging them step by step without affecting production traffic.
Kubernetes Ingress to Service Mesh (and beyond!) - Christian Posta
Kubernetes users need to allow traffic to flow into and within the cluster. Treating the application traffic separately from the business logic presents new possibilities in how service-to-service traffic is served, controlled, and observed, and provides a transition to intra-cluster networking like service mesh. With microservices, there is a concept of both north/south traffic (incoming requests from end users to the cluster) and east/west communication (intra-cluster, between the services). In this talk we will explain how Envoy Proxy works in Kubernetes as a proxy for both of these traffic directions and how it can be leveraged to do things like traffic shaping and security, and to integrate north/south with east/west behavior.
Christian Posta (@christianposta) is Global Field CTO at Solo.io, former Chief Architect at Red Hat, and well known in the community for being an author (Istio in Action, Manning, Istio Service Mesh, O'Reilly 2018, Microservices for Java Developers, O’Reilly 2016), frequent blogger, speaker, open-source enthusiast and committer on various open-source projects including Istio, Kubernetes, and many others. Christian has spent time at both enterprises as well as web-scale companies and now helps companies create and deploy large-scale, cloud-native resilient, distributed architectures. He enjoys mentoring, training and leading teams to be successful with distributed systems concepts, microservices, devops, and cloud-native application design.
The exploration of service mesh for any organization comes with some serious questions. What data plane should I use? How does this tie in with my existing API infrastructure? What kind of overhead do sidecar proxies demand? As I've seen in my work with various organizations over the years "if you have a successful microservices deployment, then you have a service mesh whether it’s explicitly optimized as one or not."
In this talk, we seek to understand the role of the data plane and how to pick the right component for the problem context. We start off by establishing the spectrum of data-plane components from shared gateways to in-code libraries with service proxies being along that spectrum. We clearly identify which scenarios would benefit from which part of the data-plane spectrum and show how modern service meshes including Istio, Linkerd, and Consul enable these optimizations.
Deep Dive: Building external auth plugins for Gloo Enterprise - Christian Posta
Using the plugin framework for Ext. Auth Service in Gloo Enterprise, we can build any custom AuthN/AuthZ plugins to handle security requirements not provided out of the box.
Role of edge gateways in relation to service mesh adoption - Christian Posta
API Gateways provide functionality like rate limiting, authentication, request routing, reporting, and more. If you’ve been following the rise in service-mesh technologies, you’ll notice there is a lot of overlap with API Gateways when solving some of the challenges of microservices. If service mesh can solve these same problems, you may wonder whether you really need a dedicated API Gateway solution?
The reality is there is some nuance in the problems solved at the edge (API Gateway) compared to service-to-service communication (service mesh) within a cluster. But with the evolution of cluster-deployment patterns, these nuances are becoming less important. What’s more important is that the API Gateway is evolving to live at a layer above service mesh and not directly overlapping with it. In other words, API Gateways are evolving to solve application-level concerns like aggregation, transformation, and deeper context and content-based routing as well as fitting into a more self-service, GitOps style workflow.
In this talk we put aside the “API Gateway” infrastructure as we know it today and go back to first principles with the “API Gateway pattern” and revisit the real problems we’re trying to solve. Then we’ll discuss pros and cons of alternative ways to implement the API Gateway pattern and finally look at open source projects like Envoy, Kubernetes, and GraphQL to see how the “API Gateway pattern” actually becomes the API for our applications while coexisting nicely with a service mesh (if you adopt a service mesh).
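The aggregation concern described above can be sketched in a few lines: one client-facing gateway call fans out to several backends and merges the results into a single document. The service names below are hypothetical placeholders, not any real Gloo, Envoy, or GraphQL API.

```python
# Hypothetical backend services, each owning one slice of the data.
def product_service(pid):
    return {"id": pid, "name": "widget"}

def price_service(pid):
    return {"price_cents": 1999}

def reviews_service(pid):
    return {"reviews": ["great", "ok"]}

def product_page(pid):
    """API-gateway-style aggregation: one edge request fans out to three
    backends and returns a single merged payload to the client."""
    merged = {}
    for svc in (product_service, price_service, reviews_service):
        merged.update(svc(pid))
    return merged

page = product_page(7)
# page == {"id": 7, "name": "widget", "price_cents": 1999,
#          "reviews": ["great", "ok"]}
```

This is the application-level concern the talk argues belongs above the service mesh: the mesh handles transport between the backends, while the gateway shapes the API the client sees.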
Navigating the service mesh landscape with Istio, Consul Connect, and Linkerd - Christian Posta
The document discusses various service mesh options including Linkerd, Consul Connect, Istio, and AWS App Mesh. It provides an overview of each solution, describing their key features and strengths/opportunities. It emphasizes that the service mesh approach is useful for managing inter-service communication and that implementations are still evolving. It recommends starting simply and iteratively adopting capabilities to match needs.
Distributed microservices introduce new challenges: failure modes are harder to anticipate and resolve. In this session, we present a “Chaos Debugging” framework enabled by three open source projects: Gloo Shot, Squash, and Loop to help you increase your microservices’ “immunity” to issues.
Gloo Shot integrates with any service mesh to implement advanced, realistic chaos experiments. Squash connects powerful and mature debuggers (gdb, dlv, java debugging) to your microservices while they run in Kubernetes. Loop extends the capability of your service mesh to observe your application and record full transactions for sandboxed replay and debugging.
Come to this demo-heavy talk to see how together, Squash, Gloo Shot, and Loop allow you to trigger, replay, and investigate failure modes of your microservices in a language agnostic and efficient manner without requiring any changes to your code.
Leveraging Envoy Proxy and GraphQL to Lower the Risk of Monolith to Microserv... - Christian Posta
If you have an existing Java monolith, you know you must take care when making changes to it or altering it in any negative way. Often these monoliths are very valuable to the business and generate a lot of revenue. At the same time, since it's difficult to make changes to the monolith, it's desirable to move to a microservices architecture. Unfortunately you cannot just do a big-bang migration to a greenfield architecture and will have to incrementally adopt microservices. In this talk, we'll look at using the Gloo proxy, which is based on Envoy Proxy, and GraphQL to do surgical, function-level traffic control and API aggregation to safely migrate your monolith to microservices and serverless functions.
Service-mesh options with Linkerd, Consul, Istio and AWS AppMesh - Christian Posta
Service mesh abstracts the network from developers to solve three main pain points:
How do services communicate securely with one another?
How can services implement network resilience?
When things go wrong, can we identify what happened and why?
Service mesh implementations usually follow a similar architecture: traffic flows through control points between services (usually service proxies deployed as sidecar processes) while an out-of-band set of nodes is responsible for defining the behavior and management of the control points. This loosely breaks out into an architecture of a "data plane" through which requests flow and a "control plane" for managing a service mesh.
Different service mesh implementations use different data planes depending on their use cases and familiarity with particular technology. The control plane implementations vary between service-mesh implementations as well. In this talk, we'll take a look at three different control plane implementations with Istio, Linkerd and Consul, their strengths, and their specific tradeoffs to see how they chose to solve each of the three pain points from above. We can use this information to make choices about a service mesh or to inform our journey if we choose to build a control plane ourselves.
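The data-plane/control-plane split described above can be modeled minimally: a control plane pushes policy out-of-band to sidecar proxies, and every request flows through a proxy that applies that policy (here, retries for network resilience). This is an illustrative sketch only, not Istio, Linkerd, or Consul code.

```python
class ControlPlane:
    """Out-of-band nodes: hold desired policy and push it to the data plane."""
    def __init__(self):
        self.policy = {}
        self.proxies = []
    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.configure(self.policy)
    def set_policy(self, policy):
        self.policy = policy
        for p in self.proxies:      # config distribution, not request traffic
            p.configure(policy)

class SidecarProxy:
    """Data-plane element: every service request flows through it."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.retries = 0
    def configure(self, policy):
        self.retries = policy.get("retries", 0)
    def call(self, request):
        attempts = self.retries + 1
        for i in range(attempts):
            try:
                return self.upstream(request)
            except ConnectionError:
                if i == attempts - 1:
                    raise               # retries exhausted; surface the failure

failures = {"n": 2}
def flaky_service(req):
    """Upstream that fails twice before recovering."""
    if failures["n"] > 0:
        failures["n"] -= 1
        raise ConnectionError("upstream reset")
    return f"ok:{req}"

cp = ControlPlane()
proxy = SidecarProxy(flaky_service)
cp.register(proxy)
cp.set_policy({"retries": 3})
result = proxy.call("GET /")
# result == "ok:GET /" — the application never sees the two transient failures
```

The key property the sketch shows is the separation of concerns: the application calls the proxy as if it were the service, while resilience behavior is configured centrally and changed without touching application code.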
The document summarizes the new features of Istio 1.1, an open-source service mesh. Some key highlights include improved performance and scalability, namespace isolation, multi-cluster capabilities, easier installation with Helm, and locality-aware load balancing. A new Sidecar resource was introduced to improve performance by configuring resources for individual proxies. The presentation demonstrates performance improvements with the Sidecar resource and highlights additional functionality in Istio like traffic control and metrics collection.
API Gateways are going through an identity crisis - Christian Posta
KubeCon NA 2018: Evolution of Integration and Microservices with Service Mesh... - Christian Posta
Cloud-native describes a way of building applications on a cloud platform to iteratively discover and deliver business value. We now have access to a lot of similar technology that the large internet companies pioneered and used to their advantage to dominate their respective markets. What challenges arise when we start building applications to take advantage of this new technology?
In this talk we'll explore the role of service meshes when building distributed systems, why they make sense, and where they don't. We will look at a class of problems that crops up which a service mesh cannot solve, but that frameworks and even new programming languages like Ballerina are aiming to solve.
Service-mesh technology promises to deliver a lot of value to a cloud-native application, but it doesn't come without some hype. In this talk, we'll look at what is a "service mesh", how it compares to similar technology (Netflix OSS, API Management, ESBs, etc) and what options for service mesh exist today.
MYIR Product Brochure - A Global Provider of Embedded SOMs & Solutions - Linda Zhang
This brochure gives introduction of MYIR Electronics company and MYIR's products and services.
MYIR Electronics Limited (MYIR for short), established in 2011, is a global provider of embedded System-On-Modules (SOMs) and comprehensive solutions based on various architectures such as ARM, FPGA, RISC-V, and AI. We cater to customers' needs for large-scale production, offering customized design, industry-specific application solutions, and one-stop OEM services.
MYIR, recognized as a national high-tech enterprise, is also listed among the "Specialized and Special New" Enterprises in Shenzhen, China. Our core belief is that "our success stems from our customers' success," and we embrace the philosophy of "Make Your Idea Real, then My Idea Realizing!"
Test Management, Chapter 5 of the ISTQB Foundation syllabus. Topics covered are Test Organization, Test Planning and Estimation, Test Monitoring and Control, Test Execution Schedule, Test Strategy, Risk Management, and Defect Management.
Distributed System Performance Troubleshooting Like You’ve Been Doing it for ... - ScyllaDB
Troubleshooting performance issues across distributed systems can be intimidating if you don’t know where to start, and it’s even harder when the system is running on hundreds or thousands of nodes. We’re well past the point of logging into random nodes and poking around hoping we spot the problem. It’s critical to have a methodology to follow as well as a deep understanding of the tools that are available to help you prove (or disprove) your mental model.
In this session, we’ll explore how to go about diagnosing performance problems you might run into, and teach you the tools and process for getting to the bottom of any issue, quickly -- even when it’s one of the biggest distributed database deployments on the planet.
Dev Dives: Mining your data with AI-powered Continuous DiscoveryUiPathCommunity
Want to learn how AI and Continuous Discovery can uncover impactful automation opportunities? Watch this webinar to find out more about UiPath Discovery products!
Watch this session and:
👉 See the power of UiPath Discovery products, including Process Mining, Task Mining, Communications Mining, and Automation Hub
👉 Watch the demo of how to leverage system data, desktop data, or unstructured communications data to gain deeper understanding of existing processes
👉 Learn how you can benefit from each of the discovery products as an Automation Developer
🗣 Speakers:
Jyoti Raghav, Principal Technical Enablement Engineer @UiPath
Anja le Clercq, Principal Technical Enablement Engineer @UiPath
⏩ Register for our upcoming Dev Dives July session: Boosting Tester Productivity with Coded Automation and Autopilot™
👉 Link: https://bit.ly/Dev_Dives_July
This session was streamed live on June 27, 2024.
Check out all our upcoming Dev Dives 2024 sessions at:
🚩 https://bit.ly/Dev_Dives_2024
This slide deck is a deep dive the Salesforce latest release - Summer 24, by the famous Stephen Stanley. He has examined the release notes very carefully, and summarised them for the Wellington Salesforce user group, virtual meeting June 27 2024.
Quantum Communications Q&A with Gemini LLM. These are based on Shannon's Noisy channel Theorem and offers how the classical theory applies to the quantum world.
Coordinate Systems in FME 101 - Webinar SlidesSafe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datams and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
Blockchain and Cyber Defense Strategies in new genre timesanupriti
Explore robust defense strategies at the intersection of blockchain technology and cybersecurity. This presentation delves into proactive measures and innovative approaches to safeguarding blockchain networks against evolving cyber threats. Discover how secure blockchain implementations can enhance resilience, protect data integrity, and ensure trust in digital transactions. Gain insights into cutting-edge security protocols and best practices essential for mitigating risks in the blockchain ecosystem.
In this follow-up session on knowledge and prompt engineering, we will explore structured prompting, chain of thought prompting, iterative prompting, prompt optimization, emotional language prompts, and the inclusion of user signals and industry-specific data to enhance LLM performance.
Join EIS Founder & CEO Seth Earley and special guest Nick Usborne, Copywriter, Trainer, and Speaker, as they delve into these methodologies to improve AI-driven knowledge processes for employees and customers alike.
The presentation will delve into the ASIMOV project, a novel initiative that leverages Retrieval-Augmented Generation (RAG) to provide precise, domain-specific assistance to telecommunications engineers and technicians. The session will focus on the unique capabilities of Milvus, the chosen vector database for the project, and its advantages over other vector databases.
Attending this session will give you a deeper understanding of the potential of RAG and Milvus DB in telecommunications engineering. You will learn how to address common challenges in the field and enhance the efficiency of their operations. The session will equip you with the knowledge to make informed decisions about the choice of vector databases, and how best to use them for your use-cases
GDG Cloud Southlake #34: Neatsun Ziv: Automating AppsecJames Anderson
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams, employing DevSecOps principles to integrate security throughout the development
AI_dev Europe 2024 - From OpenAI to Opensource AIRaphaël Semeteys
Navigating Between Commercial Ownership and Collaborative Openness
This presentation explores the evolution of generative AI, highlighting the trajectories of various models such as GPT-4, and examining the dynamics between commercial interests and the ethics of open collaboration. We offer an in-depth analysis of the levels of openness of different language models, assessing various components and aspects, and exploring how the (de)centralization of computing power and technology could shape the future of AI research and development. Additionally, we explore concrete examples like LLaMA and its descendants, as well as other open and collaborative projects, which illustrate the diversity and creativity in the field, while navigating the complex waters of intellectual property and licensing.
The document discusses fundamentals of software testing including definitions of testing, why testing is necessary, seven testing principles, and the test process. It describes the test process as consisting of test planning, monitoring and control, analysis, design, implementation, execution, and completion. It also outlines the typical work products created during each phase of the test process.
How to Improve Your Ability to Solve Complex Performance ProblemsScyllaDB
This talk is really about problem solving. It’s about how we think about problems and how we resolve those problems in a deeply technical context. The main goal of the talk is the relay the lessons learned from a couple of decades working with and observing some of the best performance troubleshooters in the world.
The talk will be broken into 3 main parts.
1. Explain the basic process we must go through to solve a complex performance problem
2. Discuss some of the main factors that can inhibit our efforts
3. Discuss some of the techniques we can apply to improve our chances, including an almost fool proof method to reach a successful outcome
Specific technical examples from large enterprise customers using relational databases (Oracle primarily) will be used to illustrate the concepts.
Transcript: Details of description part II: Describing images in practice - T...BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and slides: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
2. Agenda for the night
• Apache ActiveMQ
• New features!
• Demos
• Apache ActiveMQ Apollo
3. Your speaker
Christian Posta
Blog: http://christianposta.com/blog
Email: ceposta@apache.org
Twitter: @christianposta
• Senior Consultant and Architect at Red Hat (formerly FuseSource)
• Committer at the Apache Software Foundation: ActiveMQ, Apollo
• PMC member at ActiveMQ
• Author: Essential Camel Components DZone Refcard
• Contributor to Apache Camel
5. Apache ActiveMQ
• The most widely used open-source messaging broker
• Highly configurable
• Friendly license (no license fees!)
• Vibrant community (TLP)
• Backbone of top enterprises in retail, e-retail, financial services, shipping, and many others!
6. ActiveMQ Features
• High performance
• High availability
• Light-weight
• Multi-protocol
• JMS compliant
• Backed by Red Hat!
7. Breadth of connectivity
• TCP, NIO
• UDP
• SSL, SSL+NIO
• VM
• HTTP
• WebSockets
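As a rough sketch of how these transports are typically enabled, a broker's activemq.xml declares one transportConnector per protocol. The port numbers below are placeholders (61616 is the conventional OpenWire default; the rest should be adjusted for your install):

```xml
<!-- Sketch of a transportConnectors section in conf/activemq.xml.
     Ports other than 61616 are illustrative placeholders. -->
<transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    <transportConnector name="nio"      uri="nio://0.0.0.0:61618"/>
    <transportConnector name="ssl"      uri="ssl://0.0.0.0:61617"/>
    <transportConnector name="http"     uri="http://0.0.0.0:8080"/>
    <transportConnector name="ws"       uri="ws://0.0.0.0:61614"/>
</transportConnectors>
```

Each connector name is just a label; the URI scheme selects the transport implementation.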
11. Quick recap of the 5.8.0 release
• AMQP 1.0 protocol
• REST management with Jolokia
• Pure master/slave deprecated and removed
• Java 7 support
• Split up client libs, modularized core packages
12. ActiveMQ 5.9 on its way!
• Fall 2013
• New, faster, default file-based store
• Persistence store HA replication
• New management console, HawtIO
• New “broker:” Camel component
• Other new features
14. Default in 5.9.0 – LevelDB
• Hardened
• JNI (native) and Java versions
• Java version packaged by default: https://github.com/dain/leveldb
• Native version can be downloaded from http://code.google.com/p/leveldb/downloads/list
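As a minimal sketch, switching a broker to the LevelDB store is a one-element change in activemq.xml (the directory path here is a placeholder):

```xml
<!-- Sketch: enabling the LevelDB persistence adapter in conf/activemq.xml. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
    <persistenceAdapter>
        <levelDB directory="activemq-data"/>
    </persistenceAdapter>
</broker>
```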
15. File-based stores
• AMQ Message Store (deprecated!!! Don’t use this one!)
• KahaDB
• LevelDB
16. File-based stores
• Journal / TX Log
• Indexes
• Recovery Logs
(Diagram: an index referencing entries in the journal / redo log)
17. KahaDB
• Homegrown
• Optimized for messaging
• TX log, WAL log, indexes
• B-tree based indexes
• Known bottlenecks
18. LevelDB
• Google NoSQL key-value DB: http://code.google.com/p/leveldb/
• Based on BigTable
• Used by Chrome, Riak, IndexedDB
• No relational model, queries, or indexes
• Stores keys in sorted order
19. LevelDB cont’d
• Underlying data structures are optimized for sequential access and lots of writes: http://en.wikipedia.org/wiki/Log-structured_merge-tree
• Concurrent reads
• Pause-less log cleanup
• Built-in compression
• JMX
21. LevelDB Store
• Makes for a very fast store index!
• Fewer entries in the index
• Composite sends only store the message once
• HDFS support!
• Replication!?...
28. JDBC Master-Slave
• Extreme reliability – but not as fast
• Recommended if already using an enterprise database
• No restriction on the number of slaves
• Simple configuration
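A sketch of what that simple configuration looks like: point the persistence adapter at a JDBC datasource. The MySQL datasource below is an illustrative placeholder; any supported database works, and the broker that grabs the database lock becomes master.

```xml
<!-- Sketch: JDBC master/slave persistence in conf/activemq.xml.
     The MySQL connection details are placeholders. -->
<persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#mysql-ds"/>
</persistenceAdapter>

<bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://localhost/activemq"/>
    <property name="username" value="activemq"/>
    <property name="password" value="activemq"/>
</bean>
```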
30. Shared File System M/S
• Recommended if you have a SAN or DRBD
• Ensure file locking works – and times out – NFSv4 is good! https://issues.apache.org/jira/browse/AMQ-4378
• No restriction on the number of slaves
• Simple configuration
• Best performance
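A minimal sketch of shared file system master/slave: every broker in the group points its store at the same shared directory (the mount path below is a placeholder), and the broker that acquires the file lock becomes master.

```xml
<!-- Sketch: all brokers use the same shared directory; the file lock
     on that directory elects the master. Path is a placeholder. -->
<persistenceAdapter>
    <kahaDB directory="/mnt/shared/activemq/kahadb"/>
</persistenceAdapter>
```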
31. Replicated LevelDB Master Slave – NEW!
(Diagram: a ZooKeeper cluster coordinating one master broker and two slave brokers, each with its own local file system)
32. Replicated LevelDB Master Slave
(Diagram: a client connected to the master broker, with a ZooKeeper cluster coordinating the master and two slaves, each on its own local file system)
33. Replicated LevelDB Master Slave
• Requires an HA ZooKeeper cluster
• No single point of failure
• Dynamic number of slaves
• Sync replication options:
  • Local mem/disk
  • Remote mem/disk
  • Quorum mem/disk
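The options above map onto the replicatedLevelDB element's attributes. In this sketch, the ZooKeeper addresses and hostname are placeholders; replicas="3" means one master plus two slaves, and sync="quorum_disk" makes a quorum of nodes flush to disk before a write is acknowledged:

```xml
<!-- Sketch: replicated LevelDB store (5.9+). zkAddress and hostname
     are placeholders for your environment. -->
<persistenceAdapter>
    <replicatedLevelDB
        directory="activemq-data"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="zk1:2181,zk2:2181,zk3:2181"
        zkPath="/activemq/leveldb-stores"
        hostname="broker1.example.com"
        sync="quorum_disk"/>
</persistenceAdapter>
```

Weaker sync settings (e.g. quorum_mem) trade durability for lower write latency.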
38. HawtIO – http://hawt.io
• Manage integration infrastructure from one dashboard
• Customizable
• Plugins:
  • Camel
  • ActiveMQ
  • Fabric
  • Infinispan
  • Tomcat
  • Many others!
• Visualizations
• One dashboard to rule them all
39. HawtIO – http://hawt.io
• Default ActiveMQ dashboard
• Visualization of health
• Access to operations to make changes
• Move messages from the DLQ to their original destinations
• Visualize Camel routes deployed along with the broker
• Send messages
• Real-time metrics
46. Broker component
• Use Apache Camel routes
• Creates destination interceptors at runtime
• Embed in camel.xml and deploy with ActiveMQ
• More powerful than existing interceptors (use when needed)
• http://rajdavies.blogspot.com/2013/09/apache-camel-broker-component-for.html?tw
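As a sketch of the idea, a route using the "broker:" component can be dropped into the camel.xml shipped with the broker. The queue names and header here are illustrative; the point is that the route intercepts messages inside the broker itself rather than over a client connection:

```xml
<!-- Sketch: a broker-side Camel route in the ActiveMQ camel.xml.
     Queue names and the "processed" header are placeholders. -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="broker:queue:orders.incoming"/>
        <setHeader headerName="processed">
            <constant>true</constant>
        </setHeader>
        <to uri="broker:queue:orders.audited"/>
    </route>
</camelContext>
```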
Say some things about FuseSource + Red Hat here… and what I do… So I started out working for FuseSource about a year and a half ago. FuseSource was an open-source subscription company built around the integration projects at Apache, specifically Apache ActiveMQ, Camel, ServiceMix, and CXF. These projects are best-of-breed, highly adopted by community users, and used for mission-critical infrastructure when building out SOA and other distributed integrations.

The thing is, big companies that invest millions of dollars into their businesses aren't willing to accept a mailing list and IRC for production support; when things go badly wrong, they need to be able to rely on strong partners who can help them out. That's where FuseSource fit into the picture. It was started by the people who co-founded the projects, and they built up an amazing set of support engineers and consultancy teams. Along the way, they hired many of the committers on each of the respective projects and put together professional documentation, on-site and virtual training, an annual conference devoted specifically to these technologies, and, most importantly, support subscriptions for both production and development support.

We were officially welcomed into the JBoss Red Hat family almost exactly a year ago, and the spirit of open-source integration and SOA lives on under the Red Hat umbrella, complementing the existing JBoss offerings including EAP, Drools/BRMS, jBPM, etc.
What is ActiveMQ… how would I explain what that is? A message broker: it moves messages from a producer to a consumer. It takes responsibility for delivering a self-contained piece of data between two clients. The producing client doesn't have to know key details usually found in traditional RPC: Where is the client? Is the client ready to handle a message? What communication protocol can the client support? A message broker abstracts away all of those details to make the programming model conceptually easy: a producer is responsible for producing data, but how it gets to where it needs to go is delegated to another entity. This is beneficial because it keeps the producer simpler, keeps the interaction simple (from A to B), decouples producer and consumer, and many other things. High level: it relieves the producer of responsibility for delivering messages. Low level: it takes a piece of data and tries to deliver it as fast and reliably as possible.

Highly configurable: the swiss army knife of messaging, because messaging use cases vary so widely. What type of messaging, how many consumers/producers, what to do in failure scenarios, what level of throughput or latency you need, what type of hardware and OS you are running on, clustering, HA, reliable/unreliable networks, etc. There is no one solution that fits all.

Vibrant community: a very active mailing list (responses usually within 15 minutes), JIRAs, bug fixes, new features, books. Users include Zappos, UPS, Home Depot, Walmart, Harris Corp, Yahoo, FAA, SAA, Ticketmaster, GM, IHG, Sabre, CERN, Mars rover projects, Wells Fargo.
Some of the main features of ActiveMQ, and reasons so many people use it:

High performance: moving messages asynchronously from a producer to eligible consumers as quickly and efficiently as possible.

High availability: getting messages from A to B is easy when everything is working correctly, but as you probably already know, distributed systems have to cope with failures, because failures and unplanned disruptions are basically the norm. The larger your distributed system's network, the higher the chance something will go wrong.

Light-weight: of course these all sound like buzzwords, because they're so overused and abused by people interested in selling you stuff, but ActiveMQ is truly light-weight in that it can be deployed standalone, in any J2EE container if desired, or even embedded within your own Java application. Other messaging brokers, like MQ Series or even other open-source alternatives like Rabbit, cannot be embedded into your Java app. But why would you do that? Well, just as you can build highly performant, scalable distributed systems, you can architect your app internally to use asynchronous message passing, with reliability built in and options for extensibility when needed via broker networks, routing with Camel, etc.
AMQP (1.0): a binary format originally from JPMorgan Chase. It originally targeted wire-level and messaging-semantic interoperability between messaging vendors, but with version 1.0 it dropped some of the semantics of exchanges and queuing and is essentially a wire-level protocol. ActiveMQ uses the popular Proton project, the same underpinnings as Apache Qpid.

MQTT (3.1): a super-compact binary protocol developed by IBM and others, intended exclusively for reliable pub-sub from limited devices on highly unreliable networks. Think small monitoring devices like gas meters, pacemakers, medical sensors, car sensors, etc. The idea is to allow all of these sensors out in the field collecting data, known as the "internet of things," to reliably transmit their data through messaging.

OpenWire (v1-10): the original binary wire-level protocol, intended to be an open standard. It's used to implement the JMS API in a highly performant and feature-rich way. Used by default unless you explicitly specify another protocol. Backward compatible (as discussed above) with previous versions.

STOMP (1.0, 1.1, 1.2): the Simple Text Oriented Messaging Protocol, whose main goals are simplicity and basic connectivity: get a client talking to the messaging broker regardless of what platform or programming language is used. As a result, clients exist in Python, Ruby, Scala, Perl, C/C++, .NET, Delphi, JavaScript, Objective-C, PHP, Erlang, Go, etc. The point is, it's a text protocol that's easy to implement, so many languages already have clients and are using it.
Async integration: allowing apps to do what they are focused on doing without being tied to the dependencies of other systems just because they need to integrate. For example, App A shouldn't have to tie up threads and resources trying to communicate with App B synchronously, or when App B is not available. The idea is to let App A send its message (data) off knowing it will get to App B when App B is ready to handle it, without taking responsibility for that data transfer. That way App A can continue the processing it needs to do on its side, and if it needs to be aware of a response from App B, that response will show up asynchronously and be handled when App A is ready. Using asynchronous processing like this is one important way to get higher throughput and utilization out of your systems. When App A needs to send a piece of data to B, not only does it want to do so quickly and efficiently, it wants some level of guarantee that the data didn't get lost. ActiveMQ plays the part of mediator and broker here, establishing protocols with the producers and consumers to guarantee message delivery even in the face of unplanned failures.

Loose coupling: coupling really should be thought of in terms of levels or degrees of coupling, or better yet, the assumptions made by one system about the other. Assumptions about availability, programming language, data types, network interruptions, etc. are dealt with, and in some cases alleviated, by messaging.

Heterogeneous architectures: reduce the spider web of point-to-point connections and allow apps written in different languages on different platforms, owned by different groups with different upgrade paths and lifecycles, to reliably communicate with each other. http://www.openamq.org/doc:amqp-background
The AMQP 1.0 protocol was first introduced. AMQP originated around 2004-2005 at JPMorgan Chase as a way to "commoditize" messaging platforms and achieve interoperability between clients and servers. The original spec, and still the 0.8 and 0.9 specs, forced certain implementation details onto the different brokers, which severely limited adoption. For the 1.0 release, a lot of the implementation detail was removed, and it's now just a binary wire-level protocol. It remains to be seen what level of "interoperability" can be achieved, but we do support it. ActiveMQ's implementation is powered by the Apache Qpid Proton project, a protocol engine that third parties can use to easily build AMQP clients and servers.

Jolokia is an awesome open-source project that provides an alternative to JSR-160, which specifies the connectors one typically uses for JMX connections. Usually that's done by setting up an MBean server that registers JNDI and RMI endpoints you connect to. Jolokia, on the other hand, is implemented as a Java agent that allows HTTP/REST JSON access to the JMX MBeans. It also provides features like bulk requests, fine-grained security and authorizations, etc.

Pure master/slave has been deprecated, and is now removed and unavailable. This was the original "shared nothing" master/slave, where the master would dispatch to the slave before acking the producer. In this scenario, if the master went down, the slave would go live and take over. One drawback of this approach was the manual intervention needed to restore the original master as a slave: you had to stop the slave, re-sync the master/slave manually, then restart the pair.

We started testing and certifying for Java 7.

We refactored a lot of the packages so they are more cleanly organized and can be included as dependencies more selectively, for example if you wanted to strip the distro down to its most bare, required components, or split out the client libs so you don't need to include the broker code too.
Expected out this fall. The last few bug fixes are being committed as we speak; most large features are already baked and ready for release. But since this is still an open-source, community-driven project, we do it in our spare time and are working as best we can to get it out. We can expect the default store to change from KahaDB, which has served us very well the past couple of years, to a newer, faster store. We have some new options for HA replication, a new management console that has been long overdue, and other features like MQTT over WebSockets that I will briefly mention. And of course, the community as well as our commercial customers have unearthed some obscure use cases that bring to light some bugs, which we quickly squash.
So LevelDB will become the new default store in the 5.9.0 release. Curious whether anyone in this room has played with LevelDB, with or without ActiveMQ? We've already had a lot of adoption of it in the community and among commercial customers, and we've received a lot of feedback. It's been hardened and is now ready for prime time. There are two different implementations: a native version and a pure Java version. The Java version was ported from the native C++ implementation by Dain Sundstrom (a brilliant dude who works at Facebook now). The Java version is what's shipped and enabled by default in the community distro. There is also the native version, available directly from and maintained by Google at the link above. It's written in C++, takes advantage of specific OS features such as AIO, and can be faster in some use cases.
There are effectively two file-based stores currently in ActiveMQ: KahaDB and LevelDB. There is also the original AMQ store, which still lingers in previous versions. It is not supported anymore and should not be used; it's slower, a little more complex than it needs to be, and not even available in the newer releases. The other two provide much better performance.
So KahaDB is an awesome file-based messaging DB. It has served the community very well over the past years, including helping us beat out IBM MQ Series at UPS. It's very fast, has good recovery time on failover, is well understood, and is highly tunable. It was homegrown, written by the original project co-founders to better fit the messaging paradigm: keep as much of the index in memory as possible, batch writes to the disk, and allow tuning of page size, buffers, checkpoints, recovery mechanisms, and more. It's implemented using some of the traditional DB components like B-tree indexes, write-ahead logs, and transaction logs; however, it's highly optimized for writes and deletes, whereas a traditional RDBMS is highly optimized for reads and complex queries. There are known limitations in the implementation, and the bottlenecks that appear when approaching its limits usually show up in the indexes. There are multiple B-tree indexes to help preserve message order, message priority, durable subscriptions, and more. Having to update all these data structures, no matter how creatively, using B-trees becomes expensive at the upper limits of performance. So, actually, when proving out theories in Apache Apollo, which I'll talk about next, we found LevelDB to fit our use cases very well.
LevelDB, as mentioned, is a NoSQL key-value database from Google, inspired by their work on MapReduce, BigTable, and big-data storage algorithms. It's currently used in Chrome, Riak, and others.
So the LevelDB-based store still uses WAL and TX logs, but we now use the LevelDB engine to index the TX log. So now we rely on log-structured merge trees, not B-trees, and Google's algorithms for sorting and compressing things. Since the structures are sorted, they are very friendly to sequential-access systems like messaging. They also allow very good concurrency, including concurrent reads, whereas KahaDB had lots of locks and contention on reads and writes. We also use fewer entries per message and destination, which itself reduces contention at the index. It also comes with features like JMX out of the box.
So what do "enqueue," "drained enqueue," and "loaded enqueue" mean? Enqueues while there are no consumers (the queue starts empty); enqueues while the queue is being drained (it has consumers); and enqueues when the queue is very large (millions of messages). In benchmarks against KahaDB, we see that LevelDB outperforms it quite a bit. This is using 20-byte message bodies and async sends. We use 20-byte bodies here to really push the storage engine. We don't use larger bodies because then the data is naturally batched and would give better performance; we choose use cases and payloads that inherently require the store to figure out how to properly batch and deal with the payload.
There is also support for HDFS! Basically, the store keeps a local copy as well as uploading to HDFS, and syncs anything missed at regular checkpoints. When a master fails and a slave takes over, the slave downloads the last known SST files with the latest indexes; then normal recovery kicks in. Note that for this setup, master/slave elections are started manually or coordinated externally with something like ZooKeeper. Pauseless log cleanup: KahaDB had periodic cleanup intervals during which log operations were suspended; this isn't the case with the LevelDB indexes and cleanup. There is also better handling of composite sends, like virtual topics, the new JMS 2.0 durable subscribers, or fanout-type sends to multiple destinations.
What is HA? High availability… but people have different notions of what HA is. You can have two basic types with ActiveMQ. #1: the messages themselves get high availability. In this case, you have producers and consumers using a broker that must deliver their messages even in the case of faults; so if a producer sends to broker A, then broker A guarantees delivery to consumers. #2: clients must be able to connect and send to a broker, regardless of past messages, and with the most minimal of delay. We will be talking about the first kind, where clients must be able to eventually connect to a broker and messages are guaranteed to be delivered.
Note these are the out-of-the-box configs for HA… but consider that a network of brokers can play a role in HA as well.
The high-availability features in ActiveMQ depend on the client participating and being able to fail over to the slave brokers when connectivity to the master has been interrupted. To do that, you can use the failover transport, which takes URIs for the brokers participating in the HA cluster. Failover is supported out of the box for the Java, .NET, and C++ clients. In this case, if the master goes away, the client will attempt to connect to the slave. There are parameters for prioritizing the backup (so that the client goes back to the master if the master becomes available again), for delayed or exponential backoff, for randomizing the reconnect attempts, etc. You can also rely on different discovery mechanisms for locating the brokers in the cluster. You can use Fuse Fabric, which under the covers uses ZooKeeper, to locate masters and slaves.
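The `failover:(...)` URI syntax above can be illustrated with a small helper. The helper class itself is hypothetical, written just for this sketch; only the resulting URI format, and options like `randomize=false`, come from the failover transport itself.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative helper that assembles a failover transport URI from a
// list of broker URIs plus query options. Not part of ActiveMQ's API.
public class FailoverUri {
    public static String build(List<String> brokers, Map<String, String> options) {
        StringBuilder sb = new StringBuilder("failover:(");
        sb.append(String.join(",", brokers)).append(")");
        if (!options.isEmpty()) {
            List<String> opts = new ArrayList<>();
            for (Map.Entry<String, String> e : options.entrySet())
                opts.add(e.getKey() + "=" + e.getValue());
            sb.append("?").append(String.join("&", opts));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("randomize", "false"); // always try the master first
        String uri = build(Arrays.asList("tcp://master:61616", "tcp://slave:61616"), opts);
        System.out.println(uri);
        // failover:(tcp://master:61616,tcp://slave:61616)?randomize=false
    }
}
```

A JMS client would hand a URI like this to its connection factory, and the transport layer then handles reconnect, backoff, and broker selection transparently.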
The general pattern is for slaves to be in warm-standby mode, so that if a master gets hit by a truck, a slave will be elected master and the client will rely on its failover protocol to detect which broker is now the master and connect to that one. Transactions that are in progress and have not been committed are replayed upon reconnection. The HA scenarios shown above currently rely on some kind of shared backing store, like NFS, SAN, or an RDBMS.
Simple to configure, easy to understand; uses DB table locks or lease locks. The master grabs the exclusive lock first, and all slaves sit idle until the master has relinquished the lock. You can have as many idle slave brokers listening for the lock as you like; the first to get the lock when the master goes down is elected master. We've seen this in enterprises where there is good in-house expertise in RDBMSs and they are comfortable administering the DB as the SPOF and creating redundant solutions for the DB. One thing to keep in mind: RDBMSs are not optimized for messaging use cases, and will not be nearly as performant as the file-based solutions. On the other hand, all of your RDBMS tools remain relevant, messages can be queried, etc.
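The lease-lock variant can be sketched in miniature: a broker becomes master by acquiring a time-limited lease and must renew it before it expires, otherwise a waiting slave can take over. This is a toy in-memory model of the pattern only; the real JDBC store keeps the lease in a database row, and the class and method names here are invented for the sketch.

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy model of the lease-lock pattern: one lease, many candidate brokers.
public class LeaseLock {
    static final class Lease {
        final String owner;
        final long expiresAtMillis;
        Lease(String owner, long expiresAtMillis) {
            this.owner = owner;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final AtomicReference<Lease> lease = new AtomicReference<>();

    // Try to become (or stay) master at time 'nowMillis'; true on success.
    public boolean tryAcquire(String broker, long nowMillis, long durationMillis) {
        Lease current = lease.get();
        boolean free = current == null
                || current.owner.equals(broker)          // renewal by current master
                || current.expiresAtMillis <= nowMillis; // lease expired, up for grabs
        if (!free) return false;
        return lease.compareAndSet(current, new Lease(broker, nowMillis + durationMillis));
    }

    public static void main(String[] args) {
        LeaseLock lock = new LeaseLock();
        System.out.println(lock.tryAcquire("brokerA", 0, 10_000));      // true: A is master
        System.out.println(lock.tryAcquire("brokerB", 5_000, 10_000));  // false: lease still held
        System.out.println(lock.tryAcquire("brokerB", 15_000, 10_000)); // true: A's lease expired
    }
}
```

The key property, which also holds for the DB row version, is that a master that stops renewing (crash, partition) loses the lease automatically, so no manual unlock is needed for a slave to take over.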
Ensure your shared file locks work! This requires distributed exclusive locks, and thus file systems that support them, because the election depends on the master getting the lock while slaves line up waiting behind it. It basically boils down to being able to take locks the way the JVM creates them (using POSIX locks: lockf and fcntl). flock is not sufficient; for example, OCFS2 uses flock, but with newer Linux kernels it can be run in a "userspace cluster" mode that uses lockf, and ActiveMQ can use that. You can add as many slaves as you'd like, and it is highly tunable for best performance.
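The JVM-level locking mechanism the election relies on can be demonstrated with `FileChannel.tryLock()`, which maps to POSIX record locks (fcntl) on Linux. This is a local, single-machine sketch; the whole point of the slide is that on a shared file system the same lock must be honored across machines, which is exactly what you need to verify in your environment.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

// Demonstrates the exclusive file lock a shared-storage master grabs.
// On shared storage, slaves blocking on this same lock is what drives
// master election.
public class SharedFileLockDemo {
    // Returns true if the exclusive lock could be taken and released.
    public static boolean acquireAndRelease(File lockFile) {
        try (RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
             FileChannel channel = raf.getChannel()) {
            FileLock lock = channel.tryLock(); // null if another process holds it
            if (lock == null) return false;
            lock.release(); // releasing lets the next broker in line become master
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("amq-", ".lock");
        f.deleteOnExit();
        System.out.println(acquireAndRelease(f)); // true on a healthy file system
    }
}
```

A quick sanity test like this, run from two machines against the same NFS/SAN-mounted file, is a cheap way to confirm the file system actually supports the distributed exclusive locks the broker needs.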
Coordination and master election are handled by ZooKeeper. Apache ZooKeeper is a distributed coordination and configuration service that is itself HA. It is a TLP at Apache that originated as the cluster coordination service used by Hadoop in a Hadoop cluster. You can use ZooKeeper to build out distributed data structures (queues), locks, and synchronization primitives, or use it as a central location for configuration, or for master election. ZooKeeper behaves properly when nodes go down, and even during network partitions. A ZooKeeper ensemble should always be odd-sized (1, 3, 5, etc.) because a majority is always needed for voting. This is what Fuse Fabric uses. Think of it kind of like LDAP, with a directory structure made of znodes. Common recipes: locks, barriers, queues, master election, rendezvous, group membership.
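The odd-ensemble-size rule follows directly from the majority arithmetic, which is worth seeing spelled out. A small calculation (written for this talk, not ZooKeeper code) shows why an even member buys nothing:

```java
// Quorum arithmetic behind "ensembles should be 1, 3, 5, ...":
// every decision needs a majority, and the number of tolerable
// failures is whatever is left over after the quorum.
public class Quorum {
    public static int quorumSize(int ensembleSize) {
        return ensembleSize / 2 + 1; // strict majority (integer division)
    }

    public static int tolerableFailures(int ensembleSize) {
        return ensembleSize - quorumSize(ensembleSize);
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 3, 4, 5}) {
            System.out.println(n + " nodes -> quorum " + quorumSize(n)
                    + ", tolerates " + tolerableFailures(n) + " failure(s)");
        }
        // 3 and 4 nodes both tolerate only 1 failure, so the even
        // fourth node adds cost without adding fault tolerance.
    }
}
```

Hence 3 nodes tolerate 1 failure, 5 tolerate 2, and adding a fourth or sixth node never raises the failure tolerance.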
Management of your middleware infrastructure is critical. ActiveMQ has had two main avenues for management and monitoring. #1: JMX, through JConsole, VisualVM, or a tool like Nagios or any of the commercial monitoring options. #2: a web-based console that shipped out of the box. The web console does give a basic overview, so you can see queues, basic destination stats, connections, network connectors, etc. Bottom line, it was fairly limited, had no visualizations, and could not perform some very common and needed operations on the broker; for example, it was clumsy to move a message from a DLQ back to its original destination. However, one of the biggest motivations for moving away from this web console is that every product comes with its own console! ActiveMQ's was "just another console", so when you had FuseESB, which contained ActiveMQ, Karaf, Felix, Camel, and potentially other JVM-based apps with their own consoles, you had n consoles, which becomes a huge pain.
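The JMX avenue mentioned above boils down to reading attributes off MBeans registered with the platform MBeanServer, which is exactly what JConsole, VisualVM, and Nagios checks do under the hood. Here is a self-contained sketch of that mechanism; the `QueueStats` MBean and its hard-coded value are stand-ins invented for the example, not ActiveMQ's real broker MBeans.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Registers a tiny standard MBean and reads an attribute back through
// the MBeanServer, the same path a JMX monitoring tool would use to
// read broker statistics such as a destination's queue depth.
public class JmxDemo {
    public interface QueueStatsMBean {
        long getQueueSize();
    }

    public static final class QueueStats implements QueueStatsMBean {
        public long getQueueSize() { return 42L; } // stand-in value
    }

    // Registers the MBean (if needed) and reads QueueSize; -1 on error.
    public static long registerAndRead() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("example:type=QueueStats,destination=ORDERS");
            if (!server.isRegistered(name)) server.registerMBean(new QueueStats(), name);
            return (Long) server.getAttribute(name, "QueueSize");
        } catch (Exception e) {
            return -1L;
        }
    }

    public static void main(String[] args) {
        System.out.println(registerAndRead()); // 42
    }
}
```

A remote tool does the same `getAttribute` call over an RMI connector; the object names and attribute sets are what the old web console and the JMX tooling both sit on top of.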
So HawtIO is a single-page web app written with AngularJS, designed with plugins and customization in mind. It's intended to be the one and only console that your JVM apps need for monitoring and visualizations. HawtIO will automatically discover plugins or services that it knows how to manage and enable them on your dashboard. It can manage and visualize Camel, ActiveMQ, Fabric, FuseESB, JBoss, Infinispan, Tomcat, and others.