Topic: Speedtest: Benchmark Your Apache Kafka®
Abstract: In this session, Mark will talk about running benchmarking utilities for Apache Kafka: determining how many MB/s a cluster can handle, setting up automated benchmark runs (including the repo), and using the results to find and optimize client-side producer configuration properties.
3. Where to begin?
Understand and tune
• Producers
• Consumers
• Brokers
Producer tuning is key
• Efficient batching is essential for overall performance
Focus on fundamentals
• Large impact & gains
• Advanced topics are covered e.g. in "Tail Latency at Scale with Apache Kafka"
4. Service goals and tradeoffs
Non-performance objectives
• Business requirements take priority
• Durability, availability and ordering?
Performance objectives
• Trade-off between throughput and latency
Example approach
• Set configuration to ensure data durability
• Optimize for throughput
[Diagram: workloads such as payments, logging, and Next Best Offer positioned on a centralized Kafka cluster along the throughput, latency, availability, and durability trade-off axes]
5. Agenda
01. Introduction: setting the scene & review of relevant terminology
02. Producers: deep dive into producer internals. Why is producer behavior key for cluster performance?
03. Consumers: understand fetching and consumer group behavior.
04. Brokers, Zookeepers and Topics: how are requests handled? Why does Zookeeper matter?
05. Optimizing and Tuning Client Applications: key parameters to consider for different service goals.
06. Summary: summary and outlook.
6. It is a journey...
1. Identify your service goal: throughput, latency, durability, or availability
2. Understand Kafka internals: producer, consumer and broker behavior
3. Configure cluster and clients: ensure service goals are met
4. Benchmark, monitor, and tune: an iterative procedure to drive performance
8. Producer
Key configuration (defaults):
acks=1
enable.idempotence=false
max.request.size=1MB
retries=MAX_INT
delivery.timeout.ms=2min
max.in.flight.requests.per.connection=5
batch.size=16KB
linger.ms=0
buffer.memory=32MB
max.block.ms=60s
compression.type=none
[Diagram: records flow into per-partition batches, and batches into per-broker requests]
Serializer
● Retrieves and caches schemas from Schema Registry
Partitioner
● Java client uses murmur2 for hashing
● If no key is provided, performs round robin
● If keys are unbalanced, it will overload one leader
Record accumulator
● One buffer per partition; seldom-used partitions may not achieve high batching
● If many producers run in the same JVM, memory and GC could become important
● The sticky partitioner can be used to increase batch sizes in the round-robin case (KIP-480/KIP-794)
Sender thread
● Batches are grouped by destination broker into requests
● Multiple batches to different partitions can end up in the same producer request
Compression
● Applied at the batch level
● Allows faster transfer to the broker
● Reduces the inter-broker replication load
● Reduces page cache & disk space utilization on brokers
● Gzip is more CPU intensive, Snappy is lighter, LZ4/ZStd are a good balance*
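To make the knobs above concrete, here is a minimal Java sketch of a producer that overrides the defaults listed above for a durability-first, throughput-oriented setup. The broker address (kafka:9092) and topic (demo-perf-topic) are illustrative placeholders, not part of the original deck:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ThroughputTunedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Durability first: wait for all in-sync replicas instead of the default acks=1
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Then optimize for throughput: larger batches, a small linger, cheap compression
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 300_000);  // default is 16 KB
        props.put(ProducerConfig.LINGER_MS_CONFIG, 100);       // default is 0
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1_000; i++) {
                // Keyed records are partitioned via murmur2; unkeyed ones round-robin/sticky
                producer.send(new ProducerRecord<>("demo-perf-topic", Integer.toString(i), "payload-" + i));
            }
            producer.flush(); // drain the record accumulator before closing
        }
    }
}
```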
9. Batching is key to overall performance
Benefits of batching
● Reduced network bandwidth
  ○ producer to broker
  ○ broker to broker (replication)
  ○ broker to consumer
● Lower storage requirements on broker disks
● Reduced CPU requirements due to fewer requests
From "Tail Latency at Scale with Apache Kafka":
"Batching reduces the cost of each record by amortizing costs on both the clients and brokers. Generally, bigger batches reduce processing overhead and reduce network and disk IO, which improves network and disk utilization."
10. Start the demo environment
In docker-compose (on my mac):
● 1 × zookeeper
● 5 × brokers
● 1 × Squid proxy (sends JMX metrics to Health+)
Not starting: Schema Registry, Connect, ksqlDB, REST Proxy, Confluent Control Center
12. Kafka performance test tools
Overview
● kafka-producer-perf-test and kafka-consumer-perf-test are CLI tools to write & read sample data to/from topics
● Helpful to enhance understanding of parameters & their impact
Disclaimer
● Performance numbers are not representative for specific customer use cases!
  ○ Random test data is reused
● Use-case-specific performance testing is required
Example:
kafka-producer-perf-test \
  --num-records 1000000 \
  --record-size 1000 \
  --topic demo-perf-topic \
  --throughput 10000 \
  --print-metrics \
  --producer-props bootstrap.servers=kafka:9092 acks=all batch.size=300000 linger.ms=100 compression.type=lz4
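If you want the same kind of measurement from inside a JVM application rather than via the CLI, a rough self-timed sketch could look like the following. It mirrors the CLI example above; the broker address, topic name, and record count/size are illustrative assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MiniProducerBenchmark {
    public static void main(String[] args) {
        int numRecords = 1_000_000;      // like --num-records
        byte[] payload = new byte[1000]; // like --record-size

        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("acks", "all");
        props.put("batch.size", "300000");
        props.put("linger.ms", "100");
        props.put("compression.type", "lz4");

        long start = System.nanoTime();
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < numRecords; i++) {
                producer.send(new ProducerRecord<>("demo-perf-topic", payload));
            }
            producer.flush(); // make sure all batches have left the accumulator
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double mbPerSec = numRecords * (double) payload.length / (1024.0 * 1024.0) / seconds;
        System.out.printf("%d records in %.1f s -> %.1f MB/s%n", numRecords, seconds, mbPerSec);
    }
}
```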
13. Most significant producer performance metrics
(Java metrics & librdkafka statistics)
Metric | Meaning | MBean
record-size-avg | Avg record size | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
batch-size-avg | Avg number of bytes sent per partition per request | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
bufferpool-wait-ratio | Fraction of time an appender waits for space allocation | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
compression-rate-avg | Avg compression rate for a topic (compressed / uncompressed batch size) | kafka.producer:type=producer-topic-metrics,client-id=([-.\w]+),topic=([-.\w]+)
record-queue-time-avg | Avg time (ms) record batches spent in the send buffer | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
request-latency-avg | Avg request latency (ms) | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
produce-throttle-time-avg | Avg time (ms) a request was throttled by a broker | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
record-retry-rate | Avg per-second number of retried record sends for a topic | kafka.producer:type=producer-topic-metrics,client-id=([-.\w]+),topic=([-.\w]+)
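Besides JMX, these metrics can also be read in-process through the Java client's metrics() method. A hypothetical helper, assuming an already-constructed producer, might look like this (the metric names are the ones from the table above):

```java
import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class ProducerMetricsPrinter {
    // Print a few of the table's metrics for a running producer instance.
    public static void print(KafkaProducer<?, ?> producer) {
        Map<MetricName, ? extends Metric> metrics = producer.metrics();
        for (Map.Entry<MetricName, ? extends Metric> entry : metrics.entrySet()) {
            String name = entry.getKey().name();
            if (name.equals("batch-size-avg") || name.equals("record-queue-time-avg")
                    || name.equals("request-latency-avg") || name.equals("bufferpool-wait-ratio")) {
                // metricValue() is available in modern Java client versions (since 1.0)
                System.out.printf("%-25s %s%n", name, entry.getValue().metricValue());
            }
        }
    }
}
```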
16. Consumers
Partitions
● Basis for scalability
● No partition will be assigned to more than one consumer in the same group
Key parameters
● # of partitions
● fetch.min.bytes=1
● fetch.max.wait.ms=500ms
● max.partition.fetch.bytes=10MB
● fetch.max.bytes=50MB
● max.poll.records=500
● max.poll.interval.ms=5min
● auto.commit.interval.ms=5s (if being used)
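As an illustration of where these parameters go, here is a minimal consumer sketch. The overridden values (e.g. fetch.min.bytes=100000) are illustrative throughput-oriented choices, not recommendations from the deck, and the broker/group/topic names are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TunedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-perf-group");     // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Throughput-oriented fetching: wait for larger fetches instead of many small ones
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 100_000); // default is 1
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);   // default is 500 ms
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);    // default is 500

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-perf-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            System.out.println("fetched " + records.count() + " records");
        }
    }
}
```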
17. Key positions in each partition
Log end offset
• Latest data added to the partition
• Position of the producer
• Not accessible to consumers
High watermark
• Offsets up to the watermark can be consumed
• Data has been replicated to all in-sync replicas
Current position
• Specific to each consumer instance
• Current message being processed in the poll loop
Last committed offset
• Last position persisted in the __consumer_offsets topic
[Diagram: a partition with offsets 0-12, marking the last committed offset, the consumer's current position, the high watermark, and the log end offset]
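These positions are exposed by the Java consumer API, so a small helper can report them per assigned partition. A sketch, assuming an existing consumer instance; note that endOffsets() returns the high watermark for read_uncommitted consumers:

```java
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class PartitionPositions {
    // Report current position, end offset, lag, and last committed offset.
    public static void report(KafkaConsumer<?, ?> consumer) {
        Set<TopicPartition> assignment = consumer.assignment();
        Map<TopicPartition, Long> endOffsets = consumer.endOffsets(assignment);
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(assignment);
        for (TopicPartition tp : assignment) {
            long position = consumer.position(tp);   // current position in the poll loop
            long end = endOffsets.get(tp);           // high watermark as seen by consumers
            OffsetAndMetadata c = committed.get(tp); // last committed offset (may be null)
            System.out.printf("%s position=%d end=%d lag=%d committed=%s%n",
                    tp, position, end, end - position, c == null ? "none" : c.offset());
        }
    }
}
```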
18. Consumer groups
[Diagram: group protocol. The consumer sends "find coordinator" to any broker (bootstrap) and receives the coordinator details; it then sends "join consumer group" to the coordinator broker (receiving the leader details) and "sync group" (receiving its partition assignment)]
Rebalances
● Happen every time a new consumer joins or a consumer leaves (fails) the group
● Until Kafka 2.4 a "stop the world" event (solved in KIP-429)
● Consider setting group.instance.id to minimize rebalances (KIP-345)
Partition assignment
● Based on partition.assignment.strategy
● Options: range (default), round robin, sticky, cooperative sticky
● Is customizable
Heartbeat
heartbeat.interval.ms=3s
session.timeout.ms=10s
group.initial.rebalance.delay.ms=3s
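As a sketch of the two mitigations mentioned above (KIP-345 static membership and KIP-429 cooperative rebalancing), the relevant client configuration might look like this; the group and instance ids are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;

public class RebalanceFriendlyConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-perf-group");     // placeholder
        // Static membership (KIP-345): a restart within session.timeout.ms
        // does not trigger a rebalance. The id must be unique per instance.
        props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "consumer-host-1");
        // Incremental cooperative rebalancing (KIP-429): avoids "stop the world"
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                CooperativeStickyAssignor.class.getName());
        return props;
    }
}
```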
19. Selected consumer performance metrics
(Java metrics and librdkafka statistics)
Metric | Meaning | MBean
fetch-latency-avg | Avg time taken for a fetch request | kafka.consumer:type=consumer-fetch-manager-metrics,client-id=([-.\w]+)
fetch-size-avg | Avg number of bytes fetched per request | kafka.consumer:type=consumer-fetch-manager-metrics,client-id=([-.\w]+)
commit-latency-avg | Avg time taken for a commit request | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
rebalance-latency-total | Total time taken for group rebalances | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
fetch-throttle-time-avg | Avg throttle time (ms) | kafka.consumer:type=consumer-fetch-manager-metrics,client-id=([-.\w]+)
20. Consumer benchmarking
(1) Start with the most simple test: without any tuning we get extremely good results.
Highlights:
● 10M messages in less than 30 seconds
● 10 GB of data retrieved
● 325 MB/s
Conclusion:
● Tuning the producer is key; if it is correctly tuned, there can be almost no tuning required on the consumer side
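A comparable consumer-side measurement can be scripted with the plain Java client. This sketch (broker, group, and topic names are placeholders, and the target count mirrors the 10M-message run) drains a topic and reports MB/s analogously to the highlights above:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class MiniConsumerBenchmark {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "perf-check");          // placeholder
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        long bytes = 0, count = 0, target = 10_000_000;
        long start = System.nanoTime();
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-perf-topic"));
            while (count < target) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<byte[], byte[]> r : records) {
                    count++;
                    bytes += r.serializedValueSize();
                }
                if (records.isEmpty()) break; // stop once the topic is drained
            }
        }
        double s = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d records, %.1f MB/s%n", count, bytes / (1024.0 * 1024.0) / s);
    }
}
```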
24. Overview: Brokers and Zookeeper
Request lifecycle in the broker
● How are produce & fetch requests handled?
● How can inefficient batching impact performance?
● How to identify where time is spent during request handling?
Controller, leaders, and Zookeeper
● How is the Controller elected?
● How are broker failures detected?
● Why does the partition count matter for the recovery time after a controller failure?
(Next 8 slides skipped)
25. 04. Optimizing and Tuning Client Applications
https://docs.confluent.io/cloud/current/client-apps/optimizing/index.html#optimizing-and-tuning
27. Recommendations
Benchmarking
● Benchmark all applications with a significant & representative load
● Consider a test cluster with:
  ○ the application's requirements configured (durability, availability, or any other)
  ○ real data (size, schema, serialization format, ...)
● Test the different parameters to see their impact on the results (throughput, latency, ...), considering different configurations (batch size, compression, linger, ...)
● Evaluate the traffic and leave space for growth when determining the number of partitions
● Low-volume applications may need care too
● Re-evaluate after major changes in application or message content (JSON size, ...) and volume
Monitoring
● Should be used to identify bottlenecks in running clusters
● Client monitoring is as important as broker monitoring
28. Conclusion
Optimization approach
● Determine service goals
● Understand Kafka's internals
● Configure clients & cluster
● Benchmark, monitor & tune
Resources
● "Optimizing Your Apache Kafka® Deployment" white paper
● "Optimizing and Tuning" documentation (linked above)
Continue the conversation
● How to monitor the cluster & clients?
● Integration with external systems?
● Tuning of Kafka Streams & ksqlDB applications?