Slides from the 'MySQL Cluster as NoSQL' tutorial at Percona Live MySQL Conference 2012 in London.
Tutorial covers:
* MySQL Cluster administration
* NoSQL options for MySQL Cluster and when to use what
* Memcached (installation and configuration)
* Cluster/J
* NDBAPI
* Benchmarking of different access methods on a live cluster
Severalnines Self-Training: MySQL® Cluster - Part II
Part II of our free self-training slides on MySQL Cluster.
In this part we cover 'Detailed Concepts':
* Data Distribution & Partitioning
* Two Phase Commit Protocol
* Transaction Resources
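The two-phase commit protocol listed above can be sketched in a few lines: a coordinator first asks every participant to prepare (vote), and commits only if all vote yes. This is a toy illustration of the general protocol, not the NDB implementation; all names are hypothetical.

```python
# Toy sketch of two-phase commit (2PC): a coordinator commits a
# transaction across several data nodes only if all of them can.

class Participant:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.state = "init"

    def prepare(self):
        # Phase 1: vote yes only if this node can durably apply the change.
        self.state = "prepared" if self.healthy else "aborted"
        return self.healthy

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1 (prepare): every participant must vote yes.
    if all(p.prepare() for p in participants):
        # Phase 2 (commit): unanimous yes, so commit everywhere.
        for p in participants:
            p.commit()
        return "committed"
    # Any "no" vote aborts the whole transaction on all nodes.
    for p in participants:
        p.rollback()
    return "aborted"
```

A single unhealthy participant is enough to abort the transaction everywhere, which is exactly the all-or-nothing guarantee 2PC provides.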
Conference slides: MySQL Cluster Performance Tuning - Severalnines
This presentation goes through performance tuning basics in MySQL Cluster.
It also covers the new parameters and status variables of MySQL Cluster 7.2 used to diagnose issues with, e.g., disk data performance and query (join) performance.
Severalnines Self-Training: MySQL® Cluster - Part VIII
This document provides an agenda for a training on MySQL Cluster presented by Severalnines.com. The training will cover topics such as MySQL Cluster architecture, installation, performance tuning, management and administration, disk data, designing a cluster, and troubleshooting. Hands-on lab exercises are included to reinforce the concepts taught. Prerequisites for the training include basic SQL and database knowledge as well as laptop hardware requirements.
Breakthrough performance with MySQL Cluster (2012) - Frazer Clement
Presentation from the MySQL Connect conference in San Francisco 2012.
Describes the cluster architecture and its impact on performance, along with benchmarking, analysis, and techniques for improving performance.
OSSCube MySQL Cluster Tutorial By Sonali At Osspac 09 - OSSCube
Sonali from OSSCube presents on MySQL Cluster Tutorial at OSSPAC 2009
OSSCube - Leading Open Source Evangelist Company.
To know how we can help your business grow, contact:
India: +91 995 809 0987
USA: +1 919 791 5472
WEB: www.osscube.com
Mail: sales@osscube.com
Severalnines Training: MySQL® Cluster - Part IX
This document discusses best practices for designing a MySQL Cluster database infrastructure. It recommends dedicating instances for data and API nodes and not co-locating them. The number of nodes depends on storage, throughput and redundancy requirements. Hardware recommendations include fast CPUs, RAM sized for the dataset, and SSDs or RAID for storage. Performance planning requires benchmarking typical workloads to determine if resources need scaling. The document provides formulas and tools to help calculate storage and memory needs.
Microsoft SQL Server Distributing Data with R2 Bertucci - Mark Ginnebaugh
This presentation by Paul Bertucci describes an ordered method of determining what users need and which SQL Server data distribution solution is best to use.
There are many needs of data throughout an organization. Getting data to those who need it can be accomplished many different ways with SQL Server 2008 technologies.
This presentation covers data replication, database mirroring and snapshots, older methods such as log shipping and linked servers, and new methods such as using the sync framework.
You'll Learn
* Each of SQL Server’s main data distribution solutions
* How to determine which solution to use to solve different purposes
MySQL Cluster is a database that provides in-memory real-time performance, web scalability, and 99.999% availability. It uses memory-optimized tables with durability and can handle high volumes of both reads and writes simultaneously in a distributed, auto-sharding fashion while maintaining ACID compliance. It offers high availability through a shared-nothing architecture with no single point of failure and self-healing capabilities.
Choosing a Next Gen Database: the New World Order of NoSQL, NewSQL, and MySQL - ScaleBase
In this webinar Matt Aslett of 451 Research joins ScaleBase to discuss the benefits and drawbacks of NoSQL, NewSQL & MySQL databases and explores real-life use cases for each.
The document discusses MySQL NDB 8.0 and high availability solutions for MySQL. It summarizes MySQL NDB Cluster, MySQL InnoDB Cluster, and MySQL Replication as high availability solutions. It also discusses features and performance of MySQL NDB Cluster 8.0, including linear scalability, predictable low-latency performance, and improved backup throughput.
MySQL Cluster Carrier Grade Edition is a high availability, distributed database solution based on MySQL Cluster. It provides real-time performance with 99.999% uptime through a shared-nothing architecture across up to 255 nodes. Key applications include high-traffic ecommerce sites, telecom subscriber databases, and other systems requiring high scalability and availability.
Connector/J Beyond JDBC: the X DevAPI for Java and MySQL as a Document Store - Filipe Silva
The document discusses Connector/J Beyond JDBC and the X DevAPI for Java and MySQL as a Document Store. It provides an agenda that includes an introduction to MySQL as a document store, an overview of the X DevAPI, and how the X DevAPI is implemented in Connector/J. The presentation aims to demonstrate the X DevAPI for developing CRUD-based applications and using MySQL as both a relational database and document store.
The 10 Best PostgreSQL Replication Strategies for Your Enterprise - EDB
This webinar helps you understand the differences between the various replication approaches, recognize the requirements of each strategy, and get a clear picture of what can be achieved with each one. With that, you will hopefully be better placed to work out which types of PostgreSQL replication your system really needs.
- How physical and logical replication work in PostgreSQL
- Differences between synchronous and asynchronous replication
- Advantages, disadvantages, and challenges of multi-master replication
- Which replication strategy is better suited to different use cases
Speaker:
Borys Neselovskyi, Regional Sales Engineer DACH, EDB
------------------------------------------------------------
For more #webinars, visit http://bit.ly/EDB-Webinars
Download free #PostgreSQL whitepapers: http://bit.ly/EDB-Whitepapers
Read our #Postgres Blog http://bit.ly/EDB-Blogs
Follow us on Facebook at http://bit.ly/EDB-FB
Follow us on Twitter at http://bit.ly/EDB-Twitter
Follow us on LinkedIn at http://bit.ly/EDB-LinkedIn
Reach us via email at marketing@enterprisedb.com
Microsoft released SQL Azure more than two years ago - that's enough time for testing (I hope!). So, are you ready to move your data to the Cloud? If you're considering running a business (i.e. a production environment) in the Cloud, you need to think about backup methods, a backup plan for your data, and eventually restoring it with Red Gate Cloud Services. In this session, you'll see the differences, functionality, restrictions, and opportunities in SQL Azure and on-premise SQL Server 2008/2008 R2/2012. We'll consider topics such as how to prepare for backup and restore, and which parts of a cloud environment are most important: keys, triggers, indexes, prices, security, service level agreements, etc.
The document describes a migration from an Oracle database topology to a PostgreSQL database topology at ACI. It discusses the starting Oracle topology with issues around operational complexity and non-ACID compliance. It then describes the target PostgreSQL topology with improved performance, availability and lower costs. The document outlines decisions around tools, extensions, code changes and testing approaches needed for the migration. It also discusses options for migrating the data and cutting over to the new PostgreSQL environment.
From Nice to Have to Mission Critical: MySQL Enterprise Edition - 郁萍 王
This document outlines an agenda for a presentation on MySQL Enterprise Edition. The agenda includes an introduction to MySQL, discussing data in the modern enterprise, an overview of MySQL Enterprise Edition, Oracle product integrations and certifications, opportunities for learning more, and a question and answer session. It also includes a safe harbor statement indicating the product direction outlines are for information purposes only and not binding commitments.
MySQL InnoDB Cluster HA Overview & Demo - Keith Hollman
Take a look at the High Availability option that you can use with your out-of-the-box MySQL: MySQL InnoDB Cluster. With MySQL Server 8.0, MySQL Shell & MySQL Router you can convert from single-primary to multi-primary and back again, in a single command. Want to know how?
Priyanka, a MySQL Cluster developer, presented MySQL Cluster at the MySQL User Camp. The slide deck contains an introduction to the cluster module: the architecture, auto-sharding, failover, etc.
Galera cluster for MySQL - Introduction Slides - Severalnines
This set of slides gives you an overview of Galera, configuration basics and deployment best practices.
The following topics are covered:
- Concepts
- Node provisioning
- Network partitioning
- Configuration example
- Benchmarks
- Deployment best practices
- Galera monitoring and management
Implementing High Availability Caching with Memcached - Gear6
Typical Memcached deployments do not comprehensively address web site requirements for high availability. Depending on your web architecture, a single failure can disable your web caches. This presentation offers real world solutions to solving high availability challenges common to large, dynamic websites with Memcached, specifically:
* Options and benefits for deploying high availability services within Memcached
* How companies are approaching high availability
* Considerations on building and deploying high availability
o Recommendations for a typical Memcached environment
o Open source tools available
o High level costs for deployment
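One common building block for highly available Memcached tiers is consistent hashing: when a cache node fails, only the keys that lived on that node are remapped, instead of reshuffling the whole keyspace. Below is a minimal sketch of the general technique (not any specific client library); the class and node names are hypothetical.

```python
import bisect
import hashlib

# Minimal consistent-hash ring for spreading cache keys across
# memcached nodes. Each node is placed on the ring many times
# ("virtual nodes") to even out the distribution.

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = []  # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        # Stable hash; md5 here purely for key placement, not security.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # A key belongs to the first node clockwise from its hash.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]
```

If `cache3` fails and the ring is rebuilt with the two survivors, every key that was already on `cache1` or `cache2` stays put; only `cache3`'s share of keys moves.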
Severalnines Training: MySQL Cluster - Part X
The document discusses troubleshooting MySQL Cluster. The most common problems include configuration changes, running out of disk space or RAM, and network issues. When problems occur, error logs and trace files should be checked to localize the issue. If a node fails, optimized node recovery or initial node recovery may be used to restore it. If all nodes fail, a system restart or initial system restart with restore from backup may be required.
Galera Cluster for MySQL vs MySQL (NDB) Cluster: A High Level Comparison - Severalnines
Galera Cluster for MySQL, Percona XtraDB Cluster and MariaDB Cluster (the three "flavours" of Galera Cluster) make use of the Galera WSREP libraries to handle synchronous replication. MySQL Cluster is the official clustering solution from Oracle, while Galera Cluster for MySQL is slowly but surely establishing itself as the de facto clustering solution in the wider MySQL ecosystem.
In this webinar, we will look at all these alternatives and present an unbiased view on their strengths/weaknesses and the use cases that fit each alternative.
This webinar will cover the following:
MySQL Cluster architecture: strengths and limitations
Galera Architecture: strengths and limitations
Deployment scenarios
Data migration
Read and write workloads (Optimistic/pessimistic locking)
WAN/Geographical replication
Schema changes
Management and monitoring
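The "optimistic/pessimistic locking" point in the agenda above is worth a concrete illustration. Galera-style optimistic concurrency lets conflicting transactions proceed and rejects the loser at commit time; a simple way to model that is a version check on write. This is a hypothetical in-memory sketch of the pattern, not either product's implementation.

```python
# Optimistic concurrency sketch: each row carries a version number.
# A write succeeds only if the version the writer read is still
# current; otherwise another transaction committed first (conflict).

class OptimisticStore:
    def __init__(self):
        self.rows = {}  # key -> (value, version)

    def read(self, key):
        return self.rows.get(key, (None, 0))

    def write(self, key, value, expected_version):
        _, current = self.rows.get(key, (None, 0))
        if current != expected_version:
            return False  # conflict: the loser must retry
        self.rows[key] = (value, current + 1)
        return True
```

Pessimistic locking would instead block the second writer up front; optimistic schemes trade that waiting for the cost of occasional retries, which pays off when conflicts are rare.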
DIY: A distributed database cluster, or: MySQL Cluster - Ulf Wendel
Live from the International PHP Conference 2013: MySQL Cluster is a distributed, auto-sharding database offering 99.999% high availability. It runs on a Raspberry Pi as well as on a cluster of multi-core machines. A 30-node cluster was able to deliver 4.3 billion (not million) read transactions per minute in 2012. Take a deeper look into the theory behind all the MySQL replication/clustering solutions (including 3rd party) and learn how they differ.
This document provides an overview of MySQL high availability solutions including InnoDB Cluster and NDB Cluster. InnoDB Cluster allows setting up a highly available MySQL cluster with auto-sharding using Group Replication and MySQL Router for transparent application routing. NDB Cluster is a memory-optimized database for low-latency applications requiring high scalability and availability. MySQL Shell provides a unified interface for deploying, managing and monitoring these MySQL HA solutions.
Webinar: Data Streaming with Apache Kafka & MongoDB - MongoDB
This document summarizes a webinar about integrating Apache Kafka and MongoDB for data streaming. The webinar covered:
- An overview of Apache Kafka and how it can be used for data transport and integration as well as real-time stream processing.
- How MongoDB can be used as both a Kafka producer, to stream data into Kafka topics, and as a Kafka consumer, to retrieve streamed data from Kafka for storage, querying, and analytics in MongoDB.
- Various use cases for integrating Kafka and MongoDB, including handling real-time updates, storing raw and processed event data, and powering real-time applications with analytics models built from streamed data.
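The produce/consume flow summarized above can be illustrated with a toy in-memory stand-in: a "topic" holds an append-only message log with a consumer offset, and a plain dict stands in for a MongoDB collection. All names here are hypothetical; real deployments would use the Kafka and MongoDB client libraries.

```python
# Toy stand-in for the Kafka -> MongoDB pipeline: events are produced
# to a topic, and a consumer drains them into a document store.

class Topic:
    def __init__(self):
        self.messages = []  # append-only log, like a Kafka partition
        self.offset = 0     # committed consumer position

    def produce(self, message):
        self.messages.append(message)

    def consume(self):
        # Yield messages past the committed offset, advancing it.
        while self.offset < len(self.messages):
            msg = self.messages[self.offset]
            self.offset += 1
            yield msg


def sink_to_collection(topic, collection):
    # Consumer role: persist each streamed event as a document,
    # keyed by its _id (as a MongoDB collection would be).
    for event in topic.consume():
        collection[event["_id"]] = event
```

Because the consumer tracks an offset, calling the sink again after new events arrive picks up only the new messages, which mirrors how a Kafka consumer group resumes from its committed position.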
Data Streaming with Apache Kafka & MongoDB - EMEA - Andrew Morgan
A new generation of technologies is needed to consume and exploit today's real time, fast moving data sources. Apache Kafka, originally developed at LinkedIn, has emerged as one of these key new technologies.
This webinar explores the use-cases and architecture for Kafka, and how it integrates with MongoDB to build sophisticated data-driven applications that exploit new sources of data.
NativeX (formerly W3i) recently transitioned a large portion of their backend infrastructure from MS SQL Server to Apache Cassandra. Today, its Cassandra cluster backs its mobile advertising network supporting over 10 million daily active users producing over 10,000 transactions per second with an average database request latency of under 2 milliseconds. Going from relational to noSQL required NativeX's engineers to re-train, re-tool and re-think the way it architects applications and infrastructure. Learn why Cassandra was selected as a replacement, what challenges were encountered along the way, and what architecture and infrastructure were involved in the implementation.
The document discusses how SQL and NoSQL databases can work together for big data. It provides an overview of relational databases based on Codd's rules and how NoSQL databases are used for less structured data like documents and graphs. Examples of using MongoDB and Hadoop are provided. The document also discusses using MySQL with memcached to get the benefits of both SQL and NoSQL for accessing data.
Jan Steemann: Modelling data in a schema free world (Talk held at Froscon, 2... - ArangoDB Database
Even though most NoSQL databases follow the "schema-free" data paradigm, it is still important to choose the right data model to make the best of the underlying database technology. This talk provides an overview of the different data storage models available in popular NoSQL databases. It also introduces some best practices on how to model your data for both best performance and best querying.
This document provides an overview of NoSQL databases. It begins by defining NoSQL as non-relational databases that are distributed, open source, and horizontally scalable. It then discusses some of the limitations of relational databases that led to the rise of NoSQL, such as issues with scalability and the need for flexible schemas. The document also summarizes some key NoSQL concepts, including the CAP theorem, ACID versus BASE, and eventual consistency. It provides examples of use cases for NoSQL databases and discusses some common NoSQL database types and how they address scalability.
The document discusses the rise of elastic SQL databases which provide the benefits of both traditional databases like ACID compliance and SQL capabilities as well as the elasticity of cloud databases. Elastic SQL databases allow scaling simply by adding or removing nodes, provide high availability and zero downtime, and can integrate with modern DevOps practices. NuoDB is highlighted as an example of an elastic SQL database that uses a distributed cache approach to enable elastic scaling while maintaining data consistency and durability.
Microsoft Azure Cosmos DB is a multi-model database that supports document, key-value, wide-column and graph data models. It provides high throughput, low latency and global distribution across multiple regions. Cosmos DB supports multiple APIs including SQL, MongoDB, Cassandra and Gremlin to allow developers to use their preferred API based on their application needs and skills. It also provides automatic scaling of throughput and storage across all data partitions.
Redis is a key-value store that can be used as a NoSQL database, cache, or message broker. It supports data structures like strings, hashes, lists, sets, and sorted sets. Redis is very fast, with over 100,000 reads/writes per second, and supports non-blocking operations. It can be used to build social connections by storing followers and following relationships in sets and finding friends with set intersection.
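The followers/following pattern mentioned above maps directly onto Redis sets and set intersection. Below, plain Python sets stand in for Redis keys; `SADD` and `SINTER` are real Redis commands, while the keyspace layout (`followers:{user}`, `following:{user}`) and helper functions are illustrative assumptions.

```python
# Social graph with sets: mutual friends via set intersection.
# Python sets mimic Redis SADD / SINTER on keys like
# "followers:{user}" and "following:{user}".

social = {}  # key -> set, mimicking the Redis keyspace

def sadd(key, *members):   # Redis: SADD key member [member ...]
    social.setdefault(key, set()).update(members)

def sinter(*keys):         # Redis: SINTER key [key ...]
    sets = [social.get(k, set()) for k in keys]
    return set.intersection(*sets) if sets else set()

# "Friends" = accounts alice follows that also follow her back.
sadd("following:alice", "bob", "carol", "dave")
sadd("followers:alice", "bob", "dave", "erin")
friends = sinter("following:alice", "followers:alice")
```

With a real Redis client the same query is a single server-side `SINTER` call, so computing mutual connections stays fast even for large follower sets.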
Introduction to NoSQL covering the basics and the issues with RDBMS, and exploring the various types of NoSQL databases along with their advantages and limitations.
Cloud Database Migration Made Easy: Migrating MySQL to NuoDB - NuoDB
For organizations moving to cloud infrastructure, database migration can be the stuff of nightmares. When selecting a cloud-centric database, balancing ease of migration with the on-demand scaling and continuous availability your modern application needs can seem like a series of compromises... But it doesn’t have to be.
In these slides, we showcase how simple it is to move from a traditional relational database to NuoDB’s elastic SQL database and talk about how this compares to the complexity of moving to a NoSQL database.
Senior Product Manager Joe Leslie demonstrates how to use NuoDB’s built-in migrator facility to simplify migration from databases such as MySQL, Microsoft SQL Server, or Oracle over to NuoDB, minimizing the transition time, and making it easy to get started sooner.
SQL or NoSQL, is this the question? - George Grammatikos
This document provides an overview and comparison of SQL and NoSQL databases. It lists the most popular databases according to a Stack Overflow survey, including SQL databases like Azure SQL and NoSQL databases like Azure Cosmos DB. It then defines RDBMS and NoSQL databases and provides examples of relational and non-relational data models. The document compares features of SQL and NoSQL databases such as scalability, performance, data modeling flexibility and pricing. It also includes live demo instructions for provisioning Azure SQL and Cosmos DB databases.
Listen to this webinar for a technical discussion about how to evaluate an elastic SQL database, why it's different from evaluating a traditional database, and what to consider during your evaluation.
This document discusses NoSQL databases and why they are used. It begins by defining NoSQL as "not only SQL" or "not SQL" databases that do not use a relational schema. The need for NoSQL databases arose in the early 2000s due to difficulties scaling MySQL databases for large amounts of data. NoSQL databases are classified based on their data model, consistency, and performance. They offer flexible data models including structured, unstructured, and semi-structured data. NoSQL databases focus on developer agility by being easy to use and integrate with modern frameworks. While they scale well horizontally, they generally lack ACID compliance and transaction support of relational databases.
This document provides an overview of NoSQL and MongoDB. It begins with definitions of databases, DBMS, and data models. It then contrasts relational databases with NoSQL databases, explaining that NoSQL is better suited for large, unstructured datasets that require scalability and availability over consistency. MongoDB is introduced as a popular document-oriented NoSQL database, and use cases for Aadhar and eBay are described. The document concludes that both RDBMS and NoSQL systems have advantages, and the right tool should be selected based on each application's requirements.
How big data moved the needle from monolithic SQL RDBMS to distributed NoSQL - Sayyaparaju Sunil
We will see what factors contributed to the evolution from one generation of databases to the next, and what design choices engineers made along the way. We will also look at what was given up (the trade-offs) in the process, and at which kinds of applications are best suited to each type of database.
This document provides a comparison of SQL and NoSQL databases. It summarizes the key features of SQL databases, including their use of schemas, SQL query languages, ACID transactions, and examples like MySQL and Oracle. It also summarizes features of NoSQL databases, including their large data volumes, scalability, lack of schemas, eventual consistency, and examples like MongoDB, Cassandra, and HBase. The document aims to compare the different approaches of SQL and NoSQL for managing data.
Large Scale Cassandra Made Better in Containers - Chris Duchesne and Aaron Sp... - {code} by Dell EMC
How do you take a NoSQL, highly scalable, high-performance distributed database providing high availability with no single point of failure and turn it into an on-demand service? You use Kubernetes and containers! Come learn how Cassandra, REX-Ray, and ScaleIO create a new architecture for an always-available distributed database.
This document discusses relational database management systems (RDBMS) and NoSQL databases. It notes that while SQL is useful for flat data, it does not scale well for large, unstructured, distributed data. The CAP theorem is discussed, noting that databases must sacrifice availability, consistency, or partition tolerance. Several categories of NoSQL databases are described, including document, graph, columnar, and key-value stores. Factors like scalability, transactions, data modeling, querying and access are compared between SQL and NoSQL options. The performance of different databases is evaluated for read-write workloads. The future of polyglot persistence using multiple database technologies is envisioned.
NuoDB is an elastic SQL database that combines the scale-out capabilities required by cloud applications with the transactional consistency and durability demanded by databases. It uses a distributed, peer-to-peer architecture that allows independent database services to scale elastically while providing a single logical database interface through ANSI SQL. NuoDB provides continuous availability, elastic scalability, and allows SQL applications to be migrated and managed in the cloud.
The document summarizes key topics from a lecture on database design for enterprise systems, including:
1) Logical and physical database design steps such as conceptual modeling and converting models to schemas.
2) Database security topics like authentication, authorization, and data encryption.
3) Characteristics of enterprise database environments including high availability, load balancing, clustering, replication, and integrating databases with continuous integration systems.
Similar to Conference tutorial: MySQL Cluster as NoSQL
LIVE DEMO: CCX for CSPs, a drop-in DBaaS solution - Severalnines
This webinar aims to equip Cloud Service Providers (CSPs) with the knowledge and tools to differentiate themselves from hyperscalers by offering a Database-as-a-Service (DBaaS) solution. The session will introduce and demonstrate CCX, a drop-in, premium DBaaS designed for rapid adoption.
Learn more about CCX for CSPs here: https://bit.ly/3VabiDr
DIY DBaaS: A guide to building your own full-featured DBaaS - Severalnines
More so than ever, businesses need to ensure that their databases are resilient, secure, and always available to support their operations. Database-as-a-Service (DBaaS) solutions have become a popular way for organizations to manage their databases efficiently, leveraging cloud infrastructure and advanced set-and-forget automation.
However, consuming DBaaS from providers comes with many compromises. In this guide, we’ll show you how you can build your own flexible DBaaS, your way. We’ll demonstrate how it is possible to get the full spectrum of DBaaS capabilities along with workload access and portability, and avoid surrendering control to a third-party.
From architectural and design considerations to operational requirements, we’ll take you through the process step-by-step, providing all the necessary information and guidance to help you build a DBaaS solution that is tailor-made to your unique use case. So get ready to dive in and learn how to build your own custom DBaaS solution from scratch!
We created this guide to help developers understand:
- Traditional vs. Sovereign DBaaS implementation models
- The DBaaS environment, elements and design principles
- Using a Day 2 operations framework to develop your blueprint
- The 8 key operations that form the foundation of a complete DBaaS
- Bringing the Day 2 ops framework to life with a provisional architecture
- How you can abstract the orchestration layer with Severalnines solutions
Cloud's future runs through Sovereign DBaaS - Severalnines
Sovereign DBaaS is a new way to do DBaaS that allows you to reliably scale your open-source database ops without being limited to a specific environment or ceding control of your infrastructure to third-party service providers.
With Sovereign DBaaS, users can leverage the benefits of modern deployment strategies, e.g. public cloud, hybrid, etc., with additional security, compliance, and risk mitigation. So what exactly is Sovereign DBaaS and why should you choose one?
Presented by Sanjeev Mohan, Principal Analyst at SanjMo and former Gartner Research VP, and Vinay Joosery, CEO of Severalnines, this webinar dives into the future of the cloud and database management and introduces a new solution, Sovereign DBaaS.
Agenda:
- The state of the cloud and its current challenges
- What is Sovereign DBaaS?
- Key features of Sovereign DBaaS
- Why you should choose a Sovereign DBaaS
- How you can implement Sovereign DBaaS with Severalnines
- Q&A
Tips to drive MariaDB Cluster performance for Nextcloud - Severalnines
- I/O capacity: 200 baseline, 2000 for SSD, 4000 for NVMe. Tune for your hardware; higher is better, but avoid over-committing IOPS.
- innodb_flush_log_at_trx_commit = 1: flush logs at each transaction commit for ACID compliance.
- innodb_log_buffer_size = 16M-64M: default is 8M; increase for more transactions per second.
- innodb_log_file_size = 1G: default is 48M; increase for more transactions per second.
- innodb_flush_method = O_DIRECT: bypass the OS cache for better durability.
- innodb_thread_concurrency = 0: allow InnoDB to manage the thread concurrency level.
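The settings above can be collected into a `my.cnf` fragment. This is a hedged starting point, not a universal recommendation: the concrete values (e.g. a 32M log buffer) are illustrative picks from the ranges given, and IOPS-related settings in particular must be tuned to your storage.

```ini
[mysqld]
# Illustrative values from the tuning tips above; adjust to your hardware.
innodb_flush_log_at_trx_commit = 1         # flush at each commit (full ACID)
innodb_log_buffer_size         = 32M       # default 8M; raise for write-heavy loads
innodb_log_file_size           = 1G        # default 48M
innodb_flush_method            = O_DIRECT  # bypass the OS cache
innodb_thread_concurrency      = 0         # let InnoDB manage concurrency
```

After changing log file size on older MySQL/MariaDB versions, a clean shutdown and restart is required for the new redo log size to take effect.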
Working with the Moodle Database: The Basics - Severalnines
Managing the database behind Moodle is key to improving performance and achieving uptime for your users. In this training video we will talk about the Moodle database including topics like configuration, monitoring, and schema management as well as show you how ClusterControl can help with the management of your eLearning LMS systems.
SysAdmin Working from Home? Tips to Automate MySQL, MariaDB, Postgres & MongoDB - Severalnines
Are you a SysAdmin who is now responsible for your company's database operations? Then this is the webinar for you. Learn from a Senior DBA the basics you need to know to keep things up and running, and how automation can help.
(slides) Polyglot persistence: utilizing open source databases as a Swiss poc... - Severalnines
This document discusses polyglot persistence, which is using multiple specialized databases rather than a single general-purpose database. It provides examples of VidaXL's use of polyglot persistence, including MySQL, MariaDB, PostgreSQL, SOLR, Elasticsearch, MongoDB, Couchbase, and Prometheus. The benefits discussed are using the right database for each job and gaining flexibility as the company transitioned to microservices. Challenges included increased complexity, and solutions involved automation, tooling, and hiring database experts.
Webinar slides: How to Migrate from Oracle DB to MariaDB - Severalnines
This document provides an overview and agenda for a webinar on migrating from Oracle DB to MariaDB. The webinar will cover why organizations are moving to open source databases, the benefits of migrating to MariaDB from Oracle, how to plan and execute the migration process, and post-migration management topics like monitoring, backups, high availability, and scaling in MariaDB. The presentation will include discussions of data type mapping, enabling PL/SQL syntax in MariaDB, available migration tools, and testing approaches.
Webinar slides: How to Automate & Manage PostgreSQL with ClusterControl - Severalnines
Running PostgreSQL in production comes with the responsibility for a business critical environment; this includes high availability, disaster recovery, and performance. Ops staff worry whether databases are up and running, if backups are taken and tested for integrity, whether there are performance problems that might affect end user experience, if failover will work properly in case of server failure without breaking applications, and the list goes on.
ClusterControl can be used to operationalize your PostgreSQL footprint across your enterprise. It offers a standard way of deploying high-availability replication setups with auto-failover, integrated with load balancers offering a single endpoint to applications. It provides constant health and performance monitoring through rich dashboards, as well as backup management and point-in-time recovery.
See how much time and effort can be saved, as well as risks mitigated, with the help of a unified management platform over the more traditional, manual methods.
We’ve seen a 152% increase in ClusterControl installations by PostgreSQL users last year, so make sure you don’t miss out on the trend!
AGENDA
- Managing PostgreSQL “the old way”:
- Common challenges
- Important tasks to perform
- Tools that are available to help
- PostgreSQL automation and management with ClusterControl:
- Deployment
- Backup and recovery
- HA setups
- Failover
- Monitoring
- Live Demo
SPEAKER
Sebastian Insausti, Support Engineer at Severalnines, has loved technology since his childhood, when he did his first computer course (Windows 3.11); from that moment on, he knew what his profession would be. He has since built up experience with MySQL, PostgreSQL, HAProxy, WAF (ModSecurity), Linux (RedHat, CentOS, OL, Ubuntu server), Monitoring (Nagios), Networking and Virtualization (VMWare, Proxmox, Hyper-V, RHEV).
Prior to joining Severalnines, Sebastian worked as a consultant to state companies in security, database replication and high availability scenarios. He’s also a speaker and has given a few talks locally on InnoDB Cluster and MySQL Enterprise together with an Oracle team. Previous to that, he worked for a Mexican company as chief of sysadmin department as well as for a local ISP (Internet Service Provider), where he managed customers' servers and connectivity.
Webinar slides: How to Manage Replication Failover Processes for MySQL, Maria...Severalnines
Failover is the process of moving to a healthy standby component, during a failure or maintenance event, in order to preserve uptime. The quicker it can be done, the faster you can be back online. However, failover can be tricky for transactional database systems as we strive to preserve data integrity - especially in asynchronous or semi-synchronous topologies. There are risks associated, from diverging datasets to loss of data. Failing over due to incorrect reasoning, e.g., failed heartbeats in the case of network partitioning, can also cause significant harm.
This webinar replay gives a detailed overview of what failover processes may look like in MySQL, MariaDB and PostgreSQL replication setups. We’ve covered the dangers related to the failover process, and discuss the tradeoffs between failover speed and data integrity. We’ve found out about how to shield applications from database failures with the help of proxies. And we've finally had a look at how ClusterControl manages the failover process, and how it can be configured for both assisted and automated failover.
So if you’re looking at minimizing downtime and meet your SLAs through an automated or semi-automated approach, then this webinar replay is for you!
AGENDA
- An introduction to failover - what, when, how
- in MySQL / MariaDB
- in PostgreSQL
- To automate or not to automate
- Understanding the failover process
- Orchestrating failover across the whole HA stack
- Difficult problems
- Network partitioning
- Missed heartbeats
- Split brain
- From assisted to fully automated failover with ClusterControl
- Demo
SPEAKER
Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.
What if …
- Traditional, labour-intensive backup and archive practices for your MySQL, MariaDB, MongoDB and PostgreSQL databases were a thing of the past?
- You could have one backup management solution for all your business data?
- You could ensure integrity of all your backups?
- You could leverage the competitive pricing and almost limitless capacity of cloud-based backup while meeting cost, manageability, and compliance requirements from the business.
Welcome to our webinar on Backup Management with ClusterControl.
ClusterControl’s centralized backup management for open source databases provides you with hot backups of large datasets, point in time recovery in a couple of clicks, at-rest and in-transit data encryption, data integrity via automatic restore verification, cloud backups (AWS, Google and Azure) for Disaster Recovery, retention policies to ensure compliance, and automated alerts and reporting.
Whether you are looking at rebuilding your existing backup infrastructure, or updating it, this webinar is for you!
AGENDA
- Backup and recovery management of local or remote databases
- Logical or physical backups
- Full or Incremental backups
- Position or time-based Point in Time Recovery (for MySQL and PostgreSQL)
- Upload to the cloud (Amazon S3, Google Cloud Storage, Azure Storage)
- Encryption of backup data
- Compression of backup data
- One centralized backup system for your open source databases (Demo)
- Schedule, manage and operate backups
- Define backup policies, retention, history
- Validation - Automatic restore verification
- Backup reporting
SPEAKER
Bartlomiej Oles, Senior Support Engineer at Severalnines, is a MySQL and Oracle DBA, with over 15 years experience in managing highly available production systems at IBM, Nordea Bank, Acxiom, Lufthansa, and other Fortune 500 companies. In the past five years, his focus has been on building and applying automation tools to manage multi-datacenter database environments.
Disaster Recovery Planning for MySQL & MariaDBSeveralnines
Bart Oles - Severalnines AB
Organizations need an appropriate disaster recovery plan to mitigate the impact of downtime. But how much should a business invest? Designing a highly available system comes at a cost, and not all businesses and indeed not all applications need five 9's availability.
We will explain fundamental disaster recovery concepts and walk you through the relevant options from the MySQL & MariaDB ecosystem to meet different tiers of disaster recovery requirements, and demonstrate how to automate an appropriate disaster recovery plan.
Krzysztof Ksiazek - Severalnines AB
So, you are a developer or sysadmin and showed some abilities in dealing with databases issues. And now, you have been elected to the role of DBA. And as you start managing the databases, you wonder…
* How do I tune them to make best use of the hardware?
* How do I optimize the Operating System?
* How do I best configure MySQL or MariaDB for a specific database workload?
If you're asking yourself the following questions when it comes to optimally running your MySQL or MariaDB databases, then this talk is for you!
We will discuss some of the settings that are most often tweaked and which can bring you significant improvement in the performance of your MySQL or MariaDB database. We will also cover some of the variables which are frequently modified even though they should not.
Performance tuning is not easy, especially if you're not an experienced DBA, but you can go a surprisingly long way with a few basic guidelines.
Performance Tuning Cheat Sheet for MongoDBSeveralnines
Bart Oles - Severalnines AB
Database performance affects organizational performance, and we tend to look for quick fixes when under stress. But how can we better understand our database workload and factors that may cause harm to it? What are the limitations in MongoDB that could potentially impact cluster performance?
In this talk, we will show you how to identify the factors that limit database performance. We will start with the free MongoDB Cloud monitoring tools. Then we will move on to log files and queries. To be able to achieve optimal use of hardware resources, we will take a look into kernel optimization and other crucial OS settings. Finally, we will look into how to examine performance of MongoDB replication.
Advanced MySql Data-at-Rest Encryption in Percona ServerSeveralnines
Iwo Panowicz - Percona & Bart Oles - Severalnines AB
The purpose of the talk is to present data-at-rest encryption implementation in Percona Server for MySQL.
Differences between Oracle's MySQL and MariaDB implementation.
- How it is implemented?
- What is encrypted:
- Tablespaces?
- General tablespace?
- Double write buffer/parallel double write buffer?
- Temporary tablespaces? (KEY BLOCKS)
- Binlogs?
- Slow/general/error logs?
- MyISAM? MyRocks? X?
- Performance overhead.
- Backups?
- Transportable tablespaces. Transfer key.
- Plugins
- Keyrings in general
- Key rotation?
- General-Purpose Keyring Key-Management Functions
- Keyring_file
- Is useful? How to make it profitable?
- Keyring Vault
- How does it work?
- How to make a transition from keyring_file
Polyglot Persistence Utilizing Open Source Databases as a Swiss Pocket KnifeSeveralnines
Art Van Scheppingen - vidaXL & Bart Oles - Severalnines AB
Over the past few years, VidaXL has become a European market leader in the online retail of slow moving consumer goods. When a company achieved over 50% year over year growth for the past 9 years, there is hardly enough time to overhaul existing systems. This means existing systems will be stretched to the maximum of their capabilities, and often additional performance will be gained by utilizing a large variety of datastores.
Polyglot persistence reigns in rapidly growing environments and the traditional one-size-fits-all strategy of monoglots is over.
VidaXL has a broad landscape of datastores, ranging from traditional SQL data stores, like MySQL or PostgreSQL alongside more recent load balancing technologies such as ProxySQL, to document stores like MongoDB and search engines such as SOLR and Elasticsearch.
Webinar slides: Free Monitoring (on Steroids) for MySQL, MariaDB, PostgreSQL ...Severalnines
Traditional server monitoring tools are not built for modern distributed database architectures. Let’s face it, most production databases today run in some kind of high availability setup - from simpler master-slave replication to multi-master clusters fronted by redundant load balancers. Operations teams deal with dozens, often hundreds of services that make up the database environment.
This is why we built ClusterControl - to address modern, highly distributed database setups based on replication or clustering. We wanted something that could provide a systems view of all the components of a distributed cluster, including load balancers.
Watch this replay of a webinar on free database monitoring using ClusterControl Community Edition. We show you how to monitor all your MySQL, MariaDB, PostgreSQL and MongoDB systems from a single point of control - whether they are deployed as Galera Clusters, sharded clusters or replication setups across on-prem and cloud data centers. We also see how to use Advisors in order to improve performance.
AGENDA
- Requirements for monitoring distributed database systems
- Cloud-based vs On-prem monitoring solutions
- Agent-based vs Agentless monitoring
- Deepdive into ClusterControl Community Edition
- Architecture
- Metrics Collection
- Trending
- Dashboards
- Queries
- Performance Advisors
- Other features available to Community users
SPEAKER
Bartlomiej Oles is a MySQL and Oracle DBA, with over 15 years experience in managing highly available production systems at IBM, Nordea Bank, Acxiom, Lufthansa, and other Fortune 500 companies. In the past five years, his focus has been on building and applying automation tools to manage multi-datacenter database environments.
Webinar slides: An Introduction to Performance Monitoring for PostgreSQLSeveralnines
To operate PostgreSQL efficiently, you need to have insight into database performance and make sure it is at optimal levels.
With that in mind, we dive into monitoring PostgreSQL for performance in this webinar replay.
PostgreSQL offers many metrics through various status overviews and commands, but which ones really matter to you? How do you trend and alert on them? What is the meaning behind the metrics? And what are some of the most common causes for performance problems in production?
We discuss this and more in ordinary, plain DBA language. We also have a look at some of the tools available for PostgreSQL monitoring and trending; and we’ll show you how to leverage ClusterControl’s PostgreSQL metrics, dashboards, custom alerting and other features to track and optimize the performance of your system.
AGENDA
- PostgreSQL architecture overview
- Performance problems in production
- Common causes
- Key PostgreSQL metrics and their meaning
- Tuning for performance
- Performance monitoring tools
- Impact of monitoring on performance
- How to use ClusterControl to identify performance issues
- Demo
SPEAKER
Sebastian Insausti, Support Engineer at Severalnines, has loved technology since his childhood, when he did his first computer course (Windows 3.11). And from that moment he was decided on what his profession would be. He has since built up experience with MySQL, PostgreSQL, HAProxy, WAF (ModSecurity), Linux (RedHat, CentOS, OL, Ubuntu server), Monitoring (Nagios), Networking and Virtualization (VMWare, Proxmox, Hyper-V, RHEV).
Prior to joining Severalnines, Sebastian worked as a consultant to state companies in security, database replication and high availability scenarios. He’s also a speaker and has given a few talks locally on InnoDB Cluster and MySQL Enterprise together with an Oracle team. Previous to that, he worked for a Mexican company as chief of sysadmin department as well as for a local ISP (Internet Service Provider), where he managed customers' servers and connectivity.
This webinar builds upon a related blog post by Sebastian: https://severalnines.com/blog/performance-cheat-sheet-postgresql.
How RPA Help in the Transportation and Logistics Industry.pptxSynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
RPA In Healthcare Benefits, Use Case, Trend And Challenges 2024.pptxSynapseIndia
Your comprehensive guide to RPA in healthcare for 2024. Explore the benefits, use cases, and emerging trends of robotic process automation. Understand the challenges and prepare for the future of healthcare automation
An invited talk given by Mark Billinghurst on Research Directions for Cross Reality Interfaces. This was given on July 2nd 2024 as part of the 2024 Summer School on Cross Reality in Hagenberg, Austria (July 1st - 7th)
UiPath Community Day Kraków: Devs4Devs ConferenceUiPathCommunity
We are honored to launch and host this event for our UiPath Polish Community, with the help of our partners - Proservartner!
We certainly hope we have managed to spike your interest in the subjects to be presented and the incredible networking opportunities at hand, too!
Check out our proposed agenda below 👇👇
08:30 ☕ Welcome coffee (30')
09:00 Opening note/ Intro to UiPath Community (10')
Cristina Vidu, Global Manager, Marketing Community @UiPath
Dawid Kot, Digital Transformation Lead @Proservartner
09:10 Cloud migration - Proservartner & DOVISTA case study (30')
Marcin Drozdowski, Automation CoE Manager @DOVISTA
Pawel Kamiński, RPA developer @DOVISTA
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
09:40 From bottlenecks to breakthroughs: Citizen Development in action (25')
Pawel Poplawski, Director, Improvement and Automation @McCormick & Company
Michał Cieślak, Senior Manager, Automation Programs @McCormick & Company
10:05 Next-level bots: API integration in UiPath Studio (30')
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
10:35 ☕ Coffee Break (15')
10:50 Document Understanding with my RPA Companion (45')
Ewa Gruszka, Enterprise Sales Specialist, AI & ML @UiPath
11:35 Power up your Robots: GenAI and GPT in REFramework (45')
Krzysztof Karaszewski, Global RPA Product Manager
12:20 🍕 Lunch Break (1hr)
13:20 From Concept to Quality: UiPath Test Suite for AI-powered Knowledge Bots (30')
Kamil Miśko, UiPath MVP, Senior RPA Developer @Zurich Insurance
13:50 Communications Mining - focus on AI capabilities (30')
Thomasz Wierzbicki, Business Analyst @Office Samurai
14:20 Polish MVP panel: Insights on MVP award achievements and career profiling
INDIAN AIR FORCE FIGHTER PLANES LIST.pdfjackson110191
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
Advanced Techniques for Cyber Security Analysis and Anomaly DetectionBert Blevins
Cybersecurity is a major concern in today's connected digital world. Threats to organizations are constantly evolving and have the potential to compromise sensitive information, disrupt operations, and lead to significant financial losses. Traditional cybersecurity techniques often fall short against modern attackers. Therefore, advanced techniques for cyber security analysis and anomaly detection are essential for protecting digital assets. This blog explores these cutting-edge methods, providing a comprehensive overview of their application and importance.
The Rise of Supernetwork Data Intensive ComputingLarry Smarr
Invited Remote Lecture to SC21
The International Conference for High Performance Computing, Networking, Storage, and Analysis
St. Louis, Missouri
November 18, 2021
Details of description part II: Describing images in practice - Tech Forum 2024BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and transcript: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
Scaling Connections in PostgreSQL Postgres Bangalore(PGBLR) Meetup-2 - MydbopsMydbops
This presentation, delivered at the Postgres Bangalore (PGBLR) Meetup-2 on June 29th, 2024, dives deep into connection pooling for PostgreSQL databases. Aakash M, a PostgreSQL Tech Lead at Mydbops, explores the challenges of managing numerous connections and explains how connection pooling optimizes performance and resource utilization.
Key Takeaways:
* Understand why connection pooling is essential for high-traffic applications
* Explore various connection poolers available for PostgreSQL, including pgbouncer
* Learn the configuration options and functionalities of pgbouncer
* Discover best practices for monitoring and troubleshooting connection pooling setups
* Gain insights into real-world use cases and considerations for production environments
This presentation is ideal for:
* Database administrators (DBAs)
* Developers working with PostgreSQL
* DevOps engineers
* Anyone interested in optimizing PostgreSQL performance
Contact info@mydbops.com for PostgreSQL Managed, Consulting and Remote DBA Services
Kief Morris rethinks the infrastructure code delivery lifecycle, advocating for a shift towards composable infrastructure systems. We should shift to designing around deployable components rather than code modules, use more useful levels of abstraction, and drive design and deployment from applications rather than bottom-up, monolithic architecture and delivery.
Coordinate Systems in FME 101 - Webinar SlidesSafe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datams and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
Sustainability requires ingenuity and stewardship. Did you know Pigging Solutions pigging systems help you achieve your sustainable manufacturing goals AND provide rapid return on investment.
How? Our systems recover over 99% of product in transfer piping. Recovering trapped product from transfer lines that would otherwise become flush-waste, means you can increase batch yields and eliminate flush waste. From raw materials to finished product, if you can pump it, we can pig it.
Support en anglais diffusé lors de l'événement 100% IA organisé dans les locaux parisiens d'Iguane Solutions, le mardi 2 juillet 2024 :
- Présentation de notre plateforme IA plug and play : ses fonctionnalités avancées, telles que son interface utilisateur intuitive, son copilot puissant et des outils de monitoring performants.
- REX client : Cyril Janssens, CTO d’ easybourse, partage son expérience d’utilisation de notre plateforme IA plug & play.
1. MySQL Cluster and NoSQL
December 2012
Johan Andersson
Severalnines AB
johan@severalnines.com
Cell +46 73 073 60 99
2. Copyright 2011 Severalnines AB
Topics
RDBMS/NoSQL
API Overview
Memcached Installation
Configuration
Performance Tuning
Troubleshooting
Use Cases
3. RDBMS vs NoSQL
RDBMS:
- Structure and relations are important
- Relational schema
- Complex queries, JOINs
- ACID
- Scalability usually not built-in
- Durability of data on one node
NoSQL:
- Structure and relations not as important
- Focus on storing/retrieving
- Simple access, e.g. key-value: get(), set()
- Eventual consistency
- Scalability built-in
- Durability of data guaranteed by having data on multiple nodes
9. Introducing MySQL Cluster
Shared-nothing database:
- Up to 255 nodes in a cluster
- Automatic sharding
- In-memory or hybrid disk data storage
- Multiple APIs
Availability:
- Strong consistency with synchronous replication
- Automatic fail-over within a cluster
- Eventual consistency between clusters
11. #1 – Horizontal scalability
Data Nodes:
- Store the data
- Memory or disk tables
- Can be added online
(diagram: data nodes holding Shard 1, Shard 2, Shard 3)
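The automatic sharding mentioned above works by hashing a row's partition key (by default the primary key) to pick the node group that stores it. Below is a minimal illustrative sketch of that idea; the real cluster uses its own internal MD5-based distribution, so the modulo scheme and names here are simplifications, not the actual NDB algorithm.

```python
# Illustrative sketch of hash-based sharding: every row lands
# deterministically on one node group based on its primary key.
import hashlib

NUM_NODE_GROUPS = 3  # e.g. Shard 1, Shard 2, Shard 3 in the diagram

def node_group_for(primary_key: str) -> int:
    digest = hashlib.md5(primary_key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_NODE_GROUPS

# Each key maps to exactly one shard, with no central lookup needed:
rows = ["user:1", "user:2", "city:ldn", "city:nyc"]
placement = {key: node_group_for(key) for key in rows}
```

Because the placement is a pure function of the key, any API node can compute where a row lives and route the request directly to the right data node.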
19. #3 – Schema
SQL/Relational:
- Add column
- Add/remove index
Memcached:
- Key-value: prefix + key -> value, e.g. <city:ldn -> 1>
Configuration/mapping table 'AreaCode':
- Prefix: city:, Table: AreaCode, Key-col: city, Val-col: code, Policy: cluster
- Data row: city = ldn, code = 1
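The mapping above can be sketched in miniature: the memcached key "city:ldn" is split into prefix "city:" and key "ldn", the prefix is looked up in the configuration table, and the value column is read from the mapped table. Plain Python dicts stand in for the real NDB tables here; the names (AreaCode, city, code) come from the slide.

```python
# Miniature simulation of prefix-based key mapping, as configured above.
containers = {
    "city:": {"table": "AreaCode", "key_col": "city", "val_col": "code"},
}
tables = {
    "AreaCode": [{"city": "ldn", "code": 1}],
}

def memcache_get(memc_key: str):
    # Split "city:ldn" into prefix "city:" and key "ldn"
    prefix, _, key = memc_key.partition(":")
    conf = containers[prefix + ":"]
    # Look the key up in the mapped table's key column
    for row in tables[conf["table"]]:
        if row[conf["key_col"]] == key:
            return row[conf["val_col"]]
    return None

# memcache_get("city:ldn") returns 1, matching <city:ldn -> 1> above
```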
20. #4 – Data Consistency
- Strong consistency within a cluster
- Eventual consistency across clusters
(diagram: EU Cluster replicating with US Cluster)
21. #5 – Data Storage
- Memory tables, no disk checkpoints
- Memory tables, with disk checkpoints
- Disk data tables, index in memory
Writes are not IO bound
Transaction durability = data written on at least 2 nodes
23. NoSQL: Memcached (new in 7.2)
- Native key-value access (converts the memcached protocol to ndbapi calls)
- Bypasses SQL
- Schema and schemaless data storage
(diagram: APP -> memcached server -> NDB)
By default:
- Every KV is written to the same table
- Each KV is stored in a single row
Or configure it to use existing tables
24. NoSQL: REST
- Bypasses SQL
- Native HTTP/REST access
- Loads as an Apache module (mod_ndb)
(diagram: Apache -> NDB)
26. NoSQL: NDBAPI (sync)
- C++ API supporting GET/SET/RANGE_SCAN/SCAN
- Bypasses SQL
- Ultra low latency
- Hand-optimize the execution path
- Lots of freedom (also to make mistakes)
(diagram: NDBAPI client -> NDB)
27. NoSQL: NDBAPI (sync)
- DEFINE AND STORAGE BUFFERS (NDB RECORD)
- START TRANSACTION
- CREATE OPERATION (on table)
  - DEFINE OPERATION (insert/update/read/delete) – PK operation
  - GET/SET PK AND VALUES
  - <repeat these for batching or reads from many tables>
- EXECUTE (COMMIT / NO COMMIT)
- CHECK STORAGE BUFFERS
28. NoSQL: NDBAPI (async)
- Bypasses SQL
- Similar to node.js, with callbacks registered and executed on completion
- Ultra fast performance for GET/SET on PK
- Hand-optimize the execution path
- Lots of freedom (also to make mistakes)
- Scales with the number of threads and the number of apps
(diagram: NDBAPI client -> NDB)
29. NoSQL: NDBAPI (async)
- DEFINE AND STORAGE BUFFERS (NDB RECORD)
- PREPARE TRANSACTION
  - ASSIGN A CALLBACK
  - CREATE OPERATION (on table)
  - DEFINE OPERATION (insert/update/read/delete) – PK operation
  - GET/SET PK AND VALUES
  - <repeat these for batching or reads from many tables>
- <repeat and PREPARE up to 1024 TXs>
- SEND to NDB
- POLL for CALLBACKs
  - Executes callbacks; PREPARE a new TX if you want
  - CHECK STORAGE BUFFERS
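The prepare/send/poll control flow above can be sketched schematically. This is not the real C++ NDBAPI, just Python callables standing in for operations and callbacks to show the shape of the batching loop:

```python
# Schematic of the async prepare/send/poll pattern: prepare many
# transactions, ship them as one batch, then run callbacks on completion.
prepared = []   # transactions prepared but not yet sent

def prepare(operation, callback):
    # "PREPARE TRANSACTION" + "ASSIGN A CALLBACK"
    prepared.append((operation, callback))

def send_and_poll():
    # "SEND to NDB": ship the whole batch, then "POLL for CALLBACKs"
    in_flight, prepared[:] = prepared[:], []
    for operation, callback in in_flight:
        result = operation()   # stands in for the data node's work
        callback(result)       # executed on completion

results = []
for i in range(3):
    prepare(lambda i=i: i * i, results.append)
send_and_poll()
```

The key point is that no caller blocks between prepare and poll, which is what lets a single thread keep many transactions in flight.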
30. NoSQL: NDBAPI (async)
Using the async NDBAPI, Oracle managed to get 1.05 billion queries per minute:
- flexAsync -a 25 -p 128 -t <cores> -l <iterations>
- 8 data nodes (48GB of RAM)
- 10 API nodes
- Intel X5670 (2 CPUs x 6 cores)
- Infiniband (IPoIB)
31. Introduction
- Memcached access to NDB is included in MySQL Cluster 7.2
- Provides a memcached interface to NDB data
- Uses get/set to read and write data
- Avoids SQL altogether (except for creating tables)
- There are several "run-time" models that can be configured; these mainly affect the placement of data
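Since the NDB engine speaks the standard memcached text protocol, the get/set traffic on the wire looks the same as with stock memcached. As a small illustration, here is how those request frames are formatted; this only builds the byte strings and does not talk to a live server:

```python
# Formatting memcached text-protocol frames, as a client library would
# before writing them to the socket.
def set_command(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    # "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n"
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

def get_command(key: str) -> bytes:
    return f"get {key}\r\n".encode()

frame = set_command("user:1", b"alice")
# frame == b"set user:1 0 0 5\r\nalice\r\n"
```

Any existing memcached client can therefore be pointed at the NDB-backed server unchanged.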
32. Introduction
- Memcached uses the NDBAPI (C++ direct API) to access data in NDB
- There are two flavors of the NDBAPI: synchronous and asynchronous
- Memcached uses the asynchronous NDBAPI
34. Introduction
Synchronous flow:
- Start transaction
- Create op, set op type, bind keys/values (logic)
- Execute: send request to NDB, check result (ndb)
Asynchronous flow:
- Start transaction, associate callback
- Create op, set op type, bind keys/values (logic)
- Prepare transaction
- Send: send request to NDB (ndb)
- Poll, check callbacks
35. Introduction
Asynchronous invocation gives:
- Higher degree of parallelism, up to 1024 transactions in flight from each NDB object
- Fewer threads needed to drive load
- Both thread and transaction parallelism in one shot!
- Harder programming model
Synchronous invocation gives:
- Easy programming model
- One thread does one transaction at a time, less parallelism
- Many threads needed to drive high load
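The parallelism difference above can be made concrete with a back-of-the-envelope model: a synchronous caller pays one network round trip per transaction, while an async caller keeps a batch in flight (up to 1024 per NDB object, per the slide) and pays roughly one round trip per batch. The 1 ms round-trip time below is an assumed figure for illustration only:

```python
# Rough latency model: sync pays one round trip per transaction,
# async pays one round trip per batch of in-flight transactions.
ROUND_TRIP_MS = 1.0  # assumed network round-trip time

def sync_total_ms(num_transactions: int) -> float:
    return num_transactions * ROUND_TRIP_MS

def async_total_ms(num_transactions: int, in_flight: int = 1024) -> float:
    batches = -(-num_transactions // in_flight)  # ceiling division
    return batches * ROUND_TRIP_MS

# For 10,000 transactions: 10,000 ms synchronously vs ~10 ms asynchronously
# from a single thread - which is why sync needs many threads to drive load.
```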
36. Introduction
Memcached supported operations:
- GET / MULTI-GET
- SET
- ADD
- REPLACE
- CAS
- INCR
- DECR
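The semantics of ADD, REPLACE, CAS and INCR from the list above can be sketched on a single dict. This is a hedged illustration of the memcached-level behavior, not the NDB engine's implementation; in the NDB engine, CAS is backed by the container's cas_column and INCR/DECR by its increment_column (covered later in the containers table):

```python
# Toy model of memcached write operations and their success conditions.
store = {}    # key -> value
cas_ids = {}  # key -> monotonically increasing CAS id
_next_cas = 0

def _bump(key):
    global _next_cas
    _next_cas += 1
    cas_ids[key] = _next_cas

def add(key, value):          # only succeeds if the key does NOT exist
    if key in store:
        return False
    store[key] = value
    _bump(key)
    return True

def replace(key, value):      # only succeeds if the key already exists
    if key not in store:
        return False
    store[key] = value
    _bump(key)
    return True

def cas(key, value, cas_id):  # succeeds only if nobody wrote in between
    if cas_ids.get(key) != cas_id:
        return False
    store[key] = value
    _bump(key)
    return True

def incr(key, delta=1):       # atomic counter increment
    store[key] = int(store.get(key, 0)) + delta
    _bump(key)
    return store[key]
```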
37. Installation
- The memcached server is included in the MySQL Cluster distributions: <basedir>/bin/memcached
- Memcached also requires a plugin that is also included in the distribution: <basedir>/lib/ndb_engine.so or /usr/lib64/ndb_engine.so
- It requires a connect string to be able to join the NDB Cluster, plus "normal" memcached options (port, bind-address, etc.)
38. Installation
Starting memcached can be done as follows:

memcached -p11211 \
  -E <basedir>/lib/ndb_engine.so \
  -u nobody \
  -d \
  -l 127.0.0.1 \
  -e connectstring=127.0.0.1:1186

Options:
- -l -- bind address
- -u -- user
- -d -- run as daemon
- -e -- connect string and more NDB options
- -E -- specifies a memcached plugin
39. Installation
Before we can start memcached we must sanity check the NDB Cluster.
memcached will by default make two connections to the data nodes.
This is the same as --ndb-cluster-connection-pool=2.
(diagram: memcached server connected to data nodes holding partitions P0/P1 and replicas S1/S0)
41. Installation
In the previous example you must add at least two [mysqld] "slots".
Change config.ini and add:
[mysqld]
[mysqld]
Then perform a rolling restart:
- Stop and start the management servers, one at a time
- Stop and start one data node at a time
- Stop and start the MySQL servers, one at a time
42. Installation
Now we can connect!

ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3   @10.176.129.89   (mysql-5.5.27 ndb-7.2.8, Nodegroup: 0, Master)
id=4   @10.178.0.69     (mysql-5.5.27 ndb-7.2.8, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s)
id=1   @10.176.131.164  (mysql-5.5.27 ndb-7.2.8)
id=2   @10.177.67.255   (mysql-5.5.27 ndb-7.2.8)
[mysqld(API)]   27 node(s)
id=5   @10.176.131.164  (mysql-5.5.27 ndb-7.2.8)
id=6   @10.176.131.164  (mysql-5.5.27 ndb-7.2.8)
id=7   @10.176.131.165  (mysql-5.5.27 ndb-7.2.8)
id=8   @10.176.131.165  (mysql-5.5.27 ndb-7.2.8)
id=9   (not connected, accepting connect from any host)
id=10  (not connected, accepting connect from any host)
43. Installation
But wait! We need to install the ndb_memcached schema!
- Only needed the first time
- Tables are stored in NDB
- Defined in the file: <basedir>/share/memcache-api/ndb_memcache_metadata.sql

mysql -uroot -p < <basedir>/share/memcache-api/ndb_memcache_metadata.sql
44. Exercise 1
- Install the schema: /usr/local/mysql/share/memcache-api/ndb_memcache_metadata.sql
- Start memcached:
  - The management server is listening on 127.0.0.1
  - Use port 11211
  - Use bind address 127.0.0.1
  - Don't use the daemon option
  - Basedir = /usr/local/mysql/
- Verify using the management client: ndb_mgm -e "show"
45. Troubleshooting
Common errors:
- bind(): Cannot assign requested address
  - Wrong bind address
- Hanging on "Contacting primary management server (..) ..."
  - Wrong ndb-connectstring
Success:
- done [0.759 sec]
46. Configuration
One of the key benefits of memcached is that it can be used in multiple ways:
- Store data in NDB only
- Store data in NDB and cache in memcached
- Cache only in memcached
An existing data model can also be presented to memcached:
- This requires a bit of setup to create mappings for the tables being exposed to memcached
Let's do it now!
47. Configuration
Consider the following table. Goals:
- Expose it to memcached
- Read/write to it
- Make two configurations – NDB only, and NDB + caching

create table users(
  uid integer auto_increment primary key,
  name varchar(255),
  email varchar(255),
  view_cnt bigint unsigned default 0,
  created bigint unsigned default 0,
  json_data varbinary(12000)
) engine = ndb;
48. Concepts
Memcached uses two important concepts:
- CONTAINERS (table ndbmemcache.containers)
  - Specify which tables, which columns in those tables, keys, etc.
- KEY_PREFIXES (table ndbmemcache.key_prefixes)
  - Specify key bindings and roles (e.g. whether data should be in NDB only)
49. Copyright 2011 Severalnines AB
Containers Table
DESC containers;
name - container name (PK)
db_schema - database where db_table is stored
db_table - name of the database table
key_columns - the columns mapping to the memcached key
value_columns - the columns that map to the memcached value
flags - not implemented
increment_column - for INCR / DECR - BIGINT UNSIGNED
cas_column - CAS , must be BIGINT UNSIGNED
expire_time_column - not implemented
large_values_table - table used to store values too large for value_columns
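Putting these columns together, a container row for the users table from earlier might look like the following (a sketch; the name 'users_container' and the column choices are assumptions, picked to match the key_prefixes example on the next slides):

```sql
-- Sketch: expose test.users as a container.
-- 'users_container' is the name the key_prefix will reference.
INSERT INTO ndbmemcache.containers
  (name, db_schema, db_table, key_columns, value_columns)
VALUES
  ('users_container', 'test', 'users', 'uid',
   'name,email,created,json_data');
```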
51. Copyright 2011 Severalnines AB
Key_prefixes Table
DESC key_prefixes;
server_role_id - id referencing memcache_server_roles table
key_prefix - memcache search key prefix (e.g. 'myid:')
cluster_id - id referencing ndb_clusters table
policy - referencing cache_policies table
container - name referencing containers.name
We will now explore the referenced tables and see what they contain.
52. Copyright 2011 Severalnines AB
Key_prefixes for Users
server_role_id = 1 /* db-only; must match how memcached is started */
key_prefix = 'user:'
cluster_id = 0
policy = 'ndb-only'
container = 'users_container'
insert into key_prefixes(server_role_id, key_prefix, cluster_id,
policy, container) values (1, 'user:', 0, 'ndb-only',
'users_container');
53. Copyright 2011 Severalnines AB
Exercise 2
Create the 'users' table in database 'test'
Create the Container (use the ndbmemcache database)
Create the Key_prefix
Stop memcached, some options:
killall -15 memcached
ctrl-c
killall -9 memcached
Start memcached
/usr/local/mysql/bin/memcached
-p 11211
-E /usr/local/mysql/lib/ndb_engine.so -u nobody
-e "connectstring='127.0.0.1';role=db-only"
What happens?
54. Copyright 2011 Severalnines AB
Troubleshooting
Common errors :
Specified a column that does not exist:
'Invalid column "test.users.view_cnt"' followed by a segfault
The same column has been specified twice in the Container:
createRecord() failure: Duplicate column specification in
NdbDictionary::RecordSpecification
Mismatch between container.name and key_prefixes.container:
"users_containerxx" NOT FOUND in database.
Fixing the problem:
DELETE FROM key_prefixes …;
DELETE FROM containers … ;
55. Copyright 2011 Severalnines AB
Exercise 3
Insert a record into the users table:
mysql -uroot -ppassword
insert into users(name,email, view_cnt,created, json_data) values
('johan', 'johan@severalnines.com', 0, unix_timestamp(now()),
"{messages: ['msg1', 'msg2']}");
telnet localhost 11211
GET user:1
INCR user:1 1
GET user:1
Do you get what you expect?
58. Copyright 2011 Severalnines AB
Exercise 4
Create the Container and Key_prefix for the view_cnt.
telnet localhost 11211
GET user:1
INCR user_view_cnt:1 1
INCR user_view_cnt:1 1000
DECR user_view_cnt:1 100
GET user:1
Do you get what you expect?
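One possible solution sketch for this exercise: a dedicated container exposing only the counter column, plus a key_prefix binding it (the name 'users_cnt_container' and the server_role_id are assumptions and must match how your memcached was started):

```sql
-- Container exposing only view_cnt so INCR/DECR can operate on it
INSERT INTO ndbmemcache.containers
  (name, db_schema, db_table, key_columns, increment_column)
VALUES
  ('users_cnt_container', 'test', 'users', 'uid', 'view_cnt');

-- Bind the prefix user_view_cnt: to that container, NDB only
INSERT INTO ndbmemcache.key_prefixes
  (server_role_id, key_prefix, cluster_id, policy, container)
VALUES
  (1, 'user_view_cnt:', 0, 'ndb-only', 'users_cnt_container');
```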
59. Copyright 2011 Severalnines AB
Recap
One Container must be set up for each operation you
want to do:
Write/Read whole record
INCR/DECR
CAS
Etc.
One Key_prefix must be set up for each Container.
60. Copyright 2011 Severalnines AB
Accessing the Data
There are many client interfaces to memcached:
libmemcached (c/c++)
PECL/memcached (php)
PHP/libmemcached (php)
Spymemcached (java)
Python-memcached (python)
Cache::Memcached::Fast (perl)
Telnet
61. Copyright 2011 Severalnines AB
TELNET
Telnet can be used to access data stored in memcached:
telnet localhost 11211
get user:1
62. Copyright 2011 Severalnines AB
Caching Policies
Read-only/read-mostly data can be cached in the
Memcached server
[Diagram: CLIENT reading via the Memcached cache from partitions P0/P1 with replicas S1/S0]
63. Copyright 2011 Severalnines AB
Caching Policies –
Setup
A new Key_prefix must be created:
server_role_id = 3 /* caching; must match how
memcached is started */
key_prefix = 'user_cache:'
cluster_id = 0
policy = 'caching'
container = 'users_container'
insert into key_prefixes(server_role_id, key_prefix,
cluster_id, policy, container) values (3,
'user_cache:', 0, 'caching', 'users_container');
64. Copyright 2011 Severalnines AB
Caching Policies
GET
Read data from the cache if it exists there
Otherwise read data from NDB and populate the cache
STORE
Write data to the cache AND to NDB
Overwrites existing data in the cache
65. Copyright 2011 Severalnines AB
Performance Tuning
Tunables are few
NDB cluster connections can be set 0-4 (0 means it will "figure it
out")
Send timeout: 1-10 ms, default 1 ms
Force send: on or off (1 or 0), default off
Set with scheduler options:
memcached -e "…;S:c1,t1,f1"
This would set:
Ndb_cluster_connections=1
Send timeout=1 (ms)
Force send = ON (1)
66. Copyright 2011 Severalnines AB
Shoot out
Host A and B: data node (ndbmtd)
Host C: MySQL Server or Memcached
Host C: Application
Users table with 10000 records
Get User based on UID. Queries:
SELECT name,email,created,view_cnt,json_data FROM
users WHERE uid=<random int 1-10000>
GET user:<random int 1-10000>
67. Copyright 2011 Severalnines AB
Shoot out
Access Method        4 threads   8 threads   16 threads
                     4 NDB       8 NDB       8 NDB
SQL (Python)**       1616        1376(??)    -
SQL (C)              3808        5712        9312
MEMCACHE (Python)    3076        5516*       6944*
MEMCACHE (C++)       3300        7096*       14632*
NDBAPI (C++, sync)   5500        10425       15500
*) Max 4 ndb_cluster_connections is possible from MEMCACHED
**) Connector/Python was used
Averages measured over three runs.
For all C/C++ tests bencher was used to drive load.
Threading in Python doesn’t seem to be great.
68. Copyright 2011 Severalnines AB
Recommendation
Tuning the Memcached scheduler options makes a difference:
Scheduler: starting for 1 cluster; c4,f0,g1,t1
10500 reads/sec
Scheduler: starting for 1 cluster; c4,f1,g1,t1
14632 reads/sec
Scheduler: starting for 1 cluster; c2,f1,g1,t1
11000 reads/sec
Set:
Scheduler option: f1 (force send = on)
Scheduler option: c4 (4 NDB cluster connections)
Memcached option: -t <no workers>, set depending on the number of
clients you need
If you have many workers (>= 128), try force send = off