A Free New World: Atlas Free Tier and How It Was Born Speaker: Louisa Berger, Senior Software Engineer Speaker: Vincent Do, Fullstack Engineer, MongoDB Level: 200 (Intermediate) Track: How We Build MongoDB Last year, MongoDB released Atlas – a new Database as a Service product that handles running, monitoring, and maintaining your MongoDB deployment in the cloud. This winter, we added a new Free Tier option to the product, which allows users to try out Atlas with their own real data for free. Lead Automation engineer Louisa Berger and Atlas engineer Vincent Do will talk about how it works behind the scenes, and why you might want to try out Atlas. This talk is intended for developers, and will take you through the technical details of the architecture, and show you the techniques and challenges in building a multi-tenant MongoDB. What You Will Learn: - Insights on how/why you should use the Atlas free tier - How the Atlas free tier was designed and implemented - Best practices for building a multi-tenant MongoDB application
Speaker: Jay Gordon, Developer Advocate, MongoDB Level: 100 (Beginner) Track: Jumpstart MongoDB has grown into one of the world's most popular databases and continues to expand its reach to developers. In this talk we will discuss MongoDB foundations that attendees can use to begin their journey in creating new apps. By the end of the talk, members attending should feel prepared for the rest of their time at MongoDB World with essential information on how MongoDB works. There is no need to have previous experience with MongoDB to attend this talk, however a basic understanding of database systems is recommended. What You Will Learn: - Basic understanding of how MongoDB is similar to, yet different from, relational database systems. - How MongoDB can be installed to begin working with it immediately. - Understand the various models for hosting MongoDB, from bare metal to the cloud.
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long term, archival data in cost-effective storage like S3, GCP, and Azure Blobs. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business. This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
Speaker: Tom Spitzer, Vice President, Engineering, EC Wise, Inc. Session Type: 40 minute main track session Level: 200 (Intermediate) Track: Security MongoDB Community Server provides a wide range of capabilities for securing your MongoDB installation. In this session, we will focus on access control features, including authentication and authorization mechanisms, that enable you to enforce a least privilege model on user accounts. We will also discuss strategies for enabling and maintaining service and application accounts. Next we will present the encryption capabilities that are available in the community edition and discuss their benefits and possible shortcomings. Finally, we will talk about application level protections your developers can implement to keep risky code from getting to your MongoDB instance. What You Will Learn: - The workings of the MongoDB User Management Interface, the Authentication Database, basic Authentication mechanisms (SCRAM-SHA-1 and certificates), Roles, and Role Based Access controls – plus best practices for using these features to improve the security of your database. - How to use TLS/SSL for transport encryption, application encryption options, and field level redaction. - How injection attacks work and how to minimize the risk of injection attacks.
How do you determine whether your MongoDB Atlas cluster is over provisioned, whether the new feature in your next application release will crush your cluster, or when to increase cluster size based upon planned usage growth? MongoDB Atlas provides over a hundred metrics enabling visibility into the inner workings of MongoDB performance, but how do you apply all this information to make capacity planning decisions? This presentation will enable you to effectively analyze your MongoDB performance to optimize your MongoDB Atlas spend and ensure smooth application operation into the future.
As a software adventurer, Charles “Indy” Sarrazin has brought numerous customers through the MongoDB world, using his extensive knowledge to make sure they always got the most out of their databases. Let us embark on a journey inside the Document Model, where we will identify, analyze and fix anti-patterns. I will also provide you with tools to ease migration strategies towards the Temple of Lost Performance! Be warned, though! You might want to learn about design patterns beforehand, in order to survive this exhilarating trial!
Join this talk and test session with MongoDB Support where you'll go over the configuration and deployment of an Atlas environment. Set up a service that you can take back in a production-ready state and prepare to unleash your inner genius.
Speaker: Wisdom Omuya, Software Engineer, MongoDB Session Type: 40 minute main track session Date/Time: June 21, 3:40 PM Room: Regency D Level: 200 (Intermediate) Track: How We Build MongoDB This session is geared towards business analysts and developers seeking to learn more about how the MongoDB BI Connector works. It will cover significant changes made up to and since the 2.0 release of the connector with a specific focus on various security and performance improvements. What You Will Learn: - What kinds of queries benefit from the performance improvements - Support for new authentication mechanisms in the BI Connector - General do's and don'ts for high performance queries
Find out more about our journey of migrating to MongoDB after using Oracle for our hotel search database for over ten years. - How did we solve the synchronization problem with the Master Database? - How to get fast search results (even with massive write operations)? - How other issues were solved
Speaker: Jay Runkel, Principal Solution Architect, MongoDB Speaker: Jayson Hurd, Comcast Level: 200 (Intermediate) Track: Operations Comcast is pioneering private-cloud initiatives to bring velocity, elasticity, and self-service to its internal customers. For databases, this means providing the infrastructure and tooling to support a DevOps model enabling application teams to request/provision, monitor, backup, upgrade, and tune their own environments. Using this approach, an extremely small operations team can manage a large number of applications and servers. We will discuss the business goals of velocity, elasticity and self-service, outlining the hidden benefits of this approach. The technical and process architectures will then be explored in detail, demonstrating how a recipe of IaaS, web, Ansible, and MongoDB Ops Manager are used to provide an automated self-service DBaaS platform. What You Will Learn: - How to leverage Ops Manager to support a self-service DevOps model. - Establishing requirements for your own MongoDB as a Service platform. - Best practices for building a DBaaS for MongoDB.
- The document discusses Amadeus' large-scale use of MongoDB for applications like flight recommendations and payments. - It introduces Kubernetes operators and the MongoDB Enterprise Operator, which allows deploying and managing MongoDB clusters on Kubernetes. - The presentation includes a live demo of deploying a sharded MongoDB cluster using the MongoDB Enterprise Operator.
Jane Uyvova, Senior Solutions Architect, MongoDB March 21, 2017 MongoDB Evenings San Francisco Learn how easy it is to set up, operate, and scale your MongoDB deployments in the cloud with MongoDB Atlas.
Speaker: Jerry Reghunadh, Architect, CAPIOT Software Pvt. Ltd. Level: 200 (Intermediate) Track: Microservices One of the leading assisted e-commerce players in India approached CAPIOT to rebuild their ERP system from the ground up. Their existing PHP-MySQL setup, while rich in functionality and having served them well for under half a decade, would not scale to meet future demands due to the exponential growth they were experiencing. We built the entire system using a microservices architecture. To develop APIs we used Node.js, Express, Swagger and Mongoose, and MongoDB was used as the active data store. During the development phase, we solved several problems ranging from cross-service calls and data consistency to service discovery and security. One of the issues we faced was how to effectively design and make cross-service calls: should we make a cross-service call for every document we require, or duplicate and distribute the data to reduce cross-service calls? We found a balance between the two and engineered a solution that gave us good performance. Our current system has 36 independent services, which auto-discover each other and make secure calls. We used Swagger to define our APIs first and to enforce request and response validations, and Mongoose as our ODM for schema validation. We also depend heavily on pre-save hooks to validate data and post-save hooks to trigger changes in other systems. This API-driven approach enabled our frontend and backend teams to scrum together on a single API spec without worrying about the repercussions of changing API schemas. What You Will Learn: - How we used Swagger and Mongoose to off-load validations and schema enforcement. - How microservices and cross-service calls work, and how we balanced per-document calls against data duplication. - How we implemented microservice auto-discovery and secure calls across 36 independent services.
Learn how MongoDB Atlas has enabled Ticketek to grow rapidly across geographical boundaries and seamlessly support the adoption of new business initiatives. Tane Oakes, TEG Enterprise Architect, will do a deep dive on how MongoDB Atlas supports Ticketek's strategic multi-cloud initiative and how Ticketek uses MongoDB Stitch to establish a scalable and common API used by customers and partners. Tane will also explain how using MongoDB Atlas and MongoDB Stitch has helped reduce technical debt.
Speaker: Gheni Abla, Analytics Software Technical Architect, CoreLogic Level: 200 (Intermediate) Track: Data Analytics CoreLogic is a leading global property information, analytics and solutions provider. The company provides a range of analytic solutions for automated property valuation and appraisals. This presentation will cover a recent project at CoreLogic that utilized MongoDB for storing property and ownership data for over 150 million properties. MongoDB provided powerful support for storing and searching location-based property data. The MongoDB-Spark connector facilitated seamless integration between data access and the Spark-based distributed analytics processing and MongoDB’s replication capability provided high-availability across data centers. This session will cover CoreLogic’s software architecture and real-world development experiences with geospatial data and MongoDB-Spark connector. What You Will Learn: - How CoreLogic manages and stores data for over 150 million real estate properties in MongoDB, and utilizes MongoDB's geospatial data support. - How to distribute large-scale analytics process using Spark and improve data access efficiency using the MongoDB-Spark connector. - How to utilize MongoDB replication for implementing high-availability between two geographically dispersed data centers.
Scott Jehl of Filament Group discussed building responsive and responsible websites. He advocated for a layered approach using progressive enhancement. This involves a basic mobile-first experience enhanced for newer browsers. Images and layout adapt to different screen sizes using responsive design principles. Accessibility, performance, and usability were highlighted as key areas of responsibility.
MongoDB Kubernetes operator and MongoDB Open Service Broker are ready for production operations. Learn about how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications. A demo will show you how easy it is to enable MongoDB clusters as an External Service using the Open Service Broker API for MongoDB.
Jumpstart: Using Aggregation for Analytics Speaker: Ruben Terceño, Senior Solutions Architect, MongoDB Level: 200 (Intermediate) Track: Jumpstart The MongoDB aggregation framework allows you to perform real-time analytics on your live operational data set. It's an important tool to understand when considering analytics options for your application. In this session we will give you an overview of basic aggregation functionality. You should walk away with an understanding of when to use the aggregation framework for your needs and how to leverage different functions for different purposes. This is a Jumpstart session, held before the keynotes, designed to give you an overview of MongoDB aggregation basics so you can dive into more advanced sessions later in the day. What You Will Learn: - Discover the Aggregation Framework - Understand the sweet spot for MongoDB Analytics - Have fun crushing numbers!
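To give a flavor of what the session covers, here is a minimal sketch of an aggregation pipeline. The pipeline list is the same structure a driver such as PyMongo would pass to collection.aggregate(); since no live cluster is assumed here, a tiny pure-Python evaluator mimics just $match (equality) and $group (with $sum) so the example runs standalone. The orders data and field names are hypothetical.

```python
# The pipeline below is shaped exactly like what you would send to
# collection.aggregate(...); the evaluator that follows only emulates
# $match (equality) and $group ($sum) on plain dicts.

orders = [
    {"status": "shipped", "amount": 10},
    {"status": "shipped", "amount": 25},
    {"status": "pending", "amount": 7},
]

pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$status", "total": {"$sum": "$amount"}}},
]

def run_pipeline(docs, pipeline):
    for stage in pipeline:
        (op, spec), = stage.items()
        if op == "$match":          # keep docs matching all equality predicates
            docs = [d for d in docs
                    if all(d.get(k) == v for k, v in spec.items())]
        elif op == "$group":        # bucket by the _id field path, accumulate $sum
            key_field = spec["_id"].lstrip("$")
            groups = {}
            for d in docs:
                g = groups.setdefault(d[key_field], {"_id": d[key_field]})
                for out, acc in spec.items():
                    if out == "_id":
                        continue
                    src = acc["$sum"].lstrip("$")
                    g[out] = g.get(out, 0) + d[src]
            docs = list(groups.values())
    return docs

result = run_pipeline(orders, pipeline)
```

With a real cluster, the same pipeline list would be passed unchanged to the server, which runs the stages over the live data set.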
How to build a MongoDB application from scratch in the MongoDB Shell and Python. How to add indexes and use explain to make sure you are using them properly.
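As a rough illustration of what explain tells you, the sketch below contrasts a collection scan with an index lookup, reporting a number analogous to explain()'s totalDocsExamined. The "index" is modeled as a sorted key list with bisect so the example runs without a server; the collection and field names are hypothetical.

```python
# A collection scan touches every document; an index (modeled here as
# a sorted list of (key, position) pairs) narrows the search to only
# the matching entries — the difference explain() surfaces as
# totalDocsExamined.
import bisect

users = [{"_id": i, "age": i % 50} for i in range(1000)]

def collscan(docs, age):
    examined = 0
    out = []
    for d in docs:
        examined += 1
        if d["age"] == age:
            out.append(d)
    return out, examined

# "Create" an index on age.
index = sorted((d["age"], i) for i, d in enumerate(users))
keys = [k for k, _ in index]

def ixscan(docs, age):
    lo = bisect.bisect_left(keys, age)
    hi = bisect.bisect_right(keys, age)
    out = [docs[index[i][1]] for i in range(lo, hi)]
    return out, hi - lo          # documents examined via the index

full, full_examined = collscan(users, 7)
fast, fast_examined = ixscan(users, 7)
```

Both plans return the same 20 documents, but the scan examines all 1000 while the index examines only the 20 that match.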
The document provides instructions for installing and using MongoDB to build a simple blogging application. It demonstrates how to install MongoDB, connect to it using the mongo shell, insert and query sample data like users and blog posts, update documents to add comments, and more. The goal is to illustrate how to model and interact with data in MongoDB for a basic blogging use case.
MongoDB is a non-relational database that uses a document-based data model. It is an alternative to traditional relational databases and is optimized for storing large amounts of unstructured and semi-structured data. MongoDB does not require a predefined schema and allows flexible, dynamic queries against documents using JavaScript. While relational databases are better suited for transactions, MongoDB is designed for horizontal scalability, faster queries, and flexible data modeling.
This document provides an overview of a presentation by Samisa Abeysinghe, VP of Delivery at WSO2, on rapid application development with JavaScript and data services. It includes details about the presenter and their background at WSO2, an overview of WSO2 as a company including their products and partnerships, and discusses challenges in rapid application development as well as how JavaScript can help address these challenges. The document also introduces Jaggery.js as a JavaScript framework for building multi-tier web applications, provides examples of getting started with Jaggery.js, and demonstrates RESTful URL mapping and HTTP verb mapping in sample applications.
Some background on why we have NoSQL databases. Some of the problems for which they seem to be a more natural fit are explained.
This document contains the slides from a webinar on building a basic MongoDB application. It introduces MongoDB concepts and terminology, shows how to install MongoDB, create a basic blogging application with articles, users and comments, and add and query data. Key steps include installing MongoDB, launching the mongod process, connecting with the mongo shell, inserting documents, finding and querying documents, and updating documents by adding fields and pushing to arrays.
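The document shapes involved can be sketched without a running mongod. The article document below and the $push update mirror the kind of insert-then-update flow the webinar walks through in the mongo shell; the helper applies $push semantics to a plain dict so the example runs standalone, and the field names are illustrative.

```python
# An article document as a blogging example might store it, plus a
# hand-rolled application of a {"$push": ...} update so the example
# runs without a mongod.
article = {
    "_id": 1,
    "title": "Hello MongoDB",
    "author": "kate",
    "tags": ["intro", "mongodb"],
    "comments": [],
}

def apply_push(doc, update):
    # Minimal emulation of a $push update: append each value to the
    # named array field, creating the array if it is missing.
    for field, value in update["$push"].items():
        doc.setdefault(field, []).append(value)
    return doc

apply_push(article, {"$push": {"comments": {"who": "sam", "text": "Nice post!"}}})
```

Against a real deployment, the same update document would be the second argument to update_one(), with a filter such as {"_id": 1} as the first.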
This document provides an overview of a MongoDB workshop. It includes an agenda with topics like an overview of databases, what is MongoDB, MongoDB commands, sharding and replication in MongoDB, and a demo. The workshop is hosted by Vivian at ThoughtWorks for the audience of NYC Open Data and will be presented by Kannan Sankaran and Roman Kubiak.
It is a NYC Open Data Meetup event. All credits went to Kannan and Roman. Event link: http://www.meetup.com/NYC-Open-Data/events/141123082/ Blog Post: http://www.nycopendata.com/2014/02/11/mongodb/
Speaker: Andrew Morgan Organizations are building their applications around microservice architectures because of the flexibility, speed of delivery, and maintainability they deliver. Want to try out MongoDB on your laptop? Execute a single command and you have a lightweight, self-contained sandbox; another command removes all trace when you're done. Replicate your complete application for your development, test, operations, and support teams. This session introduces you to technologies such as Docker, Kubernetes, and Kafka, which are driving the microservices revolution. Learn about containers and orchestration, and most importantly, how to exploit them for stateful services such as MongoDB.
The document discusses MongoDB's transactions feature. It provides an overview of MongoDB's journey to implementing transactions from versions 3.0 to 4.0. It describes how transactions will work in MongoDB 4.0, including examples of atomic operations across multiple documents using sessions and commit_transaction. The presentation encourages joining the beta program for MongoDB transactions and concludes with announcements about the next session and lunch break.
This document provides instructions on how to build a search engine using the Norch framework with JavaScript and Node.js. It discusses setting up Norch, getting and formatting data, indexing the data, querying the search engine, and connecting a front-end interface. The document outlines features like faceting, filtering, paging, matchers and integrating Norch with an Angular app.
This document provides instructions for setting up a MongoDB cluster using MongoDB Atlas. It includes steps for downloading and installing MongoDB locally, creating a free M0 cluster on Atlas, importing sample zip code data into the cluster, and performing basic queries on the data. The document also lists some next steps for learning more about MongoDB features like Compass, advanced queries, and paid capabilities like backups and peering.
This is the second webinar in the Back to Basics series, offering an introduction to the MongoDB database. In this webinar we will show you how to build a basic blogging application in MongoDB.
In this talk we will focus on several of the reasons why developers have come to love the richness, flexibility, and ease of use that MongoDB provides. First we will give a brief introduction of MongoDB, comparing and contrasting it to the traditional relational database. Next, we’ll give an overview of the APIs and tools that are part of the MongoDB ecosystem. Then we’ll look at how MongoDB CRUD (Create, Read, Update, Delete) operations work, and also explore query, update, and projection operators. Finally, we will discuss MongoDB indexes and look at some examples of how indexes are used.
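To show roughly how the query and projection operators mentioned above read, here is a small sketch evaluated in pure Python so it runs standalone. The filter document is the same shape you would pass to find(); the matcher supports only implicit equality, $gt, and $in, and the collection and field names are hypothetical.

```python
# MongoDB-style query operators evaluated over plain dicts. The flt
# document is shaped like a find() filter; only equality, $gt and $in
# are emulated here.
def matches(doc, flt):
    for field, cond in flt.items():
        value = doc.get(field)
        if isinstance(cond, dict):           # operator expression
            for op, arg in cond.items():
                if op == "$gt" and not (value is not None and value > arg):
                    return False
                if op == "$in" and value not in arg:
                    return False
        elif value != cond:                  # implicit equality
            return False
    return True

people = [
    {"name": "ann", "age": 34, "city": "NYC"},
    {"name": "bob", "age": 25, "city": "SF"},
    {"name": "cal", "age": 41, "city": "NYC"},
]
flt = {"age": {"$gt": 30}, "city": {"$in": ["NYC", "BOS"]}}
found = [p["name"] for p in people if matches(p, flt)]
```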
This presentation explores techniques and best practices for ingesting, manipulating, and storing configuration management data for managing multi-cloud infrastructure deployments using Ansible. The presentation focuses on techniques to ingest, manipulate, and optimize configuration management data to drive automation processes. It also examines using relational, NoSQL, and graph databases as well as sequential files for configuration management data. The speaker's background is in network and security automation use cases using Ansible.
This document discusses applying SOLID principles to infrastructure as code. It provides an overview of roles and profiles in infrastructure as code and how they relate to design patterns like Model View Controller (MVC). It also explains the five SOLID principles - single responsibility, open/closed, Liskov substitution, interface segregation and dependency inversion - and provides examples of applying them to infrastructure code through techniques like defining types, relationships between defined types and profiles, and creating abstract/generic defined types.
The document discusses using microservices architecture with MongoDB, Docker, Kafka, and Kubernetes. It begins with an overview of microservices and why they are used. It then covers MongoDB and why it is a good fit for microservices. The document discusses using Docker containers to deploy MongoDB and other services. It introduces Apache Kafka for messaging between microservices. Finally, it discusses using Kubernetes for orchestrating containers and deploying MongoDB across multiple data centers.
Speaker: Ronan Bohan, Solutions Architect, MongoDB Speaker: Viady Krishnan Level: 100 (Beginner) Track: Jumpstart Get started with the BI connector and Tableau in this introductory session. We will give you insight into how you can view your MongoDB data in traditional BI tools and an overview of connecting Tableau with MongoDB. After attending this session, students should be able to connect their analytics tool of choice to a MongoDB data store using the BI connector, secure their client connection, and know how to enable authentication. Audience members should be familiar with analytics tools like Tableau to do business analytics, and know how to set up and run analytics in a BI tool. This session will use Tableau as an example. This is a Jumpstart session, held before the keynotes, designed to give you an overview of MongoDB basics so you can dive into more advanced technical sessions later in the day. What You Will Learn: - How to connect your analytics tool of choice to a MongoDB data store using the BI connector. - How to view MongoDB data in Tableau or another BI tool. - How to secure your client connection to MongoDB.
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
Humana, like many companies, is tackling the challenge of creating real-time insights from data that is diverse and rapidly changing. This is our journey of how we used MongoDB to combine traditional batch approaches with streaming technologies to provide continuous alerting capabilities from real-time data streams.
Time series data is increasingly at the heart of modern applications - think IoT, stock trading, clickstreams, social media, and more. With the move from batch to real time systems, the efficient capture and analysis of time series data can enable organizations to better detect and respond to events ahead of their competitors or to improve operational efficiency to reduce cost and risk. Working with time series data is often different from regular application data, and there are best practices you should observe. This talk covers: Common components of an IoT solution The challenges involved with managing time-series data in IoT applications Different schema designs, and how these affect memory and disk utilization – two critical factors in application performance. How to query, analyze and present IoT time-series data using MongoDB Compass and MongoDB Charts At the end of the session, you will have a better understanding of key best practices in managing IoT time-series data with MongoDB.
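One of the schema designs commonly discussed for time-series workloads is the bucket pattern, sketched below in pure Python so it runs without a cluster: instead of one document per reading, readings from the same sensor within the same hour are grouped into a single bucket document, which reduces document count and index size. The sensor IDs and field names are illustrative.

```python
# Bucket pattern sketch: group readings per (sensor, hour) into one
# document with an embedded readings array, rather than one document
# per reading.
from datetime import datetime

def bucket_key(sensor_id, ts):
    # Truncate the timestamp to the hour to pick the bucket.
    return (sensor_id, ts.replace(minute=0, second=0, microsecond=0))

def insert_reading(buckets, sensor_id, ts, value):
    key = bucket_key(sensor_id, ts)
    b = buckets.setdefault(key, {
        "sensor_id": sensor_id,
        "bucket_start": key[1],
        "count": 0,
        "readings": [],
    })
    b["readings"].append({"ts": ts, "value": value})
    b["count"] += 1

buckets = {}
insert_reading(buckets, "s1", datetime(2019, 6, 18, 10, 5), 21.0)
insert_reading(buckets, "s1", datetime(2019, 6, 18, 10, 42), 21.4)
insert_reading(buckets, "s1", datetime(2019, 6, 18, 11, 3), 20.9)
```

Three readings land in two bucket documents; in MongoDB the same grouping is typically achieved with an upsert that uses $push and $inc.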
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
Encryption is not a new concept to MongoDB. Encryption may occur in-transit (with TLS) and at-rest (with the encrypted storage engine). But MongoDB 4.2 introduces support for Client Side Encryption, ensuring the most sensitive data is encrypted before ever leaving the client application. Even full access to your MongoDB servers is not enough to decrypt this data. And better yet, Client Side Encryption can be enabled at the "flick of a switch". This session covers using Client Side Encryption in your applications. This includes the necessary setup, how to encrypt data without sacrificing queryability, and what trade-offs to expect.
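The data flow the session describes can be sketched as follows. This is a conceptual illustration only: real Client Side Encryption in MongoDB 4.2 uses the driver's field-level encryption machinery and a key management service, not the toy reversible encoding below. The point shown is simply that the sensitive field is transformed in the client before the document ever reaches the server.

```python
# Conceptual sketch of the client-side flow: the "ssn" field is
# transformed before the document is handed to the server. The toy
# encoder stands in for real field-level encryption and is NOT crypto.
import base64

def toy_encrypt(plaintext: str) -> str:      # stand-in, not real crypto
    return base64.b64encode(plaintext.encode()).decode()

def toy_decrypt(ciphertext: str) -> str:
    return base64.b64decode(ciphertext).decode()

patient = {"name": "ann", "ssn": "123-45-6789"}
# What the server (and anyone with full server access) would see:
wire_doc = {**patient, "ssn": toy_encrypt(patient["ssn"])}
```

With real Client Side Encryption, the driver performs this transformation transparently based on an encryption schema, and deterministic encryption keeps equality queries on the encrypted field possible.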
MongoDB Kubernetes operator is ready for prime time. Learn about how MongoDB can be used with the most popular orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications.
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
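The mindset shift described above can be illustrated with a tiny sketch: the same order laid out relationally (two row sets reassembled with a join) versus as a single embedded MongoDB-style document. The collection and field names are hypothetical.

```python
# Tabular layout: orders and line items in separate row sets,
# reassembled with an application-side join on order_id.
orders_rows = [{"order_id": 1, "customer": "ann"}]
items_rows = [
    {"order_id": 1, "sku": "A-1", "qty": 2},
    {"order_id": 1, "sku": "B-7", "qty": 1},
]

def join_order(order_id):
    order = next(o for o in orders_rows if o["order_id"] == order_id)
    items = [i for i in items_rows if i["order_id"] == order_id]
    return order, items

# Document layout: data that is read together is stored together,
# so one find() on _id retrieves the whole order.
order_doc = {
    "_id": 1,
    "customer": "ann",
    "items": [{"sku": "A-1", "qty": 2}, {"sku": "B-7", "qty": 1}],
}
```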
The document discusses guidelines for ordering fields in compound indexes to optimize query performance. It recommends the E-S-R approach: placing equality fields first, followed by sort fields, and range fields last. This allows indexes to leverage equality matches, provide non-blocking sorts, and minimize scanning. Examples show how indexes ordered by these guidelines can support queries more efficiently by narrowing the search bounds.
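The E-S-R guideline can be captured as a small helper: given which fields a query filters on with equality, sorts by, and ranges over, emit a compound-index key list ordered equality first, then sort, then range. The query below is hypothetical.

```python
# Order compound-index keys per the E-S-R guideline:
# equality fields, then sort fields, then range fields.
def esr_index(equality, sort, range_):
    key = []
    for f in equality + sort + range_:
        if f not in [k for k, _ in key]:     # skip duplicates
            key.append((f, 1))
    return key

# Hypothetical query: find({status: "A", qty: {$gt: 5}}).sort({ord_date: 1})
index = esr_index(equality=["status"], sort=["ord_date"], range_=["qty"])
```

The resulting key list — status, then ord_date, then qty — lets the equality match narrow the bounds, the sort come back from the index without a blocking sort, and the range scan run last over an already-narrowed slice.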
Aggregation pipeline has been able to power your analysis of data since version 2.2. In 4.2 we added more power and now you can use it for more powerful queries, updates, and outputting your data to existing collections. Come hear how you can do everything with the pipeline, including single-view, ETL, data roll-ups and materialized views.
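The output-to-existing-collections capability mentioned above is the $merge stage; here is a sketch of its default upsert-by-_id behavior, with the target collection modeled as a dict so the example runs standalone. The daily-totals data is hypothetical.

```python
# Emulation of $merge's default behavior at the end of a pipeline:
# match each result to the target on _id, merging into existing
# documents (whenMatched) and inserting new ones (whenNotMatched) —
# the mechanism behind on-demand materialized views.
def merge_into(target, results):
    for doc in results:
        if doc["_id"] in target:
            target[doc["_id"]].update(doc)   # whenMatched: merge
        else:
            target[doc["_id"]] = dict(doc)   # whenNotMatched: insert
    return target

daily_totals = {"2019-06-17": {"_id": "2019-06-17", "total": 120}}
pipeline_output = [
    {"_id": "2019-06-17", "total": 150},   # refreshed figure
    {"_id": "2019-06-18", "total": 90},    # new day
]
merge_into(daily_totals, pipeline_output)
```

In a real pipeline the same effect comes from ending with a stage like {"$merge": {"into": "daily_totals"}}, rerun as often as the view needs refreshing.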
The document describes a methodology for data modeling with MongoDB. It begins by recognizing the differences between document and tabular databases, then outlines a three step methodology: 1) describe the workload by listing queries, 2) identify and model relationships between entities, and 3) apply relevant patterns when modeling for MongoDB. The document uses examples around modeling a coffee shop franchise to illustrate modeling approaches and techniques.
Virtual assistants are becoming the new norm when it comes to daily life, with Amazon’s Alexa being the leader in the space. As a developer, not only do you need to make web and mobile compliant applications, but you need to be able to support virtual assistants like Alexa. However, the process isn’t quite the same between the platforms. How do you handle requests? Where do you store your data and work with it to create meaningful responses with little delay? How much of your code needs to change between platforms? In this session we’ll see how to design and develop applications known as Skills for Amazon Alexa powered devices using the Go programming language and MongoDB.
…to Core Data, appreciated by hundreds of thousands of developers. Learn what makes Realm special and how it can be used to build better applications, faster.
It has never been easier to order online and have it delivered within 48 hours, very often for free. This ease of use hides a complex market worth more than $8 trillion. Data is well known in the Supply Chain world (routes, information about goods, customs, …), but the value of this operational data remains largely untapped. By combining business expertise and Data Science, Upply is redefining the fundamentals of the Supply Chain, enabling every player to overcome the market's volatility and inefficiency.
Every company is becoming a software company, providing customer-facing solutions for accessing a variety of services and information. Companies are now starting to get value from their data and derive better business insights. A crucial challenge is ensuring that this data is always available and secure, in line with the company's business objectives and each country's regulatory constraints. MongoDB provides the security layer you need – come discover how to secure your data with MongoDB.