The document is about an upcoming meetup hosted by ServerlessToronto.org on "Serverless Cloud Native Java with Spring Cloud GCP" presented by Ray Tsang. It includes an agenda for the event with topics on Spring Cloud GCP features and integrations with Google Cloud Platform services. There is also information about upcoming meetups from the organization and a thank you from Ray Tsang for attending the presentation.
Amazon Web Services has single-handedly altered the IT landscape. The ways of monitoring legacy on-premises environments cannot be applied to the dynamic nature of cloud-based IT infrastructure. To innovate in the cloud and effectively monitor this new infrastructure model, a modern approach is required. Join the experts from PagerDuty as they discuss how to build a modern ops environment for workloads running on AWS. Session sponsored by PagerDuty.
Pebble uses data science and analytics to improve its smartwatch products. Pebble's data team analyzes over 60 million records per day from the watches to measure user engagement, identify issues, and inform new product design. Their first problem was setting an engagement threshold using the accelerometer. Rapid testing of different thresholds against "backlight data" validated the optimal threshold. Pebble has since solved many problems using their analytics infrastructure at Treasure Data to query, explore, and gain insights from massive user data in real time.
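The abstract doesn't show how the threshold testing worked, but the idea can be sketched: treat backlight activations as a ground-truth proxy for engagement and score each candidate accelerometer threshold by how often its engaged/idle call agrees with that signal. The data, function names, and agreement metric below are all invented for illustration, not Pebble's actual method.

```python
# Hypothetical sketch: pick an accelerometer-magnitude threshold for "engaged"
# by testing candidates against backlight activations as a ground-truth proxy.

def evaluate_threshold(samples, threshold):
    """samples: list of (accel_magnitude, backlight_on) pairs.
    Returns the fraction of samples where the threshold's engaged/idle
    decision agrees with the backlight signal."""
    agree = sum(1 for mag, lit in samples if (mag >= threshold) == lit)
    return agree / len(samples)

def best_threshold(samples, candidates):
    # Rapidly test each candidate and keep the one with the best agreement.
    return max(candidates, key=lambda t: evaluate_threshold(samples, t))

# Toy data: higher magnitudes tend to coincide with backlight use.
samples = [(0.1, False), (0.2, False), (0.9, True), (1.1, True), (0.4, False)]
print(best_threshold(samples, [0.3, 0.5, 1.0]))   # -> 0.5
```

At Pebble's scale the same evaluation would run as a query over tens of millions of records rather than an in-memory list, but the validation logic is the same.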
This document discusses building serverless applications with Go and the Serverless Application Model (SAM). It begins with confidentiality and disclaimer sections. It then provides an introduction to Project Flogo, an open source serverless framework for building event-driven applications. Project Flogo uses Go and allows developers to define app logic as flows that connect triggers and actions. The document discusses how Flogo provides both a visual UI and Go API for application development and describes ways to get started using Flogo's CLI, Docker images, or Go library.
The Data Warehouse plays a central role in any BI solution: it's the back end upon which everything in the coming years will be created. It must be flexible enough to support the fast changes today's business demands, yet have a well-known and well-defined structure that supports the "engineerization" of its development process, making it cost effective. In this full-day session, we will discuss architectural design details and techniques, Agile modeling, unit testing, automation, and software engineering applied to a Data Warehouse project. The only way to do this is to have a clear idea of its architecture, understand the concepts of measures and dimensions, and follow a proven engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. This will allow you to start your BI project in the best way possible, avoiding errors, making implementation effective and efficient, building the groundwork for a winning Agile approach, and helping you define the way your team should work so that your BI solution will stand the test of time.
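One concrete form the session's "unit testing applied to a Data Warehouse" theme can take is asserting referential integrity between a fact table and its dimensions after a load. This is a minimal sketch under assumed table and column names (not from the session), using an in-memory SQLite database as a stand-in for the warehouse:

```python
# Sketch of a DW unit test: a clean load must leave no fact rows whose
# foreign key has no matching dimension row. Schema names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY,
                             customer_key INTEGER, amount REAL);
    INSERT INTO dim_customer VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO fact_sales VALUES (10, 1, 99.0), (11, 2, 45.5);
""")

def orphan_fact_rows(conn):
    """Return fact rows referencing a customer_key absent from the dimension."""
    return conn.execute("""
        SELECT f.sale_id FROM fact_sales f
        LEFT JOIN dim_customer d USING (customer_key)
        WHERE d.customer_key IS NULL
    """).fetchall()

# The unit test itself: no orphans after loading.
assert orphan_fact_rows(conn) == []
```

Checks like this can run automatically after every ETL execution, which is exactly the kind of engineered, repeatable quality gate the session argues for.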
Retail TouchPoints' 2014 Holiday Connected Consumer Series session presented by Instart Logic #HolidayCCS
1) The document discusses various techniques for optimizing multi-tenant architectures on AWS including data partitioning, tenant-aware caching, and using tenant policies and profiles to customize access and optimize resource usage. 2) It emphasizes the importance of monitoring metrics at both the infrastructure and application levels to understand tenant usage patterns and identify opportunities for optimization. 3) The document also covers service selection considerations for multi-tenant workloads, strategies for optimizing serverless architectures on AWS, and using metering and billing to enable tiered tenant plans.
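The metering-and-billing point above can be made concrete with a small sketch: meter each tenant's API calls and enforce a per-tier quota so that plans can be tiered. All names and quota numbers here are invented for illustration, not taken from the document.

```python
# Illustrative sketch: metering tenant API calls and enforcing per-tier
# request quotas to support tiered plans. Tiers and limits are hypothetical.
from collections import defaultdict

TIER_QUOTAS = {"free": 100, "standard": 10_000, "premium": 1_000_000}

class TenantMeter:
    def __init__(self, tenant_tiers):
        self.tiers = tenant_tiers          # tenant_id -> tier name
        self.usage = defaultdict(int)      # tenant_id -> calls this period

    def record_call(self, tenant_id):
        """Meter one API call; return False once the tenant's quota is spent."""
        if self.usage[tenant_id] >= TIER_QUOTAS[self.tiers[tenant_id]]:
            return False                   # throttle: over plan limit
        self.usage[tenant_id] += 1
        return True

meter = TenantMeter({"t1": "free", "t2": "premium"})
allowed = sum(meter.record_call("t1") for _ in range(150))
print(allowed)   # free tier admits only the first 100 of 150 calls -> 100
```

In a real multi-tenant system the usage counters would live in a shared store and feed the billing pipeline; the per-call decision logic stays this simple.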
The document summarizes the upcoming presentations for the Brisbane Azure User Group (BAUG) from January to December 2019. Some of the highlighted topics include using Azure IoT to control devices remotely, serverless computing on Azure, data governance and compliance, and machine learning with Azure services. The document also advertises job opportunities in cloud integration solutions and announces new Azure features such as Synapse Analytics, managed certificates, and Azure Arc hybrid capabilities.
The document provides an overview of cloud computing and an introduction to Google Cloud. It discusses the different types of cloud services including Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). It then introduces various Google Cloud Platform (GCP) and G Suite products and services that fall under each category. Examples of code snippets using GCP and G Suite APIs in Python are also provided to demonstrate interacting with these cloud services programmatically.
Darin Briskman of Amazon Web Services delivered a keynote on Artificial Intelligence at the Canadian Executive Cloud & DevOps Summit in Toronto on June 9, 2017.
Microservice-oriented architectures have been implemented and deployed by many and are on the near-term agenda of many others. However, the distributed nature of microservices is a double-edged sword: it is the source of many of the benefits, but also of the pain and confusion that teams have endured. We will review best practices and recommended architectures for deploying microservices on AWS, focusing on how to exploit the benefits of microservices to decrease feature cycle times and costs while increasing reliability, scalability, and overall operational efficiency. Speaker: Craig Dickson, Solutions Architect, Amazon Web Services. Featured customer: MYOB.
What’s new with AWS IoT? This session is an introduction to the AWS IoT platform and an overview of new features. Join us for a discussion of the features launched over the last year and best practices for using the AWS IoT platform to get your device data into the cloud.
The document provides an overview of Google Cloud Platform services, including compute options like App Engine and Compute Engine, storage services like Cloud Storage and Cloud Datastore, and additional services like Cloud Endpoints and Translate API. It describes the capabilities and features of each service, such as scalability, flexibility, and ease of use. Case studies show how companies have used services like App Engine to build and deploy applications on Google's infrastructure.
A new generation of sophisticated geospatial mobile apps is being developed: serverless apps that can scale to virtually unlimited users without any infrastructure or servers to manage. This session takes a practical approach to developing lean and cost-effective real-world location-based mobile apps through live demonstrations and code walkthroughs. It showcases how cloud services can be used to authenticate users, store and synchronize data, understand behavior, react to location and state changes, test apps, and send notifications to nearby app users.
Budapest Spark Meetup - Apache Spark @enbrite.ly, presented on March 30, 2016. The vision we all share at enbrite.ly is to create the next-generation decision-support system in online advertising, one that combines the market's needs: anti-fraud, viewability, brand safety, and traffic quality assurance in one platform. We do this by analyzing vast amounts of data to create value for our customers. Over the last 6 months we built our ETL pipeline, the core component of our data platform, on Apache Spark. In this presentation I share the journey from the whiteboard designs to the maintenance of a TB-scale data pipeline, along with the lessons we learned and the ups and downs of using Spark at scale.
Serverless functions (like AWS Lambda, Google Cloud Functions, and Azure Functions) have the ability to scale almost infinitely to handle massive workload spikes. While this is a great solution for compute, it can be a MAJOR PROBLEM for other downstream resources like RDBMS, third-party APIs, legacy systems, and even most managed services hosted by your cloud provider. Whether you’re maxing out database connections, exceeding API quotas, or simply flooding a system with too many requests at once, serverless functions can DDoS your components and potentially take down your application. In this talk, we’ll discuss strategies and architectural patterns to create highly resilient serverless applications that can mitigate and alleviate pressure on “non-serverless” downstream systems during peak load times.
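One of the mitigation patterns this theme suggests (a sketch, not necessarily from the talk's slides) is capping concurrency into a fragile downstream system, so that a burst of function invocations cannot exhaust, say, a database's connection limit. The limit value and simulated workload below are invented for illustration:

```python
# Sketch: gate calls into a downstream system with a bounded semaphore so a
# spike of 50 concurrent invocations never exceeds the downstream's capacity.
import threading
import time

MAX_DOWNSTREAM = 5                      # e.g., the RDBMS connection limit
gate = threading.BoundedSemaphore(MAX_DOWNSTREAM)
peak = 0                                # highest observed concurrent calls
in_flight = 0
lock = threading.Lock()

def call_downstream():
    global peak, in_flight
    with gate:                          # blocks once MAX_DOWNSTREAM are active
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(0.01)                # simulated downstream work
        with lock:
            in_flight -= 1

threads = [threading.Thread(target=call_downstream) for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)                             # never exceeds MAX_DOWNSTREAM
```

In a real serverless deployment an in-process semaphore only limits one function instance, so the same idea is usually applied platform-wide instead, e.g., via reserved concurrency limits or by buffering requests through a queue in front of the downstream system.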
LEGO.com has accelerated innovation using serverless technologies on AWS. The team describes how they used AWS Step Functions for parallel processing and failure handling, and Amazon EventBridge for event-driven architectures and batching feedback events. This allowed for more scalable and reliable systems while reducing costs and maintenance. Moving to a serverless model with feature teams also improved development speed and business agility.
The document provides an overview of Amazon QuickSight, Amazon's business intelligence service. It discusses how QuickSight allows users to easily explore and analyze data from various AWS sources for a low cost. Key features highlighted include fast insights using SPICE technology, intuitive visualizations, mobile access, and easy data sharing capabilities. The document also demonstrates QuickSight through an example analysis of a fitness promotion's performance.
The document summarizes a meetup on data streaming and machine learning with Google Cloud Platform. The meetup consisted of two presentations: 1. The first presentation discussed using Apache Beam (Dataflow) on Google Cloud Platform to parallelize machine learning training for improved performance. It showed how Dataflow was used to reduce training time from 12 hours to under 30 minutes. 2. The second presentation demonstrated building a streaming pipeline for sentiment analysis on Twitter data using Dataflow. It covered streaming patterns, batch vs streaming processing, and a demo that ingested tweets from PubSub and analyzed them using Cloud NLP API and BigQuery.
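The streaming pattern in the second presentation (ingest, analyze, aggregate) can be illustrated with a toy version of its windowing step: bucket timestamped sentiment scores into fixed windows and aggregate per window, as a Dataflow pipeline would with fixed windowing. The data, scores, and window size here are invented for illustration:

```python
# Toy sketch of fixed-window aggregation over a stream of sentiment scores,
# mimicking what Dataflow's fixed windowing + mean combiner would compute.
from collections import defaultdict

def windowed_mean(events, window_secs=60):
    """events: iterable of (epoch_seconds, sentiment_score) pairs.
    Returns {window_start: mean sentiment in that window}."""
    buckets = defaultdict(list)
    for ts, score in events:
        buckets[ts - ts % window_secs].append(score)   # assign to a window
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

tweets = [(0, 1.0), (30, -0.5), (65, 0.5), (70, 0.75)]
print(windowed_mean(tweets))   # -> {0: 0.25, 60: 0.625}
```

The real pipeline differs mainly in scale and plumbing: events arrive continuously from Pub/Sub, sentiment comes from the Cloud NLP API, and the per-window aggregates land in BigQuery, but the windowing logic is the same shape.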
Google Cloud Platform, Avere Systems, and Cycle Computing experts share best practices for advancing solutions to the big challenges faced by enterprises with growing compute and storage needs. In this best-practices webinar, you'll hear how these companies are working to improve results that drive businesses forward through scalability, performance, and ease of management. The slides are from a webinar presented January 24, 2017. The audience learned:
- How enterprises are using Google Cloud Platform to gain compute and storage capacity on demand
- Best practices for efficient use of cloud compute and storage resources
- How to overcome the need for file systems within a hybrid cloud environment
- How to eliminate latency between cloud and data center architectures
- How to best manage simulation, analytics, and big data workloads in dynamic environments
- The market dynamics drawing companies to new storage models over the next several years
Presenters laid a foundation for building infrastructure that supports ongoing demand growth.
This document provides an overview of a workshop on cloud big data architectures. The workshop covers:
1. Different types of big data solutions and when to use each, such as Hadoop, NoSQL, and big relational databases.
2. Data pipelines, including ETL tools, load testing patterns, and connecting clouds.
3. Querying and visualizing data through business analytics, predictive analytics, and visualization tools.
4. A brief introduction to IoT and how it relates to big data.
What do the terms serverless, containers, and virtual machines mean? Which should I use to build my app? The answer (as always) is "it depends." In this session learn the tradeoffs between these different approaches, whether you're building your app from scratch or want to move an existing web or mobile application to the cloud. We'll discuss open source tools such as Kubernetes, Istio, and Knative, and we'll discuss Google Cloud Platform tools like Compute Engine, Google Kubernetes Engine (GKE), App Engine, and Cloud Functions.
The document discusses building data pipelines in the cloud. It covers serverless data pipeline patterns using services like BigQuery, Cloud Storage, Cloud Dataflow, and Cloud Pub/Sub. It also compares Cloud Dataflow and Cloud Dataproc for ETL workflows. Key questions around ingestion and ETL are discussed, focusing on volume, variety, velocity and veracity of data. Cloud vendor offerings for streaming and ETL are also compared.
We will present our O365 use case scenarios, explain why we chose Cassandra + Spark, and walk through the architecture we chose for running DataStax Enterprise on Azure.
This document discusses how to run PHP applications on the Windows Azure platform. It describes how to set up and deploy PHP applications to run in both web and worker roles. It also covers tools for PHP development on Azure, including the Windows Azure SDK for PHP and command line tools. Additionally, it discusses how to use Azure services like SQL Azure and storage from PHP applications.
Join this workshop and accelerate your journey to production-ready Kubernetes by learning the practical techniques for reliably operating your software lifecycle using the GitOps pattern. The Weaveworks team will be running a full-day workshop, sharing their expertise as users and contributors of Kubernetes and Prometheus, as well as followers of GitOps (operations by pull request) practices. Using a combination of instructor-led demonstrations and hands-on exercises, the workshop will enable the attendee to go into detail on the following topics:
• Developing and operating your Kubernetes microservices at scale
• DevOps best practices and the movement towards a “GitOps” approach
• Building with Kubernetes in production: caring for your apps, implementing CI/CD best practices, and utilizing the right metrics, monitoring tools, and automated alerts
• Operating Kubernetes in production: upgrading and managing Kubernetes, managing incident response, and adhering to security best practices for Kubernetes
Apache Beam is a beautiful framework that blurs the line between Batch and Streaming, so check out this interactive tutorial by Patrick Lecuyer - Head of Specialist Customer Engineering at Google Canada. His examples run on GCP Dataflow, but what you'll learn will be portable across clouds, and distributed processing engines like Apache Flink, Apache Samza, Apache Spark, IBM Streams... regardless of where you do your Big Data processing! The meetup recording with TOC for easy navigation is at https://youtu.be/7pUYKX40RfA. P.S. For more interactive lectures like this, go to http://youtube.serverlesstoronto.org/ or sign up for our upcoming live events at https://www.meetup.com/Serverless-Toronto/events/
This document discusses considerations for making serverless applications production ready. It covers topics like testing, monitoring, logging, deployment pipelines, performance optimization, and security. The document emphasizes principles over specific tools, and recommends focusing on shipping working software through practices like embracing external services for testing instead of mocking.
The cloud has become one of the most attractive ways for enterprises to purchase software, but it requires building products in a very different way from traditional software development.
This document discusses cloud native data pipelines. It begins by introducing the speaker and their company, Agari, which applies trust models to email metadata to score messages. The document then discusses design goals for resilient data pipelines, including operability, correctness, timeliness, and cost. It presents two use cases at Agari: batch message scoring and near real-time message scoring. For each use case, the pipeline architecture is shown, including components like S3, SNS, SQS, ASGs, EMR, and databases. The document discusses leveraging AWS services and tools like Airflow, Packer, and Terraform to tackle issues like cost, timeliness, operability, and correctness. It also introduces innovations like Apache Avro for data serialization.
Google Cloud Platform is a cloud computing platform by Google that offers hosting on the same supporting infrastructure that Google uses internally for end-user products like Google Search and YouTube. Cloud Platform provides developer products to build a range of programs, from simple websites to complex applications. Google Cloud Platform is part of a suite of enterprise solutions from Google for Work and provides a set of modular cloud-based services with a host of development tools: for example, hosting and computing, cloud storage, data storage, translation APIs, and prediction APIs. Topics covered: Why Google Cloud Platform? Google Cloud Platform services: a first insight.
AWS Lambda has changed the way we deploy and run software, but this new serverless paradigm has created new challenges to old problems - how do you test a cloud-hosted function locally? How do you monitor them? What about logging and config management? And how do we start migrating from existing architectures? In this talk Yan and Scott will discuss solutions to these challenges by drawing from real-world experience running Lambda in production and migrating from an existing monolithic architecture.
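A common answer (a sketch of the general pattern, not necessarily Yan and Scott's specific approach) to "how do you test a cloud-hosted function locally?" is to keep business logic in a plain function and make the handler a thin adapter over the event, so the logic is unit-testable without AWS. The event shape and discount rule below are hypothetical:

```python
# Sketch: separate business logic from the Lambda event adapter so the
# logic can be exercised locally with no deployment round-trip.
import json

def calculate_discount(order_total):
    """Pure business logic: trivially testable on a laptop."""
    return round(order_total * 0.1, 2) if order_total >= 100 else 0.0

def lambda_handler(event, context):
    # Thin adapter: parse an (assumed API Gateway-style) event, delegate, wrap.
    body = json.loads(event["body"])
    discount = calculate_discount(body["order_total"])
    return {"statusCode": 200, "body": json.dumps({"discount": discount})}

# Local invocation: a hand-built event, no AWS involved.
resp = lambda_handler({"body": json.dumps({"order_total": 120})}, None)
print(resp["body"])
```

With this split, only the thin adapter needs integration testing against the real platform; everything else runs in an ordinary local test suite, which also eases migration since the core logic has no Lambda-specific dependencies.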
Learning objectives:
• Learn how to use CloudFront dynamic delivery features
• See a live demo and learn how to take advantage of CloudFront's newest features
Traditionally, content delivery networks (CDNs) were designed to accelerate static content. Amazon CloudFront supports delivery of an entire website, including dynamic, static, streaming, and interactive content, using a global network of edge locations. CloudFront integrates with other AWS services that are built to scale massively. Together, they can automatically scale to millions of users by leveraging the global reach of CloudFront and the auto scaling capability of the AWS platform. In this talk, we introduce you to design patterns and best practices for building a massively scalable solution using CloudFront. We discuss how this scale can be achieved without compromising availability, security, or cost.