Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
1. Azure Synapse Analytics
James Serra
Data & AI Architect
Microsoft, NYC MTC
JamesSerra3@gmail.com
Blog: JamesSerra.com
Modified 5/16/20
2. About Me
Microsoft, Big Data Evangelist
In IT for 30 years, worked on many BI and DW projects
Worked as desktop/web/database developer, DBA, BI and DW architect and developer, MDM architect, PDW/APS developer
Been perm employee, contractor, consultant, business owner
Presenter at PASS Business Analytics Conference, PASS Summit, Enterprise Data World conference
Certifications: MCSE: Data Platform, Business Intelligence; MS: Architecting Microsoft Azure Solutions, Design and Implement Big Data Analytics Solutions, Design and Implement Cloud Data Platform Solutions
Blog at JamesSerra.com
Former SQL Server MVP
Author of book “Reporting with Microsoft SQL Server 2012”
3. Agenda
Introduction
Studio
Data Integration
SQL Analytics
Data Storage and Performance Optimizations
SQL On-Demand
Spark
Security
Connected Services
4. Azure Synapse Analytics is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs.
5. Azure Synapse – SQL Analytics focus areas
• Best-in-class price per performance: up to 94% less expensive than competitors
• Developer productivity: use preferred tooling for SQL data warehouse development
• Workload-aware query execution: manage heterogeneous workloads through workload priorities and isolation
• Data flexibility: ingest a variety of data sources to derive the maximum benefit; query all data
• Industry-leading security: defense-in-depth security and a 99.9% financially backed availability SLA
6. Leveraging ISV partners with Azure Synapse Analytics
Azure Synapse Analytics works with Power BI, Azure Machine Learning, Azure Data Share, and a broad partner ecosystem (+ many more).
7. What workloads are NOT suitable?
Operational workloads (OLTP):
• High frequency reads and writes.
• Large numbers of singleton selects.
• High volumes of single row inserts.
Data preparation:
• Row by row processing needs.
• Incompatible formats (XML).
8. What Workloads are Suitable?
Analytics:
• Store large volumes of data.
• Consolidate disparate data into a single location.
• Shape, model, transform and aggregate data.
• Batch/micro-batch loads.
• Perform query analysis across large datasets.
• Ad-hoc reporting across large data volumes.
All using simple SQL constructs (see the sketch below).
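To make "simple SQL constructs" concrete, here is a minimal CTAS sketch of the consolidate/shape/aggregate pattern above; all table and column names are hypothetical:

```sql
-- Create a new distributed table from an aggregate over a hypothetical fact table (CTAS).
CREATE TABLE dbo.SalesByRegion
WITH
(
    DISTRIBUTION = HASH(RegionKey),   -- distribute rows on the aggregation key
    CLUSTERED COLUMNSTORE INDEX       -- typical storage for large analytic tables
)
AS
SELECT
    RegionKey,
    SUM(SalesAmount) AS TotalSales,
    COUNT_BIG(*)     AS OrderCount
FROM dbo.FactSales
GROUP BY RegionKey;
```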
12. Welcome to Azure Synapse Analytics
Data warehousing & big data analytics—all in one service
Azure brings these two worlds together
13. Synapse Analytics roadmap (Studio, SQL Analytics, Apache Spark, Data Integration)
Synapse Analytics (GA), formerly SQL DW (“v1”) – adds new capabilities to the GA service:
• New GA features: Result-set caching, Materialized Views, Ordered columnstore, JSON support, Dynamic Data Masking, SSDT support, Read committed snapshot isolation, Private Link support
• Public preview features: Workload Isolation, Simple ingestion with COPY, Share DW data with Azure Data Share
• Private preview features: Streaming ingestion & analytics in DW, Native Prediction/Scoring, Fast query over Parquet files, FROM clause with joins
Synapse Analytics (PREVIEW) (“v2”) – private preview features: Synapse Studio, Collaborative workspaces, Distributed T-SQL Query service, SQL Script editor, Unified security model, Notebooks, Apache Spark, On-demand T-SQL, Code-free data flows, Orchestration Pipelines, Data movement, Integrated Power BI
Far future: Gen3 (“v3”)
14. Azure Synapse Analytics
Integrated data platform for BI, AI and continuous intelligence, serving artificial intelligence / machine learning / Internet of Things and intelligent apps / business intelligence:
• Experience: Azure Synapse Studio
• Analytics runtimes: SQL, in provisioned and on-demand form factors
• Languages: SQL, Python, .NET, Java, Scala, R
• Shared services: metastore, security, management, monitoring, data integration
• Platform: Azure Data Lake Storage, Common Data Model, enterprise security, optimized for analytics
15. Connected services
The same integrated data platform for BI, AI and continuous intelligence (Synapse Studio experience, runtimes, languages, and platform services as on the previous slide) links out to connected services:
• Azure Data Catalog
• Azure Data Lake Storage
• Azure Data Share
• Azure Databricks
• Azure HDInsight
• Azure Machine Learning
• Power BI
• 3rd party integration
16. New Products/Features
• Azure Synapse Analytics – Umbrella name. For now it just includes SQL DW; the preview adds a Synapse workspace which includes SQL DW and all the new products/features below
• Azure Synapse Studio – New product. Single pane of glass delivered as a web-based experience. Collaborative workspaces. Access SQL databases, Spark tables, SQL scripts, notebooks (supporting multiple languages), data flows (Data Integration), pipelines (Data Integration), monitoring, and security. Has links to ADLS Gen2 and a Power BI workspace
• Data Integration – Really just Azure Data Factory (ADF); they use the same code base. Note that in Synapse Studio, data flows are under “Develop”, pipelines are under “Orchestrate”, and datasets are under “Data” (in ADF they are all under “Author”)
• Spark – Including Apache Spark is new. Similar to Spark in SQL Server 2019 BDC
• On-demand T-SQL – New feature. Was code-named Starlight Query
• T-SQL over ADLS Gen2 – New feature. Was code-named Starlight Query
• New SQL DW features (see next slides) – Some are GA now and some are in preview
• Multiple query options (see next slides) – Some are GA now and some are in preview
• Distributed Query Processor (see next slides) – Some are in preview or Gen3
17. New Synapse Features
GA features:
• Performance: Result-set caching
• Performance: Materialized Views (see the sketch after this list)
• Performance: Ordered clustered columnstore index
• Heterogeneous data: JSON support
• Trustworthy computation: Dynamic Data Masking
• Continuous integration & deployment: SSDT support
• Language: Read committed snapshot isolation
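The two performance features above (result-set caching and materialized views) are exercised with plain T-SQL. A minimal sketch, with the database, view, and table names hypothetical:

```sql
-- Result-set caching is a database-level setting (run against master; database name hypothetical):
-- ALTER DATABASE MyDW SET RESULT_SET_CACHING ON;

-- Materialized view over a hypothetical fact table; the aggregate is
-- precomputed and maintained automatically as the base table changes.
CREATE MATERIALIZED VIEW dbo.mvSalesByDate
WITH (DISTRIBUTION = HASH(DateKey))
AS
SELECT
    DateKey,
    SUM(SalesAmount) AS TotalSales,
    COUNT_BIG(*)     AS RowCnt
FROM dbo.FactSales
GROUP BY DateKey;
```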
Public Preview features:
• Workload management: Workload Isolation
• Data ingestion: Simple ingestion with COPY (see the sketch after this list)
• Data Sharing: Share DW data with Azure Data Share
• Trustworthy computation: Private Link support
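A minimal sketch of the COPY ingestion flagged above; the storage path, target table, and header handling are hypothetical, and a CREDENTIAL clause is typically needed unless the storage is publicly readable:

```sql
-- Load CSV files from a hypothetical staging path into a hypothetical table.
COPY INTO dbo.FactSales
FROM 'https://mydatalake.blob.core.windows.net/staging/sales/*.csv'
WITH
(
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',
    FIRSTROW = 2   -- skip the header row
    -- ,CREDENTIAL = (IDENTITY = 'Storage Account Key', SECRET = '<key>')
);
```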
Private Preview features:
• Data ingestion: Streaming ingestion & analytics in DW
• Built-in ML: Native Prediction/Scoring
• Data lake enabled: Fast query over Parquet files
• Language: Updateable distribution column
• Language: FROM clause with joins
• Language: Multi-column distribution support
• Security: Column-level Encryption
Note: private preview features require whitelisting.
19. Query Options
1. Provisioned SQL over relational database – traditional SQL DW [existing]
2. Provisioned SQL over ADLS Gen2 – via external tables or OPENROWSET [existing via external tables in PolyBase; OPENROWSET not yet in preview]
3. On-demand SQL over relational database – depends on the flexible data model (data cells) over columnstore data [new, not yet in preview: the ability to query a SQL relational database (and other types of data sources) will come later]
4. On-demand SQL over ADLS Gen2 – via external tables or OPENROWSET [new in preview] (see the sketch at the end of this slide)
5. Provisioned Spark over relational database [new in preview]
6. Provisioned Spark over ADLS Gen2 [new in preview]
7. On-demand Spark over relational database – on-demand Spark is not supported (but provisioned Spark can auto-pause)
8. On-demand Spark over ADLS Gen2 – on-demand Spark is not supported (but provisioned Spark can auto-pause)
Notes:
• Separation of state (data, metadata and transactional logs) and compute
• Queries against data loaded into SQL Analytics tables are 2-3x faster than queries over external tables
• COPY statement: improved performance compared to PolyBase. PolyBase is not used, but its functional aspects are supported
• Warm-up for the first on-demand SQL query takes about 30-40 seconds
• If you create a Spark table, that table will be created as an external table in SQL Pool or SQL On-Demand without having to keep a Spark cluster up and running
• Currently there is one on-demand SQL pool, but by GA many will be supported
• Provisioned SQL may give you better and more predictable performance due to resource reservation
• Existing PolyBase via external tables is not pushdown (#2), but #4 will be pushdown (SQL on-demand will push down queries from the front-end to back-end nodes)
• Supported file formats are Parquet, CSV, and JSON
• Each SQL pool can currently only access tables created within its pool (there is one database per pool), while on-demand SQL cannot yet query a database
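Option 4 above is the simplest to try from the on-demand endpoint. A minimal sketch, assuming a hypothetical storage account and a folder of Parquet files:

```sql
-- On-demand (serverless) SQL over Parquet in ADLS Gen2; account and path are hypothetical.
SELECT TOP 100 *
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/curated/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS sales;
```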
20. Distributed Query Processor (DQP) - Preview
• Auto-scale compute nodes (on-demand SQL in preview, provisioned SQL in Gen3) - Instructs the underlying fabric that more compute power is needed to adjust to peaks during the workload. If compute power is granted, the DQP will re-distribute tasks leveraging the new compute container. Note that in-flight tasks in the previous topology continue running, while new queries get the new compute power with the new re-balancing
• Compute node fault tolerance (on-demand SQL in preview, provisioned SQL in Gen3) - Recover from faulty nodes while a
query is running. If a node fails the DQP re-schedules the tasks in the faulted node through the remainder of the healthy topology
• Compute node hot spot: rebalance queries or scale out nodes (on-demand SQL in preview, provisioned SQL in Gen3) - Can
detect hot spots in the existing topology. That is, overloaded compute nodes due to data skew. In the advent of a compute node
running hot because of skewed tasks, the DQP can decide to re-schedule some of the tasks assigned to that compute node
amongst others where the load is less
• Multi-master cluster (provisioned SQL only in Gen3) - User workloads can operate over the same shareable relational data set
while having independent clusters to serve those various workloads. Allows for very high concurrency. So you could have multiple
SQL pools all accessing the same database. Databases are not tied to a pool
• Cross-database queries (provisioned SQL only in Gen3) – A query can specify multiple databases. This is because the data of
the databases are not in the pool. Rather, each pool just has the metadata of all the databases and the data of the databases are in
a separate sharable storage layer
• Query scheduler (provisioned SQL only in Gen3) – New way of executing queries within the data warehouse using a scheduler
and resource manager/estimator. When a query is submitted, estimates how many resources are needed to complete request and
schedules it (and can use workload importance/isolation). Will completely remove the need for concurrency limits. This is how
SQL Server works today
22. Query Demo
Demo order by engine and data source (numbers show the order of the demos; X = not available):
                   Relational Data   ADLS Gen2
Provisioned SQL    3                 5 (external table)
On-demand SQL      X                 1
Spark              4                 2
Supported file formats are parquet, csv, json
24. Migration Path
SQL DW/Synapse – All of the data warehousing features that were generally available in Azure SQL Data Warehouse (intelligent workload
management, dynamic data masking, materialized views, etc.) continue to be generally available today. Businesses can continue running their
existing data warehouse workloads in production today with Azure Synapse and will automatically benefit from the new capabilities which are
in preview (unified experience with Azure Synapse studio, query-as-a-service, built-in data integration, integrated Apache Spark, etc.) once
they become generally available and can use them in production if they choose to do so. Customers will not have to migrate any workloads as
SQL DW will simply be moved under a Synapse workspace
Azure Data Factory - Continue using Azure Data Factory. When the new data integration functionality within Azure Synapse becomes
generally available, we will provide the capability to import your Azure Data Factory pipelines into an Azure Synapse workspace. Your existing
Azure Data Factory accounts and pipelines will work with Azure Synapse if you choose not to import them into the Azure Synapse workspace.
Note that the Azure-SSIS Integration Runtime (IR) will not be supported in Synapse
Power BI – Customers link to a Power BI workspace within Azure Synapse Studio so no migration needed
ADLS Gen2 – Customers link to ADLS Gen2 within Azure Synapse Studio so no migration needed
Azure Databricks – ADB notebooks can be exported as .ipynb files and then imported into Synapse Spark; that part is easy. The hard part is if
the user code depends on features unique to ADB (like dbutils) or behaviors unique to ADB (like ML Runtime, GPU support, etc.)
Azure HDInsight - The Spark runtime within the Azure Synapse service is different from HDInsight
SSMS – Can connect to on-demand SQL and provisioned SQL
25. Transforming data options
In order of easiest to hardest, fewest features to most features:
1. Azure Data Factory Wrangling Data flows
2. Azure Data Factory Mapping Data flows
3. T-SQL in on-demand or provisioned SQL pools
4. Synapse Spark
5. Databricks
26. ETL options
In order of easiest to hardest:
1. Azure Data Factory/SSIS
2. T-SQL in on-demand or provisioned SQL pools (COPY INTO)
3. T-SQL in on-demand or provisioned SQL pools (CETAS/CTAS; see the sketch below)
4. Synapse Spark
5. Databricks
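To make ETL option 3 concrete, here is a minimal CTAS sketch that materializes the result of a query over a (hypothetical) external table into a round-robin staging heap; all object names are placeholders:
-- ETL option 3: CTAS from an external table into a staging table
-- (table and schema names are hypothetical)
CREATE TABLE dbo.StageOrders
WITH
(
    HEAP,
    DISTRIBUTION = ROUND_ROBIN
)
AS
SELECT OrderId, OrderDate, Amount
FROM ext.Orders; -- external table defined over files in the data lake
Once staged, an INSERT…SELECT (or another CTAS) moves the data into production tables.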
27. Provisioning Synapse workspace
Workspace: Single manageability point and security
boundary. Manage all the resources (i.e. pools) within
one unit. Tied to a specific region, subscription, and
resource group.
Provisioning Synapse is easy
Subscription
Resource Group
Workspace Name
Region
Data Lake Storage Account
32. Top 10 questions
1. Can I access storage accounts, other than the one I setup the Synapse workspace with?
Yes you can. To add a new storage account, just create a new linked service to the storage and it will show up in the data hub, under storage
2. Will anyone who has access to the workspace have access to the storage account?
No. Users who do not have access to the storage account will see it listed but will not be able to access the data
3. Is Power BI in Synapse a replacement for Power BI Desktop?
You can connect your Power BI service workspace to the Synapse workspace. It is not an authoring tool like Power BI Desktop; it is more like
the Power BI service
4. Can I run OPENROWSET statement using SQL Pool?
No, you can only run the OPENROWSET statement from SQL on-demand
5. We can create a Spark pool and SQL pool from the portal, but how do we create a SQL on-demand database?
You can create a SQL on-demand database by running a CREATE DATABASE statement in a SQL script in the studio
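For example, a minimal sketch (the database name is a hypothetical placeholder):
-- Run in a SQL script connected to SQL on-demand
CREATE DATABASE demoOnDemandDb;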
6. Can I import my existing Azure SQL DW into Synapse?
Not yet. When you create a SQL pool, you have the option to create a blank DB or restore from a backup or restore point. You can restore the
DB, but it is a new database
7. Does Synapse Analytics support delta lake?
Yes, we support delta lake. It is already available.
8. VNet Integration and restricted access to client IP’s?
Soon
9. Is Data Discovery & Classification available?
Soon
10. Auditing to log analytics?
Soon
33. Top documentation links
• What is SQL on-demand?: link
• What is Apache Spark in Azure Synapse Analytics?: link
• Best practices for SQL pool in Azure Synapse Analytics: link
• Best practices for SQL on-demand in Azure Synapse Analytics: link
• Azure Synapse Analytics shared metadata: link
• Use maintenance schedules to manage service updates and maintenance: link
• Cheat sheet for Azure Synapse Analytics (formerly SQL DW): link
• Best practices for SQL Analytics in Azure Synapse Analytics (formerly SQL DW): link
35. Parallelism
MPP - Massively Parallel Processing
• Uses many separate CPUs running in parallel to execute a single program
• Shared nothing: each CPU has its own memory and disk (scale-out)
• Segments communicate using a high-speed network between nodes
SMP - Symmetric Multiprocessing
• Multiple CPUs used to complete individual processes simultaneously
• All CPUs share the same memory, disks, and network controllers (scale-up)
• All SQL Server implementations up until now have been SMP
• Mostly, the solution is housed on a shared SAN
36. SQL DW Logical Architecture (overview)
[Diagram: a control node in front of four compute nodes, each with balanced storage, SQL, and DMS]
Compute Node – the “worker bee” of SQL DW
• Runs Azure SQL Server DB
• Contains a “slice” of each database
• CPU is saturated by storage
Control Node – the “brains” of the SQL DW
• Also runs Azure SQL Server DB
• Holds a “shell” copy of each database
• Metadata, statistics, etc
• The “public face” of the appliance
Data Movement Services (DMS)
• Part of the “secret sauce” of SQL DW
• Moves data around as needed
• Enables parallel operations among the compute
nodes (queries, loads, etc)
37. SQL DW Logical Architecture (overview)
[Diagram: control node plus four compute nodes, each with balanced storage, SQL, and DMS]
1) User connects to the appliance (control node) and submits query
2) Control node query processor determines best *parallel* query plan
3) DMS distributes sub-queries to each compute node
4) Each compute node executes query on its subset of data
5) Each compute node returns a subset of the response to the control node
6) If necessary, control node does any final aggregation/computation
7) Control node returns results to user
Queries run in parallel on subsets of the data, using separate pipes, effectively making the pipe larger
38. SQL DW Data Layout Options
[Diagram: a star schema laid out across four compute nodes with DMS – a Sales Fact table (Date Dim ID, Store Dim ID, Prod Dim ID, Cust Dim ID, Qty Sold, Dollars Sold) surrounded by Time Dim (Date Dim ID, Calendar Year/Qtr/Mo/Day), Store Dim (Store Dim ID, Store Name, Store Mgr, Store Size), Product Dim (Prod Dim ID, Prod Category, Prod Sub Cat, Prod Desc), and Customer Dim (Cust Dim ID, Cust Name, Addr, Phone, Email); the fact table is spread across the nodes while each dimension is copied to every node]
Replicated
Table copied to each compute node
Distributed
Table spread across compute nodes based on “hash”
Star Schema
44. Azure Synapse Analytics
Integrated data platform for BI, AI and continuous intelligence
[Diagram: the Azure Synapse Studio experience on top of analytics runtimes (SQL, in provisioned and on-demand form factors) and languages (Python, .NET, Java, Scala, R), plus data integration; a shared metastore, security, management, and monitoring layer; all built on Azure Data Lake Storage with Common Data Model support, enterprise security, and storage optimized for analytics; serving AI/ML/IoT, intelligent apps, and business intelligence]
45. Studio
A single place for Data Engineers, Data Scientists, and IT Pros to collaborate on enterprise analytics
https://web.azuresynapse.net
46. Synapse Studio
Synapse Studio is divided into activity hubs, which organize the tasks needed for building an analytics solution:
Overview – Quick access to common gestures, most-recently used items, and links to tutorials and documentation.
Data – Explore structured and unstructured data.
Develop – Write code and define the business logic of the pipeline via notebooks, SQL scripts, data flows, etc.
Orchestrate – Design pipelines that move and transform data.
Monitor – Centralized view of all resource usage and activities in the workspace.
Manage – Configure the workspace, pools, and access to artifacts.
52. Data Hub – Storage accounts
Browse Azure Data Lake Storage Gen2 accounts and filesystems – navigate through folders to see data
ADLS Gen2 Account
Container (filesystem)
Filepath
53. Data Hub – Storage accounts
Preview a sample of your data
54. Data Hub – Storage accounts
See basic file properties
55. Data Hub – Storage accounts
Manage Access - Configure standard POSIX ACLs on files and folders
56. Data Hub – Storage accounts
Two simple gestures to start analyzing with SQL scripts or with notebooks.
T-SQL or PySpark auto-generated.
57. Data Hub – Storage accounts
SQL Script from Multiple files
Multi-select of files generates a SQL script that analyzes all those files together
58. Data Hub – Databases
Explore the different kinds of databases that exist in a workspace.
SQL pool
SQL on-demand
Spark
59. Data Hub – Databases
Familiar gesture to generate T-SQL scripts from SQL
metadata objects such as tables.
Starting from a table, auto-generate a single line of PySpark code
that makes it easy to load a SQL table into a Spark dataframe
60. Data Hub – Datasets
Orchestration datasets describe data that is persisted. Once a dataset is defined, it can be used in pipelines as a
source of data or as a sink of data.
62. Develop Hub
Overview
Provides a development experience to query, analyze, and model data
Benefits
Multiple languages to analyze data under one umbrella
Switch between notebooks and scripts without losing content
Code IntelliSense offers reliable code development
Create insightful visualizations
63. Develop Hub - SQL scripts
SQL Script
Authoring SQL Scripts
Execute SQL script on provisioned SQL Pool or SQL
On-demand
Publish individual SQL script or multiple SQL
scripts through Publish all feature
Language support and intellisense
64. Develop Hub - SQL scripts
SQL Script
View results in Table or Chart form and export results in
several popular formats
65. Develop Hub - Notebooks
Notebooks
Allow writing multiple languages in one notebook via cell magics:
%%<name of language>  (see the example below)
Offer use of temporary tables across languages
Language support for syntax highlighting, syntax errors, code completion, smart indent, and code folding
Export results
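As a concrete illustration, here is a minimal sketch of a notebook cell switched to Spark SQL with a magic command; the database and table names are hypothetical placeholders:
%%sql
-- Query a (hypothetical) Spark table from a SQL cell in the same notebook
SELECT Country, COUNT(*) AS OrderCount
FROM demoSparkDb.orders
GROUP BY Country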
66. Develop Hub - Notebooks
Configure session allows developers to control how many resources
are devoted to running their notebook.
67. Develop Hub - Notebooks
As notebook cells run, the underlying Spark application status is shown, providing immediate feedback and progress tracking.
68. Dataflow Capabilities
• Handle upserts, updates, and deletes on SQL sinks
• Add new partition methods
• Add schema drift support
• Add file handling (move files after read, write files to file names described in rows, etc.)
• New inventory of functions (e.g. hash functions for row comparison)
• Commonly used ETL patterns (sequence generator, lookup transformation, SCD, …)
• Data lineage – capturing sink column lineage & impact analysis (invaluable for enterprise deployment)
• Implement commonly used ETL patterns as templates (SCD Type 1, Type 2, Data Vault)
69. Develop Hub - Data Flows
Data flows are a visual way of specifying how to transform data.
Provides a code-free experience.
70. Develop Hub – Power BI
Overview
Create Power BI reports in the workspace
Provides access to published reports in the workspace
Update reports in real time from the Synapse workspace and have the changes reflected in the Power BI service
Visually explore and analyze data
Currently only one Power BI linked service can be created, linking a single Power BI workspace
71. Develop Hub – Power BI
View published reports in Power BI workspace
72. Develop Hub – Power BI
Edit reports in Synapse workspace
73. Develop Hub – Power BI
Publish edited reports in the Synapse workspace to the Power BI workspace – publishing changes is a simple save of the report in the workspace
76. Orchestrate Hub
Provides the ability to create pipelines to ingest, transform, and load data with 90+ built-in connectors.
Offers a wide range of activities that a pipeline can perform.
79. Monitoring Hub - Orchestration
Overview
Monitor orchestration in the Synapse workspace for the progress and status of pipelines
Benefits
Track all/specific pipelines
Monitor pipeline run and activity run details
Find the root cause of pipeline failure or activity failure
80. Monitoring Hub - Spark applications
Overview
Monitor Spark pools and Spark applications for the progress and status of activities
Benefits
Monitor Spark pool statuses such as paused, active, resuming, scaling, and upgrading
Track the usage of resources
83. Manage – Linked services
Overview
It defines the connection information needed to
connect to external resources.
Benefits
Offers 90+ pre-built connectors
Easy cross platform data migration
Represents data store or compute resources
84. Manage – Access Control
Overview
Provides access control management over workspace resources and artifacts for admins and users
Benefits
Share workspace with the team
Increases productivity
Manage permissions on code artifacts and Spark
pools
85. Manage – Triggers
Overview
It defines a unit of processing that determines when a
pipeline execution needs to be kicked off.
Benefits
Create and manage
• Schedule trigger
• Tumbling window trigger
• Event trigger
Control pipeline execution
86. Manage – Integration runtimes
Overview
Integration runtimes are the compute infrastructure used by
Pipelines to provide the data integration capabilities across
different network environments. An integration runtime
provides the bridge between the activity and linked services.
Benefits
Offers Azure Integration Runtime or Self-Hosted Integration
Runtime
Azure Integration Runtime – provides fully managed,
serverless compute in Azure
Self-Hosted Integration Runtime – use compute resources in
on-premises machine or a VM inside private network
88. Azure Synapse Analytics
Integrated data platform for BI, AI and continuous intelligence
(Platform overview diagram repeated; see slide 44.)
89. Orchestration @ Scale
[Diagram: a trigger kicks off a pipeline of activities; linked services connect through the Azure Integration Runtime or a self-hosted Integration Runtime; the legend distinguishes command-and-control paths from data paths]
90. Data Movement
Scalable
• Per-job elasticity
• Up to 4 GB/s
Simple
• Visually author or via code (Python, .NET, etc.)
• Serverless, no infrastructure to manage
Access all your data
• 90+ connectors provided and growing (cloud, on premises, SaaS)
• Data Movement as a Service: 25 points of presence worldwide
• Self-hostable Integration Runtime for hybrid movement
91. 90+ Connectors out of the box
Azure (15): Blob storage, Cosmos DB - SQL API, Cosmos DB - MongoDB API, Data Explorer, Data Lake Storage Gen1, Data Lake Storage Gen2, Database for MariaDB, Database for MySQL, Database for PostgreSQL, File Storage, SQL Database, SQL Database MI, SQL Data Warehouse, Search index, Table storage
Database & DW (26): Amazon Redshift, DB2, Drill, Google BigQuery, Greenplum, HBase, Hive, Apache Impala, Informix, MariaDB, Microsoft Access, MySQL, Netezza, Oracle, Phoenix, PostgreSQL, Presto, SAP BW Open Hub, SAP BW via MDX, SAP HANA, SAP table, Spark, SQL Server, Sybase, Teradata, Vertica
File Storage (6): Amazon S3, File system, FTP, Google Cloud Storage, HDFS, SFTP
File Formats (6): AVRO, Binary, Delimited Text, JSON, ORC, Parquet
NoSQL (3): Cassandra, Couchbase, MongoDB
Services and Apps (28): Amazon MWS, CDS for Apps, Concur, Dynamics 365, Dynamics AX, Dynamics CRM, Google AdWords, HubSpot, Jira, Magento, Marketo, Office 365, Oracle Eloqua, Oracle Responsys, Oracle Service Cloud, PayPal, QuickBooks, Salesforce, SF Service Cloud, SF Marketing Cloud, SAP C4C, SAP ECC, ServiceNow, Shopify, Square, Web table, Xero, Zoho
Generic (4): Generic HTTP, Generic OData, Generic ODBC, Generic REST
92. Pipelines
Overview
Provides the ability to load data from a storage account to a desired linked service. Load data by manual execution of a pipeline or by orchestration
Benefits
Supports common loading patterns
Fully parallel loading into data lake or SQL
tables
Graphical development experience
93. Prep & Transform Data
Mapping Dataflow
Code free data transformation @scale
Wrangling Dataflow
Code free data preparation @scale
94. Triggers
Overview
Triggers represent a unit of processing that
determines when a pipeline execution needs to be
kicked off.
Data Integration offers 3 trigger types:
1. Schedule – gets fired at a schedule with
information of start date, recurrence, end date
2. Event – gets fired on specified event
3. Tumbling window – gets fired at a periodic time
interval from a specified start date, while
retaining state
It also provides ability to monitor pipeline runs and
control trigger execution.
95. Manage – Linked Services
Overview
It defines the connection information needed for
Pipeline to connect to external resources.
Benefits
Offers 85+ pre-built connectors
Easy cross platform data migration
Represents data store or compute resources
NOTE: Linked Services are all for Data Integration
except for Power BI (eventually ADC, Databricks)
96. Manage – Integration runtimes
Overview
It is the compute infrastructure used by Pipelines to provide
the data integration capabilities across different network
environments. An integration runtime provides the bridge
between the activity and linked Services.
Benefits
Offers Azure Integration Runtime or Self-Hosted Integration
Runtime
Azure Integration Runtime – provides fully managed,
serverless compute in Azure
Self-Hosted Integration Runtime – use compute resources in
on-premises machine or a VM inside private network
98. Azure Synapse Analytics
Integrated data platform for BI, AI and continuous intelligence
(Platform overview diagram repeated; see slide 44.)
99. Platform: Performance
Overview
SQL Data Warehouse’s industry leading price-performance
comes from leveraging the Azure ecosystem and core SQL
Server engine improvements to produce massive gains in
performance.
These benefits require no customer configuration and are
provided out-of-the-box for every data warehouse
• Gen2 adaptive caching – using non-volatile memory solid-
state drives (NVMe) to increase the I/O bandwidth
available to queries.
• Azure FPGA-accelerated networking enhancements – to
move data at rates of up to 1GB/sec per node to improve
queries
• Instant data movement – leverages multi-core parallelism
in underlying SQL Servers to move data efficiently between
compute nodes.
• Query Optimization – ongoing investments in distributed
query optimization
100. TPC-H 1 Petabyte query times
[Chart: execution times for the 22 TPC-H queries]
The first and only analytics system to have run all TPC-H queries at petabyte-scale
101. TPC-H 1 Petabyte Query Execution
[Chart: execution times for the 22 TPC-H queries]
Azure Synapse is the first and only analytics system to have run all TPC-H queries at 1 petabyte-scale
103. OVER clause
Defines a window or specified set of rows within a query
result set
Computes a value for each row in the window
Aggregate functions
COUNT, MAX, AVG, SUM, APPROX_COUNT_DISTINCT,
MIN, STDEV, STDEVP, STRING_AGG, VAR, VARP,
GROUPING, GROUPING_ID, COUNT_BIG, CHECKSUM_AGG
Ranking functions
RANK, NTILE, DENSE_RANK, ROW_NUMBER
Analytical functions
LAG, LEAD, FIRST_VALUE, LAST_VALUE, CUME_DIST,
PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK
ROWS | RANGE
PRECEDING, UNBOUNDED PRECEDING, CURRENT ROW,
BETWEEN, FOLLOWING, UNBOUNDED FOLLOWING
Windowing functions
SELECT
ROW_NUMBER() OVER(PARTITION BY PostalCode ORDER BY SalesYTD DESC
) AS "Row Number",
LastName,
SalesYTD,
PostalCode
FROM Sales
WHERE SalesYTD <> 0
ORDER BY PostalCode;
Row Number LastName SalesYTD PostalCode
1 Mitchell 4251368.5497 98027
2 Blythe 3763178.1787 98027
3 Carson 3189418.3662 98027
4 Reiter 2315185.611 98027
5 Vargas 1453719.4653 98027
6 Ansman-Wolfe 1352577.1325 98027
1 Pak 4116870.2277 98055
2 Varkey Chudukaktil 3121616.3202 98055
3 Saraiva 2604540.7172 98055
4 Ito 2458535.6169 98055
5 Valdez 1827066.7118 98055
6 Mensa-Annan 1576562.1966 98055
7 Campbell 1573012.9383 98055
8 Tsoflias 1421810.9242 98055
104. Analytical functions
LAG, LEAD, FIRST_VALUE, LAST_VALUE, CUME_DIST,
PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK
Windowing Functions (continued)
--LAG Function
SELECT BusinessEntityID,
YEAR(QuotaDate) AS SalesYear,
SalesQuota AS CurrentQuota,
LAG(SalesQuota, 1,0) OVER (ORDER BY YEAR(QuotaDate)) AS PreviousQuota
FROM Sales.SalesPersonQuotaHistory
WHERE BusinessEntityID = 275 and YEAR(QuotaDate) IN ('2005','2006');
BusinessEntityID SalesYear CurrentQuota PreviousQuota
---------------- ----------- --------------------- ---------------------
275 2005 367000.00 0.00
275 2005 556000.00 367000.00
275 2006 502000.00 556000.00
275 2006 550000.00 502000.00
275 2006 1429000.00 550000.00
275 2006 1324000.00 1429000.00
-- PERCENTILE_CONT, PERCENTILE_DISC
SELECT DISTINCT Name AS DepartmentName
,PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY ph.Rate)
OVER (PARTITION BY Name) AS MedianCont
,PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY ph.Rate)
OVER (PARTITION BY Name) AS MedianDisc
FROM HumanResources.Department AS d
INNER JOIN HumanResources.EmployeeDepartmentHistory AS dh
ON dh.DepartmentID = d.DepartmentID
INNER JOIN HumanResources.EmployeePayHistory AS ph
ON ph.BusinessEntityID = dh.BusinessEntityID
WHERE dh.EndDate IS NULL;
DepartmentName MedianCont MedianDisc
-------------------- ------------- -------------
Document Control 16.8269 16.8269
Engineering 34.375 32.6923
Executive 54.32695 48.5577
Human Resources 17.427850 16.5865
105. Windowing Functions (continued)
ROWS | RANGE
PRECEDING, UNBOUNDED PRECEDING, CURRENT ROW,
BETWEEN, FOLLOWING, UNBOUNDED FOLLOWING
-- First_Value
SELECT JobTitle, LastName, VacationHours AS VacHours,
FIRST_VALUE(LastName) OVER (PARTITION BY JobTitle
ORDER BY VacationHours ASC ROWS UNBOUNDED PRECEDING ) AS
FewestVacHours
FROM HumanResources.Employee AS e
INNER JOIN Person.Person AS p
ON e.BusinessEntityID = p.BusinessEntityID
ORDER BY JobTitle;
JobTitle LastName VacHours FewestVacHours
--------------------------------- ---------------- ---------- -------------------
Accountant Moreland 58 Moreland
Accountant Seamans 59 Moreland
Accounts Manager Liu 57 Liu
Accounts Payable Specialist Tomic 63 Tomic
Accounts Payable Specialist Sheperdigian 64 Tomic
Accounts Receivable Specialist Poe 60 Poe
Accounts Receivable Specialist Spoon 61 Poe
Accounts Receivable Specialist Walton 62 Poe
106. Approximate execution
APPROX_COUNT_DISTINCT
Returns the approximate number of unique non-null values in a group.
Use case: approximating web usage trend behavior
-- Syntax
APPROX_COUNT_DISTINCT ( expression )
-- The approximate number of different order keys by order status from the orders table.
SELECT O_OrderStatus, APPROX_COUNT_DISTINCT(O_OrderKey) AS Approx_Distinct_OrderKey
FROM dbo.Orders
GROUP BY O_OrderStatus
ORDER BY O_OrderStatus;
HyperLogLog accuracy
Will return a result with a 2% accuracy of true cardinality on average.
e.g. if COUNT(DISTINCT) returns 1,000,000, HyperLogLog will return a value in the range of 999,736 to 1,016,234.
108. Group by options
Group by with rollup
Creates a group for each combination of column expressions.
Rolls up the results into subtotals and grand totals.
Calculates the aggregates of hierarchical data.
Grouping sets
Combine multiple GROUP BY clauses into one GROUP BY clause.
Equivalent of a UNION ALL of the specified groups.
-- GROUP BY ROLLUP Example --
SELECT Country,
       Region,
       SUM(Sales) AS TotalSales
FROM Sales
GROUP BY ROLLUP (Country, Region);
-- Results --
Country        Region            TotalSales
Canada         Alberta           100
Canada         British Columbia  500
Canada         NULL              600
United States  Montana           100
United States  NULL              100
NULL           NULL              700
-- GROUPING SETS Example --
SELECT Country,
       SUM(Sales) AS TotalSales
FROM Sales
GROUP BY GROUPING SETS ( Country, () );
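For the same sample data, the GROUPING SETS query returns the per-country subtotals plus the grand total, without the per-region rows:
-- Results --
Country        TotalSales
Canada         600
United States  100
NULL           700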
109. Overview
Specifies that statements cannot read data that has been modified but
not committed by other transactions.
This prevents dirty reads.
Isolation level
• READ COMMITTED
• REPEATABLE READ
• SNAPSHOT
• READ UNCOMMITTED (dirty reads)
• SERIALIZABLE
READ_COMMITTED_SNAPSHOT
OFF (Default) – Uses shared locks to prevent other transactions from
modifying rows while running a read operation
ON – Uses row versioning to present each statement with a
transactionally consistent snapshot of the data as it existed at the start of
the statement. Locks are not used to protect the data from updates.
Snapshot isolation
ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE MyDatabase SET
READ_COMMITTED_SNAPSHOT ON
110. Overview
The JSON format enables representation of
complex or hierarchical data structures in tables.
JSON data is stored using standard NVARCHAR
table columns.
Benefits
Transform arrays of JSON objects into table
format
Performance optimization using clustered
columnstore indexes and memory optimized
tables
JSON data support – insert JSON data
-- Create Table with column for JSON string
CREATE TABLE CustomerOrders
(
    CustomerId BIGINT NOT NULL,
    Country NVARCHAR(150) NOT NULL,
    OrderDetails NVARCHAR(3000) NOT NULL -- NVARCHAR column for JSON
) WITH (DISTRIBUTION = ROUND_ROBIN)
-- Populate table with semi-structured data
INSERT INTO CustomerOrders
VALUES
( 101,         -- CustomerId
  'Bahrain',   -- Country
  N'[{ "StoreId": "AW73565",
       "Order": { "Number":"SO43659",
                  "Date":"2011-05-31T00:00:00"
                },
       "Item": { "Price":2024.40, "Quantity":1 }
     }]'      -- OrderDetails
)
111. Overview
Read JSON data stored in a string column with the
following:
• ISJSON – verify if text is valid JSON
• JSON_VALUE – extract a scalar value from a JSON
string
• JSON_QUERY – extract a JSON object or array from a
JSON string
Benefits
Ability to get standard columns as well as JSON column
Perform aggregation and filter on JSON values
JSON data support – read JSON data
-- Return all rows with valid JSON data
SELECT CustomerId, OrderDetails
FROM CustomerOrders
WHERE ISJSON(OrderDetails) > 0;
CustomerId OrderDetails
101        N'[{ "StoreId": "AW73565", "Order": { "Number":"SO43659",
           "Date":"2011-05-31T00:00:00" }, "Item": { "Price":2024.40,
           "Quantity":1 }}]'
-- Extract values from JSON string
SELECT CustomerId,
Country,
JSON_VALUE(OrderDetails,'$.StoreId') AS StoreId,
JSON_QUERY(OrderDetails,'$.Item') AS ItemDetails
FROM CustomerOrders;
CustomerId Country StoreId ItemDetails
101 Bahrain AW73565 { "Price":2024.40, "Quantity":1 }
112. Overview
Use standard table columns and values from JSON text
in the same analytical query.
Modify JSON data with the following:
• JSON_MODIFY – modifies a value in a JSON string
• OPENJSON – convert JSON collection to a set of
rows and columns
Benefits
Flexibility to update JSON string using T-SQL
Convert hierarchical data into flat tabular structure
JSON data support – modify and operate on JSON data
-- Modify Item Quantity value
UPDATE CustomerOrders SET OrderDetails =
    JSON_MODIFY(OrderDetails, '$[0].Item.Quantity', 2)
-- Convert JSON collection to rows and columns
SELECT CustomerId,
       OrderDetails.StoreId,
       OrderDetails.OrderDate,
       OrderDetails.OrderPrice
FROM CustomerOrders
CROSS APPLY OPENJSON (CustomerOrders.OrderDetails)
WITH ( StoreId VARCHAR(50) '$.StoreId',
       OrderNumber VARCHAR(100) '$.Order.Number',
       OrderDate DATETIME '$.Order.Date',
       OrderPrice DECIMAL(10,2) '$.Item.Price',
       OrderQuantity INT '$.Item.Quantity'
     ) AS OrderDetails
OrderDetails
N'[{ "StoreId": "AW73565", "Order": { "Number":"SO43659",
"Date":"2011-05-31T00:00:00" }, "Item": { "Price":2024.40, "Quantity": 2}}]'
CustomerId StoreId OrderDate           OrderPrice
101        AW73565 2011-05-31T00:00:00 2024.40
113. Overview
A stored procedure is a group of one or more SQL statements or a
reference to a Microsoft .NET Framework common language
runtime (CLR) method.
Promotes flexibility and modularity.
Supports parameters and nesting.
Benefits
Reduced server/client network traffic, improved
performance
Stronger security
Easy maintenance
Stored Procedures
CREATE PROCEDURE HumanResources.uspGetAllEmployees
AS
SET NOCOUNT ON;
SELECT LastName, FirstName, JobTitle, Department
FROM HumanResources.vEmployeeDepartment;
GO
-- Execute a stored procedures
EXECUTE HumanResources.uspGetAllEmployees;
GO
-- Or
EXEC HumanResources.uspGetAllEmployees;
GO
-- Or, if this procedure is the first statement
within a batch:
HumanResources.uspGetAllEmployees;
115. Database Tables – Optimized Storage
Features: columnar storage, columnar ordering, table partitioning, hash distribution, nonclustered indexes
Benefits: reduced migration risk, less data scanned, smaller cache required, smaller clusters, faster queries
116. -- Create table with index
CREATE TABLE orderTable
(
OrderId INT NOT NULL,
Date DATE NOT NULL,
Name VARCHAR(2),
Country VARCHAR(2)
)
WITH
(
CLUSTERED COLUMNSTORE INDEX |
HEAP |
CLUSTERED INDEX (OrderId)
);
-- Add non-clustered index to table
CREATE INDEX NameIndex ON orderTable (Name);
Clustered Columnstore index (Default Primary)
Highest level of data compression
Best overall query performance
Clustered index (Primary)
Performant for looking up a single to few rows
Heap (Primary)
Faster loading and landing temporary data
Best for small lookup tables
Nonclustered indexes (Secondary)
Enable ordering of multiple columns in a table
Allows multiple nonclustered on a single table
Can be created on any of the above primary indexes
More performant lookup queries
Tables – Indexes
117. SQL Analytics Columnstore Tables
[Diagram: a logical order table (OrderId, Date, Name, Country) stored two ways – as a clustered columnstore index whose rows are sliced into rowgroups with per-column compressed segments and min/max OrderId metadata per rowgroup, plus a delta rowstore holding recent rows; and as a B-tree rowstore index whose pages map OrderId values to rows]
Clustered columnstore index (OrderId)
• Data stored in compressed columnstore segments after being sliced into groups of rows (rowgroups/micro-partitions) for maximum compression
• Rows are stored in the delta rowstore until the number of rows is large enough to be compressed into the columnstore
Clustered/Non-clustered rowstore index (OrderId)
• Data is stored in a B-tree index structure for performant lookup queries of particular rows
• Clustered rowstore index: the leaf nodes in the structure store the data values in a row
• Non-clustered (secondary) rowstore index: the leaf nodes store pointers to the data values, not the values themselves
118. Ordered Clustered Columnstore Indexes
Overview
Queries against tables with ordered columnstore segments can take advantage of improved segment elimination to drastically reduce the time needed to service a query.
-- Create Table with Ordered Columnstore Index
CREATE TABLE sortedOrderTable
(
    OrderId INT NOT NULL,
    Date DATE NOT NULL,
    Name VARCHAR(2),
    Country VARCHAR(2)
)
WITH
(
    CLUSTERED COLUMNSTORE INDEX ORDER (OrderId)
)
-- Insert data into table with ordered columnstore index
INSERT INTO sortedOrderTable
VALUES (1, '01-01-2019', 'Dave', 'UK')
-- Create Ordered Clustered Columnstore Index on existing table
CREATE CLUSTERED COLUMNSTORE INDEX cciOrderId
ON dbo.OrderTable ORDER (OrderId)
119. Tables – Distributions
CREATE TABLE dbo.OrderTable
(
    OrderId INT NOT NULL,
    Date DATE NOT NULL,
    Name VARCHAR(2),
    Country VARCHAR(2)
)
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH([OrderId]) |
                   ROUND_ROBIN |
                   REPLICATE
);
Round-robin distributed
Distributes table rows evenly across all distributions at random.
Hash distributed
Distributes table rows across the Compute nodes by using a deterministic hash function to assign each row to one distribution.
Replicated
Full copy of table accessible on each Compute node.
120. Tables – Partitions
CREATE TABLE partitionedOrderTable
(
    OrderId INT NOT NULL,
    Date DATE NOT NULL,
    Name VARCHAR(2),
    Country VARCHAR(2)
)
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH([OrderId]),
    PARTITION (
        [Date] RANGE RIGHT FOR VALUES (
            '2000-01-01', '2001-01-01', '2002-01-01',
            '2003-01-01', '2004-01-01', '2005-01-01'
        )
    )
);
Overview
Table partitions divide data into smaller groups
In most cases, partitions are created on a date column
Supported on all table types
RANGE RIGHT – Used for time partitions
RANGE LEFT – Used for number partitions
Benefits
Improves efficiency and performance of loading and querying by limiting the scope to a subset of data.
Offers significant query performance enhancements where filtering on the partition key can eliminate unnecessary scans and eliminate IO.
121. Tables – Distributions & Partitions
[Diagram: a logical order table (OrderId, Date, Name, Country) physically laid out as 60 distributions (shards) via hash distribution on OrderId, with each distribution partitioned by Date (e.g. an 11-2-2018 partition and an 11-3-2018 partition)]
• Each shard is partitioned with the same date partitions
• A minimum of 1 million rows per distribution and partition is needed for optimal compression and performance of clustered columnstore tables
122. Common table distribution methods
Table Category | Recommended Distribution Option
Fact | Use hash-distribution with clustered columnstore index. Performance improves because hashing enables the platform to localize certain operations within the node itself during query execution. Operations that benefit: COUNT(DISTINCT( <hashed_key> )), OVER PARTITION BY <hashed_key>, most JOIN <table_name> ON <hashed_key>, GROUP BY <hashed_key>
Dimension | Use replicated for smaller tables. If tables are too large to store on each Compute node, use hash-distributed.
Staging | Use round-robin for the staging table. The load with CTAS is faster. Once the data is in the staging table, use INSERT…SELECT to move the data to production tables.
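A minimal sketch applying this guidance, with hypothetical table and column names: a hash-distributed fact table paired with a replicated dimension, so joins on the distribution key can be resolved locally on each node.
-- Hash-distributed fact table with clustered columnstore index (names are hypothetical)
CREATE TABLE dbo.FactSales
(
    DateKey INT NOT NULL,
    CustomerKey INT NOT NULL,
    Amount DECIMAL(18,2) NOT NULL
)
WITH ( CLUSTERED COLUMNSTORE INDEX, DISTRIBUTION = HASH([CustomerKey]) );
-- Small dimension table replicated to every Compute node
CREATE TABLE dbo.DimCustomer
(
    CustomerKey INT NOT NULL,
    CustomerName NVARCHAR(100) NOT NULL
)
WITH ( CLUSTERED COLUMNSTORE INDEX, DISTRIBUTION = REPLICATE );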
124. Best in class price performance
Interactive dashboarding with Materialized Views
- Automatic data refresh and maintenance
- Automatic query rewrites to improve performance
- Built-in advisor
125. Overview
A materialized view pre-computes, stores, and maintains its
data like a table.
Materialized views are automatically updated when data in the underlying tables changes. This is a synchronous operation that occurs as soon as the data changes.
The auto-caching functionality allows the Azure Synapse Analytics query optimizer to consider using a materialized view even if the view is not referenced in the query.
Supported aggregations: MAX, MIN, AVG, COUNT,
COUNT_BIG, SUM, VAR, STDEV
Benefits
Automatic and synchronous data refresh with data changes
in base tables. No user action is required.
High availability and resiliency as regular tables
Materialized views
-- Create materialized view
CREATE MATERIALIZED VIEW Sales.vw_Orders
WITH
(
DISTRIBUTION = ROUND_ROBIN |
HASH(ProductID)
)
AS
SELECT SUM(UnitPrice*OrderQty) AS Revenue,
OrderDate,
ProductID,
COUNT_BIG(*) AS OrderCount
FROM Sales.SalesOrderDetail
GROUP BY OrderDate, ProductID;
GO
-- Disable the materialized view and put it in suspended mode
ALTER INDEX ALL ON Sales.vw_Orders DISABLE;
-- Re-enable the materialized view by rebuilding it
ALTER INDEX ALL ON Sales.vw_Orders REBUILD;
126. In this example, a query to get the year total sales per customer is shown to
have a lot of data shuffles and joins that contribute to slow performance:
Materialized views - example
-- Get year total sales per customer
WITH year_total AS
(
    SELECT customer_id,
           first_name,
           last_name,
           birth_country,
           login,
           email_address,
           d_year,
           SUM(ISNULL(list_price - wholesale_cost -
               discount_amt + sales_price, 0)/2) AS year_total
    FROM customer cust
    JOIN catalog_sales sales ON cust.sk = sales.sk
    JOIN date_dim ON sales.sold_date = date_dim.date
    GROUP BY customer_id, first_name,
             last_name, birth_country,
             login, email_address, d_year
)
SELECT TOP 100 …
FROM year_total …
WHERE …
ORDER BY …
Execution time: 103 seconds
Lots of data shuffles and joins needed to complete the query
No relevant materialized views exist on the data warehouse
127. Now, we add a materialized view to the data warehouse to increase the performance of
the previous query. This view can be leveraged by the query even though it is not
directly referenced.
Materialized views - example
-- Create materialized view for query
CREATE MATERIALIZED VIEW nbViewCS WITH (DISTRIBUTION = HASH(customer_id)) AS
SELECT customer_id,
       first_name,
       last_name,
       birth_country,
       login,
       email_address,
       d_year,
       SUM(ISNULL(list_price - wholesale_cost - discount_amt +
           sales_price, 0)/2) AS year_total
FROM customer cust
JOIN catalog_sales sales ON cust.sk = sales.sk
JOIN date_dim ON sales.sold_date = date_dim.date
GROUP BY customer_id, first_name,
         last_name, birth_country,
         login, email_address, d_year
Create materialized view with hash distribution on the customer_id column
-- Get year total sales per customer
WITH year_total AS
(
    SELECT customer_id,
           first_name,
           last_name,
           birth_country,
           login,
           email_address,
           d_year,
           SUM(ISNULL(list_price - wholesale_cost -
               discount_amt + sales_price, 0)/2) AS year_total
    FROM customer cust
    JOIN catalog_sales sales ON cust.sk = sales.sk
    JOIN date_dim ON sales.sold_date = date_dim.date
    GROUP BY customer_id, first_name,
             last_name, birth_country,
             login, email_address, d_year
)
SELECT TOP 100 …
FROM year_total …
WHERE …
ORDER BY …
Original query – get year total sales per customer
128. The SQL Data Warehouse query optimizer automatically leverages the materialized view to speed up the same query.
Notice that the query does not need to reference the view directly
Materialized views - example
-- Get year total sales per customer
WITH year_total AS
(
    SELECT customer_id,
           first_name,
           last_name,
           birth_country,
           login,
           email_address,
           d_year,
           SUM(ISNULL(list_price - wholesale_cost -
               discount_amt + sales_price, 0)/2) AS year_total
    FROM customer cust
    JOIN catalog_sales sales ON cust.sk = sales.sk
    JOIN date_dim ON sales.sold_date = date_dim.date
    GROUP BY customer_id, first_name,
             last_name, birth_country,
             login, email_address, d_year
)
SELECT TOP 100 …
FROM year_total …
WHERE …
ORDER BY …
Original query – no changes have been made to the query
Execution time: 6 seconds
Optimizer leverages the materialized view to reduce the data shuffles and joins needed
129. Materialized views – recommendations
EXPLAIN – provides the query plan for a SQL Data Warehouse SQL statement without running the statement; view the estimated cost of the query operations.
EXPLAIN WITH_RECOMMENDATIONS – provides the query plan with recommendations to optimize the SQL statement performance.
EXPLAIN WITH_RECOMMENDATIONS
select count(*)
from ((select distinct c_last_name, c_first_name, d_date
from store_sales, date_dim, customer
where store_sales.ss_sold_date_sk =
date_dim.d_date_sk
and store_sales.ss_customer_sk =
customer.c_customer_sk
and d_month_seq between 1194 and 1194+11)
except
(select distinct c_last_name, c_first_name, d_date
from catalog_sales, date_dim, customer
where catalog_sales.cs_sold_date_sk =
date_dim.d_date_sk
and catalog_sales.cs_bill_customer_sk =
customer.c_customer_sk
and d_month_seq between 1194 and 1194+11)
) top_customers
130. Native Streaming & COPY Ingestion
[Diagram: streaming ingestion from Event Hubs and IoT Hub into the data warehouse via T-SQL, alongside heterogenous data preparation & ingestion from Azure Data Lake]
--Copy files in parallel directly into data warehouse table
COPY INTO [dbo].[weatherTable]
FROM 'abfss://<storageaccount>.blob.core.windows.net/<filepath>'
WITH (
    FILE_FORMAT = 'DELIMITEDTEXT',
    SECRET = CredentialObject);
COPY statement
- Simplified permissions (no CONTROL required)
- No need for external tables
- Standard CSV support (i.e. custom row terminators, escape delimiters, SQL dates)
- User-driven file selection (wildcard support)
131. Overview
Copies data from source to destination
Benefits
Retrieves data from all files from the folder and all its
subfolders.
Supports multiple locations from the same storage account,
separated by comma
Supports Azure Data Lake Storage (ADLS) Gen 2 and Azure
Blob Storage.
Supports CSV, PARQUET, ORC file formats
COPY command
COPY INTO test_1
FROM 'https://XXX.blob.core.windows.net/customerdatasets/test_1.txt'
WITH (
    FILE_TYPE = 'CSV',
    CREDENTIAL = (IDENTITY = 'Shared Access Signature',
                  SECRET = '<Your_SAS_Token>'),
    FIELDQUOTE = '"',
    FIELDTERMINATOR = ';',
    ROWTERMINATOR = '0X0A',
    ENCODING = 'UTF8',
    DATEFORMAT = 'ymd',
    MAXERRORS = 10,
    ERRORFILE = '/errorsfolder/', -- path starting from the storage container
    IDENTITY_INSERT
)
COPY INTO test_parquet
FROM 'https://XXX.blob.core.windows.net/customerdatasets/test.parquet'
WITH (
    FILE_FORMAT = myFileFormat,
    CREDENTIAL = (IDENTITY = 'Shared Access Signature',
                  SECRET = '<Your_SAS_Token>')
)
132. Best in class price performance
Interactive dashboarding with Resultset Caching
- Millisecond responses with resultset caching
- Cache survives pause/resume/scale operations
- Fully managed cache (1TB in size)
[Diagram: the control node serving a repeated query straight from the cached result in storage, bypassing the compute nodes]
ALTER DATABASE <DBNAME> SET RESULT_SET_CACHING ON
133. Overview
Cache the results of a query in DW storage. This enables interactive
response times for repetitive queries against tables with infrequent
data changes.
The result-set cache persists even if a data warehouse is paused and
resumed later.
Query cache is invalidated and refreshed when underlying table data
or query code changes.
Result cache is evicted regularly based on a time-aware least
recently used algorithm (TLRU).
Benefits
Enhances performance when same result is requested repetitively
Reduced load on server for repeated queries
Offers monitoring of query execution with a result cache hit or miss
Result-set caching
-- Turn on/off result-set caching for a database
-- Must be run on the MASTER database
ALTER DATABASE {database_name}
SET RESULT_SET_CACHING { ON | OFF }
-- Turn on/off result-set caching for a client session
-- Run on target data warehouse
SET RESULT_SET_CACHING {ON | OFF}
-- Check result-set caching setting for a database
-- Run on target data warehouse
SELECT is_result_set_caching_on
FROM sys.databases
WHERE name = {database_name}
-- Return all query requests with cache hits
-- Run on target data warehouse
SELECT *
FROM sys.dm_pdw_request_steps
WHERE command like '%DWResultCacheDb%'
AND step_index = 0
134. Result-set caching flow
1) Client sends query to DW
2) Query is processed using DW compute nodes, which pull data from remote storage, process the query, and output back to the client app; query results are cached in remote storage so subsequent requests can be served immediately
3) Subsequent executions of the same query bypass the compute nodes and are fetched instantly from the persistent cache in remote storage
4) The remote storage cache is evicted regularly based on time, cache usage, and any modifications to the underlying table data
5) The cache is regenerated if query results have been evicted from it
135. Overview
Pre-determined resource limits defined for a user or role.
Benefits
Govern the system memory assigned to each query.
Effectively used to control the number of concurrent queries that
can run on a data warehouse.
Exemptions to the concurrency limit:
CREATE|ALTER|DROP (TABLE|USER|PROCEDURE|VIEW|LOGIN)
CREATE|UPDATE|DROP (STATISTICS|INDEX)
SELECT from system views and DMVs
EXPLAIN
Result-set cache
TRUNCATE TABLE
ALTER AUTHORIZATION
Resource classes
/* View resource classes in the data warehouse */
SELECT name
FROM sys.database_principals
WHERE name LIKE '%rc%' AND type_desc = 'DATABASE_ROLE';
/* Change user's resource class to 'largerc' */
EXEC sp_addrolemember 'largerc', 'loaduser';
/* Decrease the loading user's resource class */
EXEC sp_droprolemember 'largerc', 'loaduser';
136. Static Resource Classes
Allocate the same amount of memory independent of
the current service-level objective (SLO).
Well-suited for fixed data sizes and loading jobs.
Dynamic Resource Classes
Allocate a variable amount of memory depending on
the current SLO.
Well-suited for growing or variable datasets.
All users default to the smallrc dynamic resource class.
Resource class types
Static resource classes:
staticrc10 | staticrc20 | staticrc30 |
staticrc40 | staticrc50 | staticrc60 |
staticrc70 | staticrc80
Dynamic resource classes:
smallrc | mediumrc | largerc | xlargerc
Resource Class | Percentage Memory | Max. Concurrent Queries
smallrc        | 3%                | 32
mediumrc       | 10%               | 10
largerc        | 22%               | 4
xlargerc       | 70%               | 1
137. Overview
Queries running on a DW compete for access to system resources
(CPU, IO, and memory).
To guarantee access to resources, running queries are assigned a
chunk of system memory (a concurrency slot) for processing the
query. The amount given is determined by the resource class of
the user executing the query. Higher DW SLOs provide more
memory and concurrency slots
Concurrency slots @DW1000c: 40 concurrency slots
[Diagram: memory divided into concurrency slots – smallrc queries take 1 slot each, staticrc20 queries 2 slots, mediumrc queries 4 slots, xlargerc queries 28 slots]
138. Overview
The limit on how many queries can run at the same time is
governed by two properties:
• The max. concurrent query count for the DW SLO
• The total available memory (concurrency slots) for the DW SLO
Increase the concurrent query limit by:
• Scaling up to a higher DW SLO (up to 128 concurrent queries)
• Using lower resource classes that use less memory per query
Concurrent query limits
Example @DW1000c: 32 max concurrent queries, 40 slots. 15 concurrent queries using all 40 slots:
• 8 x smallrc (1 slot each)
• 4 x staticrc20 (2 slots each)
• 2 x mediumrc (4 slots each)
• 1 x staticrc50 (16 slots each)
Concurrency limits are based on resource classes
139. Workload Management
Overview
Manages resources, ensures highly efficient resource utilization, and maximizes return on investment (ROI).
The three pillars of workload management are:
1. Workload Classification – assign a request to a workload group and set importance levels.
2. Workload Importance – influence the order in which a request gets access to resources.
3. Workload Isolation – reserve resources for a workload group.
140. Workload classification
Overview
Map queries to allocations of resources via pre-determined rules.
Use with workload importance to effectively share resources
across different workload types.
If a query request is not matched to a classifier, it is assigned to
the default workload group (smallrc resource class).
Benefits
Map queries to both Resource Management and Workload
Isolation concepts.
Manage groups of users with only a few classifiers.
Monitoring DMVs
sys.workload_management_workload_classifiers
sys.workload_management_workload_classifier_details
Query DMVs to view details about all active workload classifiers.
CREATE WORKLOAD CLASSIFIER classifier_name
WITH
(
    [WORKLOAD_GROUP = '<Resource Class>' ]
    [IMPORTANCE = { LOW |
                    BELOW_NORMAL |
                    NORMAL |
                    ABOVE_NORMAL |
                    HIGH
                  }
    ]
    [MEMBERNAME = 'security_account']
)
WORKLOAD_GROUP: maps to an existing resource class
IMPORTANCE: specifies the relative importance of the request
MEMBERNAME: database user, role, AAD login, or AAD group
141. Workload importance
Overview
Queries past the concurrency limit enter a FIFO queue
By default, queries are released from the queue on a first-in, first-out basis as resources become available
Workload importance allows higher priority queries to receive resources immediately, regardless of the queue
Example
State analysts have normal importance.
The national analyst is assigned high importance.
State analyst queries execute in order of arrival.
When the national analyst's query arrives, it jumps to the top of the queue.
CREATE WORKLOAD CLASSIFIER National_Analyst
WITH
(
    [WORKLOAD_GROUP = 'smallrc']
    [IMPORTANCE = HIGH]
    [MEMBERNAME = 'National_Analyst_Login']
)
143. CREATE WORKLOAD GROUP group_name
WITH
(
MIN_PERCENTAGE_RESOURCE = value
, CAP_PERCENTAGE_RESOURCE = value
, REQUEST_MIN_RESOURCE_GRANT_PERCENT = value
[ [ , ] REQUEST_MAX_RESOURCE_GRANT_PERCENT = value ]
[ [ , ] IMPORTANCE = {LOW | BELOW_NORMAL | NORMAL | ABOVE_NORMAL | HIGH} ]
[ [ , ] QUERY_EXECUTION_TIMEOUT_SEC = value ]
)[ ; ]
Workload Isolation
Overview
Allocate fixed resources to a workload group.
Assign maximum and minimum usage for varying resources under load. These adjustments can be done live, without having to take SQL Analytics offline.
Benefits
Reserve resources for a group of requests
Limit the amount of resources a group of requests can consume
Shared resources accessed based on importance level
Set a query timeout value. Get DBAs out of the business of killing runaway queries
Monitoring DMVs
sys.workload_management_workload_groups
Query to view configured workload groups.
[Chart: resource allocation – group A 40%, group B 20%, shared 40%]
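A minimal sketch tying isolation and classification together, with hypothetical group, classifier, and user names (in this sketch the per-request grant of 10% divides evenly into the 30% reservation):
-- Reserve 30% of system resources for data loads, capped at 50% (hypothetical names)
CREATE WORKLOAD GROUP wgDataLoads
WITH
(
    MIN_PERCENTAGE_RESOURCE = 30,
    CAP_PERCENTAGE_RESOURCE = 50,
    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 10
);
-- Route requests from the loading user into that group with high importance
CREATE WORKLOAD CLASSIFIER wcLoadUser
WITH
(
    WORKLOAD_GROUP = 'wgDataLoads',
    MEMBERNAME = 'loaduser',
    IMPORTANCE = HIGH
);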
144. Dynamic Management Views (DMVs)
Overview
Dynamic Management Views (DMV) are queries that return information
about model objects, server operations, and server health.
Benefits:
Simple SQL syntax
Returns result in table format
Easier to read and copy result
145. SQL Monitor with DMVs
Overview
Offers monitoring of:
- all open and closed sessions
- session counts by user
- completed query counts by user
- all active and completed queries
- longest running queries
- memory consumption
-- Count sessions by user
SELECT login_name, COUNT(*) as session_count FROM
sys.dm_pdw_exec_sessions where status = 'Closed' and session_id
<> session_id() GROUP BY login_name;
-- List all open sessions
SELECT * FROM sys.dm_pdw_exec_sessions where status <> 'Closed'
and session_id <> session_id();
-- List all active queries
SELECT * FROM sys.dm_pdw_exec_requests WHERE status not in
('Completed','Failed','Cancelled') AND session_id <> session_id()
ORDER BY submit_time DESC;
146. Developer Tools
Visual Studio - SSDT database projects
SQL Server Management Studio
(queries, execution plans etc.)
Azure Data Studio (queries, extensions etc.)
Azure Synapse Analytics
Visual Studio Code
147. Developer Tools
Azure Synapse Analytics – Azure cloud service; offers an end-to-end lifecycle for analytics; connects to multiple services
Visual Studio - SSDT database projects – Runs on Windows; create and maintain database code, compile, code refactoring
Visual Studio Code – Runs on Windows, Linux, macOS; lightweight editor (queries and extensions)
SQL Server Management Studio – Runs on Windows; offers GUI support to query, design, and manage
Azure Data Studio – Runs on Windows, Linux, macOS; offers a development experience with a lightweight code editor
148. Continuous integration and delivery (CI/CD)
Overview
Database project support in SQL Server Data Tools
(SSDT) allows teams of developers to collaborate over a
version-controlled data warehouse, and track, deploy
and test schema changes.
Benefits
Database project support includes first-class
integration with Azure DevOps. This adds support for:
• Azure Pipelines to run CI/CD workflows for any
platform (Linux, macOS, and Windows)
• Azure Repos to store project files in source control
• Azure Test Plans to run automated check-in tests to
verify schema updates and modifications
• Growing ecosystem of third-party integrations that
can be used to complement existing workflows
(Timetracker, Microsoft Teams, Slack, Jenkins, etc.)
149. Azure Advisor recommendations
Suboptimal Table Distribution: reduce data movement by replicating tables
Data Skew (slowest distribution limits performance): choose a new hash-distribution key
Cache Misses: provision additional capacity
Tempdb Contention: scale or update the user resource class
Suboptimal Plan Selection: create or update table statistics
150. Maintenance windows
Overview
Choose a time window for your upgrades.
Select a primary and secondary window within a seven-day
period.
Windows can be from 3 to 8 hours.
24-hour advance notification for maintenance events.
Benefits
Ensure upgrades happen on your schedule.
Predictable planning for long-running jobs.
Stay informed of start and end of maintenance.
Azure Synapse Analytics > SQL >
151. Automatic statistics management
Overview
Statistics are automatically created and maintained for SQL pool.
Incoming queries are analyzed, and individual column statistics
are generated on the columns that improve cardinality estimates
to enhance query performance.
Statistics are automatically updated as data modifications occur in
underlying tables. By default, these updates are synchronous but
can be configured to be asynchronous.
Statistics are considered out of date when:
• There was a data change on an empty table
• The number of rows in the table at time of statistics creation
was 500 or less, and more than 500 rows have been updated
• The number of rows in the table at time of statistics creation
was more than 500, and more than 500 + 20% of rows have
been updated
-- Turn on/off auto-create statistics settings
ALTER DATABASE {database_name}
SET AUTO_CREATE_STATISTICS { ON | OFF }
-- Turn on/off auto-update statistics settings
ALTER DATABASE {database_name}
SET AUTO_UPDATE_STATISTICS { ON | OFF }
-- Configure synchronous/asynchronous update
ALTER DATABASE {database_name}
SET AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
-- Check statistics settings for a database
SELECT is_auto_create_stats_on,
is_auto_update_stats_on,
is_auto_update_stats_async_on
FROM sys.databases
Azure Synapse Analytics > SQL >
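Where the automatic behavior above is not enough (for example after a large load), a minimal sketch of managing statistics by hand; table and column names are illustrative:
-- Create single-column statistics with a sample
CREATE STATISTICS stats_order_date
ON dbo.FactSales (OrderDateKey) WITH SAMPLE 20 PERCENT;

-- Refresh all statistics on a table after a large data modification
UPDATE STATISTICS dbo.FactSales;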
152. Native SQL Streaming
[Diagram: Event Hubs and IoT Hub feed heterogeneous data preparation & ingestion; streaming ingestion flows through the T-SQL language into the data warehouse (SQL Analytics).]
- High throughput ingestion (up to 200 MB/sec)
- Delivery latencies in seconds
- Ingestion throughput scales with compute scale
- Analytics capabilities (SQL-based queries for joins, aggregations, filters)
- Removes the need to use Spark for streaming
156. Machine Learning enabled DW
Native PREDICT-ion
- T-SQL based experience (interactive/batch scoring)
- Interoperability with models built elsewhere
- Execute scoring where the data lives

--T-SQL syntax for scoring data in SQL DW
SELECT d.*, p.Score
FROM PREDICT(MODEL = @onnx_model, DATA = dbo.mytable AS d)
WITH (Score FLOAT) AS p;

[Diagram: Create models → Upload models → Score models in the data warehouse; Data + Model = Predictions (SQL Analytics).]
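A sketch of how the @onnx_model variable above might be populated, assuming the ONNX model was previously uploaded into a user table (dbo.Models and its columns are illustrative, not from this deck):
-- Load a previously uploaded ONNX model into a variable (illustrative table/column names)
DECLARE @onnx_model VARBINARY(MAX) =
    (SELECT Model FROM dbo.Models WHERE Name = 'taxi_fare_model');

SELECT d.*, p.Score
FROM PREDICT(MODEL = @onnx_model, DATA = dbo.mytable AS d)
WITH (Score FLOAT) AS p;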
157. Data Lake Integration
ParquetDirect for interactive data lake exploration
- >10X performance improvement
- Full columnar optimizations (optimizer, batch)
- Built-in transparent caching (SSD, in-memory, resultset)
[Chart: 13X speedup]
SQL Analytics
158. Azure Data Share
Enterprise data sharing
- Share from DW to DW/DB/other systems
- Choose data format to receive data in (CSV, Parquet)
- One to many data sharing
- Share a single or multiple datasets
159. Snapshots and restores
Overview
Automatic copies of the data warehouse state, taken throughout the day or triggered manually.
Available for up to 7 days, even after data warehouse deletion. 8-hour RPO for restores from snapshots.
Regional restore in under 20 minutes, regardless of data size.
Snapshots and geo-backups allow cross-region restores.
Automatic snapshots and geo-backups are on by default.

--View most recent snapshot time
SELECT TOP 1 *
FROM sys.pdw_loader_backup_runs
ORDER BY run_id DESC;

Benefits
Snapshots protect against data corruption and deletion.
Restore to quickly create dev/test copies of data.
Manual snapshots protect large modifications.
Geo-backup copies one of the automatic snapshots each day to RA-GRS storage; in the event of a disaster it can be used to recover your SQL data warehouse to a new region. 24-hour RPO for a geo-restore.
160. SQL Analytics
new features available
GA features:
- Performance: Resultset caching
- Performance: Materialized Views
- Performance: Ordered columnstore
- Heterogeneous data: JSON support
- Trustworthy computation: Dynamic Data Masking
- Continuous integration & deployment: SSDT support
- Language: Read committed snapshot isolation
Public preview features:
- Workload management: Workload Isolation
- Data ingestion: Simple ingestion with COPY
- Data Sharing: Share DW data with Azure Data Share
- Trustworthy computation: Private Link support
Private preview features:
- Data ingestion: Streaming ingestion & analytics in DW
- Built-in ML: Native Prediction/Scoring
- Data lake enabled: Fast query over Parquet files
- Language: Updateable distribution column
- Language: FROM clause with joins
- Language: Multi-column distribution support
- Security: Column-level Encryption
Note: private preview features require whitelisting
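A minimal sketch exercising three of the GA features listed above (result set caching, materialized views, ordered columnstore); database, table, and column names are illustrative:
-- Result set caching (run against the master database)
ALTER DATABASE mydw SET RESULT_SET_CACHING ON;

-- Materialized view over a frequent aggregation
CREATE MATERIALIZED VIEW dbo.SalesByRegion
WITH (DISTRIBUTION = HASH(Region))
AS
SELECT Region, SUM(SalesAmount) AS TotalSales, COUNT_BIG(*) AS row_count
FROM dbo.FactSales
GROUP BY Region;

-- Ordered clustered columnstore index via CTAS
CREATE TABLE dbo.FactSales_ordered
WITH (
    DISTRIBUTION = HASH(Region),
    CLUSTERED COLUMNSTORE INDEX ORDER (OrderDateKey)
)
AS SELECT * FROM dbo.FactSales;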
163. Azure Synapse Analytics
Integrated data platform for BI, AI and continuous intelligence
[Architecture diagram: the Azure Synapse Studio experience on top; languages SQL, Python, .NET, Java, Scala, R; SQL form factors in PROVISIONED and ON-DEMAND modes; analytics runtimes with DATA INTEGRATION, METASTORE, SECURITY, MANAGEMENT and MONITORING; built on Azure Data Lake Storage with Common Data Model, Enterprise Security, Optimized for Analytics; serving Artificial Intelligence / Machine Learning / Internet of Things and Intelligent Apps / Business Intelligence.]
164. Synapse SQL on-demand scenarios
What’s in this file? How many rows are there? What’s the max value?
SQL On-demand reduces data lake exploration to the right-click!
How to convert CSVs to Parquet quickly? How to transform the raw data?
Use the full power of T-SQL to transform the data in the data lake
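For the CSV-to-Parquet scenario above, a sketch using CREATE EXTERNAL TABLE AS SELECT from SQL on-demand; it assumes an external data source and a PARQUET external file format have already been created (names and paths here are illustrative; slide 178 walks through the prerequisite objects):
-- Convert a CSV file to Parquet with CETAS (illustrative names)
CREATE EXTERNAL TABLE dbo.population_parquet
WITH (
    LOCATION = '/parquet/population/',
    DATA_SOURCE = MyAzureStorage,
    FILE_FORMAT = MyParquetFormat
)
AS
SELECT *
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/population/population.csv',
    FORMAT = 'CSV',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
WITH (
    [country_code] VARCHAR (5),
    [country_name] VARCHAR (100),
    [year] SMALLINT,
    [population] BIGINT
) AS [r];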
165. SQL On-Demand
Overview
An interactive query service that provides T-SQL queries over
high scale data in Azure Storage.
Benefits
Serverless
No infrastructure
Pay only for query execution
No ETL
Offers security
Data integration with Databricks, HDInsight
T-SQL syntax to query data
Supports data in various formats (Parquet, CSV, JSON)
Support for BI ecosystem
Azure Synapse Analytics > SQL >
[Diagram: SQL On Demand runs queries over data files in Azure Storage; Power BI, Azure Data Studio and SSMS connect to it; SQL DW and SQL On Demand sync table definitions, read and write data files, and curate and transform data.]
166. SQL On Demand – Querying on storage
Azure Synapse Analytics > SQL On Demand
167. SQL On Demand – Querying CSV File
Overview
Uses the OPENROWSET function to access data
Benefits
Ability to read CSV files with
- no header row, Windows-style new line
- no header row, Unix-style new line
- header row, Unix-style new line
- header row, Unix-style new line, quoted
- header row, Unix-style new line, escape
- header row, Unix-style new line, tab-delimited
- without specifying all columns
Azure Synapse Analytics > SQL >
SELECT *
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/population/population.csv',
    FORMAT = 'CSV',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
WITH (
    [country_code] VARCHAR (5) COLLATE Latin1_General_BIN2,
    [country_name] VARCHAR (100) COLLATE Latin1_General_BIN2,
    [year] SMALLINT,
    [population] BIGINT
) AS [r]
WHERE
    country_name = 'Luxembourg'
    AND [year] = 2017
168. SQL On Demand – Querying CSV File
Read CSV file - header row, Unix-style new line
Azure Synapse Analytics > SQL On Demand
SELECT *
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/population-unix-hdr/population.csv',
    FORMAT = 'CSV',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0a',
    FIRSTROW = 2
)
WITH (
    [country_code] VARCHAR (5) COLLATE Latin1_General_BIN2,
    [country_name] VARCHAR (100) COLLATE Latin1_General_BIN2,
    [year] SMALLINT,
    [population] BIGINT
) AS [r]
WHERE
    country_name = 'Luxembourg'
    AND [year] = 2017

Read CSV file - without specifying all columns
SELECT
    COUNT(DISTINCT country_name) AS countries
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/population/population.csv',
    FORMAT = 'CSV',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
WITH (
    -- trailing 2 = ordinal of this column in the CSV file
    [country_name] VARCHAR (100) COLLATE Latin1_General_BIN2 2
) AS [r]
169. SQL On Demand – Querying folders
Overview
Uses the OPENROWSET function to access data from multiple files or folders
Benefits
Reads multiple files/folders through use of wildcards
Reads a specific file/folder
Supports use of multiple wildcards
Azure Synapse Analytics > SQL On Demand
SELECT YEAR(pickup_datetime) AS [year],
    SUM(passenger_count) AS passengers_total,
    COUNT(*) AS [rides_total]
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/taxi/*.*',
    FORMAT = 'CSV',
    FIRSTROW = 2
)
WITH (
    vendor_id VARCHAR(100) COLLATE Latin1_General_BIN2,
    pickup_datetime DATETIME2,
    dropoff_datetime DATETIME2,
    passenger_count INT,
    trip_distance FLOAT,
    rate_code INT,
    store_and_fwd_flag VARCHAR(100) COLLATE Latin1_General_BIN2,
    pickup_location_id INT,
    dropoff_location_id INT,
    payment_type INT,
    fare_amount FLOAT,
    extra FLOAT,
    mta_tax FLOAT,
    tip_amount FLOAT,
    tolls_amount FLOAT,
    improvement_surcharge FLOAT,
    total_amount FLOAT
) AS nyc
GROUP BY YEAR(pickup_datetime)
ORDER BY YEAR(pickup_datetime)
170. SQL On Demand – Querying folders
Azure Synapse Analytics > SQL On Demand
Read subset of files in folder
SELECT
    payment_type,
    SUM(fare_amount) AS fare_total
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/taxi/yellow_tripdata_2017-*.csv',
    FORMAT = 'CSV',
    FIRSTROW = 2
)
WITH (
    vendor_id VARCHAR(100) COLLATE Latin1_General_BIN2,
    pickup_datetime DATETIME2,
    dropoff_datetime DATETIME2,
    passenger_count INT,
    trip_distance FLOAT,
    <…columns>
) AS nyc
GROUP BY payment_type
ORDER BY payment_type

Read all files from multiple folders
SELECT YEAR(pickup_datetime) AS [year],
    SUM(passenger_count) AS passengers_total,
    COUNT(*) AS [rides_total]
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/t*i/',
    FORMAT = 'CSV',
    FIRSTROW = 2
)
WITH (
    vendor_id VARCHAR(100) COLLATE Latin1_General_BIN2,
    pickup_datetime DATETIME2,
    dropoff_datetime DATETIME2,
    passenger_count INT,
    trip_distance FLOAT,
    <… columns>
) AS nyc
GROUP BY YEAR(pickup_datetime)
ORDER BY YEAR(pickup_datetime)
171. SQL On Demand – Querying specific files
Overview
filename() – returns the name of the file a row originates from
filepath() – returns the full path when no parameter is passed, or the part of the path matching the specified wildcard when a parameter is passed
Benefits
Provides the source name/path of the file/folder for each row in the result set
Azure Synapse Analytics > SQL On Demand
SELECT
    r.filename() AS [filename],
    COUNT_BIG(*) AS [rows]
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/taxi/yellow_tripdata_2017-1*.csv',
    FORMAT = 'CSV',
    FIRSTROW = 2
)
WITH (
    vendor_id INT,
    pickup_datetime DATETIME2,
    dropoff_datetime DATETIME2,
    passenger_count SMALLINT,
    trip_distance FLOAT,
    <…columns>
) AS [r]
GROUP BY r.filename()
ORDER BY [filename]
Example of filename function
172. SQL On Demand – Querying specific files
Azure Synapse Analytics > SQL On Demand
SELECT
    r.filepath() AS filepath,
    r.filepath(1) AS [year],
    r.filepath(2) AS [month],
    COUNT_BIG(*) AS [rows]
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/taxi/yellow_tripdata_*-*.csv',
    FORMAT = 'CSV',
    FIRSTROW = 2
)
WITH (
    vendor_id INT,
    pickup_datetime DATETIME2,
    dropoff_datetime DATETIME2,
    passenger_count SMALLINT,
    trip_distance FLOAT,
    <… columns>
) AS [r]
WHERE r.filepath(1) IN ('2017')
    AND r.filepath(2) IN ('10', '11', '12')
GROUP BY r.filepath(), r.filepath(1), r.filepath(2)
ORDER BY filepath

Result:
filepath | year | month | rows
https://XXX.blob.core.windows.net/csv/taxi/yellow_tripdata_2017-10.csv | 2017 | 10 | 9768815
https://XXX.blob.core.windows.net/csv/taxi/yellow_tripdata_2017-11.csv | 2017 | 11 | 9284803
https://XXX.blob.core.windows.net/csv/taxi/yellow_tripdata_2017-12.csv | 2017 | 12 | 9508276

Example of filepath function
173. SQL On Demand – Querying Parquet files
Overview
Uses the OPENROWSET function to access data
Benefits
Ability to specify the column names of interest
Offers automatic reading of column names and data types
Targets specific partitions using the filepath function
Azure Synapse Analytics > SQL On Demand
SELECT
YEAR(pickup_datetime),
passenger_count,
COUNT(*) AS cnt
FROM
OPENROWSET(
BULK 'https://XXX.blob.core.windows.net/parquet/taxi/*/*/*',
FORMAT='PARQUET'
) WITH (
pickup_datetime DATETIME2,
passenger_count INT
) AS nyc
GROUP BY
passenger_count,
YEAR(pickup_datetime)
ORDER BY
YEAR(pickup_datetime),
passenger_count
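Since column names and data types can be read automatically, a minimal sketch of the same Parquet source queried without a WITH clause (same illustrative URL as above):
-- Schema is inferred from the Parquet files themselves
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/parquet/taxi/*/*/*',
    FORMAT = 'PARQUET'
) AS nyc;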
174. SQL On Demand – Creating views
Overview
Create views using SQL On Demand queries
Benefits
Work the same as standard views
Azure Synapse Analytics > SQL On Demand
USE [mydbname]
GO
IF EXISTS (SELECT * FROM sys.views WHERE name = 'populationView')
    DROP VIEW populationView
GO
CREATE VIEW populationView AS
SELECT *
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/csv/population/population.csv',
    FORMAT = 'CSV',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
WITH (
    [country_code] VARCHAR (5) COLLATE Latin1_General_BIN2,
    [country_name] VARCHAR (100) COLLATE Latin1_General_BIN2,
    [year] SMALLINT,
    [population] BIGINT
) AS [r]
GO
SELECT
    country_name, population
FROM populationView
WHERE
    [year] = 2019
ORDER BY
    [population] DESC
175. SQL On Demand – Creating views
Azure Synapse Analytics > SQL On Demand
176. SQL On Demand – Querying JSON files
Azure Synapse Analytics > SQL On Demand
Overview
Reads JSON files and provides the data in tabular format
Benefits
Supports the OPENJSON, JSON_VALUE and JSON_QUERY functions
SELECT *
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/json/books/book1.json',
    FORMAT = 'CSV',
    FIELDTERMINATOR = '0x0b',
    FIELDQUOTE = '0x0b',
    ROWTERMINATOR = '0x0b'
)
WITH (
    jsonContent VARCHAR(8000)
) AS [r]
177. SQL On Demand – Querying JSON files
Azure Synapse Analytics > SQL On Demand
Example of JSON_VALUE function
SELECT
    JSON_VALUE(jsonContent, '$.title') AS title,
    JSON_VALUE(jsonContent, '$.publisher') AS publisher,
    jsonContent
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/json/books/*.json',
    FORMAT = 'CSV',
    FIELDTERMINATOR = '0x0b',
    FIELDQUOTE = '0x0b',
    ROWTERMINATOR = '0x0b'
)
WITH (
    jsonContent VARCHAR(8000)
) AS [r]
WHERE
    JSON_VALUE(jsonContent, '$.title') = 'Probabilistic and Statistical Methods in Cryptology, An Introduction by Selected Topics'

Example of JSON_QUERY function
SELECT
    JSON_QUERY(jsonContent, '$.authors') AS authors,
    jsonContent
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/json/books/*.json',
    FORMAT = 'CSV',
    FIELDTERMINATOR = '0x0b',
    FIELDQUOTE = '0x0b',
    ROWTERMINATOR = '0x0b'
)
WITH (
    jsonContent VARCHAR(8000)
) AS [r]
WHERE
    JSON_VALUE(jsonContent, '$.title') = 'Probabilistic and Statistical Methods in Cryptology, An Introduction by Selected Topics'
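The slide 176 overview also lists OPENJSON; a sketch projecting JSON array elements as rows, assuming each book document carries an authors array (same illustrative files):
-- Example of OPENJSON: one output row per element of $.authors
SELECT
    JSON_VALUE(jsonContent, '$.title') AS title,
    a.[value] AS author
FROM OPENROWSET(
    BULK 'https://XXX.blob.core.windows.net/json/books/*.json',
    FORMAT = 'CSV',
    FIELDTERMINATOR = '0x0b',
    FIELDQUOTE = '0x0b',
    ROWTERMINATOR = '0x0b'
)
WITH (
    jsonContent VARCHAR(8000)
) AS [r]
CROSS APPLY OPENJSON(jsonContent, '$.authors') AS a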
178. Create External Table As Select
Overview
Creates an external table and then exports the results of the SELECT statement. These operations import data into the database only for the duration of the query.
Steps:
1. Create Master Key
2. Create Credentials
3. Create External Data Source
4. Create External File Format
5. Create External Table
Azure Synapse Analytics > SQL On Demand
-- Create a database master key if one does not already exist
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo';

-- Create a database scoped credential with the Azure storage account key as the secret
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
    IDENTITY = '<my_account>',
    SECRET = '<azure_storage_account_key>';

-- Create an external data source with CREDENTIAL option
CREATE EXTERNAL DATA SOURCE MyAzureStorage
WITH (
    LOCATION = 'wasbs://daily@logs.blob.core.windows.net/',
    CREDENTIAL = AzureStorageCredential,
    TYPE = HADOOP
);

-- Create an external file format
CREATE EXTERNAL FILE FORMAT MyAzureCSVFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (
        FIELD_TERMINATOR = ',',
        FIRST_ROW = 2
    )
);

-- Create an external table and export the join results to it
CREATE EXTERNAL TABLE dbo.FactInternetSalesNew
WITH (
    LOCATION = '/files/Customer',
    DATA_SOURCE = MyAzureStorage,
    FILE_FORMAT = MyAzureCSVFormat
)
AS SELECT T1.*
FROM dbo.FactInternetSales T1
JOIN dbo.DimCustomer T2 ON (T1.CustomerKey = T2.CustomerKey)
OPTION (HASH JOIN);
183. Azure Synapse Analytics
Integrated data platform for BI, AI and continuous intelligence
[Section divider: repeats the platform architecture diagram from slide 163.]
184. Azure Synapse Apache Spark - Summary
• Apache Spark 2.4 derivation
• Linux Foundation Delta Lake 0.4 support
• .NET Core 3.0 support
• Python 3.6 + Anaconda support
• Tightly coupled to other Azure Synapse services
• Integrated security and sign-on
• Integrated metadata
• Integrated and simplified provisioning
• Integrated UX including nteract-based notebooks
• Fast load of SQL Analytics pools
• Core scenarios
• Data prep / data engineering / ETL
• Machine learning via Spark ML and Azure ML integration
• Extensible through library management
• Efficient resource utilization
• Fast start
• Auto-scale (up and down)
• Auto-pause
• Minimum cluster size of 3 nodes
• Multi-language support
• .NET (C#), PySpark, Scala, Spark SQL, Java
185. What is Delta Lake?
• OSS storage layer for Spark
• Provides:
• ACID transactions
• History of changes
• Time travel in data history
• Schema evolution
• …
186. Languages
Overview
Supports multiple languages for developing notebooks
• PySpark (Python)
• Spark (Scala)
• .NET Spark (C#)
• Spark SQL
• Java
• R (early 2020)
Benefits
Allows writing multiple languages in one notebook using the cell magic
%%<name of language>
Offers use of temporary tables across languages
188. Apache Spark
A unified, open source, parallel data processing framework for Big Data Analytics
Spark unifies:
• Spark SQL: batch processing
• Spark Structured Streaming / Spark Streaming: stream processing
• Spark MLlib: machine learning
• GraphX: graph computation
All run on the Spark Core Engine, with resource management via YARN.
http://spark.apache.org
189. Motivation for Apache Spark
Traditional approach: MapReduce jobs for complex jobs, interactive query, and online event-hub processing involve lots of (slow) disk I/O.
[Diagram: each iteration reads its input from HDFS into CPU/memory and writes its output back to HDFS: HDFS Read → Iteration 1 → HDFS Write → HDFS Read → Iteration 2 → HDFS Write]
Motivation for Apache Spark
190. Traditional Approach: MapReduce jobs for complex jobs, interactive query, and online event-hub processing
involves lots of (slow) disk I/O
Solution: Keep data in-memory with a new distributed execution engine
HDFS
Read
Input
CPU
Iteration 1
Memory CPU
Iteration 2
Memory
10–100x faster than
network & disk
Minimal
Read/Write Disk
Bottleneck
Chain Job Output
into New Job Input
HDFS
Read
HDFS
Write
HDFS
Read
HDFS
Write
CPU
Iteration 1
Memory CPU
Iteration 2
Memory
Motivation for Apache Spark
195. Synapse Job Service
[Architecture diagram: Synapse Studio and AAD sit in front of a Gateway; behind it the Job Service frontend (Spark API controller) and backend (Spark plugin), plus the Resource Provider, Auth Service and Instance Creation Service with their databases; the backend provisions a Spark instance of VMs in Azure.]
• User creates a Synapse Workspace and Spark pool and launches Synapse Studio.
• User attaches a Notebook to the Spark pool and enters one or more Spark statements (code blocks).
• The Notebook client gets a user token from AAD and sends a Spark session create request to the Synapse Gateway.
• The Synapse Gateway authenticates the request, validates authorizations on the Workspace and Spark pool, and forwards it to the Spark (Livy) controller hosted in the Synapse Job Service frontend.
• The Job Service frontend forwards the request to the Job Service backend, which creates two jobs: one for creating the cluster and the other for creating the Spark session.
• The Job Service backend contacts the Synapse Resource Provider to obtain Workspace and Spark pool details and delegates the cluster creation request to the Synapse Instance Service.
• Once the instance is created, the Job Service backend forwards the Spark session creation request to the Livy endpoint in the cluster.
• Once the Spark session is created, the Notebook client sends Spark statements to the Job Service frontend.
• The Job Service frontend obtains the actual Livy endpoint of the cluster created for the particular user from the backend and sends the statement directly to Livy for execution.