This document provides tips for optimizing performance in Power BI by focusing on different areas: data sources, the data model, visuals, dashboards, and trace and log files. Key recommendations include filtering data early, keeping the data model and queries simple, limiting visual complexity, monitoring resource usage, and leveraging log files to identify specific waits and bottlenecks. It advocates an overall approach of time-based optimization: identify the areas contributing most to latency and address those first.
Why use a data warehouse? What is the best methodology to use when creating a data warehouse? Should I use a normalized or dimensional approach? What is the difference between the Kimball and Inmon methodologies? Does the new Tabular model in SQL Server 2012 change things? What is the difference between a data warehouse and a data mart? Is there hardware that is optimized for a data warehouse? What if I have a ton of data? During this session James will help you to answer these questions.
How do you turn data from many different sources into actionable insights and manufacture those insights into innovative information-based products and services? Industry leaders are accomplishing this by adding Hadoop as a critical component in their modern data architecture to build a data lake. A data lake collects and stores data across a wide variety of channels including social media, clickstream data, server logs, customer transactions and interactions, videos, and sensor data from equipment in the field. A data lake cost-effectively scales to collect and retain massive amounts of data over time, and converts all this data into actionable information that can transform your business. Join Hortonworks and Informatica as we discuss: - What is a data lake? - The modern data architecture for a data lake - How Hadoop fits into the modern data architecture - Innovative use cases for a data lake
Discover, manage, deploy, monitor – rinse and repeat. In this session we show how Azure Machine Learning can be used to create the right AI model for your challenge and then easily customize it using your development tools, while relying on Azure ML to optimize it to run in hardware-accelerated environments for the cloud and the edge using FPGAs and neural network accelerators. We then show you how to deploy the model to highly scalable web services and nimble edge applications that Azure can manage and monitor for you. Finally, we illustrate how you can leverage the model telemetry to retrain and improve your models.
Log Analytics and Application Insights can help with monitoring and managing integration solutions built with Microsoft technologies. They provide performance monitoring of APIs, functions, logic apps and other components. While end-to-end tracing has some limitations, the tools allow for custom logging, out-of-box views of data, and testing the availability of key applications and services.
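As a hypothetical illustration of the custom-logging capability mentioned above, the sketch below sends a custom trace with extra dimensions to Application Insights from Python using the opencensus-ext-azure exporter; the connection string, logger name, and properties are placeholder assumptions, not values from the session.

```python
# Minimal sketch: emit custom telemetry to Application Insights from Python.
# Assumes the opencensus-ext-azure package and a valid connection string;
# the logger name and custom dimensions below are illustrative only.
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger("integration.monitoring")
logger.setLevel(logging.INFO)
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
))

# Custom dimensions appear in Application Insights and can be queried
# alongside the out-of-box views using Log Analytics (Kusto) queries.
logger.info(
    "Order message processed",
    extra={"custom_dimensions": {"interface": "OrderAPI", "durationMs": 245}},
)
```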
Did you know Microsoft provides a Hadoop Platform-as-a-Service (PaaS)? It’s called Azure HDInsight and it deploys and provisions managed Apache Hadoop clusters in the cloud, providing a software framework designed to process, analyze, and report on big data with high reliability and availability. HDInsight uses the Hortonworks Data Platform (HDP) Hadoop distribution that includes many Hadoop components such as HBase, Spark, Storm, Pig, Hive, and Mahout. Join me in this presentation as I talk about what Hadoop is, why deploy to the cloud, and Microsoft’s solution.
The data lake has become extremely popular, but there is still confusion on how it should be used. In this presentation I will cover common big data architectures that use the data lake, the characteristics and benefits of a data lake, and how it works in conjunction with a relational data warehouse. Then I’ll go into details on using Azure Data Lake Storage Gen2 as your data lake, and various typical use cases of the data lake. As a bonus I’ll talk about how to organize a data lake and discuss the various products that can be used in a modern data warehouse.
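To make the "organize a data lake" bonus concrete, here is a minimal sketch of one typical zone layout (my own assumption, not the one from the talk) created in Azure Data Lake Storage Gen2 with the azure-storage-file-datalake SDK; the account URL, container, and folder names are placeholders.

```python
# Minimal sketch: create a zone-based folder layout in ADLS Gen2.
# Account URL, container name, and zone/folder names are illustrative assumptions.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
filesystem = service.get_file_system_client(file_system="datalake")

# A common convention: zone / source system / entity / load date.
for path in [
    "raw/salesdb/orders/2024/01/15",
    "cleansed/salesdb/orders",
    "curated/sales/orders_star",
]:
    filesystem.create_directory(path)
```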
Over the last decade, the 3Vs of data - Volume, Velocity & Variety - have grown massively. The Big Data revolution has completely changed the way companies collect, analyze & store data. Advancements in cloud-based data warehousing technologies have empowered companies to fully leverage big data without heavy investments of time and resources. But that doesn’t mean building and managing a cloud data warehouse is free of challenges. From deciding on a service provider to designing the architecture, deploying a data warehouse tailored to your business needs is a strenuous undertaking. Looking to deploy a data warehouse to scale your company’s data infrastructure, or still on the fence? In this presentation you will gain insights into current data warehousing trends, best practices, and the future outlook. Learn how to build your data warehouse with the help of real-life use cases and a discussion of commonly faced challenges. In this session you will learn: - Choosing the best solution - Data Lake vs. Data Warehouse vs. Data Mart - Choosing the best data warehouse design methodology: Data Vault vs. Kimball vs. Inmon - A step-by-step approach to building an effective data warehouse architecture - Common reasons data warehouse implementations fail and how to avoid them
Apache Hadoop is revolutionizing business intelligence and data analytics by providing a scalable and fault-tolerant distributed system for data storage and processing. It allows businesses to explore raw data at scale, perform complex analytics, and keep data alive for long-term analysis. Hadoop provides agility through flexible schemas and the ability to store any data and run any analysis. It offers scalability from terabytes to petabytes and consolidation by enabling data sharing across silos.
Barbara Zigman has over 25 years of experience in telecommunications management positions involving business development, sales, marketing, and product management. She has worked for several service providers and has led teams supporting the sale of complex technical products and services. Her technical expertise includes fiber networks, TDM networks, IP networking, PBX/VoIP systems, and wireless technologies.
Has your company been building data warehouses for years using SQL Server? And are you now tasked with creating or moving your data warehouse to the cloud and modernizing it to support “Big Data”? What technologies and tools should you use? That is what this presentation will help you answer. First we will cover what questions to ask concerning data (type, size, frequency), reporting, performance needs, on-prem vs cloud, staff technology skills, OSS requirements, cost, and MDM needs. Then we will show you common big data architecture solutions and help you answer questions such as: Where do I store the data? Should I use a data lake? Do I still need a cube? What about Hadoop/NoSQL? Do I need the power of MPP? Should I build a "logical data warehouse"? What is this lambda architecture? Can I use Hadoop for my DW? Finally, we’ll show some architectures of real-world customer big data solutions. Come to this session to get started down the path to making the proper technology choices in moving to the cloud.
Learn how SQL Server can scale to HUNDREDS of terabytes for BI solutions. This session will focus on Fast Track Solutions and Appliances, Reference Architectures, and Parallel Data Warehousing (PDW). Included will be performance numbers and lessons learned on a PDW implementation and how a successful BI solution was built on top of it using SSAS.
This document discusses how Apache Hadoop provides a solution for enterprises facing challenges from the massive growth of data. It describes how Hadoop can integrate with existing enterprise data systems like data warehouses to form a modern data architecture. Specifically, Hadoop provides lower costs for data storage, optimization of data warehouse workloads by offloading ETL tasks, and new opportunities for analytics through schema-on-read and multi-use data processing. The document outlines the core capabilities of Hadoop and how it has expanded to meet enterprise requirements for data management, access, governance, integration and security.
Power BI with artificial intelligence capabilities: integration with machine learning, key influencers analysis visuals, and Q&A.
It can be quite challenging keeping up with the frequent updates to the Microsoft products and understanding all their use cases and how all the products fit together. In this session we will differentiate the use cases for each of the Microsoft services, explaining and demonstrating what is good and what isn't, in order for you to position, design and deliver the proper adoption use cases for each with your customers. We will cover a wide range of products such as Databricks, SQL Data Warehouse, HDInsight, Azure Data Lake Analytics, Azure Data Lake Store, Blob storage, and AAS as well as high-level concepts such as when to use a data lake. We will also review the most common reference architectures (“patterns”) witnessed in customer adoption.
This document discusses design principles for a modern data warehouse based on case studies from de Bijenkorf and Travelbird. It advocates for a scalable cloud-based architecture built around a bus architecture, a lambda architecture to process both real-time and batch data, a federated data model to handle structured and unstructured data, massively parallel processing databases, an agile data model like Data Vault, code automation, and ELT rather than ETL. Specific technologies used by de Bijenkorf include AWS services, Snowplow, Rundeck, Jenkins, Pentaho, Vertica, Tableau, and automated Data Vault loading. Travelbird additionally uses Hadoop for initial data processing before loading into Redshift.
SQL Power Consulting is a Toronto-based consulting firm founded in 1988 that specializes in data warehousing and business intelligence solutions. They offer professional consulting services to support the end-to-end deployment of BI solutions. Their methodology involves multiple phases including requirements review, architecture design, project planning, ETL and report design/build, and warranty support. They emphasize critical success factors like commitment from stakeholders, flexible architectures, productivity tools, and delivering business value for clients.
Tableau Desktop allows users to connect directly to data sources to create visualizations. Tableau Server acts as a middle layer, querying data sources on behalf of client web applications and browsers. It caches frequently accessed data to improve performance. Users can publish Tableau workbooks and data sources to Tableau Server from Tableau Desktop. This allows visualization of live, up-to-date data through a web browser without needing direct access to the underlying sources.
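The publish step described above can also be driven programmatically. The following sketch uses the tableauserverclient Python library (an assumption for illustration; the abstract describes publishing from Tableau Desktop), with the server address, token, site, project, and file path as placeholders.

```python
# Minimal sketch: publish a workbook to Tableau Server via its REST API wrapper.
# Server address, token name/value, site, project, and file path are placeholders.
import tableauserverclient as TSC

auth = TSC.PersonalAccessTokenAuth(
    token_name="ci-publisher", personal_access_token="<token>", site_id="analytics"
)
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    # Find the target project, then push the workbook; Tableau Server will query
    # the underlying data source (and cache results) when users open it in a browser.
    all_projects, _ = server.projects.get()
    project = next(p for p in all_projects if p.name == "Sales")
    workbook = TSC.WorkbookItem(project_id=project.id)
    server.workbooks.publish(
        workbook, "sales_dashboard.twbx", mode=TSC.Server.PublishMode.Overwrite
    )
```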
This document discusses techniques for optimizing Power BI performance. It recommends tracing queries using DAX Studio to identify slow queries and refresh times. Tracing tools like SQL Profiler and log files can provide insights into issues occurring in the data sources, Power BI layer, and across the network. Focusing on optimization by addressing wait times through a scientific process can help resolve long-term performance problems.
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a modern data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. They all may sound great in theory, but I'll dig into the concerns you need to be aware of before taking the plunge. I’ll also include use cases so you can see which approach will work best for your big data needs. And I'll discuss Microsoft's version of the data mesh.
The document discusses building a data warehouse in SQL Server. It provides an agenda that covers topics like an overview of data warehousing, data warehouse design, dimension and fact tables, and physical design. It also discusses components of a data warehousing solution like the data warehouse database, ETL processes, and security considerations.
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
The document discusses building a data platform for analytics in Azure. It outlines common issues with traditional data warehouse architectures and recommends building a data lake approach using Azure Synapse Analytics. The key elements include ingesting raw data from various sources into landing zones, creating a raw layer using file formats like Parquet, building star schemas in dedicated SQL pools or Spark tables, implementing alerting using Log Analytics, and loading data into Power BI. Building the platform with Python pipelines, notebooks, and GitHub integration is emphasized for flexibility, testability and collaboration.
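As a rough sketch of the raw-layer and star-schema steps mentioned above (storage paths, column names, and database/table names are assumptions for illustration, not taken from the document), a Synapse- or Databricks-style PySpark notebook cell might look like this:

```python
# Minimal sketch: land raw data as Parquet, then build a star-schema fact table.
# Storage paths, columns, and database/table names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# 1. Raw layer: persist the landed CSV as Parquet for cheaper, faster rereads.
orders = spark.read.option("header", True).csv(
    "abfss://landing@lake.dfs.core.windows.net/sales/orders/"
)
orders.write.mode("overwrite").parquet(
    "abfss://raw@lake.dfs.core.windows.net/sales/orders/"
)

# 2. Curated layer: a simple fact table keyed to a date dimension, saved as a
#    Spark table that Power BI (or a dedicated SQL pool) can query.
spark.sql("CREATE DATABASE IF NOT EXISTS curated")
fact_orders = (
    orders
    .withColumn("order_date_key", F.date_format("order_date", "yyyyMMdd").cast("int"))
    .select("order_id", "customer_id", "order_date_key", F.col("amount").cast("double"))
)
fact_orders.write.mode("overwrite").saveAsTable("curated.fact_orders")
```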
Thirty years is a long time for a technology foundation to remain as active as relational databases have. Are their replacements here? In this webinar, we say no. Databases have not sat around while Hadoop emerged. The Hadoop era generated a ton of interest and confusion, but is it still relevant as organizations deploy cloud storage like a kid in a candy store? We’ll discuss which platforms to use for which data. This is a critical decision that can dictate two to five times additional work effort if it’s a bad fit. Drop the herd mentality: in reality, there is no “one size fits all” right now, and we need to make our platform decisions against this backdrop. This webinar will distinguish these analytic deployment options and help you choose your platforms for 2020 and beyond.
The document discusses optimizing a data warehouse by offloading some workloads and data to Hadoop. It identifies common challenges with data warehouses like slow transformations and queries. Hadoop can help by handling large-scale data processing, analytics, and long-term storage more cost effectively. The document provides examples of how customers benefited from offloading workloads to Hadoop. It then outlines a process for assessing an organization's data warehouse ecosystem, prioritizing workloads for migration, and developing an optimization plan.
This session will cover building the modern data warehouse by migrating from a traditional DW platform into the cloud, using Amazon Redshift and the cloud ETL tool Matillion to provide self-service BI for the business audience. It will cover the technical migration path of a DW with PL/SQL ETL to Amazon Redshift via Matillion ETL, with a detailed comparison of modern ETL tools. Moreover, this talk will focus on working backward through the process, i.e. starting from the business audience and the needs that drive changes in the old DW. Finally, this talk will cover the idea of self-service BI, and the author will share a step-by-step plan for building an efficient self-service environment using the modern BI platform Tableau.
Organizations that have vast amounts of data in legacy applications often experience difficulties delivering that data to business unit end-users. Register to learn how Eliza Corporation and Scholastic overcame this challenge by leveraging a Data Lake solution from NorthBay on AWS to optimize data analytics and provide greater visibility. AWS and NorthBay will give you an in-depth overview of how you can use a Data Lake in conjunction with your existing on-premises or cloud-based Data Warehouse. NorthBay helps organizations scale their ETL and data warehousing workloads using Amazon EMR and Amazon Redshift. Join us to learn: • Best practices for using a Data Lake in conjunction with your existing data warehouse • The key aspects of introducing agile and scrum methodologies into an enterprise • The most impactful cost-savings levers that are addressed via a cloud data warehouse migration Who should attend: Heads of Analytics, Heads of BI, Analytics Managers, BI Teams, Senior Analysts
Download & Share Technology Presentations http://goo.gl/k80oY0 Student Guide & Best http://goo.gl/6OkI77
Whether to take data ingestion cycles off the ETL tool and the data warehouse or to facilitate competitive Data Science and building algorithms in the organization, the data lake – a place for unmodeled and vast data – will be provisioned widely in 2020. Though it doesn’t have to be complicated, the data lake has a few key design points that are critical, and it does need to follow some principles for success. Avoid building the data swamp, but not the data lake! The tool ecosystem is building up around the data lake and soon many will have a robust lake and data warehouse. We will discuss policy to keep them straight, send data to its best platform, and keep users’ confidence up in their data platforms. Data lakes will be built in cloud object storage. We’ll discuss the options there as well. Get this data point for your data lake journey.
You can watch the replay of this Geek Sync webinar in the IDERA Resource Center: http://ow.ly/pg7N50A4svf. Today's data management professional is finding their landscape changing. They have multiple database platforms to manage, multi-OS environments, and everyone wants it now. Join IDERA and Kellyn Pot’Vin-Gorman as she discusses the power of auto deployment in Azure when faced with complex environments and tips to increase the knowledge you need at the speed of light. Kellyn will cover scripting basics, advanced Portal features, opportunities to lessen the learning curve, and how multi-platform and multi-tier doesn't have to mean multi-cloud. Attendees can expect to learn how to build automation scripts efficiently, even if you have little scripting experience, and how to work with Azure automation deployments. This session will allow you to begin building a repository of multi-platform development scripts to use as needed. About Kellyn: Kellyn Pot’Vin-Gorman is a member of the Oak Table Network and an IDERA ACE and Oracle ACE Director alumnus. She is the newest Technical Solution Professional in Power BI with AI in the EdTech group at Microsoft. Kellyn is known for her extensive work with multi-database platforms, DevOps, cloud migrations, virtualization, visualizations, scripting, environment optimization tuning, automation, and architecture design. She has spoken at numerous technical conferences for Oracle, Big Data, DevOps, Testing and SQL Server. Her blog, http://dbakevlar.com and social media activity under her handle, DBAKevlar is well respected for her insight and content.
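For readers who want a starting point before the webinar, here is a minimal sketch (my own example, not Kellyn's scripts) of a scripted, repeatable Azure deployment using the azure-identity and azure-mgmt-resource Python SDKs; the subscription ID, resource group name, region, and tags are placeholders.

```python
# Minimal sketch: scripted Azure deployment with the Python management SDKs.
# Subscription ID, resource names, region, and tags below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()   # picks up az login, env vars, or managed identity
subscription_id = "<subscription-guid>"

resources = ResourceManagementClient(credential, subscription_id)

# Idempotent: re-running the script simply reapplies the desired state,
# which is what makes it safe to keep in a shared automation repository.
resources.resource_groups.create_or_update(
    "rg-data-platform-dev",
    {"location": "eastus2", "tags": {"env": "dev", "owner": "data-team"}},
)
```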
The document summarizes new features in SQL Server 2016 SP1, organized into three categories: performance enhancements, security improvements, and hybrid data capabilities. It highlights key features such as in-memory technologies for faster queries, always encrypted for data security, and PolyBase for querying relational and non-relational data. New editions like Express and Standard provide more built-in capabilities. The document also reviews SQL Server 2016 SP1 features by edition, showing advanced features are now more accessible across more editions.
This document discusses using Oracle Business Intelligence Enterprise Edition (OBIEE) and the Data Vault data modeling technique to virtualize a business intelligence environment in an agile way. Data Vault provides a flexible and adaptable modeling approach that allows for rapid changes. OBIEE allows for the virtualization of dimensional models built on a Data Vault foundation, enabling quick iteration and delivery of reports and dashboards to users. Together, Data Vault and OBIEE provide an agile approach to business intelligence.
Overview of data lakes architectures, governance and lessons learned. Presented at RVA Data Engineering Meetup on 12/15/2020.
This document discusses a community conference focused on cloud computing. It promotes connecting, sharing, and learning at the event. Several speakers are highlighted including Rohan Kumar from Microsoft who will give a keynote on data platforms. The document discusses major trends converging around intelligence, cloud, big data and IoT. It promotes Microsoft solutions for optimizing IT and business transformation through an intelligent platform, self-managed services, a modern data platform, and integrated intelligence.
Azure SQL Database Managed Instance is a new flavor of Azure SQL Database that is a game changer. It offers near-complete SQL Server compatibility and network isolation to easily lift and shift databases to Azure (you can literally back up an on-premises database and restore it into an Azure SQL Database Managed Instance). Think of it as an enhancement to Azure SQL Database that is built on the same PaaS infrastructure and maintains all its features (i.e. active geo-replication, high availability, automatic backups, database advisor, threat detection, intelligent insights, vulnerability assessment, etc.) but adds support for databases up to 35TB, VNET, SQL Agent, cross-database querying, replication, etc. So, you can migrate your databases from on-prem to Azure with very little migration effort, which is a big improvement over the current Singleton or Elastic Pool flavors, which can require substantial changes.
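To illustrate the lift-and-shift point, restoring a native backup into a Managed Instance is plain T-SQL, here driven from Python with pyodbc as a hedged sketch: the server endpoint, credentials, storage URL, and database name are placeholders, and a database-scoped credential with a SAS token for the container must already exist.

```python
# Minimal sketch: restore an on-premises .bak into Azure SQL Managed Instance.
# Connection string, storage URL, and database name are illustrative placeholders;
# a CREDENTIAL matching the container URL (with a SAS token) must be created first.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-instance.public.abcd1234.database.windows.net,3342;"
    "UID=sqladmin;PWD=<password>",
    autocommit=True,   # RESTORE cannot run inside a user transaction
)

conn.execute("""
    RESTORE DATABASE [SalesDW]
    FROM URL = N'https://mystorageacct.blob.core.windows.net/backups/SalesDW.bak'
""")
```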
Relational databases can become rigid and limit flexibility over time as data needs change. This can lead to services becoming tightly coupled and difficult to independently deploy (Relational Database Stockholm Syndrome). The document discusses an alternative approach that uses a distributed log (Apache Kafka) to store data as events, with domain-specific services processing these events independently. This allows for greater agility, flexibility and independent deployment of services.
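A tiny sketch of the event-log pattern described above, using the kafka-python client (the broker address, topic name, and event shape are assumptions for illustration): one service appends domain events to a topic, and any other service can independently consume and project them at its own pace.

```python
# Minimal sketch: store facts as events in a Kafka topic; services consume independently.
# Broker address, topic name, and event fields are illustrative placeholders.
import json

from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# The order service appends an immutable event instead of updating shared tables.
producer.send("orders", {"event": "OrderPlaced", "order_id": 42, "amount": 99.5})
producer.flush()

# A separate service (e.g. billing) builds its own view from the same log,
# without coupling its schema or deployment to the order service.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="billing",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value["event"], message.value["order_id"])
```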
This document provides an overview of using Polybase for data virtualization in SQL Server. It discusses installing and configuring Polybase, connecting external data sources like Azure Blob Storage and SQL Server, using Polybase DMVs for monitoring and troubleshooting, and techniques for optimizing performance like predicate pushdown and creating statistics on external tables. The presentation aims to explain how Polybase can be leveraged to virtually access and query external data using T-SQL without needing to know the physical data locations or move the data.
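A hedged sketch of the pieces the presentation walks through, driven from Python with pyodbc to keep a single example language: define an external data source over Azure Blob Storage, expose a folder as an external table, and create statistics so the optimizer can estimate cardinality and push predicates down. All server names, storage accounts, paths, and column definitions are placeholder assumptions, the database-scoped credential must already exist, and the exact syntax targets SQL Server 2016+ PolyBase (it varies by version).

```python
# Minimal sketch: PolyBase external data source, external table, and statistics.
# Assumes PolyBase is installed/configured and a database-scoped credential exists.
# All names, URLs, and columns are illustrative placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlserver01;DATABASE=StageDB;"
    "Trusted_Connection=yes",
    autocommit=True,
)

# External data source pointing at a blob container (HADOOP type for SQL Server PolyBase).
conn.execute("""
CREATE EXTERNAL DATA SOURCE AzureBlobSales WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://sales@mystorageacct.blob.core.windows.net',
    CREDENTIAL = AzureStorageCredential
);
""")

# File format describing the delimited files in the container.
conn.execute("""
CREATE EXTERNAL FILE FORMAT CsvFormat WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', USE_TYPE_DEFAULT = TRUE)
);
""")

# External table: query the files with plain T-SQL without moving the data.
conn.execute("""
CREATE EXTERNAL TABLE dbo.ext_Orders (
    OrderID    INT,
    CustomerID INT,
    Amount     DECIMAL(10, 2)
) WITH (LOCATION = '/orders/', DATA_SOURCE = AzureBlobSales, FILE_FORMAT = CsvFormat);
""")

# Statistics on external tables help the optimizer choose plans and push predicates down.
conn.execute("CREATE STATISTICS st_ext_Orders_CustomerID ON dbo.ext_Orders (CustomerID);")
```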