The document discusses Cisco's Hadoop as a service offering on their Intercloud platform. Some key points:
- Cisco provides managed Hadoop, including Cloudera's distribution, on optimized instances with local storage and object storage. This offers a scalable, reliable, and secure environment for Hadoop workloads.
- Use cases discussed include predictive maintenance using IoT data and analyzing customer journeys across multiple channels.
- A pilot test showed Cisco's platform could process over 100 million records from production data across various Hadoop jobs.
- Cisco also discusses its data virtualization product, CiscoDV, which can integrate data across on-premises and cloud sources on both Cisco and AWS.
Startup Case Study: Leveraging the Broad Hadoop Ecosystem to Develop World-Fi... (DataWorks Summit)
Back in 2014, our team set out to change the way the world exchanges and collaborates with data. Our vision was to build a single tenant environment for multiple organisations to securely share and consume data. And we did just that, leveraging multiple Hadoop technologies to help our infrastructure scale quickly and securely.
Today Data Republic’s technology delivers a trusted platform for hundreds of enterprise level companies to securely exchange, commercialise and collaborate with large datasets.
Join Head of Engineering, Juan Delard de Rigoulières and Senior Solutions Architect, Amin Abbaspour as they share key lessons from their team’s journey with Hadoop:
* How a startup leveraged a clever combination of Hadoop technologies to build a secure data exchange platform
* How Hadoop technologies helped us deliver key solutions around governance, security and controls of data and metadata
* An evaluation of the maturity and usefulness of some Hadoop technologies in our environment (Hive, HDFS, Spark, Ranger, Atlas, Knox, Kylin): we've used them all extensively.
* Our bold approach of exposing APIs directly to end users, as well as the challenges, learnings, and code we created in the process
* Learnings from the front-line: How our team coped with code changes, performance tuning, issues and solutions while building our data exchange
Whether you’re an enterprise-level business or a start-up looking to scale, this case study discussion offers behind-the-scenes lessons and key tips for using Hadoop technologies to manage data governance and collaboration in the cloud.
Speakers:
Juan Delard De Rigoulieres, Head of Engineering, Data Republic Pty Ltd
Amin Abbaspour, Senior Solutions Architect, Data Republic
Achieving cloud scale with microservices-based applications on Azure (Utkarsh Pandey)
This document discusses different categories of cloud services and the balance of control and responsibility they provide. It also summarizes benefits reported by organizations that shifted application development and deployment from Azure Infrastructure as a Service to Azure Platform as a Service, including a 466% return on investment and 80% reduction in IT time. Additionally, it outlines challenges with monolithic applications and benefits of containerization and microservices for scalability, reliability, and adopting new technologies.
3 Things to Learn About:
-How Kudu is able to fill the analytic gap between HDFS and Apache HBase
-The trade-offs between real-time transactional access and fast analytic performance
-How Kudu provides an option to achieve fast scans and random access from a single API
Part 2: Cloudera’s Operational Database: Unlocking New Benefits in the Cloud (Cloudera, Inc.)
3 Things to Learn About:
*On-premises versus the cloud
*Design & benefits of real-time operational data in the cloud
*Best practices and architectural considerations
Unify Stream and Batch Processing using Dataflow, a Portable Programmable Mod... (DataWorks Summit)
Google Cloud Dataflow is a fully managed service that allows users to build batch or streaming parallel data processing pipelines. It provides a unified programming model for batch and streaming workflows. Cloud Dataflow handles resource management and optimization to efficiently execute data processing jobs on Google Cloud Platform.
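The unified batch/streaming idea can be illustrated with a plain-Python sketch (this is not the actual Dataflow/Beam API; the function and source names are illustrative): one transform definition runs unchanged over a bounded batch collection and an unbounded-style streaming generator.

```python
# Illustrative sketch of a unified programming model: the same transform
# serves both a bounded (batch) and an unbounded-style (streaming) source.

def word_count(records):
    """A single, source-agnostic transform: tokenize lines and count words."""
    counts = {}
    for line in records:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

batch_source = ["big data", "big pipelines"]   # bounded collection

def stream_source():                           # unbounded-style generator
    yield "big data"
    yield "big pipelines"

# One transform definition, two execution modes.
print(word_count(batch_source))       # {'big': 2, 'data': 1, 'pipelines': 1}
print(word_count(stream_source()))    # same result from the streaming source
```

In the real service, the runner additionally handles windowing, triggers, and resource management; the point here is only that the transform logic is written once.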
This document discusses total cost of ownership considerations for Hadoop implementations. It outlines different deployment methods like on-premise Hadoop, Hadoop appliances, and Hadoop as a service through cloud providers. For on-premise implementations, it identifies key cost categories and provides a sample TCO calculation over 36 months. It also discusses factors for managing implementation risks from vendors and internal IT. The document concludes by outlining scenarios for when on-premise or Hadoop as a service may be preferable based on organizational needs and IT resources.
This document summarizes a presentation given by Chris Nauroth and Sheetal Dolas of Hortonworks on keeping Hadoop clusters running optimally. It describes several common operational challenges faced by Hadoop users through examples, and how the SmartSense tool can help address these issues by continuously evaluating cluster configurations, identifying risks, and providing recommendations. The presentation covers topics such as unstable NameNodes, high CPU usage, HDFS upgrades, container sizing, accidental data deletion, and time synchronization issues across nodes.
VMworld 2013: Big Data Platform Building Blocks: Serengeti, Resource Manageme... (VMworld)
VMworld 2013
Abhishek Kashyap, Pivotal
Kevin Leong, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The document discusses deploying Hadoop in the cloud. Some key benefits of using Hadoop in the cloud include scalability, flexibility, automated failover, and cost efficiency. Microsoft's Azure HDInsight offering provides a fully managed Hadoop and Spark service in the cloud that allows users to setup clusters in minutes without having to manage the infrastructure. It also integrates with other Azure services like Data Lake Store, Stream Analytics, and Machine Learning to provide end-to-end big data analytics solutions.
This document discusses Azure HDInsight and how it provides a managed Hadoop as a service on Microsoft's cloud platform. Key points include:
- Azure HDInsight runs Apache Hadoop and related projects like Hive and Pig in a cloud-based cluster that can be set up in minutes without hardware to deploy or maintain.
- It supports running queries and analytics jobs on data stored locally in HDFS or in Azure cloud storage like Blob storage and Data Lake Store.
- An IDC study found that Microsoft customers using cloud-based Hadoop through Azure HDInsight have 63% lower total cost of ownership than an on-premises Hadoop deployment.
Hive LLAP: A High Performance, Cost-effective Alternative to Traditional MPP ... (DataWorks Summit)
The document discusses Hive LLAP (Live Long and Process) as a high performance and cost-effective alternative to traditional Massively Parallel Processing (MPP) databases for querying large datasets on Hadoop. It describes Walmart's implementation of Hive LLAP on their data lake to improve query performance for business users. A proof-of-concept found Hive LLAP queries were up to 50% faster when using 15 nodes instead of 10, and it performed comparably or better than two MPP databases with similar or larger infrastructures. Walmart plans to further evaluate Hive LLAP on newer Hadoop distributions and technologies to improve availability and workload management.
The document discusses Apache Hive and Apache Druid for fast SQL on big data. It provides performance benchmarks showing Hive LLAP is faster than Presto and Spark SQL for TPC-DS queries. It describes features of Hive LLAP including in-memory caching, query result caching, and metadata caching. It also discusses new Hive 3 features like materialized views and optimizer improvements. The document then provides an overview of Apache Druid's capabilities for real-time ingestion and querying of streaming data before discussing how Hive and Druid can work together, with Hive able to push down queries to Druid.
This presentation will describe the analytics-to-cloud migration initiative underway at Fannie Mae. The goal of this effort is threefold: (1) build a sustainable process for data lake hydration on the cloud, (2) modernize the Fannie Mae enterprise data warehouse infrastructure, and (3) retire Netezza.
Fannie Mae partnered with Impetus for modernization of its Netezza legacy analytics platform. This involved the use of the Impetus Workload Migration solution—a sophisticated translation engine that automated the migration of their complex Netezza stored procedures, shell and scheduler scripts to Apache Spark compatible scripts. This delivered substantial savings in time, effort and cost, while reducing overall project risk.
Included in the scope of the automation project was an automated assessment capability to perform detailed profiling of the current workloads. The output from the assessment stage was a data-driven offloading blueprint and roadmap for which workloads to migrate. A hybrid cloud-based big data solution was designed based on that. In addition to fulfilling the essential requirement of historical (and incremental) data migration and automated logic translation, the solution also recommends optimal storage formats for the data in the cloud, performing SCD Type 1 and Type 2 for mission-critical parameters and reloading the transformed data back for reporting/analytical consumption.
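The SCD Type 1 and Type 2 handling mentioned above can be sketched in plain Python. The field names and functions below are illustrative assumptions, not Fannie Mae's actual schema or the Impetus solution's code: Type 1 overwrites in place, while Type 2 keeps a versioned history.

```python
from datetime import date

def scd_type1(dim, key, new_attrs):
    """Type 1: overwrite the dimension record in place; no history kept."""
    dim[key] = dict(new_attrs)
    return dim

def scd_type2(rows, key, new_attrs, as_of):
    """Type 2: expire the current row and append a new versioned row."""
    for row in rows:
        if row["key"] == key and row["end_date"] is None:
            row["end_date"] = as_of            # close out the old version
    rows.append({"key": key, "start_date": as_of, "end_date": None, **new_attrs})
    return rows

# Hypothetical dimension history for a single record.
history = [{"key": "loan-1", "rate": 3.5,
            "start_date": date(2020, 1, 1), "end_date": None}]
scd_type2(history, "loan-1", {"rate": 4.0}, date(2021, 6, 1))
# history now holds two rows: the expired 3.5 version and the current 4.0 one.
```

In the described solution this logic would run at scale in Spark over the migrated tables; the sketch only shows the versioning semantics.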
This will include the following topics:
i. Fannie Mae analytics overview
ii. Why cloud migration for analytics?
iii. Approach, major challenges, lessons learned
Speaker
Kevin Bates, Vice President for Enterprise Data Strategy Execution, Fannie Mae
This document provides an overview of installing and programming with Apache Spark on the Hortonworks Data Platform (HDP). It discusses how Spark fits within HDP and can be used for batch processing, streaming, SQL queries and machine learning. The document outlines how to install Spark on HDP using Ambari and describes Spark programming with Resilient Distributed Datasets (RDDs), transformations, actions and caching/persistence. It provides examples of Spark APIs and programming patterns.
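The transformation/action distinction described above can be imitated in a few lines of plain Python. This mimics the RDD pattern only (lazy transformations, eager actions); it is not the PySpark API, and the class name is invented for illustration.

```python
# Minimal sketch of the RDD idea: transformations (map, filter) are lazy and
# only describe a computation; an action (collect, count) forces evaluation.

class MiniRDD:
    def __init__(self, data_fn):
        self._data_fn = data_fn            # deferred: nothing runs yet

    def map(self, f):                      # transformation: returns a new MiniRDD
        return MiniRDD(lambda: (f(x) for x in self._data_fn()))

    def filter(self, pred):                # transformation: also lazy
        return MiniRDD(lambda: (x for x in self._data_fn() if pred(x)))

    def collect(self):                     # action: triggers the whole chain
        return list(self._data_fn())

    def count(self):                       # action: re-evaluates the chain
        return sum(1 for _ in self._data_fn())

rdd = MiniRDD(lambda: iter(range(10)))
evens_squared = rdd.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
print(evens_squared.collect())   # [0, 4, 16, 36, 64]
print(evens_squared.count())     # 5
```

Spark's real RDDs add partitioning across nodes, fault tolerance via lineage, and explicit caching/persistence, which is why `cache()` matters there: without it, each action re-runs the chain, just as `count()` does above.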
Zeta Architecture: The Next Generation Big Data Architecture (MapR Technologies)
The Zeta Architecture is a high-level enterprise architectural construct which enables simplified business processes and defines a scalable way to increase the speed of integrating data into the business. The result? A powerful, data-centric enterprise.
Hadoop is being used across organizations for a variety of purposes like data staging, analytics, security monitoring, and manufacturing quality assurance. However, most organizations still have separate systems optimized for specific workloads. Hadoop has the potential to relieve pressure on these systems by handling data staging, archives, transformations, and exploration. Going forward, Hadoop will need to provide enterprise-grade capabilities like high performance, security, data protection, and support for both analytical and operational workloads to fully replace specialized systems and become the main enterprise data platform.
This Big Data case study outlines the Hadoop infrastructure deployment for a Fortune 100 media and telecommunications company.
Hadoop adoption in this company had grown organically across multiple different teams, starting with “science projects” and lab initiatives that quickly grew and expanded. Going forward, some of the options they considered for their Big Data deployment included expanding their on-premises infrastructure and using a Hadoop-as-a-Service cloud offering.
Fortunately, they realized that there is a third option: providing the benefits of Hadoop-as-a-Service with on-premises infrastructure. They selected the BlueData EPIC software platform to virtualize their Hadoop infrastructure and provide on-demand access to virtual Hadoop clusters in a secure, multi-tenant model.
Learn more about this case study in the blog post at: http://www.bluedata.com/blog/2015/05/big-data-case-study-hadoop-infrastructure
Insights into Real-world Data Management Challenges (DataWorks Summit)
Oracle began with the belief that the foundation of IT is managing information. The Oracle Cloud Platform for Big Data is a natural extension of our belief in the power of data. Oracle’s Integrated Cloud is one cloud for the entire business, meeting everyone’s needs. It’s about connecting people to information through tools that help you combine and aggregate data from any source.
This session will explore how organizations can transition to the cloud by delivering fully managed and elastic Hadoop and real-time streaming cloud services to build robust offerings that provide measurable value to the business. We will explore key data management trends and dive deeper into pain points we are hearing about from our customer base.
How to Succeed in Hadoop: comScore’s Deceptively Simple Secrets to Deploying ... (MapR Technologies)
Get an insider's view into one of the most talked-about Hadoop deployments in the world!
As more enterprises realize the value of big data, Hadoop is moving from lab curiosity to genuine competitive advantage. But how can you confidently deploy it in a production environment?
In this joint webinar with Syncsort, learn firsthand from industry thought leader, Mike Brown, CTO of comScore, how to offload critical data and optimize your enterprise data architecture with Hadoop to increase performance while lowering costs.
Introducing Cognitive Threat Analytics (CTA), Cisco's automated breach-detection technology based on statistical modeling and machine learning of network traffic behaviors. Its goal is to identify end-user devices within the monitored network whose traffic does not represent the communication of a legitimate human user behind a web browser, but rather a malware-infected (breached) device establishing command-and-control communication with an external malicious infrastructure. CTA produces actionable security intelligence for security operations and threat research teams to act on. The STIX/TAXII API standards are used for the security intelligence interchange, and integrations are available with the leading SIEM vendors and other STIX/TAXII-compliant clients.
A session in the DevNet Zone at Cisco Live, Berlin. Join us for a case study discussion about DevOps principles and how they were incorporated into an Infinite Video project.
This document provides an introduction to Snort rule syntax and content matching. It describes the basic components of a Snort rule including the rule header, action, protocols, addresses, ports, and rule options. It then covers various content matching techniques like content, pcre, and content modifiers like nocase, offset, depth, distance, and within. It also discusses negated content matching, content buffers, and fast_pattern. Finally, it provides examples of how content matching can be used for detection strategies like traffic triage and isolating vulnerable application traffic.
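As a rough illustration of what those content modifiers mean, here is a plain-Python approximation (not Snort itself, and deliberately simplified) of matching a pattern inside an offset/depth window of a payload, with the nocase modifier:

```python
# Approximation of Snort-style content matching semantics:
#   offset - where in the payload to start searching
#   depth  - how many bytes (from offset) to search within
#   nocase - case-insensitive match

def content_match(payload: bytes, pattern: bytes,
                  nocase=False, offset=0, depth=None):
    """Return True if `pattern` occurs within the offset/depth window."""
    window = payload[offset:] if depth is None else payload[offset:offset + depth]
    if nocase:
        window, pattern = window.lower(), pattern.lower()
    return pattern in window

payload = b"GET /admin/login HTTP/1.1"
print(content_match(payload, b"get", nocase=True, depth=3))   # True: "GET" in first 3 bytes
print(content_match(payload, b"/admin", offset=4))            # True: found after the method
print(content_match(payload, b"/admin", depth=4))             # False: outside the window
```

Real Snort rules combine such windows with relative modifiers (`distance`, `within`) and `fast_pattern` selection; the sketch only shows the basic anchoring idea.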
SBT Concepts, part 2 discusses SBT project structure and commands. It explains how to create an SBT project with directories for sources and resources. The document shows how to define build settings in build.sbt or a custom Build.scala file. It demonstrates common SBT commands like compile, run, console, and how to view settings and tasks. Finally, it provides an overview of configurations, plugins, and delegates in SBT.
This document is the user manual for Snort version 2.8.6. It provides an overview of Snort's capabilities in different operating modes like sniffer, packet logger, and network intrusion detection system modes. It also describes how to configure Snort, including preprocessor and rule configuration, as well as output and logging options. The document contains detailed information on topics like includes, rule profiling, output modules, and more.
This document provides instructions for setting up an intrusion prevention system (IPS) using VMware ESXi, Snort IPS, and Debian Linux. It describes configuring the ESXi host with multiple virtual switches and network adapters. It then guides installing and configuring Debian, dependencies like libpcap and Snort on a virtual machine. It also covers configuring PulledPork to automatically download and install Snort rule updates. The goal is to inspect all external network traffic for protection.
The document discusses upgrading Snort from an intrusion detection system (IDS) to an intrusion prevention system (IPS) to provide active network traffic control. An IDS operates in detection mode only using port mirroring, while an IPS requires original traffic and can actively block threats. The document provides instructions for configuring Snort in inline mode between two network segments using two network cards and iptables rules to redirect traffic. It notes that Snort IPS provides transparent control and flexibility through multiple queues and rule sets when using the NFQ module.
Rome 2017: Building advanced voice assistants and chat bots (Cisco DevNet)
If it takes minutes to code a simple bot, building professional bots represents quite a challenge. Soon you realize you need serious programming and API architecture experience but also “Bot” specific skills. In this session, we'll first show the code of advanced Chat and Voice interactions, and then explore the challenges faced when building advanced Bots (Context storage, NLP approaches, Bot Metadata, OAuth scopes), and discuss interesting opportunities from latest industry trends (Bot platforms, Serverless, Microservices). This talk is about showing the code and sharing lessons learned.
A session in the DevNet Zone at Cisco Live, Berlin. Flare allows users with mobile devices to discover and interact with things in an environment. It combines multiple location technologies, such as iBeacon and CMX, with a realtime communications architecture to enable new kinds of user interactions. This session will introduce the Flare REST and Socket.IO API, server, client libraries and sample code, and introduce you to the resources available on DevNet and GitHub.
The development of aerogel is a result of combined research and development in materials science and technology. Aerogel is derived by replacing the liquid component of a gel with a gas.
Humans are responsible for climate change according to the evidence presented in the document. The document discusses two studies that provide evidence that climate change is occurring and is caused by human activity. The first study uses climate models to show that climate change is influencing severe weather patterns in Australia. The second study finds a decline in natural vegetation in rainforests associated with thousands of years of human activity like farming. Most Americans now believe climate change is happening and is caused by humans. Everyday activities like driving large, gas-guzzling vehicles and reliance on cars for transportation contribute to pollution and climate change. Walking and public transportation could help reduce environmental impacts.
The 419 scam (also referred to as the Nigerian scam) is a popular form of fraud in which the fraudster tricks the victim into paying a certain amount of money under the promise of a future, larger payoff. Using a public dataset, we study how these scam campaigns are organized and evolve over time.
Ken Owens, the CTO of Cisco Intercloud Services, presented on Cisco's migration from MapReduce jobs to Spark jobs for processing customer interaction data. The document discussed Cisco's need to embrace both traditional and hyperscale application deployment across data centers, clouds, and edges. It also covered Cisco's analysis platform requirements, AWS and Cisco Intercloud sizing comparisons, and performance results from testing the migration of MapReduce jobs to Spark on the Cisco Intercloud.
How Cisco Migrated from MapReduce Jobs to Spark Jobs - StampedeCon 2015 (StampedeCon)
At the StampedeCon 2015 Big Data Conference: The starting point for this project was a MapReduce application that processed log files produced by the support portal. This application was running on Hadoop with Ruby Wukong. At the time of the project start it was underperforming and did not show good scalability. This made the case for redesigning it using Spark with Scala and Java.
Initial review of the Ruby code revealed that it used disk I/O excessively to communicate between MapReduce jobs: each job was implemented as a separate script passing large data volumes through. Spark is more efficient in managing intermediate data passed between jobs: not only does it keep that data in memory whenever possible, it often eliminates the need for it altogether. However, that alone did not bring much improvement, since there were additional bottlenecks at the data aggregation stages.
The application involved a global data-ordering step, followed by several localized aggregation steps. The first global sort required a significant, inefficient data shuffle. Spark allowed us to partition the data and convert a single global sort into many local sorts, each running on a single node and exchanging no data with other nodes. As a result, several data processing steps started to fit into node memory, which brought about a tenfold performance improvement.
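The optimization described above can be sketched in plain Python (the data and key names are hypothetical; the real job ran on Spark): instead of one global sort over all records, partition by key first, then run many independent local sorts with no data exchange between partitions.

```python
from collections import defaultdict

# Sample records: (partition key, value). In the real system each key's
# partition would live on a single Spark node.
records = [("store-2", 9), ("store-1", 4), ("store-2", 1), ("store-1", 7)]

# Global approach: one big sort over everything (an expensive shuffle in Spark).
global_sorted = sorted(records)

# Partitioned approach: group by key, then many small local sorts.
partitions = defaultdict(list)
for key, value in records:
    partitions[key].append(value)          # analogous to partitioning by key
local_sorted = {k: sorted(v) for k, v in partitions.items()}   # per-partition sort

print(local_sorted)   # {'store-2': [1, 9], 'store-1': [4, 7]}
```

The payoff is that each local sort touches only its own partition's data, so once a partition fits in a node's memory, the sort runs entirely in memory with no cross-node traffic.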
DEVNET-1140 InterCloud MapReduce and Spark Workload Migration and Sharing: Fi... (Cisco DevNet)
Data gravity is a reality when dealing with massive amounts of data in globally distributed systems. Processing this data requires distributed analytics across InterCloud. In this presentation we will share our real-world experience storing, routing, and processing big data workloads on Cisco Cloud Services and Amazon Web Services clouds.
Analyzing petabytes of smartmeter data using Cloud Bigtable, Cloud Dataflow, ... (Edwin Poot)
The energy industry is in transition due to the exponential growth of data generated by the ever-increasing number of connected devices that comprise the Smart Grid. Learn how Energyworx uses GCP to collect and ingest this IoT data with ease, helping its customers uncover hidden value from the data and allowing them to create new business models and concepts.
Cisco Virtualized Multi-tenant Data Center solution (VMDC) is an architectural approach to IT which delivers a Cloud Ready Infrastructure. The architecture encompasses multiple systems and functions defining a standard framework for an IT organization. Standardization allows the organization to achieve operational efficiencies, reduce risk and achieve cost reductions while offering a consistent platform for business.
Oracle's cloud computing strategy is to support both public and private clouds to give customers choice. Oracle offers the technology to build private clouds or run workloads in public clouds. It also offers applications deployed in private shared services environments or via public SaaS. The strategy is based on Oracle's existing virtualization, grid computing, shared services, and management technologies and provides customers the most complete, open, and integrated cloud vision and offerings.
PaaS Lessons: Cisco IT Deploys OpenShift to Meet Developer Demand (Cisco IT)
Cisco IT added OpenShift by Red Hat to its technology mix to rapidly expose development staff to a rich set of web-scale application frameworks and runtimes. Deploying a Platform-as-a-Service (PaaS) architecture like OpenShift brings with it:
- A Focus on the Developer Experience
- Container Technology
- Network Security and User Isolation
- Acceleration of DevOps Models without Negatively Impacting Business
In this session, Cisco and Red Hat will take you through:
- The problems Cisco set out to solve with PaaS.
- How OpenShift aligned with their needs.
- Key lessons learned during the process.
Business & IT Strategy Alignment: This track targets the juncture of business and IT considerations necessary to create competitive advantage. Example topics include: new architecture deployments, competitive differentiators, long-term and hidden costs, and security.
Attendees will learn how to align architecture and technology decisions with their specific business needs and how and when IT departments can provide competitive advantage.
CL2015 - Datacenter and Cloud Strategy and Planning (Cisco)
This document discusses strategies for data center and cloud transformation over the next 5 years. It outlines key digital business trends like data growth, cloud adoption, and security threats that are driving organizations' IT initiatives. These include managing increased data and applications, optimizing cloud strategies, addressing disruptive business models, and securing distributed data and applications. The document advocates adopting flexible consumption models, automation, and supporting edge/IoT applications. It positions Cisco as uniquely able to enable digital transformations through its portfolio of networking, compute, storage, automation, analytics, and security solutions.
The document discusses accelerating cloud services. It notes that today's challenges for IT include rapid service delivery through cloud and devices. Big data, cloud computing, and high-performance computing are clear trends shaping enterprises. The cloud provides opportunities to extract value from data but also poses big challenges. The journey to cloud is inevitable as it addresses pressures on IT organizations from business leaders. The document advocates re-architecting the datacenter with cloud and software-defined infrastructure to provide agility, optimize resources, and enable new applications and services. It highlights Intel's ongoing investments in datacenter technologies and custom processors for AWS, and provides case studies of how Intel and AWS help customers leverage the power of cloud.
Cisco & Microsoft Converged Infrastructure (Aymen Mami)
Cisco & Microsoft Converged Infrastructure workshop @ Cisco Regional Summit Tunisia 2017
What is Cisco UCS Offering
Cisco partnership with SAN storage manufacturers
Cisco partnership with Microsoft for private cloud
Integration between Microsoft System center & Cisco UCS manager
This document discusses predictive maintenance of robots in the automotive industry using big data analytics. It describes Cisco's Zero Downtime solution which analyzes telemetry data from robots to detect potential failures, saving customers over $40 million by preventing unplanned downtimes. The presentation outlines Cisco's cloud platform and a case study of how robot and plant data is collected and analyzed using streaming and batch processing to predict failures and schedule maintenance. It proposes a next generation predictive platform using machine learning to more accurately detect issues before downtime occurs.
Cisco Connect 2018 Thailand - Secure, intelligent platform for the digital bu... (NetworkCollaborators)
This document discusses digital transformation and the secure, intelligent platform needed to enable it. It notes that digital transformation involves adopting new technologies and business models to increase agility, productivity and customer experiences while reducing costs. The platform should amass and unlock big data, embrace multi-cloud environments, reinvent the network, and leverage machine learning/AI to drive business insights. Cisco's strategies for its Spark, DNA Center and other platforms aim to provide such a secure, intelligent platform for digital business.
The Enterprise Guide to Building a Data Mesh - Introducing SpecMesh (IanFurlong4)
For organisations to successfully adopt data mesh, setting up and maintaining infrastructure needs to be easy.
We believe the best way to achieve this is to leverage the learnings from building a ‘central nervous system’, commonly used in modern data-streaming ecosystems. This approach formalises and automates the manual parts of building a data mesh.
This presentation introduces SpecMesh: a methodology and supporting developer toolkit that enables businesses to build the foundations of their data mesh.
With all the hype around Cloud and SDN, business decision makers are finding themselves trying to navigate through many new concepts and consequently needing to change the way they have traditionally selected their IT infrastructure. Technologies are now becoming more integrated and it is more important than ever to help your business be agile enough to keep up with the demands of your users and your customers. Come hear from Lisa Guess to learn how organizations can embrace Cloud technologies such as automation, SDN and Orchestration platforms to help you build next-generation networks.
Cisco Enhances Data Protection, Increases Bandwidth and Simplifies End to End Storage Management
Protect
• Enhance disaster recovery and Business Continuance
• Integrated FCIP on Director Class
Scale
• Nexus 9K for Storage Networking
• 100G /50G/25G IP Storage connectivity
Simplify Operations
• DCNM Connect
• Storage End-to-end Provisioning
MongoDB World 2019: Wipro Software Defined Everything Powered by MongoDB (MongoDB)
Software-defined everything (SDx) addresses customers’ next-generation IT requirements such as agility and scalability. SDx powers the development of domain-aligned, vertical-driven data services such as IoT and analytics as part of the SDx Modern Data Platform based on MongoDB, which facilitates digital disruption.
Solving enterprise challenges through scale-out storage & big compute (Avere Systems)
Google Cloud Platform, Avere Systems, and Cycle Computing experts will share best practices for advancing solutions to big challenges faced by enterprises with growing compute and storage needs. In this “best practices” webinar, you’ll hear how these companies are working to improve results that drive businesses forward through scalability, performance, and ease of management.
The slides were from a webinar presented January 24, 2017. The audience learned:
- How enterprises are using Google Cloud Platform to gain compute and storage capacity on-demand
- Best practices for efficient use of cloud compute and storage resources
- Overcoming the need for file systems within a hybrid cloud environment
- Understand how to eliminate latency between cloud and data center architectures
- Learn how to best manage simulation, analytics, and big data workloads in dynamic environments
- Look at market dynamics drawing companies to new storage models over the next several years
Presenters communicated a foundation to build infrastructure to support ongoing demand growth.
Presentation: Data center transformation - Cisco’s virtualization and cloud jo... (xKinAnx)
The document discusses Cisco's journey towards virtualization and cloud computing. It describes Cisco's global data center strategy, which includes building a new world-class data center in Allen, Texas and developing a cloud strategy and services called CITEIS. CITEIS provides infrastructure as a service using Cisco UCS, Nexus switches, and other Cisco technologies to enable automated, self-service provisioning and elastic infrastructure capacity.
Cisco Connect Toronto 2018: SD-WAN - delivering intent-based networking to t... (Cisco Canada)
This document discusses Cisco SD-WAN and its ability to deliver intent-based networking to branches and the WAN. It begins by noting the business challenges of traditional network architectures in supporting modern needs around mobility, cloud applications, and security. It then introduces Cisco SD-WAN as a software-defined solution that provides automated, predictive, and business-intent driven networking through centralized control, application-aware policies, hybrid WAN transport, and integrated security and analytics capabilities. Key components of the Cisco SD-WAN architecture are also summarized, including the data, control, management, and orchestration planes.
Hope, fear, and the data center time machineCisco Canada
The document discusses Cisco's vision for application-centric infrastructure (ACI) which provides policy-driven automation across networks, compute, storage and security to enable agility. ACI uses concepts like endpoint groups, policies and profiles to simplify management and deliver applications securely on premises or across hybrid clouds. The document also highlights Cisco technologies that integrate with ACI like Tetration for network analytics, Cisco CloudCenter for hybrid cloud orchestration, and Cisco UCS for converged infrastructure.
2. Hadoop in the Cisco Cloud
Kartik Kanakasbesan
kartikka@cisco.com
3. Agenda
• Introduction
• What are Cisco Cloud Services?
• Cisco's Big Data as a service
• Why Hadoop in the Cloud?
• Use cases for Hadoop in the Cloud
• Customer experiences so far
• Going forward
4. The Intercloud
The globally connected network of clouds, spanning Enterprise Private Clouds, Public Clouds, Intercloud Providers, Intercloud Alliance partners, and Intercloud Services.
5. The Intercloud: Customer Value
• CHOICE: Cloud the way you need it
• COMPLIANCE: Manage risk locally and globally
• CONTROL: Across services, in every location
Delivered across Enterprise Private Clouds, Public Clouds, Intercloud Providers, Intercloud Alliance partners, and Intercloud Services.
6. Cisco & OpenStack: Delivering Value
Cisco: worldwide leader in cloud building, cloud services, and managed services; extensive global partner network; global scale; workload mobility; open standards.
OpenStack: 430 companies and growing; 17,000+ individual members; 2,655 cumulative contributions.
7. Cisco Intercloud Services
• OPEN STANDARDS: OpenStack and standards-based cloud
• GLOBAL SCALE: Workload mobility with control and compliance
• PUBLIC CLOUD: Self-service infrastructure enabling the application lifecycle
• PLATFORM APIS: Empowering developers and cloud-scale applications
• RAPID INNOVATION: Best of breed of Cisco's products and best practices
8. Cisco Intercloud Services: Target Customers
• Enterprise: hybrid workloads requiring common network and security policies
• Network-based service providers: value-added services with NGN/NFV; federation capabilities
• Developers: SaaS and network-centric workloads; IoT/IoE, SP Video, Collaboration, and Mobility workloads
9. IoT World Forum Reference Model
Levels, from center to edge:
7. Collaboration & Processes (Involving People & Business Processes)
6. Application (Reporting, Analytics, Control)
5. Data Abstraction (Aggregation & Access)
4. Data Accumulation (Storage)
3. Edge Computing (Analysis & Transformation)
2. Connectivity (Communication & Processing Units)
1. Physical Devices & Controllers (the "things" in IoT): sensors, devices, machines, intelligent edge nodes of all types
11. Cisco Intercloud Services: Platform
• Marketplace for Cisco, third-party ISV, and Enterprise applications and services
• Enterprise/Hybrid Services: Intercloud Fabric support for heterogeneous environments; automated private VPN connectivity; federated network & security policies; network-optimized workload placement; inter-region virtual private backbone over the global SP backbone
• Cisco Micro Services: Collaboration, SP Video, Analytics, IOE, Security, Network/Device Management
• Application Enablement Tools: deployment/management tools, PaaS, third-party open source tools
• Analytics Building Blocks: Hadoop, data ingest, data virtualization
• Core Cloud Services: compute, storage, database, virtual network, LB, VPN; orchestration and auto-scaling; VNF library; service chaining
• Cloud-centric networking, security, and policy, with app-level sovereignty and privacy policies
• Managed public + private cloud; dashboard with basic cloud resource monitoring and network performance metrics
12. Vision for Cisco Cloud Provided Data Services*
CIS-provided data services, accessed via OpenStack APIs, SQL, and REST:
• Data Ingestion as a service
• Hadoop as a service
• Machine Learning as a service
• Data Warehouse services
• Data Virtualization services
• Marketplace for third-party algorithms, and more
Applications built on these primitives, drawing on many data sources:
• IoE/IoT applications: proactive maintenance, manufacturing apps, machine as a service, oil and gas
• Service provider analytics apps: network diagnostics, service provider analytics, customer loyalty analytics, feature analytics
• Collaboration analytics apps: Telepresence analytics, collaboration analytics, social analytics, sentiment analysis
• Other applications, including marketing apps: availability analytics, demand planning, sentiment analysis
Goals:
• Deliver an integrated and managed environment of these primitives
• Deliver analytics applications to customers (hybrid, on-premises, or software as a service)
• Remove the burden of managing the infrastructure
• Allow organizations and lines of business to focus on market opportunities and develop analytics applications
*Subject to change based on market feedback
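The slide says these primitives are exposed through OpenStack APIs, SQL, and REST. As a hedged illustration only (the deck does not document the actual API, so the endpoint shape, field names, and flavor are hypothetical), a provisioning request for the Hadoop-as-a-service primitive might look like:

```python
import json

# Hypothetical request body for provisioning a Hadoop-as-a-service cluster
# through a REST API of the kind the slide alludes to. All names here are
# illustrative, not Cisco's actual schema.
provision_request = {
    "service": "hadoop-aas",
    "template": "batch-processing",   # or "streaming", per the template slide
    "cluster": {
        "data_nodes": 30,
        "node_flavor": "GP-2XLarge",  # CCS flavor from the sizing slide
        "hdfs_volume_gb": 1500,       # per-node HDFS volume
    },
    "ingest": {"kafka": True},
}

body = json.dumps(provision_request, indent=2)
print(body)
# A real client would POST this body to the provider's endpoint, e.g. with
# urllib.request.Request(url, data=body.encode(), method="POST").
```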
13. Cisco's Big Data as a Service*
• Hadoop as a service (aaS)
• Data Ingestion aaS
• Visualization aaS
• Machine Learning aaS
• Data Virtualization aaS
• Analytics aaS
All of these services need:
• Provisioning
• Monitoring
• Scaling
• A consumption model (integrated or individual)
• Cisco-branded service, with minimal flexibility on vendor choice
Provisioned on big, optimized instances with:
• Local storage
• Object and block storage
*Subject to change based on market feedback
14. Cisco's Hadoop as a Service*
Hadoop: reliable, secure & monitored (HaaS), running on OpenStack.
• Provides the market-leading Cloudera Hadoop distribution
• Flexibility to deploy Hadoop-optimized templates for streaming and batch processing
• Data ingest with Apache Kafka
• Supports the Apache Spark stack (Core, SQL, MLlib & GraphX), running on YARN
• Secure access to Hadoop APIs
• Integrates with on-premises Hadoop distributions (if needed)
*Subject to change based on market feedback
16. Why Hadoop in the Cloud Makes Sense
• Reduces barriers to adopting Hadoop: cloud and Hadoop provide the perfect way to "test the waters"
• Helps customers build IoE/IoT applications faster with Cisco's solutions
• Run Hadoop dev/test workloads in the cloud, then provision them on premises
• Leverage Cisco's networking capabilities as a differentiator
• Consistent policies in the cloud, just like on-premises
• A scalable, reliable, and secure environment
17. Hadoop as a Service (HaaS): Market Size & Forecast
• $16B by 2020*, growing at a 70.8% CAGR
• More than 20 players in a highly fragmented market: Amazon, Azure, IBM, Google, Rackspace, and many more
• North America is the leading market; Europe is further behind; APAC markets are maturing fast
*Source: GigaOM
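A quick arithmetic check on the forecast: if the $16B figure is for 2020 and the 70.8% CAGR applies over a five-year horizon (the base year is my assumption, the slide does not state it), the implied starting market size can be backed out:

```python
# Back out the implied base-year market size from the slide's forecast:
# $16B by 2020 at a 70.8% CAGR, assuming a 2015-2020 (5-year) horizon.
cagr = 0.708
years = 5
market_2020_b = 16.0

implied_base_b = market_2020_b / (1 + cagr) ** years
print(f"Implied 2015 market size: ${implied_base_b:.2f}B")  # ~ $1.10B
```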
18. Use Case: Preventative Maintenance
Thousands of robots stream messages (structured & unstructured data), with data aggregated on the plant floor by a Cisco UCS box. A Lambda architecture in the cloud for IoT/IoE, elastically scalable and secure:
• Data ingestion feeds both processing paths
• Batch path: data processing in Hadoop
• Speed path: an in-memory database with in-memory querying, for real-time queries with low latency
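The Lambda layout above (a Hadoop batch layer plus a low-latency in-memory layer) can be sketched in plain Python. This is a toy model, not the pilot's actual pipeline; the robot IDs and counts are made up:

```python
from collections import Counter

# Toy Lambda-architecture sketch: the batch layer (Hadoop on the slide)
# precomputes per-robot message counts; the speed layer (the in-memory
# database) counts only the messages that arrived since the last batch run.
batch_view = Counter({"robot-1": 1_000_000, "robot-2": 750_000})

speed_events = ["robot-1", "robot-2", "robot-1"]  # recent stream messages
speed_view = Counter(speed_events)

def query(robot_id: str) -> int:
    """Serve a low-latency query by merging the batch and speed views."""
    return batch_view[robot_id] + speed_view[robot_id]

print(query("robot-1"))  # 1000002
```

The design point is that neither layer alone is sufficient: the batch view is complete but stale, the speed view is fresh but partial, and correctness comes from merging them at query time.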
19. Use Case: Omni-Channel Customer Journeys
The customer's interactions with Cisco across multiple touch points to reach the desired business outcome.
• Channels: server logs, social & chat, mobile event streams, call center
• Interaction touch points: S/W download, open trouble ticket, assign engineer, update trouble ticket, resolve trouble ticket, close trouble ticket, read support documents, view design documents, view tech documents, new registration, bug search, FAQs, contract details, product details, device coverage
• Journeys: case resolution, software upgrade
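Reconstructing a journey means stitching events from many channels into one time-ordered sequence per customer. A minimal sketch, with made-up event records standing in for the channel feeds listed above:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical touch-point events from different channels, each tagged
# with a customer ID and a timestamp. Field names are illustrative.
events = [
    {"cust": "C1", "ts": 3, "channel": "call_center", "action": "open_ticket"},
    {"cust": "C1", "ts": 1, "channel": "web", "action": "sw_download"},
    {"cust": "C1", "ts": 5, "channel": "portal", "action": "close_ticket"},
    {"cust": "C2", "ts": 2, "channel": "web", "action": "bug_search"},
]

# Sort by customer then time, and group into per-customer journeys.
events.sort(key=itemgetter("cust", "ts"))
journeys = {
    cust: [e["action"] for e in evts]
    for cust, evts in groupby(events, key=itemgetter("cust"))
}
print(journeys["C1"])  # ['sw_download', 'open_ticket', 'close_ticket']
```

At pilot scale this sort-and-group step is exactly what the M/R jobs in the data pipeline would do across the full event volume.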
20. Pilot Test Data
• Test performed on one day's production data
• Total records processed: 110,852,667
• Total data ingest size: 32 GB/day
• Total M/R jobs in the data pipeline: 17
• Two test cycles:
  • Cycle 1: heterogeneous CCS nodes (vCPUs, storage, memory)
  • Cycle 2: homogeneous CCS nodes
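The pilot numbers imply an average record size, which is a useful sanity check on the ingest path (assuming the slide's 32 GB means GiB, which it does not specify):

```python
# Average record size implied by 110,852,667 records in a 32 GB daily ingest.
records = 110_852_667
ingest_bytes = 32 * 1024**3  # GiB assumed

avg_record_bytes = ingest_bytes / records
print(f"{avg_record_bytes:.0f} bytes/record")  # ~310 bytes/record
```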
21. AWS to CIS Migration: Success Criteria
• Successful synthesis of customer interaction data
• Successful automation of the end-to-end data process pipeline
• Build behavioral insight services
• Access to data and services via data discovery and visualization tools
• Meet the performance, scale, and platform-stability requirements
• Successful deployment of CiscoDV on CIS
• Connect HDFS and Hive data sources with CiscoDV via Hive and Impala
• Build and expose insight services for consumption by a limited set of users
22. AWS and CIS Data Node Sizing Comparison
Hadoop cluster for batch and query analytics.

AWS sizing:
  Node service:   Data Nodes / Node Master
  Instance type:  m3.2xlarge (8 vCPU, 30 GB memory, 2x80 GB storage)
  Data nodes:     30
  Comments:       Each Hadoop data node has 1500 GB of EBS available for HDFS storage

CCS sizing:
  Node service:   Data Nodes / Node Master
  Instance type:  GP-2XLarge (8 vCPU, 32 GB memory, 50 GB storage)
  Data nodes:     35
  Comments:       Each Hadoop data node has 1500 GB of volume storage available for HDFS storage
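From the per-node figures above, the raw and usable HDFS capacity of each sizing can be computed. The replication factor of 3 is HDFS's default and my assumption; the slide does not state it:

```python
# Compare raw and usable HDFS capacity (in GB) for the two sizings,
# assuming HDFS's default replication factor of 3.
def hdfs_capacity(nodes: int, gb_per_node: int, replication: int = 3):
    raw = nodes * gb_per_node
    return raw, raw // replication

aws_raw, aws_usable = hdfs_capacity(30, 1500)   # AWS: 30 nodes x 1500 GB
ccs_raw, ccs_usable = hdfs_capacity(35, 1500)   # CCS: 35 nodes x 1500 GB
print(aws_raw, aws_usable)  # 45000 15000
print(ccs_raw, ccs_usable)  # 52500 17500
```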
25. In Addition to Hadoop: Cisco Data Virtualization
• Discover data beyond the enterprise: virtual integration that combines traditional enterprise data, Big Data stores on CIS and AWS, cloud data from SaaS providers, and data from Cisco customers and partners
• Seamless interoperability offers easy access to data across distributed data sources in the intercloud analytics platform
• Universal data governance maximizes enforcement of data security rules
• Analytics data hubs: deployment flexibility to build hybrid/virtual sandboxes that enable nimble data discovery and rapid data analytics to support multiple LOBs
26. CiscoDV on the Intercloud Analytics Platform (CIS)
• Scenario 1: CIS CiscoDV to a Cisco enterprise data store
• Scenario 2: CIS CiscoDV to Impala and Hive on the CIS intercloud analytics platform
• Scenario 3: CIS CiscoDV to Hive on an AWS Big Data cluster
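The common thread in all three scenarios is that CiscoDV answers queries by federating sources at query time rather than copying data. A toy sketch of that idea, with dictionaries standing in for the real sources (the data and field names are invented):

```python
# Toy data-virtualization sketch: a "virtual view" joins rows from two
# independent sources (standing in for Hive on CIS and an enterprise
# data store) at query time, without copying either dataset.
hive_on_cis = {"p1": {"events": 120}, "p2": {"events": 45}}
enterprise_store = {"p1": {"owner": "mfg"}, "p2": {"owner": "oil_gas"}}

def virtual_view(product_id: str) -> dict:
    """Federate the two sources into one logical record."""
    return {**hive_on_cis[product_id], **enterprise_store[product_id]}

print(virtual_view("p1"))  # {'events': 120, 'owner': 'mfg'}
```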
28. Going Forward
• Cisco's Hadoop service is available to select customers only before general availability
• It is part of the broader Cisco Big Data as a service play
• Let us know what kinds of tools you use for visualization and machine learning
• How can we address your data challenges together?
The Intercloud is a globally connected network of clouds. Built by Cisco and our partners, the Intercloud gives our customers a nearly unlimited choice of cloud infrastructure and applications with the compliance and control they need to connect to the cloud with confidence.
The Intercloud includes Enterprise Private Clouds, made Intercloud Ready with Intercloud Fabric and ACI, and Intercloud Providers: Cisco Powered service providers who are adding Cisco Intercloud technologies (Cisco Intercloud Fabric (ICF), Application-Centric Infrastructure (ACI), OpenStack, and the Cisco ONE Enterprise Cloud Suite) to their cloud services to create Intercloud Services. Cisco and the other Intercloud Alliance partners deliver rigidly standardized services from a common infrastructure.
Cisco and our partners enable Choice with:
Flexible consumption and deployment models
Cloud automation integrated at the customer’s pace
Hundreds of trusted providers and partners
Workload placement regardless of hypervisor
Thousands of ready-to-consume, proven services and integrated applications
Choice helps customers to:
Improve their strategic allocation of IT budget
Enhance their ability to better align IT and business requirements
Accelerate their time to market to capture new revenue opportunities
Expand their markets
Cisco and our partners enable Compliance by:
Providing security solutions and services to protect users, data, and workloads; enable visibility, secure connections, and advanced threat protection
Enabling secure workload portability and placement across private and public clouds
Offering cloud services from data centers located around the world, with local and regional hosting
Providing validated architectures and an ecosystem of proven Intercloud Providers and Resellers
Compliance helps customers to:
Manage their exposure to risk from
Network and data security threats
Integration of private cloud resources into public clouds
Industry and government compliance regulations
Maintaining multiple, point-to-point business and financial relationships with cloud providers
Cisco and our partners enable Control by
Unifying service and capacity management
Controlling placement and secure portability of workloads
Assuring application performance with application-centric policies that “follow” the workload
Leveraging service catalogs to allow IT to easily broker services for the business
Leveraging integrated application data from across multiple clouds
Control helps customers to:
Lower costs through operational efficiencies
Enable IT to assume the role of service broker to better partner with the business
Minimize risk with flexible capacity management, consistent security policies, control of sensitive information, and multi-vendor support
Deliver a consistent user experience
Why should you consider Cisco OpenStack Private Cloud?
With a track record of running large-scale ops and early private clouds like Ticketmaster and Yahoo, the founders of Metacloud, now a part of Cisco, have world-class OpenStack and operations engineering experience going back to 2011.
Combined with Cisco’s leadership in cloud and managed services, and with our extensive partner network and intercloud focus, we deliver a better customer experience -- a public cloud experience for developers and operations teams, delivered as a service to infrastructure teams.
Key Points:
Devices have the potential to generate much more data than people.
This introduces the need to filter and aggregate data as close to the edge of the network as possible.
As a result, there will be a new category of “middleware” (perhaps we should call it “edgeware”) to collect, normalize, filter, aggregate, and provide the data from the devices and their controllers to the applications.
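The filter-and-aggregate role described for this "edgeware" layer can be sketched in a few lines. A minimal sketch under stated assumptions: the readings, threshold, and summary fields are invented for illustration:

```python
# Minimal "edgeware" sketch: filter raw sensor readings at the edge and
# forward only a compact summary upstream, instead of every raw value.
raw_readings = [21.0, 21.2, 20.9, 35.5, 21.1]  # one device, one time window

def summarize(readings, threshold=30.0):
    """Drop obvious outliers, then reduce the window to a summary record."""
    kept = [r for r in readings if r < threshold]
    return {
        "count": len(kept),
        "min": min(kept),
        "max": max(kept),
        "mean": round(sum(kept) / len(kept), 2),
    }

print(summarize(raw_readings))  # {'count': 4, 'min': 20.9, 'max': 21.2, 'mean': 21.05}
```

Sending one summary record per window instead of every reading is what keeps device-generated data volumes manageable before they reach the applications.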
Q3: Integrated & multi-tenant Platform as a Service
- Hadoop & data ingestion
- Machine learning
- Data warehouse
- Composite as a service
An elastic, managed, stable, and secure environment that allows teams to focus on application and algorithm development, with no need to manage the Big Data infrastructure.