LightMesh quickly launched its next-generation SaaS CMDB despite the challenge of a complex business domain by leveraging the xnlogic framework with Neo4j. In this talk, David takes you through some of the gotchas of enterprise application development with a graph database and how to solve them.
In this talk, we shared some of our highlights of the GraphQL Europe conference. You can see the full coverage of the conference here: https://www.graph.cool/talks/
My talk from GraphQL Summit 2017! I present a future for GraphQL that builds on the idea that GraphQL enables many tools to work together seamlessly across the stack, through the lens of three examples: caching, performance tracing, and schema stitching. Stay tuned for the video recording from GraphQL Summit!
Three years ago, with the release of the GraphQL specification, Facebook took a fresh stab at the topic of "API design between remote services and applications." The key aspects of GraphQL provide a common, schema-based, domain-specific language and flexible, dynamic queries at interface boundaries. In the talk, I'd like to compare GraphQL and REST and showcase benefits for developers and architects using a concrete example in application and API development, data source and system integration.
What if you could create a GraphQL API by combining many smaller APIs? That's what we're aiming for with schema stitching, the new feature in the Apollo graphql-tools package.
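The core idea of stitching can be sketched independently of Apollo's actual API as merging the resolver maps of two smaller "schemas" into one combined schema. This is a toy Python simulation, not graphql-tools code; the `users`/`reviews` APIs and all field names are invented:

```python
# Toy simulation of schema stitching: two small "APIs", each a map of
# field name -> resolver function, merged into one combined schema.

def make_users_api():
    users = {1: {"id": 1, "name": "Ada"}}
    return {"user": lambda args: users.get(args["id"])}

def make_reviews_api():
    reviews = {1: [{"userId": 1, "text": "Great!"}]}
    return {"reviewsForUser": lambda args: reviews.get(args["id"], [])}

def stitch(*schemas):
    """Combine several resolver maps; later schemas win on name conflicts."""
    merged = {}
    for schema in schemas:
        merged.update(schema)
    return merged

api = stitch(make_users_api(), make_reviews_api())
print(api["user"]({"id": 1})["name"])               # Ada
print(api["reviewsForUser"]({"id": 1})[0]["text"])  # Great!
```

The real feature also merges type definitions and lets you link types across the underlying schemas, but the "one API composed from many" shape is the same.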
Learn how to build advanced GraphQL queries, how to work with filters and patches and how to embed GraphQL in languages like Python and Java. These slides are the second set in our webinar series on GraphQL.
Despite the “Graph” in the name, GraphQL is mostly used to query relational databases or object models. But it is really well suited to querying graph databases too. In this talk, I’ll demonstrate how I implemented a GraphQL endpoint for the Neo4j graph database and how you would use it in your app.
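The core trick behind such an endpoint is translating a GraphQL field selection into a single Cypher query, rather than resolving each field with a separate database call. A rough Python sketch, with an invented label and fields (this is not the actual neo4j-graphql implementation):

```python
# Rough sketch: turn a requested label and field list into one Cypher
# query that returns only those fields. Names here are illustrative.

def to_cypher(label, fields):
    """Build a Cypher query returning only the requested fields."""
    returns = ", ".join(f"n.{f} AS {f}" for f in fields)
    return f"MATCH (n:{label}) RETURN {returns}"

query = to_cypher("Movie", ["title", "released"])
print(query)
# MATCH (n:Movie) RETURN n.title AS title, n.released AS released
```

A production integration additionally handles nested relationships, arguments, and pagination, but generating one Cypher statement per GraphQL query is what makes graph databases such a natural fit.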
The document discusses GraphQL, Relay, and some of their benefits and challenges. Key points covered include:
- GraphQL allows declarative, UI-driven data fetching, which can optimize network requests.
- Relay uses GraphQL and allows defining data requirements and composing queries to fetch nested data in one round trip.
- Benefits include simpler API versioning, since fields can be changed without breaking clients.
- Challenges include verbose code, lack of documentation, and no out-of-the-box support for subscriptions or local state management.
- Overall, GraphQL aims to solve many data-fetching problems but has a complex setup process and learning curve.
This document provides an overview of graph databases and algorithms using Neo4j. It discusses Neo4j's built-in graph algorithms for pathfinding, centrality, community detection, similarity, and link prediction. It also covers Neo4j Streams for real-time graph processing and integrations with Kafka. GRANDstack and Neo4j-GraphQL are presented as options for building GraphQL APIs on Neo4j.
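To make the pathfinding category concrete, here is a plain breadth-first search over an invented adjacency list. Neo4j ships optimized implementations of such algorithms; this is only a conceptual sketch of what "shortest path" computes:

```python
from collections import deque

# Invented toy graph: node -> list of neighbors.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: return one shortest path, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path(GRAPH, "A", "E"))  # ['A', 'B', 'D', 'E']
```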
> REST & GraphQL
> GraphQL Jargons
> Demo with GitHub APIs
> Tool Chains
> Workshop: Exploring the world of Pokémon with GraphQL

youtube: https://www.youtube.com/watch?v=g0WAyOfA2Ls
GraphQL is quickly becoming mainstream as one of the best ways to get data into your React application. When we see people modernize their app architecture and move to React, they often want to migrate their API to GraphQL as part of the same effort. But while React is super easy to adopt in a small part of your app at a time, GraphQL can seem like a much larger investment. In this talk, we’ll go over the fastest and most effective ways for React developers to incrementally migrate their existing APIs and backends to GraphQL, then talk about opportunities for improvement in the space. If you’re using React and are interested in GraphQL, but are looking for an extra push to get it up and running at your company, this is the talk for you!
A presentation I gave at the Berkeley Association of Women in EECS about how to stand out as a new grad candidate.
GraphQL is a query language for APIs that allows flexible querying of data from a server. It was originally created by Facebook in 2012 and open-sourced in 2015. Key benefits include letting apps request exactly the data they need from servers, instead of receiving all possible data as with REST APIs, and responses that mirror the structure of the query. GraphQL schemas define query and mutation parameters as well as return data types.
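The "response mirrors the query" property can be sketched with a minimal executor: the query is modeled as a nested dict of requested fields, and execution walks it and returns the same shape filled with data. All field names and data here are invented:

```python
# Toy data store standing in for a server-side object graph.
data = {
    "hero": {
        "name": "R2-D2",
        "friends": [{"name": "Luke"}, {"name": "Leia"}],
        "appearsIn": ["NEWHOPE", "EMPIRE"],
    }
}

def execute(selection, value):
    """Return only the fields the query selected, preserving its shape."""
    if isinstance(value, list):
        return [execute(selection, item) for item in value]
    result = {}
    for field, sub in selection.items():
        child = value[field]
        # A nested selection recurses; a leaf (None) copies the value.
        result[field] = execute(sub, child) if sub else child
    return result

query = {"hero": {"name": None, "friends": {"name": None}}}
print(execute(query, data))
# {'hero': {'name': 'R2-D2', 'friends': [{'name': 'Luke'}, {'name': 'Leia'}]}}
```

Note that `appearsIn` is absent from the result because the query never asked for it: the response shape is exactly the query shape.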
This presentation explores the concepts behind Facebook's query language, GraphQL, for information retrieval and transformation.
A brief introduction about GraphQL. Repo with a Java running sample: https://github.com/rodrigocprates/people-graphql-api
The document discusses how GraphQL provides a solution for problems with traditional REST APIs by allowing flexible data fetching with one query. It summarizes pain points like over-fetching or under-fetching data and inconsistent features between platforms. The document then explains what GraphQL is, how it evolved from internal use at Facebook, popular brands using it, its specifications and implementations in different languages. It demonstrates how GraphQL enables flexible querying of data without versioning or multiple endpoints. The document also covers related tools like GraphiQL, schemas and types, and how GraphQL can be used with React. It concludes by discussing upcoming areas of focus like prioritizing data and supporting real-time updates.
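The under-fetching pain point can be illustrated with a toy round-trip counter: a REST client needs three sequential calls for nested data, while a GraphQL-style endpoint resolves the same nesting server-side in one request. Every endpoint and record here is invented:

```python
# Invented backing store: three "resources" linked by ids.
DB = {
    "user": {"42": {"name": "Ada", "postIds": ["7"]}},
    "post": {"7": {"title": "Graphs", "commentIds": ["c1"]}},
    "comment": {"c1": {"text": "Nice!"}},
}

round_trips = 0

def rest_get(resource, key):
    """Each REST call counts as one network round trip."""
    global round_trips
    round_trips += 1
    return DB[resource][key]

def rest_client(user_id):
    user = rest_get("user", user_id)                      # trip 1
    post = rest_get("post", user["postIds"][0])           # trip 2
    comment = rest_get("comment", post["commentIds"][0])  # trip 3
    return user["name"], post["title"], comment["text"]

def graphql_endpoint(user_id):
    """One request; the server walks the nested data locally."""
    global round_trips
    round_trips += 1
    user = DB["user"][user_id]
    post = DB["post"][user["postIds"][0]]
    comment = DB["comment"][post["commentIds"][0]]
    return user["name"], post["title"], comment["text"]

rest_client("42")
rest_trips, round_trips = round_trips, 0
graphql_endpoint("42")
print(rest_trips, round_trips)  # 3 1
```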
Sashko Stubailo, core developer on the Apollo team at the Meteor Development Group, kindly provided his slides that he used for his talk.
This presentation is about Web APIs in general and MicroProfile GraphQL in particular. It has been used for EclipseCon 2020 and is backed by a GitHub project (link on slide 11).
The document provides information about an upcoming webinar hosted by The Briefing Room. The webinar will feature David Besemer, CTO of Composite Software, who will discuss how Composite addresses the challenges of data integration and providing data for analytics. The webinar aims to explain how Composite's data virtualization platform can help analysts more easily access and work with data from various sources through self-service analytic sandboxes and data hubs. The webinar also hopes to demonstrate how Composite can help organizations gain business insights faster while reducing costs compared to traditional data integration and warehousing approaches.
- Common data science obstacles
- Data Value Pyramid
- 5 Attributes of Successful Data Science Teams

http://yhat.com
http://twitter.com/yhathq
This document summarizes a presentation given on July 11, 2013 in London by Rackspace's Unlocked team. The presentation introduced the team members and discussed why unlocked events are held. It then covered topics including the hybrid cloud, how developers are driving innovation, and a case study of how HubSpot uses the hybrid cloud. Key points emphasized that the hybrid cloud gives developers the most power and freedom, and that developers driving innovation is important.
The document discusses 7 habits of data effective companies. It describes how companies have evolved through different digital maturity phases from analog to born-digital. The key differences observed between phases include impact on cost, value extraction, and capabilities. The 7 habits discussed are: treating data processing as an industrial process, focusing on latency and waste reduction, being use case driven and value stream aligned, initially centralizing data, architecting for failure and sharing, treating it as a software engineering problem, and following the Unix philosophy of building specialized components. The document provides examples and illustrations for each habit.
The document discusses how telecommunications companies can leverage graph databases to derive value from five key "graphs": the network graph, customer graph, call graph, master data graph, and help desk graph. It provides examples of how companies are using graph databases to improve network management, customer analytics, and other use cases. Finally, it outlines the benefits that have driven telecommunications firms to adopt graph databases, including improved query performance, agile development, and an extensible data model.
The document discusses how telecommunications companies can leverage graph databases to derive value from five key "graphs": the network graph, customer graph, call graph, master data graph, and help desk graph. It provides examples of how companies are using graph databases to improve network management, customer analytics, and other tasks. Reasons for adopting graph databases include faster querying of connected data, better matching of the data model to business domains, and improved maintainability. The presentation encourages attendees to connect at upcoming GraphConnect conferences to learn more.
Enterprise data science is not just creating dashboards, reports, ad-hoc queries, models, and algorithms; it goes beyond all of these. Take a look at our approach to enterprise data science: it is complex and difficult to implement, because it involves integrating data across enterprise business functions regardless of data source, format, and structure. Many people talk about enterprise data science (Oracle 12c, Hadoop, SAP), but ask "have you seen enterprise data science in a real system as a live demo?" and in most cases the answer is "no". Now there is an opportunity to review enterprise data science with CloneSkills. I would confidently say that no one else in the world has integrated Oracle 12c and SAP HANA with Hadoop for real-time data integration except CloneSkills technical architect Mr. Karthik.
This document provides an introduction to agile principles and practices. It discusses that agile values responding to change, continuous delivery, collaboration between teams, and delivering working software frequently through iterative development. It outlines three common agile practices: continuous feedback through testing, test-driven development, and continuous integration. The document emphasizes failing fast and delivering minimum viable products to adapt to changing needs.
The Briefing Room with Dr. Robin Bloor and Actian Live Webcast July 14, 2015 Watch the Archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=bbd4395ea2f8c60a03cfefc68c7aa823 Innovation often implies risk, which is why businesses have many issues to weigh when considering change. Yet the remarkable growth of data is driving many traditional systems into the ground, forcing information workers to take a critical look at their existing tools. Technologies like Hadoop offer economical solutions to big data management, but to truly take advantage of its capabilities, organizations must modernize their infrastructure. Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor as he explains how and why organizations should improve legacy systems. He’ll be briefed by Todd Untrecht of Actian, who will tout his company’s Actian Vortex, a SQL-in-Hadoop solution. He will show how integrating a SQL engine directly in the Hadoop cluster can lead to faster analytics and greater control, while still maintaining existing investments. Visit InsideAnalysis.com for more information.
The document discusses how architecture and agile development can seem contradictory, but presents approaches like dual track agile and the zipper model to balance architecture and agility. It explains that the most common causes of software mistakes are changing requirements, poor software management, and accumulating technical debt from unfixed issues. The presentation argues that architecture is needed in agile projects to support adaptability and anticipate changes while minimizing technical debt.
This document provides a roadmap for developing an enterprise graph strategy. It outlines key steps such as identifying a use case, designing a graph model using sample data, building APIs and demo applications, and deploying to production. It also provides examples of graph architectures, data processing techniques, and analytics capabilities. The goal is to solve a "graphy problem" by connecting disparate data sources and enabling new questions to be answered through graph queries and algorithms.
Using Schema Examination Tools to Ensure Information Quality whitepaper. Data quality is one of the hottest topics in any IT shop. Although very important, data quality alone is far from enough, because decisions are based on information, not on data. Having quality data does not assure quality information: quality data is necessary for quality information, but it is not sufficient on its own. We need more.
This resume is for Amol Kumar, a Software Engineer currently deployed in Chengdu, China working on ETL development and team management. He has over 8 years of experience in information technology with a focus on development, production support, and project management. He is certified in IBM Cognos and Infosphere Datastage and has expertise in technologies like Oracle, UNIX, and OBIEE. He has experience managing projects in countries like India, the US, Australia, the UK, China, and Japan.
This document provides an overview of how Drupal was implemented in the business office of a County Office of Education (COE) that serves K-12 schools in California. It describes issues with existing fragmented and outdated software systems. The COE aims to improve customer service, deploy modern web-based systems, embrace open-source standards, and establish agile development practices using Drupal.
The “definition” of a Data Scientist says that one should know math and statistics, have domain- or business-specific knowledge, and know how to put it into programming code. Nobody knows to what extent this knowledge should be present in a single unicorn. One thing is for sure: it grows over time. Knowing how to implement and use ML models as repeatable tasks is what separates statisticians and researchers from the Data Scientists who help businesses improve their performance. That’s where the art of coding jumps in.
The document provides an agenda and information about a GoDataFest workshop on Google Cloud Platform for data. The agenda includes an introduction to GCP for data, a session on roles and tools on GCP for different data roles, and a session where participants will build projects on GCP in mixed workgroups. It outlines the goals and tools used by different roles like data engineer, analytics engineer, and Looker user. It also provides information on Google Cloud technologies like BigQuery, Dataform, Looker, and how they fit into the modern data lifecycle and platform. Participants are then divided into mixed workgroups based on their preferred role and given insights to explore in their projects.
Philip Rathle, VP of Product at Neo4j, presents on the Connected Data Imperative at Neo4j GraphDay NYC
The document discusses quantitative metrics for SaaS businesses, including lifetime value (LTV), cost to acquire customers (CAC), average revenue per user (ARPU), churn, conversion rates, and monthly recurring revenue (MRR). It emphasizes testing product experiences and features using metrics like trials, paid conversions, and engagement. Split testing and A/B testing are recommended to quantitatively evaluate changes. Continuous delivery, user stories, and qualitative user feedback are also presented as important techniques.
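The metrics above combine with simple arithmetic. A minimal worked example, using the common simplification that customer lifetime is the inverse of monthly churn (all numbers are invented for illustration):

```python
# Invented example figures for a small SaaS business.
arpu = 50.0            # average revenue per user, $/month
monthly_churn = 0.05   # 5% of customers cancel each month
cac = 300.0            # cost to acquire one customer, $
customers = 400

avg_lifetime_months = 1 / monthly_churn   # expected months before churn
ltv = arpu * avg_lifetime_months          # lifetime value per customer
ltv_to_cac = ltv / cac                    # ratio often used as a health check
mrr = customers * arpu                    # monthly recurring revenue

print(avg_lifetime_months, ltv, round(ltv_to_cac, 2), mrr)
# 20.0 1000.0 3.33 20000.0
```

A split test would then compare how a product change moves the inputs (conversion, churn, ARPU) rather than the derived numbers directly.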
This document discusses how Tableau and MongoDB can work together for visual analytics of big data. It describes how MongoDB is a NoSQL database that can handle unstructured and semi-structured data like JSON, and how Tableau allows users to connect to MongoDB through an ODBC driver and visualize the data without needing to write code. The document outlines scenarios where big data comes from human, machine, and process sources and how the combination of Tableau and MongoDB's schema-on-read approach reduces the need for ETL. It also previews demos of connecting Tableau to MongoDB using both the ODBC driver and a PostgreSQL interface.
GoPro is a powerful global brand, thanks in large part to its innovative cameras and accessories that capture moments other cameras just miss: surfing in Maui, skiing in Tahoe, recording your child’s first steps. And today, the company is nearly as well known for its user-generated social and content networks. Join us for this special webinar hosted by Tableau, Trifacta, and Cloudera—featuring GoPro. We’ll dive into GoPro’s data strategy and architecture, from ingest and processing to data prep and reporting, all on AWS.
Presented at Gartner Data & Analytics, London, May 2024. BT Group has used the Neo4j Graph Database to enable impressive digital transformation programs over the last six years. By re-imagining their operational support systems to adopt self-serve and data-led principles, they have substantially reduced the number of applications and the complexity of their operations. The result has been a substantial reduction in risk and costs while improving time to value, innovation, and process automation. Join this session to hear their story, the lessons they learned along the way, and how their future innovation plans include exploring uses of EKG + Generative AI.
Gursev Pirge, PhD Senior Data Scientist - JohnSnowLabs