The document summarizes how Telenor Norway, a subsidiary of Telenor Group generating $2 billion in mobile revenues annually, used a graph database to optimize resource-authorization performance. Their previous relational database solution took 20 minutes to calculate user access and did not scale. By modeling the authorization rules and data as a graph, the team reduced access checks to millisecond traversal queries, improving the user experience. The graph database delivered a 1000x performance improvement, simplified complex business rules, and lets the system scale autonomously as more corporate customers are onboarded.
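The access check described above can be sketched as a simple graph traversal. This is an illustrative Python model only: the node names and access-granting edges are hypothetical, and a production system like Telenor's would run such traversals inside the graph database itself.

```python
from collections import deque

def can_access(edges, user, resource):
    """Breadth-first traversal: can `user` reach `resource` by
    following access-granting edges?"""
    seen, queue = {user}, deque([user])
    while queue:
        node = queue.popleft()
        if node == resource:
            return True
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical rules: user -> group -> company -> subscription
edges = {
    "alice": ["acme-admins"],
    "acme-admins": ["acme"],
    "acme": ["subscription-42"],
}
print(can_access(edges, "alice", "subscription-42"))  # True
print(can_access(edges, "alice", "subscription-99"))  # False
```

Because the check only walks edges reachable from one user, its cost stays proportional to that user's neighborhood rather than to the full rule set, which is what makes millisecond lookups possible.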
The Art of The Event Streaming Application: Streams, Stream Processors and Sc... - confluent
1) The document discusses the art of building event streaming applications using various techniques like bounded contexts, stream processors, and architectural pillars.
2) Key aspects include modeling the application as a collection of loosely coupled bounded contexts, handling state using Kafka Streams, and building reusable stream processing patterns for instrumentation.
3) Composition patterns involve choreographing and orchestrating interactions between bounded contexts to capture business workflows and functions as event-driven data flows.
This document discusses functional reactive programming and RxJava. It begins with an overview of functional reactive programming principles like being responsive, resilient, elastic and message-driven. It then covers architectural styles like hexagonal architecture and onion architecture. The rest of the document dives deeper into RxJava concepts like Observables, Observers, Operators, and Schedulers. It provides code examples to demonstrate merging, filtering and transforming streams of data asynchronously using RxJava.
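RxJava itself is Java; as a rough, language-neutral sketch of the merge/filter/map operator chaining the slides demonstrate, synchronous Python generators can mimic the pipeline shape (without the asynchronous scheduling that Rx Schedulers provide):

```python
def rx_merge(*sources):
    """Merge several finite streams (round-robin interleave)."""
    iters = [iter(s) for s in sources]
    while iters:
        for it in list(iters):
            try:
                yield next(it)
            except StopIteration:
                iters.remove(it)

def rx_filter(pred, source):
    return (x for x in source if pred(x))

def rx_map(fn, source):
    return (fn(x) for x in source)

# Two hypothetical sensor streams, merged then filtered and transformed
temps = [18, 25, 31]
humidity = [40, 55]
merged = rx_merge(temps, humidity)
alerts = rx_map(lambda x: f"reading={x}", rx_filter(lambda x: x > 30, merged))
print(list(alerts))  # ['reading=40', 'reading=55', 'reading=31']
```

The real Rx operators return lazily composed Observables rather than generators, but the composition order (source, filter, transform, subscribe) is the same.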
The eBay Architecture: Striking a Balance between Site Stability, Feature Ve... - Randy Shoup
The eBay architecture document discusses how eBay scales its platform to handle massive traffic while maintaining high availability and rapid feature development. Some key points are:
1) eBay uses horizontal scaling techniques like database sharding and separating functions across application servers to scale individual components.
2) The architecture emphasizes statelessness, caching, and minimizing database transactions to improve scalability and availability.
3) eBay evolved its architecture over several major versions to address scaling issues and allow for exponential growth in users and traffic over time.
Redis and Kafka - Simplifying Advanced Design Patterns within Microservices A... - HostedbyConfluent
The adoption and popularity of the microservices architecture continues to grow across a spectrum of enterprises in every industry. Although a consensus on an implementation standard has yet to be reached, advanced design patterns and lessons learned about the complexities and pitfalls of deploying microservices at scale have been established by thought leaders and the development community. With Redis and Kafka becoming de facto standards across most microservices architectures, we will discuss how their combination can be used to simplify the implementation of event-driven design patterns that provide real-time performance, scalability, resiliency, traceability to ensure compliance, observability, and reduced technology sprawl, while scaling to thousands of services. In this discussion, we will decompose a real-time event-driven payment-processing microservices workflow to explore capturing telemetry data, event sourcing, CQRS, orchestrated SAGA workflows, inter-service communication, state machines, and more.
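As a hedged illustration of one pattern named above, event sourcing derives current state by replaying an append-only event log. The event names below are hypothetical; in the talk's setting the log would live in a Kafka topic, with the materialized state cached in Redis.

```python
def apply(state, event):
    """Fold one event into the payment aggregate's state."""
    kind = event["type"]
    if kind == "PaymentInitiated":
        return {**state, "status": "initiated", "amount": event["amount"]}
    if kind == "PaymentAuthorized":
        return {**state, "status": "authorized"}
    if kind == "PaymentCaptured":
        return {**state, "status": "captured"}
    return state  # unknown events are ignored

def replay(events):
    """Rebuild current state from the full event log."""
    state = {"status": "new"}
    for e in events:
        state = apply(state, e)
    return state

log = [
    {"type": "PaymentInitiated", "amount": 100},
    {"type": "PaymentAuthorized"},
    {"type": "PaymentCaptured"},
]
print(replay(log))  # {'status': 'captured', 'amount': 100}
```

The log, not the derived state, is the source of truth; CQRS then serves reads from a separately maintained projection of that same log.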
batbern43 Self Service on a Big Data Platform - BATbern
Kafka has been used for several years at Swisscom to stream data from various sources to sinks such as Hadoop. Providing Kafka and Hadoop as a Service to multiple teams in a large company presents governance, security and multi-tenancy challenges. In this talk we will present how we have built our self-service Swisscom Big Data Platform which enables teams to use Kafka, Hadoop and Kubernetes internally. We will explain how we have tackled these challenges by describing our governance model, our identity & ACLs management, and our self-service capabilities. We will also present how we leverage Kubernetes and how it simplifies our operations.
Building Cloud-Native App Series - Part 1 of 11
Microservices Architecture Series
Design Thinking, Lean Startup, Agile (Kanban, Scrum),
User Stories, Domain-Driven Design
Introducing Change Data Capture with Debezium - ChengKuan Gan
This document discusses change data capture (CDC) and how it can be used to stream change events from databases. It introduces Debezium, an open-source CDC platform that captures change events from transaction logs. Debezium supports capturing changes from multiple databases and transmitting them as a stream of events. The document discusses how CDC can be used for data replication between databases, for auditing, and in microservices architectures. It also covers deployment of CDC on Kubernetes using OpenShift.
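As an illustrative sketch of the replication use case (not Debezium's actual client API), a Debezium-style change event carries an operation code plus row images, and a downstream replica stays in sync by applying events in log order:

```python
def apply_change(table, event):
    """Apply one change event to an in-memory replica table."""
    op, key = event["op"], event["key"]
    if op in ("c", "u"):       # create / update carry the "after" row image
        table[key] = event["after"]
    elif op == "d":            # delete: remove the row if present
        table.pop(key, None)

replica = {}
events = [
    {"op": "c", "key": 1, "after": {"id": 1, "email": "a@example.com"}},
    {"op": "u", "key": 1, "after": {"id": 1, "email": "b@example.com"}},
    {"op": "d", "key": 1},
]
for e in events:
    apply_change(replica, e)
print(replica)  # {} : the row was created, updated, then deleted
```

Because the events come from the source database's transaction log, the replica converges on exactly the committed state, which is what makes the same stream usable for auditing as well.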
Communication Patterns Using Data-Centric Publish/Subscribe - Sumant Tambe
Fundamental to any distributed system are communication patterns: point-to-point, request-reply, transactional queues, and publish-subscribe. Large distributed systems often employ two or more communication patterns. Using a single middleware that supports multiple communication patterns is a very cost-effective way of developing and maintaining large distributed systems. This talk will begin with an introduction of Data Distribution Service (DDS) – an OMG standard – that supports data-centric publish-subscribe communication for real-time distributed systems. DDS separates state management and distribution from application logic and supports discoverable data models. The talk will then describe how RTI Connext Messaging goes beyond vanilla DDS and implements various communication patterns including request-reply, command-response, and guaranteed delivery. You will also learn how these patterns can be combined to create interesting variations when the underlying substrate is as powerful as DDS. We’ll also discuss APIs for creating high-performance applications using the request-reply communication pattern.
The document discusses microservices architecture and compares it to monolithic architecture. It covers infrastructure for microservices, including API gateways, service discovery, and event buses. It also discusses design principles like domain-driven design, event sourcing, and CQRS. Microservices are presented as a better approach because, compared to monolithic applications, they allow independent deployments, independent scaling, and the use of multiple programming languages.
Spark (Structured) Streaming vs. Kafka Streams - two stream processing platfo... - Guido Schmutz
Spark Streaming and Kafka Streams are two popular stream processing platforms. Spark Streaming uses micro-batching and allows for code reuse between batch and streaming jobs. Kafka Streams is embedded directly into Apache Kafka and leverages Kafka as its internal messaging layer. Both platforms support stateful stream processing operations like windowing, aggregations, and joins through distributed state stores. A demo application is shown that detects dangerous driving by joining truck position data with driver data using different streaming techniques.
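As a rough illustration of the stateful windowing both engines offer (their real APIs differ substantially from this), a tumbling-window count groups events by key and fixed time bucket:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Count events per (key, window_start) for (timestamp_ms, key) pairs."""
    counts = defaultdict(int)
    for ts, key in events:
        # every timestamp falls into exactly one non-overlapping window
        window_start = (ts // window_ms) * window_ms
        counts[(key, window_start)] += 1
    return dict(counts)

# Hypothetical truck-position events: (timestamp in ms, truck id)
positions = [(0, "truck1"), (400, "truck1"), (1200, "truck1"), (100, "truck2")]
print(tumbling_window_counts(positions, 1000))
# {('truck1', 0): 2, ('truck1', 1000): 1, ('truck2', 0): 1}
```

In Spark Structured Streaming or Kafka Streams the same grouping is expressed declaratively, and the per-window counts live in fault-tolerant distributed state stores instead of a local dict.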
The document provides an overview of microservices architecture. It discusses key characteristics of microservices such as each service focusing on a specific business capability, decentralized governance and data management, and infrastructure automation. It also compares microservices to monolithic and SOA architectures. Some design styles enabled by microservices like domain-driven design, event sourcing, and functional reactive programming are also covered at a high level. The document aims to introduce attendees to microservices concepts and architectures.
An introduction to KrakenD, the ultra-high-performance API Gateway with middlewares. An open-source tool built in Go that is currently serving traffic on major European sites.
RedisConf18 - The Versatility of Redis - Powering our critical business using... - Redis Labs
Redis was used in three cases by a company to power critical systems:
1) As the primary database for a blacklist service to improve performance and scalability over their previous architecture.
2) As a cache to provide fast access to high throughput services by caching over 160 million records.
3) As a distributed lock to orchestrate connections to mobile carriers and load balance them across instances.
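The third use case follows the well-known Redis lock pattern (SET key value NX EX ttl). The sketch below simulates those semantics in memory so it runs standalone; with redis-py the acquisition step would be `r.set(key, token, nx=True, ex=ttl)`, and the key/worker names here are illustrative.

```python
import time

class FakeRedis:
    """In-memory stand-in for Redis SET ... NX EX semantics."""

    def __init__(self):
        self.store = {}  # key -> (value, expires_at)

    def set_nx_ex(self, key, value, ttl, now=None):
        now = time.monotonic() if now is None else now
        current = self.store.get(key)
        if current and current[1] > now:
            return False  # lock still held by another worker
        self.store[key] = (value, now + ttl)
        return True       # acquired (or re-acquired after expiry)

r = FakeRedis()
print(r.set_nx_ex("carrier:conn:1", "worker-a", ttl=30, now=0))   # True: acquired
print(r.set_nx_ex("carrier:conn:1", "worker-b", ttl=30, now=10))  # False: held
print(r.set_nx_ex("carrier:conn:1", "worker-b", ttl=30, now=40))  # True: expired
```

The TTL is what makes the lock safe for load balancing across instances: if a worker dies, its lock expires and another instance can take over the carrier connection.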
IBM Cloud Direct Link 2.0 is the next-generation offering on Direct Link. This presentation provides details on the new DL 2.0 offering and the differences between DL 1.0 and 2.0.
Kafka and event driven architecture - apacoug20 - Vinay Kumar
Event-driven architecture in APIs and microservices is an important topic if you are developing modern applications on new technologies and platforms. This session explains what Kafka is and how it can be used in an event-driven architecture. It covers the basic concepts of publishers, subscribers, streams, and Connect, and explains how Kafka works. The session shows how functions written in different programming languages can share messages through Kafka. What options do we have in the Oracle stack, and which tools make event-driven architecture possible there? The speaker will also explain Oracle Event Hub, OCI Streaming, and the Oracle AQ implementation.
Manage the Digital Transformation with Machine Learning in a Reactive Microse... - DataWorks Summit
Digital transformation is undoubtedly the biggest challenge for Dutch telecom operator KPN in the coming years. By adopting new technologies like machine learning into current operations procedures, companies like KPN will save money by eliminating manual tasks and better manage their telecom infrastructure.
Machine learning is changing the world, automating processes that previously required human judgment, such as recommendations. By combining Business Process Management (BPM) and machine learning, we can take automation to the next level. But how can we integrate machine learning and BPM into a reactive microservices architecture?
In this session, we will discuss an integration pattern for achieving this using technical components such as Apache Spark as the machine learning platform, Apache Kafka as the distributed streaming platform that triggers the external microservices, and Business Process Management (BPM) tooling to manage just the workflow while letting the external microservices do the work.
Speaker
Patrick de Vries, Architect
KPN
Using Graph Databases in Real-Time to Solve Resource Authorization at Telenor... - Neo4j
Sebastian talks about how they use Neo4j to protect data in business-critical services running in production. The talk covers both the high-level architecture and detailed technical considerations.
The document outlines Neo4j's product strategy and roadmap. It discusses trends like increasing cloud adoption and the blending of transactional and analytical use cases. The roadmap focuses on cloud-first capabilities, ease of use for developers, trusted fundamentals of the database, and enabling AI through graph algorithms and knowledge graphs. Key announcements include new graph algorithms, change data capture for integration, autonomous clustering for scalability, and innovations in graph embeddings and generative AI integration.
Madhulatha has over 7 years of experience in information technology, specializing in data warehousing. She is proficient in ETL tools like Datastage and Talend, databases like Oracle and Teradata, and reporting tools like Qlik View. She seeks a challenging role in software development utilizing her skills in programming languages, data analysis and design, and project experience across various domains.
Some interesting case studies of how we helped our clients adopt DevOps. The cases cover various fields within DevOps space: CI/CD, Monitoring, Cloud Migration
This document provides a summary of the experience and skills of an IT professional named M Vamsikrishna from Hyderabad, India. It outlines his 3+ years of experience as an ETL Developer using IBM Infosphere Datastage and working on medium to large projects. It also lists his technical skills including Datastage, Teradata, SQL, and Linux. It provides details on some of the projects he has worked on, including roles and responsibilities, along with the technologies used.
This document contains a resume for Debarpan Mukherjee. It summarizes his professional experience as a System Engineer at Tata Consultancy Services for over 4 years, working on projects for clients like Intel and Deutsche Bank. It also lists his educational qualifications including a B.Tech in Computer Science and Engineering, and technical skills including Oracle PL/SQL, Unix, and data warehousing concepts.
This document contains the resume of Subbarao P, who has 3.5 years of experience working with WebMethods Integration Platform. He has expertise in building and maintaining B2B applications and EAI integrations using WebMethods, and experience with various adapters and protocols. The resume lists three projects he worked on, including maintaining Coca-Cola's global B2B integration infrastructure and developing new interfaces for an upgrade project.
Jitesh Kumar is a senior associate software analyst with over 3 years of experience in the IT industry. He has strong skills in software design, development, testing, technical support and system support, particularly for supply chain management systems. He has extensive experience with SQL Server 2005/2008/2012, programming languages like SQL, VB.Net, C#, and JavaScript. He has participated in several projects for clients like Dabur India, Reckitt Benckiser, Microsoft, Samsung and others developing and supporting distributor automation systems.
Mayank Aggarwal has over 3.5 years of experience working as a System Administrator for TCS and previously as a Business Analyst for Tech Mahindra LTD and IBM India PVT LTD for clients like Airtel and Vodafone. He has extensive experience with technologies like Java, SQL, Oracle, DB2, UNIX/Linux, and software like JBoss, Tomcat, Apache server, and MQ. His objective is to work in challenging positions that provide opportunities for learning and contributing.
Importing Large Sets of Content from Trusted Partners into your Repository - BlueFish
The document discusses a solution developed by Blue Fish Development Group to help Solvay Pharmaceuticals import large amounts of content from external partners into its document repository. The solution leveraged existing migration tools to create a simple spreadsheet-based system that allowed partners to upload files without needing access to Solvay's repository. It streamlined the import process, ensured data accuracy, and provided reporting. The flexible design allowed reuse of components and reduced costs compared to custom development.
PLNOG 3: Tomasz Mikołajczyk - Data scalability. Why you should care? - PROIDEA
This document discusses data scalability and introduces GridwiseTech, a vendor-independent scalable technology expert. It explains that IT systems are constantly growing due to increased users, applications, and data which can lead to infrastructure bottlenecks. To improve efficiency, GridwiseTech introduces scalability through distributed processing, load balancing, and scaling out data. It then summarizes a case study where GridwiseTech helped an electronic manufacturer scale its infrastructure to ensure scalability on each functional layer and achieve significant performance improvements like 10x faster data processing.
N. Sathish Kumar has over 10 years of experience in the IT industry. He has expertise in Java, Spring, Hibernate, Oracle, SQL Server, and legacy modernization tools like BluAge. Some of his projects include modernizing banking applications, developing web applications for failure analysis tracking and supply chain management, and migrating mainframe screens to new interfaces. He is skilled at all phases of the software development life cycle from analysis to deployment.
1 Billion Events per Day, Israel 3rd Java Technology Day, June 22, 2009 - Moshe Kaplan
We presented at the Israeli 3rd Java Technology Day, the largest SUN Microsystems/MySQL event in Israel. We covered the essential parts of building a real-life web/enterprise system that needs to handle 1 billion events per day (a case study from an ad network's billing systems). We presented internet adoption rates, load balancers (HAProxy, Apache, Radware, F5, Cisco), web servers, in-memory databases (IMDB, including Memcached, GigaSpaces, Terracotta, and Oracle Coherence), and finally sharding (vertical, static horizontal, and dynamic). A great example of a performance-boosting architecture.
Maximizing Data Lake ROI with Data Virtualization: A Technical Demonstration - Denodo
Watch full webinar here: https://bit.ly/3ohtRqm
Companies with corporate data lakes also need a strategy for how to best integrate them with their overall data fabric. To take full advantage of a data lake, data architects must determine what data belongs in the Lake vs. other sources, how end users are going to find and connect to the data they need as well as the best way to leverage the processing power of the data lake. This webinar will provide you with a deep dive look at how the Denodo Platform for data virtualization enables companies to maximize their investment in their corporate data lake.
Watch on-demand this webinar to learn:
- How to create a logical data fabric with Denodo
- How to leverage a data lake for MPP Acceleration and Summary Views
- How to leverage Presto with Denodo for file-based data lakes (i.e., S3, ADLS, HDFS, etc.)
- Rajendra Kumar Sahu is seeking a position that allows for continuous learning and professional development.
- He has over 4 years of experience implementing and developing Maximo 7, 7.5, and 7.6 using agile methodologies and DevOps processes.
- His skills include Maximo data migration, loading, integration, customization, automation scripting, and configuration. He is also experienced with databases like Oracle, DB2, and technologies like WebSphere.
Yuriy Chapran - Building microservices
- Microservices are small, autonomous services that work together to form applications. Each service focuses on doing a single job and communicates through well-defined interfaces.
- There are several common design approaches for microservices including business capability services, API gateways, load balancers, message queues, caching, and circuit breakers. Choreography is preferred over orchestration.
- Implementing microservices provides benefits like independent deployability and scalability but also introduces complexity around distribution, eventual consistency, and operations.
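One of the patterns listed above, the circuit breaker, can be sketched in a few lines of Python. This is a minimal model with illustrative names; real implementations (resilience4j, Hystrix, and the like) add half-open probing, timeouts, and metrics.

```python
class CircuitBreaker:
    """Fail fast after `threshold` consecutive failures, instead of
    repeatedly calling a service that is already down."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1  # count consecutive failures
            raise
        self.failures = 0       # any success resets the breaker
        return result
```

Once the breaker opens, callers get an immediate error they can handle (fallback, queue, retry later), which keeps one failing dependency from tying up threads across the whole service mesh.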
Harness the Power of the Cloud for Grid Computing and Batch Processing Applic... - RightScale
This document summarizes a presentation about harnessing the power of cloud computing for grid computing. It discusses how RightScale provides automated management of grid computing workloads in the cloud, allowing users to easily deploy and control large numbers of servers. Demos show how RightScale enables graceful scaling of server arrays, automated queue handling, and analyzing results to quantify economic benefits like cost savings and increased agility compared to on-premise grid solutions.
A database performance monitoring tool for Microsoft SQL Server 2005 & 2008 (included in the best-selling book "SQL Server 2008 R2 Unleashed"), Sybase ASE 11.5 to 15.5, and Oracle 8i to 11g.
Presenting Data – An Alternative to the View Control - Teamstudio
In this webinar, Paul Della-Nebbia, an IBM Champion, will show how to implement a different alternative for displaying information from Domino views. Paul will cover how to use the Dojo Data Grid (included with XPages) to display a data grid that provides unique features like infinite scrolling, click to sort column headers, adjustable column widths, filtering, and the ability to drag and drop column headers to reorder. As the user scrolls through, the view data is retrieved as needed which improves performance and usability.
Similar to Using Graph Databases in Real-time to Solve Resource Authorization at Telenor - Sebastian Verheughe @ GraphConnect SF 2013
BT & Neo4j: Knowledge Graphs for Critical Enterprise Systems.pptx.pdf - Neo4j
Presented at Gartner Data & Analytics, London, May 2024. BT Group has used the Neo4j Graph Database to enable impressive digital transformation programs over the last 6 years. By re-imagining their operational support systems to adopt self-serve and data-led principles, they have substantially reduced the number of applications and the complexity of their operations. The result has been a substantial reduction in risk and costs while improving time to value, innovation, and process automation. Join this session to hear their story, the lessons they learned along the way, and how their future innovation plans include exploring uses of EKG + Generative AI.
Workshop - Graph Application Architecture - GraphSummit Paris - Neo4j
Take part in this hands-on workshop led by Neo4j experts, who will guide you in discovering contextual intelligence. Using a real dataset, we will build a graph solution step by step, from constructing the graph data model to running queries and visualizing the data. The approach will be applicable to many use cases and industries.
Workshop - Innovating with Generative AI and Knowledge Graphs - Neo4j
Go beyond the media hype around AI and discover practical techniques for using AI responsibly across your organization's data. Explore how knowledge graphs can be used to increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships with LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will guide you through setting up your own generative AI stack, with practical, coded examples to get you started in minutes.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit Paris - Neo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Discover the latest Neo4j innovations, including the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building applications with interconnected data and generative AI.
SOPRA STERIA - GraphRAG: pushing past the limitations of RAG through the use of ... - Neo4j
Romain CAMPOURCY – Solution Architect, Sopra Steria
Patrick MEYER – Group AI Architect, Sopra Steria
Retrieval-Augmented Generation (RAG) makes it possible to answer user questions about a business domain using large language models. The technique works well when the documentation is simple, but runs into limitations as soon as the sources are complex. Drawing on a project we carried out, we will present GraphRAG, a new approach that uses a generated Neo4j database to improve document understanding and information synthesis. This method outperforms the plain RAG approach by providing more holistic and precise answers.
ADEO - Knowledge Graph for e-commerce, between challenges and opportunities ... - Neo4j
Charles Gouwy, Business Product Leader, Adeo Services (Groupe Leroy Merlin)
While their Knowledge Graph has already been integrated across all the purchase experiences of their e-commerce platform for more than 3 years, we will look at the new opportunities and challenges that are still opening up to them thanks to their use of a graph database and the emergence of AI.
GraphSummit Paris - The art of the possible with Graph Technology - Neo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphAware - Transforming policing with graph-based intelligence analysis - Neo4j
Petr Matuska, Sales & Sales Engineering Lead, GraphAware
Western Australia Police Force’s adoption of Neo4j and the GraphAware Hume graph analytics platform marks a significant advancement in data-driven policing. Facing the challenges of growing volumes of valuable data scattered in disconnected silos, the organisation successfully implemented Neo4j database and Hume, consolidating data from various sources into a dynamic knowledge graph. The result was a connected view of intelligence, making it easier for analysts to solve crime faster. The partnership between Neo4j and GraphAware in this project demonstrates the transformative impact of graph technology on law enforcement’s ability to leverage growing volumes of valuable data to prevent crime and protect communities.
GraphSummit Stockholm - Neo4j - Knowledge Graphs and Product Updates - Neo4j
David Pond, Lead Product Manager, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Blockchain technology is transforming industries and reshaping the way we conduct business, manage data, and secure transactions. Whether you're new to blockchain or looking to deepen your knowledge, our guidebook, "Blockchain for Dummies", is your ultimate resource.
Kief Morris rethinks the infrastructure code delivery lifecycle, advocating for a shift towards composable infrastructure systems. We should shift to designing around deployable components rather than code modules, use more useful levels of abstraction, and drive design and deployment from applications rather than bottom-up, monolithic architecture and delivery.
7 Most Powerful Solar Storms in the History of Earth.pdf - Enterprise Wired
Solar storms (geomagnetic storms) are driven by accelerated charged particles moving at high velocities through the solar environment, typically as a result of coronal mass ejections (CMEs).
Paper introduction: A Systematic Survey of Prompt Engineering on Vision-Language Foundation ... - Toru Tamaki
Jindong Gu, Zhen Han, Shuo Chen, Ahmad Beirami, Bailan He, Gengyuan Zhang, Ruotong Liao, Yao Qin, Volker Tresp, Philip Torr "A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models" arXiv2023
https://arxiv.org/abs/2307.12980
Best Programming Language for Civil Engineers - Awais Yaseen
The integration of programming into civil engineering is transforming the industry. We can design complex infrastructure projects and analyse large datasets. Imagine revolutionizing the way we build our cities and infrastructure through the power of coding. Programming skills are no longer just a bonus; they are a game changer in this era.
Technology is revolutionizing civil engineering by integrating advanced tools and techniques. Programming allows for the automation of repetitive tasks, enhancing the accuracy of designs, simulations, and analyses. With the advent of artificial intelligence and machine learning, engineers can now predict structural behaviors under various conditions, optimize material usage, and improve project planning.
How RPA Help in the Transportation and Logistics Industry.pptx - SynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
The DealBook is our annual overview of the Ukrainian tech investment industry. This edition comprehensively covers the full year 2023 and the first deals of 2024.
Quality Patents: Patents That Stand the Test of Time - Aurora Consulting
Is your patent a vanity piece of paper for your office wall? Or is it a reliable, defendable, assertable, property right? The difference is often quality.
Is your patent simply a transactional cost and a large pile of legal bills for your startup? Or is it a leverageable asset worthy of attracting precious investment dollars, worth its cost in multiples of valuation? The difference is often quality.
Is your patent application only good enough to get through the examination process? Or has it been crafted to stand the tests of time and varied audiences if you later need to assert that document against an infringer, find yourself litigating with it in an Article 3 Court at the hands of a judge and jury, God forbid, end up having to defend its validity at the PTAB, or even needing to use it to block pirated imports at the International Trade Commission? The difference is often quality.
Quality will be our focus for a good chunk of the remainder of this season. What goes into a quality patent, and where possible, how do you get it without breaking the bank?
** Episode Overview **
In this first episode of our quality series, Kristen Hansen and the panel discuss:
⦿ What do we mean when we say patent quality?
⦿ Why is patent quality important?
⦿ How to balance quality and budget
⦿ The importance of searching, continuations, and draftsperson domain expertise
⦿ Very practical tips, tricks, examples, and Kristen’s Musts for drafting quality applications
https://www.aurorapatents.com/patently-strategic-podcast.html
Are you interested in dipping your toes in the cloud native observability waters, but as an engineer you are not sure where to get started with tracing problems through your microservices and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data.
The project is called openTelemetry, but before diving into the specifics, we’ll start with de-mystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, percentile to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we’ll explore the openTelemetry community; its Special Interest Groups (SIGs), repositories, and how to become not only an end-user, but possibly a contributor.We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry protocol (OTLP), its APIs, and its SDKs.
Attendees will leave with an understanding of key observability concepts, become grounded in distributed tracing terminology, be aware of the components of openTelemetry, and know how to take their first steps to an open-source contribution!
Key Takeaways: Open source, vendor neutral instrumentation is an exciting new reality as the industry standardizes on openTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve and in order to achieve ubiquity, the project would benefit from growing our contributor community.
Measuring the Impact of Network Latency at TwitterScyllaDB
Widya Salim and Victor Ma will outline the causal impact analysis, framework, and key learnings used to quantify the impact of reducing Twitter's network latency.
Coordinate Systems in FME 101 - Webinar SlidesSafe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datams and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
Choose our Linux Web Hosting for a seamless and successful online presencerajancomputerfbd
Our Linux Web Hosting plans offer unbeatable performance, security, and scalability, ensuring your website runs smoothly and efficiently.
Visit- https://onliveserver.com/linux-web-hosting/
Choose our Linux Web Hosting for a seamless and successful online presence
Using Graph Databases in Real-time to Solve Resource Authorization at Telenor - Sebastian Verheughe @ GraphConnect SF 2013
1. Using Graph Databases in Real-Time to
Solve Resource Authorization at Telenor
Graph Connect San Francisco – 4 Oct 2013
by Sebastian Verheughe
2. Telenor Norway
Subsidiary of the Telenor Group
2 billion USD in mobile revenues in 2012
Sebastian Verheughe
Lead Developer for Neo4j solution
Coding Architect
3. Disclaimer
The presentation is not identical to the implementation, due to security reasons, but shows how we have modeled and solved the problem in general.
However, all presented data (numbers & charts) are real, unfiltered and extracted from the production logs.
5. Telenor Norway Middleware Services
[Diagram: many channels call the MOBILE MW layer (business logic & data), which in turn calls multiple backends]
Providing business logic and data for all channels in the mobile value chain:
• used by 42 channels
• calls 35 sub-systems
• 10,000 code classes
• 500 requests/second
• 20,000 orders/day
Handles users with access to X00,000 resources
6. Our Problem
• 20 minutes to calculate all accessible resources
• 1,500 lines of SQL to implement the authorization logic
• “solved” by caching, with the cached data going stale
• and the solution did not scale…
7. Why a Graph Database?
Which resources does the user have access to?
The questions we wanted answered required traversal of tree structures.
[Diagram: a User with an ACCESS relationship into a company tree (Parent Company with Sales, Finance, Production, and HR units linked via Part of Company), leading through Subscription Owner and Uses relationships to subscriptions, phones, and tablets]
8. Tailored Read Model
The model makes read queries as simple and efficient as possible.
First find your questions, then model your graph.
graph model ≠ relational model
10. Conditional Rules
ACCESS is given with the following include parameters: access to subsidiaries and access to content.
• Only find children of PARENT COMPANY given access to subsidiaries is allowed
• Only look at PART OF COMPANY given access to content is allowed
• Only look at SUBSCRIPTION OWNER given access to content is allowed
12. Graph Algorithm
Prerequisite: the user node
1. Follow all ACCESS relationships and read the access parameters on the relationship
2. Follow all PARENT COMPANY relationships given access to subsidiaries is allowed
3. Follow all PART OF COMPANY relationships given access to content is allowed
4. Follow all SUBSCRIPTION OWNER relationships given access to content is allowed
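The four steps above can be sketched as a plain breadth-first traversal over a small in-memory graph. This is an illustrative model only: the `Edge` record, node names, and the `accessibleResources` helper are our own inventions, not Telenor's code, and the real solution runs inside Neo4j's Traversal Framework instead.

```java
import java.util.*;

// Illustrative model of the conditional traversal (not Telenor's code).
// Edge types mirror the relationships named on the slide.
final class AuthzSketch {
    enum Rel { ACCESS, PARENT_COMPANY, PART_OF_COMPANY, SUBSCRIPTION_OWNER }
    record Edge(Rel type, String to) {}

    // Step 1: follow ACCESS and read its parameters; steps 2-4: follow the
    // other relationship types only when the matching parameter allows it.
    static Set<String> accessibleResources(Map<String, List<Edge>> graph, String user,
                                           boolean accToSub, boolean accToCont) {
        Set<String> visited = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        for (Edge e : graph.getOrDefault(user, List.of()))
            if (e.type() == Rel.ACCESS) queue.add(e.to());       // step 1
        while (!queue.isEmpty()) {
            String node = queue.poll();
            if (!visited.add(node)) continue;                    // already seen
            for (Edge e : graph.getOrDefault(node, List.of())) {
                boolean follow = switch (e.type()) {
                    case PARENT_COMPANY -> accToSub;                        // step 2
                    case PART_OF_COMPANY, SUBSCRIPTION_OWNER -> accToCont;  // steps 3-4
                    case ACCESS -> false;  // ACCESS only leaves the user node
                };
                if (follow) queue.add(e.to());
            }
        }
        return visited;
    }
}
```

Because the conditions are fixed per ACCESS relationship, flipping the two flags prunes whole subtrees instead of filtering results afterwards.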
13. Solution Value
1. Performance: optimized from minutes to seconds.
2. Simplicity of writing and understanding the business rules for the query traversal.
3. Scalability by performance, allowing us to onboard more corporate customers (project business case).
Autonomous service with its own life-cycle and data repository.
14. Authorization Complexity
• Not a collection of isolated customer trees *
• Not all users of a customer have equal access
• Not a fixed schema, form or size for all customers
• Real-time updated with customer & product data
The data form a highly connected, living graph.
* Covered later in Technical Details
15. How we Started with Neo4j
1. Searched the internet for articles about graph databases and different solutions.
2. Downloaded and quickly prototyped the solution we liked that matched our requirements (Neo4j).
3. Workshop with Neo4j and our project developers to quickly gain competence and ensure design QA.
4. Solution QA with Neo4j before production, plus help with performance issues / tuning.
16. Lessons Learned
• Choose a solution/technology that fits your problem
• New way of thinking – build competence in the organization
• Profile your Java code to make it really fast
• Don’t put everything into the graph (functional creep)
• Need to know how traversal works (e.g. shortest path)
• Benchmark the graph to evaluate your traversal speed
17. Alternative In-Memory RDBMS
Option 1: Use existing database
- Performance issues due to shared data / suboptimal structure
- Complexity, since SQL is not designed for traversal
Option 2: Separate database
+ Might reach the same performance as a graph db
+ Familiar technology
- Complexity, since SQL is not designed for traversal
Decided to go with our instinct: Graph Database
18. Different Graph Structures
get all accessible subscriptions
Company X (147,000 subscriptions): 1,700 ms
Company Y (52,000 subscriptions): 750 ms
Company Z (95,000 subscriptions): 1,300 ms
Data from test – repeated prod sampling gave ~2.4 sec for 215,000 subscriptions
19. Different Graph Structures
check access to a single subscription
Company X (147,000 subscriptions): 1 ms
Company Y (52,000 subscriptions): 1 ms
Company Z (95,000 subscriptions): 1 ms
20. Production Performance
retrieve all accessible resources
Company X: RDBMS Disk 12 min | RDBMS (mem cached) 18 sec | Graph In-Heap < 2 sec
Company Y: RDBMS Disk 22 min | RDBMS (mem cached) 58 sec | Graph In-Heap < 2 sec
Company Z: RDBMS Disk 3 min | RDBMS (mem cached) 15 sec | Graph In-Heap < 2 sec
Check single resource access: 1 ms
No operational problems in production
25. Implementing the Algorithm
Let’s look at the Neo4j Traversal Framework:
Iterable<Node> getAccessibleResources(…) {
    Evaluator myEvaluator = …
    Expander myExpander = …
    return Traversal.description()
        .evaluator(myEvaluator)
        .expander(myExpander)
        .traverse(startNode).nodes();
}
26. Implementing the Algorithm
An Evaluator is a simple filter, e.g. on node type:
class MyEvaluator implements Evaluator {
    public Evaluation evaluate(Path path) {
        if (<I am interested in this node>)
            return Evaluation.INCLUDE_AND_CONTINUE;
        else
            return Evaluation.EXCLUDE_AND_CONTINUE;
    }
}
27. Implementing the Algorithm
The custom Expander contains the business rules!
class ResAuthExpander implements PathExpander<PathExpander> {
    …
    public … expand(Path path, BranchState<…> state) {
        Relationship rel = path.lastRelationship();
        if (rel.isType(ACCESS)) {
            accToSub = rel.getProperty(ACCESS_TO_SUBSIDIARIES);
            accToCont = rel.getProperty(ACCESS_TO_CONTENT);
            state.set(getExpander(accToSub, accToCont));
        }
        return state.get().expand(…);
    }
}
A single expander class to control the business rules
28. Implementing the Algorithm
Generates the valid relationships to traverse:
public PathExpander getExpander(boolean accToSub, boolean accToCont) {
    PathExpander exp = StandardExpander.DEFAULT.add(ACCESS, …);
    if (accToSub)
        exp = exp.add(PARENT_COMPANY, …);
    if (accToCont)
        exp = exp.add(PART_OF_COMPANY, …).add(SUBSCRIPTION_OWNER, …);
    return exp;
}
29. U-Turn Strategy
Does the user have access to subscription X?
[Diagram: a User with an ACCESS relationship into the company tree, subscription X at the bottom, and numbered traversal steps 1–8: up to find the path quickly, then down to check access]
Up to find the path quickly; down to check access.
Reversing the traversal improves performance from ~n/2 to ~2d, where n and d are tree size and depth (we went from 1 s to …
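The intuition behind the U-turn can be sketched in a few lines: since the graph is a tree, a node has a single chain of ancestors, so walking up from the subscription costs O(d) hops instead of expanding O(n) nodes from the top. This is our own simplified illustration (the `parentOf` map and `accessPoints` set are assumptions; the real implementation also re-checks the conditional rules along the found path):

```java
import java.util.*;

// Illustrative U-turn check (our sketch, not Telenor's code): instead of
// expanding the whole tree under the user's access points (~n/2 nodes),
// walk UP from the subscription along parent pointers (~d hops) and stop
// as soon as we reach a node the user holds an ACCESS relationship to.
final class UTurnSketch {
    static boolean hasAccess(Map<String, String> parentOf,  // child -> parent in the tree
                             Set<String> accessPoints,      // nodes reachable via the user's ACCESS
                             String subscription) {
        String node = subscription;
        while (node != null) {
            if (accessPoints.contains(node)) return true;   // reached an access point
            node = parentOf.get(node);                      // one hop up the tree
        }
        return false;                                       // hit the root without a match
    }
}
```

This only works because each node has at most one parent; in a general graph the upward walk would branch and the advantage would shrink.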
30. The Zigzag Problem
What if we also have reversed access to the subscription payer?
[Diagram: departments Op and IT with subscriptions (Ed, Jo); a reversed payer relationship lets a traversal zigzag from one subtree into another]
Solvable by adding state to the traversal (or checking the path)
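The "check the path" variant of that fix can be illustrated with a one-bit state per path: a legal U-turn path may move up toward an access point and then only down, so once the traversal has descended it must never ascend again. This is our own sketch of the idea, not Telenor's actual check:

```java
import java.util.List;

// Sketch of a path-shape check for the zigzag problem (our illustration):
// a legal path is UP* DOWN* - any UP move after a DOWN move means the
// traversal zigzagged through a reversed relationship into a foreign subtree.
final class ZigzagCheck {
    enum Move { UP, DOWN }

    static boolean isLegalPath(List<Move> path) {
        boolean descended = false;
        for (Move m : path) {
            if (m == Move.DOWN) descended = true;
            else if (descended) return false;  // UP after DOWN: reject as zigzag
        }
        return true;
    }
}
```

Equivalently, the same bit can be carried as traversal state (as in Neo4j's `BranchState`) so illegal branches are pruned instead of filtered after the fact.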
31. The Many-to-Many Problem
The nodes Op & IT may be connected through many subscriptions.
Does the user have access to department Op?
[Diagram: a User with ACCESS into IT; IT and Op linked through a large set of shared subscriptions]
Traversal becomes time consuming (e.g. the M2M market).
However, we only needed to implement the rule for direct access to subscriptions.
32. Deployment View
• Two equal instances of Neo4j embedded in Tomcat
• Access through the Java API due to the need for custom logic
• Using Neo4j 1.8 without HA (did not like ZooKeeper)
[Diagram: two Resource Authorization instances, each with its own embedded Neo4j and tx log, fed from the master RDBMS via a message queue]
33. Dual Model Cost
There are also some drawbacks with dual models:
• Not possible to simply join the ACL with resource tables in the relational database; queries needed redesign
• The complexity added by the code and infrastructure necessary to manage an additional model
• Not ordinary competence (in Norway, at least)
34. Unexplored Areas
Combining Access Control List & Graph
• Best of both worlds (simple logic, fast lookup)
Algorithm:
– Find all affected users when the graph is updated
– Invalidate those users’ access control lists
– Calculate all accessible resources for each user
– Store the result in each user’s access control list
Could then skip the U-turn and many-to-many problems.
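The ACL-on-top-of-graph algorithm above can be sketched as a small cache: invalidate on graph updates, recompute (a full graph traversal per user) on demand, and answer single-resource checks with a set lookup. This is a hypothetical design for the slide's unexplored idea, not something the deck reports implementing; the `AclCache` class and its method names are our own:

```java
import java.util.*;
import java.util.function.Function;

// Hypothetical sketch of the ACL + graph combination from the slide:
// cache each user's computed resource set, invalidate on graph updates,
// recompute lazily via a full graph traversal.
final class AclCache {
    private final Map<String, Set<String>> acl = new HashMap<>();
    private final Function<String, Set<String>> recompute; // graph traversal per user

    AclCache(Function<String, Set<String>> recompute) { this.recompute = recompute; }

    // Graph updated: drop every affected user's cached list.
    void invalidate(Collection<String> affectedUsers) {
        affectedUsers.forEach(acl::remove);
    }

    // Fast lookup path: a set-membership check, no traversal needed -
    // this is what would let us skip the U-turn and many-to-many cases.
    boolean hasAccess(String user, String resource) {
        return acl.computeIfAbsent(user, recompute).contains(resource);
    }
}
```

The trade-off matches the slide's "take the hit on write" theme: updates get more expensive (find affected users, invalidate), while every read becomes a constant-time lookup.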
35. Was it worth it?
Yes!
The user experience is important in Telenor.
37. Web References
• Telenor Norway
• The Project - How NOSQL Paid off for Telenor
• JavaWorld - Graphs for Security
Editor's Notes
Why: access to secret numbers, access to modify/delete subscriptions, possibility to send/receive messages.
We use Neo4j for our business-critical services, both customer/product services and operational services. A channel is a client type, e.g. the web solution for corporate customers, a helpdesk solution, or an app, and may consist of many clients.
The project business case was based on a future point in time when we could no longer onboard any large corporate customers.
Drawing the required logic on the whiteboard made us understand that a graph database might be a good solution.
Take the hit on write, and make read easy! (For us, read performance is the problem, not write performance.) Also, don’t blindly copy tables/foreign keys into nodes and relationships: drop what’s not needed, and remember that relationships may have properties in a graph.
The RDBMS is still mastering the data, as it is used in many different use-cases where that is beneficial.
The last part is important to us. It was really hard to extract the resource authorization out of the relational database, but now we can much more easily replace the current implementation with another one in the future if necessary.
Production logs do not contain user data, so just one big organization was sampled to get production data for a specific customer.
Graph performance is based on the test environment; see the charts in the technical section for production numbers not specific to a unique customer.
We only have detailed logs going a short while back, so we cannot review all data since production. First production was two years ago with limited traffic; full production since spring 2013.
We always continue, since we also have our custom expander. This way, we have a clean separation of concerns in our code. We also have more advanced filters peeking around the node before deciding to include or exclude it.
This is the most important part of the code: the one place where we are now able to write down the business logic in a simple and natural way. Note that we only have ONE class containing the business rules, independently of which use-case we are running.
The relationships and directions that are allowed to be traversed given the different switch parameters.
This is possible since we have a tree graph. It demonstrates the importance of understanding how a graph works, because then you may greatly improve performance with smart traversal strategies.
Extra knowledge, such as which subscription you are …