The document discusses the history and concepts of cloud computing. It began with clustering and grid computing, where computers were grouped together to function as a single computer or where multiple clusters acted as a grid. Cloud computing evolved this concept further by providing dynamically scalable, virtualized resources as an internet-based service. Common types of cloud services include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The document then discusses various components, applications, and benefits of cloud computing architectures.
We fingerprinted nearly 500K code repositories and 160 million code commits across Web3 to create the 2021 Electric Capital Developer Report.
Originally published January 5, 2022. Updated January 26, 2022 to issue a correction.
Created by Electric Capital
https://www.electriccapital.com
Business Analysis: Challenges and Opportunities in 2015 (Denys Gobov Product ... - IT Arena
Lviv IT Arena is a conference specially designed for programmers, designers, developers, top managers, investors, entrepreneurs, and startup founders. It takes place annually on October 2-4 in Lviv at the Arena Lviv stadium. In 2015 the conference gathered more than 1,400 participants and over 100 speakers from companies like Facebook, FitBit, Mail.ru, HP, Epson, and IBM. More details about the conference at itarene.lviv.ua.
Bridging the Gap Between Real Time/Offline and AI/ML Capabilities in Modern S... - Amazon Web Services
Building real-time collaboration applications can be difficult, and adding intelligence to an app to make it stand out remains a challenge. In this session, learn how to build serverless real-time chat apps infused with AWS machine learning (ML) services. We dive into enhancing a real-time chat application with search capabilities, chatroom bots providing automated responses, and on-demand message translation using Amazon AI/ML services.
Predictions: Worldwide IT Spending to Reach $4 Trillion in 2021
CIOs are the future?!
NFTs...not so much
Poor Data Quality Could Cost You...$12.8M a year
7 Lessons on how tech transformations can deliver value
92% of VCs consider themselves to be value-add investors. Most venture-backed #entrepreneurs disagree
Blockchain-Powered Businesses: Company presentation by Marjorie Hernandez de Vogelsteller, LUKSO at the NOAH Conference Berlin 2019, 13-14 June, STATION.
Energy Tokens Pitch Deck - Creating Energy Asset Liquidity Through Blockchain... - Alastair Caithness
ENERGY TOKENS is a complete solution trading platform using Blockchain enabled technology to acquire, hold and transfer direct interests in energy producing assets, including oil/gas and solar/wind.
The way energy assets have been acquired, held, and transferred has not changed for over 100 years.
The Energy Tokens platform has been developed to address inherent problems facing the energy industry.
Problems with Energy Producing Assets:
* Difficult to sell/buy fractional interests:
* Direct investments are made through private negotiated transactions
* Non-industry investors have limited access to these investments
* Limited liquidity opportunities by holders of non-controlling interests prior to sale of entire asset
* Legal conveyance mechanism is antiquated through filings in local land registries
Our Solution:
* Permissioned blockchain
* Data analysis
* Artificial intelligence
* Liquidity for investors
* Democratizing energy investment opportunities
* Creating liquidity for illiquid energy investments
* Facilitating better access to capital for energy development projects
Our Target Market
* By 2030, US capital investment in renewable energy is expected to exceed $500 billion
* There are approximately 1,000,000 oil wells operating in the United States and over 9,000 independent oil and natural gas producers
Visit https://www.energytokens.io for daily news updates
Contact Us to get a copy of the Business Plan and White Paper
Developer Report (Published: December 2020, Updated: April 2021) - Maria Xinhe Shen
We fingerprinted 276,000+ code repositories and 89 million code commits to create this 2020 Developer Report.
The Developer Report deeply analyzes developer activity across all open source crypto ecosystems.
Created by Electric Capital
https://www.electriccapital.com
DAppTotal Research Report On DeFi Industry (First Half Of 2019) - peckshield
The document provides an overview and analysis of the decentralized finance (DeFi) industry in the first half of 2019. It finds that the total value locked in DeFi applications grew almost 5-fold over this period to $1.49 billion as of June 30, 2019. Specifically, it analyzes trends in stablecoins, lending, and decentralized exchanges (DEX) and finds that stablecoin transaction volume and the origination amounts of DeFi lending increased substantially, while DEX trading volumes also rose significantly. However, the document notes that user adoption of DeFi applications remains relatively low, representing ongoing challenges around user education and mainstream adoption.
The document discusses Bitcoin's potential future developments and Okcoin's role in supporting those developments. It outlines a roadmap to Bitcoin becoming a global store of value, medium of exchange, unit of account, and programmable money. Okcoin has been enabling Lightning Network payments, which have grown significantly in usage. The document also advocates for further standardization and interoperability to help Bitcoin reach more people worldwide while avoiding becoming dominated by Wall Street.
Ravencoin is a blockchain protocol forked from Bitcoin that is designed specifically for transferring assets like tokenized property from one party to another. It allows users to create tokens to represent real world assets like securities, investments, physical goods, and more. Ravencoin aims to solve issues with Bitcoin and Ethereum not being purpose-built for asset transfers by creating a simple blockchain focused solely on issuing, tracking, and transferring assets through tokenization.
We fingerprinted 27,000+ code repositories and 22 million code commits to create this H1 2019 Developer Report.
Developers are a leading indicator for where value will be created and accrue in crypto.
This report focuses on developer activity from June 2018 to June 2019.
Published by Electric Capital.
electriccapital.com
We fingerprinted 20,000+ code repos and 16M commits to create a Dev Report on where crypto developers are focused. Developers are a leading indicator for where value will be created and accrue in crypto.
This report focuses on developer activity from Jan, 2018 to Feb, 2019.
Published by Electric Capital.
electriccapital.com
Electric Capital Developer Report (Published: December 2020) - Maria Xinhe Shen
We fingerprinted 276,000+ code repositories and 89 million code commits to create this 2020 Developer Report.
The Developer Report deeply analyzes developer activity across all open source crypto ecosystems.
Created by Electric Capital
https://www.electriccapital.com
Two key phrases for the post-COVID-19 pandemic economic recovery will be ESG (Environmental, Social, and Governance) management and the acceleration of digital transformation.
It is also expected that three core technologies of the data-centric digital economy (AI, Blockchain, and IoT) and their convergence will support the development of a long-term, sustainable economic system.
Contents
I. AI, Blockchain, IoT, and Their Convergence Technology Innovation Status
II. ESG Digital Transformation Innovation Status
III. AI, Blockchain, IoT, and Their Convergence for ESG Digital Innovation Use Case Examples
1. Use Case for Reducing Carbon Footprints (AI + IoT)
2. Use Case for Increasing Renewable Energy Use (AI + Blockchain)
3. Use Case for Waste Management (AI + Blockchain + IoT)
4. Use Case for Workplace/Workforce Management in Social-Human Care (AI + IoT)
5. Use Case for Cybersecurity & Privacy in Social-Business Relationship Management (AI + Blockchain + IoT)
6. Use Case for Corporate Governance (Blockchain)
The Human Body in the IoT. Tim Cannon + Ryan O'Shea - Future Insights
Making the most of our data and the human body in the internet of things. The document discusses biohacking and implantable devices that can send biometric data wirelessly from the body to a phone. It also discusses the history of citizen science and how innovations in accessibility can empower citizens. The future possibilities discussed include active and passive control of digital systems using feedback from the peripheral and central nervous systems.
Smaller is Better - Exploiting Microservice Architectures on AWS - Technical 201 - Amazon Web Services
Microservice-oriented architectures have been implemented and deployed by many and are on the near-term agenda of many others. However, the distributed nature of microservices is a double-edged sword: it is the source of many of the benefits, but also the source of the pain and confusion that teams have endured. We will review best practices and recommended architectures for deploying microservices on AWS, with a focus on how to exploit the benefits of microservices to decrease feature cycle times and costs while increasing reliability, scalability, and overall operational efficiency.
Speaker: Craig Dickson, Solutions Architect, Amazon Web Services
Featured Customer - MYOB
Business and IT agility through DevOps and microservice architecture powered ... - Lucas Jellema
IT needs to run in production in order to generate business value. DevOps is, among other things, a way of thinking that focuses on software in production. A business application requires a tailor-made platform to generate business value. The combination of an application and its platform is a DevOps product, and the DevOps team has full responsibility for that product through its entire lifecycle.
The microservices architecture promises flexibility, scalability, and optimal use of compute resources. Via independent components with well-defined scope and responsibility, interface, and ownership that are evolved and managed in an automated DevOps process, this architecture leverages current technologies and hard-learned insights from past decades.
This session defines the objectives of Business with IT, of microservices and DevOps and introduces Containers and the container platform Kubernetes as crucial ingredients for making DevOps happen.
Microservices, Containers, Scheduling and Orchestration - A Primer - Gareth Llewellyn
This document provides an overview of microservices, containers, scheduling and orchestration. It defines microservices as small, autonomous services that work together with bounded contexts. Containers provide operating system-level virtualization and isolation for microservices. Container cluster managers like Docker Swarm, Kubernetes and Mesosphere DC/OS provide scheduling, service discovery, load balancing and other orchestration capabilities for containers. The document examines characteristics of moving from monolithic to microservice architectures and different deployment patterns using containers, VMs and hardware virtualization.
AWS Summit Auckland - Smaller is Better - Microservices on AWS - Amazon Web Services
The document provides an overview of microservices including:
- Defining microservices and comparing them to SOA
- The benefits of a microservices architecture like improved agility, scalability, and innovation
- Common microservice patterns on AWS like serverless and container-based services
- How microservices can address business problems like long feature cycles and technical problems like lack of testability
- A customer story of how MYOB adopted microservices on AWS to support their online products
- Tips for evolving architectures including focusing on automation, organizational structure, and individual service design.
This document provides an introduction and overview of containers, Kubernetes, IBM Container Service, and IBM Cloud Private. It discusses how microservices architectures break monolithic applications into smaller, independently developed services. Containers are presented as a standard way to package applications to move between environments. Kubernetes is introduced as an open-source system for automating deployment and management of containerized applications. IBM Cloud Container Service and IBM Cloud Private are then overviewed as platforms that combine Docker and Kubernetes to enable deployment of containerized applications on IBM Cloud infrastructure.
AWS Innovate: Smaller IS Better – Exploiting Microservices on AWS, Craig Dickson - Amazon Web Services Korea
This document provides an overview of microservices and how they can be implemented on AWS. It begins with defining microservices as independent services that work together to form an application. It then discusses how microservices address issues with monolithic architectures like tight coupling and lack of modularity. Various microservice patterns on AWS are presented, including using EC2 instances, ECS, Lambda, and serverless architectures. The document also explores how microservices can help address both business problems like long feature cycle times and technical problems like lack of testability. Overall, it aims to explain what microservices are, how they can be deployed on AWS, and the types of issues they can help organizations solve.
Cloud 2.0: Containers, Microservices and Cloud Hybridization - Mark Hinkle
In a very short time, cloud computing has become a major factor in the way we deliver infrastructure and services, and we have quickly moved through the ideas of hosted cloud and orchestration. This talk focuses on the next evolution of cloud: containers (like Docker), microservices (the way Netflix runs their cloud), and hybridization (applications running on Mesos across Kubernetes clusters in both private and public clouds).
node.js and Containers: Dispatches from the Frontier - bcantrill
This document discusses node.js and containers for microservices architectures. It describes how microservices architectures break large monolithic applications into many smaller independent services. Node.js is well-suited for microservices due to its lightweight footprint and asynchronous nature. Containers provide an efficient way to run many independent services on a single machine by virtualizing at the operating system level. The document outlines lessons learned from rewriting a cloud orchestration system called SmartDataCenter using a microservices and container-based architecture.
This document provides an overview of cloud native concepts including:
- Cloud native is defined as applications optimized for modern distributed systems capable of scaling to thousands of nodes.
- The pillars of cloud native include devops, continuous delivery, microservices, and containers.
- Common use cases for cloud native include development, operations, legacy application refactoring, migration to cloud, and building new microservice applications.
- While cloud native adoption is growing, challenges include complexity, cultural changes, lack of training, security concerns, and monitoring difficulties.
Are you considering Microservice architecture for your next project?
Are you planning to migrate an existing legacy / monolithic application to Microservices?
Are you curious about Microservice architecture?
If the answer to one of the above questions is YES, then this session is for you.
Join me to know all about Microservice architecture:
- When to adopt it?
- When not to adopt it?
- How to assess your team’s readiness to adopt Microservice architecture?
- Starting a new project with Microservice architecture.
- Migrate an existing project to Microservice architecture.
- Microservice architecture main anti-patterns and how to fix them.
- Are monoliths really that bad?
Presentation on the current state of cloud computing and the role that open source, containers and microservices are playing in the cloud.
Presented to Florida Linux Users Exchange on April 9th, 2015
Docker concepts and microservices architecture are discussed. Key points include:
- Microservices architecture involves breaking applications into small, independent services that communicate over well-defined APIs. Each service runs in its own process and communicates through lightweight mechanisms like REST/HTTP.
- Docker allows packaging and running applications securely isolated in lightweight containers from their dependencies and libraries. Docker images are used to launch containers which appear as isolated Linux systems running on the host.
- Common Docker commands demonstrated include pulling public images, running interactive containers, building custom images with Dockerfiles, and publishing images to Docker Hub registry.
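The command categories listed above can be sketched as a minimal session; the image and repository names (`myuser/myapp`) are placeholders, not from the original deck:

```shell
# Pull a public image from Docker Hub
docker pull alpine:3.19

# Run an interactive container, removed automatically on exit
docker run --rm -it alpine:3.19 /bin/sh

# Build a custom image from a Dockerfile in the current directory
# ("myuser/myapp" is a placeholder repository name)
docker build -t myuser/myapp:1.0 .

# Publish the image to the Docker Hub registry (requires `docker login` first)
docker push myuser/myapp:1.0
```

Running these requires a local Docker daemon; the flow (pull, run, build, push) is the one the summary describes.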
My (very brief!) presentation at Interzone.io on March 11, 2015. A more in depth exploration of these ideas can be found at http://www.slideshare.net/bcantrill/docker-and-the-future-of-containers-in-production video: https://www.joyent.com/developers/videos/docker-and-the-future-of-containers-in-production
Christian Posta is a principal middleware specialist and architect who has worked with large microservices architectures. He discusses why companies are moving to microservices and cloud platforms like Kubernetes and OpenShift. He covers characteristics of microservices like small autonomous teams and decentralized decision making. Posta also discusses breaking applications into independent services, shedding dependencies between teams, and using contracts and APIs for communication between services.
This presentation was delivered on September 14th at the Limerick DotNet User Group.
(https://www.meetup.com/preview/Limerick-DotNet/events/xskpdnywmbsb)
SlideShare Url: https://www.slideshare.net/lalitkale/introduction-to-microservices-80583928
In this presentation, the new architectural style, microservices, and its emergence are discussed. We also briefly touch on what microservices are not, Conway's law and organization design, the principles of microservices, and the service discovery mechanism and why it is necessary for a microservices implementation.
About Speaker:
Lalit is a senior developer, software architect, and consultant with more than 12 years of .NET experience. He loves to work with C# .NET and Azure platform services like App Services, Virtual Machines, Cortana, and Container Services. He is also the author of the 'Building Microservices with .NET Core' (https://www.packtpub.com/web-development/building-microservices-net-core) book.
To know more and connect with Lalit, you can visit his LinkedIn profile below. https://www.linkedin.com/in/lalitkale/
This presentation will be useful for software architects, managers, and senior developers.
Do share your feedback in comments.
Sviluppare velocemente applicazioni sicure con SUSE CaaS Platform e SUSE Manager - SUSE Italy
The document describes an event called Expert Days 2019 focused on developing secure applications quickly using SUSE CaaS Platform and SUSE Manager. It includes an agenda with topics on IT transformation for innovation, terminology around SUSE CaaS Platform and SUSE Manager, and a live demo of a jTracker microservices application running on containers. Partners BS Company and SUSE will provide real experiences using these open source tools to reduce development time while maintaining enterprise security standards.
Cloud Computing as Innovation Hub - Mohammad Fairus Khalid - OpenNebula Project
Cloud computing provides an innovation platform beyond just cost savings. New technologies like containers, microservices, and APIs enable collaboration and mobility. Applications are designed to be stateless, transactional, and deployed atomically. This paradigm shift supports real-time scalability, insights from big data, and interconnected devices and people. Use cases include neighborhood watch, emergency response, and open data platforms. Cloud is impacted by mobility, social media, and the internet of things, moving away from silos towards collaboration across applications, data, and people.
Daniel Raisch - raisch@br.ibm.com
Ten years after the start of what has come to be called the Digital Transformation, the main initiatives that characterize this transformation (Cloud, Mobile, Analytics) have reached maturity and are already on the priority agenda of more than 70% of Brazilian companies. In this presentation we show the evolution curve of these initiatives over this period and the current state of the art of each one in the industry.
Similar to Accelerate Delivery: Business case for Agile DevOps, CI/CD and Microservices (20)
Just a JSON parser plus a small subset of JSONPath.
Small (currently 4200 lines of code)
Very fast, uses an index overlay from the ground up.
Does not do JavaBean serialization but can serialize into basic Java types and can map to Java classes and Java records.
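The "index overlay" mentioned above can be illustrated with a toy parser that records where each value lives in the input and materializes strings only on demand. This is an illustrative sketch of the technique, not Boon's actual code, and it handles only flat JSON objects with string values:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of an index-overlay parser: instead of eagerly building
// String objects for every value, record [start, end) offsets into the
// original text and create the String only when a value is actually read.
// Flat JSON objects with string values only; not Boon's real implementation.
public class IndexOverlayDemo {
    private final String json;
    private final Map<String, int[]> spans = new HashMap<>();

    public IndexOverlayDemo(String json) {
        this.json = json;
        int i = 0;
        while (true) {
            int keyStart = json.indexOf('"', i);
            if (keyStart < 0) break;
            int keyEnd = json.indexOf('"', keyStart + 1);
            int colon = json.indexOf(':', keyEnd);
            if (colon < 0) break;
            int valStart = json.indexOf('"', colon) + 1;
            int valEnd = json.indexOf('"', valStart);
            // Record the span; no String is allocated for the value yet.
            spans.put(json.substring(keyStart + 1, keyEnd),
                      new int[] { valStart, valEnd });
            i = valEnd + 1;
        }
    }

    // Materialize the value lazily from the recorded offsets.
    public String get(String key) {
        int[] s = spans.get(key);
        return s == null ? null : json.substring(s[0], s[1]);
    }
}
```

The speed win comes from skipping allocation for values the caller never reads, which is the core of the index-overlay approach.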
This talk was done in Feb 2020. Sergey and I co-presented at CTO Forum on Microservices and Service Mesh (how they relate, requirements, goals, best practices, and how DevOps and Agile have converged on the set of features for Service Mesh and gateways around observability, feature flags, etc.)
Early Draft: Service Mesh allows developers to focus on business logic while the crosscutting network data layer code is handled by the Service Mesh. This is a boon because this code can be tricky to implement and hard to test all of the edge cases. Service Mesh takes this a few steps further than AOP or Servlet Filters or custom language-specific frameworks because it works regardless of the underlying programming language being used which is great for polyglot development shops. Thus standardizing how these layers work, while allowing teams to pick the best tools or languages for the job at hand. Kubernetes and Istio Service Mesh automate best practices for DevSecOps needs like: failover, scale-out, scalability, health checks, circuit breakers, rate limiters, metrics, observability, avoiding cascading failure, disaster recovery, and traffic routing; supporting CI/CD and microservices architecture.
Istio’s ability to automate and maintain zero-trust networks is its most important feature. In the age of high-profile data breaches, security is paramount. Companies want to avoid major brand issues that impact the bottom line and shrink market capitalization in an instant. Istio provides a standard way to do mTLS and automatic certificate rotation, which helps prevent a breach and limits the blast radius if one occurs. Istio also takes the mTLS concern out of microservice deployments and makes it easy to use, taking the burden off application developers.
This document summarizes key points from the book Accelerate about achieving high performance through DevOps practices. It notes that high-performing teams deploy code more frequently, with shorter lead times and lower change-failure rates. They use trunk-based development and loosely coupled architectures. Implementing continuous delivery, monitoring, and a lean approach improves software delivery and quality and reduces burnout. Culture capabilities like learning and collaboration also impact performance. Overall, DevOps practices can double organizational metrics like profitability and productivity. The document advocates transforming through understanding these practices.
Covers how we built a set of high-speed reactive microservices and made the most of cloud/hardware costs while meeting objectives in resilience and scalability. Talks about Akka, Kafka, QBit, and in-memory computing from a practitioner's point of view. Based on the talks delivered by Geoff Chandler, Jason Daniel, and Rick Hightower at JavaOne 2016 and SF Fintech at Scale 2017, but updated.
Reactive Java: Promises and Streams with Reakt (JavaOne Talk 2016) - Rick Hightower
see labs at https://github.com/advantageous/j1-talks-2016
Import based on the PPT, so there are more notes. This is from our JavaOne Talk 2016 on Reakt, reactive Java programming with promises, circuit breakers, and streams. Reakt is a reactive Java lib that provides promises, streams, and a reactor to handle asynchronous call coordination. It was influenced by the design of promises in ES6. You want to async-call serviceA and then serviceB, take the results of serviceA and serviceB, and then call serviceC. Then, based on the results of call C, call D or E and then return the results to the original caller. Calls to A, B, C, D, and E are all async calls, and none should take longer than 10 seconds. If they do, then return a timeout to the original caller. The whole async call sequence should time out in 20 seconds if it does not complete and should also check for circuit breakers and provide back pressure feedback so the system does not have cascading failures. Learn more in this session.
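The A/B/C/D-or-E sequence described above can be sketched with the JDK's CompletableFuture standing in for Reakt's Promise type; serviceA through serviceE are hypothetical stubs, and the per-call and overall timeouts mirror the 10- and 20-second limits in the description:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// The call sequence from the talk description, sketched with the JDK's
// CompletableFuture instead of Reakt's Promise. The "services" here are
// hypothetical stubs standing in for remote async calls.
public class AsyncFlowSketch {

    static CompletableFuture<String> call(String name) {
        // Stand-in for a real non-blocking service call.
        return CompletableFuture.completedFuture(name + "-result");
    }

    public static String run() {
        CompletableFuture<String> a = call("A").orTimeout(10, TimeUnit.SECONDS);
        CompletableFuture<String> b = call("B").orTimeout(10, TimeUnit.SECONDS);

        CompletableFuture<String> result = a
            .thenCombine(b, (ra, rb) -> ra + "+" + rb)           // wait for A and B
            .thenCompose(ab -> call("C(" + ab + ")")             // then call C with both
                .orTimeout(10, TimeUnit.SECONDS))
            .thenCompose(rc -> rc.contains("A")                  // branch on C's result
                ? call("D").orTimeout(10, TimeUnit.SECONDS)
                : call("E").orTimeout(10, TimeUnit.SECONDS));

        // The whole async sequence must complete within 20 seconds.
        return result.orTimeout(20, TimeUnit.SECONDS).join();
    }
}
```

With these stubs, C's result contains "A", so the flow takes the D branch; Reakt layers circuit breakers and back-pressure feedback on top of this kind of coordination.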
Reactive Java: Promises and Streams with Reakt (JavaOne talk 2016) - Rick Hightower
see labs at https://github.com/advantageous/j1-talks-2016
Import based on the PDF; otherwise the same talk description as the PPT import above.
High-Speed Reactive Microservices - trials and tribulations - Rick Hightower
Covers how we built a set of high-speed reactive microservices and made the most of cloud/hardware costs while meeting objectives in resilience and scalability. This version has more notes attached, as it is based on the PPT rather than the PDF.
High-speed reactive microservices (HSRM) are microservices that are in-memory, non-blocking, own their data through leasing, and use streams and batching. They provide advantages like lower costs, ability to handle more traffic with fewer resources, and cohesive codebases. The example service described handles 30k recommendations/second on a single thread through batching, streaming, and data faulting. The document discusses attributes of HSRM like single writer rules and service stores, and related concepts like reactive programming, streams, and service sharding.
Netty Notes Part 3 - Channel Pipeline and EventLoops - Rick Hightower
Learning more about Netty helps me understand Vert.x better. Netty in Action is a great book. The threading model of Netty is very important to understanding event loops and reactive programming.
Netty Notes Part 2 - Transports and Buffers - Rick Hightower
This document provides notes on Netty Part 2 focusing on transports and buffers. It discusses the different Netty transport options including NIO, epoll, and OIO. It explains that Netty provides a common interface for different implementations. The document also covers Netty buffers including ByteBuf, direct vs array-backed buffers, composite buffers, and buffer pooling. It emphasizes that performance gains come from reducing byte copies and buffer allocation.
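The point about reducing byte copies can be shown even with the JDK's ByteBuffer, which Netty's ByteBuf generalizes (adding separate reader/writer indices and pooled allocation): slice() creates a view over the same backing memory rather than copying bytes. A sketch of the idea, not Netty code:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// The performance point above (avoid byte copies) shown with the JDK's
// ByteBuffer: wrap() and slice() create views sharing the same backing
// array, so no bytes move until the caller materializes a value.
public class ZeroCopyView {
    public static String header(byte[] packet, int headerLen) {
        ByteBuffer whole = ByteBuffer.wrap(packet);      // wraps the array, no copy
        whole.limit(headerLen);
        ByteBuffer view = whole.slice();                 // a view, still no copy
        byte[] out = new byte[view.remaining()];
        view.get(out);                                   // copy only when materializing
        return new String(out, StandardCharsets.UTF_8);
    }
}
```

Netty's pooled, composite ByteBufs push the same principle further by also reusing the buffers themselves instead of allocating fresh ones.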
WebSocket MicroService vs. REST Microservice - Rick Hightower
Comparing the speed of RPC calls over WebSocket microservices versus REST-based microservices. Using wrk, QBit, and examples in Java, we show how much faster WebSocket is for doing RPC service calls.
Consul: Microservice Enabling Microservices and Reactive Programming - Rick Hightower
Consul is a service discovery system that provides a microservice style interface to services, service topology and service health.
With service discovery you can look up services, which are organized in the topology of your datacenters. Consul uses client agents and Raft to provide a consistent view of services. Consul also provides a consistent view of configuration, likewise via Raft. Consul provides a microservice interface to a replicated view of your service topology and its configuration, and it can monitor and change the service topology based on the health of individual nodes.
Consul provides scalable distributed health checks. Consul only does minimal datacenter to datacenter communication so each datacenter has its own Consul cluster. Consul provides a domain model for managing topology of datacenters, server nodes, and services running on server nodes along with their configuration and current health status.
Consul is like combining the features of a DNS server plus Consistent Key/Value Store like etcd plus features of ZooKeeper for service discovery, and health monitoring like Nagios but all rolled up into a consistent system. Essentially, Consul is all the bits you need to have a coherent domain service model available to provide service discovery, health and replicated config, service topology and health status. Consul also provides a nice REST interface and Web UI to see your service topology and distributed service config.
Consul organizes your services in a Catalog called the Service Catalog and then provides a DNS and REST/HTTP/JSON interface to it.
To use Consul you start up an agent process. The Consul agent is a long-running daemon on every member of the Consul cluster, and it can run in server mode or client mode. A client agent runs on every physical server or virtual machine that hosts services. Clients use gossip and RPC calls to stay in sync with Consul.
A client (a consul agent running in client mode) forwards requests to a server (a consul agent running in server mode). Clients are mostly stateless and use LAN gossip to communicate changes to the server nodes.
A server (a consul agent running in server mode) is like a client agent but with more tasks. The Consul servers use the Raft quorum mechanism to elect a leader and maintain cluster state such as the Service Catalog. The leader manages a consistent view of config key/value pairs and of service health and topology. Consul servers also handle WAN gossip to other datacenters, forward queries to the leader, and forward queries to other datacenters.
A datacenter is fairly obvious: it is anything that allows for fast communication between nodes, with few or no hops and little or no routing; in short, high-speed communication. This could be an Amazon EC2 availability zone, a networking environment like a subnet, or any private, low-latency, high-bandwidth network.
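Consul's HTTP API sits under /v1; for example, the service catalog is exposed at /v1/catalog/services. The sketch below builds such a request with the JDK's HttpClient; the agent address localhost:8500 and datacenter name dc1 are conventional defaults, and actually sending the request assumes a running Consul agent:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Builds a request against Consul's HTTP API. /v1/catalog/services lists
// all registered services; the agent address localhost:8500 and the dc
// query parameter value are assumptions, adjust them for your cluster.
public class ConsulCatalog {

    public static HttpRequest catalogRequest(String host, int port, String datacenter) {
        return HttpRequest.newBuilder()
            .uri(URI.create("http://" + host + ":" + port
                    + "/v1/catalog/services?dc=" + datacenter))
            .timeout(Duration.ofSeconds(5))
            .GET()
            .build();
    }

    // Sending the request requires a live Consul agent, so it is kept
    // separate from the pure request construction above.
    public static String fetch(HttpRequest request) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

The same catalog is also reachable through Consul's DNS interface, which is what makes it usable as service discovery without any client library.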
The Java microservice lib. QBit is a reactive programming lib for building microservices: JSON, HTTP, WebSocket, and REST. QBit uses reactive programming to build elastic REST- and WebSocket-based, cloud-friendly web services. SOA evolved for mobile and cloud. QBit is a Java-first programming model. It uses common Java idioms to do reactive programming.
It focuses on Java 8. It is one of the few of a crowded field of reactive programming libs/frameworks that focuses on Java 8. It is not a lib written in XYZ that has a few Java examples to mark a check off list. It is written in Java and focuses on Java reactive programming using active objects architecture which is a focus on OOP reactive programming with lambdas and is not a pure functional play. It is a Java 8 play on reactive programming.
Services can be stateful, which fits the micro service architecture well. Services will typically own or lease the data instead of using a cache.
CPU Sharded services, each service does a portion of the workload in its own thread to maximize core utilization.
The idea here is you have a large mass of data that you need to do calculations on. You can keep the data in memory (fault it in or just keep in the largest part of the histogram in memory not the long tail). You shard on an argument to the service methods. (This was how I wrote some personalization engine in the recent past).
Worker Pool service, these are for IO where you have to talk to an IO service that is not async (database usually or legacy integration) or even if you just have to do a lot of IO. These services are semi-stateless. They may manage conversational state of many requests but it is transient.
ServiceQueue wraps a Java object and forces method calls, responses, and events to go through high-speed, batching queues.
ServiceBundle uses a collection of ServiceQueues.
ServiceServer uses a ServiceBundle and exposes it via REST/JSON and WebSocket/JSON.
Events are integrated into the system. You can register for an event using the @EventChannel annotation, or you can implement the event channel interface. The event bus can be replicated, and event buses can be clustered (optional library). There is not one event bus; you can create as many as you like. Currently the event bus works over WebSocket/JSON, so you can receive events from non-Java applications.
Find out more at: https://github.com/advantageous/qbit
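The active-object idea behind ServiceQueue, method calls marshaled onto a queue and drained by one thread, can be sketched without any library. This is an illustration of the pattern, not QBit's actual API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

// Library-free sketch of the active-object pattern: callers enqueue method
// invocations; a single worker thread drains the queue, so the wrapped
// state never needs its own locking.
public class ActiveObjectSketch {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public ActiveObjectSketch() {
        Thread worker = new Thread(() -> {
            try {
                while (true) queue.take().run(); // drain calls one at a time
            } catch (InterruptedException e) { /* shut down */ }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Turn a method call into a queued task; the future carries the reply.
    public <T> CompletableFuture<T> call(Callable<T> method) {
        CompletableFuture<T> reply = new CompletableFuture<>();
        queue.add(() -> {
            try { reply.complete(method.call()); }
            catch (Exception e) { reply.completeExceptionally(e); }
        });
        return reply;
    }

    public static void main(String[] args) throws Exception {
        ActiveObjectSketch service = new ActiveObjectSketch();
        StringBuilder state = new StringBuilder(); // touched by one thread only
        CompletableFuture<String> out =
                service.call(() -> state.append("hello").toString());
        System.out.println(out.get()); // prints "hello"
    }
}
```

QBit adds batching, events, and REST/WebSocket exposure on top of this basic shape.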
Groovy JSON support and the Boon JSON parser are 3x to 5x faster than Jackson at parsing JSON from String and char[], and 2x to 4x faster at parsing byte[].
Groovy JSON support and Boon JSON support are also faster than Jackson at encoding JSON strings, and Boon is faster than Jackson at serializing/deserializing Java instances to/from JSON. The core of the Boon JSON parser has been forked into Groovy 2.3 (now in beta); in the process, Boon JSON support was improved and further enhanced. Groovy and Boon JSON parser speeds are equivalent, and Groovy now has the fastest JSON parser on the JVM.
5. About Presenter
• Author of a best-selling agile development book; early adopter of TDD, DevOps, Agile, etc.
• Highest leadership scores, with the highest-performing team in a 2,000+ person group. Group utilized: Jenkins pipelines, Event Hub/EEL, high code coverage, PR process, trunk-based git like GitHub flow, full automation, etc.
• Two awards from the CIO at a Fortune 100: amazing results, G.O.A.T. and Engineering Excellence
• Written open source software used by millions
• Early adopter and advocate of microservices and reactive high-speed streaming, 12-factor deployment, container orchestration, in-memory compute
• Speaker at conferences on microservice development, parsers, distributed data grids; a Java Champion (chosen from millions of devs); books, articles, etc.
• Worked on Vert.x, QBit, Reakt, Groovy, Boon, etc.
6. Slide deck based on many books, and more
• Past experience
• Latest trends
7. Outline
• How we got here
• History of: MicroServices, DevOps/Agile, CI/CD, Kubernetes
• Business proposition
• CI/CD - DevOps practices
• Continuous delivery
• Continuous integration
• Lean management and monitoring / KPIs
• SCM / version control / GitOps / immutable infrastructure
• Trunk-based development
• Concrete best practices and demo
8. Brief history of Microservices and Agile, CI/CD
A brief history of time
9. How we got here
• Web pages that were brochures
• eCommerce
• Legacy integration
• Rush to “Webify” businesses
• SOA: wrap legacy systems as services to use from the web
• Virtualization, Virtualization 2.0, cloud, containers, and now container orchestration
10. “Now you can run a JVM in a Docker image, which is just a process pretending to be an OS, running in an OS that is running in the cloud, which is running inside of a virtual machine, which is running on a Linux server that you don’t own and that you share with people you don’t know.”
Microservices Architecture
11. “The Java EE container is no longer needed because servers are not giant refrigerator boxes that you order from Sun and wait three months for (circa 2000). One issue with enterprise components is that they assume the use of hardware servers, which are large monoliths, where you want to run a lot of things on the same server. Well, it turns out that today that makes no sense. Operating systems and servers are ephemeral, virtualized resources and can be shipped like a component. We have EC2 images (AMIs), Kubernetes, and Docker. The world changed. Move on. Microservices just recognize this trend, so you are not developing like you did when the hardware, cloud orchestration, multi-core machines, and virtualization were not there. You did not develop code in the 90s with punch cards, did you?”
Microservices: Natural Evolution
12. Microservices
• Focus is building small, reusable, scalable services
• Adopt the Unix single-purpose utility approach to service development
• Small, so they can be released more often, and written to be malleable
• Easier to write
• Easier to change
• Go hand in hand with continuous integration and continuous delivery
• Heavily based on REST and messaging
What is microservice architecture?
13. Microservices: Key ingredients
• Independently deployable, small, domain-driven services
• Own their data (no shared databases)
• Communication through a well-defined wire protocol, usually JSON over HTTP (curl-able interfaces)
• Well-defined interfaces and minimal functionality
• Avoiding cascading failures and synchronous calls: reactive, designing for failure
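A “curl-able” JSON-over-HTTP interface like the one described above can be shown with nothing but the JDK. The /health path and payload here are illustrative choices, not part of any standard:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Minimal curl-able JSON-over-HTTP endpoint using only the JDK's built-in
// HTTP server; port 0 lets the OS pick a free port.
public class TinyJsonService {

    static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) { out.write(body); }
        });
        server.start();
        return server;
    }

    // Hit our own endpoint the way curl would and return the JSON body.
    static String probe(int port) throws Exception {
        URI uri = URI.create("http://localhost:" + port + "/health");
        return HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(uri).build(),
                      HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start();
        System.out.println(probe(server.getAddress().getPort())); // {"status":"UP"}
        server.stop(0);
    }
}
```

A real service would use a framework, but the wire contract, plain JSON over HTTP, stays this simple.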
14. Microservices: Evolution of SOA
• SOA and Microservices have common goals and purposes
• A refinement to meet the goals of polyglot devices and 3rd-generation virtualization (cloud, containers, container orchestration)
• Keeps the parts of SOA that worked well
• MS: web technologies to provide scalable, modular, domain-driven, small, and continuously deployable cloud-based services
“It’s not the daily increase but daily decrease. Hack away at the unessential.” -- Bruce Lee
15. SOA vs. Microservices
“Microservices Architecture is taking what perhaps started out as SOA and applying lessons learned, as well as pressure to support polyglot devices, deploy more rapidly, and the architectural liquidity that cloud computing and virtualization/containerization provide. Mix all that together and you can see where Microservices Architecture started. Microservices Architecture is in general less vendor-driven than SOA and more needs-driven, by the demands of application development and current cloud infrastructure.”
16. MicroServices: Unix Philosophy
• Microservices compare to the Unix philosophy
• Ken Thompson, Unix creator, said Unix has a philosophy of: one tool, one job
• “Unix philosophy emphasizes building short, simple, clear, modular, and extendable code that can be easily maintained and repurposed by developers other than its creators”
17. MicroServices: Achieving Resilience
• Avoid synchronous calls to avoid cascading failures
• Microservices tend to embrace streams, queues, actor systems, event loops and other async calls
• Spend more time with distributed logging / log aggregation w/ MDC, and now distributed tracing
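One minimal way to keep a synchronous dependency from cascading its failure, sketched here with plain CompletableFuture (the timeout budget, fallback value, and method names are illustrative assumptions):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Sketch of "design for failure": call a dependency asynchronously and fall
// back to a default answer instead of blocking until the failure spreads.
public class TimeoutFallback {

    // slowMillis simulates a struggling downstream service.
    static CompletableFuture<String> fetchRecommendations(long slowMillis) {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(slowMillis); } catch (InterruptedException ignored) {}
            return "personalized";
        }).completeOnTimeout("top-sellers", 100, TimeUnit.MILLISECONDS); // fallback
    }

    public static void main(String[] args) {
        // Fast dependency: the personalized answer arrives within budget.
        System.out.println(fetchRecommendations(10).join());
        // Slow dependency: budget blown, serve the generic fallback instead.
        System.out.println(fetchRecommendations(5_000).join());
    }
}
```

A full circuit breaker would also track failure rates and stop calling the dependency entirely for a while; this shows only the per-call time budget.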
18. MicroServices: Monitoring and KPIs
• User experience KPIs
• Debugging (requests per second, #threads, #connections, failed auth, expired tokens, etc.)
• Circuit breaker (monitor health, restarts, act/react)
• Cloud orchestration (monitor load, spin up instances)
• Health checks and observable KPIs
“Doveryai, no proveryai” (trust, but verify)
Microservice Monitoring
19. “Just remember Microservices are not a new thing, and they are not cool or hip. Microservices are the obvious evolutionary architecture to address the revolutionary things that already happened: web, cloud, mobile, server virtualization, OS containerization, container orchestration, multi-core servers, cheaper and cheaper RAM, 64-bit computing, 10GbE, 100GbE, etc.”
20. MicroServices: Continuous Deployment
• Microservices are continuously deployable services
• Microservices focus on business capability, and a refocus on object-oriented programming roots: organizing code around business domains, with data and business rules co-located in the same process or set of processes
21. MicroServices: Continuous Deployment
• The focus of Microservices is on breaking up applications into small (micro) reusable services which might be useful to other services or other applications
• Services can be deployed independently
• Each of these services can be tweaked and then redeployed independently
• This is where the “micro” part of microservices comes into play
• Microservice vs Monolith
23. XP, Agile, Scrum, TDD, CI/CD
• TDD, CI and CI/CD
• Test Driven Development and Agile
• XP, Agile, Scrum
• CI/CD (Jenkins and the tools that came before)
• CI/CD needs automated testing
24. DevOps Background / Styles
• DevOps aka DevSecOps
• Heroku and the birth of 12-factor deployment, DevOps, KPIs, SRE
• YBYOI vs. SRE vs. DevOps, and where do you fit?
• SRE: observability, log aggregation, KPIs/metrics, distributed/trace logging
• Container orchestration: YARN, Mesos, Marathon, Nomad, Borg and Kubernetes
• What is GitOps? What is immutable infrastructure?
• What is cloud native?
25. Kubernetes (K8s)
• Services, StatefulSets, Namespaces, Tags, Ingress, Egress
• Helm/Kustomize for packaging an app or a set of related uServices and deploying to K8s
• Multi-cloud support, and just cloud support
• Monitoring built in (or at least easily pluggable)
• Easy to ramp up (or easier)
• Supports flexible deployment models
• Integrates with cloud providers’ services or runs standalone (on-prem or cloud)
• What came before and now: Heroku/PaaS/IaaS/EC2, Docker, Docker Swarm, Mesos/Marathon, Nomad, ECS, EKS, etc. K8s is the current mindshare champ.
29. Acceleration in Practice
• Make customers happy
• Deliver more
• Less burnout
• Grow the value of the company
• Make more money
30. Organizational performance
• High performers are twice as likely to exceed organizational performance goals as low performers:
• 2x profitability
• 2x productivity
• 2x market share
• 2x number of customers
31. Organizational performance Part II
• High performers are twice as likely to exceed non-commercial performance goals as low performers:
• 2x quantity of products and services
• 2x operating efficiency
• 2x customer satisfaction
• 2x quality of products/services
• 2x achieving organizational/mission goals
33. Software delivery performance
• Deploy frequency, lead time, mean time to restore (MTTR), and change fail percentage do well to predict overall software delivery performance
• Improving software delivery performance improves tempo and stability
• Software delivery performance improves organizational performance and quality/customer satisfaction
• Deploy frequency is highly correlated with continuous delivery and the use of version control best practices
34. Software delivery performance II
• Lead time is highly correlated with good version control and automated testing
• MTTR is highly correlated with version control and monitoring
• Software delivery performance is negatively correlated with deployment pain
• Software delivery performance is correlated with organizational investment in DevOps
35. Quality
Source: Forsgren, Nicole, Jez Humble, and Gene Kim. Accelerate. IT Revolution Press. Kindle Edition.
High performers do the least amount of manual work across all practices: configuration management, testing, deployments, and change approvals.
36. Culture
• The 5 factors most associated with burnout are negatively impacted by bad software delivery performance
• Deployment pain and poor software delivery practices cause organizational burnout
37. Improve culture by improving practices
• Technical practices predict continuous delivery
• Improved organizational culture, identity, job satisfaction, and software delivery performance; less burnout, less deployment pain, and less time spent on rework!
• High performers spend 50% less time remediating security issues than low performers
38. Trunk-based Development (like GitHub flow)
• High performers have the shortest integration times and branch lifetimes
• Branch life and integration typically last hours or a day
• Low performers have the longest integration times and branch lifetimes
• Branch life and integration typically last days or weeks
39. Architecture
• Loosely coupled, well-encapsulated architecture drives IT performance
• In the 2017 dataset, the biggest contributor to continuous delivery was loosely coupled, well-encapsulated architecture
40. Lean Product Management Capabilities
• An experimental approach to product development highly correlates with continuous delivery
• Lean product development capabilities predict improvements in organizational culture (like reduced burnout), higher software delivery performance, and overall organizational performance
42. Accelerate DevOps
• Continuous delivery
• Architecture
• Product and process
• Lean management and monitoring
• Cultural
43. Continuous delivery capabilities
1. Implement continuous delivery / continuous deployment
2. Version control all production artifacts
3. Automate your deployment pipeline
4. Implement continuous integration
5. Use trunk-based development methods (like GitHub flow instead of git flow)
6. Implement test automation
7. Shift left on security
44. Product and Process Capabilities
• Gather and implement customer feedback
• Make the flow of work through the system visible
• Work in small batches
• Foster and enable team experimentation
45. Culture capabilities
• Support a generative culture
• Encourage and support learning
• Encourage collaboration
• Make work as meaningful as possible
• Support and encourage transformational leadership
46. Architectural Capabilities to Accelerate
• Use loosely coupled architecture
• Release new services on demand without outages
• Empower the team to select tools; trust the team to pick the best tools
47. Lean Management and Monitoring Capabilities
• Have a lightweight change approval process
• Monitor application and system KPIs to inform business decisions
• Proactively check system health
• Preemptively detect and mitigate problems
• Improve process and work within WIP limits
• Set up visible dashboards to monitor/communicate WIP, quality, applications and systems
49. Lean Management
• Process: small batches
• Decompose into features that allow for rapid development
• MVP: a prototype with just enough features to prove business value or enable validated learning
• Quickly gather customer requirements (A/B testing, customer satisfaction surveys, etc.)
• Team experimentation
• Lean Management: change approval
• Lean Management: proactive notification
• Lean Management: monitoring and KPIs
• Lean Management: WIP limits, visualizing work
50. Companies with regulatory requirements or a strict CCB can focus on Continuous Delivery. Continuous Deployment can then be part of a workflow built on top of Continuous Delivery.
51. Version control - SCM
• GitOps: keeping application code, system configuration, application configuration, and scripts for automating build and configuration in version control
• These factors together predict IT performance
• A key component of continuous delivery
• Immutable infrastructure
• GitOps, keeping system and application config in git (versioned), correlates highly with delivery performance
52. Deployment Automation
• Deployment automation
• Containers, config, immutable infrastructure
• Comprehensive configuration management (automation scripts), continuous integration, and continuous testing
• A key metric of GitOps is how much diff exists between the system config in git and what is deployed
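That GitOps drift metric can be sketched as a simple diff between the declared and the deployed config; the key names below are hypothetical:

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.TreeSet;

// Sketch of the GitOps drift metric: compare the config committed to git
// with what is actually deployed and report the keys that differ.
public class ConfigDrift {

    static Set<String> drift(Map<String, String> declaredInGit,
                             Map<String, String> deployed) {
        Set<String> keys = new TreeSet<>(declaredInGit.keySet());
        keys.addAll(deployed.keySet());
        // Keep only keys whose values differ (or exist on one side only).
        keys.removeIf(k -> Objects.equals(declaredInGit.get(k), deployed.get(k)));
        return keys; // an empty set is the GitOps ideal: zero drift
    }

    public static void main(String[] args) {
        Map<String, String> git = Map.of("replicas", "3", "image", "app:1.2");
        Map<String, String> live = Map.of("replicas", "5", "image", "app:1.2");
        System.out.println(drift(git, live)); // [replicas]
    }
}
```

Real GitOps tooling does this continuously against the cluster and either alerts on drift or reconciles it automatically.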
53. Continuous Integration
• Continuous integration relies on SCM and deployment automation
• Relies on automated tests: unit, integration, acceptance
54. Trunk-based development
• Like GitHub Flow, but with shorter-lived branches
• Fewer active branches, which never outlive a sprint
• Branch off master per feature, bug fix, etc.
• More PRs, more often
• No code freezes; integration periods of less than a day
• The polar opposite of git flow
55. Shift left on security
• Integrating security into the design and testing phases
• Security reviews of applications, including the infosec team in the design and demo process
• Using pre-approved security libraries and packages, and testing security features as part of the automated testing suite
56. Continuous delivery
• The ability to deliver
• Build quality in
• Work in small batches
• Automate repetitive tasks, including testing & deployments
• Pursue continuous improvement
• Ownership
• Comprehensive configuration management
• Continuous integration
• Continuous testing
You can’t skip steps. There is investment up front. Today’s speed-up can be tomorrow’s painting yourself into a corner.
58. Prefer Microservices…
• Monoliths can speed up MVPs and prototypes, but at a cost
• Monoliths can make CI/CD slower
• Monoliths make automated test suites harder to build, and those are needed for CI/CD
• Smaller monoliths and SOA are a move in the right direction, but monoliths should be considered technical debt
59. Prefer Microservices
• Refactoring to Microservices is a journey
• Know when to employ Microservices
• Fits CI/CD well
• Fits small batches well
• How “micro” can mean different things as you get better at Microservices
• Why today’s micro could be tomorrow’s monolith
• Adoption is a journey
60. Embrace Observability from the start
• Log aggregation
• Time series database
• Log all KPIs for clusters
• Log all KPIs for applications and services
• Alerting
• Know when and how to employ distributed tracing
• Distributed/trace logging
61. What is a PR? And how to ensure quality with it
• A PR is a pull request
• A PR gives other developers a chance to review code before it is committed to master
• PR via small batch (why small? a JIRA story, or even a task or two)
• With GitHub and webhooks you can block PRs from merging
• Code coverage met, build works, unit tests run, other checks via Jenkins
• Reviewed and approved by at least two people
62. Tests to create: TDD
• Unit tests
• Perf testing (JMH)
• Functional tests
• At the HTTP layer
• At the Spring Boot layer
• Acceptance tests
• Smoke integration tests
• Full integration tests
• Synthetic testing
• Code coverage (SonarQube)
• Security/vulnerability and dependency license checks
• Aqua, Fortify
• Full integration perf testing
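At the base of that pyramid, a unit test is just a fast, IO-free check of one behavior. A dependency-free sketch follows; real suites would use JUnit and run in CI on every PR, and the function under test here is purely illustrative:

```java
// Dependency-free sketch of a unit test: exercise one small function and
// assert on the result. One behavior per check, fast, no IO.
public class PriceTest {

    // Function under test (illustrative): apply a percentage discount.
    static long discountedCents(long cents, int percentOff) {
        if (percentOff < 0 || percentOff > 100)
            throw new IllegalArgumentException("bad percent: " + percentOff);
        return cents * (100 - percentOff) / 100;
    }

    public static void main(String[] args) {
        check(discountedCents(1000, 10) == 900, "10% off 10.00 is 9.00");
        check(discountedCents(1000, 0) == 1000, "0% off changes nothing");
        check(discountedCents(1000, 100) == 0, "100% off is free");
        System.out.println("all checks passed");
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError(name);
    }
}
```

The higher layers of the pyramid (functional, acceptance, integration) use the same assert-on-behavior shape, just against bigger slices of the system.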
63. CI/CD to enforce quality
• CI/CD
• Deploy often (daily or more)
• Test often (after every check-in, run all automated tests)
• Block PRs from merging until they pass tests
• Block PRs until they are reviewed
• Block PRs until they reach a certain code coverage
64. Testing is a MUST!
• You can’t do CI/CD without automated testing
• Testing allows you to move quickly with confidence
65. Embrace small batch work
• Goal of three PRs per week
• Goal of one to two JIRA tasks per PR
• Use the JIRA # in commits and PR comments
• Break stories and features up into tasks
• Check in interim stories
• Use feature flags if it is hard to break up a feature or story
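A feature flag, as suggested in the last bullet, can be as small as a map of toggles: merge the half-finished feature behind a flag so the branch stays short-lived while the new code path stays dark in production. The flag name and button label below are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal feature-flag sketch: unknown flags default to off, so unfinished
// code merged to master stays dark until the flag is flipped.
public class FeatureFlags {
    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    void set(String name, boolean on) { flags.put(name, on); }

    boolean isOn(String name) { return flags.getOrDefault(name, false); }

    String checkoutButtonLabel() {
        // Hypothetical flag guarding an unfinished checkout redesign.
        return isOn("new-checkout") ? "Buy now" : "Checkout";
    }

    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags();
        System.out.println(flags.checkoutButtonLabel()); // Checkout (flag off)
        flags.set("new-checkout", true);
        System.out.println(flags.checkoutButtonLabel()); // Buy now
    }
}
```

Production systems usually load the toggles from config or a flag service so they can be flipped without a deploy, which also enables the canary-style rollouts described later.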
66. Automated deployment
• Checking into the main branch happens from a PR
• Merging into master triggers a deploy to integration and sends a team message
• Artifacts and scripts for deployment are checked into git and should not be modified
• Code is put into containers and deployed to a cluster (some ephemeral, some not), into a staging area to be checked by the Product Manager
• Approval from the Product Manager pushes code to Prod or Demo
• Once checked by Infosec and the Product Manager: canary deploy to 3 to 5% of traffic, which is monitored (end goal)
• Then more and more traffic as it is monitored and is OK
67. Using Docker, Helm, Kubernetes
• Uber dev tools
• Persistent cluster setup
• Helm install into Kubernetes
• Easy to integrate locally
• Docker, docker-deploy, helm, etc.
• Local integration possible and repeatable
70. Transformation is a business imperative
• You can’t afford not to transform
• Transformation requires a deep understanding of practices
• Having a team called DevOps is not doing DevOps per se
• A culture of DevOps, Agility, Lean, MVP, etc. is a clear win
• There are guides, books, practices, and information
71. Read Accelerate by Nicole Forsgren PhD, Jez Humble, and Gene Kim.
Also read The Loop Approach: How to Transform Your Organization from the Inside Out by Sebastian Klein and Ben Hughes.
Also read Cloud Native DevOps with Kubernetes by John Arundel and Justin Domingus.
Editor's Notes
Author of a best-selling agile development book; early adopter of TDD, DevOps, Agile, etc.
Highest leadership scores of any senior director in a 2,000-person org (happiest, most productive team). Highest-performing team in a 1,000+ person group: Jenkins pipelines, Event Hub/EEL, high code coverage, PR process, trunk-based git like GitHub flow, full automation, etc.
Two awards from the CIO at a Fortune 500 company; amazing results (G.O.A.T. and Engineering Excellence)
Amazing results finishing projects deemed impossible under tight deadlines. Grew the team from 12 to 50+; a talent magnet due to a culture of excellence
Written open source software used by millions
Early adopter and proponent of MicroServices, reactive high-speed streaming, 12-factor deployment, container orchestration, in-memory compute and uService architecture
Speaker at conferences on microservice development, parsers, distributed data grids; a Java Champion (chosen from 10,000,000 Java developers); books, articles, etc.
Mentoring, consulting, papers, blogs, specifications, JSRs for distributed compute, streaming
Worked on Vert.x, QBit, Reakt, Groovy, Boon, etc.
Rick Hightower
Rick consults and does contract development for high-speed computing, Java-based uServices, Apache Spark, Apache Kafka, and Apache Cassandra. He also writes about microservice development and reactive streaming. Rick is a frequent speaker on high-speed, reactive microservice development and has spoken recently at JavaOne as well as at FinTech at Scale in SF. He specializes in high-speed, in-memory, non-blocking microservices development, which often includes Java EE, QBit, Reakt, Akka, Vert.x, Cassandra, Kafka, and cloud deployments. He has architected and implemented 100-million-user, in-memory content preference engines using Java reactive, streaming, and actor-based systems, and architected and implemented an OAuth rate limiter (API gateway) for a streaming music service to rate-limit all backend services per partner/vendor/mobile app. Rick also contributed to reference implementations of enterprise caches, and he is a member of several spec committees (JSR-347, JSR-107, etc.). He is also the author of the Boon JSON parser and parsing utilities, which ship with Groovy.
Enterprise Applications flashback
While it is true Enterprise Applications are often built with a three tier architecture: backend code, database code and a GUI written in HTML/JavaScript. This has more to do with the world changing than an active choice. The real drivers for Microservices Architecture are cloud/virtualization/OS containerization, EC2, proliferation of mobile devices, cheaper memory, more virtualization, more cores, SSD, trend towards fatter clients, 10 GBE, 100 GBE, etc. The big enterprise web app is becoming obsolete to a certain extent at least for large applications. It is not like we one day came up with a better idea, and then one day we were like hey what if we could make our code more modular. The ground changed under our collective feet. The way we build, deploy, and consume has changed. Hardware evolved. Virtualization evolved. Containerization happened. Cloud computing became a real thing, and so compelling that it is hard to ignore. The smart phone / tablet / mobile revolution happened. Microservice is the response to these external events.
Microservices and NoSQL are two trends focused on how to address software development where deployments are increasingly cloud-based and clients are increasingly mobile. Just as you can't compare client/server development of the mid-90s to mainframe development from the 70s, you can't compare enterprise applications from 2001 to microservice development targeting mobile clients, web clients, and other microservices in 2015. The world changes. We adjust. The microservice trend is a course correction, not a new religion.
History of Enterprise Applications
Remember the 1990s? The reason Enterprise Applications were written with three tiers was to avoid DLL hell and the monolith. We just did it with the tools available at the time. Back in the day, we used to build apps that were two-tiered. You had to actually go to each user's machine and help them install the app. There was a damn good chance they had downloaded some shareware that installed a DLL that screwed up the install, and you were in hell. It was not like we said, "Hey James!" ... "What, Martin?" ... "Do you want to build a huge monolith?" ... "Sure, Martin!"
We tried Applets, but Java GUI development back then sucked (1999). I don't think it sucks so bad now, but it lost its window of opportunity for adoption. Then we were left with HTML/JavaScript clients and forcing everyone in the corporation to use the same browser, at least for the corporate apps. I worked at many corporations (circa 2003) that banned JavaScript due to browser incompatibilities. You were left with screen painting in HTML. It was like a colorful green screen of yesteryear: way slower, but pretty. There are many reasons why this style of "Enterprise Development" does not work in the cloud and for mobile devices. The server-side application no longer needs to handle HTTP requests, get data from a database, execute all domain logic, and draw pretty pictures in HTML all by itself. Much pain and great expense have been incurred trying to get this three-tier architecture to scale in the cloud for various devices written in a polyglot of languages. Microservices exist because of mobile, cloud, cheaper RAM, cheaper disks, and improved virtualization. They are really just taking the world where it is. They are not revolutionary at all.
Server components, EAR files, and WAR files... may they rest in peace
If you have lived through COM, DCOM, CORBA, EJBs, OSGi, J2EE, SOAP, SOA, DCE, etc., then you know the idea of services and components is not a new thing, but for the most part they are a date-expired concept. One issue with enterprise components is that they assume the use of hardware servers, which are large monoliths, and that you want to run a lot of things on the same server. That is why we have WAR files and EAR files, and all sorts of nifty components and archives. Well, it turns out that in 2015 (and even less so in 2019), that makes no sense. Operating systems and servers are ephemeral, virtualized resources and can be shipped like a component. We have EC2 AMIs, OpenStack, Vagrant, and Docker. The world changed. Move on. Microservices just recognize this trend so you are not developing like you did when the hardware, cloud orchestration, multi-core machines, and virtualization were not there. You did not develop code in the 90s with punch cards, did you? So don't use an EAR file or a WAR file in 2015.
Now you can run a JVM in a Docker image, which is just a process pretending to be an OS, running in an OS that is running in the cloud, which is running inside of a virtual machine, which is running on a Linux server that you don't own and that you share with people you don't know. Got a busy season? Well then, spin up 100 more server instances for a few weeks or even hours. This is why you run Java microservices as standalone processes, not inside a Java EE container.
The Java EE container is no longer needed because servers are no longer giant refrigerator boxes that you order from Sun and wait three months for (circa 2000). Don't fight the classpath and classloader hell of Java EE. Hell, your whole damn OS is now an ephemeral container (Docker). Deliver an image with all the libs you need; don't deploy to a Java EE server that has to be versioned and configured. You are only running one service in it anyway. It turns out you haven't had five WAR files running in the same Java EE container since, oh, about 2007. Let it go.
If you are deploying a WAR file to a Java EE container, then you are probably not doing microservice development. If you have more than one WAR file in the container, or an EAR file, then you are definitely not doing microservice development. If you are deploying your service as an AMI or Docker container and your microservice has a main method, then you might be writing a microservice.
Microservice architectures opt to break software not into components but into reusable, independently releasable services which run as one or more processes. Applications and other services communicate with each other. So where we might have used a server-side component, we use a microservice running in its own process. Where we might have had WAR files or EAR files, we now have a Docker container or an Amazon AMI that has the entire app preloaded and configured with exactly the libraries it needs (Java and otherwise).
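A minimal sketch of such a standalone service using only the JDK's built-in com.sun.net.httpserver, no app server required. The service name and endpoint are made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

/** Hypothetical standalone microservice: no Java EE container, just a main method. */
public class TodoService {

    /** Builds the JSON payload; the field names are illustrative, not a real API. */
    static String statusJson() {
        return "{\"service\":\"todo\",\"status\":\"UP\"}";
    }

    public static void main(String[] args) throws Exception {
        // The process *is* the deployment unit: bake it into a Docker image or AMI.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/status", exchange -> {
            byte[] body = statusJson().getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();  // serves until the process is killed
    }
}
```

Run the jar directly (`java -jar todo-service.jar`); there is nothing to deploy into, which is the point.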
JSON, HTTP, WebSocket … NO WSDL!
Now you just have to document the microservice's HTTP/JSON interface so other developers can call into it. We could say REST, and certainly you can use concepts from REST, but plain HTTP calls are enough to be considered a microservice.
Keep this in mind: No XML. No SOAP. No WSDL. No WADL. JSON! OK, you can add some metadata and document how to talk to your service, but the idea is that the interface should be documentable with curl. If you are only using SOAP or XML, then you are not producing a microservice. JSON is a must.
The documentation should sound more like: I give you this request with these headers, params, and JSON body, and you respond with this JSON. Keep it simple. You can provide things in addition to JSON, but JSON is the minimum requirement. If you are not delivering JSON and consuming JSON over HTTP or WebSocket, then what you wrote is probably not a microservice.
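The "headers, params, JSON in / JSON out" contract can be sketched as a plain JDK 11 HttpClient request; the endpoint and payload here are hypothetical, and the equivalent curl one-liner is shown in a comment:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

/** Sketch of a curl-documentable call: the whole contract is
 *  headers + JSON body in, JSON out. The endpoint is made up. */
public class TodoClient {

    static HttpRequest createTodoRequest(String json) {
        // Equivalent curl documentation:
        //   curl -X POST http://localhost:8080/v1/todo \
        //        -H "Content-Type: application/json" \
        //        -d '{"name":"write docs"}'
        return HttpRequest.newBuilder(URI.create("http://localhost:8080/v1/todo"))
                .header("Content-Type", "application/json")
                .timeout(Duration.ofSeconds(2))   // never block a caller indefinitely
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }
}
```

If the curl line and the JSON shapes are written down, the interface is documented; no WSDL required.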
Introduction To Microservices
Microservice architecture is a method of developing software systems. Its focus is building small, reusable, scalable services. Applying microservices becomes very important when you have to create services for polyglot devices: wearables, Internet of Things (IoT), mobile, desktop, and web. The trend towards providing services for rich, native mobile applications and web applications drove the trend towards microservices adoption. This is one reason why microservices lean heavily on web technologies like HTTP/REST/WebSocket with JSON, MessagePack, and their ilk. These web technologies provide a low barrier to entry and a least common denominator for communication.
The closest thing to a standard definition of microservices is Microservices by James Lewis and Martin Fowler.
…
Continuous delivery: The microservices architectural approach is to create smaller services focused on a small business domain or a crosscutting concern. Microservices adopt the Unix single-purpose-utility approach to service development. They are small, so they can be released more often, and they are written to be malleable. They are easier to write. They are easier to change. Microservices go hand in hand with continuous integration and continuous delivery. The services are independent enough not to need a gigantic release train to ship improvements or new features. In the Java world, this means you will be using tools like Jenkins to provide frequent releases.
Key ingredients to microservices architecture are:
independently deployable, small, domain-driven services
communication through a well-defined wire protocol, usually JSON over HTTP (curl-able interfaces)
well-defined interfaces and minimal functionality
avoiding cascading failures and synchronous calls: reactive design for failure
This comes up a lot so let's address this now.
It is very common to confuse Service-Oriented Architecture (SOA) and Microservice Architecture. In a sense, SOA and Microservice Architecture share some goals and purposes.
Microservices are a refocus and refinement of some of the original goals of SOA to meet the demands of a polyglot device environment that must scale and support continuous deployment in a cloud / third-generation virtualization environment.
Since SOA was later muddled with BPEL, ESBs, SOAP, WSDL, and their ilk, it is easier to drop the SOA moniker and just focus on the parts that worked well. Keep the parts that work. Get rid of the rest.
Microservices Architecture focuses on web technologies to provide scalable, modular, domain-driven, small, and continuously deployable cloud-based services.
Microservices Architecture takes what perhaps started out as SOA and applies lessons learned, along with the pressure to support polyglot devices, to deploy more rapidly, and to exploit the architectural liquidity that cloud computing and virtualization/containerization provide. Mix all that together and you can see where Microservices Architecture started. Microservices Architecture is in general less vendor-driven than SOA and more driven by the demands of application development and current cloud infrastructure.
For more thoughts on SOA vs. microservices, read Microservices Architecture.
Cope with failure
Microservices are designed to cope with failure. Since microservices tend to call each other, a downstream service that fails should not block upstream services and clients. Synchronous communication is avoided to prevent cascading failures. Thus microservices tend to embrace async calls, streams, queues, and actor systems. To handle failure, it helps to embrace queueing theory so you can detect failure and provide alternatives, or just fail gracefully without blocking clients. The ability to react to failure is a key characteristic of microservices.
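The async-with-fallback idea can be sketched with java.util.concurrent.CompletableFuture; the downstream call here is simulated (it always fails) purely to show the degrade-gracefully path:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

/** Sketch: call a downstream service asynchronously, time out instead of
 *  blocking, and degrade gracefully rather than cascading the failure. */
public class RecommendationClient {

    /** Stand-in for a remote call; a real one would use non-blocking I/O. */
    static CompletableFuture<String> fetchRecommendations() {
        return CompletableFuture.supplyAsync(() -> {
            throw new IllegalStateException("downstream service down");
        });
    }

    static CompletableFuture<String> recommendationsOrDefault() {
        return fetchRecommendations()
                .orTimeout(200, TimeUnit.MILLISECONDS)  // never hang the caller
                .exceptionally(err -> "[]");            // degrade: empty list, not an error page
    }
}
```

The caller never blocks on the broken service: it gets an empty recommendation list instead of joining the domino chain.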
In order to learn about downstream failures, find other functioning nodes, and implement some sort of circuit breaker, one needs service discovery. Tools like Consul, etcd, and SkyDNS are well suited for service discovery.
Adopting microservices is not a free lunch. Steps to mitigate the inherent complexity include distributed logging with MDC, microservices monitoring and stats, and reactive programming to coordinate async calls. Reactive call coordination covers what to do if there is an outright failure, and what to do if a downstream service does not fail but just times out or, worse, hangs.
User Experience and Microservices Monitoring
Because microservices are released more often, you can try new features and see how they impact usage patterns. With this feedback, you can improve your application. It is not uncommon to employ A/B testing and multivariate testing to try out new combinations of features. Monitoring is more than just watching for failure. With big data, data science, and microservices, monitoring runtime stats is required to know your application's users. You want to know what your users like and dislike, and react accordingly.
Debugging and Microservices Monitoring
Runtime statistics and metrics are critical for distributed systems, and microservices architectures make a lot of remote calls. Metrics to monitor include requests per second, available memory, thread counts, connection counts, failed authentications, expired tokens, etc. These parameters are important for understanding and debugging your code. Working with distributed systems is hard. Working with distributed systems without reactive monitoring is crazy. Reactive monitoring allows you to react to failure conditions and ramp up services for higher loads.
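A toy version of such a stats collector; a real service would use a metrics library (Dropwizard Metrics, Micrometer, and the like), but this shows the shape of the idea:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/** Minimal, hypothetical in-process metrics registry: named counters
 *  (requests, failed auths, expired tokens, ...) safe to bump on hot paths. */
public class ServiceMetrics {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    /** Cheap, contention-friendly increment (LongAdder beats AtomicLong under load). */
    public void increment(String name) {
        counters.computeIfAbsent(name, k -> new LongAdder()).increment();
    }

    /** Current count; a monitoring endpoint would expose these as JSON. */
    public long count(String name) {
        LongAdder adder = counters.get(name);
        return adder == null ? 0 : adder.sum();
    }
}
```

Expose the counters over the same HTTP/JSON interface the service already speaks, and the reactive monitoring layer has something to react to.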
Circuit Breaker and Microservices Monitoring
You can employ the Circuit Breaker pattern to prevent a catastrophic cascade, and reactive microservices monitoring can be the trigger. Downstream services can be registered in service discovery so that you can mark nodes as unhealthy and react by rerouting in the case of outages. The reaction can be serving up a deprecated version of the data or service, but the key is to avoid cascading failure. You don't want your services falling over like dominoes.
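A deliberately simplified circuit breaker sketch: it trips after N consecutive failures and then short-circuits to a fallback. Real implementations (Hystrix, Resilience4j) add time-based reset and half-open probing, which are omitted here:

```java
import java.util.function.Supplier;

/** Toy circuit breaker: trip after N consecutive failures, then serve a
 *  fallback instead of hammering the failing downstream service.
 *  No half-open/reset logic; this only shows the trip-and-short-circuit idea. */
public class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public boolean isOpen() {
        return consecutiveFailures >= failureThreshold;
    }

    /** Runs the call while the circuit is closed; otherwise short-circuits to the fallback. */
    public <T> T call(Supplier<T> downstream, Supplier<T> fallback) {
        if (isOpen()) {
            return fallback.get();          // fail fast: no dominoes
        }
        try {
            T result = downstream.get();
            consecutiveFailures = 0;        // a healthy call closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.get();          // degrade instead of cascading
        }
    }
}
```

The fallback is exactly the "deprecated version of the data" described above: stale but fast beats fresh but falling over.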
Cloud Orchestration and Microservices Monitoring
Reactive microservices monitoring enables you to detect heavy load and spin up new instances with the cloud orchestration platform of your choice (EC2, CloudStack, OpenStack, Rackspace, boto, etc.).
Continuously deployable services
The focus of microservices is on business capability: a return to object-oriented programming roots, organizing code around business domains with data and business rules co-located in the same process or set of processes.
The focus is on breaking up applications into small, reusable services which might be useful to other services or other applications. The services can be deployed independently, which allows each of them to be tweaked and then redeployed on its own. This is where the "micro" part of microservices comes into play: the services are small and independent. This is also where microservices have been compared to the Unix philosophy: they provide small services that handle requests and give responses. Ken Thompson, the Unix creator, said Unix has a philosophy of one tool, one job.
The Unix philosophy emphasizes building short, simple, clear, modular, and extendable code that can be easily maintained and repurposed by developers other than its creators.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
High performers are twice as likely to exceed organizational performance goals as low performers: profitability, productivity, market share, number of customers.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
High performers are twice as likely to exceed noncommercial performance goals as low performers: quantity of products/services, operating efficiency, customer satisfaction, quality of products/services, achieving organizational/mission goals.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
In a follow-up survey to the initial 2014 data collection effort, we gathered stock ticker data and performed additional analysis on responses from just over 1,000 respondents across 355 companies who volunteered the organization they worked for. For those who worked for publicly traded companies, we found the following (this analysis was not replicated in later years because our dataset was not large enough): –High performers had 50% higher market capitalization growth over three years compared to low performers.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
The four measures of software delivery performance (deploy frequency, lead time, mean time to restore, change fail percentage) are good classifiers for the software delivery performance profile. The groups we identified—high, medium, and low performers—are all significantly different across all four measures each year. Our analysis of high, medium, and low performers provides evidence that there are no trade-offs between improving performance and achieving higher levels of tempo and stability: they move in tandem. Software delivery performance predicts organizational performance and noncommercial performance.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
Lead time is highly correlated with version control and automated testing. MTTR is highly correlated with version control and monitoring. Software delivery performance is correlated with organizational investment in DevOps. Software delivery performance is negatively correlated with deployment pain. The more painful code deployments are, the poorer the software delivery performance and culture.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
Unplanned work and rework: –High performers reported spending 49% of their time on new work and 21% on unplanned work or rework. –Low performers spend 38% of their time on new work and 27% on unplanned work or rework. –There is evidence of the J-curve in our rework data. Medium performers spend more time on unplanned rework than low performers, with 32% of their time spent on unplanned work or rework. Manual work: –High performers report the lowest amount of manual work across all practices (configuration management, testing, deployments, change approval process) at statistically significant levels. –There is evidence of the J-curve again. Medium performers do more manual work than low performers when it comes to deployment and change approval processes, and these differences are statistically significant.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
BURNOUT AND DEPLOYMENT PAIN: Deployment pain is negatively correlated with software delivery performance and Westrum organizational culture. The five factors most highly correlated with burnout are Westrum organizational culture (negative), leaders (negative), organizational investment (negative), organizational performance (negative), and deployment pain (positive).
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
Trunk-based development: –High performers have the shortest integration times and branch lifetimes, with branch life and integration typically lasting hours or a day. –Low performers have the longest integration times and branch lifetimes, with branch life and integration typically lasting days or weeks. Technical practices predict continuous delivery, Westrum organizational culture, identity, job satisfaction, software delivery performance, less burnout, less deployment pain, and less time spent on rework. High performers spend 50% less time remediating security issues than low performers.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
A loosely coupled, well-encapsulated architecture drives IT performance. In the 2017 dataset, this was the biggest contributor to continuous delivery.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.
LEAN PRODUCT MANAGEMENT CAPABILITIES The ability to take an experimental approach to product development is highly correlated with the technical practices that contribute to continuous delivery. Lean product development capabilities predict Westrum organizational culture, software delivery performance, organizational performance, and less burnout.
Forsgren PhD, Nicole. Accelerate. IT Revolution Press. Kindle Edition.