Kamon is an open-source tool for monitoring JVM applications like those using Akka. It provides metrics collection and distributed tracing capabilities. The document discusses how Kamon 1.0 can be used to monitor Akka applications by collecting automatic and custom metrics. It also describes how to set up Kamon with Prometheus and Grafana for metrics storage and visualization. The experience of instrumenting an application at EMnify with Kamon is presented as an example.
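The EMnify setup itself isn't reproduced in the summary; as a rough illustration of what a custom Kamon metric looks like next to the automatic Akka instrumentation, here is a minimal Scala sketch against the Kamon 1.x API. The metric name and the use of the Prometheus reporter module are assumptions made for the example.

```scala
import kamon.Kamon
import kamon.prometheus.PrometheusReporter

object Metrics {
  // Expose all Kamon metrics on the Prometheus scrape endpoint
  // (the reporter class comes from the kamon-prometheus module).
  Kamon.addReporter(new PrometheusReporter())

  // A custom counter, recorded alongside the automatic Akka metrics.
  // "app.orders.processed" is a hypothetical metric name for this sketch.
  private val processedOrders = Kamon.counter("app.orders.processed")

  def orderProcessed(): Unit = processedOrders.increment()
}
```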
What pipeline as code means in a continuous delivery/continuous deployment environment, how to set up a Multibranch Pipeline to fully benefit from pipeline features, and the Jenkins master-node concept in a Kubernetes cluster.
This document discusses continuous delivery and the new features of Jenkins 2, including pipeline as code. Jenkins 2 introduces the pipeline as a new job type that allows build pipelines to be defined explicitly as code in Jenkinsfiles checked into source control. This enables pipelines to be versioned, made more modular through shared libraries, and resumed if interrupted. The document provides examples of creating pipelines with Jenkinsfiles that define stages and steps for builds, tests, and deployments.
1) The document discusses various ways to secure a website or a client's users, including getting an SSL certificate, setting up HTTPS, and ensuring strong security practices with headers and configurations. 2) It describes Let's Encrypt as a free and easy way to get SSL certificates with automated renewal, and quality-testing services like Qualys SSL Labs to check SSL configuration. 3) Additional security best practices discussed include HTTP headers like HSTS, CSP, and key pinning (HPKP) to prevent vulnerabilities and protect against MITM attacks. Regular testing and integrating checks into development processes are recommended.
Discusses how to use Jenkins Job Builder to manage your Jenkins jobs using the Infrastructure as Code approach.
This document discusses using Docker containers to test Python applications in varied environments. The proposed solution is to: 1. Create Docker images for dependencies like databases. 2. Build a test image with the source code and testing tools. 3. Run tests by launching a container from the test image linked to dependency containers. 4. The packnsend tool is used to initialize images, run tests across multiple environments, and clean up containers after testing.
This document discusses using Docker and Jenkins to create a continuous delivery pipeline. It recommends using Docker to build, test, and deploy code in isolated environments at each stage. Jenkins can run in a Docker container and trigger Docker builds. The Job DSL plugin allows Jenkins jobs to be defined with Groovy scripts for easy automation and templating of jobs. The document provides resources for learning more about continuous delivery with Docker and Jenkins Job DSL.
In this presentation, I covered how I migrated an Android project from an old Jenkins instance (Freestyle jobs) to a new one (Multibranch Pipeline). It also covers Jenkins Shared Library usage and integration tests on pipeline code. At the end, I cover the pros and cons of the final result and the difficulties I faced during migration.
The document outlines Julien Pivotto's presentation on building pipelines at scale using Jenkins and Puppet. It discusses how Puppet can be used to define Jenkins job configurations and pipelines for applications and infrastructure to allow easy deployment of new pipelines. It also covers alternative approaches using Jenkins plugins to define pipelines through Groovy scripts to reduce complexity compared to Puppet management.
http://www.meetup.com/BruJUG/events/228994900/ During this session, you will be presented with a solution to the problem of scaling continuous delivery in Jenkins, when your organisation has to deal with thousands of jobs, by introducing a self-service approach based on the "pipeline as code" principles.
This document discusses Jenkins Pipelines, which allow defining continuous integration and delivery (CI/CD) pipelines as code. Key points:
- Pipelines are defined using a Groovy domain-specific language (DSL) for stages, steps, and environment configuration.
- This provides configuration as code that is version controlled and reusable across projects.
- Jenkins plugins support running builds and tests in parallel across Docker containers.
- Notifications can be sent to services like Slack on failure.
- The Blue Ocean UI in Jenkins focuses on visualization of pipeline runs.
A brief introduction to containerization, Docker, and getting started with your first containerized Rails application. Source code can be found at https://github.com/rheinwein/rails-demo-apps
This lightning talk will show you how simple it is to apply CI to the creation of Docker images, ensuring that each time the source is changed, a new image is created, tagged, and published. I will then show how easy it is to deploy containers from this image and run tests to verify the behaviour.
We often use containers to maintain parity across development, testing, and production environments, but we can also use containerization to significantly reduce time needed for testing by spinning up multiple instances of fully isolated testing environments and executing tests in parallel. This strategy also helps you maximize the utilization of infrastructure resources. The enhanced toolset provided by Docker makes this process simple and unobtrusive, and you’ll see how Docker Engine, Registry, and Compose can work together to make your tests fast.
We all know how flexible Kubernetes extensions can be - Tekton and Knative are examples. But did you know it's also pretty easy to extend kubectl, the Kubernetes superstar CLI? In this session we will see how a kubectl plugin is designed and then build our own plugin from scratch using Quarkus. That will give us the opportunity to discover the command mode of Quarkus, rediscover how native compilation can create super-fast binaries, and see how the Kubernetes-client extensions make it super easy to interact with a Kubernetes cluster.
Presented at: https://apacheconeu2016.sched.org/event/8ULR In 2014, a few Jenkins hackers set out to implement a new way of defining continuous delivery pipelines in Jenkins. Dissatisfied with chaining jobs together, configured in the web UI, the effort started with Apache Groovy as the foundation and grew from there. Today the result of that effort, named Jenkins Pipeline, supports a rich DSL with "steps" provided by Jenkins plugins, built-in auto-generated documentation, and execution resumability which allows Pipelines to continue executing while the master is offline. In this talk we'll take a peek behind the scenes of Jenkins Pipeline, touring the various constraints we started with, whether imposed by Jenkins or Groovy, and discussing which features of Groovy were brought to bear during the implementation. If you're embedding, extending, or are simply interested in the internals of Groovy, this talk should have plenty of food for thought.
This document discusses using Jenkins, Puppet, and Mcollective to implement a continuous delivery pipeline. It recommends using infrastructure as code with Puppet, nodeless Puppet configurations, and Mcollective to operate on collectives of servers. Jenkins is used for continuous integration and triggering deployments. Packages are uploaded to a repository called Seli that provides a REST API and can trigger deployment pipelines when new packages are uploaded. The goal is to continuously test, deploy, and release changes through full automation of the software delivery process.
The document discusses the new Jenkins Workflow engine. It provides an overview of continuous delivery and how Jenkins is used to orchestrate continuous delivery processes. The new Workflow engine in Jenkins allows defining complex build pipelines using a Groovy DSL, with features like stages, interactions with humans, and restartable builds. Examples of using the new Workflow syntax are demonstrated. Possible future enhancements to Workflow are also discussed.
Slides for Ignite talk @ DevOps Days Galway 2018. How we do Automated Testing of our Kubernetes based Microservices in Pull Requests.
In this session, we will look at how Apache Flink can be used to stream anonymized API request and response data from a production environment to make sure staging environments are up-to-date and reflect the most recent features (and bugs) that comprise a service. The talk will also examine how to deal with issues of data retention, throttling, and persistence, finishing with recommendations for how to use these sandbox environments to rapidly prototype and test new features and fixes.
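The talk's actual pipeline isn't included in the abstract; as a rough sketch of the idea, the Scala snippet below reads production API traffic from one Kafka topic, anonymizes it, and replays it into a staging topic using Flink's DataStream API. The topic names, broker address, and the redaction rule are invented for illustration.

```scala
import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer, FlinkKafkaProducer}

object TrafficMirror {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092") // assumed broker address
    props.setProperty("group.id", "staging-mirror")          // assumed consumer group

    env
      .addSource(new FlinkKafkaConsumer[String]("prod-api-traffic", new SimpleStringSchema(), props))
      .map(anonymize _)                                       // strip user identifiers before replay
      .addSink(new FlinkKafkaProducer[String]("staging-api-traffic", new SimpleStringSchema(), props))

    env.execute("prod-to-staging-mirror")
  }

  // Hypothetical anonymization step: redact an "email=..." query parameter.
  def anonymize(request: String): String =
    request.replaceAll("email=[^&\\s]+", "email=REDACTED")
}
```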
This document discusses using Kafka Streams to transform operational metrics data from Priceline applications before loading it into Splunk. It describes how the legacy monitoring system worked and the motivation for moving to Kafka and Kafka Streams. It then explains how data is collected from applications into Kafka topics, and how transformations such as formatting, re-keying, and aggregation are performed with Kafka Streams before loading into Splunk. It also discusses testing, monitoring, and debugging Kafka Streams applications.
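The abstract doesn't show the actual Priceline topology; the following is a hedged sketch of the kind of formatting / re-keying / aggregation steps described, written in Scala with the kafka-streams-scala DSL. Topic names, the CSV-style record layout, and the count aggregation are assumptions for the example.

```scala
import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder

object MetricsPipeline extends App {
  val builder = new StreamsBuilder()

  builder
    .stream[String, String]("app-metrics-raw")     // hypothetical input topic
    .mapValues(v => v.trim.toLowerCase)            // "formatting" step
    .selectKey((_, v) => v.split(",")(0))          // re-key by application name (assumed CSV layout)
    .groupByKey
    .count()                                       // simple per-application aggregation
    .toStream
    .mapValues(_.toString)
    .to("app-metrics-for-splunk")                  // hypothetical output topic

  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "metrics-transformer")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.addShutdownHook(streams.close())
}
```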
Azure Event Hubs - Behind the Scenes With Kasun Indrasiri | Current 2022 Azure Event Hubs is a hyperscale PaaS event stream broker with protocol support for HTTP, AMQP, and Apache Kafka RPC that accepts and forwards several trillion (!) events per day and is available in all global Azure regions. This session is a look behind the curtain where we dive deep into the architecture of Event Hubs and look at the Event Hubs cluster model, resource isolation, and storage strategies and also review some performance figures.
"Just as the Apache Kafka Brokers provide JMX metrics to monitor your cluster's health, Kafka Streams provides a rich set of metrics for monitoring your application's health and performance. The metrics to observe for a given use-case of Kafka Streams will vary significantly from application to application. Learning how to build and customize monitoring of those applications will help you maintain a healthy Kafka Streams ecosystem. Takeaways * An analysis and overview of the provided metrics, including the new end-to-end metrics of Kafka Streams 2.7. * See how to extract metrics from your application using existing JMX tooling. * Walkthrough how to build a dashboard for observing those metrics. * Explore options of how to add additional JMX resources and Kafka Stream metrics to your application. * How to verify you built your dashboard correctly by creating a data control set to validate your dashboard. * Go beyond what you can collect from the Kafka Stream metrics."
This document discusses Typesafe's Reactive Platform and Apache Spark. It describes Typesafe's Fast Data strategy of using a microservices architecture with Spark, Kafka, HDFS and databases. It outlines contributions Typesafe has made to Spark, including backpressure support, dynamic resource allocation in Mesos, and integration tests. The document also discusses Typesafe's customer support and roadmap, including plans to introduce Kerberos security and evaluate Tachyon.
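As a small illustration of one of the contributions mentioned, Spark Streaming's backpressure support is switched on purely through configuration. The snippet below is a minimal Scala sketch; the app name, local master, and initial-rate value are chosen only for the example.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object BackpressureDemo {
  def main(args: Array[String]): Unit = {
    // Backpressure lets the ingestion rate adapt to how fast batches are actually processed.
    val conf = new SparkConf()
      .setAppName("backpressure-demo")
      .setMaster("local[2]")                                    // assumed local run for the example
      .set("spark.streaming.backpressure.enabled", "true")      // rate control (available since Spark 1.5)
      .set("spark.streaming.backpressure.initialRate", "1000")  // optional cap on the first batch

    val ssc = new StreamingContext(conf, Seconds(5))
    // ... define input DStreams and processing here, then start the context ...
  }
}
```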
Symantec is a leader in security software and has built a next generation analytics platform using open source technologies like Hadoop, Storm and Kafka. The platform processes 300,000 events per second from security events and alerts in real-time. The analytics cluster includes a Kafka cluster for streaming data, a Storm cluster for real-time processing, and tools for automated deployment, monitoring and performance measurement of the platform.
The document provides an overview of using Scala and Akka for building distributed sensor networks. Some key points:
- The speaker uses Scala and Akka at their company for building distributed systems to manage traffic and sensor networks.
- Akka actors are used to build distributed and fault-tolerant systems. Camel is used for integration between actors. Remote actors allow building systems that span multiple machines.
- Examples of systems built include a border control sensor network and distributed traffic management systems. Scala and Akka provide benefits like less code, fault tolerance, and ability to interoperate with existing Java systems.
- Topics covered include using Akka actors, remote actors, Camel integration,
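The slides' own examples aren't reproduced in this summary; the following minimal Scala sketch shows the basic Akka (classic) actor pattern the talk builds on, with a sensor-flavoured message protocol invented purely for illustration.

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Messages exchanged with the sensor actor (names are illustrative).
final case class Reading(sensorId: String, value: Double)
case object Report

class SensorActor extends Actor {
  private var readings = Vector.empty[Reading]

  def receive: Receive = {
    case r: Reading => readings :+= r          // accumulate readings
    case Report     => sender() ! readings.size // reply with how many were seen
  }
}

object SensorNetwork extends App {
  val system = ActorSystem("sensors")
  val sensor = system.actorOf(Props[SensorActor](), "border-sensor-1")

  sensor ! Reading("border-sensor-1", 21.5)
  sensor ! Reading("border-sensor-1", 22.1)

  Thread.sleep(500)   // give the actor a moment to process (demo only)
  system.terminate()
}
```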
Intro to Apache Apex presented at Women in Big Data meetup as part of the Streaming Technologies session
My @TriangleDevops talk from 2013-10-17. I covered the work that led us to @NetflixOSS (Acme Air), the work we did on the cloud prize (NetflixOSS on IBM SoftLayer/RightScale) and the @NetflixOSS platform (Karyon, Archaius, Eureka, Ribbon, Asgard, Hystrix, Turbine, Zuul, Servo, Edda, Ice, Denominator, Aminator, Janitor/Conformity/Chaos Monkeys of the Simian Army).
This document provides an overview and agenda for a presentation on Confluent, streaming, and KSQL. The presentation includes: an introduction to Confluent and Apache Kafka; an explanation of why streaming platforms are useful; an overview of the Confluent Platform and its components; key concepts in streaming and Kafka; a demonstration of Kafka Streams, Kafka Connect, and KSQL; and resources for further information. The presentation aims to explain streaming concepts, demonstrate Confluent tools, and allow for a question and answer session.
This document summarizes a talk given at the Apache Big Data Europe 2015 conference. It discusses the Apache Kafka distributed commit log system and how it can be used for real-time data processing and analytics. Specifically, it compares the Lambda and Kappa architectures for stream processing, describing how the Kappa architecture uses Kafka to allow reprocessing of data from the commit log and avoid maintaining separate batch and stream processing systems. Examples of using Kafka and stream processing for applications like fraud detection and IoT data analysis are also provided.
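The reprocessing idea at the heart of the Kappa architecture boils down to pointing a new consumer group (or a new version of the processing job) at the beginning of the Kafka log. The Scala sketch below shows that with the plain Kafka consumer API; the topic, group id, and broker address are placeholders.

```scala
import java.time.Duration
import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer

object KappaReprocess {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")   // assumed broker address
    props.put("group.id", "events-reprocessor-v2")     // fresh group id => no committed offsets yet
    props.put("auto.offset.reset", "earliest")         // so consumption starts at the beginning of the log
    props.put("key.deserializer", classOf[StringDeserializer].getName)
    props.put("value.deserializer", classOf[StringDeserializer].getName)

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(List("events").asJava)          // hypothetical topic name

    // Replay the full history of the topic and rebuild the derived view.
    while (true) {
      val records = consumer.poll(Duration.ofMillis(500))
      records.asScala.foreach { r =>
        // apply the (new version of the) processing logic to every historical event
        println(s"${r.offset}: ${r.value}")
      }
    }
  }
}
```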
See what happens when you overlay Ceilometer functionalities on top of a scalable solution like Monasca ... This is Ceilosca
In this presentation you will learn about:
• CloudFormation 101 – The building block of Infrastructure as Code
• CodePipeline and CodeCommit 101 – Tools for our IaC pipeline
• Review of an example IaC Pipeline
  – Automated validation
  – Least privilege enforcement
  – Manual review/approval
This document contains an agenda and overview of Confluent and streaming with Kafka. The agenda includes introductions to Confluent, streaming, KSQL, and a demo. Confluent is presented as the company founded by the creators of Apache Kafka to develop streaming platforms based on Kafka. Key concepts of streaming, the Confluent platform, and Kafka Streams, Kafka Connect, and KSQL are summarized. The document concludes with resources and time for questions.
NOTE: This was converted to PowerPoint from Keynote. SlideShare does not play the embedded videos. You can download the PowerPoint from SlideShare and import it into Keynote; the videos should work in Keynote. Abstract: In this presentation, we will describe the "Spark Kernel", which enables applications, such as end-user-facing and interactive applications, to interface with Spark clusters. It provides a gateway to define and run Spark tasks and to collect results from a cluster without the friction associated with shipping jars and reading results from peripheral systems. Using the Spark Kernel as a proxy, applications can be hosted remotely from Spark.