The concept of Reactive Streams (aka Reactive Extensions, Reactive Functional Programming, or simply Rx) has become increasingly popular recently, and with good reason. The Reactive Streams specification provides a universal abstraction for asynchronously processing data received from multiple sources (e.g. databases, user input, third-party services), and includes mechanisms for controlling the rate at which data is received. This makes it a powerful tool within a microservice platform. And did we mention that the Groovy community is quite involved? In this talk we'll explore the various features and concepts of Reactive Streams. We'll talk about some typical use cases for Rx and, more importantly, how to implement them. We'll focus primarily on RxGroovy and Ratpack, then provide example implementations that show you how to get started with this powerful technique.
Do it like the "DevOps Unicorns" Etsy, Facebook and Co: deploy more frequently. But how, and why? What are the challenges? Deploying software faster without failing faster is possible through metrics-driven engineering. Identify problems early on using a "Shift-Left in Quality", which requires a level-up of Dev, Test, Ops, and Biz. See some of the metrics that I think you need to look at, and how to upgrade your engineering team to produce better quality right from the start.
The document discusses creating high performing web apps with React. It emphasizes reliability over speed, noting that reliability means the experience works under different circumstances like without JavaScript. It lists challenges to reliability like poor networks or outdated browsers. It recommends using HTML, CSS, and JavaScript responsibly, using HTML for initial content delivery and JavaScript for interactivity. Server-side rendering with Node.js is suggested to deliver meaningful initial HTML with React components for both reliability and SEO. Caching techniques like versioning assets are also recommended to improve performance.
Talk given at DevTeach Montreal on RxJS - The Basics & The Future. Example repo: https://github.com/ladyleet/rxjs-test Have questions? Find me on twitter http://twitter.com/ladyleet
As a Tester you need to level up. You can do more than functional verification or reporting response time. In my Performance Clinic workshops I show you real-life examples of why applications fail and what you can do to find these problems when you are testing these applications. I am using free tools for all of these exercises - especially Dynatrace, which gives full end-to-end visibility (browser to database). You can test and download Dynatrace for free @ http://bit.ly/atd2014challenge
This talk centers on two things: a set of patterns for the architecture of high-scale data systems; and a framework for understanding the tradeoffs we make in designing them.
The document outlines 5 steps to get started with React Native: 1) Use Expo to build apps without iOS/Android setup, 2) Add routing with react-navigation and state management with Redux, 3) Style with Flexbox or libraries like react-native-ui-kitten, 4) Access native features by linking libraries, and 5) Test with Jest. While React Native uses native components, it allows building smooth hybrid mobile apps with JavaScript and familiar React patterns.
Presentation at my company to all the interns about what DevOps is to me and why I'm passionate about it. NOTE: Liberally gathered stuffs from the internetz. If I did something wrong by doing so, or if something here is yours, let's chat. I want to work with you to make it better :)
As presented at the Boston and NYC Web Perf Meetups. It's time to level up the Web Performance Optimization movement started by Steve Souders. We need to look beyond the rim of the browser, as there are many problems happening from browser to database. In this presentation I showed how browser diagnostics needs to evolve into end-to-end application diagnostics and monitoring, showing 5 real-life examples of why applications failed and the metrics to look at to identify these problems early on.
Build powerful concurrent & distributed apps with the Actor Model in Akka.NET. User group presentation on using Akka.NET for Microsoft .NET applications.
I gave this presentation at the Sydney Continuous Delivery Meetup Group. The main goal was to talk about performance metrics that you should monitor along the pipeline. I showed examples from 4 different areas where deployments failed and how metrics would have helped prevent these problems.
This document summarizes a presentation on optimizing application performance. It discusses a case study where a retailer's website was crashing during sales events. The development team optimized the .NET code, SQL queries, and front-end to improve performance. Key lessons included properly simulating load tests, identifying infrastructure issues, using profiling tools to find bottlenecks, caching data, and improving front-end performance by reducing requests and file sizes.
Most common Frontend & Backend Performance Problems. Automatically find them in your CI by looking at the right Metrics.
In March 2010, the UK had an election with a televised leadership debate. We built an application on top of AppEngine, in about 3 days, that scaled to 2.7 million hits in 90 minutes. See the mistakes we made and what went well.
Our presentation at the Israel Rails Conference 2012. Vitaly talks about Rails performance: how to measure, what to improve and, just as importantly, what not to improve.
Learn the basics of use and power of RxJS in NativeScript & Angular in this presentation given at NativeScript Developer Days in New York City September 2017
This presentation discusses continuous database deployments. It begins with an introduction of the presenter and an overview of topics to be covered. It then contrasts manual database change management with continuous deployment. The main methods covered are schema-based, using the database schema in source control; script-based, using change scripts; and code-based, coding database changes. Benefits include reduced errors and faster releases. Best practices discussed include backing up data and deploying breaking changes in steps. The presentation concludes with a call for questions.
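The script-based method described above can be sketched in a few lines of plain Java. This is a toy illustration with hypothetical names (not Flyway's or any real tool's API): ordered change scripts are applied exactly once, and re-running the same set is a no-op, which is what makes continuous database deployment safe to automate.

```java
import java.util.*;

// Minimal sketch of a script-based migration runner: change scripts are
// applied in version order, and already-applied versions are skipped.
public class MigrationRunner {
    private final Set<String> applied = new LinkedHashSet<>();

    // Applies each (version, sql) pair at most once; returns the versions
    // that were actually run during this call.
    public List<String> migrate(LinkedHashMap<String, String> scripts) {
        List<String> ranNow = new ArrayList<>();
        for (Map.Entry<String, String> s : scripts.entrySet()) {
            if (applied.contains(s.getKey())) continue; // already deployed
            // A real tool would execute s.getValue() against the database here
            applied.add(s.getKey());
            ranNow.add(s.getKey());
        }
        return ranNow;
    }

    public Set<String> appliedVersions() { return applied; }
}
```

Running `migrate` a second time with the same scripts returns an empty list, so the deployment pipeline can invoke it on every release without manual intervention.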
Rowan Udell gave a presentation on serverless testing at the Sydney Serverless Meetup on October 18th 2018. The presentation discussed why testing serverless applications is important, different types of tests including unit tests, integration tests, end-to-end tests, and manual testing. Unit tests should be written to isolate functionality at a small and local level, while integration and end-to-end tests are more valuable for serverless due to the remote nature of deploying to the cloud. The presentation emphasized that testing serverless applications requires different approaches than traditional applications due to factors like limited surface area and scaling in the cloud.
This document discusses how Groovy fits into various roles in cloud computing. It begins with an introduction to the author and their background in cloud and DevOps tooling. It then outlines how Groovy can be used for microservices with Ratpack, immutable infrastructure, packaging with Gradle, automating builds with Jenkins, managing cloud infrastructure with Spinnaker, automating server tasks with Groovy scripts and SSH, and more. The document also advertises an upcoming talk covering these topics in more detail.
My presentation at Groovy and Grails eXchange 2012. Trying to tease out various issues in the tension between dynamic and static languages on the JVM. Groovy is the only language that can be both a dynamic and a static language.
Groovy 3 and the new MOP are closing in! By the time of this talk the new MOP will not be done, but I will show some examples of what old Groovy code will look like when transferred to the new MOP.
Christian Chevalley has over 30 years of experience in software development and has been developing platforms based on openEHR since 2010. He has delivered a first release of EtherCIS, an open source openEHR server. EtherCIS takes advantage of PostgreSQL's mixed support for relational and JSON datatypes in a single table structure. It uses the jOOQ library extensively for SQL coding in Java and allows easy migration between database backends like Oracle or DB2. EtherCIS is fully compliant with the Ehrscape API and its development includes adding validation handling and common provisioning tools.
The document discusses jOOQ, an object-oriented SQL library that allows writing SQL queries in Java in a type-safe, fluent, and injection-safe manner. jOOQ uses a domain-specific language (DSL) to write SQL queries and requires only about a minute of setup; it supports automatic database modeling and generates Java classes from the database schema. The document provides examples of common SQL queries - selects, inserts, filters, joins, and group by - written using the jOOQ DSL in a concise and readable way compared to traditional string-based SQL.
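To illustrate the fluent-DSL style described above, here is a toy query builder in plain Java. This is emphatically not jOOQ's actual API, just a sketch of the idea: queries are composed from method calls instead of strings, and values are bound as parameters rather than concatenated, which is where the injection safety comes from.

```java
// Toy fluent SQL builder illustrating the DSL style (NOT jOOQ's API).
public class Query {
    private final StringBuilder sql = new StringBuilder();

    public static Query select(String... columns) {
        Query q = new Query();
        q.sql.append("SELECT ").append(String.join(", ", columns));
        return q;
    }

    public Query from(String table) {
        sql.append(" FROM ").append(table);
        return this;
    }

    public Query where(String column, Object value) {
        // A real library keeps the value as a bind parameter so user input
        // never lands in the SQL string; here we just render the placeholder.
        sql.append(" WHERE ").append(column).append(" = ?");
        return this;
    }

    public String toSql() { return sql.toString(); }
}
```

Usage reads almost like SQL itself: `Query.select("first_name", "last_name").from("author").where("id", 1).toSql()`.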
The slides from Lukas Eder's jOOQ presentation at Topconf 2013. The slides talk about the history of the Java and SQL integration, starting with JDBC, EJB 2.0, Hibernate, JPA, culminating in the claim that SQL is evolving in an entirely different direction than what is covered by Enterprise Java. This is where jOOQ comes in. jOOQ is currently the only platform in the Java market aiming at making SQL a first-class citizen in Java. This website depicts what every CTO / software architect should consider at the beginning of every new Java project: http://www.hibernate-alternative.com This version of the presentation on Slideshare is licensed under the terms of the CC-BY-SA license 3.0: http://creativecommons.org/licenses/by-sa/3.0 The jOOQ name, the jOOQ logo and the picture with the harbour worker are trademarks by Data Geekery GmbH. Please contact us if you want to use our trademarks in a derived presentation of yours. contact@datageekery.com
Øredev 2016; Malmö, Sweden; 9 November 2016; video is here: https://www.youtube.com/watch?v=03PXmPc7Q3g
The document discusses metaprogramming in Groovy using the Meta Object Protocol (MOP). It explains that MOP allows modifying classes at runtime by adding/changing methods and properties. Examples are provided of adding string truncation methods to classes using MetaClass, and overriding Integer and Boolean method behavior. Categories are introduced as a way to make metaclass changes persistent only within a code block. Extension modules are also covered as a mechanism to enhance classes by providing extension JAR files and metadata.
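Groovy's MetaClass changes have no direct equivalent in plain Java, but the underlying idea the summary describes, routing method dispatch through an interceptor at runtime, can be loosely sketched with `java.lang.reflect.Proxy`. This is an analogy, not the MOP itself; the interface and names are hypothetical.

```java
import java.lang.reflect.Proxy;

public class InterceptDemo {
    public interface Greeter { String greet(String name); }

    // Wraps any Greeter so every call passes through an invocation handler,
    // loosely the way Groovy routes dynamic calls through the metaclass.
    public static Greeter upperCased(Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            (proxy, method, args) -> {
                Object result = method.invoke(target, args);
                return ((String) result).toUpperCase(); // alter behavior at runtime
            });
    }
}
```

In Groovy the same effect is a one-liner on the metaclass; the proxy version just makes the "interception point" visible to Java readers.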
"Clean Code" by Bob Martin is probably one of the most important practical documents out there; A must read for all developers, if you will. In this talk I will show how you can use Groovy and its rich ecosystem to apply the discussed principals, thus cleaning and vastly improving your codebase while still maintaining your sanity and joy. By Noam Tenne
With Groovy 2.4 you can program for Android. This presentation is about why Groovy is cool on Android.
Metaprogramming is the writing of computer programs that write or manipulate other programs (or themselves) as their data. - Wikipedia The Groovy language supports two flavors of metaprogramming: runtime metaprogramming and compile-time metaprogramming. The first allows altering the class model and the behavior of a program at runtime, while the second occurs at compile time.
These are the slides of the talk given during the Confoo 2012 conference. For building an Android app from inside the IDE, Google provides ADT, an Eclipse plugin to create emulators, compile your code, run the tests, package it and deploy it to a device. Reading this presentation, you will learn how to perform all those steps in a "headless" way, outside the IDE, so that tools such as Jenkins/Hudson or even Travis-CI can build and test your applications. This presentation also introduces the concepts of Continuous Quality Control with Sonar and Continuous Deployment with Nexus: possible even for Android apps now!
Kotlin is a JVM language developed by JetBrains. Its version 1.0 (production ready) was released at the beginning of the year and made some buzz within the Android community. This session introduces the language, which takes up some aspects of Groovy or Scala and is very close to Swift in syntax and concepts. We will see how Kotlin boosts the productivity of Java & Android application development and how well it accompanies reactive development.
to share the same infrastructure for all our customers. We therefore built a highly sophisticated model of physical and logical farms, partitioning the traffic and optimizing resources. We operate 700+ JEE nodes, split in 30+ logical clusters, deployed on less than 10 physical server pools. Today, this infrastructure is delivering a billion dynamic pages per month, for more than 5 million bookings, with a 10 times growth factor expected in the coming years. Even though thousands of parameters are available to tailor our products to any one customer's particular needs, the recent evolution of the IT industry towards PAAS ecosystems modified customer expectations: they are now looking for the capability to extend our applications, interact with their own IT, influence our business logic or even the graphical interface. To support this vision, we started developing an extensibility framework, based on scripting technologies. Though being language agnostic, we quickly decided to invest in the Groovy language and rely on JSR 223 to embed it into our applications. However, transforming a multi-tenant & community SAAS ecosystem into a flexible PAAS environment means taking up multiple challenges, especially around sandboxing (access & resource control) or productivity and production constraints, such as hot-reloading or an instantaneous fallback mechanism. This presentation will therefore focus on how Groovy and its extensibility mechanisms allow us to progress on these topics, what limitations we faced due to its dynamic nature, and how we're thrilled by the new features coming in the next releases.
Are you sure you are doing continuous delivery? Yeah right! We thought we were too. The journey to continuous delivery (CD) is long, winding and always evolving. Like with many things we thought we had achieved all we could with continuous delivery and then... our business model changed, and then... Originally presented at QCon New York June 2016 https://qconnewyork.com/ny2016/ny2016/presentation/we-thought-we-were-doing-continuous-delivery-and-then.html
A long time ago in a galaxy far, far away... Java open source developers managed to see the previously secret plans to the Empire's ultimate weapon, the JAVA™ COLLECTIONS FRAMEWORK. Evading the dreaded Imperial Starfleet, a group of freedom fighters investigate common developer errors and bugs to help protect their vital software. In addition, they investigate the performance of the Empire’s most popular weapon: HashMap. With this new found knowledge they strike back! Pursued by the Empire's sinister agents, JDuchess races home aboard her JVM, investigating proposed future changes to the Java Collections and other options such as Immutable Persistent Collections which could save her people and restore freedom to the galaxy....
Apache Groovy is a powerful, optionally typed and dynamic language, with static-typing and static compilation capabilities, for the Java platform aimed at improving developer productivity thanks to a concise, familiar and easy to learn syntax. It integrates smoothly with any Java program, and immediately delivers to your application powerful features, including scripting capabilities, Domain-Specific Language authoring, runtime and compile-time meta-programming and functional programming. In this presentation, we'll see how Groovy simplifies the life of Java Developers. Basically, this talk would be for beginners where I would introduce powerful Groovy concepts like - Groovy Collections, Closure, Traits etc.
A lecture-style session (about 30-40 minutes) centered on jOOQ, the O/R mapper that has reportedly become popular recently: how to do CRUD with it, how to map SELECT results to Java classes, how to handle joins, and how to combine it with springframework (spring-boot). Date: February 8, 2016 (Wed), 19:30-20:30 (doors open 19:15). Venue: BizReach, Inc., Shibuya Cross Tower 12F, 2-15-1 Shibuya, Shibuya-ku, Tokyo. Fee: free. Bring: one business card (for a name badge).
People are excited about developing Android applications with Kotlin. From new side projects to existing enterprise level Java architectures, Kotlin can improve code quality and readability while reducing lines of code and eliminating entire classes of bugs. Find out why Kotlin is being used by developers at companies like Square, Trello, and Pinterest.
This document discusses using Groovy on Android apps. Groovy is a JVM language that offers simplicity compared to Java and interoperability. To use Groovy in an Android project, the Groovy and Grooid dependencies are added along with plugins. Groovy classes can then be created alongside Java sources. Benefits include collections APIs, AST transformations for domain objects, and libraries like Fluent and SwissKnife. Potential downsides are issues with Android Studio integration and large APK sizes.
A talk given at GeeCON 2018 in Krakow, Poland. Classically-trained (if you can call it that) software engineers are used to clear problem statements and clear success and acceptance criteria. Need a mobile front-end for your blog? Sure! Support instant messaging for a million concurrent users? No problem! Store and serve 50TB of JSON blobs? Presto! Unfortunately, it turns out modern software often includes challenges that we have a hard time with: those without clear criteria for correctness, no easy way to measure performance and success is about more than green dashboards. Your blog platform better have a spam filter, your instant messaging service has to have search, and your blobs will inevitably be fed into some data scientist's crazy contraption. In this talk I'll share my experiences of learning to deal with non-deterministic problems, what made the process easier for me and what I've learned along the way. With any luck, you'll have an easier time of it!
An overview of RxJS and Reactive Programming as presented at the Modern Web UI Meetup at Mozilla Mountain View, May 13, 2015
This document discusses scaling a web application, particularly those built with PHP and MySQL. It begins with introductions and then outlines various strategies for scaling applications and databases. For applications, it recommends profiling code and queries to identify bottlenecks, optimizing frameworks, caching, and monitoring. For databases, it suggests technologies like Memcached, database replication using master-slave, sharding, MySQL Cluster, and storage engines. The overall message is that scaling requires understanding applications and systems, identifying pain points, and having a plan to optimize performance as needs grow.
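The sharding strategy mentioned above ultimately comes down to a deterministic routing function from a key to a shard. A minimal sketch in Java (hypothetical names, and a deliberately naive modulo scheme):

```java
public class ShardRouter {
    // Routes a key to one of N shards. Math.floorMod keeps the result
    // non-negative even when hashCode() happens to be negative.
    public static int shardFor(String key, int shardCount) {
        return Math.floorMod(key.hashCode(), shardCount);
    }

    // The shard index can then select a connection, e.g. "mysql-shard-2".
    public static String dsnFor(String key, int shardCount) {
        return "mysql-shard-" + shardFor(key, shardCount);
    }
}
```

Note the design tradeoff: plain modulo routing remaps most keys when the shard count changes, which is why production systems usually reach for consistent hashing or a lookup table instead.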
Webinar by Clarke Ching, Agile and ToC expert. Agile: the Good, the Bad and the Ugly. If your Agile is broken, this is how to fix it!

Your Agile teams are busy. Busy delivering. Busy improving. Your quality is amazing. Rework is low. The product looks great. Your users love it. You are a high performing team! But your internal customers say your teams are slow. This session will teach you how to use the Theory of Constraints to figure out how to speed up, by finding the one thing that’s slowing them down. This webinar will cover how, in an Agile environment:
- to better control scope creep,
- to reinforce your relationship with the I.T. Development team’s client,
- to be able to make commitments and honour them, and
- to decide where your bottleneck should be.

About the speaker: Clarke Ching is a computer scientist with an MBA who discovered Goldratt’s Theory of Constraints (ToC) in 2003 and has been using it ever since to accelerate Agile initiatives. He is fascinated by Agile and obsessed with ToC. He wrote the Amazon best-sellers Rolling Rocks Downhill and The Bottleneck Rules. Rolling Rocks Downhill teaches 3 things: the fundamentals of Agile combined with ToC; how to use those fundamentals to deliver big projects faster and on time; and how to deliver quietly huge transformations. It has been featured in The Guardian newspaper and The Spectator magazine. It was one of Barbara Oakley’s top 10 books of 2019, and the #2 best-selling Leadership book on amazon.com, just behind Stephen Covey’s 7 Habits book. He has been an Agile/Lean/ToC expert at GE Energy, Dell, Royal London (life insurance & pensions), Gazprom and Standard Life Aberdeen, among other organizations. He is the past Chairperson of Agile Scotland. He is a lecturer at the Victoria University School of Management in New Zealand, where he now lives. Today he is the founder and Chief Productivity Officer of Odd Socks Consulting.
The document discusses search capabilities for the PlayStation Network. It describes how the initial system indexed 200,000 documents per second for the PS Store, but more capabilities were needed. It then details the challenges in moving from a relational database to NoSQL to support indexing 1 million documents per second across multiple services for 65 million monthly active users.
Talk: "Essential Big-O for the DevOps engineer: quantifying scalability of software systems". Big-O analysis is the workhorse of scalability analysis of software systems, but many people don't appreciate its importance in DevOps. This talk will introduce the essential intuitions and techniques behind Big-O analysis, with the goal of facilitating more precise communication between DevOps and software engineers who speak in Big-O terms, and applying the principles directly to DevOps scalability problems such as networking.
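As a concrete taste of the intuition the talk describes: membership checks against a list are O(n) while a hash set is O(1) amortized, and it is exactly this difference in growth rate, not the absolute speed on small inputs, that determines behavior at scale. A small self-contained sketch:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class BigODemo {
    // O(n): scans the whole list in the worst case.
    public static boolean linearContains(List<Integer> xs, int target) {
        for (int x : xs) if (x == target) return true;
        return false;
    }

    // O(1) amortized: a single hash lookup, independent of collection size.
    public static boolean hashedContains(Set<Integer> xs, int target) {
        return xs.contains(target);
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < 1_000_000; i++) { list.add(i); set.add(i); }
        // Both answer the same question; only their growth rates differ,
        // which is what Big-O captures.
        System.out.println(linearContains(list, 999_999));
        System.out.println(hashedContains(set, 999_999));
    }
}
```

For a DevOps engineer the same reasoning applies to, say, linear scans of firewall rules versus hash-based routing tables: doubling the input doubles the cost of the former and barely touches the latter.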
The document discusses some of the challenges of building web applications and strategies for overcoming them. It notes that web app development can be difficult because details evolve and change, leading to complex and changing requirements. It advocates for having a good team, using version control, automation tools, testing, and managing requirements and work with tools like JIRA and maintaining a story board. The document emphasizes the importance of iterative development, continuous integration, and shipping features regularly.
There are three main points summarized: 1. There are many interesting slow radio transients that could be detected through imaging surveys like accretion flares, orphan gamma-ray bursts, and flare stars. 2. Radio surveys are increasing in sensitivity and field of view by orders of magnitude with instruments like LOFAR, enabling the detection of more rare transient events. 3. TraP is a transient detection pipeline that works by extracting sources from radio images, matching to known sources, identifying new bright sources, analyzing light curves, and making the results accessible through a user-friendly web interface.
The document discusses continuous integration and deployment practices. It begins by describing environments like local, development, test, and production. It then discusses manual deployment processes and the teams involved, including developers, DBAs, sysadmins, and QA. The presentation advocates automating deployments through pipelines that build, run metrics and tests, package, and deploy code. It emphasizes making the code environment-agnostic and managing dependencies. Overall, the document promotes practices for continuous integration and deployment that help software work reliably through faster feedback and deployment.
Building a strong Tableau community is important for successful deployment. It provides formalized definitions and knowledge, engagement, and a place to get questions answered. To build a community, organizations should use existing internal forums and wikis to provide documentation like data sources and dictionaries. The platform already used in an organization should be utilized to ensure flexibility. Regular "Tableau Doctor" sessions, either in-person or remote, can help many users per week. Communities need roles like the Tableau Doctor to answer questions and the Tableau Workflow Developer Community to help with APIs.
Despite the best of intentions, we sometimes find ourselves working on a team of size one. Groups shrink for many reasons: attrition, mergers and acquisitions, transfers, and financial distress. It's never comfortable being a Single Point of Failure, but how can you survive this state of non-redundancy? Are there any benefits to being a team of "me, myself, and I", or is it all a pit of despair? What kind of red flags should you be on the lookout for? And, most importantly, what compelling leverage can you try to use to encourage team growth back to a reasonable size? At DevOps Days Silicon Valley 2015, I shared a series of unfortunate events that led to my current status: the Human SPoF; I also discussed some of the tactics I've used to survive. Automation, tools, and code-as-infrastructure are a force multiplier when applied correctly, allowing one engineer to do the work of many. However, these wonders come with a price tag. I also covered some strategies to grow a team, and ways to maintain sanity while keeping the lights blinking and the disks spinning in a 24x7 real-time environment with over 2000 servers.
The document describes Norman's seven stages of action, which is a framework that outlines the steps involved in human interaction with technology. The seven stages are: 1) goal, 2) plan, 3) specify, 4) perform, 5) perceive, 6) interpret, and 7) compare. It also discusses the gulfs of execution and evaluation, and how good design can help bridge these gulfs to improve usability.
Agile has helped teams to collaborate and organize work better. That’s great. Better teamwork and better understanding of the work definitely help a team to do the right things. Agile has also led the way toward technical practices such as Continuous Integration and Delivery, Test Driven Development and SOLID architecture principles. Great, these things definitely help the team to do things right. Then again, most of the time in software projects goes into problem solving and similar creative acts. Agile has relatively little to give in these areas. Currently, agile is not about creativity, nor is it about problem solving. This coaching circle session will focus on the creative core of software development: solving creatively novel, original and broad problems more effectively all the time. I will introduce some principles and tools I’ve found useful when helping people to solve hard problems and to find creative solutions.
The document discusses the history and current state of search at Reddit. It describes how Reddit started with basic search tools like Postgres and Lucene, struggled to scale these systems, and transitioned to outsourcing search before bringing it back in-house. The current search architecture is explained along with improvements made to indexing, ingestion, and relevance. Future plans are outlined to further improve search quality, support new content types, and redesign the search user experience.
Talk by Martin Buhr, Founder of Loadzen.com at Devtank on the 31st of January about the importance of load testing your site as a startup, how http://loadzen.com was built and the lessons learned.
Generative Adversarial Network and its Applications on Speech and Natural Language Processing, Part 2. Speaker: Hung-yi Lee (Professor, National Taiwan University). Date: July 2018. Generative adversarial network (GAN) is a new idea for training models, in which a generator and a discriminator compete against each other to improve the generation quality. Recently, GAN has shown amazing results in image generation, and a large amount and a wide variety of new ideas, techniques, and applications have been developed based on it. Although there are only a few successful cases, GAN has great potential to be applied to text and speech generation to overcome limitations in the conventional methods. In the first part of the talk, I will give an introduction to GAN and provide a thorough review of this technology. In the second part, I will focus on the applications of GAN to speech and natural language processing. I will demonstrate the applications of GAN to voice conversion, unsupervised abstractive summarization and sentiment-controllable chat-bots. I will also talk about research directions towards unsupervised speech recognition by GAN.
The document outlines minimum and recommended requirements for automated deployments. Minimum requirements include consistent deployments with no manual intervention, idempotence, high availability, pre/post condition checks with automated rollbacks, critical state alerting, and transparent logging. Recommended practices include checkpointing, blackout periods, synthetic traffic/processing for validation, and real-time visibility.
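Two of the minimum requirements above, idempotence and post-condition checks with automated rollback, can be sketched in a few lines of Java. The names are hypothetical (this is not any real deployment tool's API); the point is the control flow: re-running a deploy is harmless, and a failed health check restores the previous version automatically.

```java
public class DeployStep {
    private String deployedVersion = "1.0";

    // Idempotent: re-running with the already-deployed version is a no-op.
    // The post-condition check triggers an automated rollback on failure.
    public String deploy(String newVersion, boolean healthCheckPasses) {
        if (newVersion.equals(deployedVersion)) return "noop";
        String previous = deployedVersion;
        deployedVersion = newVersion;   // perform the deployment
        if (!healthCheckPasses) {       // post-condition check
            deployedVersion = previous; // automated rollback
            return "rolled-back";
        }
        return "deployed";
    }

    public String version() { return deployedVersion; }
}
```

A real pipeline would, of course, also emit the transparent logging and critical-state alerts the document calls for at each of these branches.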
Vendors always have the latest system they would like you to pay for. Let's face it: even if it's free, there is a cost in implementing it, managing it and decommissioning it. This talk will cover the negative, functional and performance testing done to try and cover these areas in a reasonable timescale, and highlight some of the issues we ran into when attempting the testing, along with some tips and tricks developed after testing a half-dozen vendors' equipment over the past three years.
This document outlines a structured approach for debugging distributed systems. It begins with observing and documenting what is known about the problem. The next steps are to create a minimal reproducer, debug the client side, check DNS and routing, and inspect the connection. Further debugging involves inspecting traffic and messages, debugging the server side, and wrapping up with documentation and a post-mortem. Key tools mentioned include logging, tracing, tests, SSH, network inspection tools, and remote debugging. The document emphasizes focusing on details, eliminating single points of failure, and gaining an understanding of potential failure modes through experience debugging real issues.
Event Sourcing is a modern but non-trivial data model for building scalable and powerful systems. Instead of mapping a single Entity to a single row in a datastore, in an Event Sourced system we persist all changes for an Entity in an append-only journal. This design provides a wealth of benefits: a built-in Audit Trail, Time-Based reporting, powerful Error Recovery, and more. It creates flexible, scalable systems and can easily evolve to meet changing organizational demands. That is, once you have some experience with it. Event Sourcing is straightforward in concept, but it does bring additional complexity and a learning curve that can be intimidating. People coming from traditional ORM systems often wonder: how does one model relations between Entities? How is Optimistic Locking handled? What about datastore constraints? Based on over eight years of experience with building ES systems in Spring applications, we will demonstrate the basics of Event Sourcing and some of the common patterns. We will see how simple it can be to model events with available tools like Spring Data JPA, JOOQ, and the integration between Spring and Axon. We’ll walk through sample code in an application that demonstrates many of these techniques. However, it’s also not strictly about the code; we’ll see how a process called Event Modeling can be a powerful design tool to align Subject Matter Experts, Product, and Engineering. Attendees will leave with an understanding of the basic Event Sourcing patterns, and hopefully a desire to start creating their own Journals.
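The append-only journal described above can be sketched in a few lines of plain Java. This is a toy model (not Axon's or Spring Data's API, and the names are hypothetical): state is never stored directly, it is always rebuilt by folding over the journal, which is also what gives you the built-in audit trail for free.

```java
import java.util.ArrayList;
import java.util.List;

// Toy Event Sourcing sketch: an account's balance is never stored;
// it is derived by replaying the append-only journal of events.
public class Account {
    record Event(String type, long amount) {}

    private final List<Event> journal = new ArrayList<>();

    public void deposit(long amount)  { journal.add(new Event("DEPOSITED", amount)); }
    public void withdraw(long amount) { journal.add(new Event("WITHDREW", amount)); }

    // Current state = a fold over all events, oldest first.
    public long balance() {
        long b = 0;
        for (Event e : journal) {
            b += e.type().equals("DEPOSITED") ? e.amount() : -e.amount();
        }
        return b;
    }

    // The journal doubles as a built-in audit trail.
    public List<Event> history() { return List.copyOf(journal); }
}
```

Time-based reporting falls out of the same structure: replaying only the events up to a given timestamp yields the state as of that moment.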
In this presentation we will present the general philosophy of Clean Architecture, Hexagonal Architecture, and Ports & Adapters, discussing why these approaches are useful and offering general guidelines for introducing them to your code. Chiefly, we will show how to implement these patterns within your Spring (Boot) applications. Through a publicly available reference app, we will demonstrate what these concepts can look like within Spring and walk through a handful of scenarios: isolating core business logic, simplifying testing, and adding a new feature or two.
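The core idea of Ports & Adapters fits in a short sketch: the business core owns an interface (the port), and implementations (adapters) plug in from the outside. The names below are invented for illustration and deliberately framework-free; in a real Spring app the production adapter might be a Spring Data repository:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class Hexagon {
    record Order(String id, long totalCents) {}

    // Port: owned by the business core, free of framework types.
    interface OrderRepository {
        void save(Order order);
        Optional<Order> findById(String id);
    }

    // Core use case: testable with any adapter, no Spring required.
    static class PlaceOrderService {
        private final OrderRepository repository;
        PlaceOrderService(OrderRepository repository) { this.repository = repository; }

        Order placeOrder(String id, long totalCents) {
            Order order = new Order(id, totalCents);
            repository.save(order);
            return order;
        }
    }

    // Adapter: an in-memory implementation, handy for fast tests.
    static class InMemoryOrderRepository implements OrderRepository {
        private final Map<String, Order> store = new HashMap<>();
        public void save(Order order) { store.put(order.id(), order); }
        public Optional<Order> findById(String id) { return Optional.ofNullable(store.get(id)); }
    }

    public static void main(String[] args) {
        OrderRepository repo = new InMemoryOrderRepository();
        new PlaceOrderService(repo).placeOrder("o-1", 2500);
        System.out.println(repo.findById("o-1").isPresent()); // prints true
    }
}
```

Because the core depends only on the port, swapping the in-memory adapter for a database-backed one requires no change to the business logic.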
Many presentations on microservices offer a high-level view of the architecture; rarely do you hear what it’s like to work in such an environment. Stephen Pember shares his experience migrating from a monolith to microservices across several companies, highlighting the mistakes made along the way and offering advice.
Many presentations on Microservices offer a high-level view; rarely does one hear what it’s like to work in such an environment. Individual services are somewhat trivial to develop, but now you suddenly have countless others to track. You’ll become obsessed with how they communicate. You’ll have to start referring to the whole thing as “the Platform”. You will have to take on considerable DevOps work and start learning about deployment pipelines, metrics, and logging. Don’t panic. In this presentation we’ll discuss what we learned over the past four years by highlighting our mistakes. We’ll examine what a development lifecycle might look like for adding a new service, developing a feature, or fixing bugs. We’ll see how team communication is more important than one might realize. Most importantly, we’ll show how - while an individual service is simple - the infrastructure demands are now much more complicated: your organization will need to introduce and become increasingly dependent on various technologies, procedures, and tools - ranging from the ELK stack to Grafana to Kubernetes. Lastly, you’ll come away with the understanding that your resident SREs will become the most valued members of your team.
Over the past few years, Gradle has become a popular build tool in the JVM space. This is not surprising, considering the power and features it brings compared with its competitors. However, one thing Gradle lacks is the history and collective knowledge that more established alternatives enjoy: how does one organize a Gradle project in an ‘idiomatic’ fashion? We feel that we’ve put together a decent build pipeline for each of our microservices over the years, and each one starts with its build.gradle file(s). We’d like to share it, although we’re not sure if it’s the ‘correct’ way. In this talk, we’ll walk through a sample project structure and build process. We’ll discuss the various checks and tools we use (e.g. Sonar, CodeNarc, Jenkins) at each step of the build. We’ll explain how each of the components in the process works for us, and share samples of our Groovy scripts. Most importantly, though, we’d like to hear what audience members are using in their builds!
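To make the shape of such a pipeline concrete, here is a sketch of the kind of build.gradle (Gradle's Groovy DSL) being discussed. Plugin choices and wiring are illustrative of the checks mentioned above, not a claim about the 'correct' layout:

```groovy
// An illustrative build.gradle for a small Groovy microservice.
plugins {
    id 'groovy'
    id 'codenarc'   // static analysis for Groovy sources
    id 'jacoco'     // coverage data, later picked up by Sonar
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.apache.groovy:groovy:4.0.15'
    testImplementation 'org.spockframework:spock-core:2.3-groovy-4.0'
}

// Fail the build early: run static analysis before the (slower) test suite.
check.dependsOn 'codenarcMain'

test {
    useJUnitPlatform()
    finalizedBy jacocoTestReport  // always produce coverage for CI (e.g. Jenkins)
}
```

The ordering here reflects the "checks at each step" idea: cheap static analysis first, then tests, then coverage reporting for downstream tools.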
The document discusses various topics related to surviving in a microservices environment. It begins by outlining some benefits of microservices such as reduced coupling, continuous delivery, and efficient scaling. It then covers infrastructure topics like managing logs, metrics, deployments, builds, and environments. Architecture topics discussed include overall design, technologies, testing approaches, communication methods, and data persistence. The document also addresses team communication and processes. It concludes by providing some miscellaneous advice for working with microservices.
Event storage offers many practical benefits to distributed systems providing complete state changes over time, but there are a number of challenges when building an event store mechanism. Stephen Pember explores some of the problems you may encounter and shares real-world patterns for working with event storage.
This talk is an introduction to a powerful combination in the big data space: Apache Spark and Cassandra. Spark is a cluster-computing framework that allows users to perform calculations against resilient in-memory datasets using a functional programming interface. Cassandra is a linearly scalable, fault-tolerant, decentralized datastore. These two technologies are complicated, but they integrate well and provide such a level of utility that whole companies have formed around them. In this talk we’ll learn how Spark and Cassandra can be leveraged within your Groovy application, even though Spark normally expects a Scala environment. We’ll talk about Spark and Cassandra from a high level and walk through code examples. We’ll discuss the pitfalls of working with these technologies - like modeling your data appropriately to ensure even distribution in Cassandra, and general packaging woes with Spark - and ways to avoid them. Finally, we’ll explore how we at ThirdChannel are using these technologies.
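Spark's RDD API is essentially a distributed take on functional collection operations. As a rough, dependency-free analogy, the classic word count looks like this with plain Java streams; the real thing would run the same flatMap/reduce shape over an RDD via JavaSparkContext (and read its data through the Cassandra connector) rather than a local stream:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// The map/reduce shape of Spark's word count, sketched with local Java
// streams so it runs without a cluster. In Spark this would be
// flatMap -> mapToPair -> reduceByKey over a distributed dataset.
public class WordCount {
    static Map<String, Long> count(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(word -> !word.isEmpty())
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = count(List.of("to be or not to be"));
        System.out.println(counts.get("to")); // prints 2
    }
}
```

The functional interface is what makes the pairing approachable from Groovy as well, since closures map naturally onto these operations.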
Many presentations on Microservices offer a high-level view; rarely does one hear what it’s like to work in such an environment. Individual services are somewhat trivial to develop, but now you suddenly have countless others to track. You’ll become obsessed with how they communicate. You’ll have to start referring to the whole thing as “the Platform”. You will have to take on some DevOps work and start learning about deployment pipelines, metrics, and logging. Don’t panic. In this presentation we’ll discuss what we learned over the past three years. We’ll examine what a development lifecycle might look like for adding a new service, developing a feature, or fixing bugs. We’ll dive a bit into DevOps and see how one becomes dependent on various orchestration, metrics, and centralized logging tools, like Kubernetes and the ELK stack. Finally, we’ll talk about team communication and organization... and how they are likely the most important tools for surviving a Microservices development team.
The document discusses various topics related to surviving in a microservices environment. It addresses questions around infrastructure, architecture, team communication and provides advice. Key points include the importance of centralized logging and monitoring, avoiding tight coupling between services, ensuring an overall architectural vision, and being reluctant to add new process unless something goes wrong. The document emphasizes that most of the challenge with microservices is in infrastructure.
As businesses grow, so does the complexity of their software. New features, new models, and new background processes all continue to be added... and developers struggle to make sense of it all. Yet the end user demands a swift and functional experience when interacting with your application. It is paramount to be open to alternative patterns that help tame complex, high-demand services. Two such patterns are command-query responsibility segregation (CQRS) and event sourcing (ES). Command-query responsibility segregation is an architectural pattern for user-facing applications that extends from the now standard Model-View-Controller (MVC) pattern and is an alternative to the CRUD pattern. At its core, CQRS is about changing how we think of and work with our data by introducing two types of models: all user actions become commands, and a read-only query model powers our views. Commands and queries are logistically separated, providing additional decoupling of our application. CQRS also calls for changes in how we store and structure our data. Enter event sourcing. Instead of persisting the current state of our domain objects or entities, we record historical events about our data. The key advantage is that we can examine our application data at any point in time, rather than just the current state. This pattern changes how we persist and process our data but is surprisingly efficient. While each of the two patterns can be used exclusively, they complement each other beautifully and facilitate the construction of decoupled, scalable applications or individual services. Stephen Pember explores the fundamentals of each pattern and offers several examples and demonstration code to show how one might actually go about implementing CQRS and ES. Steve discusses task-based UIs and domain-driven design as he outlines some of the advantages - and challenges - that ThirdChannel has seen when developing systems using CQRS and ES over the past year.
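The command/query split described here can be sketched compactly: commands are validated and turned into events, and a separate read model is projected from those events to power views. The names below are invented for the example and not taken from any particular framework:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// An illustrative CQRS + ES sketch: the write side appends events to a
// journal; the query side is a denormalized projection built from them.
public class CqrsSketch {
    record RenameItem(String itemId, String newName) {}   // command
    record ItemRenamed(String itemId, String newName) {}  // event

    private final List<ItemRenamed> journal = new ArrayList<>();   // write side
    private final Map<String, String> readModel = new HashMap<>(); // query side

    // Command handler: validate, append the event, then update the projection.
    void handle(RenameItem command) {
        if (command.newName().isBlank()) {
            throw new IllegalArgumentException("name must not be blank");
        }
        ItemRenamed event = new ItemRenamed(command.itemId(), command.newName());
        journal.add(event);
        project(event);
    }

    // Projection: rebuildable at any time by replaying the journal.
    private void project(ItemRenamed event) {
        readModel.put(event.itemId(), event.newName());
    }

    // Query: reads never touch the write model.
    String currentName(String itemId) {
        return readModel.get(itemId);
    }
}
```

Because the projection is derived purely from the journal, it can be dropped and rebuilt - or replaced with an entirely different view - without touching the command side.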
A common pattern in application development is to build systems where the data is directly linked to the current state of the application; one row in the database equates to one entity’s current state. Only ever knowing the current state of the data is adequate for many systems, but imagine the possibilities if one had access to the state of the data at any point in time. Enter Event Sourcing: instead of persisting the current state of our Domain Objects or Entities, we record historical events about our data. This pattern changes how we persist and process our data, but is surprisingly lightweight. In this talk I will present the basic concepts of Event Sourcing and the positive effects it can have on analytics and performance. We’ll discuss how storing historical events provides extremely powerful views into our data at any point in time. We’ll see how naturally it couples with the Event-oriented world of modern Reactive systems, and how easily it can be implemented in Groovy. We’ll examine some practical use cases and example implementations in Ratpack. Event Sourcing will change how you think about your data.
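The "state at any point in time" claim above follows directly from the storage model: replaying only the events up to a cutoff yields the entity's state as of that moment. A minimal sketch, with an invented event shape for illustration:

```java
import java.util.List;

// Folding events up to a cutoff timestamp gives a temporal view of the
// entity - the same replay, just stopped early.
public class TimeTravel {
    record StockChanged(long timestamp, int delta) {}

    // Fold events up to (and including) the cutoff into a stock level.
    static int stockAt(List<StockChanged> history, long cutoff) {
        int stock = 0;
        for (StockChanged e : history) {
            if (e.timestamp() <= cutoff) stock += e.delta();
        }
        return stock;
    }

    public static void main(String[] args) {
        List<StockChanged> history = List.of(
                new StockChanged(1L, 10),
                new StockChanged(2L, -4),
                new StockChanged(5L, 3));
        System.out.println(stockAt(history, 2L)); // prints 6
        System.out.println(stockAt(history, 5L)); // prints 9
    }
}
```

This is also why event sourcing pairs so naturally with reactive systems: the journal is already a stream of events that downstream consumers can subscribe to.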