22nd October 2021
Manchester MuleSoft Meetup Group
RTF Architecture
Guidelines for Manchester MuleSoft Meetup #6 [Virtual]
● Welcome to the Manchester MuleSoft Meetup! We will start our introduction session at 5 pm IST.
● If you have any issues with audio/video or Bevy, write your registered name in the chat section so that we can grant you audio/video access in Bevy.
● Please keep yourself muted unless you have a question.
● We encourage you to keep your video on to make the meetup interactive.
● You can also post your questions in the chat section.
● We appreciate your valuable feedback. Thanks.
Agenda
● Introductions
● RTF Architecture
● Trivia Quiz
● Networking time
Introductions
● About the organizers:
○ Terry Lockwood
○ Akshata Sawant
● About the sponsor:
○ MuleSoft
A SHOW OF HANDS:
Who is new to this Meetup?
Important Announcements
Latest Releases/News
● Event-Driven API (AsyncAPI)
https://docs.mulesoft.com/release-notes/platform/event-driven-api
● Participate in MuleSoft Hackathon 2021
https://blogs.mulesoft.com/dev-guides/mulesoft-hackathon-2021/
Latest Releases/News
● Become a leading community mentor: https://tinyurl.com/sv4rupew
● The calendar subscription link for MuleSoft meetups:
https://calendar.google.com/calendar/ical/idc4qavc8b81c9oop81obrs27k%40group.calendar.google.com/public/basic.ics
Speaker
Agenda
● MuleSoft deployment strategies
● Runtime fabric operating models
● RTF on self-managed Kubernetes architecture
● RTF advanced concepts
● RTF logging and monitoring
● RTF components
MuleSoft Deployment Strategies
Runtime plane vs. control plane
● CloudHub
● On-premises runtime
● RTF runtime
● Private Cloud Edition
RTF operating models - on VMs or bare metal
● Cloud infrastructure provided by the customer
● Support for AWS, Azure, or on-prem
● No Kubernetes expertise needed
The appliance contains all of the components it requires. These components, including Docker and Kubernetes, are optimized to work efficiently with Mule runtimes and other MuleSoft services.
RTF operating models - on self-managed
Kubernetes
• Install Runtime Fabric services in managed
Kubernetes solutions (EKS, AKS, or GKE)
• Provide and manage your own Kubernetes cluster
• More flexibility, lower cost, and less overhead
• Kubernetes expertise required
RTF operating models - on self-managed
Kubernetes
Shared responsibility model
• MuleSoft provides support for
• Runtime Fabric components
• Mule Runtime server
• Mule deployment resources
• Kubernetes Administrator responsible for
• Creating, configuring, and managing EKS, AKS,
and GKE
• Ingress controller
• Logs and metrics
• Container runtime and networking
Runtime Fabric operating models comparison (self-managed Kubernetes / VMs or bare metal)
● Kubernetes and Docker: provision your own EKS/AKS/GKE cluster, MuleSoft supplies the Docker images / included
● Linux distribution: support for all versions / RHEL and CentOS only
● Node autoscaling: supported using Azure, AWS, or GCP settings / not supported
● External log forwarding: provision your own log forwarder / included
● Internal load balancer: set up your own Ingress / included
● Anypoint Security Edge: not supported / supported
● Ops Center: enable similar monitoring and alerting from the AWS/Azure/GCP console / included
The Big Picture
Dockerize your app, push the Docker image to a repository (Docker Hub or Elastic Container Registry (ECR)), and deploy into a K8s cluster: Amazon EKS, Google Kubernetes Engine, K8s on EC2, or any other K8s cluster.
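A sketch of that flow with placeholder names (the generic Docker build-and-push; on Runtime Fabric the Mule runtime images themselves come from MuleSoft's registry):

```sh
# Generic dockerize-and-push flow; image name, tag, and registry are placeholders.
docker build -t my-app:1.0.0 .   # build the image from a local Dockerfile
docker tag my-app:1.0.0 123456789012.dkr.ecr.us-west-1.amazonaws.com/my-app:1.0.0
docker push 123456789012.dkr.ecr.us-west-1.amazonaws.com/my-app:1.0.0   # push so the cluster can pull it
```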
Implementation
A Dockerfile defines the container image; Mule application artifacts come from Exchange, and Runtime Fabric pulls its images from rtf-runtime-registry.kprod.msap.io (image layers can be inspected with docker history).
RTF Architecture
Kubernetes Architecture
Who Manages Nodes?
Control Plane manages, monitors,
plans, schedules nodes
Worker Nodes host Containers
Control Plane Components
● Control Plane: manages, monitors, plans, and schedules nodes
● Worker Nodes: host containers
● etcd: key value store for critical cluster info
● Scheduler: puts containers on the proper nodes
● Controller manager: ensures the proper state of cluster components
Kubernetes Cluster State
Desired state: 3 nodes; current state: 2 nodes (one node is down). The controller manager (c-m) brings one node up so that current state = desired state.
Who Specifies State?
YOU, via a manifest file: run 6 copies of a container image.
Desired state: 6 containers; current state: 0 containers. Kubernetes then creates containers until the current state is also 6 containers.
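A minimal sketch of such a manifest, using a hypothetical app name and a placeholder image; applying it declares the desired state, and Kubernetes converges the current state to match:

```sh
# Declare the desired state: 6 replicas of one container image (names are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 6                  # desired state: 6 copies
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25    # placeholder image
EOF
```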
Gateway to the Control Plane
YOU submit the manifest file ("do this on the cluster") to the API server, which exposes the Kubernetes API and reconciles desired state vs. current state.
Kubernetes Architecture
● Control Plane: manages, monitors, plans, and schedules nodes
● Worker Nodes: host containers
● etcd: key value store for critical cluster info
● Scheduler: puts containers on the proper nodes
● Controller manager: ensures the proper state of cluster components
● API server: exposes the Kubernetes API
What's in a Node?
Container Runtime Engine: Docker, containerd, CRI-O, frakti
Control Plane - Node Communication
The Control Plane manages, monitors, plans, and schedules nodes; on each node, <aws-node> reports the status of the node and its containers to the Control Plane.
Container-Container Communication
<aws-node> reports the status of the node and its containers to the Control Plane, while <kube-proxy> allows network communications between containers.
Putting it All Together
● Control Plane: manages, monitors, plans, and schedules nodes
● <aws-node>: reports the status of the node and its containers to the Control Plane
● <kube-proxy>: allows network communications
● etcd: key value store for critical cluster info
● Scheduler: puts containers on the proper nodes
● Controller manager: ensures the proper state of cluster components
● API server: exposes the Kubernetes API
● Container Runtime on each worker node
Sizing Exercise
How many pods fit? Node group Ng-1: two t3.micro nodes. Node group Ng-2: one t3.small node.
Sizing Exercise - solution
Per-node formula: max pods = # of ENIs * (# of IPv4 addresses per ENI - 1) + 2
● t3.micro: max ENIs = 2, max IPv4 per ENI = 2, so 2 * (2 - 1) + 2 = 4 pods per node; two t3.micro nodes give 4 * 2 = 8 pods
● t3.small: max ENIs = 3, max IPv4 per ENI = 4, so 3 * (4 - 1) + 2 = 11 pods
● System pods: each node runs 1 kube-proxy and 1 aws-node, so 3 nodes * 2 = 6 pods
● DNS service: 2 pods for HA
● Pods left for applications: (8 + 11) - 6 - 2 = 11
https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
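The same formula as a quick shell check (ENI and IPv4 values from the AWS table linked above):

```sh
# Max pods per node: ENIs * (IPv4 addresses per ENI - 1) + 2
max_pods() { echo $(( $1 * ($2 - 1) + 2 )); }
max_pods 2 2   # t3.micro -> 4
max_pods 3 4   # t3.small -> 11
```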
RTF Resource allocation limitations
● Nodes: 30 max
● Node types: VM-based required (e.g., not Fargate)
● Replicas per node: 20 to 25 per core, up to 40 per node
● Associated environments per Runtime Fabric instance: 50 max
● Business groups: 50 max Runtime Fabric instances per business group
RTF Pods
• Docker image containing
– Base Linux operating system
– Java Virtual Machine
– Mule runtime
• The container image cannot be customized, and it is not possible to deploy other Docker containers
• No mountable volumes; pods are not directly accessible
• CPU footprint: between 20mCPU and 50mCPU for a sample app's container, plus 50mCPU for the monitoring container (which streams application metrics to the control plane and cannot be disabled)
• Memory allocation: 700Mi for Mule 4 or 500Mi for Mule 3, plus 50Mi for the monitoring container
Putting it All Together
● Control Plane: manages, monitors, plans, and schedules nodes
● etcd: key value store for critical cluster info
● Scheduler: puts containers on the proper nodes
● Controller manager: ensures the proper state of cluster components
● API server: exposes the Kubernetes API
● Container Runtime on each worker node
RTF services and their footprints:
● Forwards logs to the log aggregator (200mCPU, 200Mi)
● Communicates with the control plane (100mCPU, 200Mi)
● mule-clusterip-service (50mCPU, 100Mi)
● resource-cache (100mCPU, 50Mi)
RTF Performance and Startup
Performance by allocated vCPU cores:
● 1.00 vCPU: 10 concurrent connections, 15 ms avg response time
● 0.50 vCPU: 5 concurrent connections, 15 ms avg response time
● 0.10 vCPU: 1 concurrent connection, 25 ms avg response time
● 0.07 vCPU: 1 concurrent connection, 78 ms avg response time
Approximate startup time:
● 1.00 vCPU: less than 1 minute
● 0.50 vCPU: under 2 minutes
● 0.10 vCPU: 6 to 8 minutes
● 0.07 vCPU: 10 to 14 minutes
Run performance and load testing on your Mule applications to determine the number of resources to allocate.
RTF Installation Steps
1. Create a Runtime Fabric configuration in Anypoint Runtime Manager
2. Install Runtime Fabric on a provisioned EKS Kubernetes cluster using rtfctl install
3. Verify the progress and results with rtfctl status
4. Install the license using rtfctl apply mule-license
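The same steps as a command sketch; the angle-bracket values are placeholders you obtain from Anypoint Platform:

```sh
# Run against the kubectl context of the provisioned EKS cluster.
rtfctl install <activation-data>             # activation data from the Runtime Fabric config in Runtime Manager
rtfctl status                                # check installation progress and component health
rtfctl apply mule-license <base64-license>   # apply your organization's Mule license
```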
App Deployment
Life of a Simple App
An experience-layer API (user-eapi, two replicas) calls the process-layer API (user-papi, two replicas) through its ClusterIP service, and user-papi in turn calls the system-layer API (sf-sapi, two replicas) through its ClusterIP service. Each replica pod has its own pod IP, while the ClusterIP service gives each layer a stable virtual IP.
ALB Ingress
An Application Load Balancer routes by path (app1.epam.com, app2.epam.com) through the rtf-ingress controller to the app1 and app2 services in their env namespaces.
● Anypoint Runtime Fabric for self-managed Kubernetes does not include a built-in load balancer.
● Runtime Fabric on self-managed Kubernetes supports any Ingress controller that is compatible with the Kubernetes service provided by your vendor (EKS, AKS, or GKE).
● An application can be accessed from another application within the same Runtime Fabric cluster, bypassing the Ingress:
○ For apps running in the same namespace: http(s)://<app name>:8081/api
○ For apps running in different namespaces: http(s)://<app name>.<namespace>:8081/api
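For example, from a pod inside the cluster (app and namespace names are illustrative):

```sh
curl http://user-papi:8081/api           # same namespace: service name only
curl http://user-papi.dev-ns:8081/api    # different namespace: <app>.<namespace>
```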
Example ALB endpoint: https://k8s-rtf1-a88537f113-589639490.us-west-1.elb.amazonaws.com (Host: *.epam.com, path: /*)
Scalability
Quiz: What Resources Are Created?
Deployment → ReplicaSet → Pod → Containers
1 vCPU = 1000m (millicores)
● A pod requesting CPU of 20m (0.02 vCPU) and 800 MiB of memory
● A pod CPU limit of 1000m (1 vCPU) and 800 MiB of memory
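A sketch of a pod spec expressing exactly those requests and limits (pod name and image are placeholders):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod
spec:
  containers:
    - name: app
      image: nginx:1.25        # placeholder image
      resources:
        requests:
          cpu: 20m             # 0.02 vCPU guaranteed
          memory: 800Mi
        limits:
          cpu: 1000m           # may burst up to 1 full vCPU
          memory: 800Mi
EOF
```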
Horizontal Pod Autoscaler (HPA)
On an Amazon EC2 (m5.large) worker node, the HPA applies the rule "scale the pod if pod CPU > 50%": as traffic grows, one pod becomes two, then three.
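A generic Kubernetes HPA manifest expressing that rule (a plain-Kubernetes sketch with placeholder names; on RTF you scale replicas through Runtime Manager instead, as the next slide notes):

```sh
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app               # the Deployment to scale
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average pod CPU exceeds 50%
EOF
```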
RTF - HPA
• Add/remove resources using Runtime Manager
• Add/remove pods (replicas) using Runtime Manager
• Replicas are distributed across the worker nodes for HA
• CPU bursting shares CPU cores with other applications. This means performance for one application is influenced by the other applications running on the worker node
EC2 Scaling
An Application Load Balancer fronts an Auto Scaling Group with the policy: scale if CPU > 50% (based on CloudWatch Metrics). As traffic grows, the group adds Amazon EC2 (m5.large) instances.
EC2 Scaling on EKS
On Amazon EKS, the autoscaler works with the Auto Scaling group: as CPU demand grows beyond the capacity of the existing Amazon EC2 (m5.large) instances, another instance is added to the group.
Application Clustering
Mule Runtime Cluster
The nodes in a Mule runtime cluster are more tightly coupled and are aware of each other.
• Allows multi-threaded applications to leverage the assigned resources of all attached nodes
Logging & Monitoring
EKS Logging
● EKS Control Plane Logging
○ K8s API
○ audit
○ authenticator
○ controllerManager
○ scheduler
● EKS Worker Nodes Logging
○ System logs from kubelet, kube-proxy, or dockerd
○ Application logs from application containers
EKS Control Plane Logging
Control plane logs (K8s API, audit, authenticator, controllerManager, scheduler) go to Amazon CloudWatch Logs and can be streamed to Amazon Elasticsearch Service.
Logging Caveat
App logs (App 1, App 2, App 3) written to an EC2 instance's Amazon Elastic Block Store (EBS) volume are gone if the instance is terminated. Logs need to be aggregated in a meaningful way.
Logging Architecture (Abstract)
RTF Worker Nodes
✔ Mule applications generate log messages using the Log4j framework
✔ Logs are typically stored in per-application log files in $MULE_HOME/logs
✔ Mule runtime logs are stored in a separate log file (mule_ee.log)
✔ Containers redirect logs to /var/log/containers/*.log files
https://docs.mulesoft.com/runtime-fabric/1.10/runtime-fabric-logs
Implementation
LOG BACKEND (ES, Splunk, CloudWatch)
A log-forwarder runs on each worker node as a logging agent daemon, reading logs from /var/log and sending them to the logging backend.
• You can also access logs through Anypoint Monitoring (Titanium customers)
• With Anypoint Monitoring or kubectl commands, you can view the logs from a deployed application
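For the kubectl route, a quick sketch (namespace and pod names are placeholders):

```sh
kubectl -n <env-namespace> get pods            # find the application's pod name
kubectl -n <env-namespace> logs <app-pod> -f   # tail the Mule app's log stream
```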
Agent for Streaming to Multiple Log Backends
Stream logs through Amazon Kinesis Data Firehose to Amazon Elasticsearch Service, etc.
RTF Monitoring
dias-anypoint-monitoring-sidecar
Security
Security considerations
Security can and should be applied on various levels
• Infrastructure: widest range, often cross-domain
• Network: transport/protocol security
• Operating system: OS users, filesystem privileges
• Runtime: hardening a JDK or Mule installation
• Applications: endpoint security, services, and RBAC
• Content: hiding/masking sensitive data
RTF Layered security
● Anypoint Platform (platform level): operations, data security, passwords and credentials, facilities and network, secure connectivity, data sovereignty, third-party authentication, on-premises security
● API Management (implementation level): create new APIs and integrations from prebuilt API security fragments, access patterns, and policies vetted by security experts
● Tokenization (data level): secure data in motion across the enterprise with automatic tokenization and encryption, and reduce the risk of data breaches
Persistence Gateway
Overview
● Provides a mechanism for persisting Object Store entries across Mule application replicas
● All information is deleted when the application is removed
○ Requires Mule runtime version 4.2.1 or later
○ Uses a customer-provided database as the data source (PostgreSQL 9 and above)
● Uses a Kubernetes secret to store database connection details
● Configured via a Kubernetes custom resource and an opaque secret
● Maximum TTL for stored data defaults to 30 days
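A hypothetical sketch of creating that opaque secret; the exact secret name, key, and custom resource that Runtime Fabric expects are defined in the MuleSoft docs, so treat every name here as an assumption:

```sh
# Hypothetical names and connection string; consult the RTF docs for the real key/name.
kubectl -n rtf create secret generic persistence-gateway-creds \
  --from-literal=persistence-gateway-creds='postgresql://user:pass@db-host:5432/object_store'
```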
DEMO
Trivia Questions
Q1
What's the maximum number of replicas that can be deployed per node in RTF?
Options:
a. 10
b. 25
c. 30
d. 40
Q2
How do you use Persistence Gateway in a Mule application deployed on RTF?
Options:
a. Anypoint Object Store V2 (OSv2)
b. CloudHub Object Store V1
c. Mule Object Store
d. PostgreSQL Database connector
Q3
What are the CPU and memory footprints of the monitoring container?
Options:
a. 50mCPU, 30Mi
b. 30mCPU, 50Mi
c. 50mCPU, 50Mi
d. 20mCPU, 30Mi
Take a stand!
● Nominate yourself as the next meetup speaker, and suggest a topic as well.
● Do share your feedback with us.
What's next?
● Share:
○ Tweet using the hashtag #MuleSoftMeetups
○ Invite your network to join: https://meetups.mulesoft.com/manchester/
● Feedback:
○ Fill out the feedback survey and suggest topics for upcoming events
○ Contact MuleSoft at meetups@mulesoft.com for ways to improve the program
○ Contact your organizers Terry Lockwood & Akshata Sawant to suggest topics & volunteer as a speaker
○ Tweet your organizers at @MuleSoft @MuleDev @sawantakshata02
○ Telegram: https://t.me/joinchat/Q6y-MgriEqyDicfZV9PIAg
Networking time
Introduce yourself to your neighbor
Thank you