As application development becomes more agile, and the ability to rapidly create and iterate new innovations escalates, so too does the need to rapidly scale up the solutions that become successful. Equally, it is common to create solutions with relatively short life cycles, so we need to be able to scale down to recover resources too. At a more fine-grained level, to make efficient use of shared platforms such as Kubernetes, we need to be able to dynamically scale applications up and down based on fine-grained demand. Inevitably, all these challenges are just as important for the integration between applications. This session explores what scalability means for the key areas of integration technology: application integration, API management and messaging.
2. Increasing abstraction from infrastructure

(Diagram: the cost of entry of the first function falls as abstraction increases, moving from bare metal, to virtual machines, to containers, to functions.)

• Reducing infrastructure cost
• More fine-grained cost models
• Less operations cost
• Reduced operational ownership
3. Evolution to agile integration – high level view

(Diagram: three phases of evolution, connecting engagement applications to systems of record.)
• Socialization/monetization: a centralized ESB integrates the systems of record, with API management layered on top to expose socialized APIs.
• Re-platforming: the centralized ESB is broken into fine-grained integration deployments, each fronted by gateways and API management.
• Application autonomy: decentralized integration ownership, with application teams running their own integrations and API management.

Webinars: http://ibm.biz/agile-integration-webcasts
eBooklet: http://ibm.biz/agile-integration-ebook
IBM Redbook: http://ibm.biz/agile-integration-redbook
4. Benefits of a container based strategy

• Build agility
• Team productivity
• Fine-grained resilience
• Scalability and optimization (the focus of this presentation)
• Operational consistency
• Component portability

Containerization is more than just a re-platforming exercise: a "lift and shift" will not bring these benefits. It requires fine-grained deployment, organizational decentralization, pipeline automation, disposable components…
5. Difference between virtual machines and containers

• Virtual machines: Infrastructure → Hypervisor → VMs. Each VM carries its own guest OS, bins/libs and apps.
• Containers: Infrastructure → Host OS → Docker → Containers. Each container holds only its bins/libs and app, sharing the host OS.

Derived from https://www.docker.com/what-container
6. What’s the container equivalent of the HA/DR topology you have today?

• Traditional (explicit configuration): Availability Zone A runs servers PRD1 and PRD2 under an HA manager; Availability Zone B runs DR1 and DR2 under its own HA manager.
• Container platform (declarative logical configuration): a multi-zone container orchestration platform, driven by infrastructure as code. You declare the intent and the platform maintains it:
  – Replication: minimum 2, maximum 2
  – Spread across zones
  – Balance workload evenly
  – Re-instate on failure
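On Kubernetes, one plausible orchestration platform, that declarative intent can be sketched as a Deployment; the component name and image below are hypothetical:

```yaml
# Declarative replication: the platform keeps 2 replicas running,
# spreads them across availability zones, and re-instates on failure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: integration-runtime            # hypothetical component name
spec:
  replicas: 2                          # minimum 2 / maximum 2
  selector:
    matchLabels:
      app: integration-runtime
  template:
    metadata:
      labels:
        app: integration-runtime
    spec:
      topologySpreadConstraints:       # spread across zones, balance evenly
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: integration-runtime
      containers:
      - name: runtime
        image: example/integration-runtime:1.0   # hypothetical image
```

Note that nothing here names individual servers or zones: the platform decides placement, which is exactly the shift from explicit to declarative configuration.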
7. Containers enable fine-grained deployment… and discrete scaling policies, with near-complete abstraction from physical resources

(Diagram: a multi-zone container orchestration platform running many components, each with its own scaling policy, e.g. min 3 / max 7, min 2 / max 2, min 1 / max 1, min 1 / max 9, min 3 / max 3, min 3 / max 5.)
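On Kubernetes, such a per-component min/max policy can be declared with a HorizontalPodAutoscaler; this is a sketch using one of the diagram's example policies, and the target Deployment name is hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-integration-hpa        # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-integration          # hypothetical component
  minReplicas: 1                   # "min 1" from the diagram
  maxReplicas: 9                   # "max 9" from the diagram
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # scale out when average CPU exceeds 70%
```

Each component gets its own policy, so a rarely used integration can sit at one replica while a volatile one elastically scales to many.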
8. Use cases benefiting from elastic scaling

• Typical use cases in production
  – Rarely used functions (sporadic load)
  – Functions with high workload variation (volatile load)
  – Elastic parallelization of batch (cyclic load)
• Use cases beyond production
  – Prototyping
  – Performance testing

Almost all use cases have variable load at the individual function level.
9. Evolution to agile integration – detail view

(Diagram: the same three phases as the high-level view, socialization/monetization over the centralized ESB, re-platforming into fine-grained integration deployment, and application autonomy via decentralized integration ownership with socialized APIs, here also showing event streams alongside the API management and gateway components connecting engagement applications to systems of record.)
10. Capabilities scale in different ways

(Diagram: microservice applications reach systems of record through the API layer and integrations.)
• Independently scaled gateway clusters (API Gateway)
• Independently scaled API management componentry (API Portal, API Management, API Analytics)
• Fine-grained integration deployment (separate integrations in front of each system of record)
13. Engagement tier scaling and availability requirements may be very different from their back end counterparts

(Diagram: µServices in a microservice application, fronted by an API gateway, reaching systems of record (SoR) through API invocations and event streams via an integration layer.)

Truly independent, decoupled microservice components enable:
• Agility: innovate rapidly without affecting other components
• Scalability: scale only what you need, and only when you need to
• Resilience: fail fast, return fast, without affecting other components

To achieve agility, scalability and resilience, microservices need to be independent of the systems of record:
• APIs: simplest to use, but create a real-time dependency
• Event streams: enable microservices to build decoupled views of the data

The back end systems may not be able to keep up with the elastic scaling needs of the front end applications.
14. Caching – but in which layer?

Candidate layers, from client to data: Device/browser → CDN → API Gateway → Integration → App Server (Application) → Data store.

Device/browser
• Pros: Reduces load on all other layers. App can potentially work offline. Makes the app extremely responsive. Uses existing internet capability (via HTTP headers).
• Cons: Can’t share a cache across users. Cache invalidation can be very challenging. Do you own the device app, or have any control over its design?

CDN
• Pros: Reduces load on the API gateway and all layers below. Closest geographical point-of-presence. Read cache only.
• Cons: Must terminate HTTPS for full benefit – should you terminate HTTPS at the CDN? Is asynchronous cache purge sufficient? What cache visibility do you have? Will you get re-use across regions? How will you test its effectiveness?

API Gateway
• Pros: Reduces load on layers within the enterprise. API-specific caching, independent of the application. Cache consistent with API granularity. Read cache primarily.
• Cons: How is cache invalidation performed?

Integration
• Pros: Reduces load on layers from the application down. Enables state-free scalability for reference data. Compositions can benefit from fine-grained caching. Writable cache options (with caution).
• Cons: Data model is often different to the API, so translation happens at other layers. Cache invalidation may require application knowledge.

App Server (Application)
• Pros: Reduces load on the database. Cache with an understanding of the application. Application-native data model can be used. Data relationships within the cache are acceptable. Easiest point for accurate cache invalidation. Writable cache options with deep locking possibilities. Further scale with grid compute.
• Cons: Adds complexity to the application build. Change the application anyway, or is it fixed? What’s the application code change cycle? Writable cache patterns can interfere with application design.

Data store
• Pros: Preload closer to the data store data model. No amount of caching at other levels is a substitute for a well designed, organised and tuned database; modern databases (e.g. NoSQL) need attention too.
• Cons: No reduction in load on the application or layers above. The database is the furthest distance from the client. Do you have access to adjust the database? Can you be sure you won’t destabilise the application?
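Whichever layer you choose, the read-cache mechanics are broadly the same; a minimal read-through sketch in Python, where the fetch function and TTL are illustrative, and the crude TTL-only invalidation is precisely the weakness the questions above probe:

```python
import time

class ReadThroughCache:
    """A read cache with time-based expiry, as might sit in a gateway,
    integration, or application layer. Invalidation here is TTL-only,
    which is why 'how is cache invalidation performed?' matters."""
    def __init__(self, fetch, ttl_seconds=60.0):
        self._fetch = fetch          # loads from the layer below on a miss
        self._ttl = ttl_seconds
        self._entries = {}           # key -> (value, expiry time)
        self.misses = 0

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._entries.get(key)
        if entry is not None and now < entry[1]:
            return entry[0]          # fresh hit: no load on layers below
        self.misses += 1
        value = self._fetch(key)     # miss: read through to the source
        self._entries[key] = (value, now + self._ttl)
        return value

    def invalidate(self, key):
        # Explicit purge, e.g. driven by an update event.
        self._entries.pop(key, None)

cache = ReadThroughCache(fetch=lambda k: f"value-of-{k}", ttl_seconds=60)
cache.get("a")   # miss: hits the backing layer
cache.get("a")   # hit: served locally, reducing load on layers below
```

The higher the layer, the more load a hit removes, but the harder accurate invalidation becomes; that is the essential trade-off of the table above.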
15. Managing throughput effectively requires multiple throttling points

(Diagram: Consumer A (requirement 100 msg/sec), Consumer B (requirement 300 msg/sec) and Consumer C (requirement 300 msg/sec) pass through per-consumer throttles and a provider throttle before reaching back end system Z (capacity 500 msg/sec).)

• No throttling – the back end system is easily overloaded.
• Provider-only throttling – the back end system is protected, but consumers can still take more than they are contracted for.
• Consumer-only throttling – enables prioritisation of consumers, but the aggregate of the consumer limits must be kept within back end capacity or risk overloading it.
• Consumer and provider throttling – consumers are forced to behave to their SLA, and the back end system is protected.

https://developer.ibm.com/articles/mw-1705-phillips
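The combined scheme can be illustrated with a toy fixed-window sketch in Python, using the capacities from the diagram; real gateways use more sophisticated algorithms (token buckets, sliding windows), so this only shows why both throttling points are needed:

```python
class Throttle:
    """Fixed-window limit: allow at most `limit` messages per window."""
    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def allow(self):
        if self.count < self.limit:
            self.count += 1
            return True
        return False   # reject (or queue) once the budget is spent

# Per-consumer throttles enforce each SLA; the provider throttle
# protects the back end (capacities taken from the diagram).
consumers = {"A": Throttle(100), "B": Throttle(300), "C": Throttle(300)}
provider = Throttle(500)

def accept(consumer_id):
    """A message is forwarded only if both throttling points agree."""
    return consumers[consumer_id].allow() and provider.allow()

# One second of heavy traffic: each consumer offers 400 msg/sec.
delivered = {c: sum(accept(c) for _ in range(400)) for c in "ABC"}
```

Here every consumer is capped at its SLA, and the provider throttle guarantees the aggregate never exceeds back end capacity, even when the sum of the SLAs would.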
16. Scaling and availability for stateful components

(Diagram: consumers attached to active/active replicas holding duplicated data, versus active/passive instances over shared persistence.)

Active/active with duplicated data:
• Horizontal scaling
• Instant failover
• Replicated local persistence
• Consistency more complex

Active/passive with shared data:
• One master available at a time
• Failover requires detection and warm-up
• Dependent on shared persistence
• Simpler for consistency
17. Messaging (commands) versus events (notifications)

(Diagram: source and target applications connected via messaging or event streams, with checkmarks showing which characteristics each style provides.)

Messaging (commands):
• Targeted reliable delivery
• Transient data persistence
• Request / reply

Events (notifications):
• Stream history
• Immutable data
• Scalable consumption

https://developer.ibm.com/messaging/2018/05/18/comparing-messaging-event-streaming-use-cases
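The core distinction, transient targeted delivery for commands versus immutable replayable history for events, can be shown in a few lines of Python; this is purely illustrative, with in-memory structures standing in for a broker:

```python
from collections import deque

# Messaging (commands): a queue, where each message is consumed
# destructively by exactly one consumer -- targeted and transient.
queue = deque(["cmd-1", "cmd-2", "cmd-3"])

def take(q):
    return q.popleft()   # removing the message is the point

# Events (notifications): an append-only log with history, where each
# consumer tracks its own offset and can replay from the beginning.
log = ["evt-1", "evt-2", "evt-3"]
offsets = {"consumer-a": 0, "consumer-b": 0}

def read(consumer):
    events = log[offsets[consumer]:]   # immutable: nothing is removed
    offsets[consumer] = len(log)
    return events

first = take(queue)          # "cmd-1" is gone from the queue
seen_a = read("consumer-a")  # consumer-a sees the whole history
seen_b = read("consumer-b")  # so does consumer-b, independently
```

Because nothing is removed from the log, consumption scales by adding consumers with their own offsets, which is what makes event streams suited to building the decoupled views discussed earlier.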
18. Moving to agile integration

Moving to cloud is a progressive evolution of enterprise architecture, not a big bang. Multiple aspects of integration architecture change along that journey:

• Applications migrated from on-premises to IaaS cloud bring their integrations with them, potentially moving them under the ownership of the application teams.
• The existing ESB is broken up into separately deployable integrations using containerization.
• Asynchronous “hub and spoke” style integrations, such as data sync, are also split into separately deployable containers.
• All applications, old and new, expose “managed” APIs such that they can be consumed by other applications. “Internal” APIs always require managed exposure, but use integration only where necessary.
• New applications are created based on a microservices architecture. Where a microservice performs an integration-like job, use a lightweight integration runtime.
• Engagement applications can consume any API exposed within the organization, or indeed beyond.
• Event streams are made available such that microservice applications can build independent data stores rather than always having to make real-time calls over APIs.
• There are multiple cloud destinations, likely from multiple cloud vendors.