MarkLogic Rack Proposal
- 2. Design Requirements
• Stand-alone, shared-nothing architecture dedicated to the MarkLogic NoSQL database
• Extensible architecture that is easy to expand and grow
• Segregated production and staging environments
• Automated provisioning and orchestration that allows deployment of all flavors of MarkLogic nodes (evaluator nodes and data nodes) in 30 minutes or less (see the sketch below)
• Full redundancy for network, power, and processors
• Software-defined networking with low-latency (3.8µs or less), high-speed network connectivity between MarkLogic nodes and racks
• MarkLogic to provide data redundancy through multi-node writes
• 100GbE rack and core network connectivity uplinks
• Deployed on bare-metal servers with a minimum of 384GB of memory
• Intel E5 v4 processors with 8 cores to support MarkLogic 9 encryption
• Includes a minimum of 4.8TB of enterprise high-speed SSD or NVMe storage per physical server (1,200,000 IOPS minimum in total per server)
• Two 10GbE connections per physical server
• Local encrypted S3 storage for backup and recovery
• Includes 24x7 support with 4-hour or faster response
Goal: deliver an affordable, high-performance platform for a scale-out MarkLogic 9 deployment.
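As an illustration of the automated-provisioning requirement, the sketch below shows how a MarkLogic node could be deployed through the OpenStack API using the openstacksdk Python library. The cloud, image, flavor, and network names are hypothetical placeholders, not part of this proposal.

    # Minimal sketch: provision a MarkLogic data node on the private cloud
    # via openstacksdk. Cloud/image/flavor/network names are hypothetical.
    import openstack

    conn = openstack.connect(cloud="openrack-prod")  # defined in clouds.yaml

    image = conn.compute.find_image("ubuntu-marklogic-9")   # prebuilt ML image
    flavor = conn.compute.find_flavor("ml.data.16c.384g")   # 16 cores / 384GB
    network = conn.network.find_network("ml-data-net")

    server = conn.compute.create_server(
        name="ml-data-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)  # blocks until ACTIVE
    print(server.name, server.status)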
- 3. Our Architectural Approach
• Use common sense in the architecture
• Leverage the software vendors to optimize the platform for the use case
• Implement redundancy when it makes sense without dramatically increasing costs
• Don’t try to be all things to everyone – pick your battles where you can achieve your greatest victories (cost, delivery, performance)
• Build to grow when it makes sense to grow
• No “sacred cows”
• Cattle vs pets mentality
• Offer a flexible, cost-effective architecture that doesn’t lock clients into expensive solutions
• Support but also teach
- 4. OpenRack:Ready Solution
• Rack based appliance deployment – factory built, integrated and configured
• All US-manufactured hardware
• Includes servers, network, storage, rack, PDUs, optics, cables, and cabling
• Shipped in rack to client’s data center – just add power and core network connectivity
• Ultra-dense rack architecture – 4x more compute capacity per rack than traditional server deployments
• Non-contiguous rack deployments supported (racks need not be adjacent)
• Shared-nothing (network, storage, compute) architecture – dedicated to MarkLogic
• Cooperative design with contributions from PRX, the customer, MarkLogic, Intel, and Mirantis
• Private cloud deployed with automated provisioning and configuration using Mirantis OpenStack
• Proven reference architecture that has been successfully deployed at some of the largest organizations in the world, including AT&T, Apple, PayPal, Volkswagen, and the USDA
- 6. Custom Rack SKUs
OpenRack:Ready designs solutions that meet your application requirements with the right proportion of high-speed software-defined networking, high-performance/high-density compute nodes, and software-defined storage options. Mirantis OpenStack is added for self-service, on-demand provisioning.
- 7. Precision Cabling & Integration
A detailed cabling layout of networking cables, power cords, and PDUs will be designed and installed at our integration facility to optimize cable length and air cooling and to allow service access when hot-swapping units. The only cabling required in your data center is core network connectivity and power.
- 8. Rack Appliance Deployment
Start with a single rack and add racks as needed to expand MarkLogic capacity. Each rack includes network, compute, storage, and cloud management.
- 9. Appliance Configuration
Network
• 1 x Arista 7280R Spine
• 4 x Arista 7280R Leaf
• 2 x Arista 7010T-48
Storage
• Cloudian S3 Storage, 360TB usable
Compute + Foundation
• High-density chassis, 44 server nodes total
• (11) 2U chassis, each with four server nodes in a dense form factor
• Dual Intel® Xeon® E5 8-core v4 CPUs each
• 384GB high-speed DDR4 RAM each
• 2 x 10GbE Ethernet connectivity per server
• 6 x 800GB enterprise SSD drives per server
• (3) foundation nodes required in the first rack only
- 10. Appliance Composition
• (1) 2U Arista 7280R 10/100GbE SDN Network Switch – Spine
• (4) 2U Arista 7280R 10/100GbE SDN Network Switch – Leaf
• (2) 1U Arista 7010T-48 1GbE Hardware Management Switch
• (1) 7U Cloudian S3 Storage Appliance – 360TB usable (for backups; see the sketch after this list)
• (11) 2U Four-Server Chassis (44 compute nodes total)
Total: 41U of equipment, fitting a standard 42U rack
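The Cloudian appliance exposes an S3-compatible API, so standard S3 tooling can push MarkLogic backups to it. Below is a minimal sketch using Python's boto3; the endpoint, credentials, bucket, and file path are hypothetical, and server-side encryption is assumed to be available on the appliance.

    # Minimal sketch: copy a MarkLogic backup archive to the in-rack Cloudian
    # S3 appliance. Endpoint, credentials, bucket, and paths are hypothetical.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://cloudian.rack1.local",  # local S3 endpoint
        aws_access_key_id="EXAMPLE_KEY",
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    s3.upload_file(
        "/backups/ml-db-backup.tar.gz",                # local backup archive
        "marklogic-backups",                           # target bucket
        "prod/ml-db-backup.tar.gz",                    # object key
        ExtraArgs={"ServerSideEncryption": "AES256"},  # encrypted at rest
    )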
- 11. Rack Capacity
• (192) 10GbE ports for device connectivity
• (8) 100GbE uplink ports
• (3) OpenStack Foundation Management Nodes (needed in the first rack only)
• (41) Compute Nodes – each with 2 x Intel E5 v4 8-core processors, 384GB memory, 2 x 10GbE network connections, and 4.8TB of enterprise high-performance SSD (44 compute nodes in each rack after the first)
• (1) Cloudian S3 Storage Appliance – 360TB usable
• Total MarkLogic cores: 656 (totals recomputed in the sketch below)
• (246) 800GB high-performance SSDs delivering 200,000 IOPS each (49.2 million IOPS in total)
• Total storage: 196.8TB of local SSD storage
• Total memory: 15,744GB across 41 MarkLogic nodes
• Rack cost: $xxx
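The totals above follow directly from the per-server configuration; a quick recomputation (Python, for convenience) is shown below. The first rack carries 41 MarkLogic nodes because 3 of the 44 slots hold foundation nodes.

    # Recompute the per-server and per-rack figures quoted above.
    NODES_PER_CHASSIS = 4
    CHASSIS_PER_RACK = 11
    FOUNDATION_NODES = 3  # first rack only

    # Per physical server (6 x 800GB enterprise SSDs at 200,000 IOPS each)
    per_server_tb = 6 * 0.8        # 4.8TB, matches the design requirement
    per_server_iops = 6 * 200_000  # 1,200,000 IOPS, matches the requirement

    # Per rack
    ml_nodes = NODES_PER_CHASSIS * CHASSIS_PER_RACK - FOUNDATION_NODES  # 41
    cores = ml_nodes * 2 * 8       # dual 8-core E5 v4 -> 656 cores
    memory_gb = ml_nodes * 384     # -> 15,744GB
    ssds = ml_nodes * 6            # -> 246 drives
    storage_tb = ssds * 0.8        # -> 196.8TB
    iops_total = ssds * 200_000    # -> 49,200,000 IOPS

    print(cores, memory_gb, ssds, storage_tb, iops_total)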
- 12. Software Defined Networking (SDN) Switching
• Must be programmable through the OpenStack API (see the sketch after this list)
• Big buffers and low latency, ideal for the east-west traffic generated by grid and scale-out architectures like MarkLogic; traditional networks are not well suited to east-west traffic
• 10GbE ports with a “least cost” upgrade path to 25GbE, 40GbE, and 100GbE
• Combined switching and routing in a single platform to enable VXLAN
• Large routing tables for enterprise deployments
• Non-blocking throughput between racks (no network bottlenecks)
• Span multiple racks with Leaf-Spine architecture
• Centralized management
• Let OpenStack do most of the networking after day one deployment
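As a sketch of what “programmable through the OpenStack API” looks like in practice, the snippet below creates a VXLAN tenant network and subnet for east-west MarkLogic traffic using openstacksdk. The names and CIDR are hypothetical, and the provider attribute requires admin credentials.

    # Minimal sketch: create a VXLAN network + subnet for MarkLogic
    # east-west traffic via the OpenStack networking API (Neutron).
    # Names and CIDR are hypothetical; provider attrs need admin rights.
    import openstack

    conn = openstack.connect(cloud="openrack-prod")

    net = conn.network.create_network(
        name="ml-east-west",
        provider_network_type="vxlan",  # overlay spanning the leaf-spine fabric
    )
    subnet = conn.network.create_subnet(
        network_id=net.id,
        name="ml-east-west-v4",
        ip_version=4,
        cidr="10.20.0.0/24",
    )
    print(net.id, subnet.cidr)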
- 13. Arista 7280R SDN Switches
Fully integrated API within all distributions of OpenStack; no requirement for VMware NSX
- 15. OpenStack Foundation Nodes
• The “brains” of the outfit
• Designed to run on enterprise ready commodity hardware
• Triple redundancy for availability and performance
• Dual Intel E5 v4 processors recommended for performance
• Local storage to host server images and logging; capacity determined by the projected number of images
• Foundation nodes are required for provisioning and monitoring of the environment (see the monitoring sketch after this list)
• Dual 10GbE connectivity plus 1GbE for hardware maintenance and monitoring
• Support for up to 200 physical compute nodes per OpenStack cluster
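As a small illustration of the monitoring role above, the snippet below lists the state of every compute node registered with the foundation nodes' OpenStack control plane, again via openstacksdk with a hypothetical cloud name.

    # Minimal sketch: check the state of all compute nodes registered
    # with the OpenStack control plane. Cloud name is hypothetical.
    import openstack

    conn = openstack.connect(cloud="openrack-prod")

    for hv in conn.compute.hypervisors():
        # 'state' is up/down; 'status' is enabled/disabled
        print(f"{hv.name}: state={hv.state}, status={hv.status}")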
- 16. OpenStack Compute Nodes
• Designed to run on enterprise ready commodity hardware
• Minimum 256GB of memory recommended, but more is better
• Dual Intel E5 v4 processors recommended for performance; core count determined by use case and budget
• No local storage required, but optionally supported
• Run hypervisors and virtual machines, bare-metal servers, or container servers on the same OpenStack cloud platform
• Dual 10GbE connectivity plus 1GbE for hardware maintenance and monitoring for each physical server
- 17. High Density Server Chassis
Key Features
Four hot-pluggable systems (nodes) in a 2U form factor. Each node supports the following:
1. Dual socket R3 (LGA 2011) supports Intel® Xeon® processor E5-2600 v4 family; QPI up to 9.6GT/s
2. Up to 2TB ECC 3DS LRDIMM, up to DDR4-2400MHz; 16 DIMM slots
3. 1x PCI-E 3.0 x16 slot
4. Intel® i350-AM2 dual-port GbE LAN
5. Integrated IPMI 2.0 with KVM and Dedicated LAN
6. 6 x 2.5" Hot-swap SAS/SATA HDD Bays Per Server (24 drives total in chassis)
7. LSI 3108 SAS3 controller (6 ports); RAID 0, 1, 5, 6, 10, 50, 60
8. 2000W Redundant Power Supplies, Titanium Level (96%)
- 18. Support
• 24x7 support with 4-hour response; targeted repairs within 24 hours
• Covers all hardware including switching, cabling, compute nodes and storage
• Includes OpenStack and underlying Ubuntu Linux OS support
• Does not cover the operating systems installed on compute nodes or MarkLogic itself
• Annual Support Costs Per Rack: $xxx
• Buy two years in advance, get third year included at no additional charge
- 19. Contact Info
Contact Ken Proulx or Mark Osell at PRX Technologies for more
information on our OpenRack:Ready offering.