JBoss Enterprise Data Grid & Websockets
Delivering Real Time Push at Scale
Mark Addy, Consultant
Fast, Reliable, Manageable & Secure
Agenda
• Web Sockets
• Data Grids
• Demo
• Code
HTTP
What’s the Problem?
Client Server
Request
GET /index.html HTTP/1.1
Generate Content or Find File
Return Content
HTTP/1.1 200 OK
Process Response / Render Page
HTTP is a request-response protocol
HTTP is half-duplex – one way traffic
All conversations are started by the Client
What’s the Problem?
• Blocking request – response
• Client initiates all conversations
• Only the client or the server can talk at any point in time
• HTTP is stateless – lots of redundant data
• New connection required for each transaction
We want the latest view...
JUDCon 2013- JBoss Data Grid and WebSockets: Delivering Real Time Push at Scale
What are the use cases?
What’s the Solution?
One, two, three, PUSH!
Simulated Push
Request
HTTP Polling
Client Server
Request
Events
Response
Response
HTTP Polling
• Regular requests at a set interval
• Near real-time
• Server events may occur between requests
• Wastes Network Resources
• Piggyback Optimization
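As an illustrative sketch (not from the talk), the waste can be made concrete: poll a server-side version counter at a fixed interval and count the requests that come back with nothing new.

```java
// Conceptual sketch of HTTP polling: a client polls a server-side version
// counter every tick; polls that see no new version are wasted round trips.
public class PollingDemo {

    /** Poll 'ticks' times against a server that publishes one event every 'eventEvery' ticks. */
    public static int wastedPolls(int ticks, int eventEvery) {
        int lastSeen = 0, wasted = 0;
        for (int t = 1; t <= ticks; t++) {
            int serverVersion = t / eventEvery;   // server state at tick t
            if (serverVersion == lastSeen) {
                wasted++;                         // request returned nothing new
            }
            lastSeen = serverVersion;
        }
        return wasted;
    }

    public static void main(String[] args) {
        // 10 polls, one server event every 5 polls
        System.out.println(wastedPolls(10, 5));   // 8 of 10 requests are redundant
    }
}
```

With one event every five polls, eight of ten requests are redundant round trips; only a perfectly matched interval avoids waste.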
Request
Long Polling
Client Server
Request
Events
Response
Response
Long Polling
• Connection Kept Open
• Response blocked until event or timeout
• Resource hungry on the Server
• High message volumes will cause issues
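The hold-until-event-or-timeout behaviour can be sketched with a plain `BlockingQueue` standing in for the servlet container (illustrative Java, not code from the talk):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of long polling: the "server" parks the pending request on a queue
// and answers as soon as an event arrives, or with an empty response on timeout.
public class LongPollDemo {
    private final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    /** Server side: block the held request until an event or the timeout. */
    public String awaitEvent(long timeoutMs) {
        try {
            String event = events.poll(timeoutMs, TimeUnit.MILLISECONDS);
            return event != null ? event : "";    // empty response on timeout
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "";
        }
    }

    public void publish(String event) {
        events.offer(event);
    }

    public static void main(String[] args) {
        LongPollDemo server = new LongPollDemo();
        // An event published while the request is held is delivered immediately...
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            server.publish("score-update");
        }).start();
        System.out.println(server.awaitEvent(1000));
        // ...while a quiet period just ties up a held connection until timeout.
        System.out.println(server.awaitEvent(100));
    }
}
```

Each held request occupies server resources for its whole wait, which is why high message volumes degenerate into a continuous loop of immediate polls.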
Request
HTTP Streaming
Client Server
Event → Response
Event → Response
Event → Response
HTTP Streaming
• Long Lived HTTP / XMLHttpRequest
• Resource hungry on Server
• Browser needs to close and reconnect the streaming
channel to release memory
• No Connection State or Failure detection for IFrames
• XHR multipart response support is not standard
• Buffering Proxies don’t help
Request 1
Reverse Ajax / Comet
Client Server
Request 2
Response 2
Response 1
Event
Reverse Ajax / Comet
• Utilizes HTTP Streaming or Long Polling
• Complex development
• Poor scalability
• Resource Intensive
WebSockets
The Future...
• Part of HTML 5
• Part of Java EE 7
• Full Duplex communication between Browser and Server
• Allows Web Servers to push updates to browsers
• Better than “long polling”
• Dedicated connection to the Backend Web Socket Server
Definitions
• RFC 6455 defines the protocol
• W3C WebSockets
– http://dev.w3.org/html5/websockets/
• W3C Server Side Events
– http://dev.w3.org/html5/eventsource/
• JSR 356: Java API for WebSocket
WebSocket
Handshake
Client Server
Event
HTTP
WebSocket
One
TCP/IP
Connection
Handshake
GET /chat HTTP/1.1
Origin: http://example.com
Host: server.example.com
Sec-WebSocket-Key: uRovscZj…bTt5mw==
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Version: 13
Client Server
HTTP/1.1 101 Switching Protocols
Origin: http://example.com
Sec-WebSocket-Accept: HSlY…pGaGWk=
Connection: Upgrade
Sec-WebSocket-Location: ws://…
Upgrade: websocket
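The `Sec-WebSocket-Accept` value in the response is derived from the client's `Sec-WebSocket-Key`. A minimal sketch of that derivation (RFC 6455: append a fixed GUID, SHA-1, Base64; the sample key below is the one from the RFC, not from this handshake):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// How a WebSocket server computes Sec-WebSocket-Accept from the client's key.
public class HandshakeDemo {
    // Fixed GUID defined by RFC 6455
    private static final String MAGIC_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    public static String acceptFor(String secWebSocketKey) {
        try {
            byte[] sha1 = MessageDigest.getInstance("SHA-1")
                .digest((secWebSocketKey + MAGIC_GUID).getBytes(StandardCharsets.US_ASCII));
            return Base64.getEncoder().encodeToString(sha1);
        } catch (NoSuchAlgorithmException e) {   // SHA-1 is always present in the JDK
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Sample key from RFC 6455 section 1.3
        System.out.println(acceptFor("dGhlIHNhbXBsZSBub25jZQ=="));
        // -> s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
    }
}
```

The key exchange proves the server actually speaks WebSocket rather than blindly echoing headers.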
Websocket exchange
• Text or Binary “Frames”
– 2 Bytes per frame
– 1 million requests http:// ~ 1GB (1KB headers)
– 1 million requests ws:// ~ 2MB
• Secure wss://
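The slide's arithmetic, spelled out (the ~1 KB HTTP header and 2-byte frame header are the round figures assumed above):

```java
// Back-of-envelope: per-message overhead of HTTP headers versus WebSocket framing
// over one million messages.
public class OverheadDemo {

    public static long httpOverheadBytes(long messages, long headerBytes) {
        return messages * headerBytes;
    }

    public static long wsOverheadBytes(long messages) {
        return messages * 2;   // minimal WebSocket frame header
    }

    public static void main(String[] args) {
        long n = 1_000_000;
        System.out.println(httpOverheadBytes(n, 1024)); // 1,024,000,000 bytes ~ 1 GB of headers
        System.out.println(wsOverheadBytes(n));         // 2,000,000 bytes ~ 2 MB of framing
    }
}
```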
But...
• Proxies can still be problematic
• Load-balancer support not universal
Benefits
• Reduced latency
• Reduced network traffic
• Reduced CPU/memory usage on the server
• Full-Duplex
• Scalable
• Simplified development
WebSocket Support
Chrome 4+
Internet Explorer 10+
Firefox 4+
Opera 10.7+
Safari 5+
Browser
WebSocket(location,protocol)
Function onmessage
Function onopen
Function onclose
Function onerror
close()
send(data)
JavaScript API
JSR 356 – Server Endpoints
@ServerEndpoint("/websocket")
public class WebSocketEndpoint implements Serializable {
@OnOpen
public void onOpen(Session session) { … }
@OnClose
public void onClose(Session session) { … }
@OnMessage
public void onMessage(Session session, String msg) { … }
@OnError
public void onError(Session session, Throwable t) { … }
}
JSR 356
Path Parameters
@ServerEndpoint("/websocket/{id}")
public class WebSocketEndpoint implements Serializable {
@OnOpen
public void onOpen(@PathParam("id") String id, Session session) { … }
Java Websocket Client
@ClientEndpoint // the target URI is supplied when connecting, not on the annotation
public class WebSocketClient implements Serializable { … }
Tomcat (7.0.28+)
Glassfish (3.1.+)
WildFly (8.0.0.Alpha1+)
...
Servers
Data Grids
Why Cache?
java.util.Map?
Replication
Distribution
Data Volume
Replicated = Number of Elements
Distributed = Number of Elements * Nodes
Number of Copies
Replication Overhead
Replicated ∝ Number of Nodes
Distributed ∝ Number of Copies
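The sizing rules on these slides can be sketched with illustrative numbers (not figures from the talk): replicated capacity is capped at one node's worth, while distributed capacity grows with the cluster, divided by the number of owners per entry.

```java
// Sketch of the slides' sizing rules for replicated vs distributed caches.
public class GridSizingDemo {

    /** Replicated: every node holds everything, so capacity is one node's worth. */
    public static long replicatedCapacity(long perNodeEntries, int nodes) {
        return perNodeEntries;   // adding nodes adds copies, not capacity
    }

    /** Distributed: entries are spread across nodes with numOwners copies each. */
    public static long distributedCapacity(long perNodeEntries, int nodes, int numOwners) {
        return perNodeEntries * nodes / numOwners;
    }

    public static void main(String[] args) {
        // 10 nodes, 1M entries per node, 2 owners per entry
        System.out.println(replicatedCapacity(1_000_000, 10));     // 1000000
        System.out.println(distributedCapacity(1_000_000, 10, 2)); // 5000000
    }
}
```

The same shape explains write overhead: a replicated put touches every node, a distributed put touches only `numOwners` nodes however large the cluster grows.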
Redundancy
Rebalancing
Even distribution of data
Recovery of redundancy
Hashing
cache.get(K)
Hashing
Hashing algorithm determines owner nodes
The Grid knows where the data is
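A toy version of the idea (illustrative only; Infinispan's real consistent hash is segment-based and far more sophisticated): hash the key and map it deterministically to owner nodes, so any node can locate an entry without a directory lookup.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of hash-based data location: the same key always maps to
// the same primary owner plus backups, computable on any node.
public class OwnerLookupDemo {

    public static List<String> owners(String key, List<String> nodes, int numOwners) {
        int h = Math.floorMod(key.hashCode(), nodes.size());  // primary owner slot
        List<String> result = new ArrayList<>();
        for (int i = 0; i < numOwners; i++) {
            result.add(nodes.get((h + i) % nodes.size()));    // primary + backups
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node-a", "node-b", "node-c", "node-d");
        System.out.println(owners("match-42", nodes, 2));     // same key -> same two owners
    }
}
```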
Distributed Execution / Map Reduce
Task | Task | Task | Task → Result
Distributed Execution / Map Reduce
• Parallel processing
• Processing large data sets
– Not always possible to retrieve the data locally for
processing
– Network limitations
• Fault tolerant
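The shape of a distributed map/reduce job can be illustrated with plain Java streams standing in for the grid's executors (a conceptual sketch, not the Infinispan API): each "node" maps over its local partition, then the partial results are reduced into one answer.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Word-count style map/reduce over partitioned data. Each inner list plays the
// role of one node's local entries; the merge function is the reduce phase.
public class MapReduceDemo {

    public static Map<String, Integer> wordCount(List<List<String>> partitions) {
        return partitions.parallelStream()            // one "node" per partition
            .flatMap(List::stream)                    // map phase: emit words
            .collect(Collectors.toMap(w -> w, w -> 1, Integer::sum)); // reduce phase
    }

    public static void main(String[] args) {
        List<List<String>> partitions = List.of(
            List.of("goal", "foul", "goal"),
            List.of("goal", "corner"));
        System.out.println(wordCount(partitions));
    }
}
```

The point of doing this in the grid is that the map phase runs where the data lives, so only the small partial results cross the network.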
Client Server
Client Client
Client Server
• Decouples application from cache
– Independent JVM tuning
– Deployments are easier
– Independent Scaling
Events
cache.put(K, V)
Event Event
Events
Access to Cache Events
• Cache Entry Modified
• Cache Entry Created
• Cache Entry Removed
• ...
Events can drive push...
JBoss Enterprise Data Grid
JBoss Enterprise Data Grid
• Supported Version of Infinispan
• Client - Server
• Library (Embedded)
• Map Reduce
• Distributed Execution
• Event Notifications
• Query
Putting it all together...
Websocket Support
wildfly-8.0.0-Alpha1
Data Grid
jboss-datagrid-server-6.1.0
Server vs Library Mode
No support for Events in Client Server... yet
• ISPN-374 Events
• ISPN-484 Query
• ISPN-1094 Map Reduce
JBoss Data Grid Event Support
@Listener
public class MyListener {
@CacheEntryModified
public void modifiedEvent(CacheEntryModifiedEvent<K, V> event) { … }
@CacheEntryCreated
public void createdEvent(CacheEntryCreatedEvent<K, V> event) { … }
...
}
Event List
CacheStartedEvent
CacheStoppedEvent
CacheEntryModifiedEvent
CacheEntryCreatedEvent
CacheEntryRemovedEvent
CacheEntryVisitedEvent
CacheEntryLoadedEvent
CacheEntriesEvictedEvent
CacheEntryActivatedEvent
CacheEntryPassivatedEvent
ViewChangedEvent
TransactionRegisteredEvent
TransactionCompletedEvent
CacheEntryInvalidatedEvent
http://docs.jboss.org/infinispan/5.2/apidocs/org/infinispan/notifications/Listener.html
Events
cache.put(K, V)
Event Event
Event Duplication & Propagation...
HornetQ
• Configurable de-duplication
org.hornetq.core.message.impl.MessageImpl.HDR_DUPLICATE_DETECTION_ID
message.setStringProperty(HDR_DUPLICATE_DETECTION_ID, uniqueIdentifier);
• Push events into HornetQ
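The de-duplication idea can be sketched in isolation (illustrative Java, not HornetQ internals): every node that observes the cache event sends a message carrying the same duplicate-detection id, and the broker drops repeats by remembering recent ids. HornetQ does this when `HDR_DUPLICATE_DETECTION_ID` is set on the message.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of broker-side duplicate detection keyed on a message id.
public class DedupDemo {
    private final Set<String> seenIds = new LinkedHashSet<>();
    private final List<String> delivered = new ArrayList<>();

    /** Accept a message only if its duplicate-detection id has not been seen. */
    public boolean accept(String duplicateDetectionId, String body) {
        if (!seenIds.add(duplicateDetectionId)) {
            return false;                 // duplicate from another node: drop
        }
        delivered.add(body);
        return true;
    }

    public List<String> delivered() { return delivered; }

    public static void main(String[] args) {
        DedupDemo broker = new DedupDemo();
        // Two grid nodes both publish the same cache event with the same id
        broker.accept("match-42-v7", "score 2-1");
        broker.accept("match-42-v7", "score 2-1");   // dropped as duplicate
        broker.accept("match-42-v8", "score 2-2");
        System.out.println(broker.delivered());      // [score 2-1, score 2-2]
    }
}
```

A real broker bounds the id cache; the sketch keeps it unbounded for clarity.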
Events
cache.put(K, V)
Event Event
Consumers
cache.put(K, V)
@ServerEndpoint
@Listener
@MessageDriven
HornetQ JMS Topic
@CacheEntry
ModifiedEvent
CDI Event
JMS Consume
JMS Produce
Websocket
Distributed JBoss
Enterprise Data Grid
Demo
cache.put(K, V)
@ServerEndpoint
@Listener
@MessageDriven
HornetQ JMS Topic
@CacheEntry
ModifiedEvent
CDI Event
JMS Consume
JMS Produce
Websocket
Distributed JBoss
Enterprise Data Grid
Data Grid Configuration – Library Mode
@Startup
@Singleton
public class CacheManagerSingleton {
@PostConstruct
public void init() {
...
}
}
Cache Manager
GlobalConfiguration configuration = new GlobalConfigurationBuilder()
.clusteredDefault()
.transport().defaultTransport().clusterName(CLUSTER_NAME)
.globalJmxStatistics().enabled(true)
.serialization().addAdvancedExternalizer(new MatchExternalizer())
.build();
EmbeddedCacheManager manager = new DefaultCacheManager(configuration);
Cache
Configuration eventCacheConfiguration = new ConfigurationBuilder()
.jmxStatistics().enabled(true)
.clustering().cacheMode(CacheMode.DIST_ASYNC)
.l1().disable()
.hash().numOwners(2)
.build();
manager.defineConfiguration(CACHE_NAME, eventCacheConfiguration);
Cache<K, V> cache = manager.getCache(CACHE_NAME);
cache.addListener(queueSenderSessionBean);
@Listener
@Listener
@Singleton
public class EventNotificationSenderBean {
...
@CacheEntryModified
public void logModifiedEvent(CacheEntryModifiedEvent<String, Match> event) {
if (!event.isPre()) {
MatchUpdate matchUpdate = new MatchUpdate(event.getValue());
sendMessage(matchUpdate);
}
}
...
}
@MessageDriven
@MessageDriven({…activation configuration…})
public class EventNotificationMDB implements MessageListener {
@Inject
@InfinispanUpdateEvent
private Event<Message> event;
@Override
public void onMessage(Message message) {
event.fire(message);
}
}
@ServerEndpoint
@ServerEndpoint("/websocket")
public class WebSocketEndpoint implements Serializable {
@OnOpen
public void onOpen(final Session session) {
sessions.add(session);
}
public void onEventMessage(@Observes @InfinispanUpdateEvent Message msg) {
for (Session session : sessions) {
try {
session.getBasicRemote().sendText(((TextMessage) msg).getText());
} catch (IOException | JMSException e) {
// one broken session shouldn't fail the whole broadcast
}
}
}
}
Thanks for Listening
Any Questions?
http://www.c2b2.co.uk
http://blog.c2b2.co.uk
@c2b2consulting

Editor's Notes

  1. http 1.1 – keep alive
  2. http 1.1 – keep alive
  3. Request / Response headers can total 2KB alone. 1 million hits per day = 2GB just in headers. Compare with WebSockets at 2 bytes per frame = 2MB – a 500-to-1000 : 1 reduction in header sizes. http 1.1 – keep alive
  4. http://www.websocket.org/quantum.html
  5. With polling, the browser sends HTTP requests at regular intervals and immediately receives a response. This technique was the first attempt for the browser to deliver real-time information. Obviously, this is a good solution if the exact interval of message delivery is known, because you can synchronize the client request to occur only when information is available on the server. However, real-time data is often not that predictable, making unnecessary requests inevitable and, as a result, many connections are opened and closed needlessly in low-message-rate situations. Piggyback – mixed response, no interval.
  6. With polling, the browser sends HTTP requests at regular intervals and immediately receives a response. This technique was the first attempt for the browser to deliver real-time information. Obviously, this is a good solution if the exact interval of message delivery is known, because you can synchronize the client request to occur only when information is available on the server. However, real-time data is often not that predictable, making unnecessary requests inevitable and, as a result, many connections are opened and closed needlessly in low-message-rate situations. Good if you know when new data will be available. Unnecessary requests – often the server has no new information to send back, so many requests are wasteful. Use piggybacking to reset the polling interval by combining polling data with other requests.
  7. With long-polling, the browser sends a request to the server and the server keeps the request open for a set period. If a notification is received within that period, a response containing the message is sent to the client. If a notification is not received within the set time period, the server sends a response to terminate the open request. It is important to understand, however, that when you have a high message volume, long-polling does not provide any substantial performance improvements over traditional polling. In fact, it could be worse, because the long-polling might spin out of control into an unthrottled, continuous loop of immediate polls. Long polling is itself not a true push; long polling is a variation of the traditional polling technique, but it allows emulating a push mechanism under circumstances where a real push is not possible. With long polling, the client requests information from the server in a way similar to normal polling; however, if the server does not have any information available for the client, then instead of sending an empty response, the server holds the request and waits for information to become available (or for a suitable timeout event), after which a complete response is finally sent to the client.
  8. With long-polling, the browser sends a request to the server and the server keeps the request open for a set period. If a notification is received within that period, a response containing the message is sent to the client. If a notification is not received within the set time period, the server sends a response to terminate the open request. It is important to understand, however, that when you have a high message volume, long-polling does not provide any substantial performance improvements over traditional polling.  In fact, it could be worse, because the long-polling might spin out of control into an unthrottled, continuous loop of immediate polls.
  9. With streaming, the browser sends a complete request, but the server sends and maintains an open response that is continuously updated and kept open indefinitely (or for a set period of time). The response is then updated whenever a message is ready to be sent, but the server never signals to complete the response, thus keeping the connection open to deliver future messages. However, since streaming is still encapsulated in HTTP, intervening firewalls and proxy servers may choose to buffer the response, increasing the latency of the message delivery. Therefore, many streaming Comet solutions fall back to long-polling in case a buffering proxy server is detected. Alternatively, TLS (SSL) connections can be used to shield the response from being buffered, but in that case the setup and tear down of each connection taxes the available server resources more heavily.
  10. Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it.[1][2] Comet is an umbrella term, encompassing multiple techniques for achieving this interaction. All these methods rely on features included by default in browsers, such as JavaScript, rather than on non-default plugins. The Comet approach differs from the original model of the web, in which a browser requests a complete web page at a time.[3] The use of Comet techniques in web development predates the use of the word Comet as a neologism for the collective techniques. Comet is known by several other names, including Ajax Push,[4][5] Reverse Ajax,[6] Two-way-web,[7] HTTP Streaming,[7] and HTTP server push,[8] among others.[9] Specific methods of implementing Comet fall into two major categories: streaming and long polling.
  Streaming: An application using streaming Comet opens a single persistent connection from the client browser to the server for all Comet events. These events are incrementally handled and interpreted on the client side every time the server sends a new event, with neither side closing the connection.[3] Specific techniques for accomplishing streaming Comet include the following.
  Hidden iframe: A basic technique for a dynamic web application is to use a hidden iframe HTML element (an inline frame, which allows a website to embed one HTML document inside another). This invisible iframe is sent as a chunked block, which implicitly declares it as infinitely long (sometimes called a "forever frame"). As events occur, the iframe is gradually filled with script tags, containing JavaScript to be executed in the browser. Because browsers render HTML pages incrementally, each script tag is executed as it is received. Some browsers require a specific minimum document size before parsing and execution is started, which can be obtained by initially sending 1-2 kB of padding spaces.[11] One benefit of the iframe method is that it works in every common browser. Two downsides of this technique are the lack of a reliable error handling method, and the impossibility of tracking the state of the request calling process.[11]
  XMLHttpRequest: The XMLHttpRequest (XHR) object, the main tool used by Ajax applications for browser–server communication, can also be pressed into service for server–browser Comet messaging, in a few different ways. In 1995, Netscape Navigator added a feature called "server push", which allowed servers to send new versions of an image or HTML page to that browser, as part of a multipart HTTP response (see History section, below), using the content type multipart/x-mixed-replace. Since 2004, Gecko-based browsers such as Firefox accept multipart responses to XHR, which can therefore be used as a streaming Comet transport.[12] On the server side, each message is encoded as a separate portion of the multipart response, and on the client, the callback function provided to the XHR onreadystatechange function will be called as each message arrives. This functionality is included in Gecko-based browsers; there is discussion of adding it to WebKit.[13] Internet Explorer 10 also supports this functionality.[14] Instead of creating a multipart response, and depending on the browser to transparently parse each event, it is also possible to generate a custom data format for an XHR response, and parse out each event using browser-side JavaScript, relying only on the browser firing the onreadystatechange callback each time it receives new data.
  Ajax with long polling: None of the above streaming transports work across all modern browsers without negative side-effects. This forces Comet developers to implement several complex streaming transports, switching between them depending on the browser. Consequently many Comet applications use long polling, which is easier to implement on the browser side, and works, at minimum, in every browser that supports XHR. As the name suggests, long polling requires the client to poll the server for an event (or set of events). The browser makes an Ajax-style request to the server, which is kept open until the server has new data to send to the browser, which is sent to the browser in a complete response. The browser initiates a new long polling request in order to obtain subsequent events. Specific technologies for accomplishing long-polling include the following.
  XMLHttpRequest long polling: For the most part, XMLHttpRequest long polling works like any standard use of XHR. The browser makes an asynchronous request of the server, which may wait for data to be available before responding. The response can contain encoded data (typically XML or JSON) or JavaScript to be executed by the client. At the end of the processing of the response, the browser creates and sends another XHR, to await the next event. Thus the browser always keeps a request outstanding with the server, to be answered as each event occurs.
  Script tag long polling: While any Comet transport can be made to work across subdomains, none of the above transports can be used across different second-level domains (SLDs), due to browser security policies designed to prevent cross-site scripting attacks.[15] That is, if the main web page is served from one SLD, and the Comet server is located at another SLD (which does not have cross-origin resource sharing enabled), Comet events cannot be used to modify the HTML and DOM of the main page, using those transports. This problem can be sidestepped by creating a proxy server in front of one or both sources, making them appear to originate from the same domain. However, this is often undesirable for complexity or performance reasons. Unlike iframes or XMLHttpRequest objects, script tags can be pointed at any URI, and JavaScript code in the response will be executed in the current HTML document. This creates a potential security risk for both servers involved, though the risk to the data provider (in our case, the Comet server) can be avoided using JSONP. A long-polling Comet transport can be created by dynamically creating script elements, and setting their source to the location of the Comet server, which then sends back JavaScript (or JSONP) with some event as its payload. Each time the script request is completed, the browser opens a new one, just as in the XHR long polling case. This method has the advantage of being cross-browser while still allowing cross-domain implementations.[15]
  Comet applications attempt to eliminate the limitations of the page-by-page web model and traditional polling by offering real-time interaction, using a persistent or long-lasting HTTP connection between the server and the client. Since browsers and proxies are not designed with server events in mind, several techniques to achieve this have been developed, each with different benefits and drawbacks. The biggest hurdle is the HTTP 1.1 specification, which states that a browser should not have more than two simultaneous connections with a web server.[10] Therefore, holding one connection open for real-time events has a negative impact on browser usability: the browser may be blocked from sending a new request while waiting for the results of a previous request, e.g., a series of images. This can be worked around by creating a distinct hostname for real-time information, which is an alias for the same physical server. Before WebSocket, port 80 full-duplex communication was attainable using Comet channels; however, Comet implementation is nontrivial, and due to the TCP handshake and HTTP header overhead, it is inefficient for small messages.
  11. Comet using HTTP streaming: In streaming mode, one persistent connection is opened. There will only be a long-lived request (#1 in Figure 3), since each event arriving on the server side is sent through the same connection. Thus, it requires on the client side a way to separate the different responses coming through the same connection. Technically speaking, two common techniques for streaming include Forever Iframes (hidden IFrames) or the multi-part feature of the XMLHttpRequest object used to create Ajax requests in JavaScript.
  Forever Iframes: The Forever Iframes technique involves a hidden Iframe tag put in the page with its src attribute pointing to the servlet path returning server events. Each time an event is received, the servlet writes and flushes a new script tag with the JavaScript code inside. The iframe content will be appended with this script tag that will get executed. Advantages: simple to implement, and it works in all browsers supporting iframes. Disadvantages: there is no way to implement reliable error handling or to track the state of the connection, because all connection and data are handled by the browser through HTML tags. You then don't know when the connection is broken on either side.
  Multi-part XMLHttpRequest: The second technique, which is more reliable, is to use the multi-part flag supported by some browsers (such as Firefox) on the XMLHttpRequest object. An Ajax request is sent and kept open on the server side. Each time an event comes, a multi-part response is written through the same connection. Listing 6 shows an example. Interleaving requests and tying each request to its response relieves some of the server overhead on threads etc.; compared to other options it is better for firewalls than HTTP streaming. Comet uses either streaming or long-polling (depending on whether a buffering proxy is detected).
  Ultimately, all of these methods for providing real-time data involve HTTP request and response headers, which contain lots of additional, unnecessary header data and introduce latency. On top of that, full-duplex connectivity requires more than just the downstream connection from server to client. In an effort to simulate full-duplex communication over half-duplex HTTP, many of today's solutions use two connections: one for the downstream and one for the upstream. The maintenance and coordination of these two connections introduces significant overhead in terms of resource consumption and adds lots of complexity. Simply put, HTTP wasn't designed for real-time, full-duplex communication, as you can see in the following figure, which shows the complexities associated with building a Comet web application that displays real-time data from a back-end data source using a publish/subscribe model over half-duplex HTTP. It gets even worse when you try to scale out those Comet solutions to the masses. Simulating bi-directional browser communication over HTTP is error-prone and complex, and all that complexity does not scale. Even though your end users might be enjoying something that looks like a real-time web application, this "real-time" experience has an outrageously high price tag. It's a price that you will pay in additional latency, unnecessary network traffic and a drag on CPU performance.
  12. Bytes overhead is large on long polling – it can be up to a KB for request and response. Half duplex, so you need to maintain 2 sockets, one receiving responses and one sending requests, then correlate requests to responses. Defined in the Communications section of the HTML5 specification, HTML5 Web Sockets represents the next evolution of web communications – a full-duplex, bidirectional communications channel that operates through a single socket over the Web. HTML5 Web Sockets provides a true standard that you can use to build scalable, real-time web applications. In addition, since it provides a socket that is native to the browser, it eliminates many of the problems Comet solutions are prone to. Web Sockets removes the overhead and dramatically reduces complexity. To establish a WebSocket connection, the client and server upgrade from the HTTP protocol to the WebSocket protocol during their initial handshake, as shown in the following example:
  GET /text HTTP/1.1
  Upgrade: WebSocket
  Connection: Upgrade
  Host: www.websocket.org
  …
  HTTP/1.1 101 WebSocket Protocol Handshake
  Upgrade: WebSocket
  Connection: Upgrade
  …
  Once established, WebSocket data frames can be sent back and forth between the client and the server in full-duplex mode. Both text and binary frames can be sent full-duplex, in either direction at the same time. The data is minimally framed with just two bytes. In the case of text frames, each frame starts with a 0x00 byte, ends with a 0xFF byte, and contains UTF-8 data in between. WebSocket text frames use a terminator, while binary frames use a length prefix. Note: although the Web Sockets protocol is ready to support a diverse set of clients, it cannot deliver raw binary data to JavaScript, because JavaScript does not support a byte type. Therefore, binary data is ignored if the client is JavaScript – but it can be delivered to other clients that support it.
  13. Part of HTML 5, but the specifications are managed separately. W3C WebSockets – this specification defines an API that enables Web pages to use the WebSocket protocol (defined by the IETF) for two-way communication with a remote host. W3C Server Side Events – this specification defines an API for opening an HTTP connection for receiving push notifications from a server in the form of DOM events. The API is designed such that it can be extended to work with other push notification schemes such as Push SMS. Origin based security model (RFC 6454, The Web Origin Concept): this document defines the concept of an "origin", which is often used as the scope of authority or privilege by user agents. Typically, user agents isolate content retrieved from different origins to prevent malicious web site operators from interfering with the operation of benign web sites. In addition to outlining the principles that underlie the concept of origin, this document details how to determine the origin of a URI and how to serialize an origin into a string. It also defines an HTTP header field, named "Origin", that indicates which origins are associated with an HTTP request.
  14. The browser sends a request to the server, indicating that it wants to switch protocols from HTTP to WebSocket. The client expresses its desire through the Upgrade header:
  GET ws://echo.websocket.org/?encoding=text HTTP/1.1
  Origin: http://websocket.org
  Cookie: __utma=99as
  Connection: Upgrade
  Host: echo.websocket.org
  Sec-WebSocket-Key: uRovscZjNol/umbTt5uKmw==
  Upgrade: websocket
  Sec-WebSocket-Version: 13
  If the server understands the WebSocket protocol, it agrees to the protocol switch through the Upgrade header:
  HTTP/1.1 101 WebSocket Protocol Handshake
  Date: Fri, 10 Feb 2012 17:38:18 GMT
  Connection: Upgrade
  Server: Kaazing Gateway
  Upgrade: WebSocket
  Access-Control-Allow-Origin: http://websocket.org
  Access-Control-Allow-Credentials: true
  Sec-WebSocket-Accept: rLHCkw/SKsO9GAH/ZSFhBATDKrU=
  Access-Control-Allow-Headers: content-type
  At this point the HTTP connection breaks down and is replaced by the WebSocket connection over the same underlying TCP/IP connection. The WebSocket connection uses the same ports as HTTP (80) and HTTPS (443), by default. Once established, WebSocket data frames can be sent back and forth between the client and the server in full-duplex mode. Both text and binary frames can be sent in either direction at the same time. The data is minimally framed with just two bytes. In the case of text frames, each frame starts with a 0x00 byte, ends with a 0xFF byte, and contains UTF-8 data in between. WebSocket text frames use a terminator, while binary frames use a length prefix. Once the connection is established, the client and server can send WebSocket data or text frames back and forth in full-duplex mode. The data is minimally framed, with a small header followed by payload. WebSocket transmissions are described as "messages", where a single message can optionally be split across several data frames. This can allow for sending of messages where initial data is available but the complete length of the message is unknown (it sends one data frame after another until the end is reached and marked with the FIN bit). With extensions to the protocol, this can also be used for multiplexing several streams simultaneously (for instance to avoid monopolizing use of a socket for a single large payload). The protocol tells the server which sub-protocols the client can talk; the server chooses one. The key exchange is to ensure genuine WebSocket clients are talking to the server. The server must respond with a 101 Switching response and the correct key in the accept. Sec- headers cannot be set by an attacker using JavaScript and HTML.
  15. Handshake: In order to establish a websocket connection, a client (a web browser) sends a HTTP GET request with a number of HTTP headers. Among those headers there is the Sec-WebSocket-Key header, which contains a handshake key. According to the WebSocket protocol, the server should: concatenate the handshake key with the magic guid {258EAFA5-E914-47DA-95CA-C5AB0DC85B11}; take the SHA1 hash of the concatenation result; send the base64 equivalent of the hash in the HTTP response to the client.
  GET ws://localhost:8180/infinispan-websocket/websocket HTTP/1.1
  Pragma: no-cache
  Origin: http://localhost:8180
  Host: localhost:8180
  Sec-WebSocket-Key: fB3uLDCC37xOuKYvQ3wLIg==
  Upgrade: websocket
  Sec-WebSocket-Extensions: x-webkit-deflate-frame
  Cache-Control: no-cache
  Connection: Upgrade
  Sec-WebSocket-Version: 13
  HTTP/1.1 101 Switching Protocols
  Origin: http://localhost:8180
  Sec-WebSocket-Accept: oZgSoOq/S1ysb3GRLjlaZyfqddk=
  Connection: Upgrade
  Sec-WebSocket-Location: ws://localhost:8180/infinispan-websocket/websocket
  Content-Length: 0
  Upgrade: WebSocket
  16. WebSocket protocol client implementations try to detect if the user agent is configured to use a proxy when connecting to destination host and port and, if it is, uses HTTP CONNECT method to set up a persistent tunnel.While the WebSocket protocol itself is unaware of proxy servers and firewalls, it features an HTTP-compatible handshake so that HTTP servers can share their default HTTP and HTTPS ports (80 and 443) with a WebSocket gateway or server. The WebSocket protocol defines a ws:// and wss:// prefix to indicate a WebSocket and a WebSocket Secure connection, respectively. Both schemes use an HTTP upgrade mechanism to upgrade to the WebSocket protocol. Some proxy servers are harmless and work fine with WebSocket; others will prevent WebSocket from working correctly, causing the connection to fail. In some cases, additional proxy server configuration may be required, and certain proxy servers may need to be upgraded to support WebSocket.If unencrypted WebSocket traffic flows through an explicit or a transparent proxy server on its way to the WebSocket server, then, whether or not the proxy server behaves as it should, the connection is almost certainly bound to fail today (as WebSocket become more mainstream, proxy servers may become WebSocket aware). Therefore, unencrypted WebSocket connections should be used only in the simplest topologies.[9]If an encrypted WebSocket connection is used, then the use of Transport Layer Security (TLS) in the WebSocket Secure connection ensures that an HTTP CONNECT command is issued when the browser is configured to use an explicit proxy server. This sets up a tunnel, which provides low-level end-to-end TCP communication through the HTTP proxy, between the WebSocket Secure client and the WebSocket server. In the case of transparent proxy servers, the browser is unaware of the proxy server, so no HTTP CONNECT is sent. 
However, since the wire traffic is encrypted, intermediate transparent proxy servers may simply allow the encrypted traffic through, so there is a much better chance that the WebSocket connection will succeed if WebSocket Secure is used. Using encryption is not free of resource cost, but often provides the highest success rate.

A mid-2010 draft (version hixie-76) broke compatibility with reverse proxies and gateways by including 8 bytes of key data after the headers, but not advertising that data in a Content-Length: 8 header.[10] This data was not forwarded by all intermediates, which could lead to protocol failure. More recent drafts (e.g., hybi-09[11]) put the key data in a Sec-WebSocket-Key header, solving this problem.

http://www.infoq.com/articles/Web-Sockets-Proxy-Servers

Proxy Servers

A proxy server is a server that acts as an intermediary between a client and another server (for example, a web server on the Internet). Proxy servers are commonly used for content caching, Internet connectivity, security, and enterprise content filtering. Typically, a proxy server is set up between a private network and the Internet. Proxy servers can monitor traffic and close a connection if it has been open for too long. The problem with proxy servers for web applications that have a long-lived connection (for example, Comet HTTP streaming or HTML5 Web Sockets) is clear: HTTP proxy servers, which were originally designed for document transfer, may choose to close streaming or idle WebSocket connections because they appear to be trying to connect with an unresponsive HTTP server. This behavior is a problem with long-lived connections such as Web Sockets. Additionally, proxy servers may also buffer unencrypted HTTP responses, thereby introducing unpredictable latency during HTTP response streaming.

HTML5 Web Sockets and Proxy Servers

Let's take a look at how HTML5 Web Sockets work with proxy servers.
WebSocket connections use standard HTTP ports (80 and 443), which has prompted many to call it a "proxy server and firewall-friendly protocol." Therefore, HTML5 Web Sockets do not require new hardware to be installed or new ports to be opened on corporate networks, two things that would stop the adoption of any new protocol dead in its tracks. Without any intermediary servers (proxy or reverse proxy servers, firewalls, load-balancing routers and so on) between the browser and the WebSocket server, a WebSocket connection can be established smoothly, as long as both the server and the client understand the WebSocket protocol. However, in real environments, lots of network traffic is routed through intermediary servers.

A picture is worth a thousand words. Figure 1 shows a simplified network topology in which clients use a browser to access back-end TCP-based services over a full-duplex HTML5 WebSocket connection. Some clients are located inside a corporate network, protected by a corporate firewall and configured to access the Internet through explicit, or known, proxy servers, which may provide content caching and security, while other clients access the WebSocket server directly over the Internet. In both cases, the client requests may be routed through transparent, or unknown, proxy servers (for example, a proxy server in a data center or a reverse proxy server in front of the remote server). It is even possible for proxy servers to have their own explicit proxy servers, increasing the number of hops the WebSocket traffic has to make.
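As a minimal illustration of the tunnelling step described above, this sketch builds the HTTP CONNECT request a browser issues to an explicit proxy before the wss:// TLS handshake begins; once the proxy answers 200, it blindly relays bytes in both directions (the host, port and class name are illustrative):

```java
public class ConnectTunnel {

    // The CONNECT request asks the proxy to open a raw TCP tunnel to host:port;
    // the encrypted WebSocket handshake then flows through that tunnel untouched.
    static String connectRequest(String host, int port) {
        return "CONNECT " + host + ":" + port + " HTTP/1.1\r\n"
             + "Host: " + host + ":" + port + "\r\n"
             + "\r\n";
    }

    public static void main(String[] args) {
        System.out.print(connectRequest("example.com", 443));
    }
}
```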
  17.
• Latency – persistent connection; no need to re-establish TCP sockets
• Network – request and response headers reduced to ~2 bytes from ~1KB
• CPU / memory – servlets not tied up
• Full duplex – bi-directional communications
• Scalable
• Simplified development – no hidden iframes, tags or complex XHR / JavaScript; no complex proxies
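The "~2 bytes" figure mentioned above comes from the minimal WebSocket frame header. A rough sketch of building an unmasked server-to-client text frame per RFC 6455, assuming the payload is shorter than 126 bytes so the length fits in one byte:

```java
import java.nio.charset.StandardCharsets;

public class WsFrame {

    // Minimal unmasked text frame: 1 byte FIN/opcode + 1 byte mask/length + payload.
    static byte[] textFrame(String msg) {
        byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
        byte[] frame = new byte[2 + payload.length];
        frame[0] = (byte) 0x81;           // FIN = 1, opcode = 0x1 (text)
        frame[1] = (byte) payload.length; // mask bit 0: server frames are unmasked
        System.arraycopy(payload, 0, frame, 2, payload.length);
        return frame;
    }

    public static void main(String[] args) {
        byte[] f = textFrame("tick");
        System.out.println(f.length + " bytes on the wire, 2 of them header");
    }
}
```

Compare that 2-byte overhead with the ~1KB of headers attached to every HTTP request/response pair.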
  18. Although there were no known exploits, WebSocket support was disabled in Firefox 4 and 5,[7] and Opera 11.
  19. Traditional Cache
  20. Traditional Cache
  21. Traditional Cache
  22. Traditional Cache
  23. You have nothing to worry about! The grid is responsible for locating and maintaining the location of data. Can mention services such as KeyAffinity and Grouping, which do give you control over co-locating data (customer & customer audit) and pinning data to a node.
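A simplified sketch (my own toy scheme, not Infinispan's actual consistent-hash implementation) of why callers need not worry about data location: keys hash to segments, and every node maps a segment to the same ordered list of owners, so any node can locate any entry without a central directory:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class KeyLocator {

    static final int SEGMENTS = 60;

    // Every node computes the same segment for a given key
    static int segmentOf(Object key) {
        return Math.floorMod(key.hashCode(), SEGMENTS);
    }

    // Map the segment to numOwners nodes; the first is the primary owner,
    // the rest hold backup copies
    static List<String> ownersOf(Object key, List<String> nodes, int numOwners) {
        int segment = segmentOf(key);
        List<String> owners = new ArrayList<>();
        for (int i = 0; i < numOwners; i++) {
            owners.add(nodes.get((segment + i) % nodes.size()));
        }
        return owners;
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("nodeA", "nodeB", "nodeC");
        // Deterministic: the same key resolves to the same owners on every node
        System.out.println(ownersOf("customer:42", nodes, 2));
    }
}
```

Grouping-style co-location falls out of the same idea: hash the group name (e.g. the customer id) instead of the raw key, and a customer and its audit records land on the same owners.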
  24. Traditional Cache
  25. Traditional Cache
  26. Traditional Cache
  27. http://docs.jboss.org/infinispan/5.2/apidocs/org/infinispan/notifications/Listener.html
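The Listener API linked above is annotation-driven. A fragment (not runnable standalone, requires the Infinispan 5.2 jars on the classpath; the class name and callback body are illustrative) showing how grid events could be pushed to WebSocket clients:

```java
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;

// Registered with cache.addListener(new PushListener());
// each committed entry can then be pushed to connected WebSocket clients.
@Listener
public class PushListener {

    @CacheEntryCreated
    public void entryCreated(CacheEntryCreatedEvent event) {
        if (!event.isPre()) {   // fire only after the entry is committed
            push(event.getKey());
        }
    }

    private void push(Object key) {
        // illustrative: broadcast the change to open WebSocket sessions
    }
}
```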
  28. Traditional Cache
  29. Traditional Cache
  30. Traditional Cache
  31. Traditional Cache