Introduction
The purpose of this series is to give you an introduction to the innovative technologies and standards that are shaping the real-time web. In this post we will explore a few emerging standards and their implementations: SPDY and WebSocket (along with SSE). Over the series I intend to explore the state of innovation across the entire web stack, from browsers and browser add-on platforms to databases, and how each piece is helping the real-time web. Along the way I will also explore how certain technologies can work together to make new user experiences and use cases possible.
SPDY protocol
SPDY (pronounced “speedy”) is an application layer protocol conceived and developed primarily at Google with the goal of solving problems in HTTP, making the web faster, and making it more secure. It does so by finding innovative solutions to existing problems and by observing and leveraging new trends in web architecture, traffic patterns, and hardware and software infrastructure. To stay backward compatible with existing HTTP-only server-side infrastructure (while supporting new SPDY-enabled servers), it modifies only the way HTTP requests and responses are sent over the wire. As of July 2012, SPDY is a de facto standard and has an extremely high potential of becoming part of HTTP/2.0.
Today’s HTTP/1.1 protocol was standardized and implemented by browsers and servers back in 1999-2000. At that time pages were tiny; the Google homepage, for instance, was only 13K to 14K uncompressed (source: Google Developers video on SPDY).
Problems with HTTP
- One TCP connection is used for every request, and most browsers limit the number of connections that can be made to a single domain. Although most browsers support keep-alive connections, requests on a connection are still serialized, and by far most browsers don’t support pipelining.
- Redundant headers across a sequence of requests. For example, User-Agent, version, etc. are not going to change while fetching the various resources of the same page.
- Uncompressed request and response headers. Think about cookies: sent uncompressed, they increase the size of requests and responses dramatically. With the computing resources increasingly available on the client (and with efficient compression algorithms), compression time can be kept small.
- Only the client can make requests. Even if the server knows what resources the client will need, it is not possible for the server to send those resources proactively. Say the server is serving an HTML page request: it already knows about the CSS, JS, and images the page references, so it could actually “push” them without waiting for a request from the browser.
- Overall, the current web architecture is reactive because of its request/response paradigm.
Features of SPDY
- Unlimited concurrent request streams multiplexed over a single TCP connection. This reduces the number of connections made, handshake costs, and so on. It also reliably lets the client (browser) send the full set of compressed request headers on the first request and only a reduced set on subsequent requests.
- The browser can make as many requests as it needs and can assign a priority to each of them. The server will process and respond to the requests based on the priority assigned. It is worth having a look at the HTTP Archive to understand the various growing trends in this regard.
- Fewer packets are used thanks to compression on either side (request and response). This inherently takes advantage of increased computing power, so that compressing and decompressing are fast.
- Server push sends data to the client via the X-Associated-Content header, which informs the client that the server is pushing content even before a request is made, thus enhancing speed and the user experience.
Why SPDY as an application layer protocol?
- Features like multiplexed streams could be implemented in the transport layer. But modifying the transport layer means upgrading the firmware of existing routers deployed across homes and businesses. It also requires OS-level changes to the TCP protocol implementation, which in turn means the HTTP-TCP interaction might need to change. Making SPDY an application layer protocol requires changes only to the web server and the browser (client), which are evolving quickly anyway.
Browsers supporting SPDY
- Presently Google Chrome, Firefox (version 11+), and Amazon Silk support SPDY, and Opera supports it in its beta releases. You can keep track of browser support on the Wikipedia page.
Technologies supporting SPDY
- Apache HTTPD: There is an Apache module from Google, mod_spdy, which adds SPDY support. The documentation and examples can be found here.
- Jetty: A web server and Java EE web container that has added support for SPDY. It also provides server-side and client-side SPDY libraries.
- Mongrel2: A “language agnostic” web server that supports SPDY.
- Node.js: A platform built on Google’s V8 JavaScript engine for building fast, scalable network applications; a web server can be built using its http module. There is also a SPDY module for Node.js (node-spdy) on GitHub.
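To make the multiplexing and server-push ideas concrete, here is a minimal sketch of a SPDY server using the node-spdy module mentioned above. The certificate paths are placeholders, and the exact res.push() signature has varied across node-spdy versions, so treat this as an illustration rather than the module’s definitive API.

```typescript
// Minimal SPDY server sketch using node-spdy (API details are an assumption;
// the push() signature in particular has changed between versions).
import * as fs from "fs";
// node-spdy has no bundled type definitions, so fall back to require().
const spdy = require("spdy");

const options = {
  key: fs.readFileSync("keys/server-key.pem"),   // placeholder TLS key
  cert: fs.readFileSync("keys/server-cert.pem"), // placeholder TLS certificate
};

const server = spdy.createServer(options, (req: any, res: any) => {
  // Server push: proactively send a stylesheet the page is known to need,
  // instead of waiting for the browser to request it.
  res.push("/style.css", { "content-type": "text/css" }, (err: any, stream: any) => {
    if (!err) {
      stream.end("body { font-family: sans-serif; }");
    }
  });

  res.writeHead(200, { "content-type": "text/html" });
  res.end("<link rel='stylesheet' href='/style.css'>Hello over SPDY");
});

// SPDY is negotiated over TLS (NPN), so this behaves like an https server.
server.listen(8443);
```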
Popular web properties supporting SPDY
SPDY for Developers
- Jetty doesn’t require any special actions as long as the required modules are set up
- Get wildcard certificates (for example, *.yahoo.com); SPDY runs over SSL/TLS, so valid certificates are required.
- Don’t shard hostnames. Doing so will cause SPDY to open multiple TCP connections, defeating the benefit of multiplexing everything over one.
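Once the server side is set up, it helps to verify that pages really are being served over SPDY. Chrome’s chrome://net-internals page shows active SPDY sessions, and Chrome also exposes a non-standard chrome.loadTimes() call with a wasFetchedViaSpdy flag; the sketch below assumes that Chrome-only API and simply logs the result.

```typescript
// Chrome-only check for whether the current page was delivered over SPDY.
// chrome.loadTimes() is a non-standard API, hence the defensive checks.
declare const chrome: any; // non-standard global, present only in Chrome

function wasFetchedViaSpdy(): boolean | undefined {
  if (typeof chrome !== "undefined" && typeof chrome.loadTimes === "function") {
    return chrome.loadTimes().wasFetchedViaSpdy;
  }
  return undefined; // unknown in other browsers
}

console.log("Fetched via SPDY:", wasFetchedViaSpdy());
```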
WebSocket and SSE Notifications
WebSocket and SSE (Server-Sent Events) are communication standards that are part of the HTML5 effort. WebSocket provides a full-duplex, bi-directional communication channel over a single TCP connection. This means that, once the connection is established, the server can push data to the browser at any time, making it a valuable communication mechanism for the real-time web. If you’re curious about the various server-side and client (browser) side implementations of WebSocket, you can follow the Wikipedia page here.
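To see the difference between the two from the browser’s point of view, here is a minimal client-side sketch. The endpoint URLs are placeholders, not a real service.

```typescript
// WebSocket: full-duplex, so the browser can both send and receive on one connection.
const socket = new WebSocket("wss://example.com/updates"); // placeholder endpoint

socket.onopen = () => {
  socket.send(JSON.stringify({ subscribe: "prices" })); // client -> server
};

socket.onmessage = (event: MessageEvent) => {
  console.log("Pushed over WebSocket:", event.data);    // server -> client, any time
};

// Server-Sent Events: one-way push from server to browser over plain HTTP.
const source = new EventSource("/events"); // placeholder endpoint

source.onmessage = (event: MessageEvent) => {
  console.log("Pushed over SSE:", event.data);
};
```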
In this post we will examine Kaazing, which provides server-side infrastructure for WebSocket. The Kaazing WebSocket Gateway (written in Java) acts as an intermediary and supports various application-level standards and protocols including HTML5, JMS, AMQP, etc. Kaazing pioneered WebSocket and worked to bring it into the HTML5 specification. If you’re interested, you can watch an interview with Jonas Jacobi, CEO of Kaazing. Kaazing is clearly targeted at the enterprise market.
Let’s first look, in simple terms, at what the Kaazing WebSocket Gateway enterprise solution has to offer:
- Using WebSocket emulation, older browsers (including IE6) gain low-latency, real-time access to server-side data from client technologies including JavaScript, Flex/Flash, Silverlight, Java/JavaFX, etc. This means faster and more reliable time-to-market, especially for consumer-facing applications where you don’t have control over the browser people use.
- Security enhancements, including support for authentication, authorization (especially important for financial and gambling applications), single sign-on (SSO), and DMZ deployment.
- A robust architecture supporting high availability, load balancing, enterprise integration, unified security, etc., acting as an intermediary between the browser (client) and back-end message broker(s) or TCP server(s)
- Provides a basic platform on top of which various application layer protocols can be implemented
- The Gateway can scale back-end messaging systems far beyond their inherent capacity
- The most innovative feature is Reverse Connectivity. Basically, it is a security feature that allows you to close all inbound ports on your firewall while still allowing clients to connect to your WebSocket server. In my view (I haven’t tested this), Reverse Connectivity might increase latency as it introduces a proxy Gateway in between (speed vs. security).
- Exhaustive developer documentation
Today’s browsers don’t speak the back-end protocols, and this issue is usually handled at the application layer by transforming messages into a format that custom client-side scripts can understand, which increases latency. Kaazing provides low-latency infrastructure for connecting to back-end technologies. Kaazing Gateway JMS Edition provides integration with popular message brokers such as TIBCO EMS, WebSphere MQ, UMQ, and ActiveMQ. It also provides multiple client libraries supporting the JMS specification, so no application middle layer is necessary to translate JMS messages into an intermediary format.
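To give a feel for what “no middle layer” means in practice, the hypothetical sketch below shows a browser-side, JMS-style topic subscription delivered over WebSocket. The class and method names are loosely modeled on the standard JMS API and the gateway URL is a placeholder; they are assumptions for illustration, not the actual Kaazing client library API, so consult the Kaazing documentation for the real thing.

```typescript
// Hypothetical JMS-over-WebSocket client sketch. The ambient declaration below
// stands in for a vendor-provided library; all names are illustrative assumptions.
declare class JmsConnectionFactory {
  constructor(gatewayUrl: string);
  createConnection(onConnected: () => void): any;
}

const factory = new JmsConnectionFactory("wss://gateway.example.com/jms"); // placeholder URL

const connection = factory.createConnection(() => {
  const session = connection.createSession();         // JMS-style session
  const topic = session.createTopic("/topic/prices"); // a topic on the back-end broker
  const consumer = session.createConsumer(topic);

  consumer.setMessageListener((message: any) => {
    // Messages published to the broker arrive here, pushed over WebSocket.
    console.log("Price update:", message.getText());
  });

  connection.start(); // begin message delivery
});
```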
Key benefits of Kaazing Gateway JMS Edition
- Integrates with any STOMP-compliant message broker (a sketch of the raw STOMP frames involved follows this list). For documentation on integrating with various message brokers, click here
- An end-to-end (front-end to back-end) JMS solution that fully integrates with existing popular messaging infrastructure: a new paradigm in enterprise messaging and communication, enabling push through WebSocket.
- Balances the load from numerous clients and scales your backend message broker by subscribing to the backend once and serving many clients (scalability)
- Support for both Topics and Queues
- Extends current message brokers to the web, thereby reducing the cost of development and integration
- Utilizes WebSocket as the transport for extending the reach of the JMS server to the client. No intermediary HTTP server is required; the client and server communicate with each other directly. This also means less load on your web server!
- Buffering is automatically enabled for slow consumers. This means that if one client is slow in processing data, the other clients won’t be clogged; each keeps processing at its own speed, hence low latency
- Enables developers to use their existing skill set, with a minimal learning curve
- Handles connections through firewalls and proxies efficiently, as long as the JS client library is included and configured properly in your web page.
- Extensive documentation and community support
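For context on the STOMP integration point above: STOMP is a simple, text-based frame protocol, and the gateway speaks it to the broker on the client’s behalf while the JMS client libraries hide the frames from application code. The sketch below shows what a raw STOMP connect and subscribe look like when a broker exposes STOMP directly over WebSocket (the URL and port are placeholders, and this bypasses the gateway entirely).

```typescript
// Illustrative raw STOMP frames over a plain WebSocket, assuming a broker that
// exposes a STOMP-over-WebSocket endpoint directly (URL/port are placeholders).
const NUL = "\u0000"; // STOMP frames are NUL-terminated

const ws = new WebSocket("ws://broker.example.com:61614/stomp");

ws.onopen = () => {
  // CONNECT: open a STOMP session with the broker.
  ws.send("CONNECT\naccept-version:1.1\nhost:broker.example.com\n\n" + NUL);

  // SUBSCRIBE: ask the broker to push messages from a topic to this client.
  ws.send("SUBSCRIBE\nid:sub-0\ndestination:/topic/prices\nack:auto\n\n" + NUL);
};

ws.onmessage = (event: MessageEvent) => {
  // CONNECTED and MESSAGE frames from the broker arrive here.
  console.log("STOMP frame:", event.data);
};
```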
Things to Consider
- Make sure the appropriate ports are open to enable communication
- Though Kaazing handles Gateway clustering and failover scenarios, conscious architectural choices and testing are still required to ensure high availability
- Continuously test your application in new versions of popular (or enterprise) web browsers to ensure compatibility
Conclusion
In this post we have taken a look at SPDY and the Kaazing WebSocket Gateway, and at how they might help us build the future web. These are clearly innovative technologies aimed at making the web faster and more proactive. I feel that standards sometimes kill innovation, but they are necessary for adoption, since businesses fear risk when there is no standard. It is a cycle: as the ecosystem develops (the world evolves), various technologies and possibilities come together and innovation happens (exciting); then the best ideas bubble up to the top, become standards, and remain for a while. In future posts I will explore browser technologies, back-end technologies, database technologies, and the rest of the web architecture stack to see where we can improve speed and how that will help shape the real-time web.
Resources
HTTPWatch for Firefox, for monitoring SPDY requests among other things