Archive for the ‘Technology’ Category

Technologies shaping the Realtime Web (Part 1)

 

Introduction

The purpose of this series is to introduce innovative technologies and standards that are shaping the Realtime Web. In this post we will explore two emerging standards and their implementations: SPDY and WebSocket (along with SSE). Over the series I intend to explore the state of innovation across the entire web stack, from browsers and browser add-on platforms to databases, and how each layer helps the real-time web. Along the way I will also explore how certain technologies can work together to make new user experiences and use cases possible.

SPDY protocol

SPDY (pronounced “speedy”) is an application layer protocol conceived and developed primarily at Google with the goal of solving problems in HTTP to make the web faster and more secure. It does so by finding innovative solutions to existing problems and by observing and leveraging new trends in web architecture, traffic patterns, and hardware and software infrastructure. To stay backward compatible with existing HTTP-only server-side infrastructure (while supporting new SPDY-enabled servers), it modifies how HTTP requests and responses are sent over the wire rather than replacing HTTP semantics. As of July 2012, SPDY is a de facto standard and has an extremely high potential of becoming the basis of HTTP/2.0.
Today’s HTTP/1.1 protocol was standardized and implemented by browsers and servers back in 1999-2000. At that time pages were tiny; for instance, Google’s home page was only 13 to 14 KB uncompressed (source: Google Developers video on SPDY).

Problems with HTTP

  • One TCP connection is used for every request, and most browsers limit the number of connections that can be made to a single domain. Though most browsers support keep-alive connections, requests on a connection are still serialized, and by far most browsers don't support pipelining.
  • Redundant headers are sent across a sequence of requests. For example, the User-Agent and version headers are not going to change while fetching the various resources of the same page.
  • Request and response headers are uncompressed. Think about cookies: sent uncompressed, they increase the size of requests and responses dramatically. With the computing resources increasingly available on the client (and with efficient compression algorithms), compression time can be kept small.
  • Only the client can make requests. Even if the server knows what resources the client will need, it's not possible for the server to send those resources proactively. Say the server is serving an HTML page request: it already knows about the CSS, JS and images, so it could actually "push" them without waiting for a request from the browser.
  • Overall, the current web architecture is reactive because of its request/response paradigm.

Features of SPDY

  • Multiplexed, unlimited concurrent request streams over a single TCP connection. This reduces the number of connections made, handshake costs, etc. It also lets the client (browser) reliably send the full set of compressed request headers on the first request and only a reduced set on subsequent requests.
  • The browser can make as many requests as it wants and can assign a priority to each. The server will process and respond to requests based on the priority assigned. It's worth a look at HTTP Archive to understand the growing trends in this regard.
  • Fewer packets are used thanks to compression on both sides (request and response). This inherently takes advantage of increased computing power, keeping compression and decompression fast; a rough sketch after this list illustrates the effect.
  • Server push delivers data to the client via the X-Associated-Content header, which informs the client that the server is pushing content even before it is requested, enhancing the user experience and speed.
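SPDY compresses headers with zlib, primed with a dictionary of common header names. As a rough illustration of why this matters, the sketch below (plain DEFLATE from the JDK, not SPDY's actual dictionary-primed scheme; the header values are made up) compresses a typical request-header block and prints the size difference:

```java
import java.util.zip.Deflater;

// Rough illustration only: plain DEFLATE over a made-up header block,
// not SPDY's dictionary-primed zlib compression.
public class HeaderCompressionDemo {
    public static void main(String[] args) {
        String headers =
            "Host: www.example.com\r\n" +
            "User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0\r\n" +
            "Accept: text/html,application/xhtml+xml\r\n" +
            "Accept-Language: en-us,en;q=0.5\r\n" +
            "Cookie: session=abc123; prefs=dark; tracking=xyz789\r\n";
        byte[] input = headers.getBytes();

        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] output = new byte[input.length * 2];
        int compressedLength = deflater.deflate(output);
        deflater.end();

        System.out.printf("raw: %d bytes, deflated: %d bytes%n",
                input.length, compressedLength);
    }
}
```

Repeated header names and cookie values compress well, and because SPDY keeps a single connection per origin, the compression context pays off across every request on it.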

Why SPDY as application layer protocol?

  • Features like multiplexed streams could be implemented in the transport layer. But modifying the transport layer means upgrading the firmware of existing routers deployed across homes and businesses, and it requires OS-level changes to the TCP implementation, which in turn might change how HTTP interacts with TCP. Making SPDY an application layer protocol only requires changes to the web server and the browser (client), both of which are evolving quickly anyway.

Browsers supporting SPDY

  • Presently Google Chrome, Firefox (version 11+) and Amazon Silk support SPDY; Opera supports it in its beta. You can keep track of browser support on the Wikipedia page

Technologies supporting SPDY

  • Apache HTTPD: There is an Apache module by Google which adds SPDY support. The documentation and examples can be found here
  • Jetty: A web server and Java EE web container that has added support for SPDY. It also provides server-side and client-side libraries
  • Mongrel2: A “language agnostic” web server that supports SPDY
  • Node.js: A platform built on Google's V8 JavaScript engine for building fast, scalable network applications. Even a web server can be built using the Node.js http module. There is also a SPDY project for Node.js on GitHub

Popular web properties supporting SPDY

Google's own HTTPS services (Search, Gmail) serve SPDY to supporting browsers, and Twitter enabled SPDY support in 2012.

SPDY for Developers

  • Jetty doesn't require any special actions as long as the required modules are set up
  • Get wildcard certificates (for example, *.yahoo.com), since SPDY runs over TLS and one connection can then serve multiple subdomains
  • Don't shard hostnames. Sharding helps HTTP/1.1 work around per-domain connection limits, but it causes SPDY to make multiple TCP connections instead of multiplexing over one

WebSocket and SSE Notifications

WebSocket and SSE (Server-Sent Events) are communication standards that are part of the HTML5 effort. WebSocket provides a full-duplex, bi-directional communication channel over a single TCP connection, which means that once the connection is established the server can push data to the browser at any time, making it a valuable communication mechanism for the real-time web. SSE is simpler: a one-way, server-to-browser event stream delivered over a regular HTTP response and consumed in the browser through the EventSource API. If you're curious about the various server-side and client-side (browser) implementations of WebSocket, you can follow Wikipedia here.
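To make the SSE half concrete, here is a minimal server-side sketch (a hypothetical servlet; the class name and the one-second clock are assumptions, not from any particular product). SSE is just a long-lived HTTP response of MIME type text/event-stream in which each event is a "data:" line followed by a blank line:

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal SSE sketch: stream ten timestamp events, one per second.
public class ClockEventsServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/event-stream");
        resp.setCharacterEncoding("UTF-8");
        PrintWriter out = resp.getWriter();
        try {
            for (int i = 0; i < 10; i++) {
                // Each SSE event is "data: <payload>" terminated by a blank line.
                out.write("data: " + System.currentTimeMillis() + "\n\n");
                out.flush(); // push the event to the browser immediately
                Thread.sleep(1000);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

On the browser side a few lines of JavaScript consume this with `new EventSource(url)` and an `onmessage` handler; the browser also reconnects automatically if the stream drops.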

In this post we will examine Kaazing, which provides server-side infrastructure for WebSocket. Kaazing's WebSocket Gateway (written in Java) acts as an intermediary and supports various application-level standards/protocols including HTML5, JMS, AMQP, etc. Kaazing pioneered WebSocket and worked to bring it into the HTML5 specification. If you're interested, you can watch an interview with Jonas Jacobi, CEO of Kaazing. Kaazing is clearly targeted at the enterprise market.

Let's first look, in simple terms, at what the Kaazing WebSocket Gateway enterprise solution has to offer:

  • Using WebSocket emulation, old browsers (including IE6) gain low-latency, real-time access to server-side data using browser technologies including JavaScript, Flex/Flash, Silverlight and Java/JavaFX. This means faster, more reliable time-to-market, especially for consumer-facing applications where you don't control which browser people use.
  • Security enhancements, including support for authentication and authorization (especially important for financial and gambling applications), SSO and DMZ deployment.
  • A robust architecture supporting high availability, load balancing, enterprise integration, unified security, etc., acting as an intermediary between the browser (client) and back-end message broker(s) or TCP server(s)
  • Provides a basic platform on top of which various application layer protocols can be implemented
  • The Gateway can scale back-end messaging systems far beyond their inherent capacity
  • The most innovative feature is Reverse Connectivity. It is essentially a security feature that allows you to close all inbound ports on your firewall while still allowing clients to connect to your WebSocket server. In my view (I haven't tested this), Reverse Connectivity might add some latency, since it introduces a proxy Gateway in between (speed vs. security).
  • Exhaustive developer documentation

Browsers today don't speak back-end protocols; this is currently handled at the application layer by transforming messages into a format that custom client-side scripts can understand, which adds latency. Kaazing instead provides low-latency infrastructure for connecting to back-end technologies. Kaazing Gateway JMS Edition integrates with popular message brokers such as TIBCO EMS, WebSphere MQ, UMQ and ActiveMQ. It also provides multiple client libraries supporting the JMS specification, so no middle application layer is necessary to translate JMS messages into an intermediary format.
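Since the client libraries follow the JMS specification, subscribing looks like ordinary JMS code. Here is a minimal sketch of a topic subscriber using only the standard javax.jms API; the JNDI names ("ConnectionFactory", "destination/ticker") are hypothetical placeholders, not Kaazing-specific values:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

// Minimal JMS topic subscriber: look up the factory and topic, then
// receive pushed messages asynchronously through a MessageListener.
public class TickerSubscriber {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("destination/ticker");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        session.createConsumer(topic).setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                try {
                    System.out.println(((TextMessage) message).getText());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        connection.start(); // begin delivery; onMessage fires as messages arrive
    }
}
```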

Key benefits of Kaazing Gateway JMS Edition

  • Integrates with any STOMP-compliant message broker. For documentation on integrating with various message brokers, click here
  • An end-to-end (front-end to back-end) JMS solution which fully integrates with existing popular messaging infrastructure: a new paradigm in enterprise messaging, enabling push through WebSocket
  • Balances the load from numerous clients and scales your back-end message broker by subscribing to the back end once and serving many clients (scalability)
  • Support for both Topics and Queues
  • Extends current message brokers to the Web, thereby reducing the cost of development and integration
  • Utilizes WebSocket as the transport for extending the reach of the JMS server to the client. No intermediary HTTP server is required; the client and server communicate directly. This also means less load on your web server!
  • Buffering is automatically enabled for slow consumers. If one client is slow in processing data, the other clients won't be clogged; each consumes at its own speed, preserving low latency
  • Enables utilization of the existing skill-set of developers with a minimal learning curve
  • Handles connections through firewalls and proxies efficiently, as long as the client JS library is included and configured properly in your web page
  • Extensive documentation and community support

Things to Consider

  • Make sure the appropriate ports are open to enable communication
  • Though Kaazing handles both Gateway and back-end failover scenarios, conscious architectural choices and testing are required to ensure high availability
  • Continuously test your application in new versions of popular (or enterprise) web browsers to ensure compatibility

Conclusion

In this post we have taken a look at SPDY and the Kaazing WebSocket Gateway, and at how they might help us build the future web. These are clearly innovative technologies aimed at making the web faster and more proactive. I feel standards sometimes kill innovation, but they are necessary for adoption, since businesses fear risk where there is no standard. It's a cycle: as the ecosystem evolves, various technologies and possibilities come together and innovation happens; then the best ideas bubble up to the top, become standards and remain for a while. In future posts I will explore browser technologies, back-end technologies, database technologies and the rest of the web architecture stack, to see where we can improve speed and how that will shape the real-time web.

Resources

How to use SPDY with Jetty?

Kaazing Documentation Center

HTTP Archive

SPDY Whitepaper

HTTPWatch for Firefox for monitoring SPDY requests among others

Inspecting WebSocket traffic in Chrome

FitNesse – Are you ready to SLIM?

Introduction

The purpose of this post is to give you a quick overview of what FitNesse is, how it works and the innovation behind it. As we know, software development is a collaborative process. There are many different people involved: product owner, business analyst, developer, QA analyst, support personnel, etc. There are also multiple technologies involved in any given application. Architecting, managing, designing, developing, testing and using software requires different skill sets and different depths of knowledge in the domain and the technology. The catch is that not everyone is knowledgeable in everything, especially the programming language (for business people) or the business direction and needs (for developers). FitNesse is an open source software testing framework built on Ward Cunningham's FIT framework and now further developed by Robert C. Martin (author of Clean Code). Ward captured this problem and came up with an innovative solution: FitNesse uses wiki pages to express test cases as simple, English-like scripts that every stakeholder in a software development project can read and write. Wiki pages are not new; they are used for documentation in almost all organizations, which makes it easy for the various actors to adopt.

How does it work?

The purpose of FitNesse is to automate functional testing (user acceptance testing) at the business level. It is developed in Java and includes a web server that hosts wiki pages. Each wiki page is written in a way (simple English) that can be understood by all the different participants on the team. FitNesse executes a page by mapping it to what it calls fixtures. Fixtures are simply programs with methods that perform functions such as connecting to a database and returning a query result, putting a message on a queue or topic, and so on. Fixtures can be written in various languages including Java, C# and Ruby. A developer or QA person writes the fixtures, and business users can then use the English-like syntax to write and execute scripts. Whenever a wiki test is executed, the fixture calls the System Under Test (SUT) with the appropriate parameters (for example, putting a message on a queue), receives the results from the SUT, and passes them back to the wiki front-end, which in turn visually indicates whether the test passed.

There are two types of test systems: SLIM and FIT. FIT is the older test system and is no longer being developed. FIT-style fixtures have to extend FitNesse classes, which introduces a dependency on the fixture framework into your code. SLIM fixtures are just POJOs (no FitNesse dependency) and hence are easily reusable for other purposes if necessary. SLIM is a lightweight version of the FIT protocol, with the design goal of being easy to port to other languages.
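To make this concrete, here is a sketch of a SLIM decision-table fixture modeled on the Division example from the FitNesse user guide; the wiki table shown in the comment drives the fixture. Note that the fixture is a plain POJO with no FitNesse imports:

```java
// Wiki page (SLIM decision table) that drives this fixture:
//
//   !define TEST_SYSTEM {slim}
//
//   |Division|
//   |numerator|denominator|quotient?|
//   |10       |2          |5.0      |
//   |12.6     |3          |4.2      |
//
// Input columns map to setters; each "?" column maps to a method
// that SLIM calls once per row and compares against the expected value.
public class Division {
    private double numerator;
    private double denominator;

    public void setNumerator(double numerator) { this.numerator = numerator; }
    public void setDenominator(double denominator) { this.denominator = denominator; }

    public double quotient() { return numerator / denominator; }
}
```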

To explore further, click here

In Short

User acceptance testing is a critical part of the software development lifecycle today. In this economy a software bug may cost anywhere from thousands to millions of dollars for companies of all sizes, which is obviously undesirable. A framework like FitNesse helps business users, developers, testers and end users collaborate easily through a well-understood interface such as a wiki. In fact, the end users involved in testing can author their own test cases against the SUT with little training, and can work with other members of the team to finalize and execute test cases that might not have been captured by QA.

Resources

FitNesse Website

FitNesse User Guide

FitNesse Download

FitNesse DZone RefCard

VoltDB – NewSQL Database

Introduction

The purpose of this post is to give an overview of the background, technical architecture and features of VoltDB. Traditional RDBMS systems like Oracle and Sybase were designed decades ago, with architectures based on the needs of their times. Michael Stonebraker and his team from MIT took a close look at current database needs, such as high throughput and low latency for query execution, high scalability, high availability, durability, real-time analytics and data integration, and architected VoltDB accordingly. Big Data requirements have spurred innovation in data storage and retrieval through NoSQL databases like Cassandra, MongoDB, etc., but VoltDB distinguishes itself from them as a NewSQL solution: a true RDBMS with support for ACID properties.

VoltDB Architecture

VoltDB has a highly scalable, distributed, shared-nothing architecture. VoltDB innovated in its architecture and design by exploiting current hardware and software trends such as multi-core processors, larger memories and MapReduce-style distributed query execution. In traditional RDBMS products, throughput and latency suffer from factors such as logging, latching, locking and buffer management; by serializing processing, VoltDB avoids these issues. Both scale-up (bigger memory, more CPU cores) and scale-out architectures are supported. In a scale-out architecture, data is stored in partitions residing on different nodes, but the organization of the data is transparent to the application. For query plans where the data doesn't reside in a single partition, MapReduce-style execution is used, which speeds up the query by an order of magnitude. VoltDB also comes with access libraries for various languages such as Java, C#, Python, C++, PHP, HTTP/JSON, Ruby, Node.js, etc.

Features

* Data is stored in memory, in partitions (based on CPU cores), keeping data and its associated processing constructs together.

* Data is processed sequentially within a partition (one transaction at a time), which avoids the multi-threading issues around logging and latching. Traditional databases actually write to disk twice, once to the write-ahead log and once to commit the data, which reduces throughput.

* To achieve high throughput and low latency on SQL operations, VoltDB stores data in memory and by default partitions tables on the primary key or on a key specified by the developer/designer.

* Recommends the use of stored procedures, each of which is treated as a single transaction that either commits or rolls back (see the sketch after this list).

* Supports JDBC, so ad-hoc queries can be performed. According to VoltDB, further enhancement here is a high priority on the product roadmap; VoltDB still encourages stored procedures to speed up transactions.

* Materialized views are easy to create and are refreshed automatically when the underlying table data changes. Even in the worst case VoltDB claims only a 15% performance hit.

* Durability is achieved through continuous snapshots and command logging, by which data is written to persistent storage.

* High availability is ensured by what is called K-safety (a K-safety of 1 means 2 copies of each partition); automatic network fault detection and live node rejoin are other supporting features.

* Static data tables (or any table) can be replicated across partitions for faster joins.

* Fetching data from multiple partitions is coordinated automatically, and the architecture ensures that throughput is kept at a maximum.

* Partitions can be “resynched” automatically (or by manual trigger) if any node in the cluster fails.

* Excellent documentation and support (very responsive and truthful).

* Support for real-time analytics, making it well suited for business intelligence and fraud detection applications.

* It is a Cloudera-certified technology.
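As referenced in the stored-procedure point above, here is a minimal sketch following VoltDB's documented VoltProcedure pattern; the table, columns and procedure name are hypothetical. SQL is declared up front in SQLStmt fields so VoltDB can precompile it, and the whole run() method executes as one transaction:

```java
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Minimal VoltDB stored procedure: one precompiled statement, one transaction.
// Any failure inside run() rolls the whole transaction back.
public class GetCustomer extends VoltProcedure {

    public final SQLStmt selectCustomer = new SQLStmt(
        "SELECT customer_id, name, balance FROM customers WHERE customer_id = ?;");

    public VoltTable[] run(long customerId) {
        voltQueueSQL(selectCustomer, customerId); // queue the parameterized query
        return voltExecuteSQL(true);              // execute; true marks the final batch
    }
}
```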

A few things to consider and remember:

* There is no Hibernate dialect, so Java developers trying to use VoltDB must be aware of this. But there is a Java client API (see the sketch after this list).

* A service window is required to add/remove cluster members (according to VoltDB, eliminating this is a high priority).

* Custom design and development are required to mirror existing SQL databases and take advantage of high-throughput querying. This too is an ongoing priority per VoltDB.

* Independent study: http://www.mysqlperformanceblog.com/2011/02/28/is-voltdb-really-as-scalable-as-they-claim/
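For completeness, here is the Java client API sketch referenced above; the host and the GetCustomer procedure are hypothetical, carried over from the stored-procedure sketch earlier:

```java
import org.voltdb.VoltTable;
import org.voltdb.client.Client;
import org.voltdb.client.ClientFactory;
import org.voltdb.client.ClientResponse;

// Minimal VoltDB Java client: connect, invoke a stored procedure, read rows.
public class CustomerLookup {
    public static void main(String[] args) throws Exception {
        Client client = ClientFactory.createClient();
        client.createConnection("localhost"); // default client port is 21212

        ClientResponse response = client.callProcedure("GetCustomer", 42L);
        if (response.getStatus() == ClientResponse.SUCCESS) {
            VoltTable rows = response.getResults()[0];
            while (rows.advanceRow()) {
                System.out.println(rows.getString("name"));
            }
        }
        client.close();
    }
}
```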

Conclusion

There is no doubt that VoltDB is a high-performance, innovative RDBMS with high availability and scalability. This post summarizes various aspects of its architecture and capabilities, which you can use as a guide to explore further and see how VoltDB can help in your environment. They also provide tools for Hadoop integration, which can be used to derive intelligence and detect anomalies in your data.

References

VoltDB technical architecture documentation

Website Extension Paradigms – Commenting Systems

Website Extension Paradigms are primarily ways of extending the functionality of a website through a third-party provider's service. This trend has emerged in recent years and is growing rapidly with the explosion of websites offering API support (click to learn more about APIs). It opens the door to innovation in business models by combining data from various APIs in interesting ways, in what are called mashups.

ProgrammableWeb.com reports that the Social, Internet, Telephony, Reference and Government categories added more than 2000 APIs in 2011. Most of these APIs allow a limited number of calls for free and charge for higher usage. With the proliferation of technologies like Apache Hadoop, it is now possible for companies to analyze, process and extract intelligence like never before (thanks to companies like Pentaho, Cloudera and Hortonworks). Companies can now leverage this intelligence and sell it through their APIs.

By Website Extension Paradigms I mean:

  • Extending a website’s functionality by utilizing third-party software and services.  For example, a commenting system.
  • Another form of extension is to host your service on a third-party domain and extend their functionality (Think Facebook Apps).
  • Mashups


I am planning a series of blog posts in which I will explore the different services and platforms used in this context. In this post I will focus on the various commenting systems on the market today; as a publisher, you can use this as a guide to choosing a commenting system for your site/blog.

Some important criteria to consider while choosing a commenting system:

* Real Time comments
* Comment Moderation with multiple moderators if you need
* Anti-spam
* Search Engine crawl-ability
* Social Integration
* Notifications on new comments and replies
* JavaScript library(ies) the product uses; make sure there are no conflicts with your site

Disqus: http://www.disqus.com/

Realtime comments – Both posting and updating
Inline media embedding – YouTube, Flickr, etc.
Fully compatible with mobile websites for commenting while on the go.
Comment Storage: Stored in Disqus and can optionally be synced with your blog provider
Akismet: Protection from web spam
Notification: Subscribe via RSS or email
Like button support for both the page and individual comments
Reactions – Pull mentions of your page on Twitter back into the Disqus conversation on your own site
Comment sorting options include popular now, best rating, newest first, oldest first, etc.
Used By: CNN, Time, Fox News etc.

Echo: http://aboutecho.com/products/real-time-comments.html

Realtime comments – All comment streams are updated without need for manual refresh
Top Commenters – Moderators can mark users as Top Commenters and those comments will automatically bubble up
A cloud service called Echo StreamServer which stores all comments.  They can be retrieved as Activity Streams
PostRank – Intelligent comment ranking algorithm
Integration with Social Networks
Multiple Sort Orders – Like most recent, most popular comments
White Label Solution – Fully customizable look and feel
Analytics – Get insights.  Slice data by article, data source and date ranges
Integrates easily wherever JavaScript is supported
Subscribe via RSS or email

DiscussIt: http://www.pnyxe.com/DiscussIt-comment-system

Author Reputation
Search engine indexing
Automatic SPAM and profanity filters
Automatically inherits your web design
Inserting polls into posts
Integration with popular social networks
A free, ad-supported version is available with most features

InstaComments: http://www.instacomment.com/

Fully hosted, no download needed
Inserting polls into posts
Data migration from existing system
Comment Ranking
Author reputation
Different versions are available at different price ranges with varying features
Integrates well on Blogger, WordPress, Tumblr, Weebly and more
Full moderation over posts

Intense Debate: http://www.intensedebate.com/

Owned by Automattic (Strong WordPress support)
Comment Threading – Reply directly to specific comments with nested replies
Email Notifications – Respond to and moderate comments via email
Subscribe via RSS
Guest Commenting – No Account or Sign up necessary
Integrates with Facebook Connect, Twitter, Gravatar, etc.
Trackbacks and Linkbacks are synced
Widgets for comment stats, most popular posts, etc.

Livefyre: http://www.livefyre.com/

Social User Tagging:  Tag Facebook and Twitter friends from inside the comment box
Allow users to Sign In using their Social Networking credentials which includes Facebook, Twitter, Google, LinkedIn and also OpenID
Users can share their comments in their favorite social networking sites
Real-time interaction with comment bubbles, new listener count.
Search engine crawlable
Follow conversations through email
Allows comment ratings
Linkback – Lets other Livefyre bloggers leave a link to their latest post
Integration with Blogger, Drupal, etc. is on their development plan

Facebook Comments: http://developers.facebook.com/docs/reference/plugins/comments/

Part of Social Plugins
Must be associated with a Facebook Application
Look and feel cannot be customized
Optionally post on your Facebook News Feed
No integration to other social networks
Used by Techcrunch

Recession, Transparency and Social Computing

As banks and other major institutions are bailed out by the government, one thing that stands out is the need for transparency and accountability. The Obama administration is doing its part with initiatives such as recovery.gov to provide greater transparency in government. When it comes to businesses, the major challenge for policy makers (and leaders) is to conceive a framework that balances transparency against exposing legitimate business strategies. As more and more businesses reshape their traditional hierarchies into networked hierarchies, people at all levels of the organization will have greater opportunity to contribute toward business goals. Businesses that don't independently transform themselves into this new organizational structure will be forced into it through layoffs. For example, I was talking to a friend of mine who works as a developer at an investment bank, and according to him, if you're not hands-on, you're out. That was not the case a couple of years ago; the organizational structure there used to be highly hierarchical.

The new organizational structure provides a way to drive innovation by tapping into the collective intelligence of the organization. At every level of the organization there is business intelligence that can help shape business strategy or reduce costs. If enterprise applications are built with the intent to tap into these social capabilities, they will drive innovation. It is up to policy makers, regulators and business leaders to choose and drive these initiatives.

Tech Tip #1: Signing up Users

I'm planning to write a series of posts under "Tech Tip". The purpose of these posts is to share ideas and tips on how companies can leverage technology in their solutions to gain competitive advantage. I also intend to provide technology implementation details as applicable.

One of the major goals for a startup is to motivate users to sign up for its service and eventually to sustain that user base. With a bunch of startups launching every day, it is quite overwhelming for users to remember credentials for all of them. As Joshua Porter puts it in his Usage Lifecycle, the challenge is to sign up an unaware or merely interested user. Even early adopters who might be interested in trying your service may not be motivated. But we can solve this problem by leveraging existing authentication technologies.

It is very likely that your interested user already has a Hotmail, Yahoo, Facebook, Google or OpenID account.

Allowing your users to authenticate using these services will definitely increase the probability that an enthusiastic user signs up. These services provide libraries and documentation on their respective developer web sites.

Clickpass

Clickpass is a startup providing a single sign-on service that requires no effort from the end user and gives your site the convenience of authenticating users through their Google, Facebook, Hotmail and Yahoo accounts. They provide extensive developer documentation, which can be found here. TechCrunch also has coverage of this service. However, there are also views like these that you should be aware of before making your decision.

OAuth

OAuth is an open initiative defining an open protocol for secure third-party authorization between websites. It is starting to gain traction, and their blog is worth keeping an eye on.
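As a taste of what OAuth (1.0a) involves under the hood, here is a minimal sketch of its HMAC-SHA1 signing step using only the JDK. Assembling the signature base string (normalized HTTP method, URL and parameters) is omitted, and the inputs shown are hypothetical:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

// Minimal OAuth 1.0a signing sketch: HMAC-SHA1 over the signature base
// string, keyed by "consumerSecret&tokenSecret" (both percent-encoded
// per the spec; assumed already encoded here).
public class OAuthSigner {
    public static String sign(String baseString, String consumerSecret,
                              String tokenSecret) throws Exception {
        String key = consumerSecret + "&" + tokenSecret;
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key.getBytes("UTF-8"), "HmacSHA1"));
        byte[] signature = mac.doFinal(baseString.getBytes("UTF-8"));
        return Base64.getEncoder().encodeToString(signature);
    }

    public static void main(String[] args) throws Exception {
        String baseString = "GET&https%3A%2F%2Fapi.example.com%2Fresource&count%3D10";
        System.out.println(sign(baseString, "consumer-secret", "token-secret"));
    }
}
```

The resulting Base64 value is sent as the oauth_signature parameter; the server recomputes it and compares.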

Resources

Live ID Web Authentication System
Yahoo BBauth
Google Account Authentication API

Evaluating AJAX Framework

Today, building a new web application involves the essential step of evaluating AJAX frameworks and selecting an appropriate one. In this post I will detail the various criteria that should be considered while making that decision.

Adoption Criteria

These criteria help IT managers (or the EA strategy team) decide whether it is even worth the development team's time to take a look and evaluate.

  • Licensing Model: Under what license(s) is this product offered? How would that affect your organization?
  • Cost: How much does the framework cost (upfront)? Also consider cost of development tools, support, consulting? How many free updates are there? If the framework is free, is there a PRO version? If so what are the benefits and cost?
  • Frequency of Releases: What is the frequency of releases/updates? Is it adequate? This shows how active the framework's community is.
  • Technology Maturity: How long has the framework been around? How stable are the releases? What is the philosophy on backward compatibility? What is the product road map?
  • Talent Pool: Is there a talent pool available for this framework? What is the expected learning curve? Input from the development team is certainly helpful here.

Development Criteria

These criteria will help developers assess the framework's viability.

  • UI Components: Does the toolkit offer a rich set of mature components? What is the future road map for new components? Are the components customizable?
  • Programming Model: What kind of programming paradigm is supported? Is it strongly typed or dynamic? Is the model familiar to developers? If not, what is the learning curve?
  • Web Framework Integration: Are there web frameworks that provide some out-of-the-box support? Are there any conflicts (or challenges) in using this toolkit with your web framework?
  • Documentation Quality: Is there adequate, high-quality documentation available? Are there books available? If so, what are the reviews?
  • Browser Support: What browsers and versions are supported by the toolkit? What is the road map? Are the supported browsers sufficient for the requirements? What does the community say about this?
  • IDE Support: Is there IDE support? How much does it cost? How does it fit in with the currently used IDE?
  • i18n: Is there support for multiple languages?
  • Utilities: Frameworks provide utilities like a browser history manager (back/forward button support), drag-and-drop, and Java-to-JavaScript serialization (for example, DWR). Depending on the specifics of the requirements, these should be considered.

Maintenance Criteria

These criteria help evaluate and foresee any maintenance challenges that may be encountered.

  • Community Support: What is the size of the community using the product? How active and responsive are the online forums? Is the blog updated frequently?
  • Hosting: Is there hosting support for the framework JS files? For example, Yahoo UI provides hosting support
  • Profiling: Is there built-in profiling support? If not, are there any external tools that can help? Do they have browser-related constraints?
  • Beta Components: Frameworks tend to offer a lot of beta components. For these, it is worth looking at their known issues and assessing the risk.

Conclusion

Analyzing and evaluating frameworks using the above criteria will help you make an informed decision and thereby avoid potential future issues. During the evaluation, if a framework is missing a particular feature or component that is available in another toolkit, make sure the two have no integration issues and can peacefully co-exist. Please feel free to suggest perspectives for improving the evaluation process.
