Moving from Resolutions to Real Actions

It's that time of the year when you start thinking about resolutions for the next year (2013).   If you're like me, you will achieve some of your goals but not all of them.  And if you employ the same strategies that you employed last year to set up your resolutions, you are likely to end up in the same place.  Procrastination is something we all do; the degree to which we procrastinate varies a lot.  In my view, the major impediments to achieving our resolutions are lack of determination, focus, and resilience, along with procrastination, among others.  The purpose of this post is to share ideas that have worked for me in the past year.  If we want real results, we need to employ different ways.

There are thousands of articles/posts written about New Year resolutions, and lots of good ideas out there if you just Google for them.  There are even websites dedicated to making procrastination productive.  This post is just my 2 cents.

If you're reading this, you probably want to accomplish something.  It could be a professional or personal (or hybrid) project.  Your motivation for any project/task takes one of two forms: intrinsic or extrinsic.   Meaning, you are doing it either because you HAVE to (extrinsic) or because you WANT to (intrinsic).  The positive thing is that even in an extrinsic project there are some tasks you will find intrinsic motivation for.  Here is an excellent TED talk by Dan Pink about intrinsic and extrinsic motivation: http://www.ted.com/talks/dan_pink_on_motivation.html.  That's where I got the idea about these forms of motivation.  Here are some things to try:

  • Prioritize your goals
  • Break your projects/goals into tasks that you can accomplish and measure
  • Categorize your tasks/projects(goals): Boring, Difficult, Challenging, Research, Creative, Fun
  • Determine your passionate category in order: Creative, Research, Challenge, Difficult, Boring
  • Prioritize your tasks: Low, Medium, High, Urgent
  • Relaxation activities: Comedy, Meditation, Dining out, etc.
  • While setting deadlines, analyze your boring and difficult tasks and factor in some relaxation activities.

If most of your tasks are boring or difficult, just add some tasks that you are passionate about from your other projects and factor them into your deadline.  This planning takes some time, but believe me, it works.

I personally prefer to practice self-hypnosis and visualization.  You can find a lot of these videos online, on YouTube for instance.  It's the fastest way to put yourself on AUTOPILOT, especially for planning your day.  A word of caution: these choices have to be made based on your comfort level with the content.  Please be very careful as you make the choice, and stick with it every day.  Also, I suggest you consider including goals that develop your mental stamina, such as curiosity, determination, and focus.   Curiosity, for instance, can transform boring, difficult tasks into more interesting ones.

Online resources like LifeTick and Joe's Goals can also help. I have created a Goal Setting Template that you can consider using.  Also, please visit http://www.facebook.com/PositiveTags and support it by “Liking” it if you like it.  I will be posting more resolution-planning ideas there.   Please give me feedback in the comments section or email me about anything.

Recommended Reading:

Metrics That Matter Most for 2013 by Rajesh Setty

Goal Setting by Dr. Mani (Kindle Book)

Technologies shaping the Realtime Web (Part 1)

 

Introduction

The purpose of this series is to give you an introduction to innovative technologies and standards that are shaping the Realtime Web. In this post we will explore two emerging standards and their implementations: SPDY and WebSocket (plus SSE). In this series I intend to explore the state of innovation across the entire web stack, from browsers and browser add-on platforms to databases, and how it is helping the real-time web. Along the way I will also explore how certain technologies can work together to make new user experiences and use cases possible.

SPDY protocol

SPDY (pronounced “speedy”) is an application layer protocol conceived and developed primarily at Google with the goal of solving problems in HTTP, making the web faster, and building a more secure web. It does so by finding innovative solutions to existing problems and by observing and leveraging new trends in web architecture, traffic patterns, and hardware and software infrastructure. To remain backward compatible with existing HTTP-only server-side infrastructure (while supporting new SPDY-enabled servers), it modifies the way HTTP requests and responses are sent over the wire. As of July 2012, SPDY is a de facto standard and has an extremely high potential of becoming part of HTTP/2.0.
Today's HTTP/1.1 protocol was standardized and implemented by browsers and servers back in 1999-2000. At that time pages were tiny; for instance, Google's home page was only 13K to 14K uncompressed. (source: Google Developers video on SPDY)

Problems with HTTP

  • One TCP connection is needed for every request, and most browsers limit the number of connections that can be made to a single domain. Though most browsers support keep-alive connections, requests are still serialized, and by far most browsers don't support pipelining.
  • Redundant headers in a sequence of requests. For example, User-Agent, version, etc. are not going to change while fetching the various resources of the same page.
  • Uncompressed request and response headers. Think about cookies: left uncompressed, they increase the size of the request/response dramatically. With increasingly available computing resources on the client (and efficient compression algorithms), compression time can be reduced.
  • Only the client can make requests. Even if the server knows what resources the client will need, it's not possible for the server to send those resources proactively. Say the server is serving an HTML page request; it already knows about the CSS, JS, and images, so it could actually “push” them without waiting for requests from the browser.
  • Overall, the current web architecture is reactive because of its request/response paradigm.
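To see how much the redundant, uncompressed headers above cost, here is a small Python sketch (the header values are made up) that compresses the headers of 20 same-page requests the way SPDY compresses its header blocks:

```python
import zlib

# Hypothetical headers resent verbatim with every request on the same page;
# User-Agent, Accept, and cookies rarely change between those requests.
headers = (
    "GET /style.css HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1\r\n"
    "Accept: text/css,*/*;q=0.1\r\n"
    "Cookie: session=abc123; prefs=dark; tracking=xyz\r\n\r\n"
)

# Simulate 20 requests for resources on one page: HTTP/1.1 sends the headers
# uncompressed every time; SPDY sends one compressed header block per stream.
raw = (headers * 20).encode()
compressed = zlib.compress(raw)

print(len(raw), len(compressed))  # the repeated headers compress extremely well
```

The exact savings depend on the page, but highly repetitive header text routinely compresses by an order of magnitude.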

Features of SPDY

  • Multiplexed, unlimited concurrent request streams over a single TCP connection. This reduces the number of connections made, handshake costs, etc. It also reliably enables the client (browser) to send the full set of compressed request headers on the first request and a reduced set on subsequent requests.
  • The browser can make as many requests as it needs and can assign a priority to each. The server will process and respond to the requests based on the priority assigned. It's worth a look at the HTTP Archive to understand various growing trends in this regard.
  • Fewer packets are used due to compression on either side (request and response). This inherently takes advantage of increased computing power so that compressing and uncompressing are faster.
  • Server push sends data to the client via the X-Associated-Content header, informing the client that the server is pushing content even before a request is made, thus enhancing the user experience and speed.

Why SPDY as application layer protocol?

  • Features like multiplexed streams could be implemented in the transport layer. But modifying the transport layer means upgrading the firmware of existing routers deployed across homes and businesses. It also requires OS-level changes to the TCP protocol implementation, which in turn means the HTTP-TCP interaction might need to change. Making SPDY an application layer protocol requires changes only to web servers and browsers (clients), which are evolving anyway.

Browsers supporting SPDY

  • Presently Google Chrome, Firefox (version 11+), and Amazon Silk support SPDY; Opera supports it in its beta. You can keep track of support on the Wikipedia page

Technologies supporting SPDY

  • Apache HTTPD: There is an Apache module from Google which supports SPDY. The documentation and examples can be found here
  • Jetty: A web server and J2EE web container that has added support for SPDY. It also has server-side and client-side libraries
  • Mongrel2: A “language agnostic” web server that supports SPDY
  • Node.js: A platform built on Google's V8 JavaScript engine for building fast, scalable network applications. Even a web server can be built using the Node.js http module. There is also a SPDY project for Node.js on GitHub

Popular web properties supporting SPDY

SPDY for Developers

  • Jetty doesn’t require any special actions as long as the required modules are set up
  • Get wildcard certs (Example *.yahoo.com)
  • Don’t shard hostnames. Yes, doing so will cause SPDY to make multiple TCP connections.

WebSocket and SSE Notifications

WebSocket and SSE (Server-Sent Events) are communication standards, part of HTML5, that enhance how browsers and servers talk to each other. WebSocket provides a full-duplex, bi-directional communications channel over a single TCP connection. This means that once the connection is established, the server can push data to the browser anytime it wants, making it a valuable communication mechanism for the real-time web. If you're curious about the various server- and client-side (browser) implementations of WebSocket, you can follow Wikipedia here.
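SSE's wire format is simple enough to sketch. The browser's EventSource reads a `text/event-stream` response, accumulates `data:` lines, and dispatches an event at each blank line. Below is a deliberately minimal parser that illustrates the format (it ignores the `id:` and `retry:` fields of the real spec):

```python
# Minimal parser for the Server-Sent Events wire format, sketching how an
# EventSource turns a text/event-stream body into discrete events.
def parse_sse(stream: str):
    events, data, event_type = [], [], "message"
    for line in stream.split("\n"):
        if line == "":                      # blank line dispatches the event
            if data:
                events.append((event_type, "\n".join(data)))
            data, event_type = [], "message"
        elif line.startswith("data:"):
            data.append(line[5:].lstrip(" "))
        elif line.startswith("event:"):
            event_type = line[6:].lstrip(" ")
    return events

stream = "event: price\ndata: 42.10\n\ndata: hello\ndata: world\n\n"
print(parse_sse(stream))  # [('price', '42.10'), ('message', 'hello\nworld')]
```

Unlike WebSocket, SSE is one-directional (server to browser) and rides over plain HTTP, which is exactly why it is the simpler of the two standards.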

In this post we will examine Kaazing, which provides server-side infrastructure for WebSocket. Kaazing's WebSocket Gateway implementation (written in Java) acts as an intermediary and supports various application-level standards/protocols including HTML5, JMS, AMQP, etc. Kaazing pioneered WebSocket and worked to bring it into the HTML5 specification. If you're interested, you can watch an interview with Jonas Jacobi, CEO of Kaazing.  Kaazing is clearly targeted at the enterprise market.

Let's first look, in simple terms, at what the Kaazing WebSocket Gateway enterprise solution has to offer:

  • Using WebSocket emulation, old browsers (including IE6) can gain low-latency, real-time access to server-side data using browser technologies including JavaScript, Flex/Flash, Silverlight, and Java/JavaFX. This means faster, more reliable time-to-market, especially for consumer-facing applications where you don't have control over the browser people use.
  • Security enhancements, including support for Authentication and Authorization (especially important for financial and gambling applications), SSO, and DMZ deployment.
  • A robust architecture supporting High Availability, Load Balancing, Enterprise Integration, Unified Security, etc., which acts as an intermediary between the browser (client) and back-end message broker(s) or TCP server(s)
  • Provides a basic platform on top of which various application layer protocols can be implemented
  • The Gateway can scale back-end messaging systems far beyond their inherent capacity
  • The most innovative feature is Reverse Connectivity.  Basically, it's a security feature that allows you to close all inbound ports on your firewall while still allowing clients to connect to your WebSocket server.  In my view (I haven't tested this), Reverse Connectivity might increase latency, as we introduce a proxy Gateway in between (speed vs. security).
  • Exhaustive developer documentation

Browsers today don't speak back-end protocols; this is currently handled at the application layer by transforming messages into a format that custom client-side scripts can understand, which increases latency. Kaazing provides low-latency infrastructure to connect back-end technologies. Kaazing Gateway JMS Edition provides integration with popular messaging brokers such as TIBCO EMS, WebSphere MQ, UMQ, and ActiveMQ. It also provides multiple client libraries supporting the JMS specification, so no application middle layer is necessary to translate JMS messages into an intermediary format.

Key benefits of Kaazing Gateway JMS Edition

  • Integrate with any STOMP compliant message broker.  For documentation on integrating with various message brokers click here
  • End-to-End (from frontend to backend) JMS solution which fully integrates with existing popular messaging infrastructure. A new paradigm in enterprise messaging and communication enabling push through WebSocket.
  • Balances the load from numerous clients and scales your backend message broker by subscribing to the backend once and serving many clients (scalability)
  • Support for both Topics and Queues
  • Extends current message brokers to the Web, thereby reducing the cost of development and integration
  • Utilizes WebSocket as the transport for extending the reach of the JMS server to the client. No intermediary HTTP server is required; the client and server communicate directly with each other.  This also means less load on your web server!
  • Buffering is automatically enabled for slow consumers. This means that if a client is slow in processing data, the other clients won't be clogged; each processes at its own speed, hence low latency
  • Enables utilization of existing skill-set of developers with minimal learning curve
  • Handles connections through firewalls and proxies efficiently as long as the client library JS is included and configured properly in your web page.
  • Extensive documentation and community support
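Two of the benefits above (subscribe to the backend once and serve many clients; buffer independently for slow consumers) amount to a fan-out pattern. Here is a hypothetical sketch of that pattern in Python; this is not Kaazing's actual API, just the shape of the idea:

```python
from collections import deque

# Sketch of a gateway that holds ONE backend broker subscription and fans each
# message out to a per-client buffer, so a slow consumer never blocks others.
class Gateway:
    def __init__(self):
        self.clients = {}                 # client id -> its own buffer

    def connect(self, client_id):
        self.clients[client_id] = deque()

    def on_broker_message(self, msg):     # called by the single subscription
        for buf in self.clients.values():
            buf.append(msg)               # O(1) fan-out per client

    def poll(self, client_id):            # each client drains at its own pace
        buf = self.clients[client_id]
        return buf.popleft() if buf else None

gw = Gateway()
gw.connect("fast"); gw.connect("slow")
gw.on_broker_message("tick-1")
gw.on_broker_message("tick-2")
print(gw.poll("fast"), gw.poll("fast"))  # tick-1 tick-2
print(gw.poll("slow"))                   # tick-1 (slow client is still behind)
```

The broker sees one subscriber no matter how many browsers connect, which is where the scalability claim comes from.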

Things to Consider

  • Make sure appropriate ports are opened for enabling communication
  • Though Kaazing handles Gateway failover scenarios, conscious architectural choices and testing are required to ensure High Availability
  • Continuously test your application in new versions of popular (or enterprise) web browsers to ensure compatibility

Conclusion

In this post we have taken a look at SPDY and the Kaazing WebSocket Gateway, and at how they might help us build the future web. These are clearly innovative technologies that make the web faster and more proactive. I sometimes feel standards kill innovation, but they are necessary for adoption, as businesses fear risk when there is no standard.  It's a cycle: as the ecosystem develops (the world evolves), various technologies and possibilities come together and innovation happens (exciting).   Then the best ideas bubble up to the top, become standards, and remain for a while.  In future posts I will explore browser technologies, back-end technologies, database technologies, and more across the entire web architecture stack, to see where we can improve speed and how that is going to help shape the real-time web.

Resources

How to use SPDY with Jetty?

Kaazing Documentation Center

HTTP Archive

SPDY Whitepaper

HTTPWatch for Firefox for monitoring SPDY requests among others

Inspecting WebSocket traffic in Chrome

FitNesse – Are you ready to SLIM?

Introduction

The purpose of this post is to give you a quick overview of what FitNesse is, how it works, and the innovation behind it.  As we know, software development is a collaborative process. There are many different people involved: product owner, business analyst, developer, QA analyst, support personnel, etc.   There are also multiple technologies involved in any given application.  Architecting, managing, designing, developing, testing, and using software requires different skill sets and depths of knowledge in the domain and the technology.   The catch here is that not everyone is knowledgeable in everything, especially the programming language (for business people) or the business direction and needs (for developers).  FitNesse is an open source software testing framework built on Ward Cunningham's FIT framework and further developed by Robert C. Martin (author of Clean Code).  Ward captured this problem and came up with an innovative solution.   FitNesse utilizes wiki pages to express test cases in simple, English-style scripts which all the stakeholders of a software development project can use for testing.   Wiki pages are not new; they are used for documentation purposes in almost all organizations, so it becomes easier for the various actors to adopt.

How it works?

The purpose of FitNesse is to automate functional testing (User Acceptance Testing) at a business level.   It is developed in Java and includes a web server which hosts wiki pages.   Each wiki page is written in a way (simple English) that can be understood by all the different participants on the team.  FitNesse executes a page by mapping it to what are called Fixtures.   Fixtures are nothing but programs with methods that perform functions like connecting to a database and returning the result of a given query, putting a message on a queue or topic, etc.   Fixtures can be written in various languages including Java, C#, and Ruby.  A developer or QA person writes the Fixtures, and the business users just utilize the English-like syntax to execute scripts.  Whenever a wiki test is executed, the Fixture calls the System Under Test (SUT) with the appropriate parameters (for example, putting a message on a queue), receives the results from the SUT, and passes them back to the wiki front-end, which in turn visually indicates whether the test has passed.

There are two types of test systems: SLIM and FIT.  FIT is the older test system and is no longer being developed.  FIT-style fixtures have to extend FitNesse-related classes, thus introducing a dependency on the fixture framework into your code.  SLIM fixtures are just POJOs (no FitNesse dependency) and hence are easily reusable for other purposes if necessary.  SLIM is a lightweight version of the FIT protocol with the design goal of being easy to port to other languages.
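To make the POJO point concrete, here is the shape of a SLIM decision-table fixture, sketched in Python (in Java it would be a plain class with setters for input columns and a method per output column; the wiki table shown in the comment is hypothetical):

```python
# Sketch of a SLIM-style decision table fixture. Note there is no framework
# class to extend: it is a plain object, so it stays reusable elsewhere.
#
# Corresponding (hypothetical) wiki table:
# | Division |
# | numerator | denominator | quotient? |
# | 10        | 2           | 5         |
class Division:
    def set_numerator(self, n):   # one setter per input column
        self.numerator = n

    def set_denominator(self, d):
        self.denominator = d

    def quotient(self):           # the "quotient?" output column
        return self.numerator / self.denominator

# The test system drives the fixture row by row, exactly like this:
row = Division()
row.set_numerator(10)
row.set_denominator(2)
print(row.quotient())  # 5.0
```

For each table row, SLIM calls the setters for the input cells, calls the output method, and colors the cell green or red depending on whether the returned value matches the expectation.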

To explore further click here

In Short

User Acceptance Testing is a very critical part of the software development lifecycle today.  In this economy, a bug in software may cost companies of all sizes thousands to millions of dollars, and obviously that's not desirable.   A framework like FitNesse can help business users, developers, testers, and end users collaborate easily using a well-understood interface such as a wiki.  In fact, the end users involved in testing can author their own test cases against the SUT with little training, and can work together with the other members of the team to finalize and successfully execute test cases that might not have been captured by QA.

Resources

FitNesse Website

FitNesse User Guide

FitNesse Download

FitNesse DZone RefCard

VoltDB – NewSQL Database

Introduction

The purpose of this post is to give an overview of the background, technical architecture, and features of VoltDB.   Traditional RDBMS systems like Oracle and Sybase were designed decades ago, with architectures based on the needs of their times.   Michael Stonebraker and his team from MIT took a closer look at current database needs, such as high throughput and low latency for query execution, high scalability, high availability, durability, real-time analytics, and data integration, and architected VoltDB accordingly.  Big Data requirements have spurred innovation in data storage and retrieval through NoSQL database solutions like Cassandra, MongoDB, etc. But VoltDB distinguishes itself from them as a NewSQL solution: a true RDBMS with support for ACID properties.

VoltDB Architecture

VoltDB has a highly scalable, distributed, shared-nothing architecture.  VoltDB has innovated its architecture and design by exploiting current hardware and software trends such as multi-core processors, growing memory sizes, MapReduce-style distributed query execution, etc.  In traditional RDBMS products, throughput and latency are affected by factors such as logging, latching, locking, and buffer management.  By serializing processing, VoltDB avoids these issues.  Both scale-up (bigger memory, more CPU cores) and scale-out architectures are supported. In a scale-out architecture, data is stored in partitions residing on different nodes, but the organization of the data is transparent to the application.  For certain queries (where the data doesn't reside in a single partition), MapReduce-style query plans are used, which speeds up query execution by an order of magnitude.  VoltDB also comes with access libraries for various languages such as Java, C#, Python, C++, PHP, HTTP/JSON, Ruby, Node.js, etc.

Features

* Data is stored in-memory in partitions (which are based on CPU cores) and organizes data and associated processing constructs together.

* Processes data in a sequential fashion (one transaction at a time) and hence avoids multi-threading issues around logging and latching.   Traditional databases actually write to disk twice, once for logging (the write-ahead log) and once for committing data, which reduces throughput.

* In order to achieve high-throughput and low latency on SQL operations, VoltDB stores data in memory and is designed by default to partition on primary key or key specified by the developer/designer

* Recommends the use of stored procedures, each of which is treated as a single transaction (either committed or rolled back in full)

* Supports JDBC, so ad-hoc queries can be performed.  According to VoltDB, this is a high priority in their product roadmap for constant enhancement.  VoltDB encourages the use of stored procedures to speed up transactions

* Makes it easy to create materialized views, which are refreshed automatically when the underlying table data changes.  In the worst-case scenario VoltDB claims only a 15% performance hit.

* Durability is achieved through continuous snapshots and command-logging using which data is written to persistent storage.

* High availability is ensured by what is called K-safety (where a K-safety of 1 means 2 copies of each partition); automatic network fault detection and live node rejoin are some other features.

* Static data tables (or any table) can be replicated across partitions for faster joins

* Automatically co-ordinates fetch of data from multiple partitions and the architecture ensures that the throughput is kept at maximum

* Partitions can be resynched automatically (or by manual trigger) if any node in the cluster fails

* Excellent documentation and support (very responsive and truthful)

* Support for real-time analytics, making it well suited for Business Intelligence and fraud detection applications.

* It's a Cloudera certified technology
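The partitioning and K-safety ideas above can be sketched in a few lines of Python. This is not VoltDB's actual hashing scheme or placement algorithm, just an illustration of the concepts: a row is routed to a partition by hashing its partition key, and K-safety keeps K extra copies of that partition on other nodes (K=1 means 2 copies in total):

```python
import hashlib

NODES = ["node0", "node1", "node2"]
K = 1  # K-safety factor: K extra copies of each partition

def partition_for(key: str, partitions: int = len(NODES)) -> int:
    # Deterministic hash of the partition key picks the home partition,
    # so single-partition transactions can run without cross-node coordination.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % partitions

def replicas_for(key: str) -> list:
    p = partition_for(key)
    # Primary node plus K copies placed on the following nodes (round-robin).
    return [NODES[(p + i) % len(NODES)] for i in range(K + 1)]

for customer_id in ("cust-1", "cust-2", "cust-3"):
    print(customer_id, "->", replicas_for(customer_id))
```

Because the routing is a pure function of the key, any client library can compute where a row lives, and losing one node still leaves a live copy of every partition when K >= 1.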

Few things to consider and remember:

* Doesn't have a Hibernate dialect, so Java developers trying to use VoltDB must be aware of this.  But there is a Java API.

* A service window is required to add/remove cluster members (according to VoltDB, eliminating this is a high priority)

* Custom design and development are required to mirror existing SQL databases in order to take advantage of high-throughput querying.  This is also an ongoing priority as per VoltDB

* Independent study: http://www.mysqlperformanceblog.com/2011/02/28/is-voltdb-really-as-scalable-as-they-claim/

Conclusion

There is no doubt that VoltDB is a high-performance, innovative RDBMS with high availability and scalability.   This post has summarized various aspects of its architecture and capabilities, which you can use as a guide to explore further and see how VoltDB can help in your environment.   They also provide tools for Hadoop integration, which can be used to derive intelligence and detect anomalies in your data.

References

VoltDB technical architecture documentation

Website Extension Paradigms – Commenting Systems

Website Extension Paradigms are primarily ways of extending the functionality of a website via the service of a third-party provider.  This trend has emerged in recent years and is growing rapidly with the explosion of websites offering API support (click to learn more about APIs).  It opens the door for innovation in business models by combining data from various APIs in interesting ways, in what are called Mashups.

ProgrammableWeb.com reports that the Social, Internet, Telephony, Reference, and Government categories added more than 2,000 APIs in 2011.  Most of these APIs allow you to make a limited number of calls for free and charge for higher usage.  With the proliferation of technologies like Apache Hadoop, it is now possible for companies to analyze and process data and extract intelligence like never before (thanks to companies like Pentaho, Cloudera, and Hortonworks).  Companies can now leverage this intelligence and sell it through their APIs.

By Website Extension Paradigms I mean:

  • Extending a website’s functionality by utilizing third-party software and services.  For example, a commenting system.
  • Another form of extension is to host your service on a third-party domain and extend their functionality (Think Facebook Apps).
  • Mashups


I am planning to write a series of blog posts where I will explore different services and platforms that are used in this context.  In this post I will focus on various commenting systems that are in the market today and as a publisher you can use this as a guide in choosing a commenting system for your site/blog.

Some important criteria to consider while choosing a commenting system:

* Real Time comments
* Comment Moderation with multiple moderators if you need
* Anti-spam
* Search Engine crawl-ability
* Social Integration
* Notifications on new comments and replies
* JavaScript library(ies) the product uses (make sure there are no conflicts with your site)

Disqus: http://www.disqus.com/

Realtime comments – Both posting and updating
Inline media embedding – Youtube, Flickr etc
Fully compatible with mobile websites for commenting while on the go.
Comment Storage: Stored in Disqus and can optionally be synced with your blog provider
Akismet: Protection from web spam
Notification: Subscribe via RSS or email
Like button support for both the page and individual comments
Reactions – Pull mentions of your page on Twitter back into the Disqus conversation on your own site
Comment sorting options include popular now, best rating, newest first, oldest first, etc.
Used By: CNN, Time, Fox News etc.

Echo: http://aboutecho.com/products/real-time-comments.html

Realtime comments – All comment streams are updated without need for manual refresh
Top Commenters – Moderators can mark users as Top Commenters and those comments will automatically bubble up
A cloud service called Echo StreamServer which stores all comments.  They can be retrieved as Activity Streams
PostRank – Intelligent comment ranking algorithm
Integration with Social Networks
Multiple Sort Orders – Like most recent, most popular comments
White Label Solution – Fully customizable look and feel
Analytics – Get insights.  Slice data by article, data source and date ranges
Integrates easily wherever JavaScript is supported
Subscribe via RSS or email

DiscussIt: http://www.pnyxe.com/DiscussIt-comment-system

Author Reputation
Search engine indexing
Automatic SPAM and profanity filters
Automatically inherits your web design
Inserting polls into posts
Integration with popular social networks
Free version available and has most features with Ads

InstaComments: http://www.instacomment.com/

Fully hosted, no download needed
Inserting polls into posts
Data migration from existing system
Comment Ranking
Author reputation
Different versions are available at different price ranges with varying features
Integrates well with Blogger, WordPress, Tumblr, Weebly, and more
Full moderation over posts

Intense Debate: http://www.intensedebate.com/

Owned by Automattic (Strong WordPress support)
Comment Threading – Reply directly to specific comments with nested replies
Email Notifications – Respond to and moderate comments via email
Subscribe via RSS
Guest Commenting – No Account or Sign up necessary
Integrates with Facebook Connect, Twitter, Gravatar, etc.
Trackbacks and Linkbacks are synced
Widgets for comment stats, most popular posts, etc.

Livefyre: http://www.livefyre.com/

Social User Tagging:  Tag Facebook and Twitter friends from inside the comment box
Allow users to Sign In using their Social Networking credentials which includes Facebook, Twitter, Google, LinkedIn and also OpenID
Users can share their comments in their favorite social networking sites
Real-time interaction with comment bubbles, new listener count.
Search engine crawlable
Follow conversations through email
Allows comment ratings
Linkback – Lets the other Livefyre bloggers leave link to their latest post
Integration with Blogger, Drupal, etc. is on their development plan

Facebook Comments: http://developers.facebook.com/docs/reference/plugins/comments/

Part of Social Plugins
Must be associated with a Facebook Application
Look and feel cannot be customized
Optionally post on your Facebook News Feed
No integration to other social networks
Used by Techcrunch

Recession, Transparency and Social Computing

As banks and other major institutions are getting bailed out by the government, one thing that stands out is the need for transparency and accountability. The Obama administration is trying to do its part with initiatives such as recovery.gov, which provides greater transparency in government.   When it comes to businesses, the major challenge for policy makers (and leaders) is to conceive a framework that balances transparency against exposing legitimate business strategies.  As more and more businesses reshape their organizational structures from traditional hierarchies into networked hierarchies, people at various levels of the organization will have greater opportunity to contribute towards business goals.  If businesses don't independently transform themselves into this new organizational structure, they will be forced to do so through layoffs.  For example, I was talking to a friend of mine who works as a developer at an investment bank, and according to him, if you're not hands-on then you're out.  I know that this was not the case a couple of years back; it used to be a highly hierarchical organizational structure there.

The new organizational structure provides for a way to drive innovation by tapping into the collective intelligence of the organization.   At every level of the organization there is business intelligence that can help shape business strategy or reduce costs.   If applications in enterprise are built with an intent to tap into these social capabilities it will drive innovation.  It is up to the policy makers, regulators and business leaders to choose and drive these initiatives.

Personal Screening Framework

In any conversation, active listening is one of the most important communication skills to have. If we are not listening, then we are digesting information passively without getting the real message.  Listening also helps us develop real empathy for people.   In order to listen actively, it's necessary to develop tools and skills that will help us. In this post I will outline a framework that helps in quickly understanding people's strengths in a technology environment.

Each quadrant in the above picture shows a skill necessary in a technology business environment.   During a conversation it is useful to use this framework for listening to and understanding the other person's true strengths and interests.  It will also help you build purposeful conversation in areas of mutual interest and importance. This framework can be easily extended or morphed for different settings.   For example, one might not want to use this framework in a dating setting :)

A question to ask ourselves: What other settings do I encounter, and what tools can I develop for them?
