Archives For Uncategorized


The hype about Software Defined Networking is running high, with the Open Networking Summit seeing upwards of 1,000 attendees and a large number of vendors, incumbents of the networking world and small startups alike, clamoring to show off their gear. But sitting through the sessions and walking through the vendor demos told me one thing: if we thought SDNs were going to bring administrative, topological or operational efficiency to networking, that is probably not going to happen.

Why am I so pessimistic about this, when the world is more or less giddy with the endless possibilities that SDNs can bring in these directions? My pessimism has nothing to do with the fundamental capabilities of the technology. It has everything to do with the fact that the space is filled with carriers (and enterprises) who want to cut costs, incumbent vendors who want to stay in business and rescue their falling profit margins, new vendors who want to make it big and displace the incumbents, and a research community that is largely just giddy about emerging from decades of boredom induced by working on improvements to BGP, TCP and QoS that nobody in the real world cared about. That’s right: nobody is really thinking about the users, or about designing it right.

So, anyone who is looking to reap the benefits of the technology in any reasonable amount of time had better have pockets deep enough to build it end-to-end themselves. I will refrain from making references to my employer’s SDN adoption itself. There is enough public domain information on it, and for the technology enthusiast, Amin Vahdat’s talk at the ONS should provide a lot of juicy details.

Sadly, yet again, here is a technology with a lot of potential to make us rethink the way we deploy and use our networks, but without an ecosystem whose motivations would support that. The innumerable marketing pitches made at the ONS are a testament to this impending future. We have seen this in the past with the cellular world. The end result of such an environment is often numerous specifications drafted with many compromises to accommodate various favorites, with very little true interoperability. Although the IETF has had better success with interoperability, it has had other issues, notably incredibly long times to reach consensus and publish a spec. And then there is the mess of producing large numbers of enormously complex specs. SIP, IPv6, anyone?

Even though the SDN space is showing signs of becoming another giant mess of specs and gear, there is always the hope that people will take Nick McKeown’s talk seriously and start thinking about the core strengths of the technology. The hope is small, but all is not lost yet.

But, more importantly, I believe (and hope) that one positive thing that will come out of this SDN revolution is a massive rethinking of networking APIs and, more broadly, of the way the networking world approaches software. So far, it has taken people with extensive knowledge of and experience in the intimate details of the vendors’ gear to be able to operate networks. That has the potential to be disrupted by the SDN wave. It won’t happen from the incumbents, but hopefully the S in SDN has attracted enough software talent to this field to make it happen. Although the Internet has largely been running on software all along, there is some feeling that SDN is bringing software to networking. Shh! Let’s keep it that way, in the hopes of seeing better defined programmable interfaces!
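To make that hope concrete: the appeal of the S in SDN is that forwarding behavior becomes something a program can install and query, rather than something configured box-by-box through vendor CLIs. Here is a minimal sketch of that idea in Python; the class and method names are purely illustrative, not any vendor’s API and not the OpenFlow specification.

```python
from typing import Dict, List, Optional, Tuple

class FlowTable:
    """A toy match-action table in the spirit of SDN data planes:
    a control program installs rules, and the 'switch' merely looks
    packets up against them. Illustrative only."""

    def __init__(self) -> None:
        # Each rule pairs a match (field -> required value) with an action.
        self.rules: List[Tuple[Dict[str, str], str]] = []

    def install(self, match: Dict[str, str], action: str) -> None:
        # Rules are consulted in installation order; first match wins.
        self.rules.append((match, action))

    def lookup(self, packet: Dict[str, str]) -> Optional[str]:
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return None  # table miss; a real switch would punt to the controller
```

The point of a well-defined programmable interface is exactly this separation: the logic deciding which rules to install lives in software that anyone can write, not inside the box.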



A long, long time ago, the Internet used to be open. It reflected freedom of expression.  Vast amounts of content were created and consumed.  Without borders, without walls, without restrictions.

Today, there are so many places to help you create and share content – but, along with creating content, they are also slowly building a wall around the content! Take Quora as an example – Quora’s mission is about creating and sharing knowledge.  It has done a phenomenal job of creating good content – and an even better job of locking up the content!

Enamored by the high quality content, I started blogging on Quora – it has certainly slowed me down on this blog a bit!  But, the inability to share content freely, without requiring the users to log in, has been an issue – a rage, actually! Even though Quora responded showing that they listened to their users, it’s still a problem – fundamentally, they think that requiring an identity to access content is normal and acceptable!  It is this fundamental notion that is causing the Internet to be closed, one step at a time!

Whose right is it anyway to put a wall around our content? It’s hypnotism – just like I wrote before, it’s about owning every move of the user without the user knowing that he/she is being hypnotized!  The walls we knew with cellular companies were small, claustrophobic and uncomfortable.  The walls we see now, with the likes of Quora, are deceptively liberating – they have the illusion of being free and available.  But, unless we see a turn that brings down the walls, the future of the Internet surely seems to be drifting towards a closed state!

It’s official: more data is consumed by smartphones than by tablets. Granted, this says nothing about usage over WiFi – but, while not conclusive, it certainly shows that a lot of data is consumed from phones today. It’s about the convenience, the ability to consume data on the go, when you need it, where you need it. So, when is this insufficient?

Image

As I was browsing through news on my phone, I realized that there are some places where the small screen isn’t enough.  An example is looking at a figure like the one above on the phone screen – zooming in gives the details, but loses the bigger picture.  To get the high-level picture, a snapshot of the full image is useful.  To get the details, zooming into the right spot is useful. On small devices, you can only have one or the other – and this is a problem.  If you are anything like me, you’d rarely have the patience to flag such cases and look at them in detail later on a bigger device!

While this is making the case for bigger screens, it really points to a future, where all the experience is still driven by the smartphone, with the world around it as virtual enablers – e.g., using a projector and an opportunistic peripheral system, we could be using any wall as a bigger screen when we need it.  The main idea is that we, as users, should not need to carry bigger devices or remember things to do on bigger devices at later points in time!  We want things that place little to no cognitive load, while allowing us the flexibility needed! 

Soon, we will be able to transform the world around us into peripherals – including keyboards and monitors – and then, we could have the best of both worlds! 


CES used to be the largest show for consumer electronics all around – if Barcelona has it going for all things mobile with MWC, Vegas certainly had it going for all things consumer oriented, including mobile. But, this year, there is all this talk about how CES has become irrelevant.  Microsoft exited the show, ceding the keynote spot to Qualcomm.  While there is some speculation about whether Microsoft might have given up its role a year too early, it has mostly sparked a discussion about the relevance of the show itself.

So, if CES is indeed irrelevant, is there something that has taken its spot? Consumer electronics, as an industry, is clearly not irrelevant! If anything, with the advent of smart everything, consumer electronics is buzzing more than ever. So, exactly why are we seeing this major change in the attitude about CES?

The major shift that has been happening over the years is the inroads that software has made into consumer electronics. A decade ago, it was mostly about cool hardware with some software capabilities.  It was about WiFi, plasma and LCD TVs, connected appliances, motion sensors and what not.  Today, the role of software is predominant.  Intelligent software now rules – while hardware is still not insignificant, without serious software it is simply insufficient.  This has caused a shift in the major players that are impacting consumer electronics.  The Internet companies (Google, Facebook, etc.) play a huge role in the present and future of these devices.  It is not inconceivable that a connected refrigerator will have ‘share’ and ‘like’ buttons to share your diet with your buddies automatically!

The confluence of highly power-efficient hardware and highly intelligent software is the composition the consumer electronics industry is looking for in this era.  The hardware players are trying really hard to move up the software stack and add intelligence.  They need to cease being behind the scenes and create consumer awareness.  The software players are trying to push down to the low-level APIs as much as possible. However, it is not clear that each side has figured out the strengths needed from the other.  The hardware players are still largely scrambling for a winning software strategy and talent, and vice-versa. The front runners of the software world are trying to take a stab at devices (Facebook phone, anyone?).

The importance of software is clear to everyone – so much so that Qualcomm tried extra hard at CES to create an image that it is more than hardware.  An image that has been criticized quite a bit. Perhaps this will be the era of partnerships to find that perfect balance of hardware and software excellence.  But, no matter what, both sides have to figure out how to respect the other side, understand how the markets and production cycles work on each side (more on this at a later time) and find a way to bring the best of both to life!

There is no question the future is contextual – it has to be.  The volume of information thrown at us is increasing at such a pace that we can’t keep up if it isn’t contextual.  So many people are talking about it.  Scoble believes it so much, he is writing a book on it called ‘Age of Context’.

But, so far, the main piece of context we’ve known is location.  Surely, there are cool things you can do with location, but that is still not the key contextual question you can answer.  The contextual future will really arrive not just with the ‘where’, but with the ‘when’ – the ‘where’ is just a substitute for the ‘when’.  Using the ‘where’ and certain other bits of information, the ‘when’ is derived by the human brain and ultimately used as the context.  A good case for this was made on Techcrunch today.

When you look at traffic, you really want to know when or how long it will take you to get to your destination. When you get alerted as your spouse leaves work, you really want to know when he or she will get home. When people are waiting for you in a meeting room, they want to know when you might get there.  Since the ‘when’ is more complex to get right, systems so far have been using the ‘where’ – e.g., telling the user that the spouse left work or where a person is, so that the rest of the blanks can be filled in by the human brain.  
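The derivation the human brain performs in these examples is simple enough to sketch. Below is a minimal, hypothetical example of turning ‘where’-style inputs (remaining distance, an assumed average speed) into a ‘when’ answer; the function name and its inputs are illustrative, not drawn from any real product.

```python
from datetime import datetime, timedelta

def eta_from_where(distance_km: float, avg_speed_kmh: float,
                   departed_at: datetime) -> datetime:
    """Derive the 'when' (arrival time) from 'where'-style inputs:
    the remaining distance and an assumed average travel speed."""
    if avg_speed_kmh <= 0:
        raise ValueError("average speed must be positive")
    travel_hours = distance_km / avg_speed_kmh
    return departed_at + timedelta(hours=travel_hours)

# If the spouse left work at 5:00pm, 12 km from home, averaging 30 km/h,
# the 'when' answer is 5:24pm -- the number the user actually wants.
eta = eta_from_where(12, 30, datetime(2013, 1, 1, 17, 0))
```

The hard part, of course, is everything this sketch assumes away: knowing the route, the live traffic and the user’s habits well enough that the derived ‘when’ can be trusted.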

The real breakthrough in contextualization will come when devices can answer the ‘when’.  Many efforts are underway at major companies, and the players who understand how core the ‘when’ is to this equation, and who piece together solutions towards answering that question, will drive our contextual future!

Infinite scrolling is one of the cooler user experiences brought to life by Pinterest – since its introduction, many others have followed suit, incorporating infinite scrolling on their web sites and in their applications.  For certain types of content and applications, this is the best overhaul that has happened to the user experience in a long time.  Pagination has been a deterrent to viewing a large amount of content.  For user generated content that grows at a fast pace, taking that deterrent out of the equation has definitely helped!


There really hasn’t been an equivalent to pagination on mobile, minus the dull experience of clicking through tiny page numbers on a small screen browser.  Loading content in response to scrolling is so much smoother.  
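The mechanic behind that smoothness can be sketched as a lazy feed: instead of asking the user to request page N+1, the client fetches the next page only once the current one has been consumed. A minimal sketch, assuming a hypothetical `fetch_page` callback that stands in for whatever API actually serves the content:

```python
from typing import Callable, Iterator, List

def infinite_feed(fetch_page: Callable[[int], List[str]]) -> Iterator[str]:
    """Yield items one at a time, lazily fetching page after page --
    the loading logic behind an infinite-scroll UI: a new request is
    issued only once the previous page has been consumed."""
    page = 0
    while True:
        items = fetch_page(page)
        if not items:
            return  # no more content: the 'infinite' feed ends after all
        yield from items
        page += 1
```

From the reader’s point of view, the page boundary disappears; from the server’s point of view, the requests are still paginated, just triggered by scroll position instead of clicks.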

Given the popularity of this feature, it is now making an appearance in all kinds of applications.  Facebook. Twitter. Tumblr.  Pulse. The problem is that loading content infinitely is not quite suitable for all types of content. Aside from all the discussions around SEO and finding the equivalent of click value in a scroll experience, there is also the question of exactly what kind of user experience one is targeting with the infinite scroll.  

For leisurely content, it is quite fitting. It gives the user a perception of vast amounts of interesting content and keeps the user engaged.  On the other hand, for some other types of content, it is exhausting for the user to see that there is an endless amount of content to get through. Search results have thus far not become a target for infinite scrolling, appropriately so. I believe news content should fall under that category as well.  The recent Pulse update that delivers endless content leaves me exhausted, with the feeling that I’m never caught up with the news! If users are interested in old news, they will actively seek it out. One issue with old news is that updates may have been published since – if users have already consumed the latest content, the old news becomes even somewhat confusing. History linked from recent news is useful, but related and outdated articles showing up in the news stream are distracting.

Following best practices for designing the infinite scroll experience is one part of it.  But thinking through the type of content, and whether the experience fits it, is a necessary step before adopting it.


Emoticon overload is so common, it’s now a phrase in Urban Dictionary! NYTimes posted a blurb on this topic three years ago, and there is much advice out there on how to use emoticons sparingly in business communication.  Digital body language is a topic of study now. Emoticons were introduced to convey the tone that is often not otherwise apparent in digital conversation. However, we are at a point today where they are abused to the extent that the real tone of a conversation is difficult to glean.

So, why are we obsessed with emoticons? Why have we taken it to a point that called for smiley abuse awareness?! The fact is that we often want to say things in a lighter tone and keep an out for the things we say. With a smiley next to it, at best, there is room for ambiguity.  We get so used to it that anything said without an emoticon appears serious and sometimes even annoyed. We like to keep the reader guessing as to what is just-for-fun and what is more serious than that.

This is one reason even socially introverted people are comfortable having a big presence online and hundreds of friends in social networks.  The removal of physical proximity provides a level of comfort – which is why things get said in asynchronous digital media that would never be spoken in voice communications!

How bad is this? My take is that it depends on the people involved in the conversation. But the risk is in getting used to it so much that conversing without emoticons ceases to feel normal. As long as we can be in control of our digital etiquette and know when to use what conventions, we are safe!


It is certainly the era of user experience.  An era where Apple is teaching the rest of the world about the importance of user experience – one where others are learning fast to figure out how to get it right and be a player.  An era where user experience as a science is getting its due credit.  One where it is clear that computer science in isolation does not make or sell great products.

This does lead to the question of the end goals of a perfect and intuitive user experience. Is it really about the user? Do Apple, Amazon, Facebook, Google, Microsoft and others care so deeply about their users that they feel compelled to pour billions into getting this right? Maybe – but, just a little bit!

As is evident from the numerous patent wars, no one wants to get along in this game. Sure, the patent wars are broader than user experience on the face of it, but isn’t everything about the user? Faster speeds, lower-power processors, better networks, more memory, better applications, slicker UIs, you name it – if it weren’t about the user of the product, there would be no incentive to invest in it.  So, why then can the players not get along to create the best unified experiences?

The end game here is hypnosis – yes, it is an era of digital hypnotism.  Wikipedia describes the characteristics of hypnosis as “The hypnotized individual appears to heed only the communications of the hypnotist. He seems to respond in an uncritical, automatic fashion, ignoring all aspects of the environment other than those pointed out to him by the hypnotist.”, among other things.  Every player in this game is trying to be a hypnotist and the subject is the user.  User experience is a means to an end.

Let’s parse it a bit further.  Apple has demonstrable success in the art of hypnotizing the user.  By creating intuitive, simple to use products, they built a faithful user base.  A user base that will adapt to their products and swear by their products.  They look past flaws in their products to the extent that flaws appear to be a feature.  Having established that loyalty, they enjoy the luxury of rolling out a flawed product (Maps of course!), a highly important one at that, and still not losing the user base!  It is a perfect win for their years of investments in user experience!

There are no incentives to work together – when everyone wants to claim the user, there is no question of unity.  After all, simultaneous hypnosis by multiple hypnotists is proven hard in psychology!  As each of these major players tries to grab every piece of data about the user that can be used to bring them under their influence, the users themselves are undergoing a transformation.  We talk less and type more.  We smile less and use more smileys (more on this later).  Running out of battery on our phones is our biggest fear.

It is a new world.  As long as the net result is making our lives better, being hypnotized by one of these players may just be par for the course.  We win some, we lose some.  As in psychology, you can only be hypnotized if you want to be hypnotized.  As every big player tries to do everything, they are trying to take over the users’ lives in totality.  They want to know our past and present and predict our future.  Or, better still, lead us towards paths we will be happy to follow.  The trick is in having enough snippets of what we, the users, want to do.  Once we are hooked (err, hypnotized), we will do as we are told!

Windows 8 Needs Some Love!

It turns out Microsoft was so focused on touch screens that they forgot PCs still need to be driven with a keyboard and mouse! Oops! Well, they spent a lot of money on this, from designers to paid app developers alike – so it had better get used on more platforms than just the ones with touch interfaces!

Now what? Well, Microsoft needs some serious love! Particularly Win 8! It does make you wonder how much of a mess up this really was vs a strategy to popularize the platform :)!

Clearly, you only need to write sensational articles – facts are a minor inconvenience that can be worked around!  If you write sensational rants, your popularity increases, ranking algorithms increase your visibility, and your popularity increases further!

So, Lifehacker is at it again – this time on how to maintain email privacy! Just to clarify, I don’t actually seek and read Lifehacker articles. But, apparently, Pulse’s ranking and rating mechanisms allow these articles to be part of “Best in Technology” category.  I get enough value from this category to keep it, but that means that every now and then, I read the hideous reports on Lifehacker and rant about it!

This time around, they’ve put in enough disclaimers and given themselves enough outs (which alone should make this a useless article) – so, good CYA efforts there! But the article has some fundamental issues.  First, we are talking about a case involving a senior CIA figure whose personal information was revealed with FBI help. And the article talks about how to stop that from happening to you. Who is this “you” they are referring to? Presumably the common user? And are we talking with or without help from the FBI?

The article goes on to make many assertions about using VPNs or separate email providers to do various things, including using a VPN provider that won’t give up your IP address as easily as Google would.  Is this because different Federal requirements apply to VPN providers? Or because some providers are willing to take the risk on behalf of their users?

Anonymity, privacy and security are all related but different aspects.  Reality shows that there is no perfect solution that scales to the common user. And more importantly, it is one thing to protect against a casual observer who is really not interested in your data anyway; a totally different thing to protect against a funded, motivated attacker (or protector, as the case may be) who is dedicated to cracking through the mechanisms in place.

The problem is that most common readers will miss the distinctions between these different types of users and attackers.  The peer rating mechanism popularized by eBay-like companies has sustained itself thanks to sufficient incentives and a distribution of responsibilities (although it is not perfect); the ‘likes’ mechanism popularized by Facebook, in contrast, is simply riddled with challenges.  In other words, it cannot be used as easily as a measure of authenticity of any sort.  For some definition of “popularity”, it serves a purpose, but when articles like this bubble up to the “Best in Technology” categories of highly popular news aggregators, you are sending a message to millions of readers vouching for the credibility of these sources.

But then, whose responsibility is it to balance the sensation with the facts? No one has claimed it yet…