Archives For January 2013


I know, it is a new age.  We are supposed to have moved on from voice communication.  It’s all about social now.  What exactly is “social” anyway?  Who knows, but it is not about traditional calling!  We are supposed to be broadcasting our thoughts in short (or long) snippets with embedded short links!  Who calls any more?! Must be those old school people and the older population!

Well, I still pick up the phone and call sometimes.  When I need to reach someone now.  When I want to ask a quick question.  When I want to casually chat with friends.  When I want to hear someone’s voice.  When I want to have an effective exchange.  Etc.  Sure, the time I spend on voice calls is small compared to the time I spend on “social” (although I don’t exactly know what it is, I know I spend a lot of time being “social”).  But, does that mean voice is irrelevant, as an application?

In this era of voice becoming almost obsolete, it is still hard for me to say voice is irrelevant.  More and more devices that are categorized as “phones” function abysmally for voice – they are built for other functions these days.  Makes me wonder if these devices were even tested for voice quality and what the pass criteria were!  But, at the end of the day, I want to toss a device that gives me a crappy voice experience and pick up a real phone.  I have low tolerance in general, but I can tolerate a few more milliseconds of latency on “social” better than I can tolerate crappy voice.  I suspect I’m not alone.  We may not call much any more – but when we do, we better be able to hear the other person crystal clear!  Voice isn’t THE killer app any more, but it certainly is one of the key experiences still worth designing for!


I reset my iPad to factory settings, wiping all of my content from it, and set it up for my mom this past weekend.  This has been coming for some time now, but I finally came to terms with it – I had no use for the iPad any more!  The truth is that I haven’t used it in a while – months now, really.  When I first bought the iPad, I was thrilled.  I used it a lot – the key role it played for me was that of an electronic note-taker.  I transitioned from having notebooks and pieces of paper I couldn’t find or decipher to having electronic copies of all the notes of any significance.  Other apps and uses of the iPad were secondary.  The portability and ease of note-taking alone were worth it for me.

And then I found predictive keyboards on Android, such as SwiftKey.  As the algorithms trained on more of my data, I got to the point where I could type very well and fast on my Android phone.  That is when I really stopped ‘needing’ the iPad.  As of now, I can type better on my phone than I can on my iPad.  Typing on the iPad even annoys me.  Apple’s keyboards are not as predictive yet, and the auto-corrections are also somewhat lame.  And given the restrictions in the iOS ecosystem, there aren’t third-party predictive keyboards that can be used.  It is a reminder of how not all of the innovation can come from a single company – not even Apple!

Granted, the bigger screen is useful for watching videos and makes for a better working environment.  However, between my MacBook Air and my smartphone, I have it covered.  When I am really working for an extended period of time, I’d much rather have my laptop anyway.  And, with what the MacBook Air weighs, anywhere I can take my iPad, I can take my Air too – so, it is really about all the other hours of the day when I need ultra-light portable devices.  For those hours, I find the iPad too big – I want something that fits in my pocket and allows me to be hands-free when I need to be!

The iPad seemingly served a need when I found typing on the phone to be a pain.  But not now.  The more time I spend with my phone, the more I customize it.  My apps know me better.  Pulse on my phone is actually more relevant to me than Pulse on my iPad.  I know I can use the crazy concept of “identity” and actually log in to Pulse (gasp!) to have more or less the same experience across all my devices – but, I hate that!  When my devices can figure out who I am based on all the things I do anyway, I’m in.  Until then, I will customize the device I use the most.  For the videos I watch and all the content I consume, the tradeoff of the form factor is worth it.

Convenience wins, until there is something I do a lot that is painful to do.  As of now, I do a lot on my Android phone and I love it!  I will wait for the day when the world around us becomes peripherals that create bigger displays dynamically as I need them.  As much as I feel sad parting with my iPad, the time has come – it’s yours, mom!

[Screenshot from 2013-01-21: Flipboard failing to open a Quora link]
So, it appears that Flipboard has issues with Quora links – it consistently fails to open any Quora content (as in the screenshot above).  The same content opens fine from Twitter.  Is this an issue with Quora or with Flipboard?  Whatever it is, hopefully it gets fixed right away, or I won’t be reading as much Quora as I’d like… Flipboard is absolutely my favorite way of getting my social feeds – everything else stinks once you’ve used Flipboard.  So, fix it please!


I am so disappointed with HBR today.  It is one of the sites that I always believed had high-quality content – but, that changed a bit today, when a tweet in my feed took me to a blog post by Kyle Wiens on why he would never hire anyone with poor grammar.

As a disclaimer, about 10 years ago, I would have totally sided with Kyle on this.  Sloppy grammar is simply sloppy.  But, various experiences in my career have allowed me to mature a bit more than that.  And I can now definitely differentiate among a fundamental lack of attention to detail, a lack of attention to detail in secondary tasks, and plain sloppy language.  And they are not the same thing!

In his post, Kyle writes about how good programming relates to good writing, as one example of how good language skills apply to all disciplines.  There is no doubt that good writing reflects clarity of thought and the ability to pay attention to detail – however, the converse is not true, as experience might tell us.  Certainly not in all disciplines.  Consider all those people whose native language is not English.  Are we supposed to penalize all those researchers for being incapable of expressing excruciating amounts of detail in the English language?

I have come across many people (some very senior folks at extremely successful organizations) with sloppy grammar in the course of my career in technology.  Capitalization errors.  Not knowing the correct number of spaces after a comma or a full stop.  Not knowing the difference between “it’s” and “its”.  Incorrectly using “affects” when it should be “effects” or vice versa.  And so on.  Sometimes I think it’s even fashionable to write with bad grammar – I can’t seem to go a few days without running into something on TechCrunch that is grammatically so incorrect that it makes me cringe!

I have actually analyzed this a fair bit – I’m cynical and critical myself, and this has certainly not escaped my observation.  I’ve seen how, with certain people, this trait also reflects a muddled-up state of mind, where there does exist a correlation between sloppy language and a lack of clarity in thinking overall.  Usually, these people turn out to be native English speakers.  In some cases, these people have overlapping thoughts that get in the way of each other – such people can confuse themselves and their audience, and they are often fighting several thoughts that aren’t taken to completion.  This would lead to the hypothesis that Kyle is right in his post.

However, the important thing is to also look at several other people who are unable to write grammatically correct language, but exhibit an amazing degree of clarity in thought and cognitive ability that sets them apart.  More often than not, these people are non-native speakers of English, but not as a rule.  As someone who has done an extensive amount of hiring and can pride myself on arguably hiring some of the most amazing talent in the field of technology, I can now tell how to look for the people who can pay attention to detail.  In the end, I believe that is really what the HBR post is trying to get at – although I’m not sure that Kyle realizes it.

There are two classes of these people – ones that pay attention to detail always and ones that pay attention to detail where it matters.  The former category is safe – these are people who will make good employees.  The latter category is tricky – how do you know upfront whether they will pay attention to detail where it matters to you and your organization, rather than only where it matters to them?  Filtering for this is a skill you acquire; it can’t easily be taught.  The important reality is that these are the people who will take your organization to the next level.  Knowing where to pay attention to detail and what to let go of is hard – but, that is what defines great leadership!  The ability to spot those people takes talent too – and great leadership starts there!

Pay attention to people’s language skills – but, connect them to their cognitive skills in the areas of importance to you.  Remember that you are after attention to detail… for the right set of things.

It’s official: more data is consumed by smartphones than by tablets.  Granted, this says nothing about usage over WiFi – but, while not conclusive, it certainly shows that a lot of data is consumed on phones today.  It’s about the convenience – the ability to consume data on the go, when you need it, where you need it.  So, when is this insufficient?

[Image: a detailed figure, discussed below]

As I was browsing through news on my phone, I realized that there are some cases where the small screen isn’t enough.  An example is looking at a figure like the one above on the phone screen – zooming in gives the details, but loses the bigger picture.  To get the picture at a high level, a snapshot of the full image is useful.  In order to get the details, zooming into the right spot is useful.  On small devices, you can only have one or the other – and this is a problem.  If you are anything like me, you’d rarely have the patience to flag such cases and look at them in detail later when you’re on a bigger device!

While this is making the case for bigger screens, it really points to a future where the entire experience is still driven by the smartphone, with the world around it acting as virtual enablers – e.g., using a projector and an opportunistic peripheral system, we could use any wall as a bigger screen when we need it.  The main idea is that we, as users, should not need to carry bigger devices or remember to do things on bigger devices at later points in time!  We want things that place little to no cognitive load on us, while allowing us the flexibility we need!

Soon, we will be able to transform the world around us into peripherals – including keyboards and monitors – and then, we could have the best of both worlds! 


CES used to be the largest show for consumer electronics all around – if Barcelona has it going for all things mobile with MWC, Vegas certainly had it going for all things consumer-oriented, including mobile.  But, this year, there is all this talk about how CES has become irrelevant.  Microsoft exited the show, ceding the keynote spot to Qualcomm.  While there is some speculation about whether Microsoft might have given up its role a year too early, it has mostly sparked a discussion about the relevance of the show itself.

So, if CES is indeed irrelevant, is there something that has taken its spot?  Consumer electronics, as an industry, is clearly not irrelevant!  If anything, with the advent of smart everything, consumer electronics is buzzing more than ever.  So, exactly why are we seeing this major change in attitude toward CES?

The major shift that has been happening over the years is the inroads that software has made into the field of consumer electronics.  A decade ago, it was mostly about cool hardware with some software capabilities.  It was about WiFi, about plasma and LCD TVs, connected appliances, motion sensors and whatnot.  Today, the role of software is predominant.  Intelligent software now rules – while hardware is still not insignificant, without serious software, it is simply insufficient.  This has caused a shift in the major players that are impacting consumer electronics.  The Internet companies (Google, Facebook, etc.) play a huge role in the present and future of these devices.  It is not inconceivable that a connected refrigerator will have ‘share’ and ‘like’ buttons to share your diet with your buddies automatically!

The confluence of highly power-efficient hardware and highly intelligent software is the composition the consumer electronics industry is looking for in this era.  The hardware players are trying really hard to move up the software stack and add intelligence.  They need to cease being behind the scenes and create consumer awareness.  The software players are trying to push down to the low-level APIs as much as possible.  However, it is not clear that each side has figured out the strengths needed from the other side.  The hardware players are still largely scrambling to build a winning software strategy and talent, and vice versa.  The front-runners of the software world are trying to take a stab at devices (Facebook phone, anyone?).

The importance of software is clear to everyone – so much so that Qualcomm tried extra hard at CES to create an image that it is more than hardware, an image that has been criticized quite a bit.  Perhaps this will be the era of partnerships to find that perfect balance of hardware and software excellence.  But, no matter what, both sides have to figure out how to respect the other side, understand how the markets and production cycles work on each side (more on this at a later time) and find a way to bring the best of both to life!

There is no question the future is contextual – it has to be.  The volume of information thrown at us is increasing at such a pace that we can’t keep up if it isn’t contextual.  So many people are talking about it.  Scoble believes it so much that he is writing a book on it called ‘Age of Context’.

But, so far, the main piece of context we’ve known is location.  Surely, there are cool things you can do with location, but that still isn’t the key contextual question to answer.  The contextual future will really arrive not just with the ‘where’, but with the ‘when’ – the ‘where’ is just a substitute for the ‘when’.  Using the ‘where’ and certain other bits of information, the ‘when’ is derived by the human brain and ultimately used as the context.  A good case for this was made on TechCrunch today.

When you look at traffic, you really want to know when or how long it will take you to get to your destination. When you get alerted as your spouse leaves work, you really want to know when he or she will get home. When people are waiting for you in a meeting room, they want to know when you might get there.  Since the ‘when’ is more complex to get right, systems so far have been using the ‘where’ – e.g., telling the user that the spouse left work or where a person is, so that the rest of the blanks can be filled in by the human brain.  
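To make the ‘where’ to ‘when’ idea concrete, here is a minimal sketch of how a system might turn a raw position into an estimated arrival time.  Everything in it is illustrative: the coordinates, the fixed 40 km/h average speed, and the function names are hypothetical stand-ins for what a real system would derive from live traffic, transit schedules, or a user’s own travel history.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt
from typing import Optional


@dataclass
class Location:
    lat: float  # latitude in degrees
    lon: float  # longitude in degrees


def distance_km(a: Location, b: Location) -> float:
    """Great-circle distance between two points, via the haversine formula."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))  # Earth's mean radius is ~6371 km


def estimated_arrival(current: Location, destination: Location,
                      avg_speed_kmh: float = 40.0,
                      now: Optional[datetime] = None) -> datetime:
    """Turn a 'where' (current position) into a 'when' (estimated arrival).

    avg_speed_kmh is a crude, hypothetical stand-in for what a real system
    would learn from traffic, schedules, or the user's travel history.
    """
    now = now or datetime.now()
    hours = distance_km(current, destination) / avg_speed_kmh
    return now + timedelta(hours=hours)


# Example: instead of "your spouse just left work" (the 'where'),
# surface "expect them home around 6:12 pm" (the 'when').
office = Location(37.7890, -122.4010)  # hypothetical coordinates
home = Location(37.7370, -122.4570)
print("Expected home at:", estimated_arrival(office, home).strftime("%I:%M %p"))
```

The hard part, of course, is everything hidden behind that single speed parameter – which is exactly why systems have settled for reporting the ‘where’ and letting the human brain do the rest.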

The real breakthrough in contextualization will come when devices can answer the ‘when’.  Many efforts are underway at major companies, and the players who understand how core the ‘when’ is to this equation – and who piece together solutions toward the vision of answering that question – will drive our contextual future!