I’ve been watching with great interest this silly little debate about integrity and journalism between Marco Arment, John Gruber, and Joshua Topolsky, and I couldn’t resist the urge to comment.
First, a quick summary of the issue. A number of prominent tech sites, including The Verge and Engadget, wrote articles about the HP Spectre One all-in-one desktop computer, which looks suspiciously like an Apple product ripoff (think Thunderbolt Display bolted on top of a MacBook Air, with the same wireless keyboard and Magic Trackpad as an iMac). Each of these sites neglected to mention the similarities with said Apple products.
And so Marco noticed, alleging this was due to said tech sites being toothless and wanting to remain chummy with OEMs because their business model depends on it. Gruber chimed in, arguing that it has more to do with these sites taking an editorial position that aligns with the views of certain readers: those who believe these manufacturers are not copying Apple, are entitled to do so, or hold one of many similar positions. Topolsky responded with a pretty emotional and heated post. After that, a bunch of Internet ego dick wagging happened on Twitter.
Egos aside, I’m more interested in discussing what I believe to be the core issue with the business model of sites like The Verge, which leaves the door open for these types of omissions.
I haven’t worked as a journalist, but I have worked on the other side of the fence, in PR. Tech news sites, like any other news sites, face a few fundamental problems: a high volume of news stories, too few resources to adequately cover them in depth, and a fight against the clock to get “breaking” news out the door and on the site. These sites primarily make their money through advertising, which either directly or indirectly pays higher returns for greater pageviews. This means there is an incentive to be the first to break a story, to have an exclusive scoop, or to take a controversial position on an issue. With so much competition, if you’re too slow in getting to a story, you’re losing out on potential “eyeballs” from referrals and other sources.
On the flip side, companies like HP and their PR agencies spend a lot of time and money crafting compelling pitches and using other tactics to get the interest of journalists. The gold standard in PR is having your press release published verbatim, in whole or in part. Runner-up is having your messaging and spin remain intact, even if the words aren’t. Provided the messaging isn’t totally over-the-top, journalists are happy to do this because they are working on deadline and need to post a high volume of stories in a timely manner. This means there’s no time for editorializing straight news like a first-look piece. This is how the PR business stays alive and successful, and how news sites keep costs down and revenue up.
What it comes down to, I think, is not a desire to suck up to OEMs, or to pander to Android and PC loving readers, or willful maliciousness by sites like The Verge, as Marco and Gruber seem to believe, but rather a sort of unintentional journalistic laziness that results from the pressures of the job and these publications’ business models.
Over dinner this evening, my family and I began discussing some issues they’ve been having with their computers. The details of these technical issues aren’t particularly important, but it’s worth noting that they all stem from multiple devices or components that aren’t playing nice together.
And so, as I tried to explain to my parents what might be causing these problems, my dad began (rightfully) to rant about how complex computers continue to be, and how little they have evolved from the days when he was spending entire nights installing Windows 95 from 13 floppy disks. Most computers still require an incredible amount of technical know-how to operate, and even minor problems are often extremely frustrating to diagnose and resolve.
Novice computer users are often surprised that geeks like myself also get frustrated when computers don’t work the way they should. Why? Because I realized years ago that, although I love learning about and tinkering with the guts of a computer, at the end of the day I just need the tools to work with me instead of against me. I don’t buy computers because I want to endlessly tinker with them. I buy them because they’re supposed to solve problems and make my life simpler. And yet, in many cases, they don’t.
My father brought up two interesting analogies — the automobile and the television. In both cases you can go out and purchase a low-end product and it will function in essentially the same way as the high-end product. The car will get you from point A to point B, and the television will turn on and display video from an input source. Sometimes these devices have issues that require maintenance, but they are generally just as reliable as their high-end counterparts. This makes sense.
Televisions and automobiles are also not particularly user-serviceable; to diagnose issues, you need to take them to a trained professional. Yet most of the time, you can get in your car and expect that it will reliably get you to your destination. There’s even complimentary roadside assistance to ensure that any serious issues are resolved with the least amount of discomfort.
What about computers? My parents own a high-end Windows-based laptop that is substantially more powerful than anything they have ever owned before. And yet it doesn’t help them do the things they need to do any more reliably or with any less frustration. Error messages are just as cryptic, they’re still not able to reliably print to a wireless printer, and email server issues are causing real headaches. They are left to compromise and find painful workarounds to these problems.
Traditionally, computers have been open systems in which a vendor licenses an operating system to a company that assembles a computer from various off-the-shelf and custom parts. Both those companies, and the individual component manufacturers, try their best to account for all potential hardware and software variations, but the systems have been fundamentally designed with flexibility in mind.
Such a system should, in theory, be able to support thousands of potential printers, external displays, hard drives, networking components, and other peripherals. It should also support them through third-party software packages installed by either the computer vendor or the user. As such, there are likely millions of possible configurations that must be supported — some of which weren’t even on the market when the computer and operating system were conceived.
Therefore, we would consider most of the computers ever designed to be open systems. They are made to accept and work adequately with tens of thousands of devices and millions of configurations.
Automobiles are far less complicated. To begin with, they are closed systems, meaning there are only so many components the owner can modify. It may be possible to select from a few different packages when purchasing, but it is almost impossible for the average person to upgrade major components in the car after purchase.
Would we expect a car to be just as reliable if every owner could change the engine, muffler, carburetor, or one of the dozens of on-board computers? Would we expect them to be as reliable if Honda, for instance, licensed its designs and components to third-party companies who could then modify them in dozens of ways and sell them directly to consumers?
No. And yet this is the situation most people have been dealing with since the dawn of the personal computer. The sole manufacturer who has been relentlessly pursuing a closed-system approach to computing is Apple, with iOS-powered devices like the iPad and iPhone thus far being the purest expression of that mantra.
Most computer manufacturers have always tried to have it both ways — they want to make the user interface intuitive and powerful while offering consumers choice and flexibility. These are fundamentally opposed concepts. A system cannot be infinitely flexible and remain reliable and simple.
Closed systems like iOS will prove to be the future of computing. Even Microsoft, historically the greatest champion of open systems, seems to have understood this with the concessions they have made in Windows Phone 7 and the upcoming Metro UI in Windows 8. And consumers have been voting with their wallets, given how quickly the market has embraced Apple’s new breed of mobile devices.
As with cars, there will always be a market of tinkerers who want freedom (as in free speech), but they should be a small minority of users. Computers must serve their users and become useful but unobtrusive tools. They will only do that if their designers embrace human-centered design, and make hard compromises to ensure those solutions are vertically integrated and simple from a user’s perspective. Simple means saying no to the realm of unconstrained possibilities, and only closed systems can achieve that goal.
Upon re-reading Netflix CEO Reed Hastings’ blog post about splitting off their physical disc rental business into a separate company called Qwikster, I was struck by a sentence and how it applies to so many tech companies:
Most companies that are great at something — like AOL dialup or Borders bookstores — do not become great at new things people want (streaming for us) because they are afraid to hurt their initial business. Eventually these companies realize their error of not focusing enough on the new thing, and then the company fights desperately and hopelessly to recover. Companies rarely die from moving too fast, and they frequently die from moving too slowly.
Two companies came to mind when reading this: Microsoft and RIM. The former is ironic given that Reed Hastings is on the board. The latter is the one I’m really interested in talking about given the company’s growing list of disappointing news and announcements over the past year.
At the heart of RIM’s problem is what Hastings is referring to in his post: the concept of disruptive innovation, the idea that markets aren’t won or lost through slow iterations of existing products with the same value proposition. Apple, for instance, never could win the PC business as it stands, and likely never will. But they recognized this and understood they don’t have to. The PC industry has been in accelerating decline for the past decade,1 and Apple smartly chose to position itself for dominance in the post-PC era.
They did this through a series of disruptive innovations that nobody has been able to match: iPod, iPhone, iPad. And they are no doubt already thinking about the next one, because the iPhone decade will eventually come to an end. As Horace Dediu pointed out on a recent episode of his outstanding podcast, The Critical Path, each of these products flowed from a new input mechanism (scroll wheel, touch) which in turn required a new ecosystem and market strategy.
This brings us back to RIM, the once-darling of Canadian tech innovation that I’m still hoping can return to its glory days. Looking back at the BlackBerrys of 2005 and comparing them with the BlackBerrys of today, I find few substantial differences. Sure, the screens are better, the devices are faster, the graphics are nicer, and there are a few new bells and whistles. But, fundamentally, these are the same enterprise-focused devices that once attracted IT pros to the platform.
Since then, the iPhone emerged and dramatically disrupted the market, and the modern Android platform was born in response to it. The BlackBerry has continued to chart a conservative path with the same platform and same fundamental assumptions about market access, the required ecosystem, and its consumers. In the process, RIM’s leadership seems to have been surprised to wake up one day and discover their devices were no longer inspiring the hearts and minds of those who once carried a BlackBerry on their hip.
That’s not to mention the iPad, and RIM’s subsequent distraction and failed attempt at responding to it. The PlayBook was dead in the water at launch and likely can’t be saved with slow iteration. They sold only 200,000 devices this quarter, and are already trying to bolster demand by slashing prices. This is a race to the bottom that will do nothing but continue to hurt the company.
Shortly before the PlayBook was released, I speculated in conversations with friends that RIM needed to bank the company on this device. I think they did just that, but never realized they were doing it. Unfortunately, the device wasn’t the right one to make such a gamble on.
Now it seems like QNX-powered BlackBerrys may be the company’s last-ditch attempt. And it sounds like there will continue to be a concurrent lineup of existing BlackBerry OS devices and newer QNX-powered devices, no doubt creating confusion in the marketplace.
Just like Henry Ford didn’t try to sell his customers a faster horse, Apple didn’t become the most valuable company in the world by building faster Macs. They made minor gambles and released three key products over a decade that disrupted existing markets and paved the way for new ones they could own. RIM should take a lesson from Apple’s playbook2 and focus on new product offerings, with entirely new value propositions that play to their strengths.
The path of playing catch-up with Apple’s four-year-old innovations can only lead to failure. Reed Hastings is right. RIM is only just realizing the errors of its strategy, and can’t move fast enough to save itself.
Is there something in human nature that makes us consider time in blocks, rather than as a continuum? So often when looking back at history, we talk about months, decades, centuries, and events, and pretend they happened in isolated bubbles. That, for instance, the 80s just started and stopped one day. I don’t buy it.
A few weeks ago, a friend and I were discussing how, in his opinion, music today was totally derivative. Nobody is experimenting. Our current decade pales in comparison to the 80s, when there was some real world-changing stuff happening musically. This strikes me as a form of nostalgia for a decade that many hold in a certain high esteem, but that’s beside the point.1
In reality, 1981 was a lot like 1979. Popular music in 1991 wasn’t radically different from the music of 1988. Every piece of art is built upon the path carved out by its predecessors. This is particularly true of mainstream music, where things that truly rock the boat don’t often surface.
Even The Beatles, likely one of the most influential bands of all time, were incredibly derivative in the early days. The influence of Chuck Berry, Carl Perkins, Buddy Holly, and others is clear. Later, beginning with Rubber Soul, you can start to hear Bob Dylan influences. They were listening to their contemporaries and building upon their work. Taking it to the next level.
Long-time Beatles producer George Martin was quite clear on the influence that The Beach Boys had on them:
Without Pet Sounds, Sgt. Pepper wouldn’t have happened. Pepper was an attempt to equal Pet Sounds.
I’ve used music as an example to illustrate this, but the same can be said about culture in general or historical events. The Beatles didn’t come out of left field, just like the 80s wasn’t a homogeneous decade that flipped to something else in 1990. To argue otherwise does a disservice to the creative spirit and remix culture.
Let’s ignore for a moment the fact that mainstream music today is considerably more bland than the hyper-stylized music that was popular from the mid-80s through the early 90s. That doesn’t mean it’s any more or less derivative. ↩
There is no way that regulators can look at what Google makes from Android, the worldwide smartphone market and the juggernaut that Apple has become and say that Google’s acquisition of Motorola is in any way anti-competitive. It is a necessary move by Google to keep pace with its biggest competitor in the mobile realm.
I’m not totally sure I buy this, although I don’t think it will stop the deal from passing. Google has used its monopoly position in the search/advertising market to build an operating system at a huge expense and give it away for free, ostensibly in order to secure mobile search advertising revenue. This is the same kind of rationale that got Microsoft in trouble in the 90s when they bundled Internet Explorer with Windows.
No other company can afford to give away a mobile operating system for free. Apple, Microsoft, and HP all have to make money on the software itself (and hardware, in the case of Apple and HP). It will be interesting to see Google’s next moves in the space. Will they begin to lock down Android, making it less “open”? Will they try to become a successful hardware manufacturer, creating best-in-class Android devices?
Great companies stand for something. They create products and services that are designed around a clear vision of how things should look, feel and behave. The products are, in a sense, opinionated.
Nike products help you achieve your full athletic potential. Ferraris are finely tuned sports cars that represent the pinnacle of performance engineering. Apple makes you Think Different about the role of technology in your life.
And yet, many companies make short-sighted decisions to maximize profit at the expense of brand reputation and company values.
Take Nikon, for instance — one of the world’s greatest photography companies. They make some of the best cameras and optics in the world, and have built their reputation over nearly a century. But they also put their name on inexpensive low-end cameras that don’t line up with the company’s core values. These cameras are brought to market in response to customer desire for Nikon-branded products at a bargain-basement price.
Sanyo is one of the world’s largest camera manufacturers, but you’ll rarely find a Sanyo camera for sale anywhere. That’s because they are an OEM for companies like Nikon who, like many others, choose from a shopping list of available models, parts and specifications.
Over time, these products fail to differentiate themselves in a crowded market of nearly identical products, and ultimately dilute the values that the company stood for.
Great companies educate their customers by demonstrating their vision and their products’ value. More importantly, they are defined by the many things they could produce but choose not to.
Every company is founded on an idea. How closely your products line up with that vision should be the defining characteristic of success.
This week, Apple will be hosting its Back to the Mac event, where it is rumoured they will unveil Mac OS X 10.7, a new MacBook Air, and iLife and iWork ‘11. As Apple concedes in the event’s media invitation, many people had begun to wonder if the Mac was being unfairly neglected in favour of the shinier and now more popular iOS-based devices (iPhones, iPads, iPods).
Yet we shouldn’t blame Apple for having shifted huge amounts of resources into this emerging platform over the past few years. The mobile space (including tablets) has exploded, and they’re clearly in it to win it. At WWDC last June, there weren’t even any Mac sessions — it was all about iOS. What’s a Mac developer to do?
Personal computers as we know them, including the Mac, will be relegated to a niche product within the next few years. Tablets are the future. There will no doubt continue to be a need among many of us for a high-performance, multitasking, windowed operating system, but that need won’t and shouldn’t extend to the average consumer.
Most people perform a few simple tasks on their computer: email, web browsing, consuming content (video, audio, photos, reading), and basic content creation (documents and spreadsheets). All of these things can be done today on a tablet — in some cases even better than with desktop computers. And it will only continue to improve as the form factor matures in the coming years.
The success of the iPad, with 7.5M units sold since its launch less than 6 months ago, seems to prove that Apple is on to something. The first million iPads sold twice as fast as the first million original iPhones. And it’s not just techies who are buying these, but regular people as well. If you consider the iPad a PC, Apple is now the number one computer maker in the United States, with 25% market share.
Most people have had an antagonistic relationship with computers for years. They have mastered a few simple tasks that they need to accomplish on a daily basis, and yet the experience of using a computer is still incredibly frustrating for the average person. As a self-professed techie, I get questions on a daily basis from friends and family about their computer woes. How do I install this webcam? Why is it beeping? Where did my contacts go? Why won’t it print? How do I resize this photo? It’s enlightening to sit and watch an average person tackle problems on a PC. Their behaviours are rarely as we (the techies) expect them to be.
And yet, for many of these users who struggle with machines on a daily basis, the iPad immediately makes sense to them. I’ve watched as people use an iPad for the first time. It’s completely intuitive and satisfying to them, and they know that they need to own one. Even two-year-olds get it. Tablets are the future of computing.
I love Mac OS X as much as anyone but, for the average user, the added complexities of window management alone make it unnecessary for most tasks. With the tablet, our antagonistic relationship with computers may soon be over — and personal computers as we know them will become a niche product.
Contrary to what many content owners would like to think, digital media has done wonders for content and is continuing to open the doors to new business opportunities.
I like being able to carry thousands of songs in my pocket. I love streaming Netflix movies straight to my Xbox, or watching them on my Apple TV. I love being able to take a photo and share it with dozens of people in a heartbeat. And I’m growing to love my recently acquired Barnes & Noble nook.
As recently as a year ago, the major issue surrounding digital content was DRM. That battle has thankfully been won — at least with music — and progress is being made with other types of content. But I think it’s time to refocus the debate. The next battle is over format shifting.
With DRM it was popular to argue that customers wanted to truly own content they purchased and not simply purchase a license for it. I want to take this one step further. I want to truly own content and not be tied down to the medium on which it exists.
Let me explain. If I had bought all The Beatles’ albums on vinyl 40 years ago and wanted them in a digital format today, I’d be forced to pay full price for the latest remasters. Granted, significant work has been done to restore the recordings. But when we buy the remastered digital files, we’re paying not only for that work, but also, once again, for content ownership rights. How much would one have spent on the exact same Beatles content after buying every new edition and remaster of the past 40 years?
And what about video? How many of you have purchased a movie over and over again on Betamax, Laserdisc, VHS, DVD, HD-DVD, Blu-ray, Xbox Marketplace, the iTunes Store, and so on? I want to have the choice of paying a small fee for the medium itself, and any applicable work that went into adapting it to a newer technology. But there’s no need to pay for the content over and over again.
As far as I know, only one company is doing this right now: The Criterion Collection, which allows you to trade up from your DVD versions to Blu-ray for a minimal fee.
There are obvious technical challenges to this idea. In an ideal world, they will fade as we move away from physical media and toward streaming content and ubiquitous DRM-free digital files.
Last week, Apple released Aperture 3, an evolutionary update to their RAW workflow/editing software for professional and high-end amateur photographers. A friend of mine asked me why I thought Adobe Lightroom users were reluctant to switch to Aperture, and it got me thinking…
Let’s start by looking at the numbers. According to a survey of professional photographers by research firm Infotrends (published by Adobe’s John Nack), market share in this segment is split at 37% for Lightroom, 6.3% for Aperture, and 57.9% for Adobe’s Camera Raw plugin for Photoshop. When isolating only the Mac platform figures (Aperture is not available on Windows and likely never will be), Lightroom has 44.4% and Aperture 12.5%. Forgetting for a minute that Adobe published these numbers, there is no question that Lightroom enjoys a commanding lead over Aperture. Why is this?
Apple was first to market with Aperture, which was released in late 2005. Adobe came out with a public beta of Lightroom in early 2006 — likely as a stalling tactic to counter adoption of Aperture. Lightroom 1.0 shipped in early 2007. While most agreed the first version of Aperture was a nice tool, it was plagued by a number of speed issues and many pros simply considered it too sluggish for general use.
Adobe had the advantage of tight integration with Photoshop and the popular Camera Raw plugin that was, at the time, the de facto standard for RAW image manipulation. Without a doubt, the fact that Lightroom was built on this same engine gave it the edge with pros. The tools felt familiar, camera support was already excellent, and the results were predictable. On the other hand, Aperture 1.0 launched with slightly inferior camera support and a few image quality bugs. Early on, Adobe also made the smart move of releasing public betas and taking user feedback into account for the final versions.
By the time Lightroom 1.0 was released, it was considerably superior to Aperture (interface and workflow preferences aside). Pros demanded speed, and Adobe delivered it. Aperture 2 brought significant speed increases, but it was too little, too late.
Fast forwarding… Aperture 3 is now available, and Lightroom 3 is in public beta. The new version of Aperture mostly adds features that appeal to amateurs, including Facebook and Flickr uploaders, and the Faces and Places support taken from iPhoto ‘09. Of interest to pros are the non-destructive brushes — but Lightroom has supported these for some time now. Clearly, Apple is trying to adjust their strategy and broaden Aperture’s appeal. In my opinion, it’s too little, too late.
In order for pros to consider moving their massive photo libraries over to Aperture, Apple would need to provide extremely compelling features with a clear, dramatic benefit over Lightroom. As of today, I think the two solutions are comparable. For a new user looking to purchase RAW editing software, the choice should come down to interface and workflow preferences more than anything. But, with Lightroom 3 slated to offer a number of important refinements — increased speed, a new noise reduction engine, a watermarking tool, improved import handling, and customizable print package creation — it seems likely that Adobe will continue to dominate this segment.