Every day or so I read another blog post (or ranting comments) about how BlackBerry could be rehabilitated, or how Nokia could restart Maemo and build the ultimate smartphone again. Things came to a head after Jolla announced their first phone for sale. Surely this phone with an amazing user interface will vindicate the N9?! Amazing technology plus a killer UI? Marketshare is theirs for the taking!
I’m sorry, but no. Most people don’t understand how a smartphone platform works. Simply put: there will not be any new entrants to the smartphone game. None. At all.
Obligatory disclaimer: I am a researcher at Nokia investigating non-phone things. I do not work in the phone division, nor do I know any internals of Nokia’s phone plans, or Microsoft’s after the acquisition of Nokia’s devices group is complete. I hear about new devices the same way you do: through leaks on The Verge. This essay is based on my knowledge of the smartphone market from my time at Palm/HP and general industry observations.
A few new Android manufacturers may join the game, and certainly others will drop out, but we are now in a three-horse race. The gate is closed. I’m sorry to Jolla, BlackBerry, the latest evolution of Maemo/Meego/Tizen, whatever OpenMoko is doing these days, and possibly even Firefox OS. No one new will join the smartphone club. It simply can’t happen anymore. You can’t make a smartphone.
There was a time when a small company, with, say, a few hundred million dollars, could make a quality phone with innovative features and be successful. This is when ‘successful’ was defined as making enough profit to continue making phones for another year. In other words: a sustainable business, not battling for significant marketshare. Those were the days when Palm could sell a million units and be incredibly happy. The days when BlackBerry had double-digit growth and Symbian ruled the roost.
Then came 2007. It might be over-reaching to say ‘the iPhone changed everything’, but it certainly was a definitive event. The 1960s began in January of 1960, but ‘the sixties’ began when the Beatles came to America in early 1964. Their arrival was part of a much larger cultural shift that started before 1964 and certainly continued after the Beatles broke up. I would personally say the sixties ended Memorial Day of 1977, but that’s just my opinion.
The Beatles’ appearance on the Ed Sullivan Show is a useful event to mark when the sixties began, even though it’s really a much fuzzier time period. Steve Jobs unveiling the iPhone in 2007 is a similarly useful historical marker. Everything changed.
The first big change was data networks. In the old days there really wasn’t a data network. Previous phones were about selling minutes and, to a lesser extent, texting. Carriers didn’t really care about smartphones. They didn’t push them or restrict them. As long as you bought a lot of minutes the carriers didn’t really care what you used.
There were no app stores back then, just a catalog of horrible Java ME Tetris clones at 10 bucks a pop. I owned a string of PalmOS devices during this period. Their ‘app store’ was literally boxes of software in a store which you had to install from your computer. No different than 1980s PCs. While my Treo had GSM data access, it was merely a novelty used to sync news feeds or download email very slowly.
Around 2006 the carriers’ 2G and 3G data upgrades finally started to come online. Combined with a real web browser on the iPhone, you could finally start doing real work with the data network. This also meant the carriers became more involved in the smartphone development process. Clearly data would be the future, and they wanted to control it. Carriers now request features, change specs, and pick winners.
Carrier influence means you can’t make a successful smartphone platform without strong support from the carriers. This is one of the things that doomed webOS. The Pre Plus launch on Verizon should have been huge for Palm. Palm spent millions on TV ads to get customers into the stores — who then walked out with a Droid. Without strong carrier support, all the way down to reps on the floor, you can’t build a user base. To an extent Apple is the exception here, but they have their own stores and a strong existing brand to leverage against carrier influence. Without that kind of strength new entrants don’t have a chance.
The cost of entry
Another barrier to entry in the smartphone market is the sheer cost of getting started. A smartphone isn’t just a piece of hardware anymore. It’s a platform. You need an operating system, cloud services, and an app store with hundreds of thousands of apps, at least. You need a big developer relations group. You need hundreds of highly trained engineers optimizing every device driver. The best WebKit hackers to tune your web browser. A massive marketing team and millions in cash to dump on TV ads. You need deep supply chains with access to the best components. The cost of entry is just too high for most companies to contemplate.
To continue with the webOS example: in 2011, I estimate HP would have had to spend at least a billion dollars a year for three years to make webOS a profitable platform — and that was two years ago. The cost has only gone up since then. There are very few companies with the resources. You already know their names: Apple, Samsung, Google, and Microsoft. All vertically integrated or well on their way to it. You aren’t one of these companies.
Access to hardware components
Smartphones need good hardware to be competitive. With the six-month cycles of today’s marketplace that means you need access to the best components in the world (Samsung), or such control of your stack that you can optimize your software to make do with lesser hardware. Preferably both.
Apple has the spare cash to secure a supply of chips and glass years in advance; you do not. If Apple has bought the best screens then your company has to make do with last year’s components. This compromise gets worse and worse as time goes on, making your devices fall further behind in the spec wars.
Retreat to the low end
A common solution to the component problem is targeting the low end. After all, if you can’t get the best components then maybe you could build a decent phone out of lesser parts. This does work to an extent, but it limits your market reach and opens you up to competition at the low end. You are now competing with a flood of cheap Android devices from mid-tier Far East manufacturers.
Even if you OEM hardware from one of these low-end manufacturers you are now in a race to the bottom. Your product has become a commodity unless you can differentiate with your user experience. That requires telling potential customers about your awesome software, which requires a ton of cash. Samsung spends hundreds of millions each quarter on Galaxy S ads. This path also requires an amazing UI that will distinguish you from your peers.
A disruptive UI
Even with a paradigm-shifting UI you’ve got to overcome all of the difficulties I outlined above. Most people in the wealthy world have smartphones already, so you not only have to convince someone to buy your phone, but also to leave the phone they already have. Your amazing UI has to overcome the cost of change. Inertia is a powerful thing.
Most likely, however, your new platform won’t have such a drastically different interface. Smartphones are a maturing platform. A smartphone five years from now won’t feel that different from today’s iPhone. Sure, it will be faster and lighter with better components, but it will still have a touch screen with apps, buttons, and lists.
Unless you’ve figured out how to make a screen levitate with pure software you won’t be able to shake up the market. Google Glass is the closest thing I can think of to a truly disruptive interface. Adding vibration effects to scrolling menus is not.
So does this mean we should give up? No. Innovation should continue, but we have to be realistic. No new entrant has any chance of getting more than one percent of the global market. That could still be a success, however, depending on how you define success. If success is being profitable with a few million units, then you can be a success. You will have to focus on a niche market though. Here are a few areas that might be open to you:
Teenagers without cellphone contracts. Make a VoIP-only phone. Challenge: you are now competing with the iPod Touch.
Point of Sale systems. Challenge: this is an enterprise-only pitch and enterprises have long sales cycles. You might be dead by then. They also don’t care about user experience, so your awesome UI doesn’t matter. Small to medium businesses will use apps on standard devices like iPads, so you are back to where you started.
Emerging markets: half of the world is buying their first smartphone. This is an opportunity if you can get in fast with cheap hardware, but now you compete with Firefox OS.
Mozilla is targeting emerging markets where last-generation hardware is more likely to succeed. Even so, Mozilla is working very closely with local carriers to ensure success while facing down competition from low-end Android devices. They also have the advantage of being a non-profit. Their ultimate goal is not to become a profitable phone company, but to keep the web open and free. This is probably not a viable option for you, and even Mozilla may be too late to follow this path.
The sad truth
Smartphones are a rapidly maturing product. Soon they will be pure commodities. Just as I wouldn’t suggest anyone build a new line of PCs or cars, I wouldn’t suggest building a new smartphone: it has become a rich man’s game. Unless you start with a few billion dollars you have no hope of making a profit. Maybe you could follow CyanogenMod’s approach of building value on top of custom Android distros, but even that risks facing the wrath of Google.
Sorry folks. There's plenty of room to innovate elsewhere.
posted Mon Dec 02 2013
No Starch Press is on a roll with its series of Lego-themed books. While most of them are about model ideas or construction techniques, Beautiful Lego is different. This is a Lego art book. In classic coffee table style it is filled with gorgeous photos to thrill the reader. Beautiful Lego does not seek to discuss ‘can Lego be art’, but takes it as fact. These are works by artists, just artists using the medium of Lego instead of paint or clay, and the results speak for themselves. Stunning.
Beautiful Lego is written and photographed by Mike Doyle, a Lego artist himself as well as an excellent graphic designer, but features the work of over 70 different artists. The book is organized by topic -- spaceships, people, architecture, robots -- with interviews of artists interspersed. Each artist is asked the single question, "Why Lego?", and the answers vary immensely. There is a common theme, though: the desire to create using an incredibly malleable medium.
Some models are beautiful and some are terrifying, such as "The Doll" (page 5) and "Dissected Frog" (page 79). The architectural models really shine; good use of the few curvy pieces in Lego can produce amazing results. There is even political art: The Power of Freedom (page 124).
Beautiful Lego surprised me by the diversity of styles within the medium of Lego. Some are hyper-detailed, some expressive, some minimalist. Angus MacLane has a cute style known in the book as CubeDudes: head-on caricatures of famous figures like President Lincoln, Kirk and Spock, and the Stay Puft Marshmallow Man (page 36).
You will appreciate the book on two levels. First, the beauty or expression of each piece; then a second time as you pore over the photos trying to figure out "How did they do that with Lego?" Mike Doyle's Victorian house series in particular will amaze you with the flexibility of Lego. (And make you wonder how big his Lego collection is!) While re-reading the book for this review, I was struck by how much good photography makes a difference when experiencing a model.
I heartily recommend Beautiful Lego to the adult Lego fan in your life. It just might make you pull out the bin from the garage and build a few original models yourself. And yes, there is a Freddie Mercury model called Fried Chicken.
posted Wed Nov 27 2013
Almost since it was first released, fans of the Raspberry Pi have asked when the hardware will be updated with better components. A faster CPU perhaps? Double the RAM? Built-in wifi? The list of components you could upgrade is long. This request was brought up again when the Raspberry Pi Foundation announced the sale of the two millionth Pi.
First, I think we should step back for a moment and consider the magnitude of this achievement. 2,000,000. Two meeealion. That’s a whole lot of tiny computers. Not only has this sales volume let the foundation move production back to the UK, but these Pis have been used to build computer labs in Africa, teach children Scratch programming, photograph endangered species in infra-red, and power countless micro-servers where a Pi is strapped to the back of a Costco hard drive. In short, the Raspberry Pi has become an engine for innovation.
At first, I too wanted a new Raspberry Pi with a spec update. True, the specs are anemic. It’s fine and well to say ‘what do you expect for $35’ but that doesn’t make my code run any faster. Upon further reflection, however, I’ve realized that not updating brings some benefits as well.
Keeping the specs identical means a stable platform. If I buy a Pi three years from now it will run software exactly as my first Pi from a year ago did. Stability is very important, especially when we are talking about software often used in poor conditions without IT staff. The same goes for accessories. Every camera module and GPIO extender is built for this specific device. They will continue to work perfectly in the future.
Keeping the specs identical means our code has to get faster instead. Modern software is blazingly inefficient and it tends not to age well. X Windows on the Raspberry Pi is extremely slow, even though it ran fine on my 486 in college at one tenth the speed. I could only dream of owning a 700MHz computer in 1995. Instead of throwing faster hardware at our problems we need to improve our code. I’m currently building a GPU-accelerated graphics API, targeting the Raspberry Pi first. If it can run at 60fps on the Pi then it can run anywhere.
Keeping the specs identical means we explore and document everything. While slow, the Raspberry Pi has some very interesting hardware that can do amazing things when used properly. Only devices with a long life span get fully explored. Just look at the things people have done with the NES and C64. Because these devices were so popular they were documented (i.e., reverse engineered) in exhaustive detail. Today I could build a simple NES emulator over a (long) weekend if I chose, thanks to the hard work done by the community over the years. If we keep the specs the same then the Raspberry Pi will be similarly dissected and documented.
I do not long for a new Raspberry Pi. I long for better software that lets me do more with what we already have. Here’s to another two million identical Pis, each a spark for a new idea, not new hardware.
posted Tue Nov 19 2013
Now that Apple has given us final specs and cost for the redesigned Mac Pro, I’ve heard complaints that it is underpowered and non-expandable, especially for the price. The Pro comes with reasonably beefy CPUs, but they will be out of date in a few years. The buyer can only expand the RAM and disk, and not so much on the disk side given the lack of available space. So how can this be worth the $3,000 entry price Apple is charging?
First we must realize that the Mac Pro isn’t for everyone. It really is for creative professionals who spend a lot of time in Logic, Aperture, Final Cut Pro, Maya, and other pro apps. These people need the maximum RAM and processing power possible, and will pay for it. Expandability of storage isn’t a problem because they don’t care about internal storage anyway. Anyone who buys one of these will be using a stack of external drives or a NAS. I can buy a 3TB drive at Costco for under 200 bucks! Thus the nice collection of Thunderbolt and USB ports on the Mac Pro’s backside.
More importantly, however, the CPUs aren’t the real focus of the new Mac Pro. Apple is betting that the future of high-speed computation is GPU computing. Apple is right.
I recently went to the International Supercomputing conference when it was held here in Oregon. At least 50% of the talks were about how to restructure computing tasks to take advantage of GPUs. GPUs are the future of almost all high-performance computing. GPUs are not as general purpose as a modern CPU, but if you can structure your problem in a way that a GPU can compute, then you can get a 5x to 10x performance boost per watt (or dollar). Nvidia is happy to sell you a stack of GPUs without video connectors (and Intel will sell you Xeon Phi coprocessors in the same spirit). These cards exist purely for computation. Racked together, a stack of GPUs will beat any traditional CPU-only supercomputer.
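To make ‘restructure for the GPU’ concrete, here is a minimal sketch in OpenCL C; the kernel and its names are my own illustration, not from any particular talk. A serial loop such as for (i = 0; i < n; i++) out[i] = a[i] * b[i] + c[i]; becomes a kernel with no loop at all, and the GPU launches one instance per element, thousands in flight at once:

    __kernel void mul_add(__global const float *a,
                          __global const float *b,
                          __global const float *c,
                          __global float *out)
    {
        /* Each work-item handles exactly one element; the runtime
           supplies the "loop index" instead of a for statement. */
        size_t i = get_global_id(0);
        out[i] = a[i] * b[i] + c[i];
    }

That is the whole trick: find the independent, per-element part of your algorithm and let the loop disappear into the hardware.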
Of course, with the GPUs doing the heavy lifting the challenge becomes how to get your data *to* the GPU quickly. That’s why Apple’s Mac Pro site spends so much time talking about the IO bus and memory bandwidth. Internal storage? CPU upgrades? Who cares! The Mac Pro is all about moving data in and out of beefy GPUs as fast as possible.
Apple has been working on this for a while. They first shifted graphics work to the GPU with Quartz Extreme, which enabled the OS X compositing window manager to run smoothly on older hardware. Later Apple introduced full Mac support for OpenCL, a computation companion API to OpenGL. When you write some code in OpenCL the Mac can shift the computation dynamically between the CPU and the GPU. Powerful GPUs can make up for weak CPUs.
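A minimal host-side sketch of that flexibility (my own example, not Apple sample code): the kernel source never changes, so the program below simply asks for a GPU and quietly falls back to the CPU if none is available.

    #include <stdio.h>
    #ifdef __APPLE__
    #include <OpenCL/opencl.h>   /* on the Mac, OpenCL ships with the OS */
    #else
    #include <CL/cl.h>
    #endif

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        char name[128];

        clGetPlatformIDs(1, &platform, NULL);

        /* Prefer the GPU; fall back to the CPU if none is present.
           The kernels we enqueue later are identical either way. */
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("running kernels on: %s\n", name);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        /* ...build the program, create buffers, enqueue kernels... */
        clReleaseContext(ctx);
        return 0;
    }

On a Mac this builds with cc demo.c -framework OpenCL; no SDK download required.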
And this brings me to the Raspberry Pi, my favorite cheap ARM-based mini-computer -- so cheap I’ve seen hard drives with Pis glued to the side of them as file servers. At 700MHz the Raspberry Pi’s CPU is anemic, but the GPU is surprisingly powerful. Broadcom’s VideoCore IV not only supports OpenGL ES 2.0, meaning real shader support, it also has H.264 video encoding and decoding in hardware. It can decode a 1080p video in real time on this $35 computer. The CPU just has to stream the compressed video file to memory; the GPU will take care of the rest.
The Pi’s GPU also has an interesting API called dispmanx. While it is almost entirely undocumented, I’ve learned that this API lets you set up an almost unlimited number of hardware layers in the GPU. You can have one layer with 3D content from OpenGL while a second layer plays video and a third shows images. Most importantly, each of these layers can be resized and alpha-blended entirely by the GPU. This means we can create a full compositing window manager like OS X and Windows 7 have, all on our tiny 700MHz computer. Developers are already working on a port of the composited Wayland/Weston library to the Raspberry Pi.
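For the curious, here is roughly what driving dispmanx looks like. This sketch is pieced together from the hello_pi samples that ship with Raspbian and the firmware headers in /opt/vc; since there are no official docs, treat every name here as ‘what the headers say today’ rather than a stable API.

    #include <bcm_host.h>   /* firmware headers, from /opt/vc/include */

    int main(void)
    {
        bcm_host_init();

        /* Open the primary display (device 0 is the LCD/HDMI output). */
        DISPMANX_DISPLAY_HANDLE_T display = vc_dispmanx_display_open(0);

        /* Scene changes are batched into an update and applied atomically. */
        DISPMANX_UPDATE_HANDLE_T update = vc_dispmanx_update_start(0 /* priority */);

        VC_RECT_T dst_rect, src_rect;
        vc_dispmanx_rect_set(&dst_rect, 0, 0, 1920, 1080);
        vc_dispmanx_rect_set(&src_rect, 0, 0, 1920 << 16, 1080 << 16); /* 16.16 fixed point */

        /* Ask the GPU to blend this entire layer at 50% opacity. */
        VC_DISPMANX_ALPHA_T alpha = { DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS, 128, 0 };

        /* The layer number is the z-order: higher layers composite on top.
           A source resource of 0 follows the EGL samples, where the layer's
           pixels come from an OpenGL ES surface bound to this element. */
        vc_dispmanx_element_add(update, display, 10 /* layer */, &dst_rect,
                                0 /* src resource */, &src_rect,
                                DISPMANX_PROTECTION_NONE, &alpha,
                                NULL /* clamp */, DISPMANX_NO_ROTATE);

        vc_dispmanx_update_submit_sync(update); /* blocks until the GPU applies it */
        return 0;
    }

It builds with something like cc -I/opt/vc/include -L/opt/vc/lib layer.c -lbcm_host, though the exact include paths have shifted between firmware releases.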
While the Raspberry Pi does not support OpenCL, it is possible to use the GPU for accelerated JPEG decompression, and there are ongoing efforts to directly target the VideoCore’s internal APIs for SIMD processing.
All of this power comes from shifting computation from the general-purpose CPU to the custom-purpose GPU. This is a long-term trend. Over time more and more work will be shifted. GPUs can’t do all computational tasks, of course; but if you can transform your problem into something the GPU can handle (preferably something highly parallel), then you’re golden. He who controls the GPU... controls the world! Now let’s get some cheese, Pinky.
posted Tue Nov 12 2013
Installing Node on a Raspberry Pi used to be a whole lot of pain. Compiling a codebase that big on the Pi really taxes the system, plus there are the usual dependency challenges of native C code. Fortunately, the good chaps at nodejs.org have started automatically building Node for ARM Linux on the Raspberry Pi. This makes life so much easier. Now we can install Node in less than five minutes. Here’s how.
First, make sure you have the latest Raspbian on your Pi. If you need to update it, run:
sudo apt-get update; sudo apt-get upgrade
Node and NPM
Now install Node itself:
wget http://nodejs.org/dist/v0.10.2/node-v0.10.2-linux-arm-pi.tar.gz
tar -xvzf node-v0.10.2-linux-arm-pi.tar.gz
node-v0.10.2-linux-arm-pi/bin/node --version
You should see:
v0.10.2
Now set the NODE_JS_HOME variable to the directory where you un-tarred Node, and add the bin dir to your PATH using whatever system you prefer (bash profile script, command line vars, etc.). In my .bash_profile I have (adjust the path to wherever you un-tarred Node; /home/pi is just my setup):
export NODE_JS_HOME=/home/pi/node-v0.10.2-linux-arm-pi
export PATH=$PATH:$NODE_JS_HOME/bin
Now you should be able to run node from any directory. NPM, the node package manager, comes bundled with Node now, so you already have it. Check it with:
npm --version
To compile native modules you will also need node-gyp. The compilers should already be installed with Raspbian. Check using:
gcc --version
Install node-gyp with:
npm install -g node-gyp
Now any native module should be compilable.
That’s it. Node in 5 minutes.
posted Wed Oct 23 2013