OreDev on Design

I was in Sweden all of this past week for the OreDev conference. I had a wonderful time last year and put it at the top of my list of conferences to attend again. The attendees are friendly and their technologies diverse, which makes for a great learning environment. I was especially pleased to see that they have added an entire track on User Experience. What follows is OreDev's take on the future of user experience design, from the visualization technologies coming out of Microsoft Research, to a brief history of touch interfaces, to the latest rapid development technologies for mobile devices.

Interactive Visualizations

First up was Eric Stollnitz from Microsoft Research's Interactive Visual Media Group. This is the group responsible for the zooming PhotoSynth technology, and they continue to pump out amazing imaging technology, some of which he demoed for us.

Eric showed us the Image Composite Editor, which stitches a multi-gigapixel panorama out of a hundred photos taken with a standard consumer digital camera. You just give it a directory full of images and it does the rest, arranging and blending the photos into a single final image. I especially liked how it arranges the photos randomly and then moves them into place as it does the analysis, which gives you a sense of what the app is doing. As you change parameters it gives realtime feedback, updating the generated panorama using lower quality thumbnails. Once you've got settings you like, you press the render button and let it run in the background, producing the final (very large) image. Future versions of this app may work with video footage as well.
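
That preview-then-render pattern is worth copying in any app with an expensive output step. Here's a rough sketch of the idea in Java/Swing (entirely my own invention; the real editor is a native Windows app): run a cheap thumbnail pass on every parameter change, and save the slow full-resolution pass for a background task triggered by the render button.

    import javax.swing.SwingWorker;

    // Hypothetical sketch of the preview-then-render pattern: parameter
    // changes update a cheap thumbnail preview immediately, while the
    // expensive full-resolution render runs on demand in the background.
    public class PreviewRenderer {
        private SwingWorker<Void, Void> fullRender;

        // Called on every slider change; fast enough to run inline.
        public void onParameterChanged(final double blend) {
            updateLowResPreview(blend);
        }

        // Called when the user presses "render"; runs off the UI thread.
        public void onRenderPressed(final double blend) {
            if (fullRender != null) fullRender.cancel(true);
            fullRender = new SwingWorker<Void, Void>() {
                @Override protected Void doInBackground() {
                    renderFullResolution(blend);
                    return null;
                }
            };
            fullRender.execute();
        }

        private void updateLowResPreview(double blend) { /* fast thumbnail pass */ }
        private void renderFullResolution(double blend) { /* slow gigapixel pass */ }
    }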

The first thing he noted about their demos is that almost everything is now done with WPF, Microsoft's vector-based UI toolkit for writing native Windows apps. It lets developers work at a higher level and produce apps that would be difficult or impossible to build with the older WinForms toolkit. In the Java world this parallels the relationship between JavaFX and Swing. The panorama stitching app was originally built with older technology, and a single intern was able to completely rewrite it in WPF over the summer.

Eric demoed several apps using the DeepZoom technology that MS has been showing for the past couple of years. It's interesting to see how this technology has evolved. Early demos simply focused on the amazing technical ability to zoom into insanely large images. Now that this is commonplace the focus has shifted to what useful things can be done with the technology, which is far more exciting. Modern computers have so much excess computing power; it's nice to see us doing interesting things with it.

My favorite DeepZoom app is called WorldWide Telescope. It combines beautiful large space images from various telescopes with user-created content. You can 'tour' the galaxy along paths created by different tour guides. In one example a seven-year-old kid first showed us his home in Toronto, then took us on a tour of his favorite constellations. Another tour took us from Earth, through the solar system, out to the local galaxy structures, and finally to the edge of the known universe. The blending of multiple images into a single experience was simply amazing.

Tap is the New Click

Next up was Dan Saffer of Kicker Studio. It turns out I've been reading his blog for months. I found it while Googling for touch interfaces on Bing.com.

Dan gave us a brief history of touch technology, an overview of the various options, and finally some design guidelines for using touch in new interfaces.

One of the most interesting challenges in designing touch interfaces is that fingers are significantly larger than a mouse pointer, and your hand can block part of the screen. With clever design you can address these shortcomings, but it's not trivial.

One way to solve the size issue is with iceberg and adaptive touch points. A touch point is simply a place on the screen where a user can touch; it's what we'd think of as a button or clickable image in traditional user interfaces. An iceberg touch point is one where the touchable area is larger (sometimes much larger) than the apparent visual bounds of the thing being touched. This focuses the user's attention on the center of the touch point while allowing a large margin of error.
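
To make the idea concrete, here's a tiny Java sketch of my own (not from Dan's talk): the rectangle the user sees and the rectangle that accepts touches are simply two different things.

    import java.awt.Rectangle;

    // Hypothetical sketch: the touchable "iceberg" bounds are larger
    // than the rectangle that is actually drawn on screen.
    public class IcebergTouchPoint {
        private final Rectangle visualBounds; // what the user sees
        private final int slop;               // invisible extra margin, in pixels

        public IcebergTouchPoint(Rectangle visualBounds, int slop) {
            this.visualBounds = visualBounds;
            this.slop = slop;
        }

        // Hit-test against the inflated bounds, not the visual ones.
        public boolean hit(int x, int y) {
            Rectangle touchBounds = new Rectangle(
                visualBounds.x - slop, visualBounds.y - slop,
                visualBounds.width + 2 * slop, visualBounds.height + 2 * slop);
            return touchBounds.contains(x, y);
        }
    }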

An adaptive touch point is one where the touch bounds adjust based on context. For example, if you type 't' then 'h' on the iPhone keyboard, the OS can guess that you are likely to type a vowel next (at least in English). The keyboard can adaptively expand the touch areas of the vowels to make them easier to hit, which significantly reduces the error rate of touch keyboards.
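
Again as a sketch of my own, an adaptive touch point just means the invisible hit area grows with the likelihood of the key; the likelihood numbers would come from a language model, which I'm hand-waving here.

    import java.awt.Rectangle;

    // Hypothetical sketch: a key's invisible hit area grows with the
    // likelihood that it will be typed next.
    public class AdaptiveKey {
        private final char letter;
        private final Rectangle visualBounds;

        public AdaptiveKey(char letter, Rectangle visualBounds) {
            this.letter = letter;
            this.visualBounds = visualBounds;
        }

        // likelihood is P(this letter comes next | text so far), supplied
        // by some language model; 1.0 grants the maximum extra margin.
        public boolean hit(int x, int y, double likelihood) {
            int slop = (int) (likelihood * 15); // up to 15 extra pixels
            Rectangle bounds = new Rectangle(
                visualBounds.x - slop, visualBounds.y - slop,
                visualBounds.width + 2 * slop, visualBounds.height + 2 * slop);
            return bounds.contains(x, y);
        }
    }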

The final example Dan showed us was a visual remote control system his firm developed. It's a set-top box that uses a camera and a computer vision system to let you control your TV with hand gestures alone. They developed a simple language of gestures for changing the volume, navigating menus, and turning the TV on and off. It was interesting to see how, through careful research and design, they reduced a complex task to a small set of simple gestures. Great stuff.

Quote of the day: "The best designs are those that dissolve into behavior."

Making web applications for iPhone

On Thursday I attended a talk by Michael Samarin on developing iPhone web applications without going through Apple's App Store or writing a single line of JavaScript (well, okay, he wrote three lines). He did everything with Dashcode, Apple's visual web design tool, and Java servlets in NetBeans.

Dashcode has improved a lot since I last used it. Originally a tool for building Dashboard widgets, it has turned into a general purpose web design tool with a drag-and-drop interface similar to Apple's Interface Builder tool for desktop apps. Most importantly, it has tons of JavaScript-enabled widgets that emulate the native iPhone environment.

Michael built a simple app on stage that could navigate and play a directory of video clips on his server using a JSON web feed from the servlet. Dashcode lets you bind JavaScript UI controls to JSON fields, making a functional app with only about three lines of actual code. You can even add simple hardware-accelerated 3D flip transitions using CSS. The server side component was a straight Java servlet which parses the directory structure on disk and generates the JSON feed.
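
The server side of an app like this really can be tiny. Here's a rough sketch of the kind of servlet involved; the directory path and JSON field names are my own invention, and real code would escape the file names before embedding them in JSON.

    import java.io.File;
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical sketch: list a directory of video clips and emit a
    // JSON array that Dashcode's data sources can bind to directly.
    public class VideoFeedServlet extends HttpServlet {
        private static final String CLIP_DIR = "/var/media/clips"; // assumed path

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("application/json");
            PrintWriter out = resp.getWriter();
            out.print("{\"clips\":[");
            File[] files = new File(CLIP_DIR).listFiles();
            for (int i = 0; files != null && i < files.length; i++) {
                if (i > 0) out.print(",");
                out.print("{\"name\":\"" + files[i].getName() + "\",");
                out.print("\"url\":\"/media/" + files[i].getName() + "\"}");
            }
            out.print("]}");
        }
    }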

Next he used some AppleScript, called from Java, to build a web-based remote control for the QuickTime player. It was simple to build and surprisingly responsive: every bit as good as a native iPhone app, without all of the headaches. Michael's company, Futurice, now often recommends web apps to their clients instead of native iPhone apps because development is far faster and cheaper, and the apps can be updated instantly without going through Apple's approval process.
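
I don't know exactly how Michael wired his up, but calling AppleScript from Java can be as simple as shelling out to osascript. A hypothetical sketch:

    import java.io.IOException;

    // Hypothetical sketch: drive QuickTime Player from Java by shelling
    // out to osascript with a one-line AppleScript.
    public class QuickTimeRemote {
        public void play() throws IOException  { tell("play document 1"); }
        public void pause() throws IOException { tell("pause document 1"); }

        private void tell(String command) throws IOException {
            String script = "tell application \"QuickTime Player\" to " + command;
            new ProcessBuilder("osascript", "-e", script).start();
        }
    }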

The rapid development time really lets his team focus on the user experience. While I still believe in plugin-based rich clients for the web, I was very impressed with what's possible in pure JavaScript and CSS on the iPhone. They've got quite a smooth stack.

Conclusion

I love OreDev because it exposes me to technology I wouldn't otherwise get to see. Ultimately, great design doesn't depend on which technology you use, but on how you use it. And if the talks presented here are any indication, the future looks bright for many technologies.

Those are the highlights for me. I gave my final presentation, on JavaFX, on Friday, followed by some fun demos for a local Java group. Both went quite well and served as prep for a new app I'll be launching in a couple of days. Now it's time for some sleep.

Talk to me about it on Twitter

Posted November 10th, 2009

Tagged: travel