My friends launched their “travel inspiration” service Wanderfly today. Co-founded by Evan Schneyer, Christy Liu, and Cezary Pietrzak, “Wanderfly answers this basic question: ‘Where can I go?’”
Just enter your departure city, budget, approximate time frame, location (if you know a general vicinity, e.g., “Europe,” or you can be completely spontaneous and leave it defaulted to “Anywhere”), and interests (casino, eco, food, culture, outdoors, romance, shopping, spa, party, beach, entertainment, and/or luxury). Click “Get Going,” and the system will recommend custom-tailored locations along with flight and hotel options and a collection of things to do pulled in from services such as Foursquare, Yelp, Eventful, Nile Guide, Find. Eat. Drink, and Lonely Planet.
Some really cool stuff coming from Microsoft Research. LightSpace is essentially Surface without the table, encompassing the entire environment around you, and controlled by projectors and depth-sensing cameras. Definitely a push towards gesture-based computing, LightSpace and future research will embrace “device-less augmented reality.”
LightSpace combines elements of surface computing and augmented reality research to create a highly interactive space where any surface, and even the space between surfaces, is fully interactive. Our concept transforms the ideas of surface computing into the new realm of spatial computing.
Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand.
Stumbled upon this awesome browser “game” while reading Gizmodo. It’s like a game of Asteroids on most any webpage where you shoot up objects on the site – just drag-and-drop the javascript code into your bookmarks menu, visit a site, and launch the code. You steer your ship with the arrow keys, and shoot with the spacebar. Like it says on the site, “it’s cooler if you make your own sound effects.” Have fun!
I’ve been visiting ui.stackexchange.com recently as a resource for UI/UX information. It’s a site in the Stack Exchange network, a community-powered Q&A hub that allows members to ask, answer, and rate. Here’s what I grabbed from their wiki:
The websites feature the ability for users to ask and answer questions, and through membership and active participation, to vote questions and answers up or down and edit questions and answers in a wiki fashion. Users can earn reputation points and “badges” through site participation; for example, a user is awarded 10 reputation points for receiving an “up” vote on an answer given to a question, and can receive badges for their valued contributions. By collecting reputation points, users are given more and more permissions, ranging from the ability to vote and comment on questions and answers to the ability to moderate many aspects of the site.
Some pretty good topics have been posted (284 as of today), but as with any great site, content is king, so once the community grows it’ll be a great spot to gain some insight.
By now, you’ve probably heard, seen, or used Google’s new search interaction, Google Instant. Quick recap: Instant progressively shows results as the user types a query and refreshes results as the user adds each additional character, all the while providing suggested searches. Here are the benefits Google says will be gained by using Instant:
Faster Searches: By predicting your search and showing results before you finish typing, Google Instant can save 2-5 seconds per search.
Smarter Predictions: Even when you don’t know exactly what you’re looking for, predictions help guide your search. The top prediction is shown in grey text directly in the search box, so you can stop typing as soon as you see what you need.
Instant Results: Start typing and results appear right before your eyes. Until now, you had to type a full search term, hit return, and hope for the right results. Now results appear instantly as you type, helping you see where you’re headed, every step of the way.
I’ve noticed that I’ll keep typing my search even though the correct suggestion is right underneath. The first suggestion is shown in gray in the main search bar, and I type along with it like I’m playing a typing game.
I have a habit of pressing ‘Enter’ after a search even when the result is already shown. I feel that when Instant suggests what I might be looking for in the field but I haven’t finished typing the entire query, pressing ‘Enter’ should default to the suggested query.
Refreshing results with each additional character is extremely fast.
There should be some type of hotkey to activate the “I’m Feeling Lucky” feature during a search.
I can’t wait to see if Instant will roll out to Google’s other features, especially Maps and Images. Unfortunately, I now find myself expecting this type of interaction on every single search field I use elsewhere on the Internet. But some clever devs have made their own home-brewed versions that work on other services like YouTube and iTunes.