16 Sep 2005

ubicomp day 2

Second day of Ubicomp.

Our talk was the second talk of the day, and I was pretty much preparing it until the last minute before we were supposed to go up there. I think it went alright; nobody stood up and said it was all bullshit. There were some interesting questions and comments afterwards too, and I think we did a good job of deflecting a lot of potentially misguided questions.

It makes me think a bit about ubicomp: the scope is so wide that there weren't going to be many people willing to stand up and ask questions in front of 200+ people.


Found an interesting and cool demo today. KDDI or some associated company "handed out" QR Code capable phones that could be used to scan QR codes around the conference. Every exhibit and talk has a QR Code assigned to it, pointing to some "ubicommunity" site; I don't really know whether it is being used much.

The cameraphones are really advanced here. They have fast network access via i-mode and 3G, but the best thing is that their lenses are autofocus, so you can place the camera very close to the code and it won't be blurred. When will we have phones like that??

The application is a bit of a hack though: it takes much longer to click on the tag and then use your bloody phone to input ratings for demos and talks through a web form. It would be much better if more QR codes were printed, one per rating, so you could rate a demo or talk simply by scanning a tag rather than filling in a web form on a mobile phone. A rough sketch of what I mean is below.
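Something like this would generate the tags; a minimal sketch in Python using the qrcode library, where the ubicommunity rating URL scheme is entirely my invention:

    # Sketch: pre-print one QR code per (talk, score) pair, so a single
    # scan records a vote and no web form is needed on the phone.
    # The rating endpoint below is hypothetical.
    import qrcode

    BASE = "http://ubicommunity.example/rate"  # made-up endpoint

    def make_rating_codes(talk_id, scores=range(1, 6)):
        for score in scores:
            url = f"{BASE}?talk={talk_id}&score={score}"
            img = qrcode.make(url)  # returns a PIL image
            img.save(f"{talk_id}_score{score}.png")

    make_rating_codes("demo42")

You'd print the five codes next to each exhibit, and scanning one of them would be the whole interaction.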

Videos of Note

A cool video was The Yellow Chair, an arty piece rather than a technical one, but very interesting for how people think about free wifi and location-specific data. An art student set up a yellow chair outside her house and filmed people using the free wifi from the chair, then talked to them and asked what they thought. There is also a shared folder that is only available at that spot, so people can drop files they like there, and can only access them there.

Another cool one was the Ubiquitous Video, which stitched together many frames from a moving video to generate an interactive environment using alpha blending.
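I'm only guessing at their blending step, but the core operation is presumably the standard linear cross-fade across the overlap between adjacent frames; a sketch with numpy and PIL:

    # Sketch of the alpha-blending step only (my guess at the technique,
    # not their actual pipeline): cross-fade the overlapping region of
    # two adjacent frames so the seam disappears. Assumes equal heights.
    import numpy as np
    from PIL import Image

    def blend_overlap(left, right, overlap):
        a = np.asarray(left, dtype=np.float32)
        b = np.asarray(right, dtype=np.float32)
        h, w, c = a.shape
        out = np.zeros((h, w + b.shape[1] - overlap, c), dtype=np.float32)
        out[:, :w - overlap] = a[:, :w - overlap]
        out[:, w:] = b[:, overlap:]
        # Linear alpha ramp across the overlap: 1 -> 0 for the left frame.
        alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
        out[:, w - overlap:w] = alpha * a[:, w - overlap:] + (1 - alpha) * b[:, :overlap]
        return Image.fromarray(out.astype(np.uint8))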

Finally, an interesting video that I missed right at the start involved some Swiss guys who did automatic GUI generation by minimising a cost function. They built a sensor application which calculated the "insurance cost per kilometre" of your driving by adding a bunch of sensors to an iPaq, displayed it on a "tachometer"-like dial, and then got a professional driver to test it out to see how expensive he was to insure.
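The talk didn't spell out the model (or I missed it), but I imagine the computation is something in this spirit, with entirely made-up weights and thresholds:

    # Hypothetical sketch of a cost-per-kilometre model: weight risky
    # sensor readings into a running insurance cost. All numbers invented.
    def insurance_cost_per_km(samples, base_rate=0.05):
        """samples: list of (speed_kmh, accel_ms2, distance_km) tuples."""
        total_cost = 0.0
        total_km = 0.0
        for speed, accel, dist in samples:
            risk = 1.0
            if speed > 120:
                risk += 0.5       # speeding penalty
            if abs(accel) > 3.0:
                risk += 0.3       # harsh braking/acceleration
            total_cost += base_rate * risk * dist
            total_km += dist
        return total_cost / total_km if total_km else 0.0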


This was the start of a whole bunch of Intel talks, which I believe involved Placelab. A quick impression of the talks: I liked the auto-calibrating, auto-orientating ultrasonic location system from the University of Bristol, and also the DigiDress talk. The other talks I wasn't so impressed by. Interestingly, this day had three Intel papers, and I believe all were on the same system (Placelab).

I missed the detail of the first talk, about self-surveying 802.11, because I was busy preparing for our talk. But from the gist of what I heard, it was about using pre-surveyed knowledge of the environment to further enhance 802.11 positioning. The idea is to use a bunch of cell phone towers, war-driven open access points and other beacons in the environment to narrow down the location of a laptop. The experiment involved getting people to walk around with laptops for 3 days.
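I didn't catch their actual algorithm, but the simplest version of the idea would be something like a signal-strength-weighted centroid over beacons whose positions are already known from the survey:

    # Minimal sketch of the general idea (not their algorithm): estimate
    # the laptop's position as the RSSI-weighted centroid of heard beacons.
    def weighted_centroid(beacons):
        """beacons: list of (lat, lon, rssi_dbm) for heard APs/cell towers."""
        # Convert dBm (e.g. -40 strong, -90 weak) into positive weights.
        weighted = [(lat, lon, 10 ** (rssi / 10.0)) for lat, lon, rssi in beacons]
        total = sum(w for _, _, w in weighted)
        lat = sum(la * w for la, _, w in weighted) / total
        lon = sum(lo * w for _, lo, w in weighted) / total
        return lat, lon

    print(weighted_centroid([(51.500, -0.120, -50), (51.502, -0.118, -70)]))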

This was one of the first of 4 or 5 Intel research talks. I thought this one was very similar in its backend to the second one, which was a little "sliced". The one I'm talking about is "Learning the places we go". The idea is that people don't think in coordinates (e.g. "I'm going to -444.33, 24.22 today, you care to join me?") but in place names. So their argument is that, since we don't talk in coordinates, we can accept lower resolution in our applications and use place names instead of coordinates. To use place names, you have to infer the name of a place using various techniques. In this case, there was a bunch of sensors on a laptop, such as GSM, GPS and WiFi, which 3 people carried around for a length of time (not specified). They recorded the locations they went to in a paper diary and digitised their "places" after the experiment.
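I don't know exactly how they extracted places from the traces, but a common approach, sketched below, is to call anywhere you dwell within a small radius for more than a few minutes a "place":

    # My sketch of one common way to infer "places" from a coordinate
    # trace (the paper's actual method may differ): a place is anywhere
    # you stay within a small radius for longer than a dwell threshold.
    import math

    def extract_places(trace, radius_m=50, dwell_s=300):
        """trace: time-ordered list of (t_seconds, lat, lon); returns centroids."""
        places, i = [], 0
        while i < len(trace):
            j = i
            # Extend the window while we remain within radius of the start.
            while j + 1 < len(trace) and _dist_m(trace[i], trace[j + 1]) < radius_m:
                j += 1
            if trace[j][0] - trace[i][0] >= dwell_s:
                n = j - i + 1
                lat = sum(p[1] for p in trace[i:j + 1]) / n
                lon = sum(p[2] for p in trace[i:j + 1]) / n
                places.append((lat, lon))
            i = j + 1
        return places

    def _dist_m(p, q):
        # p, q are (t, lat, lon); equirectangular approximation is fine here.
        dlat = math.radians(q[1] - p[1])
        dlon = math.radians(q[2] - p[2]) * math.cos(math.radians(p[1]))
        return 6371000.0 * math.hypot(dlat, dlon)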

Things they noted were that home and work were the most frequently visited places (heh, obviously) and that three quarters of the places were visited infrequently. I believe the talk went on to cover how to generate these place names. An interesting question that arose is that these names are temporal: what is a home for you one day might be someone else's home the next, so that might be an issue that needs to be solved. I thought it was obvious that ubi apps use place names rather than coordinates; the Active Bat System uses room names rather than coordinates in most of its applications. It felt like an excuse not to work on a higher precision location system. Of course, you don't really need a high precision location system all the time, but sometimes saying "home" doesn't really cut it if you want a smart home which knows where you are.

Now to the talk that impressed me: the University of Bristol people who built a self-surveying location system using ultrasonic tags. The tags themselves cost $100 and use relative distances, just like Mike Hazas and James Scott's paper on auto-calibration. The calibration takes around 30 seconds and, interestingly, also lets you use an L-shaped gesture to define the XYZ axes. The accuracy is the most surprising bit: these guys used around 4-6 sensors to get around 2.5cm accuracy. However, their update rates are quite slow. This could be the beginnings of a deployable ultrasonic system. Their measurement of accuracy is a little suspect, but as with all people who build location systems, they admit it is very hard to establish ground truth.
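For reference, the position-from-ranges step at the heart of any such system looks roughly like the least-squares trilateration below; their self-calibration of receiver positions is the harder, more novel part, which this doesn't cover:

    # Sketch of standard 2D trilateration: solve for a tag's position from
    # ranges to receivers at known positions, by subtracting the first
    # circle equation from the rest to linearise, then least squares.
    import numpy as np

    def trilaterate(receivers, ranges):
        """receivers: (n,2) array of known positions; ranges: n distances."""
        p = np.asarray(receivers, dtype=float)
        r = np.asarray(ranges, dtype=float)
        A = 2 * (p[1:] - p[0])
        b = (r[0]**2 - r[1:]**2) + np.sum(p[1:]**2, axis=1) - np.sum(p[0]**2)
        xy, *_ = np.linalg.lstsq(A, b, rcond=None)
        return xy

    rx = [(0, 0), (3, 0), (0, 3), (3, 3)]
    print(trilaterate(rx, [2.0, 2.5, 2.5, 3.2]))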

Finally, the only other talk of note was the DigiDress paper. That is basically Nokia Sensor (which I talked about on this blog a while ago), but apparently the idea was based on research these guys did in Finland. They built a prototype and were able to distribute it from their website, by word of mouth alone, to around 280 people. They did some analysis of what people put on it (advertising themselves), whether they saw other people using it, and what mode the interaction was in. Quite interesting to see something grow from a research project into a full-blown application supported by Nokia.
