
Small update on the Social RFID project demo

The story videos are back online.

My Red5 Media Server, which hosts the Social RFID stories, has been down for quite some time, making the story videos unavailable. I decided to make a little workaround for the web version: the story videos now run off the webserver itself, making the thumbnails and videos available again.

Play around with the working demo!

Social RFID

RFID 2.0

While I wrote about social applications making use of existing business RFID infrastructure, RFID in Japan mentions that HP Japan and BEA Japan are proposing a business kind of “RFID 2.0”:

Systems that allow for a company to integrate the data with existing business applications (and share the data with other companies) are “RFID 2.0.”

A nice analogy indeed. Unfortunately their source, Nikkei, is in Japanese. Judging from the English summary, it looks like both companies have thought about the way data is treated within the RFID infrastructure. But did they mention end users?

Social RFID

A PDF version of my thesis (2MB) is available for download:

Social RFID thesis cover

A website accompanies this thesis; I will publish a demo application of my graduation project there soon. Right now the project uses RFID input, so I will have to make a workaround. I’ll try to publish a web version soon.

Thingtagging Google Base items

As an example of my thesis findings I wanted to publish my thesis through Google Base and provide it with a Thinglink code. As Google Base is still in beta, this does not work perfectly at the moment.

The first problem I came across was adding the attribute ‘Thinglink’. Google did not approve this attribute name. After I contacted Google about this, they approved it for my particular item, but I’m still unable to add more items and provide them with thinglinks.

Google Base

The second problem is the layout of the actual thinglink. Google Base attributes do not allow colons, so it’s impossible to use an attribute named ‘Thinglink’ (which should be possible as soon as Google approves this attribute for new items) with the value ‘thing:189THS’. A workaround could be to use just the part after the colon: 189THS. Google came up with the following solutions:

I have researched your question, and unfortunately, at this time there is not an easy way to include the formatting “thing:123ABC.” As you stated our system will strip colons ( : ) in custom Google Base attribute value fields.

At this time, you can include the formatting “thing:123ABC” in the following locations: title, description, or in custom attribute field of the type “Large text.” The “Large text” option is not very practical in your situation, as it needs to be text of few lines to allow the attribute to remain specified as type “Large text” otherwise the attribute specification reverts back to “text.”
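The colon workaround can be sketched as a pair of helpers: strip the “thing:” prefix before storing the value in a Google Base attribute, and restore it when reading the value back. This is a TypeScript illustration; the function names are my own, not part of any Google Base API:

```typescript
// The "thing:" prefix that Google Base strips because of the colon.
const THINGLINK_PREFIX = "thing:";

// Strip the prefix so the value survives Google Base's colon stripping.
function toBaseAttributeValue(thinglink: string): string {
  return thinglink.startsWith(THINGLINK_PREFIX)
    ? thinglink.slice(THINGLINK_PREFIX.length)
    : thinglink;
}

// Restore the canonical "thing:..." form when reading the attribute back.
function fromBaseAttributeValue(value: string): string {
  return THINGLINK_PREFIX + value;
}

console.log(toBaseAttributeValue("thing:189THS")); // "189THS"
console.log(fromBaseAttributeValue("189THS"));     // "thing:189THS"
```

As long as every application on the reading side agrees to re-add the prefix, no information is lost by storing only the part after the colon.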

A working Google Base item example of my thesis is public and searchable at the moment. At the time I created it, I had not published the actual thesis yet. Earlier this month, Google Base started allowing you to attach more files than just images. A very detailed digital representation of an object can now be created!

My Google item was of the Item Type ‘Thesis’. It is impossible to use custom item types at the moment; I suppose this will be fixed soon. Because my current item is already of the type ‘Thesis’ and I cannot change it on the edit page, I am unable to update the information on my thesis.

Update: My thesis ‘Social RFID’ is now available and more detailed on Google Base.

Additionally, just as Matt Biddulph explains thingtagging on Flickr, the same can be done with Google Base items, linking the description layer of Google Base to the actual thinglinked object. Unfortunately here, too, Google currently strips the colon from the thinglink.
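In the spirit of such machine tags, a secondary application could recover thinglinks embedded in a free-text description field. A minimal sketch, assuming thinglink codes look like the thing:189THS example in this post:

```typescript
// Find machine-tag style thinglink codes in a free-text description.
// The exact code format is an assumption based on "thing:189THS".
function findThinglinks(description: string): string[] {
  const matches = description.match(/thing:[0-9A-Za-z]+/g);
  return matches === null ? [] : matches;
}

console.log(findThinglinks("My MA thesis, thinglinked as thing:189THS."));
// → ["thing:189THS"]
```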

Internet For Things

You always know Google is up to something. After my request for an XML feed for individual items I got the answer:

We currently do not have an RSS feed for individual items, but we’re always working to further refine and enhance Google Base.

Of course it was only a matter of time until Google would release an API for Google Base just like all its other services.

In my thesis I wrote about setting up a primary description layer for objects. To create a public counterpart of the EPCglobal network one of the best central description databases would be Google. And that’s what is possible right now! By using Google Base as primary information layer we can create a second layer of applications for things. These applications all have their own database and can combine the central descriptive information in the primary layer with their own or other secondary applications.
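As a sketch of this layering, the snippet below merges a hypothetical primary descriptive record (the central Google Base layer) with a secondary application's own story data, joined on the object's unique identifier. All types, field names and data are illustrative assumptions, not an actual Google Base or Thinglink API:

```typescript
// Primary layer: central descriptive information about an object.
interface PrimaryRecord {
  id: string;          // the object's unique identifier (its thinglink)
  title: string;
  description: string;
}

// Secondary layer: one application's own data about the same object.
interface StoryRecord {
  id: string;          // the same identifier links the two layers
  stories: string[];
}

// Combine both layers into one view of the object.
function mergeLayers(primary: PrimaryRecord, secondary: StoryRecord) {
  if (primary.id !== secondary.id) {
    throw new Error("layers describe different objects");
  }
  return { ...primary, stories: secondary.stories };
}

const record = mergeLayers(
  { id: "thing:189THS", title: "Social RFID", description: "MA thesis" },
  { id: "thing:189THS", stories: ["how I wrote it"] },
);
console.log(record.stories.length); // 1
```

The point of the design is that the secondary applications never duplicate the descriptive data; they only hold their own layer and join on the identifier.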

The objects (or items, as Google refers to them) all need a unique identifier, and that’s where Thinglink appears: the public alternative to the commercial EPC (Electronic Product Code). At the moment Thinglink is doing great, but it is doing a double job. It provides unique identifiers: the actual thinglinks. But it also allows its users to create a descriptive layer around these things. At that point Google is doing the same, but better, and gives people more flexibility to create a much more detailed digital representation of the ‘item’ concerned.

Thinglink’s job is to provide unique identifiers: thinglinks. Its website could give a nice overview of thinglinked objects, just like it does now, and it does a great job at that. The descriptive information about things, though, belongs in Google Base, because we have more than just thinglinked objects.

The term ‘Internet of Things’ refers to creating smart networked environments where objects communicate with each other. With the Google Base API becoming available we are one step closer to building applications for objects in the analogue world: an ‘Internet For Things’. More about this can be found in my MA thesis.


Yesterday I had my final exam, and I made it! It was nice to finally present these months of work and I was pleased to see the jury being so enthusiastic about both my project and thesis. In my examination committee were John Hennequin, Dick Willemsen, Rob van Kranenburg and Eric-Paul Lecluse.

The thesis has its own unique identification code, the thinglink thing:189THS. Its description layer is stored as an item at Google Base. More on these layers of information can be found in my thesis. Google Base is currently experiencing technical problems, so updating the item’s details and publishing the PDF through Google Base is not working at the moment. I will retry soon.

Thesis Social RFID - Patrick Plaggenborg

At the HKU Faculty of Art Media & Technology Graduation Exposition 2006 I will present my project. It will take place on the 13th and 14th of September.

Project description

Object history

People do not realize they are leaving a growing number of tracks because of digitalisation. Phone calls are being stored and internet traffic is being logged. People are not just blind to the digital history these digital tracks form; they are also blind to history in the analogue world.

Even without RFID, objects have their own history. They all lead their own life. Some are used intensively and might build very strong relationships with people. Others are destined to a lonesome stay on shopping shelves. Objects collect human emotion and experiences. This emotional history is invisible to strangers.

While browsing a store with second-hand goods I found a very old plastic military toy. The turret was broken off and it had only one wheel. My first thought was: how can they charge € 0,25 for a tank like this? Would anyone ever buy such a worn toy? But then I thought about all the things the object might have gone through.

Plastic military tank

My project’s goal is to make people look at objects differently, by making this unforeseen emotional history visible. In this project users digitally construct that emotional history by telling their own audiovisual stories about the objects. An object that looks worthless at first sight will be appreciated once you listen to stories about others’ experiences. Hidden emotions are revealed.

With a future pervasive RFID infrastructure and mobile devices capable of interacting with ‘digital’ objects, this hidden emotional history can be revealed. The digital body that RFID creates can build a collective emotional memory, making it possible for objects to carry the story of their own history. This forms the emotional counterpart to the supply chain application of RFID.

SocialRFID application screenshot

History parallel

A mobile device is used to view emotional stories about the object. The project takes a critical stance on supply chain history collection, drawing a parallel between the descriptive history of EPCglobal or Thinglink data and the emotional history of object stories: digital history and analogue history. The user can browse the descriptive history and retrieve manufacture, transport, retail or usage information. On the other side of the screen its parallel, the emotional history, shows a list of the stories added so far. The user can then select a story and listen to it. Ultimately the goal is to let users of the system expand the collection of stories about an object by recording their own.

Extracting the emotional story

Technical side of the project

Finally every little part of the application is working together:

  1. Every object has its own unique identifying code, stored in an RFID tag.
  2. The SocketScan software on the PDA reads the tag with the CompactFlash RFID reader and sends virtual keystrokes to Flash.
  3. Flash recognises the unique ID and retrieves the object’s information from the webserver.
  4. Object information is stored in a MySQL database.
  5. Flash likes to communicate in XML, so Flash accesses a PHP script that forms XML from the SQL data.
  6. The external website running the PHP script also hosts the object photograph displayed next to the Descriptive History.
  7. The number of stories and their unique names are also found in the XML data.
  8. Thumbnails of the stories are stored as JPEG images on the webserver.
  9. The open source Red5 Flash Communication Server hosts the actual stories in the Flash Video format (flv).
  10. Flash streams the flv stories from the Red5 server.
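The data flow in steps 5–7 can be sketched as follows. This TypeScript stand-in parses the kind of XML the PHP script might produce; the element and attribute names are my own assumptions, and the real Flash client used its own XML classes rather than this simplified regex approach:

```typescript
// The object information the Flash client needs from the XML feed.
interface ObjectInfo {
  id: string;        // the RFID tag's unique ID
  photo: string;     // filename of the object photograph
  stories: string[]; // unique names of the available stories
}

// Parse a hypothetical <object .../> document into an ObjectInfo.
function parseObjectXml(xml: string): ObjectInfo {
  const get = (re: RegExp): string => {
    const m = xml.match(re);
    return m === null ? "" : m[1];
  };
  const stories: string[] = [];
  const storyRe = /<story name="([^"]*)"/g;
  let m: RegExpExecArray | null;
  while ((m = storyRe.exec(xml)) !== null) stories.push(m[1]);
  return {
    id: get(/<object id="([^"]*)"/),
    photo: get(/photo="([^"]*)"/),
    stories,
  };
}

const xml = `<object id="4313750134" photo="tank.jpg">
  <story name="tank-at-the-beach"/>
  <story name="lost-turret"/>
</object>`;
console.log(parseObjectXml(xml).stories.length); // 2
```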

For the graduation exposition I’m working on the ability for visitors to add new stories to the objects available.

Socket CF RFID reader received

After waiting two weeks for a reader that was supposed to be shipped within 2 days, yesterday I got the message that it would take another 5 days. With my exam this Monday, that is of course way too late. It was quite annoying to hear this on such short notice. Thankfully I was able to get hold of the last reader in stock at the Dutch distributor of Socket products.

Socket CF RFID Reader Card 6E

In the meantime I’ve been working on the keylogging class for Flash. After installing the Socket reader it was quite easy to read and write information to the RFID tags with the Socket Demo application. Unfortunately my keylogging class did not work right away. The onKeyDown handler does not work with the SocketScan keyboard wedge software; Flash did not notice anything. The onKeyUp handler does work, but the keycodes that are sent do not resemble an ordinary keyboard at all!

Using a hardware button to activate the scan worked fine and SocketScan did send the text to Flash, but Flash was unable to recognise it. All that was recognised were keycodes 0 and 115. The keystrokes did, however, turn into actual text when sent to a Flash TextField. So I had to rewrite the keylogging class to check an invisible text field for a tag ID.

Socket CF RFID reader and Flash Players

Earlier I mentioned the CompactFlash RFID Reader Card by Socket Communications. After the Mobile Bristol Toolkit this device was my second choice. It ships with a demo application to read and write tags, and with SocketScan software (so-called wedge software) that sends virtual keystrokes to any active application on the Pocket PC.

The Mobile Bristol Toolkit was my first choice because of the nice integration between the reader and the application. Sending virtual keystrokes is not such a tidy solution, but it does the trick.

Before rushing out to order the Socket reader there is another bump to take. The SocketScan software does not automatically recognise RFID tags in range. The user has to assign a hardware button to a small application that does the actual scan; this can be any of the PDA’s hardware buttons. Another option is the Socket Trigger software, which displays a small always-on-top window that can be tapped to activate the scanner. The trigger software is not such an aesthetic solution, but assigning a hardware button will do.

Most Flash players, however, disable the hardware buttons completely or allow the user to remap them to keyboard keys. In fact, all of the third-party applications mentioned in the PocketPCMag article, as well as the Mobile Bristol Toolkit, disable the normal Windows functionality of the hardware buttons. A big problem, because this renders the SocketScan software, and thus the Socket RFID reader, useless!

Fortunately the standalone Flash Player by Adobe itself acts like a normal Pocket PC application and does not disable the hardware buttons. It is only Flash Player 6, but for my purposes this is just fine. The recently released mdm ZINC V2 Pocket PC player can create standalone Flash projectors and uses the latest Flash Player 7 for Pocket PC. This application also doesn’t harm the normal hardware button assignments, and it comes with a lot of extra FSCommands to extend functionality. But because creating programs for it with the Windows application is quite a workaround, and the Adobe Flash Player will just load ordinary .swf files, I did not go for ZINC. Its price tag is also a con.

The SocketScan wedge software allows you to put a prefix and a suffix around the tag’s data when sending it as virtual keystrokes. I chose to wrap it in ‘(’ and ‘)’ characters, for example: (4313750134). With a Flash class listening for keypresses, the start and end characters can be recognised and the actual tag ID can be used as a parameter in other methods.
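A minimal sketch of such a class, in TypeScript rather than ActionScript: it consumes the virtual keystrokes one character at a time and emits the tag ID found between the ‘(’ and ‘)’ wrappers. The class and method names are illustrative:

```typescript
// Accumulate characters between the "(" and ")" markers that the
// SocketScan wedge software wraps around each tag ID.
class TagStreamParser {
  private buffer: string | null = null; // null = not inside a tag

  constructor(private onTag: (tagId: string) => void) {}

  feed(char: string): void {
    if (char === "(") {
      this.buffer = "";                       // start marker: begin collecting
    } else if (char === ")") {
      if (this.buffer !== null) this.onTag(this.buffer);
      this.buffer = null;                     // end marker: emit and reset
    } else if (this.buffer !== null) {
      this.buffer += char;                    // inside a tag: accumulate
    }
  }
}

const ids: string[] = [];
const parser = new TagStreamParser((id) => ids.push(id));
"(4313750134)".split("").forEach((ch) => parser.feed(ch));
console.log(ids[0]); // "4313750134"
```

Keystrokes arriving outside a ( ) pair are simply ignored, which also discards any stray keycodes the wedge software sends between scans.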