The imagination of students does not have to be confined to the brain, but can be brought into the real world. The Imagine Cup, a worldwide competition for students aged 16+, shows what they can achieve in the field of technology when they put their heads together.
In the 2010 edition, the team from New Zealand – four young engineering students from the University of Auckland – placed third with their idea, OneBeep. These four – Vinny Jeet, Steve Ward, Kayo Lakadia, and Chanyeol Yoo – came up with the idea of using radio waves to transmit any file – whether text, image, video or software – to remote locations with no internet access. The aim is to update the laptops deployed through OLPC with educational material.
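OneBeep's actual encoding hasn't been published in detail, but the core constraint is easy to picture: a broadcast radio link is one-way, so the receiver can never ask for a retransmission and every packet must describe itself. Here is a minimal sketch of that idea in Python; the packet layout, the 4-byte checksum, and the chunk size are all my own assumptions, not OneBeep's protocol:

```python
import hashlib

def make_packets(data: bytes, chunk_size: int = 64) -> list[bytes]:
    """Split a file into self-describing packets for one-way broadcast.

    Each packet carries a sequence number, the total packet count, and a
    short checksum, so a receiver can reassemble and verify the file
    without ever replying -- there is no back-channel on a radio link.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    total = len(chunks)
    packets = []
    for seq, chunk in enumerate(chunks):
        digest = hashlib.md5(chunk).digest()[:4]  # truncated per-chunk checksum
        header = seq.to_bytes(4, "big") + total.to_bytes(4, "big") + digest
        packets.append(header + chunk)
    return packets

def reassemble(packets: list[bytes]) -> bytes:
    """Rebuild the file, dropping any packet whose checksum does not match."""
    received = {}
    for p in packets:
        seq = int.from_bytes(p[0:4], "big")
        digest, chunk = p[8:12], p[12:]
        if hashlib.md5(chunk).digest()[:4] == digest:
            received[seq] = chunk
    return b"".join(received[i] for i in sorted(received))
```

Because each packet names its own position, the receiver can reassemble the file even if packets arrive out of order, and corrupted packets are simply dropped rather than negotiated.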
Wow. We saw just a short video at the Digital Technologies Symposium, but that alone was a great demonstration of what may be possible in the near future. Now that the proof of concept has been successfully achieved, they are looking into working with the Solomon Islands (it’s close to home) to deploy OneBeep for the OLPC laptops there before it goes out into the wider world.
The guys told me that at least part of their software will be made open source, but they haven’t really looked into it yet. Let’s hope they do, so that others can contribute to this fantastic idea and take it further together with them.
Having installed OmmWriter, I must say that I like this slick software because it simply lets you type text. No frills, no bells and whistles. Just text.
As keystroke sounds and background music are not my thing, I can turn them off. I can also change the frosty winter landscape to plain white to see even less on the screen.
OmmWriter forces me to separate content from layout because there are no layout options. Usually, I play with headings, bullet points and pictures. It will be interesting to see whether I get jumpy from not being able to do all that, or whether there are also instances when I don’t need any layout at all.
You can actually also use OmmWriter to create audio text art by using the music and/or the pitches of the keystrokes to convey meaning.
Since last Thursday I have picked up snippets of buzz around Google Wave, and I finished watching the demo video today (I had kept it for my workout to make it pass more quickly 😉 ). I am simply speechless and can’t wait for it to be publicly available.
Wave is a new communication tool that is email, instant messenger, microblog, blog, collaborative writing tool, and more, all in one. It redefines the way online communication works because you no longer have to think about whether to write an email, send a Twitter message or start a chat. You just do it all in Wave.
The Wave team from Australia demonstrated so many features and extensions of Wave that it makes your head spin (in no particular order):
embed waves into a blog
drag and drop photos and links
instant translator (I think that one got the biggest applause)
add people to an ongoing conversation
linking other services, e.g. Twitter or a bug tracker, with the ability to update either from within Wave or from those other services
federation: communication between different Wave installations
playback of the development of a Wave conversation
I think the only things missing are audio and video chat. I could imagine audio comments showing up like text comments, or an audio conversation being recorded while working on a document.
Now, how can some of these features be used for learning and for research? The one point that jumps to mind instantly is the possibility of collaborative writing (including using visual media). You see instantly what the others type and do not have to wait until changes are committed, similar to EtherPad. You can add comments to a document that can be hidden or displayed, which makes them easier to spot and deal with than in Google Docs, where you write them inline with the text. Parts of a current wave can be opened as a new wave to branch conversations.
When you want to research the flow of communication, for example how people interact when writing collaboratively on a document, all you have to do is hit “playback” and the wave unfolds in front of your eyes. I already liked the playback feature when I discovered it in the concept-mapping software CmapTools. Even though I have not used it beyond testing yet, it has a ton of potential if you are involved in that kind of research, because you do not need an additional program to record what has been done; all actions are recorded automatically.
Another great thing about Wave is that it can run on any server; it does not have to sit on a Google server. The GUI is also customizable. This is great news because Google is often accused of listening in on conversations and using the data to its own benefit. When Wave runs on a server other than Google’s, no information is exchanged with Google. However, Wave communication can take place across different servers, so Wave users do not necessarily have to create accounts on every server; the servers then form a federation. According to the developers, only content that is meant for users across the systems is sent to the federated servers. If a private comment is exchanged between two users on one server, the other server does not see that comment at all. That makes Wave attractive to companies, institutions etc. that want to use it but do not want to put their communication and documents on a Google server for confidentiality reasons.
The demo made it all look so simple, but the Wave team spent two years getting this far, and they pooled resources from other Google programming teams. Google opened the code and invited programmers to get busy on extensions and gadgets for Google Wave to make it even more powerful before the official launch. Let’s see when the Wave will hit our keyboards.