Some months ago I wrote about Made – a tool under development in my team. Today we released the tool (v 0.5), but with a new name: ImaNote. It is not primarily a “learning tool” but rather “social software”, and there are many ways to use it in learning, too. I am sure you’ll come up with many other ideas. Let us know.
Here are the basic facts about ImaNote:
ImaNote – Image and Map Annotation Notebook – is a web-based multi-user tool that allows you and your friends to display high-resolution images (e.g. maps) online and add annotations and links to them.
You simply mark an area on an image and write an annotation related to that point. You can also add a link to the annotation. You may use RSS to keep track of the annotations added to the image, or make links on your own blog/web site/email that point to the annotations on the image. The permalinks lead right to the points in the image.
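Since each annotation shows up in the feed with its own permalink, a reader could follow new annotations with a few lines of Python. A minimal sketch, assuming a standard RSS 2.0 feed – the feed contents, URLs, and fragment-style permalinks below are made up for illustration, not ImaNote’s actual feed format:

```python
# Sketch: listing annotations from an RSS feed of image annotations.
# The sample feed below is hypothetical; a real client would fetch the
# feed URL published by the ImaNote instance instead.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """\
<rss version="2.0">
  <channel>
    <title>Annotations for campus-map.jpg</title>
    <item>
      <title>Main library entrance</title>
      <link>http://example.org/imanote/campus-map#note-12</link>
    </item>
    <item>
      <title>Bike parking</title>
      <link>http://example.org/imanote/campus-map#note-13</link>
    </item>
  </channel>
</rss>
"""

def list_annotations(rss_text):
    """Return (title, permalink) pairs for each annotation item in the feed."""
    root = ET.fromstring(rss_text)
    return [
        (item.findtext("title", ""), item.findtext("link", ""))
        for item in root.iter("item")
    ]

for title, link in list_annotations(SAMPLE_RSS):
    print(title, "->", link)
```

The same loop could run periodically (or be pointed at a feed reader) to keep track of new annotations as they appear.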
ImaNote is Open Source and Free Software released under the GNU General Public Licence (GPL). ImaNote is a Zope product, written in Python. Zope and ImaNote run on almost all operating systems (GNU/Linux, Mac OS X, *BSD, etc.) and Microsoft Windows.
Download it, have a look at the screenshots, or read more on the ImaNote project’s website.
4 replies on “ImaNote 0.5 released”
Great stuff! Basically this is what Flickr allows, but now it’s more widely deployable. I find several learning scaffold functions for it:
- More coherent discussion about image objects when annotations are given a place in the image.
- Co-discovery of the features of an image at many zoom levels (in the heritage map): reading the image is different at various zoom levels, and so are the annotations (level 1: overall, context, meaning; level 2 (closer): parts of the image, certain objects, inter-relations, etc.; level 3 (really close): reproduction technique, age, material, etc.).
- Linking together physical locations and virtual representations (place mapping). An example of this could be a growing map of a “biology journey into the forest”: a crude map of an exploratory path into a forest is drawn by students, then incrementally improved, annotated with things found in nature, and inter-linked to discussion about these findings (project: biology walk in the forest).
Possibilities for the next version:
- Crude marking/drawing on top of the image (freeform pen), with a hide/show toggle so the drawings don’t detract from enjoying the image.
- Adding a sound file to a certain spot (speech annotation or a related sound link).
- Would crude MNG/GIF animation be possible? An animation would allow time-sequenced event portrayals and their time-based annotations. That is, an MNG image could show five stages of a butterfly’s development as a slow animation with play/stop/rewind/forward controls. One could then annotate each phase of development (a separate frame). Feeds could also be viewed sequentially in time (1st-frame annotations, 2nd frame, etc.).
- One annotation shared between two positions in two images, with side-by-side comparison of the images. You could annotate two images with the same text so people could compare them. Even without a shared annotation, side-by-side images would help in image comparison, analysis, and discussion tasks (and their related annotations). Examples: art-related annotation of two related paintings, or before-and-after comparison of annotated sketches. Risk: it becomes a discussion tool.
- Many ways to view annotations: by date added, by place on the map (i.e. closest to a spot I point to on the image), by poster, by keywords (metadata), by …
- Visible identification of annotations on the image (numbers, names, tags, something?).
- Could this be combined with the visual jamming tool in FLE3? It would allow not only the development of visual versions but also their more accurate annotation.

This is a great tool. I hope you have time to develop it further; I’m sure you have many more ideas yourself! Of course, it’d help to have a task that this tool supports, so deciding which features to add would be easier for you.
I agree… The more I think about this, the more I would like an integration of this with jamming: annotate anything (audio, text, video, images, animation, any blob type). Of course, it’s a HUGE challenge and has to be tackled medium by medium. Also, I’d be very careful not to repeat something that perhaps isn’t worth repeating; for instance, DOC/PDF files have a built-in annotation tool. I can tell you for sure that many distributed teams working with media objects would kill for a really nice system like this, which they could use to communicate ideas, explore things, make memorisation notes, etc. But I digress… Just stay focused on what you want and what you think is worthwhile for learning.
Hi Smau and Antti, thank you for your comments. I should go and see HIIT’s text annotation tool. Still, I think we really need to keep this tool simple. It is, and will remain, an *image* and *map* annotation tool only.