Tag results

Wiki[mp]edia data sources & the MediaWiki API

A brief presentation I gave for Melhack last week:

(Embedded slides: Wiki[mp]edia data sources & the MediaWiki API, on Slideshare.)

I wrote a bit on my techiturn blog about what I worked on in my 24 hour hack.

There is a huge amount of rich data in Wikipedia and other MediaWiki collections, naturally, but as there is no API evangelist you have to do a bit of digging to figure this out. Regular readers may recall that I am quite a fan of the API and what it means for reusers.
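To give a flavour of how low the barrier is, here is a minimal sketch (Python standard library only; the category name is just an arbitrary example) of asking the English Wikipedia's api.php for the members of a category:

    import json
    import urllib.parse
    import urllib.request

    # Ask the English Wikipedia API for the first few members of a category.
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": "Category:Free software",   # arbitrary example category
        "cmlimit": "10",
        "format": "json",
    }
    url = "https://en.wikipedia.org/w/api.php?" + urllib.parse.urlencode(params)

    # Wikimedia asks API clients to identify themselves with a User-Agent header.
    req = urllib.request.Request(url, headers={"User-Agent": "api-example-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    for member in data["query"]["categorymembers"]:
        print(member["title"])

The same api.php endpoint is there on any reasonably recent MediaWiki install, so the identical query works against Commons or any other MediaWiki collection.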

13 November, 2009

Comment [1]

WikiDashboard live in the wiki wild

I have previously written about WikiDashboard working on the live English Wikipedia. WikiDashboard is software that puts a small overlay — a dashboard — over the top of each wiki page (on a MediaWiki where it is installed and enabled, or where you choose to use it). It gives you visuals on who has edited that page over time, and graphics that let you see when the page has been heavily edited. It is an extremely useful tool for making a quick critical analysis of the history of Wikipedia articles, for example.

I received an email from John of the Bering Strait School District (BSSD) OpenContent Initiative, who told me they are using WikiDashboard live on their site Currwikulum:

The Palo Alto folks have been great, and as from what we have been told, we are the first implementation “in the wild” of their WikiDashboard tool.

We will be collecting data on how teachers, students and administrators use this in a collaborative curriculum development process.

We are still tweaking it…it will default to “invisible” unless clicked after we are done debugging.

I am a bit sceptical about the benefit of having WikiDashboard, as it stands, “on” all the time; it is rather bulky. Making it less intrusive sounds like a great step towards making it a real “every day” tool, and that can only be good for increasing the general public’s ability to scrutinise and understand the wiki way. Getting data back about how it is used will also be awesome!



I recently returned from the chapters meeting in Berlin. In no small feat of people engineering, participants from every chapter made it to the meeting. Now there’s tons I’d love to follow up on, including blog posts. Watch this space…

13 April, 2009

Comment [1]

Edits in unexpected places

I went to de.wp because I wanted to contact a user from Commons who had pointed to his German Wikipedia user page. While there, I found that thanks to unified login, I was already logged in there. Sweet! I went to the preferences to change the language to English so I could at least read the menus.

While on the preferences page, I was rather surprised to notice that it said I had 6 edits at the German Wikipedia. To my knowledge I have never edited the German Wikipedia since unified login was turned on. I may have added images to stuff previously as an anon user, but that was definitely pre-unified-login. So what were these edits?

My mysterious six German edits

Hm… they appear to be an utterly random selection of edits that I made years ago… and in English! Trust me, I have never done any German “rewording” or copyediting in my life. :)

Looking at the history of the German article on Mulesing is not very enlightening. A handful of recent German-looking edits preceded by a ton of very English-looking edits. The log reveals what actually happened:

It was transwikied. My name is there because the history is preserved. What is weird, though, is that edits I made in English point to, and are recorded against, my username on the German Wikipedia, although I never made those edits there. I don’t know if this is unified login being ultra super smart, or a strange side-effect.

Anyone have any insight into this? Have you ever found any “unusual” edits like this in an unexpected wiki? :)

25 March, 2009

Comment [8]

Free MediaWiki hosting offered by Dreamhost Apps

So I noticed a couple of weeks ago that my host, Dreamhost, had started offering what they call Dreamhost Apps. They offer certain hosted web-based software for free. One of the apps they will host is MediaWiki. AFAIK they are the first people to host a MediaWiki install for free, even at your own subdomain.

So I decided to see what it looks like:

Signing up is a cinch…

Hm, barely five minutes later and I have a wiki.

This is the “admin side” web panel:

So if you have tinkered with MediaWiki a bit you will know that being able to edit the LocalSettings.php file is key to customising. Because this is free you don’t get any SSH/FTP etc access, but they do let you change the most basic settings via their web panel:

So what does this mean?

They make it easy to upgrade to a “proper” (paid) Dreamhost account, which is nice. But even as it is, they offer easy backups:

To me, this is vital. It is dangerous to invest your time and effort into anything that won’t let you pick up your data and go elsewhere if things turn pear-shaped. Having that database is your freedom to move to a provider who will treat you right. It also means, in my view, that Dreamhost are happy to compete on the quality of their products and services instead of using entrapment to keep customers. This is an extremely good thing.

Dreamhost Apps is still very new, so it might be a good idea to keep an eye on the support forum.

Dreamhost don’t get everything right but I have always appreciated their value for money and their frank and honest attitude to their customers. I have no complaints. So if you are ultra cheap-skate and/or still scared of a command-line, but you want a wiki to play around with and don’t like the existing options, Dreamhost Apps might be the right thing for you.

PS. They also offer Wordpress, Drupal, ZenPhoto, phpBB and Google Apps, but none of that is as cool as MediaWiki. ;)

03 February, 2009

Comment

Wikiversity, interested? - How to make a wiki editable slideshow

A friend at LCA came up to me with an interesting problem. She has some sets of slides that she uses to teach beginners’ programming workshops in Python. She wants to share the slides with other people so they can be more widely used. What should she do?

Firstly, she could upload them to Wikimedia Commons just as a file. Oh, but wait! SXW and ODP OpenOffice formats were removed from the accepted file types list because of security concerns! Which means the only file format you can upload slides in is PDF. And PDF is like a paper book. It is not editable at all. If you want to edit it you pretty much need to start from scratch.

The second problem is, even if OpenOffice formats were accepted, there is no good versioning for uploaded files. For wiki pages, blobs of marked-up text, we have an awesome interface via the History tab. We can see who made each revision, what exactly they did, and we can also easily see the diff between any two edits of any given wiki page. But for uploaded files, all we know is that multiple versions of a file may have been uploaded. If we want to see anything like a “diff” we have to download both versions and compare them manually. This is tedious and severely hinders collaboration.

So, hm… seems like wikis are out. Maybe there are some other sites that allow collaborative editing of slides? Slideshare: nope, no editing, just viewing and commenting. Well, there is Google Docs, but you need to invite the people you want to let edit in advance (and I’m pretty sure they need a Google account). That is rather contrary to the aim of making the slides available for anyone to use.

I had a bit of a look around the web and I didn’t see any other service offering collaborative editing of slide sets.

Hm… but now, there is S5:

S5 is a slide show format based entirely on XHTML, CSS, and JavaScript. With one file, you can run a complete slide show and have a printer-friendly version as well. The markup used for the slides is very simple, highly semantic, and completely accessible. Anyone with even a smidgen of familiarity with HTML or XHTML can look at the markup and figure out how to adapt it to their particular needs. Anyone familiar with CSS can create their own slide show theme. It’s totally simple, and it’s totally standards-driven.

Sounds pretty awesome, eh? You can check out a demo as well.

So the S5 format is not much more than a text file with some markup (and maybe some images). This is good for slideshows with minimal bleeding-edge techniques, which I guess is about 99% of them.

So I started workshopping with my friend about how this might be adapted to MediaWiki. Maybe a S5 slideshow could just be a regular wiki page, so we get all the diff/history page benefits of MediaWiki, and just insert a link that says “view/save as slideshow”… have some parser-thingy that perhaps interprets a subset of MediaWiki syntax and converts it to the S5 format…hm…. sounding good??
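To make the “parser-thingy” idea a little more concrete, here is a toy sketch in Python. It is not working extension code, and the presentation/slide div structure is just my recollection of how S5 lays slides out; it simply treats == Heading == lines as slide boundaries and * lines as bullets:

    import html

    def wikitext_to_slides(wikitext):
        """Toy converter: '== Title ==' starts a new slide, '* item' becomes a bullet."""
        slides = []
        title = None
        items = []

        def flush():
            # Emit the slide collected so far (if any) as an S5-style div.
            if title is None:
                return
            bullets = "\n".join("    <li>%s</li>" % html.escape(i) for i in items)
            slides.append('<div class="slide">\n  <h1>%s</h1>\n  <ul>\n%s\n  </ul>\n</div>'
                          % (html.escape(title), bullets))

        for line in wikitext.splitlines():
            line = line.strip()
            if line.startswith("==") and line.endswith("=="):
                flush()
                title = line.strip("= ")
                items = []
            elif line.startswith("*"):
                items.append(line.lstrip("* ").strip())
        flush()
        return '<div class="presentation">\n%s\n</div>' % "\n".join(slides)

    print(wikitext_to_slides("== Why Python? ==\n* readable\n* batteries included"))

A real extension would also have to emit the S5 scaffolding (its CSS and JavaScript) and cope with much more of the wiki syntax, but the core translation really is that mechanical.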

I then did a bit more googling and found out I wasn’t the first to have this brainwave. Dokuwiki has a plugin to do just what I described. So does PmWiki. I do recommend checking out the Dokuwiki example — it is very impressive. (Click the “pull down poster” icon in the top right to start the slideshow view.) That looks so awesome to me that I think I will try to do all my slides from now on in a Dokuwiki using that plugin. The only thing that is missing is a “convert to PDF” button. Not sure how difficult that one would be.

So back to MediaWiki. Other wikis have done it, which means probably MediaWiki could too. Actually someone did start work on such an extension, although it has not been touched for a couple of years.

My friend is a competent coder and is interested in this small challenge of writing a MediaWiki extension for wiki-editable slideshows, good enough to be accepted on Wikimedia projects (that is the challenge bit :)).

So, Wikimedians, should she bother? Wikiversity folk – would you find it useful? Would it be useful for others too? Wikiversity is the most obvious use that springs to my mind, but perhaps you can think of other great uses that I have missed?

29 January, 2009

Comment [6]

Media handling on Wikimedia: Preview the future! (part 2)

Please read the start of part 1 to find out how you can try out these features too!

OK so part 2 is about in-browser video transcoding. So…what does all that jargon mean and why should you care?

Just as image files come in different formats (BMP vs JPG vs TIFF vs PNG vs SVG vs …), so too do videos. In fact it’s rather more complicated, because there are these things called codecs. As far as I understand it, different codecs are different methods of compressing audio/video – codes that say how to pack the raw (and huge) audio/video data in a particular way to save space. But because each codec has its own particular way, you need to unpack it in that same particular way, otherwise your computer won’t be able to understand it, and you won’t be able to play the file. MP3 is an audio codec; MPEG-2 and MPEG-4 define common video codecs.

Unfortunately it’s not even as simple as equating file format == codec, because some file formats are “container formats”. AVI and Ogg are container formats, which means that inside, the audio/video can be encoded in a variety of different codecs. So basically it’s more pain.

Now some codecs seem free, but some codecs really are Free, and hopefully this coincides with being patent-free so no one will sue you just for using them. The Wikimedia Foundation, bless their cotton socks, recognise through their Values statement the importance of free formats and codecs:

An essential part of the Wikimedia Foundation’s mission is encouraging the development of free-content educational resources that may be created, used, and reused by the entire human community. We believe that this mission requires thriving open formats and open standards on the web to allow the creation of content not subject to restrictions on creation, use, and reuse.

Consequently, Wikimedia Commons has a policy on which file types may be used:

Patent-encumbered file formats like MP3, AAC, WMA, MPEG, AVI and the like are not accepted at Wikimedia Commons. Our mission requires content to be freely redistributable to all. Patent-encumbered formats fail to meet this standard.

So what is allowed? For audio/video, it comes down to Ogg container format with Ogg Speex/FLAC/Vorbis (audio) and Ogg Theora (video) inside. Yay Ogg! There’s only one tiny problem… no Windows software plays anything Ogg by default, no recording devices produce Ogg files by default, and this means users have to convert their files before uploading. Blah! What a hassle! Why can’t using free software and free formats be easy?!? (I’m not being facetious… I half know what I’m doing and it’s still a pain.)

Well, soon things are going to get a whole lot better: Firefox 3.1, due next month in February, will support Ogg Theora by default. That means you’ll be able to play Ogg video in your browser without any extra software.

But even better: someone has written an extension called Firefogg which will transcode a file for you when you upload it. So, if you have Firefox 3.1+, and you have the Firefogg extension, and you come to a site that only accepts Ogg and you have a something-else file, now you just need to upload it as normal and Firefogg will convert the file for you before uploading it to the site.

I don’t know about you but I think that’s some serious genius. And Michael Dale has an implementation of it for Wikimedia Commons! Here’s what it looks like:

So we make it to the Commons upload form, and notice a new option saying “Enable video converter”. So if we tick that…

… then we can choose some random video format (in this case, AVI). And instead of just uploading, it will do transcoding (converting the format) and then uploading.

So, transcoding:

And, uploading! (Nice to get progress meters “for free” with this extension)

And… voilà! Here’s my uploaded file, now in Ogg format, and playing using just the browser, because that’s how awesome Firefox 3.1 is going to be.

As it transcodes, it also writes a copy of the final Ogg file next to your original file – handy to have both around.

One of my favourite things about this is that it removes the need for me to figure out all the configuration options in transcoding files. There’s so many and figuring out the optimum ones can be very tedious. With Firefogg, the site that is accepting the Ogg file tells your browser what settings it wants you to use — and you don’t have to see or deal with any of it! Total win. :)

So, to recap, how you can play with this awesomeness:

  1. Add this to your user monobook.js subpage: importScriptURI('http://mvbox2.cse.ucsc.edu/w/extensions/MetavidWiki/skins/add_media_wizard.js'); (like this)
  2. Install the Firefox 3.1 beta (…or wait til February and it won’t be beta anymore)
  3. Install the Firefogg extension for Firefox
  4. Go and upload videos with the greatest of ease!

Again, this is something I hope will become available as a Gadget in people’s user preferences, so if you like to experiment a bit, please do so and report back, so that it can be stable enough to be a Gadget for everyone by the time Firefox 3.1 is released.

Long live the Ogg! :D

28 January, 2009

Comment [6]

Media handling on Wikimedia: Preview the future! (part 1)

So at LCA I was able to catch up with Michael Dale, who is doing some very interesting video work for the Wikimedia Foundation. Michael was talking about his development work, as well as taking part in the Foundations of Open Media Workshop, which was co-located with LCA.

I had noticed some recent posts by Michael to the mailing lists inviting people to trial some new features he was working on, so after his talk I cornered him to get a personal demo and make sure I didn’t miss anything. :)

There are two separate features that are bundled together in the demo: one is an in-browser video transcoder, and the other is a cool add-media wizard. The add-media wizard works on Firefox right now, while the video transcoder needs just a couple of extra steps, so I will cover the add-media wizard first.

First of all, go to your user monobook.js subpage on whichever wiki you want to try this on. Mine, on Wikimedia Commons, is at http://commons.wikimedia.org/wiki/User:Pfctdayelise/monobook.js. Edit it and add this line:

importScriptURI('http://mvbox2.cse.ucsc.edu/w/extensions/MetavidWiki/skins/add_media_wizard.js');

(note the trailing semi-colon)

Do a shift-refresh, and then open a page for editing. If you’ve cleared your cache you will see a new little icon above the edit box, at the far left:

To my mind it looks a bit like the Ubuntu icon, but I think it’s actually a film reel with a plus symbol. :)

So if you click it, a box will come up with some thumbnails of images, based on a search for the name of the page you are editing.

So what we get is some thumbnails from an image search, and we can see different tabs for different media archive sources. At the moment there is just Wikimedia Commons and Metavid. Obviously we could add other license-suitable archives as we become aware of them (eg Flickr’s CC-BY and CC-BY-SA images). Currently it mixes together all media types.

So, let’s choose one…

Now I can write a caption, and either preview the insert, insert it directly, or cancel the insert. This is what the preview looks like:

Note the “preview” notice at the top of the drop box, and at the bottom the options “Do insert” or “Do more modification”. Hmm, so what was that crop option?

Clicking on the “Crop image” option gives a “box drawing” cursor where we can draw a box over the image to choose what crop we want. Everything not in the crop is shadowed.

So how does this crop actually work? At the moment, it relies on a template called Preview Crop. So if you’re testing this feature out, check that your wiki has that template. In the future, hopefully crop functionality could be added directly to the MediaWiki image syntax, so it might be equivalent to something like [[Image:Foo.jpg|crop,10,10,120,150|thumb]]. For now, you need the template.

And… it works! :)

So that is the add-media wizard. If it is very well designed, it may remove the need to search Wikimedia Commons separately from writing your Wiki* article. (I mean, it wouldn’t be hard to improve on the default search.) It would be neat to integrate some of the features from Mayflower, including hover-over indication of license, description and metadata, and advanced search options such as searching by file type.

A few more things:

That’s about it for now; I will cover the in-browser video transcoder in a second post. If you think it looks interesting I encourage you to try it out, and report any problems or suggestions you have. Or if you have no problems: that’s also good to know! If a few more people try it out in its current form, I think it would be a great thing to enable as a Gadget then people can easily choose to use it by turning it on in their preferences. Ultimately it may even be best as a default thing turned on for everyone, by being integrated into MediaWiki core.

28 January, 2009

Comment

'Hacking MediaWiki (For Users)' video

(Hacking MediaWiki [For Users] on blip.tv)

This is a video of a talk I gave at the November Linux Users of Victoria meeting called “Hacking MediaWiki (For Users)”, talking about ways to extend and modify MediaWiki on the “wiki side”, without needing admin access to LocalSettings.php (and everything else). Preparing the talk inspired me to write about why I love MediaWiki.

It covers subpages, links, templates, categories, namespaces, special pages, modifying the interface, skins (CSS & JS), magic words, the Gadgets extension and the “uselang” hack. It’s basically for power-users, I would say.

Many thanks to Ben for the excellent quality recording. The audio in particular is very good. He also cut it down to size and uploaded it, which are the kinds of annoying jobs that nobody particularly enjoys doing, so thank you.

The slides can be downloaded from Wikimedia Commons (direct link). When Slideshare wakes up I will put them up there too.

25 November, 2008

Comment [7]

InstantCommons lives -- and why it matters

OK, my NaBloPoMo definitely failed. Nonetheless.

Something that was introduced to MediaWiki with little fanfare was ForeignAPIRepo support. This allows a MediaWiki install to specify another wiki to pull images and other media files from. Like, say… Wikimedia Commons!

This is an idea that has a long history under the name InstantCommons. One of the main reasons Wikimedia Commons is cool is that it can be used as a media resource as if the files were uploaded locally by all the Wikimedia wikis. So “InstantCommons” is about extending this aspect of Wikimedia Commons behaviour to any MediaWiki wiki.

I enabled it for the Wikimedia Australia wiki. All I needed to do was paste a few ForeignAPIRepo settings into the LocalSettings.php file.

Now, on the front page, there is a colourful map. Clicking through to the image page shows the full image information from Wikimedia Commons, as well as a subtle hint as to the image’s origin:

I wrote to wikitech-l to ask what the plans are for future development of this functionality. The response was pretty quiet, although Chad is planning to do some dev work on it, which is awesome.

There are two big obvious gaps in the functionality at the moment. The first is that third parties using InstantCommons don’t have any way of knowing what happens to the images they are using. While wiki resources such as Wikipedia and Wikimedia Commons may look stable from the outside, editors know that they are anything but. Especially with images, as there is no straightforward way to move/rename images. This means images get deleted and re-uploaded under their preferred name. Copyright violations are also rife among uploaded files, much more so than contributed text I would guess. Not to mention that the community understanding of international copyright is constantly being negotiated and argued. It’s very tricky; not straight-forward at all.

So this is one thing. It is nice to know when images that you use are deleted so that you can remove the ugly redlinks from your pages. But you on your third party wiki have no way to find this out locally: deletions at the other location won’t appear in your local logs. There are a few choices: simply ignore the problem and live with the occasional redlink; watch some kind of feed or log of what happens to those files at the remote repository; or run a bot that cleans up links to deleted files automatically.

The first option seems good but depending on what your wiki is and how you run it, you may well want to keep up to date and, for example, remove known copyright violations.

The second is a good option but may not be very popular. A similar concept was created for Wikimedia wikis, known as CommonsTicker. However, my observation is that using CommonsTicker has not been a very popular choice. Wikimedia wikis might get pissed when an image gets deleted that had been on their front page, but that doesn’t mean they’re prepared to trawl through the CommonsTicker log on a regular basis.

The third has been most popular in the Wikimedia universe. CommonsDelinker is a bot that runs over all 800+ wikis of Wikimedia and removes redlinks to images after they have been deleted at Wikimedia Commons (if the local community has not, in the meantime, removed the link themselves). CommonsDelinker also comes with a manual (the code is under the MIT license) and with some minor tweaking might be usable by third parties. It makes more sense for third parties to run the bot themselves, rather than Wikimedia Commons, due to configuration differences.

The other gap in the functionality is that there is no way for the central repository to know which of their files are being used and where. For Wikimedia Commons, we generally like to be able to find out what impact our actions will have — especially if, say, WikiTravel was using our content. Actually this problem is by no means well-solved even for the Wikimedia universe — we rely on a toolserver tool called CheckUsage. If the toolserver goes down, Wikimedia Commons admins have no way of knowing what impact their deletions may have. This still needs a MediaWiki-native solution.

Why is InstantCommons important? The Wikimedia Mission is to disseminate freely-licensed educational content effectively and globally. For Wikimedia Commons, what could constitute more effective dissemination than letting every MediaWiki wiki use its content so easily and transparently? Not a single thing that I can think of.

23 November, 2008

Comment [2]

Why I love MediaWiki

Yesterday and today I was preparing my talk for a Linux Users of Victoria meeting. The talk was “Hacking MediaWiki (For Users)”, a kind of intermediate-to-advanced overview of MediaWiki’s wiki-side features and customisation-related functionality. As I was preparing it, I realised just how much I love MediaWiki.

Much of my thinking about MediaWiki is tinged with frustration, as I despair about getting my bugs fixed and features written. (It’s no joke to say that the only PHP I know is because of MediaWiki.) But that is only because I take for granted how great it is. It is because I love how good it already is that I want it to be even better. I think there are very few features in it that I have never used. I cut about two thirds out of my talk because I realised I had way too much to cover.

I love that it is designed from the ground up in a way that trusts and respects the users: the open-by-default-ness of it that makes applying access controls (especially read restrictions) after the fact, a doomed proposition. If you want that, you need a wiki engine with a less trusting philosophy.

My single favourite thing about it is the ability (for sysops at least) to edit the interface, via editing pages in the MediaWiki: namespace. It really steps back and lets the users take ownership of their wiki. It makes me wish practically all software I used had such a function. (Unfortunately the “discoverability” of this fact is still low. Your chances of figuring this out without anyone telling you would be near zero.)

Actually, at work I recently wrote some documentation for my coworker’s web-based software package. I wrote some HTML pages (actually I wrote them in a wiki and then just saved the result) and they were just included under a “documentation” tab. Not super-integrated into the software, although at least they were available from the same interface. It made me think how cool it would be if more web-based software had an integrated wiki that was seeded with initial documentation. Then groups could use that wiki to expand the documentation to reflect their own use of the software. Probably integrating a lightweight page-based wiki engine wouldn’t be such a difficult thing to do, would it?

The talk is over now. It was videoed, so I will wait a few days to see if that surfaces. I am considering breaking it into chunks and reworking it a bit, maybe as a series of slidecasts.

Thank you LUV for the opportunity to speak, and thank you MediaWiki for the inspiration!

05 November, 2008

Comment

A FLOSS Manual: How to contribute to Wikimedia Commons


“Oh Brianna,” you may ask, “What is that most interesting tome you’re reading?” Why, I’m so glad you did. It’s a Lulu-printed version of a short manual I wrote called How to contribute to Wikimedia Commons.

So, I spent the week before Wikimania in France. The purpose was to hang out at the FLOSS Manuals Inkscape documentation booksprint and also to hang out with my French friend (un-wiki related). As you may have guessed I am a big Inkscape fan. (Inkscape is the premier open source software for creating and editing vector graphics, i.e. SVGs.) But I definitely don’t know enough to really help out with writing documentation. I would be too busy reading it. So I decided to write some documentation to help people who might be familiar with Inkscape, i.e. accomplished or semi-accomplished illustrators and artists, but new to the complex and confusing world of Wikimedia. That is the aim of my manual, to be a self-contained introduction to contributing to Wikimedia Commons, without information overload.

So you can read the manual online at http://en.flossmanuals.net/WikimediaCommons/. You can also check out the editor’s view at http://en.flossmanuals.net/bin/view/WikimediaCommons/WebHome. FLOSS Manuals uses a highly customised version of Twiki — yes, a wiki. The most immediate difference between Twiki/FM and MediaWiki is that Twiki/FM uses a WYSIWYG interface that converts directly to HTML (which you can also edit directly if you really want) — no “wiki markup” intermediary. I thought it would bug me (Wordpress’ annoys me immensely), but I soon got used to it and didn’t find it slowing me down at all.

Twiki/FM is great for planned-ahead books with a small number of authors where the bulk of content is written over a known time period, but MediaWiki is definitely better suited to laissez-faire, largely unstructured book development by an unknown number of authors over an essentially unbounded time frame. That’s not to say that there aren’t a significant number of improvements that MediaWiki requires to really meet the needs of book authors and their enablers.

I also feel that Twiki/FM is a better choice if you want your manual to have an offline life, whereas MediaWiki is much better if you intend it to stay all linked-up on the web. The killer feature here for Twiki/FM is Objavi. It’s a wiki-to-pdf converter that uses CSS for styling and most importantly, actually looks great.

Basically double-handling is the killer. If you are doing speedy wiki-based authoring the last thing you want to do is have to edit a version specifically for print. It’s intolerable.

MediaWiki/Wikimedia is supposed to be getting its own whiz-bang PDF generator, via a partnership with Pediapress. So far it’s only enabled on a test wiki. The interface for creating a new “collection” (for Wikibooks, this will usually be equal to a book) is really awkward. But if admins get the hang of it and create nice PDFs for everyone else, that will be nice. OTOH that won’t work on Wikipedia at all, where people will most likely be creating their own custom grab-bags of articles. And unlike Objavi there is no way to specify print-specific style at all. Having said that, I just looked through a sample pdf (log in and click “Load collection” on the left, then follow the prompts) and it is quite impressive.

The issue that both Objavi and Pediapress seem to struggle with is that of images — their size and placement. Web-appropriate proportions just don’t suit normal portrait-orientation books. Someone should do a PhD on figuring out a good algorithm to convert them automatically. :)

Anyway! Back to my manual. It’s dual licensed under the GPL and GFDL, as GPL is the license ordinarily used by FLOSS Manuals, and I asked for GFDL licensing for obvious reasons. My hope is that chapters and similar groups will keep a copy to share with people who prefer book documentation.

I haven’t yet sorted out a pricing thing on Lulu, but I was hoping that it could be sold for cost + 2 euro (1 for FLOSS Manuals, 1 for the Wikimedia Foundation).

I also hope to see simple Wikipedia introductory manuals developed. English and Spanish ones in time for Wikimania would be nice!


By the way, this took five freaking weeks for Lulu to ship, so excuse me if I am overly excited. :)

01 September, 2008

Comment [10]

The prettiest MediaWiki you've ever seen


MediaWiki derives its structure from links, templates and categories. You don’t need to do very much to develop something quite powerful. This site called culturalcartography.net only uses links, for example. They skinned up their MediaWiki and used it to develop the bulk of the site’s content, then wrote their own SVG interface that calls their MediaWiki API and presents the link web in a dynamic and interesting way.
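As an aside, the link web for a page is exactly the sort of thing the standard API will hand over. A minimal Python sketch (run here against English Wikipedia with an arbitrary page title, since I am not assuming anything about culturalcartography.net’s own endpoint):

    import json
    import urllib.parse
    import urllib.request

    def wiki_links(api_url, title):
        """Return the pages a given wiki page links to, via the standard MediaWiki API."""
        params = {"action": "query", "titles": title, "prop": "links",
                  "pllimit": "500", "format": "json"}
        req = urllib.request.Request(api_url + "?" + urllib.parse.urlencode(params),
                                     headers={"User-Agent": "linkweb-sketch/0.1"})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        page = next(iter(data["query"]["pages"].values()))
        return [link["title"] for link in page.get("links", [])]

    # Example: every page this article links to; repeat per page and you have a link web.
    print(wiki_links("https://en.wikipedia.org/w/api.php", "MediaWiki"))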

I found this out from a paper called Building an SVG interface to MediaWiki (full paper) at SVG Open, which is currently on in Nuremberg.


Pretty skin for MediaWiki.


The edit box: see, it’s really MediaWiki underneath!


The custom-built SVG interface view that calls on the MediaWiki page of the same name, showing the link web between this page and others in the same wiki.

28 August, 2008

Comment [5]

Write API enabled on Wikimedia sites!

Brion announced that the MediaWiki’s ‘write’ API has been enabled for Wikimedia wikis. This means you can now edit Wikipedia and her friends without opening your browser. :)

Using Bryan’s quite excellent mwclient Python module, you can log in, view a page’s current content, make your change and then view it again, all in less than 10 lines. Really: see for yourself. (My password is not ‘password’ BTW. :)) In theory it could be one line less, since you shouldn’t need to log in. But for some reason mwclient gave me a LoginError when I tried to do it without logging in.
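For anyone who wants the idea without following the link, a sketch of that kind of script (the account and page name are placeholders, and the read/write method names have shifted a little between mwclient releases, so treat it as illustrative rather than exact):

    import mwclient

    # Connect and log in (the credentials here are obviously placeholders).
    site = mwclient.Site('commons.wikimedia.org')
    site.login('MyUsername', 'password')

    # Fetch a page, read its current wikitext, append something, save, and re-read.
    page = site.pages['Commons:Sandbox']   # older releases spell this site.Pages[...]
    text = page.text()                     # older releases used page.edit() to fetch the text
    page.save(text + '\n\nEdited over the write API!', summary='testing the write API')
    print(page.text())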

Check it!

There are probably also nice wrapper modules already written for your favourite language.

What kind of imaginative interfaces for editing Wikipedia can you imagine existing? Now we can build them! :)

26 August, 2008

Comment [2]

Google Reader users: a favour...

… Could you try subscribing to a page history feed in Google Reader and see whether, in the feed items, you get just the edit summary as the contents, or the full diff view?

e.g. RSS or Atom

What it should look like (thanks, Bloglines):

What it does look like:

Far less interesting, I’m sure you’ll agree.

BTW, if you didn’t know these feeds existed, they are linked under the “toolbox” section of the menu on each article’s history page.
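(For the record, the feed URLs themselves follow the usual MediaWiki pattern, something like http://en.wikipedia.org/w/index.php?title=Pagename&action=history&feed=atom, with feed=rss for the RSS flavour.)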

22 August, 2008

Comment [9]

APIs: Ask, and ye shall receive

Wow. Wikis just gave me another lesson in awesome. I love it.

While thinking about the problem of Zemanta attribution strings, I mused that we really needed to develop a “Commons API”. There is a MediaWiki API for Commons, but there are more project-specific pieces of information we would like to provide. The big three are

  1. Deletion markers (warning to machines: don’t use this file)
  2. License info for a given file
  3. Author/copyright holder attribution string for a given file

So I made a bit of a start at Commons:API, thinking we could use the wiki pages to write pseudocode algorithms for the different problems. Already I knew at least I, Bryan and Duesentrieb had run across these problems before, and definitely others too. Therefore it made sense to combine our individual algorithms and define a single, strongest-possible-permutation version and recommend others to use it. I imagined we could describe the algorithm in pseudocode and let people provide implementations in various programming languages. Versioning would kind of suck but hey, an imperfect solution is better than nothing.
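To make that concrete, the kind of client-side check I have in mind might look like this in Python (the set of “deletion marker” templates below is invented for illustration; agreeing on the real, canonical list is exactly the sort of thing Commons:API should settle):

    import json
    import urllib.parse
    import urllib.request

    API = "https://commons.wikimedia.org/w/api.php"

    # Illustrative only: which templates count as "don't reuse this file" markers
    # is precisely what a Commons API should define authoritatively.
    DELETION_MARKERS = {"Template:Delete", "Template:Copyvio", "Template:No source since"}

    def templates_on(title):
        """Return the set of templates transcluded on a Commons page, via the standard API."""
        params = {"action": "query", "titles": title, "prop": "templates",
                  "tllimit": "500", "format": "json"}
        req = urllib.request.Request(API + "?" + urllib.parse.urlencode(params),
                                     headers={"User-Agent": "commons-api-sketch/0.1"})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        page = next(iter(data["query"]["pages"].values()))
        return {t["title"] for t in page.get("templates", [])}

    def marked_for_deletion(file_title):
        return bool(templates_on(file_title) & DELETION_MARKERS)

    print(marked_for_deletion("File:Example.jpg"))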

However, a perfect solution is even better than both! I had barely raised the topic when Magnus actually implemented it (warning: seriously alpha).

First, Magnus is one of my wiki-heroes. You could not ask for a more responsive developer, so it is just delightful when he chimes in on a “what if” discussion. Cool new shiny things are never far away. (Surely one of the strangest things to ever grace the toolserver is still Flommons, the “Flickr-like Commons” interface. Cut away the cruft!) And he is a lovely chap to boot. He tirelessly tweaks and prods any number of “what about…” or “why not move this here?” queries.

My pythonfu is not strong enough that I could code something like this up as he does, in half an hour, but with some practice and effort I could probably manage it in time. I recognise the neat or nifty factor in creating stuff that was previously just a “what if”. Programming rocks.

Secondly, I love how responsive a wiki community can be. Sure, for every five ideas you might have, four will garner a lukewarm response at best, but every now and then one will strike a chord and get some momentum. “Build it and they will come”; wikis can also obey “name it and they will build it”. [Of course, I’m hardly the first person to suggest Commons needs an API.]

Thirdly, thinking about the other Wikimedia projects — and indeed a good many third-party MediaWiki installs — it is obvious that all the projects may like the chance to define their own API. If nothing else, to define the “deletion markers” and the featured content (rather like another of Magnus’ tools, Catfood – category image RSS feed).

So, what does that suggest… that suggests wiki users need a flexible method of defining new parts of the API. Special:DefineAPI? Probably not, too subject to change.

Extensions can define API modules. So perhaps we should develop Extension:WikimediaCommonsAPI? If every project wanted to do this it may get a bit messy, but most projects wouldn’t bother I imagine.

Again we run up against the need for Commons to have a more structured database, rather than just storing all information about an image in a big text blob.

At any rate, I hope we can set the current “pre-alpha” API up as a serious toolserver or svn project with multiple contributors. Wikimedia Commons is lucky to have attracted a relatively techy community of contributors, with a number of people MediaWiki- or toolserver- savvy. Let’s see how we go.

01 April, 2008

Comment [1]

Links for 2008-03-18

This is a preview of what the Commons upload form may look like one of these days… if I have anything to do with it :)

Things to note:

I love this form :) Try it yourself, if you’re logged in at Commons.

18 March, 2008

Comment [4]

Templatology, an essay

Templates are one of MediaWiki’s most versatile features. I was thinking about them recently because of a discussion with other editors about whether a particular template should even exist, and if so, what should its wording be. Templates are a now ubiquitous part of English Wikipedia articles and MediaWiki wikis everywhere, so it may be interesting to look at how they have evolved. (Warning: this is quite long.)

What is a template?

Templates are a feature that provides “boilerplate” text or style, whenever you want to have a standard look or text across more than one page. In MediaWiki, to put a template called “foo” (that is, you would find it in the wiki at [[template:foo]]) on any page, you would put {{foo}}. They can also take “parameters”, particular values that you can change each time the template is used: {{foo|parameter value 1|parameter value 2}}.

Various types of templates are referred to by other names, including infoboxes, navboxes, notices and warnings, which more reflect the purpose of those templates.

Another name used is “tag”. When a template is used on a page, it creates a link in the database between the page name and the template. This means one use of templates is to mark pages that you want to group together for some reason. These grouped pages can then be found listed at Special:Whatlinkshere/Template:Foo. If you only wanted to use a template for this grouping purpose, you could make the template so it actually had no visible content. However, categories usually make more sense for this purpose.
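Incidentally, the same grouping is also available programmatically: the API’s embeddedin list enumerates the pages a template is used on, which is handy for bots and reports. A minimal sketch in Python (the template name is an arbitrary example):

    import json
    import urllib.parse
    import urllib.request

    # List some pages that transclude a given template, via the standard MediaWiki API.
    params = {"action": "query", "list": "embeddedin",
              "eititle": "Template:Stub", "eilimit": "20", "format": "json"}
    url = "https://en.wikipedia.org/w/api.php?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"User-Agent": "template-usage-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    for page in data["query"]["embeddedin"]:
        print(page["title"])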

A history of templates

Templates as we know them today were first introduced in August 2004, in MediaWiki v1.3, along with categories and the MonoBook skin still used today. Before this they lived in the MediaWiki namespace with the “system messages”, or user interface messages. With this move they also gained the feature of “parameters”.

The first revision of the Help:Template page on meta was in June 2004 (I suppose by this stage they already had the practice of running the latest MediaWiki version live on Wikimedia sites, rather than the latest release, which typically comes later). The opening paragraph is now cute:

Templates, or custom messages, have grown from humble beginnings as an afterthought in a localisation feature. They are now used in almost 10% of pages in the English Wikipedia database.

I asked Duesentrieb to run a query like this, and apparently there are 229,686 en.wp main namespace non-redirect pages without templates – a very neat 10%. So from 10% usage to 90% usage in less than four years. Pretty impressive, especially given there is no edict mandating their use.

However, this is actually getting well ahead of ourselves. There is an interesting post from Larry Sanger in May 2001 called Do we need templates?:

From: “Krzysztof P. Jasiutowicz”
> Do we need templates of pages ?
> Groups of pages – rock bands, biographies, film entries share common
> features and therefore want some kind of templates.
> Pages of the same category edited by different people tend to follow
> sometimes incompatible patterns or disagree with each other.

One of the reasons that Wikipedia works—why it is developing so quickly and is so attractive to contributors (compelling, one might say…) is that anyone can come in and contribute in practically any fashion. Instigating templates has a number of implications for how we might begin to think of Wikipedia: it would become a collection of standardized information rather than a collection of information that people just happen to feel inspired to input. Who is interested in inputting “standardized information”? Maybe some people, but surely not nearly as many as those who are interested in inputting whatever information they know.

Suppose we were to require (somehow) that everyone writing about the countries of the world input the information in exactly the format of the CIA Factbook. Who, honestly, would want to do that? And on the other hand, who would want to contribute a lot of generally accurate, useful information that will eventually add up to weighty, detailed articles, not necessarily all in the same format?

If I finish the quote here we can all enjoy a guffaw about how things have changed. I think his answer to the question Who is interested in inputting “standardized information”? has been shown to be wrong. Empty edit boxes freak people out. Structured stuff where you just fill out a missing bit here or there is much easier to deal with. (This is also why bots have been so successful in “seeding” wikis. It’s much easier to correct something that’s wrong, rather than write a correct paragraph from a blank slate.)

However, a fairer quote would include the following, where Larry clearly recognises that “it’s early days yet”:

Eventually, I suspect, we’re going to have huge amounts of information, and it will be possible for people to go in and render related entries in a similar format. It’s generally better to impose order after creation, in a way that reflects the natural categories of things as information is given. […] [I]n a constantly-growing, constantly-improving encyclopedia, why not just let people add whatever information they want, and when it’s reached a certain level of maturity, only then start imposing some uniformity on the way similar information is presented?

And that seems to be more or less what happened. I’m not great at this online ethnography biz, so I don’t have any other choice quotes from 2001 to 2004, although I expect there was further discussion about templates and their appropriateness.

What’s interesting is how far they’ve spread. While first imagined as a kind of article skeleton structure, they’re now just as widely used on talk pages and user pages, and for all kinds of maintenance and communication tasks.

A taxonomy of templates

There are some broad classes of templates that can be described:

Now into the user realm —

Any other clear classes I missed? (There are a few I can think of which are pretty boring, hence not here.)

Template complexity

This is what you see when you edit the article on the Melbourne suburb of Hawthorn. Note how the template takes up the entire first screen, and it’s not even done! For a newbie it must be pretty bizarre — although frankly this one’s formatted quite well. But if you’re just trying to get into the guts of it (and remember newbies may not know about section editing), it’s quite “WRONG WAY, GO BACK”.

So there is the complexity of templates — and typically these infobox ones — within articles. Maybe one day MediaWiki will get some whiz-bang “template adder” for articles and all that ugly template code won’t appear in the edit box. That would be nice.

Then there is the complexity of trying to edit the templates themselves. This is nothing short of a nightmare. Template syntax is approaching a very ugly programming language, especially if you throw in parser functions. The migration to the new preprocessor (Feb 2008) has shown deeply nested templates all over the place.

I don’t really see a solution to this, unfortunately. People can’t help themselves “improving” stuff. Here is one way things get complex real fast:

  1. There are two or more functions that display different content but in a similar context.
  2. Someone decides to combine them in a single template that takes a parameter, which says which content to display. The old templates get deleted/redirected.
  3. Helloooo, complexity.

Repeat this a few different times, at a few different levels, in a few different contexts, and suddenly you’ll find it all very difficult to try and untangle.

Convenience becomes necessity

All templates begin life because someone finds it easier to make a boilerplate and post that, rather than posting something longer, and having to look it up each time.

However once a template exists, the expectation soon develops that whenever it is applicable, it should be used, and the plain text equivalent should not. Even if previously, you could take or leave the plain text equivalent.

I don’t know why this happens, but it does — without fail.

Templates in user communication

This is actually the crux of what I intended to write about. :) In my 2007 Wikimania presentation I talked quite a bit about the wording, attitude and intent of the English Wikipedia user talk templates. I complained that the wording was often officious, scolding and impersonal, and they were not likely to encourage people to become part of the community.

In hindsight, maybe I had the wrong idea about them all along. John Broughton says this in Wikipedia:The Missing Manual (my review):

The primary purpose of a warning about vandalism or spam, perhaps counter-intuitively, is not to get the problem editor to change her ways. (It would be nice if they did so, but troublemakers aren’t like [sic] to reform themselves just because someone asked nicely.) Rather, when you and other editors post a series of increasingly strong warnings, you’re building a documented case for blocking a user account from further disruptive editing. If the warning leads to the editor changing his ways before blocking is necessary, great – but don’t hold your breath.

(Yes, the gender did change in the middle of that paragraph. :) Srsly, accept singular they already!)

If this is a widespread attitude, that you have to wait until someone receives a level 4 template before it’s legitimate to block them, then it’s not too surprising that there is so much trouble with “gaming” on en.wp. That IS a game, isn’t it? It’s hard for me to not see that situation as leading to a punitive block. It’s certainly not leading to a preventative one!

I guess my problem with user warning templates is I have a feeling they don’t work. I have a feeling they don’t improve a situation. I have a feeling they don’t get read — users don’t pay attention to their content.

If there was evidence that anyone read them, learned something from them, or some situation was averted — that would be nice. [Of course such evidence would be anecdotal. That’s all we have when it comes to user interactions.]

Image deletion notification templates

When an uploaded file is nominated for deletion or is actually deleted, it is commonly considered courtesy to inform the uploader, via a template to their user talk page. If they didn’t receive this, they would have no idea their upload had been deleted until they tried to go look at it, which is a pretty nasty surprise. It’s now quite common to visit a user talk page and see a dozen odd notices about missing information on files. Because they are often placed by bots, many can pile up without a human there to notice, “OK, this person seriously doesn’t get this concept, time for a chat”. This is even more true on Commons.

These templates perform two functions: notification + admonishment. They would be better if they were simplified to a single line and only used for notification. Admonishment is something that should be between two humans.

Templates on Commons

There is one benefit to templates on Commons that I cannot ignore, and it is that of translation. Translated templates may mean two users can “communicate” (after a fashion) despite not having any language in common.

Templates are for the benefit of the poster, not the receiver

The benefits are

Just as automated phone answering services are for the benefit of the company, not the caller.

Receiving a form reprimand is patronising. I am not the only one who has this emotional reaction – hence Wikipedia’s essay Don’t template the regulars.

It follows from this that templates are patronising to newbies too. I guess the only reason this is considered acceptable is that as they’re newbies, they won’t realise this template is a form response. (Well, except for how it’s totally generically worded, yeah.) So, since we’re all equal ‘n all, go ahead and template the regulars.

(So far there is no essay Don’t template the newbies. Instead, treat everyone equally badly. ;))

It would be very valuable to see an in-person observational study of people’s reactions as they learn to edit Wikipedia, including how they react to templates. Maybe the vast majority appreciate the “official” warning as it gives them some direction. Maybe they really do pay attention to them.

Maybe the problem is not the tool, but the way it’s being used. Maybe the only thing to do is take a sharp knife to the language that is used, and help resist the idea of messages as block precipitators, rather than messages as useful informers and educators.

10 March, 2008

Comment [5]

Links for 2008-03-04

(Correction: not enabled on test.wikipedia. Try this random testwiki.)

IMG_1474 (via cc-au)

04 March, 2008

Comment

WMF is hiring

The Wikimedia Foundation is hiring: Software Developer / IT Support.

According to the organisation chart there’s room for one more dev, presumably more experienced than this position.

Yay devs :)

26 January, 2008

Comment

Library of Congress & Flickr: that should have been us

Some big news this week is a deal between the Library of Congress and Flickr, in something they’re calling The Commons, “The Library of Congress Pilot Project”. LoC says:

We are offering two sets of digitized photos: the 1,600 color images from the Farm Security Administration/Office of War Information and about 1,500 images from the George Grantham Bain News Service. Why these photos? They have long been popular with visitors to the Library; they have no known restrictions on publication or distribution, and they have high resolution scans. We look forward to learning what kinds of tags and comments these images inspire.

This is a great initiative on their behalf. As a public institution they should be applauded for seeking to make their collections more accessible and more useful. They are indeed a leading example for other cultural institutions to look to and hopefully take inspiration from.

It’s also a very smart move on Flickr’s behalf. It inspires warm fuzzy “public good” feelings, and let’s face it, Flickr does have the best interface for social image management, and tagging is awesome fun.

But when I read this announcement I had a bit of a feeling of being stopped in my tracks. Library of Congress and Flickr? Why wasn’t it Library of Congress & Wikimedia?

Wikimedia Commons users have long recognised the value of the LoC’s collections and there are literally thousands of their images hosted on Commons.

Sharp-eyed Lupo also reminded me of this piece in the Wikipedia Signpost, July 2006:

Wikimedia Foundation representatives met this week with officials from two major institutions regarding the issue of access to archival materials. The United States Library of Congress has expressed interest in including Wikipedia content as part of its archive collection, while also indicating that it could make a sizable amount of its own material available for use on Wikimedia projects. […]

Wikimedia interim executive director Brad Patrick, accompanied by Danny Wool, Kat Walsh, and Gregory Maxwell, met with representatives from the Library of Congress this week to discuss sharing information, sources, and media. The Library, one of the largest and most comprehensive in the world, has offered access to nearly 40 terabytes (approximately 10 million items) of digital information. “That there would be a moment’s hesitation to cooperate fully with the Library of Congress is beyond my comprehension,” said Patrick. “I’m glad that we are moving in this direction.”

Indeed… so what happened in the last eighteen months?

Brad Patrick and Danny Wool have left as staff; Kat Walsh is now on the WMF Board (I’m not sure if she was then), and Danny and Greg are still active within Wikimedia even if not as much as they once were. So not all of the connections from that time have moved on. But whatever they were thinking might happen clearly didn’t happen.

It’s disappointing that we weren’t able to make this happen. More importantly, I hope we will be able to pull our shit together and not miss such opportunities in the future.

There are three aspects:

One is on the organisational side, in terms of positioning ourselves as the partner for these kinds of ventures, public-interest and smart in collectively managing huge media sets. I don’t know how we’re doing on that front. It looks like 18 months ago we weren’t so great at following through, but a lot can change in 18 months, and I imagine it has.

The second is the software side, where we are not the best prospect. Right now Flickr probably does have a better set-up. I can only repeat my request that WMF hire more software developers and put some priority on functionality relating to media-management. It may take a year or two of serious improvements before we provide anywhere near the kind of usability that Flickr does.

The third is the community side, in terms of whether Wikimedians welcome these kinds of ventures. And for once this is actually the easy part. For Wikimedia Commons I feel pretty confident in saying we would rejoice to receive this kind of news.

It is a bit of a kick up the proverbial.

18 January, 2008

Comment [8]

Top 10 software extensions Wikimedia Commons needs in 2008


D-I-Y: © Cburnett, GFDL

The end of the year is typically a time for reflection and planning. Planning is much easier than reflection :) so here’s my list of the top ten MediaWiki extensions that Wikimedia Commons (hereafter, just “Commons”) needs.

#0. SUL

Ah, SUL. No acronym brings wry grimaces to the face of a Wikimedian better than this one, and perhaps no issue better demonstrates the consequences of the Wikimedia Foundation’s shoestring budget. Bug 57, Single user login, unified login, CentralAuth — whatever you call it, it should mean that an account created at any Wikimedia wiki allows one to log in at any Wikimedia wiki. It has been promised since at least 2006; you can currently take part in the testing at test.wikipedia, so there is progress. The full spec is on meta at Help:Unified login.

Why is this relevant to Commons? Because it’s the wiki that editors are most likely to use after their “home wiki”. SUL can reasonably be expected to indirectly promote uploading at Commons for Wikimedians, as another barrier to doing so is removed. (Spare a thought especially for the Spanish and Portuguese Wikipedians, who have disabled local uploads entirely.)

#1. Image search

AKA inbuilt Mayflower. Mayflower exists, is open source, and rocks the socks of everyone who uses it. All that’s needed is for some bright spark to specialpage-ize it, and then a little fairy dust to have that as the default search engine/page to be used within Commons.

If you need to be convinced, it’s easy: default MediaWiki search vs Mayflower

#2. Multilingual categories/tagging

Commons is a multilingual project, but since category redirects don’t work as desired, any given category can only work if everyone uses the same name. The category needs to “work”, so that a visitor can go there and expect to find all the media relevant to that concept. But the redirect/alias bizzo also needs to “work”, so a user can tag/categorise their files using their native language.

The urgency of this task lies in the great shame of a multilingual project having to enforce a single description language on its users. Seriously uncool.

#3. Rating system

Someone did contact me about making progress in coding this up, but I haven’t heard a progress report lately, so it’s definitely something I need to follow up. As Commons grows, it becomes the case that for any given query there may be dozens or even hundreds of relevant files. So having a rank-by-quality or rank-by-average-rating option in the search engine can make a dramatic improvement to the search results.

People love rating stuff, so hey, free data on quality. Off the top of my head I can’t think of any other image database that has a rank-by-rating option but I would be pretty surprised if no one had done it yet.

#4. SVG editing as text
#5. SVG display – pick language labels
#6. SVG display – animated SVGs

These three are naturally related, and arise from the project that’s currently occupying my thoughts (if not my time). SVGs are like the wiki version of an image, as I recently said, because they are so easy to edit. You can open an SVG in a text editor, twiddle with it and save it, and you’ve got a brand new SVG.

But that’s kind of annoying if MediaWiki wants you to download the file first, and then re-upload it again. Instead, it would make more sense to be able to edit an SVG in a wiki page — exactly the same way, in fact. Have the edits be recorded in the image history just like text page revisions. It would be a little bit tricky because you would still want to retain the ability to upload a new version of the file, but it is surely doable.

From there, it should only be a small hop-step-and-a-leap to a special page extension that allowed one to easily translate text labels inside SVG diagrams.

Take for example this diagram of a bicycle. There are currently five different files: Bicycle diagram-en.svg, Bicycle diagram-es.svg, Bicycle diagram-fi.svg, Bicycle diagram2-fr.svg, Bicycle diagram-pl.svg. But there is no need to have five different files. Instead, it would be better to condense all the labels within a single document and extend the image syntax to allow something like this:

[[Image:Bicycle diagram.svg|thumb|language=en]]

So the main part of this request is for the image syntax extension. A tool could be hacked up fairly easily for easy label-translating on the toolserver, I think, although of course it would be preferable within MediaWiki natively.

Lastly, animated SVGs! They’re possible, although admittedly I’ve only ever seen one in existence. GIFs are just so crappy. :( Would be awesome to be bleeding edge on this one.

#7. Gallery preview

Gallery preview exists as JavaScript (and you can install it on Commons now via Special:Preferences > Gadgets), but it would be great to have as a default behaviour. It’s just so nifty! And it encourages browsing around more than the category links at the bottom of the page, I think.

#8. InstantCommons

The idea of InstantCommons is to let any MediaWiki wiki use Commons media as easily and transparently as the Wikimedia wikis do — that is, as if the media were uploaded locally. Such a feature would be of immediate interest to Wikitravel and Wikia, both non-Wikimedia projects, and really be a huge leap forward in Commons’ success at sharing free content.

Current status is unknown, but there’s some code in SVN.

#9. Native CheckUsage

As a consequence of #8, this one becomes much more pressing. CheckUsage exists on the toolserver and tells you in which projects a Commons image is being used. Indirectly that tells you how many people you’re likely to piss off if you delete the image without delinking it first.

This is basic necessary functionality for the Commons community. It would be like removing the ability to unblock users. We have the ability to do the damage, but we also need the ability to survey the scene and minimise it. So it is an uncomfortable situation that we rely on half-hacked-up tools for such a critical task, and it would be even more so if InstantCommons were enabled.

#10. ImportFreeImages

This one’s a gimme. The extension exists, it’s already had a decent workout on Wikia, all that’s needed is some code review and a switch-flick.

ImportFreeImages allows the user to search Flickr and transfer an image locally all within MediaWiki. Because Flickr enables Creative Commons licensing, it is a major source of freely-licensed media. But there are two problems. One is that Flickr also allows non-free licensing, so we have major headaches in teaching people the fine distinction between tiny icons. The second is just that it’s annoying to have to manually save the image locally, upload it again, make sure you copy all the relevant author info and so on, and asking people to do that leaves a lot of room for mistakes.

So ImportFreeImages saves all those problems, and because you can restrict which licensed-images you want it to show from Flickr, you can solve the licensing confusion as well. It acts as a filter on Flickr, and just makes the whole thing a breeze for the user. So — awesome.

There are at least 55,000 images from Flickr in Commons at the moment. (Around 2.5% of the total.) It’s common enough, and causes enough confusion, that the community has built a plethora of tools to try and make it easier:

(The main reason Commons instituted the flickrreview system was because Flickr lets people change their licenses without any kind of historical display, which is seriously uncool, as far as trying to figure out if a stated license was ever valid goes.)

If we had ImportFreeImages, we could more or less forbid people from manually uploading Flickr images, and goodbye Flickr hassle!

So that’s my list. There are other things that I want, such as structured data, but I don’t really see it as likely to happen by the end of 2008. 2010, maybe. These ones all seem within reach (OK, with the exception of #2, but you have to dream big, right?). If there’s anything you think I missed, drop me a note and let’s hear it.

20 December, 2007

Comment
