Working on Dandelion, I had to design what the addresses of the various pages would look like. I could have gone with the traditional PHP approach, like
http://example.com/wiki?page=SomePage&action=edit&revision=3, but it's not very pretty. The alternative is the somewhat better-looking approach of modern wikis, like
http://example.com/SomePage?action=edit&revision=3, but it's still more RPC than REST.
To make my wiki RESTful I had to change its internal organization: no more actions! Actions are not REST! I just renamed them to "views".
Ok, ok, that's not really much of a change. But it lets you come up with something like
http://example.com/SomePage/edit?revision=3, which does look RESTful and allows you to have local links to actions (which, in turn, means that the parts of the page generated by the theme don't have to know the base URL of the wiki).
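A dispatcher for this scheme only has to peel the view name off the end of the path. Here's a minimal sketch in Python – not Dandelion's actual code, and the set of view names is made up for illustration:

```python
# Sketch of routing URLs like /SomePage/edit?revision=3 into
# (page, view, parameters). Hypothetical view names, not Dandelion's.
from urllib.parse import urlparse, parse_qs

VIEWS = {"view", "edit", "history", "raw"}  # assumed view names

def dispatch(url):
    parsed = urlparse(url)
    parts = [p for p in parsed.path.split("/") if p]
    # The last path segment is a view name if we recognize it;
    # otherwise the whole path is the page and we default to "view".
    if parts and parts[-1] in VIEWS:
        view, page = parts[-1], "/".join(parts[:-1])
    else:
        view, page = "view", "/".join(parts)
    return page, view, parse_qs(parsed.query)

dispatch("http://example.com/SomePage/edit?revision=3")
# → ("SomePage", "edit", {"revision": ["3"]})
```

Note the built-in ambiguity: a page literally named "edit" would be swallowed by the view table, which is exactly the sort of thing that ends up needing hacks.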
I went with this approach and soon stumbled upon some problems. One of them was the "raw" view, used to download the content of a page alone, without any markup or parsing – just the file, directly. It's handy for storing things like cascading style sheets or images for your page style (especially after I added proper cache headers to that view). But try to refer to an image at
http://example.com/logo.png/raw from inside a style sheet with address
http://example.com/style.css/raw. Yes, it would be
url(../logo.png/raw). Not pretty. In addition, I needed some hacks to make
http://example.com/RecentChanges point to an action, not a page.
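This is just standard relative-reference resolution at work: the browser resolves logo.png against the "directory" of style.css/raw, which is style.css/, so the style sheet has to climb back out with "..". A quick check with Python's standard urljoin, using the example URLs above:

```python
from urllib.parse import urljoin

# A bare relative reference resolves one level too deep, because
# the trailing /raw segment acts like a file inside style.css/:
assert urljoin("http://example.com/style.css/raw", "logo.png") \
    == "http://example.com/style.css/logo.png"  # wrong target

# The style sheet has to escape the phantom directory with "..":
assert urljoin("http://example.com/style.css/raw", "../logo.png/raw") \
    == "http://example.com/logo.png/raw"
```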
That made me settle on URLs like
http://example.com/edit/SomePage?revision=3 instead. I can now refer to
http://example.com/raw/logo.png from inside
http://example.com/raw/style.css simply as
url(logo.png). And not only that! I can use built-in authentication mechanisms to restrict access to the
/edit action. I can even use the – so far useless –
robots.txt file to tell bots to keep away from special actions, err… I mean views, and only index real pages! I think I have found the right solution.
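With all the special views sharing a known prefix, that robots.txt can be very simple – something along these lines, listing only the views mentioned above (a real wiki would have a few more):

```
User-agent: *
Disallow: /edit/
Disallow: /raw/
```

Everything else – the plain page URLs – stays crawlable, so the bots see exactly the rendered pages and nothing else.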