Avi Bryant’s selective vision?

In Mike Schinkel’s artfully crafted rant, Avi Bryant is “Seeing things the way in which one wants them to be (not the way they are)” — especially with regard to the need for clean URLs. I can see the merit of both sides of this argument, and I think it speaks to the larger divide between page-oriented frameworks like Rails and more flexible, but arguably less web-centric, frameworks like Wicket.

Of course, for every Wikipedia, or other site that makes effective use of clean URLs, I could give you an Amazon — as Avi cited — or an iTunes music store, or a GMail, where URLs are completely unimportant, and those applications do not ‘subvert the web’ or leave their users deprived. Whether the interface a system presents through its URLs matters is a design decision.

As to the wider question of whether Avi Bryant suffers from “confirmation bias,” or otherwise fails to embrace views that challenge his suppositions — well, I think we should refrain from psychoanalyzing each other through our blogs, eh? When I spoke with Avi, he actually agreed with me that learning Haskell or Erlang would be a great thing to embark on, one that challenges many of our preconceptions of what software should look like. He is an open-minded guy.

Finally, I should mention that clean URLs in Seaside, as Ramon Leon pointed out, are quite possible; they are just not the default behavior.
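
For what it’s worth, here is a rough sketch of what that can look like. This is not the canonical Seaside recipe, just my recollection of the 2.x-era hooks (registerAsApplication:, initialRequest:, updateUrl:); the exact request accessors vary between Seaside versions, and the ProductView class, the 'store' path, and the productId handling are all made up for illustration.

    "A component reachable at a clean path such as /store/product/42,
     rather than Seaside's default session-keyed URL."
    WAComponent subclass: #ProductView
        instanceVariableNames: 'productId'
        classVariableNames: ''
        category: 'Store-Sketch'.

    ProductView class >> canBeRoot
        ^ true

    ProductView class >> initialize
        "Register this component as an entry point at /store."
        self registerAsApplication: 'store'

    ProductView >> initialRequest: aRequest
        "Called on the first hit; pull the trailing path segment out of the
         request URL (how the path is exposed differs across versions)."
        | segments |
        super initialRequest: aRequest.
        segments := aRequest url findTokens: '/'.
        segments notEmpty ifTrue: [ productId := segments last ]

    ProductView >> updateUrl: aUrl
        "Keep the visible URL clean by appending our own path segments."
        super updateUrl: aUrl.
        aUrl addToPath: 'product'.
        productId ifNotNil: [ aUrl addToPath: productId ]

    ProductView >> renderContentOn: html
        html heading: 'Product ', (productId ifNil: [ 'unknown' ])

The point is simply that the hooks are there; Seaside just does not push you toward bookmarkable URLs by default the way Rails routing does.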

Did I mention Wicket yet? It is really cool… especially if it could be combined with JRuby.


3 thoughts on “Avi Bryant’s selective vision?”

  1. Thanks for the kind words and interesting analysis.

    I do have a question to pose to you. You cite Amazon, iTunes, and GMail as applications where URLs “are completely unimportant.” I contend they are successful applications in spite of their use of crappy URLs. As Al Ries says in “The 22 Immutable Laws of Branding”, a company can succeed with poor branding only because its competitors don’t do a better job.

    So I’ll ask: Can you give me examples of websites that:
    1.) use crappy URLs,
    2.) were started since around 2003 (i.e., the birth of Web 2.0),
    3.) were *not* part of a wealthy parent company in their formative years,
    4.) and have seen *significant* growth?

    I can give you many counter examples (some of which started before 2003):

    — Google Search (not always great URLs, but usually good)
    — Digg (not great URLs, but good)
    — WordPress w/its permalinks
    — Flickr (in most cases)
    — YouTube (not great URLs, but their strategy leverages URLs)
    — Twitter
    — del.icio.us
    — StumbleUpon (not always great, but often good)
    — JotSpot
    — MyBlogLog
    — WebJay
    — Upcoming
    — Eventful (not great, but usually reasonably good)
    — LinkedIn

    There are hundreds more; need I go on?

    Oh, I can also give you some really, really crappy ones 🙂

    — Dell.com
    — HP.com
    — IBM
    — zdnet.com
    — eBay (well, not as bad as they could be)

    BTW, while I don’t use iTunes, I frankly find Amazon and GMail two of the most frustrating websites/apps I have ever used. Why? Because of how Amazon uses URLs and how GMail uses state and URLs. I’ve been planning a rant about both of them forever, for when I get the time… 🙂

    Oh, and regarding your comment about “refraining from psychoanalyzing each other through our blogs”: I thought that was the whole point of blogging! 😉

    Thanks again for a good post.

  2. I think Amazon’s URLs are clean; they’re just not guessable. Every resource has a unique URL, so it conforms to REST principles. In the context of this conversation, though, clean == guessable, so I’ll just sum up my thoughts by saying that while guessable URLs aren’t critical (and unique RESTful URLs are), all other things being equal they are certainly “better” than non-guessable URLs. I don’t see much controversy in that.

  3. I like the idea of RESTful URLs for very specific parts of a web app. They make perfect sense when we’re talking about a product page or a weblog entry, right? But the notion that we require clean URLs everywhere is clearly overkill.

    I really like that frameworks such as Seaside focus on building out a usable workflow FIRST, and then adding RESTful URLs (if you need them) later. It seems plausible that your entire problem domain does not require RESTful URLs. Is your entire app really supposed to be an API? Do you really want to worry about properly protecting all resources?

    Just my two cents. I really like your writing (please do more), and thank you for the del.icio.us link on my Ballsy Seaside article.
