Etech Wrapup

The final day of Etech was substantially scaled down in terms of presentation-room size, but the audiences just got more and more packed. It seemed almost ridiculous, to the point that Bradley Horowitz half-jokingly asked the Yahoos to leave and free up valuable oxygen. In any case, here’s what I tried to cover in this last post:

There was definitely some gee-whiz factor at the morning AJAX sessions. The first presentation was more theoretical than anything: Carsten Bormann presented the beginnings of a disconnection-tolerant AJAX library, which would help people create apps that survive network detachments and outages. Beyond the obvious caching of deltas, he hinted at a few technical gotchas, including error handling on the communication errors themselves (due to some odd error capturing that Firefox does), and he suggested storing all changes as cookies in the browser, regardless of connection state. Clearly, this presents some inefficiency problems, but it seems like a reasonable approach (to me) to make sure state changes consistently get communicated to the server on each request. The server then takes over the responsibility of deleting each confirmed cookie transaction with Set-Cookie. This is definitely confined by the usual cookie limitations, but it’s an interesting approach. He said that a release is available at http://prj.tzi.org.
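To make the cookie-queue idea concrete, here’s a minimal sketch of how it might work. These function names are mine, not from Bormann’s library: each pending change is serialized into its own cookie, rides along on every request, and the server expires that cookie via Set-Cookie once it has durably applied the delta.

```javascript
// Hypothetical sketch of the "deltas as cookies" technique.
// In a browser you would assign deltaCookie()'s result to
// document.cookie; these are written as pure functions so the
// serialization logic is easy to follow (and to test) anywhere.

// Build a cookie-setting string for one pending change.
// encodeURIComponent keeps '=' and ';' out of the cookie value.
function deltaCookie(txnId, change) {
  return 'txn_' + txnId + '=' +
    encodeURIComponent(JSON.stringify(change)) + '; path=/';
}

// Server side of the idea: parse the pending deltas back out of an
// incoming Cookie header, ignoring unrelated cookies like session ids.
function pendingDeltas(cookieHeader) {
  var deltas = [];
  cookieHeader.split(/;\s*/).forEach(function (pair) {
    var eq = pair.indexOf('=');
    var name = pair.slice(0, eq);
    if (name.indexOf('txn_') === 0) {
      deltas.push({
        id: name.slice(4),
        change: JSON.parse(decodeURIComponent(pair.slice(eq + 1)))
      });
    }
  });
  return deltas;
}
```

After applying each delta, the server would answer with a Set-Cookie for that `txn_` name carrying an expiration date in the past, which is the standard way to delete a cookie and thus acknowledge the transaction.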

Then, following up on the AJAX braindumps was the venerable Steve Yen, creator of the fascinating NumSum and NextAction, AJAX-style web apps that offer the intriguing feature of saving state on Save Page As…, PLUS the ability to upload local changes when a connection is available. His apps show some serious promise, and I’ve been really impressed with his work so far, especially the crazy development he’s done with NumSum. Apparently, many modern browsers save the modified DOM tree on File->Save As…, which is what lets him do his magic in the applications. A Flash 8 local-storage technique gets around the browsers that are missing this behavior, but Flash 8 has its own issues… supposedly one of the Safari guys from Apple was IN the room, and I think this may bode well for support in Safari soon.

A lot of what Steve’s apps do is really slick. He gave a small demo of the current NumSum and showed layered graphs on the web-based, Excel-style spreadsheets, which actually looked like they operate more snazzily than the real thing. They also support inline view and edit(!) of the source. He went into the technical implementation in pretty heavy detail: he does everything in terms of small deltas stored as single records, which are kept locally and later synchronized with the server. To support that, he wrote a SQL-like TQL emulator in JavaScript to mirror the equivalent statements that run against the server’s MySQL store. He also went over some more tech specs of his TrimPath Junction framework, written for these apps, including a small discussion of the power of the with statement to dynamically scope names in code. Pretty mind-bending stuff, and he’s been able to go far with these basic concepts.
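The delta-record idea is simple enough to sketch. This is a toy illustration of the concept, not Steve’s actual TQL API: every edit becomes a small record in a local store, and a SELECT-like query over those records tells the app which deltas still need to be pushed when the connection comes back.

```javascript
// Toy version of the local delta store behind the offline-capable
// spreadsheet idea. Names here are illustrative assumptions.
var records = [];  // each record is one small change (a delta)

function insert(record) {
  records.push(record);
}

// A hypothetical where() that plays the role of
// "SELECT * FROM records WHERE <predicate>" in the local emulator.
function where(predicate) {
  return records.filter(predicate);
}

insert({ id: 1, cell: 'A1', value: 3, synced: false });
insert({ id: 2, cell: 'B2', value: 7, synced: true });

// Only unsynced deltas need uploading once the network returns.
var pending = where(function (r) { return !r.synced; });
```

The appeal of mirroring SQL-ish queries on the client is that the same mental model (and nearly the same statements) applies whether the data is in the browser or in the server’s MySQL store.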

After that, I had a tough time deciding between Liz Turner’s presentation on a visualization strategy she built for a huge Harper’s Weekly data set, and the Pervasive Electronic Games presentation. I chose Liz Turner’s presentation.

She started off with a discussion of the existing archive she operated on: a time-sorted list of articles, browsable by keyword hierarchy. The taxonomy itself was nothing to sneeze at and provided an interesting basis for the visualization research. The browsing app itself was fascinating, and for the purpose of discovering intersections between the various topics covered, it was definitely an impressive tool. She covered her general design approach to visualizing the data set effectively, including her use of an iconography or “picture plane” to give viewers an anchor through which they could differentiate the connected layers of data. She also expressed the philosophy that the map was built to reflect, and allow manipulation of, the query itself. I felt that some of the confusion and difficulty the audience had interpreting it came from the lack of a clear way to differentiate between the chosen keyword/icon and all of the keyword/icons attempting to intersect the main data set.

Although I felt the usefulness of this example was limited by the bias of the content, the approach was sound, and I would love to see a mostly automated solution based on automatic tagging and filtering of content, resulting in a two-level tag hierarchy over any time series of text-based content. That would be something astounding: you could feed in huge text data sets (Usenet or the Well archives, anyone?) and get this kind of visualization to find clusters and intersections in the data in a pretty smooth way. I’d probably hook a new exclusion-dictionary ability in Freetag together with an autotagging parser and hierarchy generator, and then hope that the graphical browser was open source and operated on an open schema.
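The autotagging half of that pipeline is the easy part to imagine. Here’s a toy sketch of the idea, with entirely illustrative names (Freetag’s real exclusion-dictionary work would live on the PHP side): drop any word found in an exclusion dictionary, then take the most frequent remaining terms as the document’s tags.

```javascript
// Toy autotagger: count word frequencies, skip excluded words,
// and return the top maxTags terms. Purely an illustrative sketch.
function autoTag(text, exclusions, maxTags) {
  var counts = {};
  (text.toLowerCase().match(/[a-z]+/g) || []).forEach(function (word) {
    if (exclusions.indexOf(word) === -1) {
      counts[word] = (counts[word] || 0) + 1;
    }
  });
  // Sort candidate tags by descending frequency and keep the top few.
  return Object.keys(counts)
    .sort(function (a, b) { return counts[b] - counts[a]; })
    .slice(0, maxTags);
}
```

Run over a dated corpus, the top tags per time slice would give you the first level of the hierarchy, with co-occurring tags supplying the second level for the intersection-style browsing the visualization supported.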

Bradley Horowitz‘s presentation drew a packed room, and yes, I absolutely felt guilty for not leaving. However, I felt like I needed some inspiration, and his presentation was a whirlwind overview of the worthwhile goals that Yahoo! is pursuing as a whole. It ended with an awesome demo of Checkmates over a Bluetooth phone-to-J2ME profile connection, which let Ed demo the device and have the progress reflected live on the projection.

After that, I was able to make it to the videogame controller presentation by Tom Armitage, which made a good point about the slow pace of innovation in game controller design.

Then I stuck around for the presentation by some EFF guys about the lawsuits to expect in the near future. It was definitely good to hear them talk about what they’re doing to protect the kinds of innovation that conferences like Etech try to encourage.

In conclusion, I would say that what I got out of Etech had a great deal to do with the fact that I haven’t been hanging out with these people much at all. I’ve been barbecuing in Santa Monica and working on interesting commercial problems in relative isolation for a few years. Perhaps the general dissatisfaction with Etech on the part of the blogerati has more to do with the simple fact that they know exactly who to follow and who to look to at the forefront of technical innovation. However, it is hard to find these people if you’re not one of them. This tiny community of sophisticated developers and designers is really a small portion of the huge industry of disseminating technical knowledge and information. Therefore, I believe that conferences like Etech provide a highly curated level of access to no-bullshit technical innovators, for whom access to each other is no big deal. The challenge for the conference is to attract enough high-level people and technically creative, influential presentations, while keeping the content accessible to those of us without the magical access card to Web 2.0.
