Interaction Fodder

Tuesday, November 08, 2011 - 08:18 PM

Via Gruber I just read through “A Brief Rant On The Future Of Interaction Design”. Briefly, it's a rant about how so-called “touch” designs (and the predictive simulations of their future possibilities) aren't really fully “touch”-based, in that they're unidirectional and incomplete. The user can move things under glass, but has no feedback returned about what's happening under the glass. On top of that, we primates manipulate things with our hands in multiple different ways, and these interfaces take advantage of only a small subset of those possibilities.

The first thing I thought of while reading it was Horace Dediu’s recent post on Revolutionary User Interfaces, which discusses how Apple’s major user-input changes have been a major factor in the success of their products. From the mouse, to the iPod click wheel, to the iPhone’s current touch interface, the interaction method has been the defining product differentiator.

I have no predictive thoughts to add on top of that, but it seems unlikely that Apple’s (or others’) teams aren't thinking in similar directions internally. We already have gyroscopes and accelerometers in our hand-held devices; I wonder what the interfaces would be like with pressure sensitivity. I also wonder what could be done once haptic feedback can accommodate both small-scale finger feedback and larger-scale gripping-style feedback.


Revenge of the Marker of the Beast!

Friday, October 07, 2011 - 12:11 PM

Laboratory Industries

In (something resembling) cooperation with r stevens, the Revenge of the Marker of the Beast has been unleashed:

(The Satanic Sticky Notes are also available separately.)

To see what else we're doing as Laboratory Industries, watch the site or follow @LabIndustries.


and… Lion. (the further woes of mysql and mod_perl)

Saturday, September 03, 2011 - 09:50 PM

I try to do my development on my personal machine, not the server. Once it's working, I move it to the development server, fix bugs, move it to staging, fix bugs, move it live, panic.

Hopefully that panic is followed by things working, hopefully quickly, but I digress.

Step 1 is keeping things working on my own machines. I run Mac OS on all those machines, Debian Linux on the servers. It seems that every Mac upgrade causes its own set of headaches. This time, after upgrading to Lion, the problem was that (after all sorts of other, expected, upgrading annoyances) all my perl code ran fine as standalone test scripts, but wouldn't run under mod_perl. I would see:

[Sat Sep 03 18:39:29 2011] [error] install_driver(mysql) failed: Can't load '/Users/phillip/Sites/hosting/perl/lib/perl5/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle' for module DBD::mysql: dlopen(/Users/phillip/Sites/hosting/perl/lib/perl5/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle,
1): Library not loaded: libmysqlclient.16.dylib
    Referenced from: /Users/phillip/Sites/hosting/perl/lib/perl5/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
    Reason: image not found at /System/Library/Perl/5.12/darwin-thread-multi-2level/DynaLoader.pm line 204.
    at (eval 3317) line 3
    Compilation failed in require at (eval 3317) line 3.
    Perhaps a required shared library or dll isn't installed where expected
    …

It turns out that when you search for this kind of error, you get a lot of help for how to fix it under Rails. I don't want to steal from any of the numerous posts on how to fix it there; it took reading through a whole bunch of them to figure out the "correct" solution, so I don't have a particular one to credit with helping me, although they all helped me grok the situation. It also would have been faster if I had known more about how Macs deal with dynamic libraries.

The important thing is that libraries contain within them the paths to the other libraries they work with. You can see these included paths with the "otool" command. In this case it was the perl DBD::mysql file mysql.bundle that was unable to find a library it needed, specifically the libmysqlclient.16.dylib file:

$ otool -L /Users/phillip/Sites/hosting/perl/lib/perl5/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
/Users/phillip/Sites/hosting/perl/lib/perl5/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle:
  libmysqlclient.16.dylib (compatibility version 16.0.0, current version 16.0.0)
  /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)

The problem is the first line returned: "mysql.bundle" is looking for "libmysqlclient.16.dylib" by a relative name, and it needs to look for it by an absolute path.

The tool to fix this is apparently "install_name_tool", which you can use to change the paths that otool returned.

Now, the actual path of my "libmysqlclient.16.dylib" library is "/usr/local/mysql-5.5.9-osx10.6-x86_64/lib/libmysqlclient.16.dylib". However, I know that there's a "mysql" symlink corresponding to "mysql-5.5.9-osx10.6-x86_64" and a "libmysqlclient.dylib" symlink corresponding to "libmysqlclient.16.dylib". So instead of using "/usr/local/mysql-5.5.9-osx10.6-x86_64/lib/libmysqlclient.16.dylib" I can use "/usr/local/mysql/lib/libmysqlclient.dylib". Hopefully future mysql upgrades will maintain those symlinks and I won't have to do this again.
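If you want to double-check that both symlinks exist before depending on them (the version-stamped directory name is from my install; yours may well differ), "ls" will show where they point:

# -d keeps ls from following the directory symlink
$ ls -ld /usr/local/mysql
$ ls -l /usr/local/mysql/lib/libmysqlclient.dylib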

The command I need to run, then, tells "mysql.bundle" where to find "libmysqlclient.dylib". This requires admin/sudo privileges, so it is:

$ sudo install_name_tool -change libmysqlclient.16.dylib /usr/local/mysql/lib/libmysqlclient.dylib /Users/phillip/Sites/hosting/perl/lib/perl5/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle

or, in more generic terms:

$ sudo install_name_tool -change "old, bad library name" "new, fully functional path" "really long path to the problem child/library"

After changing the relative path to an absolute one, otool says:

$ otool -L /Users/phillip/Sites/hosting/perl/lib/perl5/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
/Users/phillip/Sites/hosting/perl/lib/perl5/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle:
  /usr/local/mysql/lib/libmysqlclient.dylib (compatibility version 16.0.0, current version 16.0.0)
  /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)

and Apache starts happily.

Actually, it found more …problems… but they were generic and more easily sorted out. At the end of the day, everything is happy now. On the computer, at least.

Postscript: the reason this wasn't an issue under my test scripts, run from my home account, is that my "DYLD_LIBRARY_PATH" environment variable contains "/usr/local/mysql/lib" as a path. Apache doesn't run under my account, and its security…stuff… makes it difficult to set a similar path that DBD::mysql will see.
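For reference, the shell-side version of that is just an environment variable export; a sketch of what my profile does (which file it belongs in depends on your shell):

# let dyld find the mysql client libraries when running test scripts
export DYLD_LIBRARY_PATH="/usr/local/mysql/lib:$DYLD_LIBRARY_PATH"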


Database fun!

Friday, September 24, 2010 - 10:37 PM

A long overdue post about what my last year has been like is pseudo-drafted, but in the meantime, just a quick notice that the database machines are being replaced in a couple of hours!

fun?

So I'll be up until the move is complete, and I apologize in advance for all the inevitable breakage.


San Diego Comic-Con 2009: The Sequel ^ 8

Sunday, July 19, 2009 - 12:24 PM

On Wednesday, San Diego Comic-Con starts.

This will be the 8th year I've been there, and they've gotten more fun every year. However, I'm not sure there's much that could top last year, which I consider to be incentive for everyone involved to try harder.

As usual, I'll be at booth 1335/1337 with the Dumbrella crew. With me this year are:

If I'm feeling inspired I'll get our crappy, outdated boothcam working and post it here, but I make no promises.


The end of GeoCities

Friday, April 24, 2009 - 06:41 AM

Apparently, Yahoo is shutting down GeoCities at some point this year. To me, this marks an end, of sorts, to the original dot-com era.

Back in my days at theglobe.com, my primary project was our homepage builder. It eventually had some marketing-oriented name (uBuilder?), but not for most of the time I was working on it. It originated as a way for users of our web-based chat system to upload personal icons, and ended up as a GeoCities competitor.

The big thing, I felt, that made us "better" was that we didn't have complicated URLs based on which "community" your page was in; we just had "members.theglobe.com/username/" style URLs. Also, I think our page-building tools were better and more flexible. On the downside, I think at our peak we had about 1-5% of the traffic that GeoCities did (although over a million page views in a day was a big deal back in 1997, on 1997-era hardware). On the upside, in 1997 the only developer working on this project was me, so our development team was cheaper.

But for fun, here's Jon's page. We made ones for toothgnip and diablo too, but they don't seem to be archived.


Apache deflation and negotiation

Friday, April 17, 2009 - 04:38 PM

Sometimes, wasting time isn't entirely unproductive.

This week, while thinking of getting work done on some longer term projects that are standing nearby and mocking me, I somehow got conned into installing Yahoo's YSlow Firefox extension.

It's pretty cool in a masochistic sort of way. It gives you a performance evaluation of whatever site you're looking at, based on "best practices" for HTML, server configs, etc. In my case, the one thing that popped out at me while looking at my homepage here was that I wasn't compressing any of the "text" content (HTML, CSS, JavaScript, RSS feeds, etc.) that the machines serve. Server-side auto-compression is one of those things that I remember looking at a few years ago before being distracted by the next shiny bauble, which prevented me from actually doing anything about it.

The idea is to auto-compress any text being served back so that the payload delivered to the clients (you and your web browser) is smaller, gets to you faster, and loads faster. Computers are fast enough, and the files are small enough, that the compression speed hit is far outweighed by the network latency speed gain.
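If you want to see the difference for yourself, comparing body sizes with and without compression is a quick curl trick (any URL will do, assuming the server honors Accept-Encoding; this site is just my example):

# size of the uncompressed body
$ curl -s http://www.dumbrellahosting.com/ | wc -c
# size of the gzipped body (curl prints the raw compressed bytes here)
$ curl -s -H "Accept-Encoding: gzip" http://www.dumbrellahosting.com/ | wc -c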

As an added bonus, since the server has to hold the network connection open, with all its associated memory and resource usage, until the client has finished getting all its content, this frees up those resources a little bit faster.

The first step was enabling Apache's mod_deflate module. This seems simple enough; the docs even have a perfect example right at the top:

# Compress only a few types
AddOutputFilterByType DEFLATE text/html text/plain text/xml

Nifty! Now let's check the documentation for AddOutputFilterByType:

Compatibility: Available in Apache 2.0.33 and later; deprecated in Apache 2.1 and later

Well, crap.

I'm running Apache 2.2 (which is also what all the documentation links point to), so I probably shouldn't start by implementing this with a deprecated config directive.

It does point us in the direction of its replacement, the mod_filter module. Reading through all this documentation is not entirely unconfusing, as there are a lot of parts without a very coherent picture of the whole. At the end of the day, what it comes down to is that I need to first define my filter, and then apply it where and how I want to. To define it, I put this at the top of my config:
FilterDeclare compress-response
FilterProvider compress-response DEFLATE resp=Content-Type $text/
FilterProvider compress-response DEFLATE resp=Content-Type $application/x-javascript

This declares a filter with the name "compress-response" and then says that it should be applied to anything with a MIME type starting with "text/" (e.g. text/html), or with "application/x-javascript". Further down, in each virtual host where I want to use this compression, I need to add the line:

FilterChain compress-response
Nice and easy!
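For context, that one line just goes inside the vhost definition; a minimal sketch (the ServerName and DocumentRoot are placeholders, not my real config):

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
    # compress text responses for this host
    FilterChain compress-response
</VirtualHost>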

For the purposes of full disclosure, there's also some stuff in the mod_deflate documentation that I used for determining browsers where this will and won't work, so the full set of directives is:
<Location />
# Insert filter
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# Make sure proxies don't deliver the wrong content
Header append Vary User-Agent env=!dont-vary

FilterChain compress-response
</Location>

I'll probably eliminate a bunch of those at some point as I don't think we need to worry about Netscape 4 a whole lot these days.

This worked, and made me happy. It's been live for most of this week, and nobody noticed, commented, or complained. Success!

What bugged me about this is that while compressing on the fly makes sense for all the dynamic pages (most of mine and my clients' sites), it seems like a waste of resources for things like the JavaScripts served back from OhNoRobot, which are written to disk once and then served back multiple times. It makes more sense to zip them once, when they're being written, and then serve those back to the clients that can handle it.

I'm about a decade too late to be the first person to think of this, so it's also conveniently built into Apache. Content Negotiation also supports sending back pages in multiple languages, but for my purposes I wanted to send back a ".gz" file instead of a ".js" if there was one available. The two important things to do are: add MultiViews to your enabled Options, and add an AddEncoding for the .gz files:
<Location "/js/">
Options +MultiViews
ForceType "application/x-javascript"
AddEncoding x-gzip .gz
</Location>

I also had to add the ForceType directive, because otherwise the .gz version of the file would be served back with "Content-Type: application/x-gzip" instead of as JavaScript. The other thing that wasn't immediately clear in the documentation is that in order to support the content negotiation between .gz and .js, you need to have both files there, and both need to be the base ".js" name plus a suffix, so (for example) Dinosaur Comics needs both "/js/23.js.js" and "/js/23.js.gz" on disk.
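Generating that pair whenever a script is written out is a one-liner; a sketch using the example file above (the path is illustrative, and -9 just asks gzip for maximum compression):

# after writing /js/23.js.js, create the compressed sibling for MultiViews
$ gzip -9 -c /js/23.js.js > /js/23.js.gz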

If you want to test this using curl from the command line, add "-H "Accept-Encoding: gzip,deflate"" to your requests, e.g.:

$ curl -vI -H "Accept-Encoding: gzip,deflate" http://www.dumbrellahosting.com/

If it's working, the response headers will include "Content-Encoding: gzip".

As a final remark, I'm pretty sure that most of the above is pretty obvious to all good Apache administrators. However, for those of us doing that as just one part of a larger job, it seems remarkably difficult to find a coherent set of task-oriented how-tos. Mostly I document this so I'll remember what the hell I was thinking when I look at this config again next year.

