Hello.

I'm a digital product and graphic designer. I love device-responsive web standards, functional user interfaces and branding, especially if there's a new product or service involved.

That's pretty specific, though. Deep down I really love designing all sorts of things. I geek out on physically interactive spaces and objects, data art, computational aesthetics, as well as bio-design.

I studied visual communication and art history at The George Washington University and I'm a graduate of New York University's innovative design and technology master's program, ITP.

I live, work and ride bikes in sunny Brooklyn, NY.


Academic Experience

2010.09 — 2012.05

Master of Professional Studies
Interactive Telecommunications Program (ITP), Tisch School of the Arts, New York University

2000.09 — 2004.05

BA in Visual Communications with a minor in Art History
The George Washington University
Graduated Cum Laude
National Society of Collegiate Scholars
Spring 2003 semester at the University of Sydney, Australia

Professional Experience

2012.08 — present

UX Designer, Microsoft, New York, NY

I'm only just getting started.

2012.01 — 2012.05

Interaction Designer, SumAll, New York, NY

Worked with a small team of designers and developers to release the front end of an analytics web application. Integrating an impressive array of data sources into a smart and charming experience, the application allows e-commerce business owners to save time and make better decisions.

2011.06 — 2011.09

UX Designer, Microsoft Bing, Bellevue, WA

Worked with design, editorial, dev and program management teams to scope, design and develop prototypes for a soon-to-be-released Bing.com feature during a summer internship. The internship culminated in two presentations of the feature prototypes to senior leadership at Microsoft as well as the Bing design team.

2007.02 — 2010.08

Graphic & Interaction Designer, Empax, Inc., New York, NY

Created a range of environmental, print and interactive materials to promote nonprofit clients and their causes. Responsible for designing and presenting brand strategies, identities, print collateral, environmental signage, animation, user experience and interface design, content management system setup, third-party plug-in and data integration, search engine optimization, and user analytics and testing.

2006.12 — 2011.08

Freelance Graphic & Interaction Design Consultant, New York, NY

Worked as a sole proprietor with clients from the retail, music, film, nonprofit, real estate and technology industries to create new and improve existing brand and user experiences across many platforms and media, primarily print and web.

2004.04 — 2006.01

Graphic Designer, The George Washington University Communication & Creative Services, Washington, DC

Worked with project management and external production vendors to deliver a range of print and interactive material related to university publications and communications initiatives. Responsibilities included design and implementation of print collateral, posters, animation, environmental signage, web publication and press checks.

Other Experience

2011.11 — 2012.02

Vibrant Technology Researcher, Intel Research, NYC
Grant recipient working with NYU faculty, Intel researchers and student collaborators to design and develop a prototype for a location-based interactive organism that explores what happens when technologies are re-envisioned as peers instead of tools.

2006.01 — 2006.12

English Teacher, NOVA Japan, Kure-shi, Hiroshima-ken, Japan
Taught and mentored students of all ages and abilities in small to medium-sized classes to improve their proficiency in English grammar and conversation.

Selected Press & Publications

2012.05

Project: #BKME
Creative Applications (Web)
“BKME.ORG – A Web Platform for Reclaiming Bike Lanes”
by Greg J. Smith

2012.03

Project: #BKME
Laughing Squid (Web)
“BKME, Web Platform For Recording Bicycle Lane Violations”
by Edw Lynch

2011.07

Project: Budget Climb
Freakonomics (Web)
“What Would it Be Like to Climb 26 Years of Federal Spending?”

2011.04

Project: Budget Climb
Flowingdata (Web)
“Physically climb over budget data with Kinect”
by Nathan Yau

2011.02

Project: Gedenk Logo
Logo Lounge 6 (Book)
by Catharine Fishel and Bill Gardner, Rockport Publishers

2010.12

Project: Pousse Cafe
Gizmodo (Web)
“A Bartender That Pours The Perfect Shot, Every Shot” by Matt Buchanan

2009.11

Project: The 2007 Gotham Awards Logo
Basic Logos (Book)
by Index Book

2008.10

Project: The Alliance for Climate Protection Website
Print Magazine
“Dialogue: Martin Kace”
by Steven Heller

Selected Exhibitions

2011.12

ITP Winter Show 2011, NYC

2011.05

ITP Spring Show 2011, NYC

2011.04

Data Viz Challenge Party, hosted by Eyebeam and Google, NYC

2010.12

ITP Winter Show 2010, NYC

Magic 8 Bama

May 9, 2011

Questions were tweeted by the audience to @magic8bama at the beginning of the performance and answered on the spot.
The Magic 8 Bama in stasis.
Responses were generated by n-gram analysis of the source text.
This particular version used video from the April 16th, 2011 weekly presidential address.
Future versions will include more source videos and thus a greater vocabulary to draw from.
Extra response feedback allows for greater comprehension, especially on shorter video clips.

Role(s): Design, Programming

How Does The Magic 8 Bama Work?

If you could ask President Obama one question, what would it be? This is the question posed to visitors of the site. The visitor tweets their question to @magic8bama and proceeds further into the experience, where they can find their question within a list of others. Clicking on their question solicits a response from The Magic 8 Bama: a video speech cut up into individual words and arranged to reflect responses generated by running n-gram analysis of the speech's transcript. The intent is to be one-third vague prophecy, one-third comedically perplexing and one-third frustrating, much like the original Magic 8 Ball.

Concept

Although the metaphor and experience are intended to be light-hearted, the genesis of the idea stems from my interest in how our digital identity grooming practices, and the thorough richness with which we embed them, might be manipulated in ways we never intended.

These online identities have, for the last decade or so, been limited to a noise of numerical and textual data linked to poorly defined behaviors. We've never really been at risk of losing more than our credit card numbers, or possibly disclosing our address to an advertiser we'd rather didn't have it. However, as we willingly supply greater amounts of personal information (intimate details of our interpersonal relationships, images of ourselves and our friends, the things we like and love and dislike), we have given away some of the most personal keys to our identity.

One of the clearest examples of how powerful these intimate details are, and how uneasy we become at their manipulation, is Face to Facebook, a project that stole a million public Facebook profiles and recontextualized their images into categories for a fake dating site. Imagine if your profile picture showed you happy and smiling at Disney World, but the face detection on Face to Facebook put you into a "single, lonely and dumpy" category. It performed such classification algorithmically, so who is to defend your claim that you're not?

Building on this idea, I thought: if we have opened ourselves up to such manipulation by providing only some text about our interests and an image, how sovereign will our identities be in the not-so-distant future of ubiquitous online video? There, our identities will be scrutinized from multiple angles, our voices will be analyzed, and the way we interact with other people will be open to interpretation. I wanted to experiment with this idea in an effort to demonstrate how this manipulation might happen using online video.

As it turns out, I've also been exploring the role media plays in forming opinions of candidates and their political beliefs.

Technical Construction

The result I initially set out for was a Max Headroom effect, where I would be able to control the speaker in any way I saw fit. Although I don't imagine this is exactly how the future of our online video identities will play out, to get the point across I needed as uniform a talking head as possible. Keeping in mind that I would be jumping around the audio track as well as the video track, using a speaker with a fairly uniform position on screen would minimize the visual noise to something tolerable for the viewer. The most obvious and readily available sources of video with such a subject were newscasters and the President.

Knowing that I would also need accurate time codes for each word spoken in the video in order to parse out new statements, I figured something like subtitles would get me started. The only problem with subtitles is that they are accurate to the sentence level, and I was looking for word-level accuracy. Even with a good subtitle track, I would still need to manually adjust for each word in the sentence. So I started researching techniques to automate this process. While nothing proved perfect, one of the best open source resources I found was a Java class from the Sphinx4 project. Using the aligner class (sorry, the documentation is not good), you can feed in the audio track along with an accurate transcript, and the class will output each word with in and out time codes.

Knowing this, I decided that subtitles were not a necessity, but that I would need really good transcripts, which would probably mean the speaker would be reading something pre-written. While news anchors read from teleprompters, I figured that the President would probably have greater care put into transcribing his words. Sure enough, WhiteHouse.gov has incredible documentation of each speech the President gives, and he also gives Weekly Addresses, about four minutes long, on the latest political discourse. In these in particular, no one is interrupting him and there's no coughing, clapping or laughing, which makes for a great audio track and a very accurate transcript. The site releases both .mp4 files of the videos and .txt files of the transcripts into the public domain.

I found a pretty good video of Obama's April 16th, 2011 Weekly Address about fiscal responsibility and downloaded both the .mp4 and the transcript. I brought the video into Final Cut and exported an .aif audio track, which I then converted to a 16-bit mono .wav file (this format is important; after some trial and error I discovered that Sphinx4 is far less accurate with other formats). Also, after some trial and error working with the transcript, I discovered that I needed to strip out all the punctuation and capitalization. Here's a sample of what my transcript looked like before I fed it to the Java class:

this week i laid out my plan for our fiscal future it’s a balanced plan that reduces spending and brings down the deficit putting america back on track toward paying down our debt we know why this challenge is so critical if we don’t act a rising tide of borrowing will damage our economy costing us jobs and risking our future prosperity by sticking our children with the bill at the…
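Mechanically, that cleanup boils down to something like the Python sketch below (the filename is hypothetical, and apostrophes are kept so contractions like "it's" survive, as in the sample above):

import string

def clean_transcript(text):
    # lowercase everything and strip punctuation, but keep apostrophes
    # so contractions like "it's" still match the spoken audio
    drop = "".join(c for c in string.punctuation if c != "'")
    return text.lower().translate(str.maketrans("", "", drop))

with open("weekly_address_2011-04-16.txt") as f:  # hypothetical filename
    print(clean_transcript(f.read()))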

Then I just fed the audio track and the transcript into the aligner class, and its output looks like this:

this(5.66,5.86)
week(5.86,8.36)
it’s(9.19,9.3)
a(9.3,9.35)
balanced(9.35,9.83)
plan(9.83,10.16)
that(10.16,10.38)
reduces(10.38,10.81)
spending(10.81,11.38)
and(11.38,11.58)
brings(11.58,11.86)
down(11.86,12.1)
the(12.1,12.22)

This seemed great. Then I went back into Final Cut and worked on compressing and converting the video file out to an .ogv format (I wanted to do this with the open web, not some proprietary API) and started working on a JavaScript player that would let me generate sequences of time codes representing statements and have the video jump around. I ended up creating two instances of the same video and swapping their depths in order to speed up the transitions between words. I thought this might work better than overburdening a single timeline with too many sequential tasks and no break in between. Although Sphinx4 is quite accurate, there were a couple of large chunks of audio data it was unable to parse, and I had to go in and manually adjust. I would say it's about 80% accurate, which is the best I found (or could afford). It's also worth noting that I get different results in Chrome vs. Firefox when playing the same time codes.
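The glue between that aligner output and the player is simple enough to sketch. Something like the Python below (not the exact script I used; the filename is hypothetical) parses each word(in,out) line into a JSON time-code table the JavaScript player can load:

import json
import re

# lines look like: balanced(9.35,9.83)
LINE = re.compile(r"(\S+)\((\d+\.?\d*),(\d+\.?\d*)\)")

def parse_alignment(path):
    words = []
    with open(path) as f:
        for line in f:
            m = LINE.match(line.strip())
            if m:
                words.append({"word": m.group(1),
                              "start": float(m.group(2)),
                              "end": float(m.group(3))})
    return words

# dump to JSON so the player can look up word-level in/out points
print(json.dumps(parse_alignment("alignment.txt"), indent=2))  # hypothetical filename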

Once I had a decent prototype of playback, I needed a method to generate the sequences. I adapted a Markov chain generator written in Python from my Reading and Writing Electronic Text class (written originally by Adam Parrish) to contain a list of all ~320 unique words from the four-minute speech. Running the program on the transcript spits out Markov chains as well as a list of word array indices that I could easily port (by manually copying and pasting, at this point) to the JavaScript player. For the time being, this is the state of the art of my web application skills. I generated a bunch of responses and hard-coded them into the JavaScript, and they are called at random when a question is clicked. This is not ideal, and not how I want to leave it, although this is in fact how a Magic 8 Ball works: totally random selections made from a pool of 20 responses.
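The generator itself is nothing exotic. A simplified sketch of the idea (not Adam's original code; the filename is hypothetical) builds an order-2 chain from the cleaned transcript and emits both a phrase and the word indices the player needs:

import random
from collections import defaultdict

def build_model(words, n=2):
    # map each n-gram (tuple of n words) to the words that follow it
    model = defaultdict(list)
    for i in range(len(words) - n):
        model[tuple(words[i:i + n])].append(words[i + n])
    return model

def generate(words, model, length=12, n=2):
    # start from a random n-gram in the transcript and walk the chain
    start = random.randrange(len(words) - n)
    out = list(words[start:start + n])
    while len(out) < length:
        followers = model.get(tuple(out[-n:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return out

def to_indices(sequence, vocab):
    # convert the generated words into indices into the unique-word
    # array, which is what the JavaScript player uses to find time codes
    return [vocab.index(w) for w in sequence]

transcript = open("weekly_address_clean.txt").read().split()  # hypothetical filename
vocab = sorted(set(transcript))
model = build_model(transcript)
phrase = generate(transcript, model)
print(" ".join(phrase))
print(to_indices(phrase, vocab))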

Outcome and Future Versions

I've learned quite a bit so far in the development of this project. I spent a lot of time developing the concept behind the piece and eventually preparing for a public performance of it on May 6th at ITP. I asked the audience to tweet their questions while I was on stage, then went through the questions as we listened to the responses. Although the Markov chains are generally pretty dry, I hand-picked a couple of phrases and put them into the array, which ended up producing some really great and unexpected results.

Now that I (may) have some time to continue working on this, I'd like to get two more source videos in there about very different topics, to increase the vocabulary and also the opportunity for greater de- and re-contextualization of Obama's statements. I would also like to develop a server-side component that would allow The Magic 8 Bama to generate responses related to the question (if only by regurgitating a word or two), and to do this on the fly, probably using Tornado. It would make the whole piece MUCH more interesting to play around with. You can try out the beta version here, and stay tuned for a more robust and dynamic application.
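For what it's worth, a rough sketch of that server-side piece in Tornado might look like the following (the /response endpoint and the generate_indices helper are hypothetical placeholders; the real version would wire in the Markov generator and seed it from the tweeted question):

import json
import tornado.ioloop
import tornado.web

def generate_indices(question):
    # placeholder: the real version would run the Markov generator,
    # ideally reusing a word or two from the question itself
    return []

class ResponseHandler(tornado.web.RequestHandler):
    def get(self):
        # the tweeted question arrives as a query parameter
        question = self.get_argument("q", "")
        self.set_header("Content-Type", "application/json")
        self.write(json.dumps({"indices": generate_indices(question)}))

application = tornado.web.Application([(r"/response", ResponseHandler)])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.current().start()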