Hello.

I'm a digital product and graphics designer. I love device-responsive web standards, functional user interfaces and branding — especially if there's a new product or service involved.

That's pretty specific, though. Deep down I really love designing all sorts of things. I geek out on physically interactive spaces and objects, data art, computational aesthetics, as well as bio-design.

I studied visual communication and art history at The George Washington University and I'm a graduate of New York University's innovative design and technology master's program, ITP.

I live, work and ride bikes in sunny Brooklyn, NY.


Academic Experience

2010.09 — 2012.05

Master of Professional Studies
Interactive Telecommunications Program (ITP), Tisch School of the Arts, New York University

2000.09 — 2004.05

BA Visual Communications with a minor in Art History
The George Washington University
Graduated Cum Laude
National Society of Collegiate Scholars
Spring 2003 semester at Sydney University, AU

Professional Experience

2012.08 — present

UX Designer, Microsoft, New York, NY

I'm only just getting started.

2012.01 — 2012.05

Interaction Designer, SumAll, New York, NY

Worked with a small team of designers and developers to release the front-end of an analytics web application. By integrating an impressive array of data sources into a smart and charming experience, the application allows e-commerce business owners to save time and make better decisions.

2011.06 — 2011.09

UX Designer, Microsoft Bing, Bellevue, WA

Worked with design, editorial, dev and program management teams to scope, design and develop prototypes for a soon-to-be-released Bing.com feature during a summer internship. The internship culminated in two presentations of the feature prototypes to senior leadership at Microsoft as well as the Bing design team.

2007.02 — 2010.08

Graphic & Interaction Designer, Empax, Inc., New York, NY

Created a range of environmental, print and interactive materials to promote nonprofit clients and their causes. Responsible for designing and presenting brand strategies, identities, print collateral, environmental signage, animation, user experience and interface, content management system setup, third-party plug-in and data integration, search engine optimization, and user analytics and testing.

2006.12 — 2011.08

Freelance Graphic & Interaction Design Consultant, New York, NY

Worked as a sole proprietor with clients in the retail, music, film, nonprofit, real estate and technology industries to create new and improve existing brand and user experiences across many platforms and media, primarily print and web.

2004.04 — 2006.01

Graphic Designer, The George Washington University Communication & Creative Services, Washington, DC

Worked with project management and external production vendors to deliver a range of print and interactive material related to university publications and communications initiatives. Responsibilities included design and implementation of print collateral, posters, animation, environmental signage, web publication and press checks.

Other Experience

2011.11 — 2012.02

Vibrant Technology Researcher, Intel Research, NYC
Grant recipient working with NYU faculty, Intel researchers and student collaborators to design and develop a prototype for a location-based interactive organism that explores what happens when technologies are re-envisioned as peers instead of tools.

2006.01 — 2006.12

English Teacher, NOVA Japan, Kure-shi, Hiroshima-ken, Japan
Taught and mentored students of all ages and abilities in small to medium-sized classes to improve their proficiency in English grammar and conversation.

Selected Press & Publications

2012.05

Project: #BKME
Creative Applications (Web)
“BKME.ORG – A Web Platform for Reclaiming Bike Lanes”
by Greg J. Smith

2012.03

Project: #BKME
Laughing Squid (Web)
“BKME, Web Platform For Recording Bicycle Lane Violations”
by Edw Lynch

2011.07

Project: Budget Climb
Freakonomics (Web)
“What Would it Be Like to Climb 26 Years of Federal Spending?”

2011.04

Project: Budget Climb
Flowingdata (Web)
“Physically climb over budget data with Kinect”
by Nathan Yau

2011.02

Project: Gedenk Logo
Logo Lounge 6 (Book)
by Catharine Fishel and Bill Gardner, Rockport Publishers

2010.12

Project: Pousse Cafe
Gizmodo (Web)
“A Bartender That Pours The Perfect Shot, Every Shot” by Matt Buchanan

2009.11

Project: The 2007 Gotham Awards Logo
Basic Logos (Book)
by Index Book

2008.10

The Alliance for Climate Protection Website
Print Magazine
“Dialogue: Martin Kace”
by Steven Heller

Selected Exhibitions

2011.12

ITP Winter Show 2011, NYC

2011.05

ITP Spring Show 2011, NYC

2011.04

Data Viz Challenge Party, hosted by Eyebeam and Google, NYC

2010.12

ITP Winter Show 2010, NYC

Scope Proposal: Words in Your Mouth

April 12, 2011


Words in Your Mouth

The gist of the challenge is this: I want to develop a procedure and program that takes a fairly standardized interview video clip and its corresponding transcript as input and creates a Max-Headroom-esque output of the interviewee saying anything we choose, within the obvious limits of the vocabulary available in the interview. And put it on the web.

Current Barriers & Constraints

  1. Video composition. For this effect to work properly, I’ve been searching for interview clips where a single speaker is positioned in a generally static way and shot from a single camera angle throughout the entire clip. The reason is that the abrupt changes in intonation when words are jumbled out of their original sentence contexts are jarring enough for comprehension without the added distraction of a person jumping all over the screen and being shown from different angles.
  2. Typical subtitle granularity. If you’re lucky enough to find a video that fits the above criteria and has been accurately subtitled, the level of specificity is never finer than the sentence. This presents the largest and most obvious challenge: how do you generate a transcription of a video annotated with time codes at the word level? Given that data, it would be easy enough to map it onto the video clip (a rough sketch of that mapping follows this list).
  3. Video playback capabilities given a single clip (especially on the web). Even with word-level transcription annotations, you would then face the challenge of how to play back each portion of the clip at the given time codes. Currently the best option available for controlling a single video timeline on the web is popcorn.js, and its currentTime function may not perform at a high enough level to make this task possible. The other option, which would add an extra step between this and the previous one, would be to take the word-level transcription data and run some process to chop the video clip into a number of individual clips, each containing a single word utterance. Then, on the web, you might preload all of the clips for a given phrase before playing them back, and simply swap their depths when transitioning between each.
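
To make the mapping in item 2 concrete, here's a minimal sketch of what I have in mind, assuming a word-level transcript already exists as (word, start, end) tuples. The sample data, the helper name and the "first occurrence" rule are all placeholders, not a working pipeline.

```python
# Hypothetical sketch: given a word-level transcript, map an arbitrary phrase
# onto segments of the original clip. The transcript data and the naive
# "first occurrence" rule are assumptions, not output from any real aligner.
transcript = [
    ("the", 0.00, 0.18), ("budget", 0.18, 0.61), ("is", 0.61, 0.74),
    ("not", 0.74, 0.95), ("a", 0.95, 1.01), ("problem", 1.01, 1.52),
]

# Index every occurrence of each word so any one of them can be picked later.
index = {}
for word, start, end in transcript:
    index.setdefault(word.lower(), []).append((start, end))

def phrase_to_segments(phrase):
    """Return (start, end) segments that, played in order, speak the phrase.
    Raises KeyError if a word never appears in the interview's vocabulary."""
    segments = []
    for word in phrase.lower().split():
        occurrences = index[word]        # KeyError == outside the vocabulary
        segments.append(occurrences[0])  # naive: always reuse the first occurrence
    return segments

print(phrase_to_segments("the problem is not the budget"))
```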

The obvious alternative to this goal is to work with what is available, and that is subtitles at the sentence level. I might then be able to run n-gram analysis at the sentence level and generate interesting conversation mashups. Even this goal has its challenges, namely how to take the start time for a subtitle and accurately estimate the duration of the entire utterance based on the character length of the subtitle. The good news is that this approach would definitely be more forgiving of slight inaccuracies than the word-level approach.
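
To sketch what that character-length estimate could look like (the speaking rate below is purely a guess, and the function name is mine):

```python
# Rough sketch of the sentence-level fallback: estimate how long a subtitle's
# utterance lasts from its character count. ~15 characters per second is an
# assumed speaking rate, not a measured value.
CHARS_PER_SECOND = 15.0

def estimate_utterance(start, text):
    """Return an estimated (start, end) for a subtitle given only its start time."""
    duration = len(text.replace("\n", " ")) / CHARS_PER_SECOND
    return start, start + duration

# e.g. a cue starting at 12.4 seconds
print(estimate_utterance(12.4, "I want to take a fairly standardized interview clip."))
# -> (12.4, roughly 15.9)
```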

Before I get to the alternative, I’m trying to milk the original task as far as it will go before lowering the bar slightly. Right now, I’m looking into the possibility of using Sphinx4’s aligner audio file transcription process to generate time codes for each word in an audio file. I can just take the audio track from the video I want to transcribe; it will be the same length, so it should work. I can imagine that this won’t be any better than getting a Google Voice transcription of a voicemail. We know how that goes.
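
Pulling the audio track out of the video should be simple enough with ffmpeg, something along these lines. I'm assuming a 16 kHz, 16-bit, mono WAV is what the aligner wants; I haven't verified the exact format Sphinx4 expects, and the file names are placeholders.

```python
# Sketch of extracting the interview's audio track so it can be fed to an aligner.
# Assumes ffmpeg is installed and that 16 kHz / 16-bit / mono PCM is acceptable.
import subprocess

def extract_audio(video_path, wav_path):
    subprocess.check_call([
        "ffmpeg", "-y",
        "-i", video_path,          # source video
        "-vn",                     # drop the video stream
        "-acodec", "pcm_s16le",    # 16-bit PCM
        "-ar", "16000",            # 16 kHz sample rate
        "-ac", "1",                # mono
        wav_path,
    ])

extract_audio("interview.mp4", "interview.wav")
```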

Then, taking this data, I would ideally run some kind of video chopper process to output a bunch of independent clips. I have no idea what’s available for this as of now, so please let me know if you have any ideas. The final step, if all this works, is to make some kind of web app that calls a given sequence of clips and plays them in order; hopefully the effect is at least funny. I’m thinking of approaching this with a Tornado Python script, but I haven’t prototyped it yet, so I’m still open to other options for this portion as well.
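
As a placeholder for how those last two pieces might fit together, here's a rough sketch: ffmpeg does the chopping and a bare-bones Tornado handler returns the clip sequence for a requested phrase. Everything here (paths, the port, the shape of the index, reusing the hypothetical word-level transcript from the earlier sketch) is an assumption, not a working prototype.

```python
# Sketch of a per-word clip chopper plus a minimal Tornado endpoint.
# Assumes ffmpeg is installed; all paths and the JSON shape are placeholders.
import json
import subprocess

import tornado.ioloop
import tornado.web

def chop_word_clips(video_path, transcript, out_dir="clips"):
    """Write one small clip per (word, start, end) tuple and return their paths."""
    paths = []
    for i, (word, start, end) in enumerate(transcript):
        out = f"{out_dir}/{i:04d}_{word}.mp4"
        subprocess.check_call([
            "ffmpeg", "-y",
            "-ss", str(start),          # seek to the word's start
            "-i", video_path,
            "-t", str(end - start),     # keep only the word's duration
            out,
        ])
        paths.append(out)
    return paths

class PhraseHandler(tornado.web.RequestHandler):
    """GET /phrase?q=the+problem+is+not+the+budget -> ordered clip URLs."""
    def initialize(self, index):
        self.index = index  # word -> list of clip paths, built from chop_word_clips

    def get(self):
        words = self.get_argument("q", "").lower().split()
        try:
            clips = [self.index[w][0] for w in words]  # naive: first occurrence only
        except KeyError as missing:
            raise tornado.web.HTTPError(404, f"word not in interview: {missing}")
        self.write(json.dumps({"clips": clips}))

def make_app(index):
    return tornado.web.Application([(r"/phrase", PhraseHandler, {"index": index})])

if __name__ == "__main__":
    # The index would be built by pairing chop_word_clips output with transcript words.
    make_app(index={}).listen(8888)
    tornado.ioloop.IOLoop.current().start()
```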