Generative models and AI for social good are both so cool.
The first is cool because, aside from Monet, no one else can paint like that, not really, except generative models. The second is cool because social impact is really cool.
I've also become obsessed with these couplings: manifold topology and persistent homology, medicine and climate change, human-in-the-loop AI systems and user experience, gamification and engagement, organizational behavior and team structures, crowdsourcing and the future of work.
Publications on Google Scholar
Teaching & Preaching
I created and taught the GANs courses at Stanford and on Coursera. These are the first courses on GANs (Generative Adversarial Networks), the state-of-the-art in realistic image generation.
Our course launch is recorded on YouTube, so go take a little peek.
I now have the great fortune of teaching 15K+ students, who I hope go on to do great things. When I first recorded the course, I was teaching 1:1 with my sister. She was my student 0. I hope that's made the experience more personal and engaging for every student since.
Black lives matter
I wish more people were willing to say this publicly, especially those I admire in my field. For those who look up to me, I hope you know I stand for this and am willing to say so and work toward it publicly. During the protests in summer 2020, I put together a small skunkworks team of great students, and we used computer vision for anti-facial recognition to help protect protester anonymity. We've since connected with several groups and hopefully made a small difference.
On weekends, I used to run Sidepact, a weekend program for employed engineers to start companies, cofounded with my amazing college friend, Kevin Sun. Our current program director is the wonderful Sebastian Gallese, who launched his company last batch.
O reader, my reader. Yes, that I am. Without getting into the poems I've written for Yelp reviews (excuse the praeteritio), here are some of my highlights. (1) A Sonnet Riddle. And (2) I'll just paste this other one here for you. It's a Shakespearean sonnet on training neural networks:
Cross entropy was at a heavy loss,
As poor costly predictions tumbled out,
Bidden keenly by a research Pangloss,
Who builds, tests, tunes without enough doubt.
Back alas to the core architecture,
A broad search was to be conducted.
Better models were bullish conjecture,
As loss was tallied and reconstructed.
One model had promising vigor,
To prophesy the right output vector.
This net was much deeper, faster, bigger,
And deemed best by the model selector.
So its gradient cared not to relent,
As it hurried to take a swift descent.
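For the prosaically inclined, the sonnet can also be acted out in a few lines of NumPy: a toy logistic model, its cross-entropy loss, and a swift gradient descent. This is a minimal sketch for fun; all names and numbers here are made up for illustration.

```python
import numpy as np

# Cross entropy starts at a heavy loss, then gradient descent hurries down.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # inputs
true_w = np.array([2.0, -1.0, 0.5])     # hidden "right output vector"
y = (X @ true_w > 0).astype(float)      # binary labels

w = np.zeros(3)                         # model weights, humbly initialized

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(200):                 # the swift descent
    p = sigmoid(X @ w)                  # poor costly predictions, at first
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)       # gradient of the cross-entropy loss
    w -= 0.5 * grad                     # its gradient cares not to relent
    losses.append(loss)

assert losses[-1] < losses[0]           # the loss relented after all
```

No deeper, faster, bigger net required; even this humble model's loss tumbles downward couplet by couplet.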
My undergrad thesis turned the logic of Latin grammar into visual puzzle pieces, so novices can learn it more intuitively and engagingly. Think: Scratch for natural language. Play at TeachMeLatin.com. Watch the video on the research behind it.
It began as a personal research project and became my undergraduate thesis, published in ACM CHI, one of the top-ranked computer science conferences in the world. I collaborated with four Harvard professors: two in Computer Science and two in Classics.
I also wrote a PyPI package for an obscure conda feature.