What the Oprah AI Special Got Right

In 1995, Bill Gates went on the David Letterman show to explain the concept of the internet and lay out his vision of why it would shape the future. It's a fascinating and funny clip to rewatch almost thirty years on, but what is truly amazing is just how much Gates got right, even when the vast majority of the tech was so nascent it seemed to exist in the realm of science fiction.

I thought about this clip when I watched Gates on the Oprah Winfrey special about AI that aired last week. I have no doubt we will watch the show in ten years and it will feel like a quaint time capsule, capturing a time before a shift happened, when no one was quite sure what exactly was going on. Standing in stark contrast to the Jon Stewart story on AI that I ripped apart a few months ago, the Oprah piece was fair, balanced, and nuanced -- there was no fearmongering, but there were smart and legitimate questions about the need for regulation and oversight.

In the first interview, Sam Altman stated that AI is "the future of the internet," and he's right. This technology is already powering and will continue to power the way we live our digital lives, and done correctly, will make them better. We can offload administrative work that's not the best use of our time (I have to file expenses later today and am frankly dreading it); we can get smarter and better results; and we can co-create content. AI won't be a bogeyman; it will just be the thing that helps you write a meeting summary more quickly so you can get on with your day.

AI misinformation and scams were also raised as topics, but there are two points to consider. In an age when a vice-presidential candidate can make up racist stories out of whole cloth and share them on social media, it seems a bit rich to freak out about AI. After all, we can just spread the lies ourselves. And on the subject of AI spam calls, Congress and phone companies could fix that easily -- but the telemarketing lobby has poured millions into making sure that doesn't happen. Eventually there will be enough of an incentive to solve this problem, but inaction isn't the fault of AI; it's the fault of people who choose not to act.

My biggest takeaway from all this is that those of us who believe in this technology need to start telling better stories. I spoke at the Scandinavian Creative AI Summit last week, and was totally blown away by Nick Law's presentation on how the pendulum in tech has swung so far towards the technical and almost entirely away from the creative. This was the secret sauce of Apple back in the day, and OpenAI and other platforms should pay attention. AI-powered tools for law enforcement could be seen as scary -- or they could make our cities safer and raise enough revenue to keep libraries and community centers open. It's all down to how we talk about it.

Ten years from now, when my AI has finished all my boring work and I've spent time doing what I like, it'll be fun to knock off for the day, pop on my leisure headset, and give this a re-watch.

AI, Spatial Computing, Intrinsic Motivation, and the Future of Work

In his keynote at the KPMG Tech and Innovation Summit, Jason Calacanis shared a slide about millennials, Gen Z, and their love of gambling. These are the kids who sit at a basketball game but have their eyes glued to their phones the entire time, making bet after bet; they're the kids in the basement trading memestocks for lulz. When secure jobs are a joke and the housing market is out of reach, why not just embrace nihilism and bet the farm?

It's a dark take, but not necessarily an incorrect one. Despite proclamations about the strength of the economy, I know tons of people who are out of work right now -- and they all have top-tier educations and glowing resumes. And if someone with an Ivy degree and a shelf full of honors is out in the cold right now, how can a regular person get ahead? Add to that growing concerns about mass displacement due to technology, and it's no wonder that the vibes are off.

The event featured a lot of excitement and optimism about emerging technology, and that's a stance I tend to agree with -- but my excitement is starting to become tempered. It has nothing to do with the tech, and everything to do with the fact that lots of humans are going to go through radical changes in the next decades -- and many simply won't accept them.

I was at a family wedding over the weekend, chatting with a guest, and asked what she'd been up to lately. She'd recently lost her job, and rather than looking for a new one, or retraining, or upskilling, or going to the gym, or finding a new hobby, she's just...hanging out and watching Netflix. Her husband works and can support them, but it was a wake-up call for me, an almost psychotically driven person -- most people, at the end of the day, just aren't that motivated to learn and change.

Unfortunately, the future is all about learning and changing and growing. The key skill we need to start teaching kids and retraining adults on has nothing to do with math or coding or emotional intelligence -- it's making sure people are curious and intrinsically motivated. That's a trait often found more in developing markets, and as the economy becomes ever more global, people in the US and Europe are going to fall behind.

We need to train a generation of new workers who look at a headset and think about all the ways it can solve problems and improve their work. We need to train a new generation to look at AI and ask themselves "how can I be better than this machine?" We need to train a new generation to realize that education doesn't stop when you finish school, and that continuous learning is the path forward. And we need to do it now.

Much of the toxicity in our current political climate has roots in the fact that people weren't upskilled and retrained early enough. When car factories started closing in the seventies, unions fought for people to be paid to do nothing, not to retrain or relocate. When timber industries shut down in the Pacific Northwest, it was convenient to blame spotted owls, rather than the timber companies that wanted a cheaper labor force in Brazil.

AI and spatial computing can solve so many of these challenges, from helping people learn more quickly to providing paths to new work. It's up to those of us working in the space to tie all this together and help the next generation of workers before it's too late.

Reality Bites at Meta -- But It's Not Too Late to Change It

As I read through this new Yahoo Finance piece about the challenges at Reality Labs, I couldn't help but nod along. I had flashbacks to my time working with Meta, meeting people working on VR projects who had never put on a headset, or who didn't understand the basics of content (I'm not talking narrative choices here; I'm talking "don't swing the 360 camera around").

I heard from one person whose project had been funded by Meta that in order to get into the App Lab, never mind the Store, he had to call a C-level executive who just happened to also be a distant family friend. Another piece was funded to the tune of seven figures and continues to collect awards but is still stuck in the App Lab. An employee at a major state university got on a call to inquire about buying headsets, only to have her institution insulted and to be lectured on how the salesperson went to an Ivy. And so on.

Meta has alienated developers by announcing big funds and partnerships, handing out money for a few months, and then pivoting. The App Store remains a walled garden where the only way to get in is to make friends with an employee (who might be gone in a few months, given the churn). The headsets are quite good and the market share is huge, but they're succeeding despite themselves, not because of anything they're doing.

Listen, I want Meta (and Apple, and Pico, and HTC, etc.) to succeed. VR benefits from having an open and robust market with lots of competition. But Meta has also consistently alienated longtime creators and experts in the space and created an environment of instability that hinders growth.

The first thing Meta needs to do is start bringing in experts. People who know VR, who have track records, who understand what an APK is. They need to mend fences with the creator community and open up the App Store just like Apple and Android -- content obviously needs to meet some basic guidelines, but beyond that, let people put stuff out there.

The one thing I hear consistently is that people buy headsets, enjoy using them for a bit, and then run out of content and get bored. All our devices are just chunks of plastic and chips without good things to look at and play with, and given how many headsets are just gathering dust, there is a huge opportunity to revive them.

The second thing Meta needs to do is empower everyday people to create content. Facebook, Instagram, and WhatsApp are only as good as the content people share -- but users don't really share content in the headset. Horizon Worlds got part of the way there and then petered out, but teaching people how to make and share 360 content would be massive. My current work is all about empowering everyday creators -- and while I focus on enterprise and education, the same principles apply for everyone.

A rising tide lifts all boats in the space, and improvements at Meta will mean a better market for everyone. But they need to act fast in order to turn the ship around.

Why Educators Need to Embrace AI

A few days ago, a professor posted their AI policy on social media, and the policy was just "no AI." No nuance, no consideration that using AI is already a valuable skill and will only be more valuable in the future, just...no AI. OK, then.

This is, of course, an absurd take lacking in all nuance, and will age about as well as those old "no using Wikipedia as a primary source" rules I had to deal with years ago. There's a vast gulf between feeding an essay topic into ChatGPT and pasting the output into a doc (that's unethical, but also very easy to spot on any close read) and using AI for research, generating topic ideas, or spelling and grammar checks. Is it ethical to use an AI-powered virtual human to practice before a presentation or to brush up on foreign language skills? What about using Midjourney to generate illustrations for a report, instead of pulling stock photos from the web?

The rule is not only silly and impossible to enforce meaningfully, it also does students a massive disservice, as using AI is going to be a critical skill for the next generation of workers. Using new technology to augment work, alongside critical thinking and analysis, is exactly the skillset the future workforce needs -- and students who don't have access to it will be far behind their peers who do.

In the next few years, we'll finally reach the end of "teaching to the test," a concept meant to promote equity but which in fact has sucked the joy and creativity out of learning. This is going to require a massive shakeup in the way we teach kids: they'll need to practice social/emotional skills (machines are not yet smart enough for those) and higher-level thinking and analysis, and simply memorizing rote formulas won't be enough. It will require more work, sure, but will lead to far better outcomes.

Every generation, a new technology comes along and educators freak out, then cautiously accept it, then embrace it. Rather than going through this whole ridiculous cycle again, we need to start talking about reasonable use cases for AI at every grade level, and then introducing concepts to students. Their future economic prospects depend on it.

WWDC Recap: You Can't Win 'Em All

For me personally, it's probably a good thing that the most recent WWDC was sort of...mid, as the kids say. Last year, I declared WWDC the best day of my life on the heels of the Vision Pro announcement, and then had to do a mini-apology tour with my loved ones. So this year's spate of useful and interesting but not earth-shattering VisionOS developments was good for both my professional and personal relationships.

The ability to go back and spatialize photos is fantastic, and I'm excited to have a play with that. I still think Apple is underselling the potential killer app of hands-free spatial video and photos; that's probably what I do most with my headset outside of Zoom calls and watching movies. I'm a little bummed that my prediction about a new entertainment partner didn't come to pass, although they did announce some cool new content.

But we can skip over almost everything else, except the ability to pause and resume Apple Watch fitness streaks -- as someone with a 2,071-day streak, not having to do jumping jacks in an airport lounge to close my rings is pretty great. But that's just me, and I'm just crazy.

OK, so now it's AI time, or as Apple referred to it, "Apple Intelligence." And unfortunately, it was more of the same -- solutions without real problems. AI cartoons are charming and fun, but does anyone really need them? Cutting a few steps out of searching in emails or photos is nice, but not a game-changer. ChatGPT integrations are great, but the ChatGPT app also exists.

I read somewhere that Apple is being more cautious in general on the back of the failure of the Apple Car, and I get it. The continual emphasis on privacy was a way to set them apart, but it also limits the utility of what they're building. At the end of the day, it's a lot of flash, just like Sora and Midjourney and many of the others. Creating weird images to fool boomers on Facebook is all well and good, but I think most people want an AI that calls customer service for them and solves a problem without wasting their time.

There are good AI products out there right now for training and education and conversation, and those are exciting. Meeting transcripts and summaries are sometimes imperfect but generally fantastic. Even the image generation platforms are fun and useful for creating clever decks. But in general, we're not nearly as advanced as we think we are, and Apple didn't really move the needle in that regard.

The Fast Future Blur: VR/AR and the Future of Work

I try to stay away from overt self-promotion in my newsletter, but just this once, I'll break my own rules. I'm super proud to announce the launch of the Fast Future Blur, which is in stores and online now. The book features all the members of the faculty of the Fast Future Executive program collaborating on chapters to show that all of the technologies we talk about are interconnected. I co-wrote my chapter on VR/AR and the Future of Work with the great Rajan Kalia.

I don't want to give too much away (buy the book!) but we spent a lot of time last fall digging really deep into how leaders can use this technology to solve real world problems and created a framework for applying it across an organization. There are clear and actionable steps for creating and deploying content and measuring outcomes. If this is interesting, I also lead workshops for companies and educational institutions that go even deeper into defining problems and figuring out what works in VR/AR, and then how to actually make the stuff.

While I can't quite claim this book is a beach read, it's definitely a good and educational read -- maybe something to turn your brain back on after a few weeks lounging in the sun. Is "flight home read" a thing? Maybe it should be.

And if you're interested in learning even more, we're doing a launch summit on June 11th at UC Irvine. You can register here. And the book is available on Amazon, Barnes and Noble, and Waterstones for you Brits.

The Future of Work is Pivoting

When I was giving a guest lecture up at Brown last week, one student asked me about my career journey, and how I got to where I am now. It was a long answer, partly because I've been in the workforce for twenty years and partly because I've pivoted, a lot. I haven't always wanted to pivot, and it hasn't always been fun, but it has paid off in the long term. And whether we like it or not, it's going to be the reality of how we work from now on.

When I finished college, I thought for sure I'd work in politics. After all, I'd been interning at various non-profits since I was 15; I'd served on non-profit boards as an undergrad; and I'd even done the Wellesley in Washington program, interning in the Senate. But the universe had other plans, and those plans involved discovering that making a living wage as a young person who wanted to change the world wasn't really a thing -- if you didn't have a trust fund, you weren't going to get very far. I spent a year mindlessly answering the phone at a law office and freelancing on the side until a job at an alt-weekly opened up, and I pivoted to journalism. I kept at that, and by the time I was thirty, I was an editor at a national publication. Except, oops, magazine journalism started going up in flames. So I pivoted to music tech, and then to XR. Within XR, I pivoted from entertainment to training and education. And I am quite sure that in some number of years, I'm going to have to pivot to something else.

Am I jealous that certain workers have politicians falling all over themselves to protect their jobs while I was told to just "figure it out"? Sure, but I also realize that the days of anyone fighting to protect those jobs are numbered. The forces of technology and innovation aren't going to stop anytime soon. Sooner or later, we all have to pivot.

The real issue becomes how we set people up for success with those pivots. Most schools in the US sadly do a poor job of teaching critical thinking, analysis, and communication -- they're still stuck in the days when they trained people to churn out widgets. This is not to slam teachers, who are overworked, underpaid, and forced to teach to largely meaningless tests on top of managing out-of-control classrooms. Rather, it is an indication that we need a radical rethink of what skills students should learn to make them flexible, adaptable adults.

From there, we need to start retraining those workers who are about to be displaced. If you're reading this, you're likely safer than many, but people who do rote work that can be automated are going to be in trouble very soon. The answer is not to go backwards or be protectionist -- it is to move forward, re-skill, and hopefully wind up in a better position. Put it this way -- if taxi companies had immediately launched an Uber competitor, Uber would probably have gone away. But they dragged their feet and complained while customers -- quelle surprise -- opted for better service. By the time they launched their own apps, they were far behind the curve.

There's a great scene in the Taylor Swift documentary Miss Americana, where she learns her latest album hasn't been nominated for any Grammys. Rather than ranting and blaming the Recording Academy, Swift simply says "I'll make a better album next time." Workers need to be ready to improve and be one step ahead of the curve, lest they fall completely behind.