I work at Andreessen Horowitz ('a16z'), a venture capital firm in Silicon Valley that invests in technology companies. I try to work out what's going on and what will happen next.
I write a blog here, do an annual presentation called 'mobile is eating the world', send out a popular weekly newsletter, talk too fast on podcasts and think aloud on Twitter.
Electrification and autonomy are rolling through the car industry, changing everything about it. But though they transform gasoline and car accidents, they could change a lot more besides - everything from cigarettes to parking. It's the second-order consequences that are hardest to see, but most interesting.
Smartphones are still evolving, but we're on the upper slopes of the S-Curve. This means innovation is slowing, but also that iOS and Android are now unassailable. It's time to focus on what's next - voice, machine learning and, especially, augmented reality.
The trap that some voice UIs fall into is pretending the user is talking to HAL 9000 when actually you've just built a better IVR, and have no idea how to get from the IVR to HAL. How can we find the mental models for this to work - to bring less rather than more friction?
Where Facebook is surfing user behaviour, Snapchat is trying to create entirely new experiences all the time, around camera, touch and mobile - around what makes mobile different to the PC.
Ten years after the iPhone, what assumptions can we leave behind? What do we build if we assume a billion people have a high-end smartphone, and forget about PCs?
With fundamental technology change, we don't so much get our predictions wrong as make predictions about the wrong things.
As we pass 2.5bn smartphones on earth and head towards 5bn, and mobile moves from creation to deployment, the questions change. What's the state of the smartphone, machine learning and 'GAFA', and what can we build as we stand on the shoulders of giants?
Machine learning means every image ever taken can be searched or analyzed and insight extracted, at massive scale. Every glossy magazine archive is now structured data, and so is every video feed. How does this change retail?
With Amazon's Echo, Snapchat Spectacles or the Apple Watch, we're unbundling not just components but apps, and especially pieces of apps. We take an input or an output from an app on a phone and move it to a new context. We remove friction, but we also remove choices.
Everything bad and complicated that the internet did to media is going to happen to retail. And, just as Facebook shapes so much of what we read online, what channels will shape and reshape what we buy?
As machine learning starts working, how does that change Google, Apple and the smartphone interface? What companies get reshaped around machine learning?
Content is moving from the open web to proprietary platforms - Facebook, Google, Snapchat and others - that give both new ways to get users and new formats to curate content. Far more video, far richer ways to show content, video as the new HTML (or the new Flash), and new metrics and dynamics.