Feb 04, 2024

“AI Inside”

Tech companies are scrambling to stake their claims in the AI gold rush, and — on top of the legal and ethical woes they’re leaving in their wake — a lot of them are shipping surprisingly bad product. The tell? If the name focuses on “✨AI”, brace yourself.

Remember Intel’s iconic “Intel Inside” campaign? Effectiveness aside, it was a marketing campaign disconnected from solving user problems. Having “Intel inside” didn’t make it easier to use a computer or unlock new use cases. Instead, the campaign rested on the premise that “Intel Inside” was something consumers found inherently desirable. It’s a similar story with “✨ (product-name-here) AI” and “✨ AI co-pilot” products. They think consumers want “AI” because it helps them do… stuff, and so they ship software that lets them do… that stuff. Raw technology brimming with whatever use case you can think of! (1)(2)(3)(4)

The hope is — sticking with the gold rush metaphor — that if they send enough miners (users) down every mine shaft, they’ll eventually, inevitably, discover gold in them there hills. To some degree, that’s tech. There’s always a discovery phase with new technology, purpose-driven or otherwise. What makes this era of exploration unique?

  1. This stuff is weapons-grade plutonium, and instead of running long careful betas and designing around potential harms, companies are yeeting AI features into production without a second thought.
  2. I really mean “without a second thought”. Companies that ought to know better are shipping product that is gimmicky, half-functional, and/or outright destructive to the core product and then loudly, aggressively promoting it in-app and everywhere else.

Examples in the wild are myriad, but Notion and Adobe will do. I use products from both companies daily, and conveniently for me they offer examples of the bad way and some better ways to integrate large language models into commercial applications.

Bad

Notion’s value proposition for groups is “external hive brain”. Instead of knowledge being partitioned away in the heads of various mortal humans, Notion puts all that information into a single organized space where everyone has access to all that knowledge.

It follows, then, that you’d want your external hive brain to be filled with high quality content. And here’s where we run into trouble. As of February 2024, Notion wants AI to be a major — perhaps default — way content gets created and edited.

  1. When you create a new document, there are two options; “blank page”, and “start writing with AI…”.
  2. If you pick “blank page”, you’re met with a “press ‘space’ for AI…” prompt. This same message appears whenever you place your cursor on a blank line.
  3. When you select text, the first button in the floating toolbar is “✨ Ask AI”.
  4. At one point last week, I had two different in-app pop-ups for two different AI features, one of which couldn’t be dismissed without viewing the underlying promo.

Notion really wants you to use AI.

But why would any company want an external hive brain filled with content they can’t trust? That gives you no audit trail to determine reliability? That has extra, value-free filler intentionally stuffed in? None of this is exaggeration; Notion prefaces generative AI with an “AI responses can be inaccurate or misleading” disclaimer, includes a “make it longer” tool, offers no sourcing for the text it generates, and has no UI for indicating whether or not a document includes generated content. These trust- and quality-eroding features, combined with heavy emphasis in the interface, directly undermine Notion’s core value proposition. It’s bad product.

The lesson here; general purpose AI isn’t one-size-fits-all. Notion has strong use cases in tone change, summarization, and translation. The problem stems from generative tools.

Adobe Illustrator is a more straightforward example of clumsy execution and marketing exuberance. When you select multiple elements, two out of three primary actions in the contextual toolbar are for AI features; one for “✨Generate”, and one for “Recolor”. Recolor is a useful feature (note again the correlation between a name that reflects a defined use case and quality / utility), and Generate… is not. To be fair, this isn’t a case of “add (beta) to the end to cover our asses”, it really is beta software. The prompting is shallow, the results aren’t predictable, and the output is akin to what you’d get from a raster-to-vector conversion (read; not great).

Normally that’d be fine, but turning a contextual action bar — something that should include the most common actions — into an advertising surface for tools that aren’t fully cooked and aren’t ready for professional use isn’t. It undermines Illustrator’s utility, and Adobe’s credibility as a reliable vendor of professional tools. It’s bad product.

Better

Notion Q&A is their second attempt at integrating AI, and it shows. Once again, the name tells the story. Where before we had general purpose “do AI… stuff”, now we have a specific use case; ask questions and get answers from across all of your content. The critical difference here is that any generation is backed by links to the content it was generated from. No crossing fingers that the summary is accurate. Just dig in.

And unlike their general purpose AI, Q&A improves Notion’s value proposition for larger accounts by turning “the more you add, the harder it is to find things” into “the more you add, the richer your dataset is”. It’s good product.

A subset of Photoshop’s Generative Fill, Generative Expand strikes a similar note. In contrast to general purpose pure-prompt image generators, Generative Expand solves a specific, painful problem; you don’t have enough image for what you’re trying to do. It’s baked directly into the crop tool; expand the crop larger than your current image, note what the extended scene should include, press enter. Simple. Style is determined by the source photo, which makes for more consistent output, and a tool that’s useful in a professional workflow. It’s good product.

Best

What makes for an “ideal” AI product? Where do things go from here? Notion Q&A and Generative Expand hint at the answer:

  1. They focus on augmenting, improving, and/or manipulating original human input, not replacing it. First wave products like ChatGPT and Midjourney are androids; the best tools will make cyborgs.
  2. They move past pure prompt interfaces and solve concrete problems.

In other words, AI tools will probably follow a similar arc as operating systems; from command line to GUI-centric. You can already see this happening with products like Perplexity, which solve a lot of the problems inherent in ChatGPT’s approach by wrapping a lot more UI around the inputs and outputs.

Consider the case of image generation, where first-generation tools suffer from a fatal flaw; they pack style and content into the same interface (a single prompt). That leads to a large amount of unpredictability, which makes them unsuitable for most professional use cases (assuming you can get past the “is this legally safe to use?” issue in the first place).

What makes Generative Expand more useful is that it separates out content and presentation. Imagine defining a set of “brand styles” that include a high degree of control and specificity, that you can then apply at will to a multi-modal input that describes the content (text, reference image, doodle, all of the above). Massively more predictable, massively more powerful.

This demo of generating architectural renderings in real time from a GUI gives a hint at how much unexplored opportunity there is. Intentional interfaces, solving real problems, with “✨AI” fading back to an implementation detail.
