GenAI perspectives (contents+ post): a year with my robot friends
Made with Midjourney v6


Looking back on GenAI "year 1"

LinkedIn has an annoying feature (or UX choice) that makes exact post dates nearly impossible to pin down. ChatGPT was opened to the public on November 30, 2022. What followed was a rapid journey: months of experimentation and a handful of odd LinkedIn posts. Finally, on March 1, 2023, I took the leap and published the first edition of The Sceptic's Guide to ChatGPT.

Fast forward to over a year later, and the LinkedIn GenAI discourse I mapped in my introduction to the guide remains largely stagnant. It continues to intrigue and frustrate me, with the same three groups dominating the conversation:

  1. Don’t believe the hype: “Look at the mistakes it makes, LOL; it’s not even as good as Google; there’s no serious use-case here. It’s a toy.”
  2. This changes everything (superficially): “ChatGPT can do everything. Not only has research changed forever and we no longer need search engines, but look at this brilliant [insert dull and superficial result] it produced for [a crucial, nuanced and deep business/marketing/creative task].”
  3. Moar content! Zero effort! “Here’s a listicle about how to use ChatGPT to create the most boring spammy articles and posts the world has ever seen.”

Wanting to offer a fourth way was what provoked me, at the time, to join the fray at the risk of being seen as "another LinkedIn AI dude" (shudder), and it ultimately led to writing the guide. The fact that the discourse has remained largely unchanged after a year with these new tools made me think it might be good to collect some of my previous perspectives on one page (as I previously did for Creative Strategy). Naturally, I'm focusing on those that have stood the test of time (so far).


Introductions and general assessment

This updated version of a long-form piece was published a little after GPT-4 went public (for pro subscribers) and Google launched the "improved" version of Bard, now called Gemini. I haven't published another update because I couldn't identify any significant leaps in "bot thinking" since. Claude (which some people swear by) just isn't consistent enough, IMHO. GPT-4 is still the market leader on most tasks, and definitely on conceptual thinking and prompt/context "understanding". I'm not alone in that view.


GenAI in the workplace and creative industry specifically


Practical implications

  • Bots for desk research? At your own risk. Explaining why tasking bots with research is so tricky. Keep in mind that these limitations are baked into the fundamental nature of GenAI ("it's not a bug, it's a feature"); they are unlikely to change in the foreseeable future.
  • The importance of using multiple regenerations to determine whether an AI answer is in the right area (a scripted version of this check is sketched just after this list).
  • Visual analogies to drive the above points home. One with generated images in the style of Norman Rockwell and another with oddball film posters. The idea here is that mistakes, inferior quality, and other weirdness jump out at us when looking at an image but not when reading a text, not even if we're searching for them (and most people don't). But as these are sister technologies built on similar statistical principles, we should all keep in mind that such flaws are just as common in generated texts.
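A quick aside for the more technically inclined: the "multiple regenerations" check above can also be scripted rather than done by hand in the chat window. The sketch below is only an illustration under a few assumptions: it uses the OpenAI Python SDK, an API key set in the environment, and a placeholder model name and question that are not taken from the original posts. It asks for several completions of the same prompt and lets you eyeball how much they agree.

    # Minimal sketch, assuming the OpenAI Python SDK ("pip install openai")
    # and an OPENAI_API_KEY in the environment.
    # The model name and question are placeholders.
    from openai import OpenAI

    client = OpenAI()

    question = "In one sentence, when was the first transatlantic telegraph cable laid?"

    # Request five independent completions of the same prompt.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": question}],
        n=5,
        temperature=1.0,
    )

    for i, choice in enumerate(response.choices, 1):
        print(f"--- regeneration {i} ---")
        print(choice.message.content)

    # If the regenerations broadly agree, the answer is probably in the right area;
    # if they scatter, treat all of them with suspicion.

Hitting "regenerate" a few times in the chat UI gives you the same signal; the script only makes the check repeatable.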


A brief "bottom line" — where things stand

Honestly, this is still one of the most important things to keep in mind while working with GenAI.

It is better at "boring things" and always will be.

The more consistent and uniform the content of a topic (/sphere/discipline), the more conclusive and accurate the prediction will be, because it's all based on statistical probability patterns; that's how every GenAI answer is generated. That's why:

  • Parsing information and playing with formats in different and easy ways? Goes without saying.
  • Common legal and medical advice? Quite good.
  • Formal letter writing? Immaculate.
  • Job applications? Remarkable. Feed it your CV, the JD, and the company background, and it will write your cover letter or application. You'll just need to sprinkle in your personal human touch so the application doesn't read like it came from Siri or Alexa.
  • Adapting one proposal to another? The cheeky bot will suggest adjustments to your budgets and timelines and tweak deliverables to reflect even minor differences between two similar projects.
  • Translating specific brand cases into frameworks and between frameworks? In the blink of an eye, but in the most boring way possible (which rather exposes the truth about how important brand frameworks are relative to actual brand strategy 😬).
  • Self-help? If you really want your mind blown, feed it personal emails or text chats (even if there are multiple authors involved), perhaps some private journal entries (don't worry about overall length, as long as each piece is individually manageable); then ask it for insight and advice. I guarantee it will consistently impress (and potentially nudge you on your journey). It creates a self-reflection engine that is honestly better than some therapists I've met. How is that possible? Because, my fellow unique snowflakes: people are people.

By comparison, results for marketing, brand strategy, creative strategy, and advertising are generally bland and extremely inconsistent in quality (unless the task is super generic). They get worse the more strategic, conceptual, nuanced, and unique the task or subject becomes.

The obvious reason, as you'll know if you've spent even one day on LinkedIn, is that marketing professionals and thought leaders often disagree on the fundamental terms and principles of the discipline. But probably the bigger factor is the colossal amount of mediocre-to-nonsensical gobbledegook out there, especially around subjects such as brand strategy or creative strategy. This cocktail of nuanced inconsistency and fluff is nearly as fatal to the poor robots as Nightshade. They cannot find the statistical pattern consistency they require to generate accurate results, and the effect is amplified the more specific, detailed, and nuanced the required answer is. But does it stop people? Of course not! Because nuance dies way ahead of truth.

As Douglas Adams once wrote of an AI-generated cup of tea: "It invariably delivered a cupful of liquid that was almost, but not quite, entirely unlike tea."

But people will still try to convince you that it's not just drinkable but as good as any proper cup of tea they've ever had.


While we're here: what GenAI use that you've personally experienced has most surprised you or blown your mind?




I hope to update this page from time to time. If you want me to keep you posted on writings and other things, free from the whims of the algorithm, then here's my sporadic mailing list.


James Souttar

Life on a burning platform


Great summary. One unexpected and fascinating take-away (I paraphrase): “because most of what is written about marketing and branding is crap, LLMs can only generate crap from it” — contemporary GIGO. Since I'd never asked it about these subjects, this really hadn't occurred to me (as a general principle about AI, and how it can inadvertently act as a bullshit detector). [Lovely image from MJ — I never fail to be delighted by the way AI is upbeat about human-machine collaboration, or disappointed by the way supposedly intelligent human beings are so resolutely downbeat about it.]

