AI: Confidential - What People Really Think About AI Right Now

At Remix Summit London, we invited delegates to anonymously share their honest thoughts about AI, then explored those ideas live in conversation.

What emerged wasn’t hype or backlash, but something more nuanced. People are already using AI in their day-to-day work, often productively, while still feeling deeply uneasy about its impact on creativity, trust, learning and the wider world.

Three themes dominated both the submissions and the discussion: creativity, trust, and what it means to think for ourselves.

This is what people really said and what the panel helped unpack.

1. AI as a Companion, Not a Creator

Most people described AI as useful, even transformative, when it acts as a support system rather than a source of original ideas.

Delegates use AI as:

  • A thinking partner and sounding board
  • A PA/editor for admin, repetitive tasks, and logistics
  • A research aide or interpretive guide
  • An always-available, non-judgemental assistant

“AI - my creative buddy.”
“It’s like a cheaper spellcheck on steroids.”
“It’s the best therapist - totally accessible, always there.”

But there was clear resistance to AI as an originator. Many people spoke about enjoying the struggle of creativity - the exploration, uncertainty, and eventual arrival at ideas - and felt that AI often misses intent, taste, or specificity.

“I enjoy arriving at ideas - not just polishing what AI gives me.”

Underlying tension:

AI feels welcome when it supports human agency, and threatening when it replaces authorship.

Panel takeaway:

Use AI to supercharge human thinking and creativity, making ideas sharper and more meaningful without replacing the struggle that sparks innovation. The panel emphasised that in a noisy, uncertain world, clarity matters more than speed. AI works best when it helps people sharpen their thinking, not generalise it. One suggestion was to use AI to “interview yourself”, helping overcome the blank page while keeping humans firmly in control of direction and intent.

2. Creativity, Authorship & IP at Risk

Alongside practical benefits, there was deep concern about what generative AI means for creative ownership, labour, and trust.

Key anxieties included:

  • IP erosion and unclear ownership
  • Creative labour being absorbed and monetised without consent
  • Disproportionate impact on freelancers and working-class creatives
  • Unlabelled AI-generated content undermining trust

“AI will do to IP rights what the internet has done to design rights.”
“I lose trust when people deliver work and haven’t even read it.”

Several responses framed this as a structural shift - from creation to extraction. Creativity continues, but the systems that reward and protect creators feel increasingly fragile.

Underlying tension:

Creative work persists, but the conditions that sustain it feel uncertain.

Panel takeaway:

The discussion turned to transparency and responsibility. Labelling AI-generated content, keeping a clear “human in the loop”, and giving creators meaningful choice were seen as practical starting points. Trust, the panel argued, isn’t built through policy alone, but through visible intent and honest practice.

3. Cognitive, Ethical & Environmental Unease

Beyond day-to-day use, many people expressed discomfort about AI’s broader impact.

Concerns included:

  • Erosion of critical thinking and learning
  • Overconfidence, hallucinations, and persuasive misinformation
  • Environmental cost (energy, water, infrastructure)
  • Inequality, job displacement, and wealth concentration
  • A sense of inevitability: using AI despite moral resistance

“It scares me sh**less. Especially the environmental impact.”
“I fear AI but I increasingly use it.”
“I actively avoid using it…because I want to use my own brain.”

People weren’t rejecting AI outright. Many felt caught between convenience and concern, usefulness and unease.

Underlying tension:

AI feels unavoidable, but many people remain unconvinced the trade-offs are acceptable.

Panel takeaway:

Rather than framing AI as something to trust or distrust wholesale, the panel talked about “calibrated confidence”. Too little confidence and people get stuck. Too much and judgement is lost. Used well, AI can support both emotional and rational decision-making, helping people understand options more clearly rather than replacing judgement altogether.

What This Tells Us

What stood out most wasn’t fear or excitement but ambivalence.

People value the time AI saves and the friction it removes. They appreciate how it can help them think, organise and explore ideas. At the same time, they worry about creativity being hollowed out, trust being eroded, learning being weakened, and social and environmental costs being ignored.

Right now, AI is experienced less as a breakthrough and more as a negotiation.

Not whether to use it, but how.

If you’d like to talk further about these themes, or explore how AI can be used to build confidence, we’d love to chat.