Sunday, 24 August 2025

What’s the deal with Amazon’s “hallucinating” AI-generated summaries? Time to go full John Connor?

 


In this blog, I won’t be expressing discontent with Amazon as a whole; they have done a pretty stellar job of regularly discounting my novel without affecting me personally. No complaints there. But I am increasingly concerned with their continued over-reliance on AI for generating inaccurate summaries of my product, especially when the margin for making atrocious errors is clear… errors that are both harmful to their business and misleading to the customer. This is not exclusive to Amazon either; other product sites use their own versions of AI, one of which draws an inference amounting to my horror novel being suitable for children because… it features children. And if you don’t believe me, read it right here.

Good grief. If the AI just took a brief look at the front cover, it would realise this is horror, not The Very Hungry Caterpillar.

Amazon’s latest AI-generated summary of my novel (what amounts to the bread and butter for an author) is, in my case, not only catastrophically inaccurate, but bordering on a lie. It is a composite of UK reviews that it has resoundingly misinterpreted or taken out of context… while creating a brand-new expression to support its own value judgement! I’m not exaggerating either (unlike the AI). We will get onto “Fabulously Spoo” as a negatively interpreted AI comment shortly…

Above all, trying to contact a human who can help through KDP Select, Amazon Author Central, customer service and so on, to draw attention to or rectify the issue, is nigh on impossible. Round and round in circles… click, click, click… and then back to the beginning. Sure, you can post on the Amazon Author community forum, but everyone who raises queries similar to my own seems to receive the proverbial GET-OVER-IT guy as their first reply: someone who purposefully ignores a well-conceived query and goes straight in waving a can of gasoline. Not even slightly helpful. It’s all really… rather weird. And speaking of weird, what exactly do I mean by “hallucinating” in this blog title? (And yes, you did read that correctly; I purposefully intended to incorporate it.)

So, after going round and round in circles, for the first time ever, I logged onto ChatGPT earlier this morning and asked it some questions. I’ll stop writing for a bit and just post the summaries. (Apologies for the typos in the questions asked; I was getting increasingly frustrated.) These aren’t my responses, by the way; these are ChatGPT’s…

 

Most images you see of our own planet Earth are composite images, created from numerous blended digital photos to produce an enhanced panorama. It’s important to have plenty of them to make a realistic picture; otherwise you’re filling in the gaps with your imagination. And here’s the point: you must have comprehensive evidence at hand, and the ability to interpret it correctly (and with clarity), otherwise you’re going full Blackadder, pretending to paint no man’s land, in that episode of Blackadder Goes Forth.

Darling: Are you sure this is what you saw, Blackadder?

Blackadder: Absolutely. I mean there may have been a few more armament factories, and not quite as many elephants, but...

 

I have no problem with AI if it is frequently monitored, quality-tested, and deemed appropriate and fit for the platform. I am no Luddite, but there is something of an irony in the example I’m using here: no man’s land. I haven’t asked AI to pass judgement on my Amazon page, and yet it does, in an increasingly confounding way. I need it to enhance my product, not misconstrue the content and deliver it to the masses as truth. For now, I am happy to consign AI to the no man’s land of my personal preferences and its influence on my life, at least until it’s fully formed and a true reflection of what it was intended to be. I wouldn’t purposefully head for a blood test and request that a trainee nurse fruitlessly search for a vein in my arm and then repeatedly attempt to insert a cannula… because that’s damn painful: I would request a professional who knows what they’re doing.

My novel’s AI update yesterday fixed upon (and then added) a single detrimental phrase to the overall headline… citing evidence from a glowing five-star customer review. Not only did it do this based upon a single word (“lull”) taken out of context, but it wholly ignored the rest of one of my favourite reviews, using that one word as a fulcrum to embellish the entire summary and produce the strangest overview.

But it doesn’t end there. In terms of my book’s readability, it gets weirder. The AI stated there were three positive comments and one negative. All four are, in fact, positive… but for some reason, “fabulously spooky” is a negative when it comes to readability, because the AI has only pulled two-thirds of the letters from the word “spooky”, leaving us with a brand-new expression: “Fabulously spoo”.
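If I had to guess at the mechanics (and it is only a guess), this is exactly what a crude, fixed-length snippet cut would do: chop the quote at a hard character limit with no regard for word boundaries. A minimal sketch in Python, with an entirely invented fifteen-character budget, purely to illustrate:

# Toy illustration only; this is not Amazon's code, and the 15-character
# budget is a number I have made up to reproduce the effect.
def naive_snippet(review_text: str, char_budget: int = 15) -> str:
    """Chop a quote to a hard character budget, ignoring word boundaries."""
    return review_text[:char_budget]

print(naive_snippet("Fabulously spooky"))  # prints: Fabulously spoo

Whether that is what actually happened under the hood, I have no idea; it simply shows how little it takes to turn a compliment into gibberish.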

I mean, what the Dicky Davies?

(Please, AI, don’t read and attempt to interpret that expression!)

Does it believe the customer (again, five-star) actually meant to write spew, poo, or poos? This novel… fabulously poos! Or is it loose spoon, louse poo… or sly spoonful (which, personally, I’d take as a compliment)? When all’s said and done, I can’t remotely fathom any of this. 😊 No sensible data pattern, protocol, or algorithm should produce anything this bizarre. This could well be an example of what ChatGPT told me about: an AI hallucination.

So, is it time to do something?

Is it time to go full John Connor, and what does that realistically mean? 😉

People sit back and don’t do anything. This isn’t a criticism; in fact, it’s human nature, in our increasingly busy existence, to seek the path of least resistance and wait for others to get involved. It’s far more comfortable to remain impassive, and wait… and let others shoulder the weight. Recently, on our street, a business decided it was going to extend its premises, which would cause significant upheaval, noise, pollution and increased footfall for everyone living in this quiet Victorian cul-de-sac. The advocates among us got wind of the proposals, combined their efforts, and stopped the planned extension in its tracks via a cogent and sustained appeal to the council. I’m very proud of those well-thought-through letters. Others on our street, closer to the intended site, who would have experienced the same negative effects… did nothing. I can guarantee their exasperation had the thing gone ahead. They had a chance to make their voices heard with a single paragraph… but didn’t.

So, here’s what I’ll do (as part of this whole John Connor trope). I will push with all my might to get answers. I will write, advocate, push, push, push… until I get those answers. I will seek to see AI summaries changed, restricted, and refined to be more reflective of the products they describe. I am one voice, but I will try. I will try for you, good author, as much as I will try for myself. Feel free to share, comment, and guide people to this blog, and above all, become a second, third, fourth voice in breaking through the tiers of complexity to reach the people who matter, and let this ring in their ears until something is done about it.

It’s not only a ridiculous scenario; it’s fundamentally unfair.

JSC

 
