Dispatch from an advertising future #14

Amazon is apparently testing a system that lets you pay by scanning your hand.

He had to stop doing it. He knew he had to stop, but, well, it was natural, wasn’t it? I mean, he was enthusiastic, demonstrative, outgoing. He waved. He gesticulated. When he told a story or a joke he was animated. When he was at the match he jumped and cheered and held his arms aloft, saluting his team. When he was at a gig he danced and waved and high-fived the band. But he had to stop. He was getting bombarded.

It was the same when facial recognition came in. Before the glasses had given people the veil, faces and expressions led to mood mapping, which led to countless personalised messages on every device that same face looked at.

Now you only had to wave when you were out and about, or within sight of your smart home tech, and your hand geometry was mapped. And once you were mapped, wherever that hand appeared in real space your ad-dentity followed, leading to a trail of personalised messages and offers.

Show exasperation in an argument… your wrist buzzed with offers of calmfoods.

Show excitement at an event… your pocket buzzed with details of the next experience.

Wave to a friend… your veil flashed directions to a nearby coffee shop.

He had to stop… or maybe just move to a clenched fist.