
Shelly Palmer - A Future We Won't Recognize

Shelly Palmer has been named LinkedIn’s “Top Voice in Technology,” and writes a popular daily business blog.
We have crossed the Rubicon and declared war on reality.

Greetings from Atlanta. I'm here to lead an AI workshop for AMB Sports and Entertainment at Mercedes-Benz Stadium. This is my first time visiting the facility; it's really something!

In the news: There are two unrelated articles today that, when taken together, foreshadow a future we will not recognize. First is the deepfake video of Vice President Harris, which Elon Musk dubbed a parody, allowing him (according to his interpretation of his own rules) to widely distribute an anti-Harris video substantially created with AI. If you choose to watch the video, you will instantly realize that it takes a certain amount of sophistication to understand that it is a parody. It looks and sounds real – which is the point.

Please do not respond to me with your political opinions – I am completely uninterested in which side of the aisle you align with.

What is important about this story is that we have crossed the Rubicon and declared war on reality. We have entered a new world of deepfakes at scale. Using this technology, people with very limited technical skill can make thousands of messages like this. Scale is the lesson here. Critically, generative AI also empowers scale for social posts, email, and every other form of communication. Get ready to be inundated with hyper-targeted AI-generated bot-delivered messaging. There will be no escaping it. (There's no way this gets regulated or stopped. I won't print the reason here, but if you're interested, reach out and I'll explain it in a private email.)

The second (equally important) story is about a Meta initiative called AI Studio, which Meta calls "a place for people to create, share and discover AIs to chat with – no tech skills required." Meta says the motivation for this feature was the fact that popular creators cannot personally respond to the vast majority of messages they receive. The solution (according to Connor Hayes, Meta's VP of Product for AI Studio) is to create an AI "extension of themselves."

OK. Let's imagine a world where you could interact with a human, an AI pretending to be that human, an AI pretending to be any human, or an AI that you know is an AI. All of these entities are (or soon will be) capable of carrying on a complete conversation with you. Now, without any help from me, follow this reality (which is where we are today) to its logical conclusion.

I'm not sure if George Lucas was prescient (or just lucky) when he wrote C-3PO's famous self-introduction: "I am C-3PO, human/cyborg relations." It always announced itself as a purpose-built artificial life form. Ethics? Protocol? Law? Truth in labeling? Requirement? We won't need to recognize the future; it's already here.

As always, your thoughts and comments are both welcome and encouraged. Just reply to this email. -s

P.S. Over the past few days, I've gone pretty hard at Google for creating and continuing to run one of the worst commercials I've ever seen. In most cases, when the article is reposted, my byline is eliminated (a rant for another day), but my author's note is included. It says:

"Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models."

I've received a bunch of emails asking about the irony of the last line of my disclaimer. After all, the article rails against Google's suggestion that we should use AI to express our innermost thoughts and feelings.

First, I believe in full transparency. The disclaimer is accurate. In practice, I use AI for preliminary research, proofreading, images (although not in this case), as a thesaurus/dictionary, as a bit of a grammarian, and for any other task where I think it will be faster to use AI than other available tools.

Generative AI should be viewed as a skills amplifier, not a skills democratizer. Because I know my subject, I also know what specific tasks I'm trying to amplify or speed up. This is significantly different from asking AI to write something and calling it your own (which I would never even consider). I hope this clears up the issue.


ABOUT SHELLY PALMER

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business, is a regular commentator on CNN, and writes a popular daily business blog.
