When do you and ChatGPT cross the Milli Vanilli Line?
I recently engaged in an energized LinkedIn discussion with Frank Prendergast and Jason Ranalli. We were trying to discern the “Milli Vanilli Line” when it comes to personal disclosure and AI. Never heard of it? It’s probably going to impact you soon, so let’s dive into it …
How much authenticity can we lose?
The debate began with Frank’s comment on my recent blog post (Where humans thrive in the hierarchy of AI content):
“If I read a blog post from someone on the assumption it’s written by them, and I find out it was actually AI, I’ll feel cheated,” Frank said, “like I’ve been a victim of the old bait-and-switch.
“But where’s my line? Is 20% AI OK? 40%? 60%? I have no idea. And how would it even be measured?
“Will that question be a thing of the past when AI is ubiquitous?”
How much authenticity are we willing to lose?
ChatGPT makes everyone a competent writer, just like the calculator made everyone competent at math in the 1980s. We don’t feel compelled to declare to the world that we use a calculator to do our taxes or run a business. When does AI simply become … life?