"Will AI tools unfairly exploit the work of other people without credit or compensation" this already happens. The issue about AI taking decisions: I think AI can be very useful in suggesting different options for things like medical interventions, but we still need people to test and check those recommendations before unleashing them on the world. Another problem with AI, tangentially related to the problem of people forgetting essential skills, is that when AI comes up with an answer even the people who have built the app or bot or whatever don't know how they did it
Your last point is a particularly good one and something that's been brewing for awhile as computers run increasingly complex algorithms managing huge numbers of variables across vast swathes of data. At least today these are human designed rules and algorithms but it seems like that could change a bit in the future.
You wrote: > "What happens if we lose the ability to question the results of AI calculations when they exceed our ability to comprehend?"
You have asked a hugely important question. No one knows the answer to this, which is one argument for slowing down some use cases until we can trace and explain results, and understand more about how they were derived.
Like everything else - from a simple knife to a supercomputing brain - it depends on who wields it: what are the intentions, for good or ill, and can they control it.
Methinks we are growing more technological far faster than the spiritual side, the conscience side, can keep up with.
There must be something in the air. Or maybe people's attentions have begun to congeal around AI/LLMs This is a good post, Mark. Insightful and clearly drawn from your interests. I just hit the publish button on my teacherly take of LLMs in the classroom. Forgetting skills is a biggie, and I think we can look to another technology to think about it: the technology of writing ... and what writing did to memory. As an analogue to that ancient "problem," what will we give up for AI/LLM-loaded apps?
I just read a story today from a hiring manager. He said that he could tell right away that ChatGPT was used for cover letters and resumes. He said he immediately rejected those applications. I realize that AI might be good for some stuff but in the case of a job, the person should use their own voice if they actually care about the job. AI is not going to get you through the job and what it entails.
"Will AI tools unfairly exploit the work of other people without credit or compensation" this already happens. The issue about AI taking decisions: I think AI can be very useful in suggesting different options for things like medical interventions, but we still need people to test and check those recommendations before unleashing them on the world. Another problem with AI, tangentially related to the problem of people forgetting essential skills, is that when AI comes up with an answer even the people who have built the app or bot or whatever don't know how they did it
Your last point is a particularly good one and something that's been brewing for awhile as computers run increasingly complex algorithms managing huge numbers of variables across vast swathes of data. At least today these are human designed rules and algorithms but it seems like that could change a bit in the future.
You wrote: > "What happens if we lose the ability to question the results of AI calculations when they exceed our ability to comprehend?"
You have asked a hugely important question. No one knows the answer to this, which is one argument for slowing down some use cases until we can trace and explain results, and understand more about how they were derived.
I suspect there are lots of court cases coming down the tracks!
Like everything else - from a simple knife to a supercomputing brain - it depends on who wields it: what are the intentions, for good or ill, and can they control it?
Methinks we are growing more technological far faster than the spiritual side, the conscience side, can keep up with.
There must be something in the air. Or maybe people's attention has begun to congeal around AI/LLMs. This is a good post, Mark. Insightful and clearly drawn from your interests. I just hit the publish button on my teacherly take on LLMs in the classroom. Forgetting skills is a biggie, and I think we can look to another technology to think about it: the technology of writing ... and what writing did to memory. As an analogue to that ancient "problem," what will we give up for AI/LLM-loaded apps?
Nothing is free, is it? I'll check out your post!
I just read a story today from a hiring manager who said he could tell right away when ChatGPT had been used for cover letters and resumes, and that he immediately rejected those applications. I realize that AI might be good for some things, but when applying for a job, people should use their own voice if they actually care about it. AI is not going to get you through the job and what it entails.
I’m not saying generative AI is a grift, but all the former metaverse and crypto bros have jumped on that train...