Release Notes #11

I’ve read a fair number of articles promoting large language models, and the claims are either unsubstantiated or promise a “better” future that can’t be guaranteed. The technology and the models have been around for so long; why is there still such an indefinite wait?

In that time, you could just hire a subject matter expert, do the research yourself using the information already out there, or scale the content down rather than up.

And yes, I’m a broken record; I’ve spoken on this many times. And yes, there are SOME positive use cases for these LLMs, but they don’t outweigh the nonsensical and/or harmful ones, and they certainly can’t make all these lofty dreams come true.

Like, mental health care is abysmal. Is AI the answer, or is it a significant change in our system (and by change, I really mean dismantle and restart, because reform won’t work IMO)? Waiting lists won’t be cut down because AI assisted a professional in making a questionable diagnosis.

Medical science is biased. Where is the AI getting its data from? Who’s cleaning that up? Who’s testing it to ensure safety for Black patients (as one demographic example)? Who is asking these questions, and who is listening and taking them on board? What safeguards are in place?

Don’t even get me started on AI use in the justice system!!!

Honestly, this boils down to two things:

  1. I’m tired of “hey, I know you said there are fundamental issues here, but I think we should ignore them and feed biased data into our technology to fix it.”

  2. I’m tired of this era of “whatever it takes to win, that’s what I’m saying.”
