AI still generally sucks, but it's better at writing rote code than it used to be... and that somehow sucks even worse...
Local Water Commissioner : "Only a psychopath would speed up climate change for his own profit!"
Pi Pi (pronounced pee-pee) : "Yea, well.. ya see.. ManBearPig, eventually, he gonna kill everybody..
he just gonna kill you first"
The Simpsons understandably gets a lot of love for "predicting the future" but, for my money, I'll take South Park. Most notably - ManBearPig. Initially a throw-away metaphor for climate change in 2006, South Park expanded his assholery to make him a full-blown disruptor of the environment in 2022's "South Park: The Streaming Wars". This more-defined version of ManBearPig would absolutely thrive watching modern-day asshole-on-asshole environmental crime like crypto vs AI, so I doubt we've seen the last of him.
Oh, for the official record - fug a bunch of crypto! Its sole "value" is based on FOMO coupled with false scarcity. Don't @ me bro - I'll write that blog article another day. But (so-called) AI? That argument gets a bit more nuanced. In the case of data-based, scientifically specific machine learning that is hyper-focused and open-sourced? More please! The majority of LLMs, on the other hand? Big ol' hells to the nahs from this side. I won't even touch on the environmental impact of training "AI" (usually LLM) models in this blog post; I prefer, instead, to keep my pleb-level conclusions tolerably frustrating to the reader.
Back to Blog Villain #1 : Pi Pi. He, like most CEOs, gets U.S. capitalism: demand drives profit... the potential for profit-driven wealth inspires a swath of unscrupulous entities to give zero shits about how they make money... those entities weaponize demand with reckless disregard for all non-selfish outcomes as long as they can "get in and get out" before their disingenuous model collapses... rinse, wash, repeat... This brings us back to ManBearPig 2.0 - this time with sweet AI flavoring.
First AI came for our artists, which was generally met online with a resounding... meh. Then it came for our writers and.. well... you can guess the outcome. Why the complacency? It's easy to blame the ability to suddenly generate sick shit like a hyper-realistic poster of a heavy metal band named Bark Sabbath led by a dog along with a shit backstory. My guess? Well, that's part of it.. But it likely also has something to do with decades of untethered
"internet theft culture" (1:10) coupled with long-term, wide-spread
shitting on creative endeavors as anything more than a "hobby." Fact or fiction, my guesses are ultimately irrelevant to the overall outcome : our beloved search engines are now rife with creepy+shit "artwork" and "informative articles" with all the flavor of a plain, untoasted bread sandwich and all the accuracy of a Mad Lib.
..and if you've been tortured by my artwork or have previously read my blog, you realize these specific issues don't directly affect me... yet. But oh lawd I've heard AI is a-comin' and, as such, I felt it necessary to understand firsthand how good/bad AI was getting at invading my immediate wheelhouse : coding. Luckily(?) I spend a lot of my "money makin'" development time in Google Colab and Databricks, both of which have AI enabled by default. So the groundwork was easily laid.
As with most experienced coders, I (almost) always know exactly what I plan to type dozens of lines before I actually type it. For this reason, I decided to leave AI activated on these particular platforms and leverage it as a kind of "IntelliSense on Steroids". I made a decision to start each coding day with AI engaged and abandon it as soon as it made an inexcusable recommendation.
When I first began this experiment, I ended up immediately frustrated almost every damn morning. Initially the AI was, to put it gently, absolute ass at determining what I was trying to actually accomplish. But something peculiar happened over the last 3-4 months...
The internal AIs became surprisingly good at guessing some of the most boring parts of my "pre-planned" code several lines in advance. It wasn't really "solving" anything I didn't prod it with, mind you, and it definitely required me to perform a "diligent code review" due to sporadic bed shitting. Still, it admittedly became better at coming up with basic-to-mid-level code than a lot of co-workers I have pair-programmed with and, honestly, reduced my development time a bit.
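To give a feel for the kind of "boring" code it guessed well: think rote utility helpers where the body is obvious from the name alone. A minimal, hypothetical sketch (the function name and sample column strings are mine, not from any real project):

```python
import re

def snake_case(name: str) -> str:
    """Collapse whitespace/punctuation into underscores and lowercase.

    Exactly the sort of helper an AI autocomplete can usually finish
    after seeing nothing more than the name and docstring.
    """
    return re.sub(r"[^\w]+", "_", name.strip().lower()).strip("_")

# Messy, made-up spreadsheet headers
cols = [" Order ID ", "Unit Price ($)"]
print([snake_case(c) for c in cols])  # ['order_id', 'unit_price']
```

Nothing clever going on here - which is exactly the point: the machine is good at the code a human shouldn't have to keep retyping.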
Overall, my experience has convinced me that AI is nowhere near ready for solo "primetime" coding tasks. But honestly - most entry-to-mid-level coders are nowhere near ready for that level of trust either. So what does this mean for non-expert-level coding (or coding-adjacent tasks like RSS / website UI creation) as a whole? Given the sheer number of CEOs like Pi Pi in the world, the immediate future for entry-to-mid-level coders feels quite grim.
Now entering the arena - the humble business analyst. I've worked as an "analyst" at times and have also worked alongside many brilliant analysts as a "coder". A lot of analysts I know are pretty good "proof of concept" coders who simply aren't interested in coding as a primary occupation (and BOY do I feel that). As such, they instead focus most of their time and curiosity on becoming diligent problem solvers and experts in their immediate professional sandbox. Dollar for dollar, top-shelf analysts are some of the most affordable assets any company can have. Entry-level coders?
Ehhhh...
Coupled with improving AI and affordable, easier-to-manage "horizontal computing", it feels inevitable that a lot of companies will begin to lean more heavily on their product-expert analysts and allow them to "vibe code" previous POC ideas into quickly working, horizontally scalable models.
Am I happy about my conclusions? Absolutely not. But with Pi Pis filling the C-Suite seats at most companies, we're all on ManBearPig 2.0's future menu regardless of how good we are at.. well.. anything ..