This Is a Troubling Sort of Post
“[…] I have immersed myself in AI-driven development, accumulating over 633 million tokens with Claude in just a month, resulting in a total spend of $522, primarily on Opus 4.5 for complex coding tasks. What started as a vibe has evolved to a systematic product management and delivery system, leveraging agents, skills, and commands to deliver high quality and secure solutions. There’s an incredible opportunity to enable people to produce high quality work at speeds we’ve only dreamt of.”
This, to me, is a deeply troubling thought. He starts by boasting about how little thinking he has done and how much money it cost him to not think. Notice the trumpeting of the $522 and the number of tokens being handed over to and from Claude. Then he follows up by saying that it produces high quality work at speed. By what metric does it do that? Is this just better than what he can create, or is this good as far as he is concerned? Is this objectively amazing code? I can tell you it’s not. If it were, then why are there so many developers who say that it creates work for them more often than it saves them work?
“This hands-on experience has enhanced my focus on integrating security with AI innovation. I have been developing projects where agentic security reviews are triggered via GitHub Actions, utilizing Claude to automate vulnerability scans and compliance checks. Additionally, I maintain customized system prompts that direct Claude to proactively incorporate security best practices—such as input validation, encryption, and access controls—during the implementation phase.”
The first sentence just makes me chortle. Calling an LLM “AI” is like calling your shoe a hammer. Then he talks about “agentic security reviews” and vulnerability scans, which to me is hilarious. I have significant experience in the realm of static code analysis, and I can tell you for a fact that tools get the job part of the way there, people get it part of the way there, and then there is all the stuff that is missed. This is the reality of the problem–it is hard, it is unsolved, and LLMs have not moved it into the solved category.
Here is the real issue: writing secure code is hard. Every project is different, and what secure code looks like is going to differ with it. Furthermore, what is required for a given application is not a fixed point. For example: is it a problem that I run a Python SimpleHTTP server in my home lab, for a static page that is only accessed from my network? Probably not, but if I were to put that on the internet–big problems.
LLMs don’t understand–anything. They have no judgement. They have no experience. They are simply plagiarizing others’ work and giving you the endorphin rush of accomplishment without any effort or skill gains.
The idea that writing customized prompts and all this fiddling he is doing amounts to anything more than an exercise in delusion is just sad.
“Looking forward, I recognize significant opportunities in AI-enhanced threat detection, such as real-time anomaly detection in logs, and secure agentic workflows for DevSecOps teams. This approach is transforming how we create safer systems without compromising speed.”
This is not a new idea. People have been using “AI” for security and log analysis for years. In fact, this was the case long before the LLM fad drove the world up the wall.
His final analysis–that LLMs are improving both speed and safety–is just so out of touch with reality. For example, you can look here, where the vendor lays out some of the risks of AI-generated code. Remember, this guy is using the AI to write the code and to review the code, so he can feel good that it is secure and that he accomplished something–without any effort or thought.
If you actually do research into the topic, what you find is that at best LLM code is as secure as human code–occasionally more so. However, if the LLM rarely does better than a human, but the perception is that it consistently does far better, then in practice the LLM is worse than a human coder.
Perception drives actions, and if we delude ourselves into thinking LLM-generated code is better than what people write, then we are just inviting into our projects the same bad, insecure code a person would produce. Remember, these LLMs are just generating the most likely result, based on human and LLM code. This means you are getting results that represent the most common–best and worst alike–mixed with possible, or even likely, hallucination. Does that sound like a recipe for better-than-human code with better-than-average security and practices?
If you want to get into the weeds on this, look at this paper. This is a deeply nuanced topic and everyone has their opinion. However, this is one opinion, broadcast on LinkedIn, that should have been kept out of public view.
