

Sounds to me like you’re doing the fun part of the job - “solving challenging problems” - without having to do the vast majority of the work (which is seldom as much fun), such as making it suitable for actual end users, integrating it with existing systems and/or handling migration, maintaining it during its entire life-cycle, supporting it (which for devs generally means 3rd-level support) and so on.
So not exactly a typical environment from which to derive general conclusions about the best characteristics for a professional software engineer.
Mind you, I don’t disagree that if what you’re doing is basically skunkworks, you want enthusiastic people who aren’t frozen into a certain set of habits and technologies: the “try shit out and see if it works” kind of people, rather than the kind that asks themselves “how do I make this maintainable and safe to extend for the inevitable extra requirements in the future”.
Having been on both sides of the fence, in my experience the software that comes out of such skunkworks teams tends to be horribly designed, unsuitable for production and often in need of a total rewrite. Similarly, looking back at when I had that spirit myself, the software I made was shit for anything beyond the immediacy of “solving the problem at hand”.
(Personally, when I had to hire mid-level and above devs, one of my criteria was whether they had already been through the full life-cycle of a project of theirs - having to maintain and support your own work really is the only way to understand, and even burn into one’s brain, the point and importance of otherwise “unexplained” good practices in software development and design).
Mind you, I can get your problem with people who indeed are just jobsworths - I’ve had to deal with my share of people who should’ve chosen a different professional occupation - but you might often be mistaking the demands and concerns of people on the production side for “covering their asses bullshit” when they’re in fact just the product of those people working on short, mid and long term perspectives of the software life-cycle, and in a broader context, hence caring about things like extensibility, maintainability and systems integration, whilst your team’s concerns end pretty much at the point where you’re delivering stuff that “works, now, in laboratory conditions”. Certainly, I’ve seen this dynamic of misunderstandings between “exploratory” and “production” teams, especially on the skunkworks side, because they tend to be younger people who never did anything else, whilst the production team (if they’re any good) is much more likely to have at least a few people who, when they were junior, did the same kind of work as the skunkworks guys.
Then again, sometimes it really is “jobsworths who should never have gone into software development” covering their asses and minimizing their own hassle.
Well, that’s the thing: LLMs don’t reason - they’re basically probability engines for words - so they can’t even do the most basic logical checks (such as “you don’t advise an addict to take drugs”), much less the far more complex and subtle work of interpreting a patient’s desires and motivations so as to guide them through a minefield in their own mind and emotions.
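(Purely as an illustration of what I mean by “probability engine for words” - a deliberately toy Python sketch, with every word and probability made up and nothing remotely like a real model’s internals. The point is just that the loop only ever asks “which word is likely to come next?”, never “is this safe or logically sound advice?”.)

```python
import random

# Toy "next word" table: given the last word, the probabilities of the next one.
# A real LLM replaces this hand-written table with billions of learned parameters,
# but the generation loop is conceptually the same: sample the likely continuation.
NEXT_WORD_PROBS = {
    "feeling": {"down": 0.6, "great": 0.4},
    "down": {"lately": 0.5, "today": 0.5},
    "lately": {"try": 1.0},
    "try": {"exercise": 0.5, "something": 0.5},
}

def generate(start: str, steps: int = 5) -> str:
    words = [start]
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if not probs:  # no known continuation for this word
            break
        choices, weights = zip(*probs.items())
        # Pick the next word by probability; nowhere is there a check of
        # whether the resulting sentence is true, safe or sensible.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("feeling"))
```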
So the problem is twofold, and broader than just therapy/advice:
So in this specific case, LLMs might just put out extreme things with giant consequences that a reasoning being would not (the “bullet in the chamber” of Russian roulette), and they can’t really do the subtle multi-layered analysis (the stuff beyond “if A then B” and into “why A”, “what makes a person choose A, and can they avoid B by not choosing A”, “what’s the point of B” and so on), though granted, most people also seem to have trouble doing this last part naturally beyond maybe the first level of depth.
PS: I find it hard to explain multi-level logic. I suppose we could think of it as looking at the possible causes, of the causes, of the causes of a certain outcome, and then trying to figure out what can be changed at a higher level so that the last level - “the causes of a certain outcome” - can’t even happen. Individual situations of such multi-level logic can get so complex and unique that they’ll never appear in an LLM’s training dataset, because that specific combination is so rare, even though they might be perfectly logical and easy to work out for a reasoning entity, say: “I need to speak to my brother, because yesterday I went out in the rain and got drenched since I don’t have an umbrella, and I know my brother has a couple of spare ones, so maybe he can give me one of them.”
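(If it helps, here’s a toy sketch of that “causes of the causes” idea - walking a hand-written cause graph backwards from an outcome and flagging the points where intervening higher up would break the chain. Every node and intervention is invented to match the umbrella example above; it’s just to show the shape of the reasoning, not any actual algorithm.)

```python
# Map each outcome to its immediate causes (hand-written for this one example).
CAUSES = {
    "got drenched": ["went out in the rain", "had no umbrella"],
    "had no umbrella": ["never bought one", "haven't asked my brother for a spare"],
}

# Points in the chain where a higher-level change would stop the outcome.
INTERVENTIONS = {
    "haven't asked my brother for a spare": "talk to my brother and borrow one of his spare umbrellas",
}

def trace(outcome: str, depth: int = 0) -> None:
    """Recursively list the causes of the causes, flagging possible fixes."""
    for cause in CAUSES.get(outcome, []):
        fix = INTERVENTIONS.get(cause)
        note = f"  -> fix: {fix}" if fix else ""
        print("  " * depth + f"- {cause}{note}")
        trace(cause, depth + 1)  # recurse into the causes of this cause

trace("got drenched")
```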