• 0 Posts
  • 33 Comments
Joined 2 years ago
Cake day: July 13th, 2023


  • Not a classroom setting, but I recently needed to investigate a software engineer on my team who had allegedly been using ChatGPT to do their work. My company works with critical customer data, so we’re banned from using any generative AI tools.

    It’s really easy to tell. The accused engineer can’t explain their own code, they’ve been seen using ChatGPT at work, and they were careless enough to submit code with wildly different styling when we mandate a formatter to keep our code style consistent. It’s pretty cut and dried, IMO.

    I imagine that teachers will do the same thing. My wife is a teacher, and has asked me about AI tools in the past. Her school hasn’t had any issues, because it’s really obvious when ChatGPT has been used - much like it’s obvious when someone has ripped some shit off the internet and paraphrased parts of it to get around web searches.



  • Purely anecdotal, but I worked for a small software company that was truly awful. On leaving, I posted a review that was honest but factual. Weeks later, a competing review appeared that recast all of my points as positives - often hilariously so, shit like “we don’t unit test because we aim to be right first time”.

    My review was then removed from Glassdoor. Knowing the rumours, I had registered with an old email address that would be hard to link back to me. A little while later, the COO complained on LinkedIn about fake reviews on Glassdoor, pasting the email address I had used to register.

    I trust the negative reviews on that site, but not the positive ones. I also don’t trust Glassdoor as a company to do what it says it’ll do…






  • EnderMB@lemmy.world to Memes@lemmy.ml · Hasn't happened yet
    2 years ago

    That might have been true decades ago, but now people have:

    • Greater access to knowledge, and are forced to think more critically about what they consume.
    • More extreme views, which pick off the weak.
    • Most importantly, older people had stuff. They owned houses, had stable, life-long careers, and had settled down before they hit their thirties.

    People in their mid to late thirties nowadays might have a fancier job title, but many of them are still struggling like they were before. It’s hard to be protectionist when you have nothing but your life to protect…





  • While true, it’s ultimately down to those training and evaluating a model to ensure that these edge cases don’t appear. It’s not as hard when you work with compositional models that are good at one thing, but all the big tech companies are in a ridiculous rush to get their LLMs out. Naturally, that rush means they kinda forget that LLMs were often not the first choice for AI tooling because… well, they hallucinate a lot, and they do stuff you really don’t expect at times.

    I’m surprised that Google are having so many issues, though. The belief in tech has been that Google had been working on these problems for many years, yet they seem to be having more problems than everyone else.








  • Don’t get me wrong, I think it’s a fucking stupid approach, as do ~90% of ICs at these companies.

    Someone at Amazon put it nicely when they said there’s a rise in “belief-driven” leadership in tech right now. Instead of following the data and asking people what they want, we’re seeing tech leaders position themselves as visionaries and make market-changing decisions on gut feeling. It’s absolutely a series of short-term decisions, and all they care about is what they think, and how it’ll save their skin for the next 3-6 months.