this post was submitted on 12 Dec 2025

Asklemmy


A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not a question about using or getting support for Lemmy itself: for that, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?



I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they "put together" that are clearly ChatGPT-generated, with no shame. They tell us that if we aren't doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.

I feel like I am living in a nightmare.

cRazi_man@europe.pub · 5 days ago (last edited 5 days ago)

My organisation has fired a bunch of people and plans to replace them entirely with AI, and it's pushing the rest of us to use it. Soon it will be mandatory to use ambient, always-on AI for all information recording. There's even mention of AI camera surveillance of working areas to monitor for efficient use of man-hours (I don't know whether the tech is developed enough for this or how practical it would be).

The guy working above me is doing some sort of degree in implementing AI in business, and his answer to a lot of problems is "AI could probably do that for us." Meanwhile, we get training telling us that we will personally be held accountable for any errors in the AI output we use, and that we will be held responsible if we input any information that would be deemed confidential or sensitive. BTW, Copilot is already activated for all our work Outlook, calendar and OneDrive accounts and has all that data, so I'm not sure what would count as more sensitive information to give it.