My organisation has fired a bunch of people and plans to replace them completely with AI, and they're pushing us to use it. Soon it will be mandatory to use ambient, always-on AI for all information recording. There's even mention of AI camera surveillance of working areas to monitor for efficient use of man-hours (I don't know whether the tech is developed enough for this or how practical it is). The guy working above me is doing some sort of degree in implementing AI in business, and his answer to a lot of problems is "AI could probably do that for us".

Meanwhile, we get training telling us that we will personally be held accountable for any errors in the AI output we use, and that we will be held responsible if we input any information that would be deemed confidential or sensitive. BTW, Copilot is already activated for all our work Outlook, calendar, and OneDrive accounts and has all that data, so I'm not sure what would count as more sensitive information to give it.