I got laid off because I didn't use Copilot after discovering that it wasn't helpful for any of my job duties.
I use it as a copy-paste box - e.g. if I have a log somewhere in a shitty UI that doesn't wrap and makes me scroll right, I paste it into a Copilot window and press enter. Then I can read it (because it wraps), it sends a metric somewhere, and I can ignore the output from the chat and do my job lol
Frankly, I'd rather be fired than bother with a mandate to use a useless tool just to justify the money that was spent on it.
I'm glad they let me go when they did, I get to be on unemployment while the bubble is popping. =D
My company has its own internal system that leverages GPT models. So yeah, it’s definitely becoming prevalent. My boss specifically is always touting it as some amazing tool, all the other business wankers think it’s great, and my direct colleagues have taken to letting it summarize their meeting notes and clean up their emails before sending.
All in all, it’s a grim situation. From the execs who think it will change the workforce for the better, down to the average workers who are losing their ability to proofread.
I’m beyond the point of airing my grievances and just find myself idly standing by, hoping enough of them get owned by it somehow and become wary of it. I don’t think that’s gonna happen, but a guy can dream.
Obsessive.
My direct boss is obsessed with Perplexity, and our company just added Copilot. I'm in sales, and my boss is always telling me to use AI to generate my responses to customer requests. I can write a 100% professional reply faster than it takes to create the prompt, generate the response, proofread it, and then copy and paste it. Admittedly, our SharePoint site is very robust, and anything I'd need Copilot to search for could just be found with the search box in SharePoint. It's so redundant.
It doesn't exist at my workplace. But then, I work for a company that does real work. It doesn't bullshit.
I teach at a state university in the US, and AI use is encouraged for tasks that LLMs are actually good at: generating wrong answers for multiple-choice questions, formatting LaTeX documents, writing Excel formulas, etc. We've also used it during some brainstorming sessions to generate ideas and check for any obvious holes in our ideas.
One of my managers is like that, I've known him for about 5 years and he's been the biggest idiot I've ever met the entire time. But ever since AI came out he's turned it up to 11.
Fortunately my other manager can't stand him, and they have blazing arguments, so generally speaking, if he tells me to do something I don't like or don't want to do, I go and tattle.
The company I work for, which is 90% white-collar office workers and 10% blue-collar workers, keeps trying to get us to use Copilot. As one of the blue-collar workers, I find it has very limited use. Like someone else said, it can basically be used as an advanced search engine, but even the results for, say, the specs of an old piece of equipment and how to repair it are suspect.
When I am very bored at work I try to quiz it or think of ways to make it more useful to my job to save time, but all the suggestions it gives involve pointless data entry that is basically just busywork in my eyes, with little payoff. It doesn't even seem to be able to properly analyze emails. I thought it might be useful to summarize a few months of emails and point out the most important points, but it cannot seem to do this for the life of me, and it continually suggests approaches that either do not work or amount to literally copying the text of all the emails into a document or something like that.
I do not see AI or robotics being able to replace my job within my lifetime.
I’m very lucky that the leadership at my company is kinda mixed on it. I continue to feel that one of the big shortcomings with this shit is that the utility just does not translate into meatspace, they aren’t even pretending it does. That’s why Musk is making this renewed push on robotics. He’s the only one that understands that they’re probably gonna see the bubble pop sooner rather than later if they can’t find a way to threaten the labor force OUTSIDE of tech.
My thoughts go out to my comrades in the tech world rn because I will truly pull my hair out if I ever get this bullshit shoved down my throat like that.
If I were you I would try to make the most of it by using it antagonistically. These fucking things are ostensibly mining our brains to supplant us in society. Why shouldn’t we shovel them full of bullshit? Especially if you’re being forced to try and “use” this garbage. If all you can do is accelerate the declining utility of this stuff some small bit while on company time, then more power to you. Maybe that wouldn’t work because it logs everything, but you can always chalk it up to trying to “train” it.
My company added an AI chatbot to our web site, but beyond that we generally are anti-AI.
They sent us an email saying that if we use AI, we should only use the Copilot built into the Microsoft 365 stuff work provides.
But there's no mandate or instructions otherwise, thankfully. Then again, I don't work in tech.
Thankfully I'm custodial, so they're just relieved I know how to use the timeclock (not kidding) lol
Fortunately it's not that bad here, but the people who are using it do get praised while being a massive burden on everyone who has to review the code or, worse, the documents.
I did see one clever use of AI to basically replace most front-end devs:
- client asks LLM question about our data.
- LLM generates code to query our data and display charts
- client sees results
All of it runs with just mistral:8b. It's a very flexible solution compared to having front-end devs constantly iterate on a UI monstrosity meant to serve every single client's needs. Of course, this assumes the AI is writing the query correctly.
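For what it's worth, here's a minimal sketch of what that kind of pipeline could look like, assuming a local Ollama server, a made-up SQLite schema, and the model name from the comment; none of the names, paths, or prompts come from the actual setup described above.

```python
# Hypothetical sketch of the "LLM writes the query, client sees the result" flow.
# Assumes a local Ollama server and a read-only SQLite copy of the data; the
# model name, table schema, and prompt wording are all placeholders.
import json
import sqlite3

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "mistral:8b"  # whatever small local model is actually deployed
SCHEMA_HINT = "Table sales(region TEXT, month TEXT, revenue REAL)"  # made-up schema


def question_to_sql(question: str) -> str:
    """Ask the local model to turn a natural-language question into a single SELECT."""
    prompt = (
        f"You write SQLite queries. Schema: {SCHEMA_HINT}\n"
        f"Question: {question}\n"
        "Reply with one SELECT statement only, no explanation."
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip().rstrip(";")


def run_readonly(sql: str, db_path: str = "analytics.db"):
    """Refuse anything that isn't a plain SELECT, then run it against a read-only copy."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError(f"Refusing non-SELECT statement: {sql!r}")
    # Open the database in read-only mode so a bad query can't modify anything.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()


if __name__ == "__main__":
    sql = question_to_sql("Which region had the highest revenue last month?")
    print("Generated SQL:", sql)
    print("Rows:", json.dumps(run_readonly(sql)))
```

Forcing the generated SQL through a SELECT-only check and a read-only connection is one way to blunt the "arbitrary code against your database" worry raised below, though it doesn't help with queries that are merely wrong.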
So the LLM can run arbitrary code against your database? Or your clients can? Both sound scary as hell!
I can’t imagine the nightmare of trying to reproduce “incorrect data” when they just send you the prompt instead of the query.
That could be fixed by simply logging the prompt and the code that was executed. Maybe also give each prompt/response pair a reference ID and require it in tickets. The real nightmare would be actually reading the code the AI generated.
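A rough sketch of that logging idea, with placeholder names and an assumed JSON-lines audit file; nothing here is from a real system.

```python
# Give every prompt/response pair a reference ID, persist both, and require the
# ID in bug tickets. The log location and field names are illustrative only.
import json
import uuid
from datetime import datetime, timezone

LOG_PATH = "llm_audit.log"  # append-only audit log (placeholder path)


def log_llm_call(prompt: str, generated_code: str) -> str:
    """Record the prompt and the code the model produced; return a ticket-friendly ID."""
    ref_id = uuid.uuid4().hex[:12]
    record = {
        "ref_id": ref_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "generated_code": generated_code,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return ref_id


# Usage: ref = log_llm_call(user_prompt, generated_sql); show ref to the client
# alongside the results so "incorrect data" tickets can cite it.
```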
I'm exactly in the same boat.
I'm a developer and my boss has no idea about development. If I take too long with something, he consults ChatGPT and gives me the solutions it came up with.
It's exhausting...
Same! He'll paste me a response from ChatGPT showing how it would've solved the problem I am working on. It will have a list of three solutions, and mine will be better... yet I have to waste time proving that it's better than ChatGPT's.
My "company" is tiny and only employs myself, one colleague, and an assistant. We're accountants.
We self-host some models from Hugging Face.
We don't really use these as part of any established workflow. Thinking of some examples ...
This week my colleague used a model to prep a simple contract between herself and her daughter, whereby her daughter would perform whatever chores and she would pay for cello lessons.
My assistant used an AI thing to parse some scanned bank statements, so this one is work related. The alternative is bashing out the dates, descriptions, and amounts manually. Using traditional OCR for this purpose doesn't really save any time because hunting down all the mistakes and missed decimal places takes a lot of effort. Parsing this way takes about a third of the time, and it's less mentally taxing. However, this isn't a task we regularly perform because obviously in the vast majority of cases we can get the data instead of printed statements.
I was trying to think of the proper term for an English word that has evolved from some phrase or other, like "steering board" becoming "starboard". The gen AI suggested portmanteau, but I actually think there's a better word I just haven't remembered yet.
I had it create a bash one-liner to extract a specific section from a README.md.
I asked it to explain the mechanism of action of diazepam.
My feelings about AI are that it's pretty great for specific niche tasks like this. Take the bash one-liner: it took 30 seconds to ask and I got an immediate, working solution. Without gen AI I just wouldn't be able to grep whatever section out of a README - not exactly a life-changing superpower, but a small improvement to whatever project I was working on.
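Not the actual one-liner from that exchange, but a rough Python equivalent of the task, assuming a README with `##`-style headings and an example section name:

```python
# Print everything under a given "## Heading" in a markdown file, stopping at
# the next heading of the same or higher level. File path and section name are
# examples, not from the original comment.
import re
import sys


def extract_section(path: str, heading: str) -> str:
    lines = open(path, encoding="utf-8").read().splitlines()
    out, capturing, level = [], False, 0
    for line in lines:
        m = re.match(r"^(#+)\s+(.*)", line)
        if m:
            if capturing and len(m.group(1)) <= level:
                break  # reached the next section at the same or higher level
            if m.group(2).strip().lower() == heading.lower():
                capturing, level = True, len(m.group(1))
                continue  # skip the heading line itself
        if capturing:
            out.append(line)
    return "\n".join(out).strip()


if __name__ == "__main__":
    # e.g. python extract_section.py README.md "Installation"
    print(extract_section(sys.argv[1], sys.argv[2]))
```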
In terms of our ability to do our work and deliver results for clients, it's a 10% bump to efficiency and productivity when used correctly. Gen AI is not going to put us out of a job.
My boss loves it and keeps suggesting we try it. Luckily, there isn't much use for it in our line of work.
We are definitely encouraged to use it where I work. There are regular sessions for engineers to share workflows they're using with LLMs.
What surprised me recently are the objective metrics the company is trying to gather about usage. Not just usage volume, but also quality. We put labels on our PRs to indicate to what extent we used AI and which tools, with a "wasn't helpful" option.
It's a lot better than my previous job that went full build-a-new-product-on-top-of-ChatGPT
I sat through a meeting today where we played a game: guess the brand shown in the AI slop images. This was after being shown uses of Gemini like “ask questions about the current meeting” (presumably because most meetings are meaningless and nobody listens)
My organisation has fired a bunch of people and plans to replace them completely with AI. They're pushing us to use it. Soon it will be mandatory to use ambient, always-on AI for all information recording. There's mention of AI camera surveillance of working areas to monitor for efficient use of man hours (I don't know whether the tech is developed enough for this or how practical it is). The guy working above me is doing some sort of degree in implementing AI in business, and his answer to a lot of problems is "AI could probably do that for us". Meanwhile we get training telling us that we will personally be held accountable for any errors in the AI output we use, and that we will be held responsible if we input any information that would be deemed confidential or sensitive. BTW, Copilot is already activated for all our work Outlook, calendar, and OneDrive accounts and has all that data, so I'm not sure what would be considered more sensitive information to give it.
Semi-toxic and stupid. I got the fucking AI cert they wanted me to get, but instead of being hooked up with clients to use it, I'm still doing the fucking test automation I've been doing since the start of this awful career.
I work at a large tech company. It's in our expectations. I hate it so much
It's very obvious that half my team's code base is AI-written garbage.
I work in AI research, so naturally, AI is part of our day-to-day work. But when it comes to things like LLM tools and other generative models, we rarely hear anyone talk about those. Sometimes people will share their workflow, and that may involve LLMs to supplement traditional search engines for literature reviews, for example. That's about the extent of it. No one really cares to talk about them much. No one pushed those tools on us. We just do our work with whatever tools we think are best.
It's available everywhere at this point. My team has the license for Copilot in every main Office app; the teams that don't still have the generic web version. There are AI chatbots for various things with some stupid brain-related name. There are a couple of things we've done with an LLM that have actual business use cases and benefited an automated process (sorry, have to be vague). Another, non-Microsoft, product we use advertises its AI features all the time, which as a side note feels incredibly unprofessional for enterprise-type software; if it were up to me, I'd find an alternative over that alone. Another cloud database service also has a brain-themed chatbot.
What my job doesn't do is force anyone to use it, ever (outside of the people who had to set it up). I use Copilot in Microsoft Teams, as it can pull from emails and chats to answer questions specific to my job: generic stuff like fluffing up a peer/self review, helping me find a conversation I only vaguely remember, or finding emails when Outlook's search decides to be shit. Since it's my work device with my work data, I'm not concerned about my privacy, so that's actually useful to me. I've played around with the Word and Excel Copilots, but they're terrible. Word can help you build a doc, but if I open an existing doc, Copilot tells me it can't actually edit anything in it. So what is it for, generic questions? Excel's can at least physically do things, but it gets them very wrong for me. I was almost excited for it because I thought maybe I could say something like "Hey Copilot, take this sheet, put it into a pivot table, use X for columns and Y for rows" and it would do it for me, rather than me taking those steps. But it doesn't work, so I stopped trying after a few attempts at getting it to be useful.
Can it actually pull info from emails? Every time I try to do that, it totally fails, hallucinates, or suggests I try something that it actually can't do. The most it seems able to do is summarize a single email or thread, and in the amount of time it takes to do that, I could just read the thread myself...
It works for me generally. Asking things like "who was I asking about X" type stuff.
I did get a pop up reminder to utilize our company's proprietary AI but I managed to block it thanks to local admin rights.