I've been using ChatGPT in its various iterations to assist with my work tasks for a few months. Using it consistently, on tasks I actually do day to day, its strengths and weaknesses become quite apparent. It's a useful tool that can improve productivity when paired with a human who knows how to use it, but it's a limited tool that does one very specific thing and not much else, and that becomes obvious once you're using it somewhere you can't cherry-pick. Once you get off the beaten path, the illusion quickly fades.
Large Language Models essentially work by looking at the relationships between words in a giant database built from all kinds of writing. An algorithm then tries different candidate sentences, and each candidate is passed to a routine that checks whether the answer looks correct according to some heuristic. The algorithm keeps regenerating the text until a candidate reaches a high enough score. This is important to remember, because the whole thing is designed to look at relationships between words and score results so as to produce something that looks good, even if it isn't necessarily true.
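To make that mental model concrete, here's a toy sketch of the kind of loop described above: build a table of which words follow which (a stand-in for the "giant database"), score candidate next words with a simple heuristic, and keep regenerating until a candidate clears a threshold. This is a deliberately crude illustration, not how any real model is implemented, and every name in it is made up for the example.

```python
import random
from collections import defaultdict

# Toy sketch only: a bigram "database" of word relationships built from a tiny
# corpus, plus a loop that keeps proposing next words until one scores well
# enough. Real LLMs are learned neural networks over tokens, not count tables.
corpus = ("the model looks at words and the model scores words "
          "that look right").split()

# Count how often each word follows another (the "relationships between words").
following = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def score(prev, candidate):
    """Heuristic: how often has `candidate` followed `prev` in the corpus?"""
    total = sum(following[prev].values()) or 1
    return following[prev][candidate] / total

def generate(start, length=6, threshold=0.3):
    out = [start]
    for _ in range(length):
        candidates = list(following[out[-1]]) or corpus
        # Try a few random candidates; keep the first that clears the
        # threshold, otherwise fall back to the single best-scoring word.
        pick = max(candidates, key=lambda w: score(out[-1], w))
        for _ in range(5):
            trial = random.choice(candidates)
            if score(out[-1], trial) >= threshold:
                pick = trial
                break
        out.append(pick)
    return " ".join(out)

print(generate("the"))
```

The point of the toy version is the same as the author's point about the real thing: nothing in that loop ever checks whether the output is true, only whether it scores well.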
It can do some coding, but in general it's pretty shite at it, and usually anything it produces is fundamentally broken, especially if you're doing something that isn't textbook. I haven't found it capable of producing anything I'd want it to create without facing a full rewrite. It does look good, but looking good and actually compiling/running are different criteria.
It can answer some general questions, but the more specific and specialized the thing you need, the less it can answer. One major problem with using language models to answer questions is that they get their answers from arrogant idiots on reddit who insist their answer is 100% definitely correct, so it'll happily give you an answer that's confident and completely wrong. One thing that makes the demo work is that it starts with a blank slate every time; keep a conversation going and you start to run up against an increasingly quirky engine.
It can do some writing, but it's a glorified autocomplete. What it's best at is coming up with fluffy corpospeak to buff up messages for public consumption. I don't use the text it gives me verbatim, but it is useful as a muse for reports meant for a general audience. It's especially good at producing generic explanations of common concepts for readers who don't know anything about them (or, in corporate settings, explaining a concept everyone already knows about).
I've tried to use it to develop slide decks for presentations. While it does know how to do it, the output is pretty limited and ChatGPT's attention span is a major issue. Again, it's better as a muse: the output it produces isn't really acceptable for something professional, and I ended up basically doing everything on my own because I wasn't happy with it.
There are a lot of simple but large/tedious tasks you'd think an AI would be great for, but as you add more detail it gets more and more confused and the illusion breaks quickly. For other tasks there's simply no way to import the data you'd need for context, so you can't even get started.
One thing it's reasonably good at is sanity checking things. Say to it "Is this accurate?", and it'll put its own algorithms to work checking what you wrote.
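That pattern is also the easiest one to script. Here's a minimal sketch of doing the same thing through the API, assuming the official openai Python package (v1-style client), an OPENAI_API_KEY set in the environment, and a placeholder model name; the draft text is just an example.

```python
# Minimal sketch: feed a draft back to the model and ask it to sanity check it.
# Assumes the openai Python package (v1+ client) and OPENAI_API_KEY in the
# environment. Treat the reply as a second opinion, not a verdict.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Our backup job runs nightly at 02:00 and retains 30 daily snapshots."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user",
         "content": f"Is this accurate? Point out anything wrong or unclear:\n\n{draft}"},
    ],
)

print(response.choices[0].message.content)
```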
So in summary (if you look at enough ChatGPT output, certain phrases recur a LOT, by the way), it's okay at being a demo that can occasionally be used for useful work, but it's not Skynet -- it's barely a functional chatbot. We ascribe more intelligence to it than it has because it looks like something we recognize as intelligent: it does something that, in humans, takes many kinds of intelligence, and we assume an algorithm that can do that one thing has the same broad competence a human who can do it would have.