Guest realmayo Posted April 1, 2024 at 03:14 PM
Hmm, I guess my problem with what you're saying is: what if the answers that AI generates are better than most of the responses generated by humans? If we say that's not possible, not going to happen, well, I have serious doubts. But if we say that AI's answers are better than (or at least as good as) most human answers, then I still have a strong preference for human answers on a forum like this. I can't say why for sure. I think it's perhaps because an implicit part of most of the non-simple questions asked here is "in your Chinese-language-learning experience, blah blah blah." Or indeed: "as a native speaker, what is your gut feel about blah blah blah." I also think there's an assumption that for particularly simple questions, a learner should be able to find the answers themselves, whether through AI, textbooks or tutors. And in fact, if people do ask those kinds of questions repeatedly, the replies they get tend to direct them to a more systematic form of study (e.g. a textbook or a tutor), where this basic information is presented in a sequence best suited to learning step by step.
Moshen Posted April 1, 2024 at 11:41 PM
Quote
Humans are not perfect, but LLMs are one derivation away from humans and thus less perfect.
I can't recall a time here on the forum where someone gave an answer that was 100% made up on the spur of the moment, based on nothing. But AI does this pretty often. Before I wised up, I asked ChatGPT to name several people who had written on topic X. Two of the three people it named were people I knew, and I was able to confirm that they had never written on that topic. This is called "hallucination." Humans are much less likely to do this.
Baihande Posted April 2, 2024 at 08:33 AM
Quite a number of examples of ChatGPT blundering have been given in this forum. Personally, I have had better experiences. For example, when translating from Chinese to German it even corrected typos in the Chinese original, where G*translate only produced garbage (even when translating to English; translating to German is often just a source of amusement). Answers to factual questions do not seem to be reliable, but they may still give you a hint. In this instance, ChatGPT looks a bit like a student who doesn't know the correct answer and so "hallucinates". In the case of the Hangzhou squirrels, they reminded me of the London squirrels, which are an invasive species from North America. Now I have a hint that there is a species of grey squirrel native to China. Can you expect more from a chat partner? But of course, before dishing this out to others as factual, you should verify it. However, in the future even this may change as the AIs' fact base expands.
becky82 Posted April 2, 2024 at 09:49 AM
One interesting paper I was looking at was this, which researched critical thinking while using ChatGPT in Chinese language learning:
Quote
Participants approach the information provided by ChatGPT with a sense of skepticism, evaluating its authenticity from a critical standpoint. Unlike the unquestioning acceptance frequently observed in response to teacher feedback, participants are more inclined to critically assess the information offered by ChatGPT.
Tomsima Posted April 2, 2024 at 11:15 AM
I don't know if there is any research on this yet, but I genuinely feel ChatGPT performs worse than it did last year. I wonder if this is because they are trying to push people onto the paid model (GPT-4)? I tried Mistral for the first time the other day out of frustration. It was even worse than ChatGPT: unspecific, and constantly apologising when I inevitably had to correct it over and over again.