Amen! Keep the critiques coming. The avalanche of BS about the positive educational uses of ChatGPT-4 (mostly devoid of empirical evidence) needs to be countered.
The ending is powerful and insightful. We just have to help those who want to learn, which is a process.
"Another possibility is that the students in the survey just don’t care. They are not interested in learning; they merely want a degree for some functional purpose (getting a job, for example) and they are just going through the motions of academic life. The first group, we can help, perhaps with assignments like the one I used. The second group—well, they will soon be replaced by ChatGPTn, so who cares? They will learn the hard way that there are no short cuts in knowledge. If you want to know something you have to learn it. The alternative is bullshit."
I take a lot of issue with this. I can't tell what your main point is; you seem to be making two: 1) GPT-4 isn't accurate, and 2) students shouldn't be using it.
1)
- Your students may know what you want from them when you ask them to be "comprehensive," but you can't expect an LLM to know what you mean. The word "comprehensive" is doing too much of the work here, and the prompt should be more detailed (also, comprehensive in under 500 words?).
- You prompted only once, with no follow-up prompts and no refining.
- A good prompter would first get at least a rough list of the topics that should be covered and feed those in. Again, the LLM isn't going to read your mind.
- You forgave it for not citing sources because you didn't ask it to, but you don't seem to realize that an LLM often literally cannot do this. Unless a source is a book or something else with a clean title, the model often can't reproduce that information from its weights ("knowledge"), because it's not a database; that's not how the tech works.
- Combined point: your prompt is bad. Not only is it a bad prompt for humans, it's a really, really bad prompt for an LLM. (See the sketch below for what a more workable flow might look like.)
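To make this concrete, here is a minimal sketch of the two-step flow the points above describe: first ask the model to enumerate what a comprehensive answer must cover, then feed that list back in a second, more specific prompt. This assumes the OpenAI Python SDK; the topic, model name, word limit, and prompt wording are all illustrative, not a reconstruction of the original assignment.

```python
# Minimal sketch of iterative prompting with the OpenAI Python SDK.
# The topic ("the causes of World War I") and word limit are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
TOPIC = "the causes of World War I"

# Step 1: ask the model to enumerate what a comprehensive answer must cover.
outline = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": f"List the 6-8 topics any comprehensive treatment of "
                   f"{TOPIC} must cover. Output a plain numbered list only.",
    }],
).choices[0].message.content

# Step 2: feed the outline back in a second, far more specific prompt,
# instead of hoping the word "comprehensive" does all the work.
essay = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            f"Write roughly 500 words on {TOPIC}. Address each of the "
            f"following points, flag any scholarly disagreement, and say "
            f"explicitly when you are uncertain:\n{outline}"
        ),
    }],
).choices[0].message.content

print(essay)
```

Even this two-pass version is crude; a careful user would read the outline, cut or add topics, and then iterate on the draft, which is exactly the refining the single-shot prompt skipped.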
2)
Many professors feel the same way you do, so you're in good company, but you all make the same mistake: you stink at using this technology. It's a skill, and a single shit prompt isn't it. Having graduated college and come back 15 years later for follow-on degrees, I have had professors blanket-ban the use of LLMs, and a class that *required* using them. Deciding what is actually valuable to learn seems to be very difficult for some reason. What these AIs are doing, and thank God for this, is forcing professors to realize that writing an essay on a topic isn't a good way to learn. It's as archaic as "teaching" Shakespeare in high school. Decide what is valuable for a student to know. Don't make them guess at it by forcing them to do open-ended research - much like what you wanted in your "comprehensive" paper.