Prompt: Could LLMs be fine-tuned to detect (and segment) logical fallacies?

Learning to identify logical fallacies takes significant mental effort and time, especially at the start. Because of this barrier, fallacy detection is a skill many people (myself included) never master. Yet it is exactly the skill needed to separate signal from noise in this information-rich age.

Would it be possible to fine-tune an LLM to identify logical fallacies in text, and to segment the specific spans where they occur? If done well, this could help filter fake news and, by surfacing the fallacies themselves, help readers learn fallacy detection implicitly.
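One way to make the "segment" part concrete is to frame the task as BIO token classification, the same setup used for named-entity recognition. Below is a minimal sketch using Hugging Face transformers; the label set, the training example, and the hyperparameters are all hypothetical illustrations, not an established benchmark or a worked-out method.

```python
# Sketch: fallacy segmentation as BIO token classification.
# Labels, example data, and hyperparameters are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical label scheme: O = no fallacy; B-/I- mark fallacy spans by type.
LABELS = ["O", "B-AD_HOMINEM", "I-AD_HOMINEM", "B-STRAWMAN", "I-STRAWMAN"]
label2id = {l: i for i, l in enumerate(LABELS)}
id2label = {i: l for l, i in label2id.items()}

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS), id2label=id2label, label2id=label2id
)

# One hypothetical training example: words with word-level BIO tags.
words = ["You", "would", "say", "that", ",", "you", "failed", "logic", "class", "."]
word_tags = ["O", "O", "O", "O", "O", "B-AD_HOMINEM", "I-AD_HOMINEM",
             "I-AD_HOMINEM", "I-AD_HOMINEM", "O"]

# Align word-level tags to subword tokens; special tokens get -100 so the
# loss function ignores them.
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned = []
prev_word = None
for word_id in enc.word_ids():
    if word_id is None:
        aligned.append(-100)
    elif word_id != prev_word:
        aligned.append(label2id[word_tags[word_id]])
    else:
        # Continuation subword: keep the span going with an I- tag (or O).
        tag = word_tags[word_id]
        aligned.append(label2id[tag if tag == "O" else "I-" + tag.split("-", 1)[1]])
    prev_word = word_id
labels = torch.tensor([aligned])

# One fine-tuning step; a real run would loop over a labeled corpus.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
out = model(**enc, labels=labels)
out.loss.backward()
optimizer.step()
print(f"loss: {out.loss.item():.4f}")
```

The appeal of this framing is that the model's output is directly interpretable: it points at the offending span rather than just flagging a whole document, which is what would make the implicit-teaching effect possible. The hard part, of course, is assembling a span-annotated fallacy corpus in the first place.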

Acknowledgement: Discussion with Tanay Biradar.