Students use generative AI for everything from better understanding class material to getting homework help (and, often, complete solutions). But what happens when generative AI gives wrong answers? Its explanations sound plausible, and it takes a high level of mastery to spot the errors. In my class, I embrace this challenge. For my exams, I already offer various options to promote multiple means of expression. This year, I have added another: find a misunderstanding that a generative AI model has about a class concept. Students engage the AI in conversation, elicit an incorrect response, and, more importantly, explain why it is in fact wrong.