Archived version: https://archive.ph/sNqZT
Archived version: https://web.archive.org/web/20240301021006/https://www.theguardian.com/world/2024/feb/29/canada-lawyer-chatgpt-fake-cases-ai
Not the first idiot to attempt this.
You can train a custom GPT model on whatever data you want and then ask it about that data, but you can’t just use the free ChatGPT web app for such an important and specific task…
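For anyone curious what "train on your own data" actually looks like, here is a minimal sketch using the OpenAI Python client (v1+) and its fine-tuning endpoint. The file name and base model are placeholder assumptions, and the training data would have to be a JSONL file of chat-formatted examples:

```python
# Minimal sketch: fine-tune a model on your own data instead of relying on
# the stock web chat. Assumes the openai Python package (v1+) and a JSONL
# file of chat-formatted training examples ("my_cases.jsonl" is a placeholder).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training data.
training_file = client.files.create(
    file=open("my_cases.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job on a base model (model name is an assumption).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)  # poll this job until it finishes, then query the tuned model
```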
Even if you did, there’s no guarantee the model won’t hallucinate. It might just hallucinate better. Always manually verify everything that is important to you.
Yes. The output could include some kind of ID or case number for the user to manually verify.
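As a rough sketch of how that check could work (the citation pattern and the set of "verified" citations here are hypothetical stand-ins for a lookup against a real legal database such as CanLII):

```python
import re

# Hypothetical set of citations known to exist in a real legal database;
# in practice this would be a lookup against something like CanLII.
VERIFIED_CITATIONS = {"2010 SCC 16", "2015 ONCA 2"}

# Rough pattern for neutral Canadian citations, e.g. "2010 SCC 16".
CITATION_RE = re.compile(r"\b\d{4}\s+[A-Z]{2,6}\s+\d+\b")

def flag_unverified(model_output: str) -> list[str]:
    """Return every cited case number that is absent from the database."""
    cited = CITATION_RE.findall(model_output)
    return [c for c in cited if c not in VERIFIED_CITATIONS]

print(flag_unverified("As held in 2010 SCC 16 and 2023 BCSC 999 ..."))
# -> ['2023 BCSC 999']  (a citation the database cannot confirm)
```

Anything the database can't confirm gets flagged for a human to check before it goes anywhere near a court filing.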