Given a short text <text_to_check>, I want the LLM to check whether any facts stated in the text are NOT true. So I want to detect 'disinformation' / 'fake news', and the LLM should report which parts of the text are not true.
What would the "best" prompt for this task look like?
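For concreteness, here is a rough first draft of what I have in mind, written as a small Python helper. The instruction wording and the "No false claims found." fallback are just my guesses at what might work, not a tested prompt; the Alpaca-style `### Instruction:` / `### Input:` / `### Response:` layout is an assumption that the chosen model was fine-tuned on that format:

```python
def build_fact_check_prompt(text_to_check: str) -> str:
    """Draft prompt for flagging false statements in a short text.

    Assumes an Alpaca-style instruction format; other models may need
    a different chat template.
    """
    return (
        "### Instruction:\n"
        "Check the following text for factual errors. Quote every statement "
        "that is NOT true verbatim and briefly explain why it is false. "
        "If all statements are true, answer 'No false claims found.'\n\n"
        f"### Input:\n{text_to_check}\n\n"
        "### Response:\n"
    )

# Example: the prompt that would be sent to the model
print(build_fact_check_prompt("The Eiffel Tower is located in Berlin."))
```

Asking the model to quote the false statements verbatim seems important here, since the goal is to report *which parts* of the text are untrue, not just a true/false verdict.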
And what is the best 'compact' Llama-2-based model for it? I suppose some kind of instruction-following LLM. The LLM has to run on a mobile device with <= 8 GB RAM, so the largest model I can afford is ~13B (with 4-bit quantization in the llama.cpp framework).
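My back-of-the-envelope reasoning for the 13B ceiling, as a quick calculation (the 4 bits/weight figure is idealized; real llama.cpp Q4 formats store extra scale factors per block, and the KV cache and runtime overhead come on top, so actual usage is higher):

```python
# Weight memory for a 13B model at an idealized 4 bits per weight
params = 13e9          # parameter count
bits_per_weight = 4    # plain 4-bit quantization, ignoring block overhead

weight_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weight_gb:.1f} GB for weights alone")  # ~6.5 GB
```

At ~6.5 GB for the weights alone, a 4-bit 13B model just about fits under 8 GB, while anything larger clearly does not.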
Looking at the Alpaca Leaderboard (https://tatsu-lab.github.io/alpaca_eval/), the best 13B models there are XWinLM (not sure whether it is supported by llama.cpp), OpenChat V3.1, and WizardLM 13B V1.2, so I suppose I will use one of those models.