News

Pretrained language models (PLMs) have shown remarkable performance on question answering (QA) tasks, but they typically require fine-tuning (FT) on a substantial number of labeled QA pairs.