News

All models are trained on sequences of 16k tokens and show significant improvements on inputs of up to 100k tokens. ... Code Llama – Python 7B outperforms Llama 2 70B on HumanEval ...