Running Llama 2 models with 4-bit quantization using llama.cpp on Ubuntu.
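
A minimal sketch of the workflow on Ubuntu, assuming the Llama 2 weights have already been downloaded into a local models/llama-2-7b/ directory; the exact file names produced by the conversion step may differ from the ones shown here:

```bash
# Install build tools and clone llama.cpp
sudo apt install build-essential git python3 python3-pip
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Convert the downloaded Llama 2 weights to GGUF, then quantize to 4-bit
# (assumes weights are in models/llama-2-7b/; output file names are illustrative)
pip3 install -r requirements.txt
python3 convert.py models/llama-2-7b/
./quantize models/llama-2-7b/ggml-model-f16.gguf models/llama-2-7b/ggml-model-Q4_K_M.gguf Q4_K_M

# Run inference with the 4-bit quantized model
./main -m models/llama-2-7b/ggml-model-Q4_K_M.gguf \
       -p "Explain quantization in one sentence." \
       -n 128
```

The Q4_K_M quantization type used above is one of several 4-bit options llama.cpp offers; it is a common default trade-off between model size and output quality, but other 4-bit variants such as Q4_0 can be substituted in the quantize step.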