Run llama3 locally with 1M token context
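As a starting point, here is a minimal sketch of what this can look like with the Ollama Python client, assuming a long-context Llama 3 build such as `llama3-gradient` has already been pulled (`ollama pull llama3-gradient`). The model name, prompt, and `num_ctx` value are placeholders rather than the definitive setup, and allocating a full 1M-token window requires far more RAM/VRAM than typical consumer hardware; a smaller value like 256k is a more realistic first test.

```python
# Minimal sketch: querying a locally served long-context Llama 3 model via Ollama.
# Assumes `ollama serve` is running and `llama3-gradient` has been pulled;
# model name, prompt, and num_ctx are illustrative placeholders.
import ollama

response = ollama.chat(
    model="llama3-gradient",
    messages=[
        {"role": "user", "content": "Summarize the document pasted below:\n..."},
    ],
    # num_ctx sets the context window allocated for this request.
    # 1,048,576 ~= 1M tokens; drop to e.g. 262144 if memory is limited.
    options={"num_ctx": 1048576},
)

print(response["message"]["content"])
```

The same parameter can be set interactively in the Ollama CLI with `/set parameter num_ctx <value>` after `ollama run llama3-gradient`, or baked into a custom model via a `Modelfile` `PARAMETER num_ctx` line.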