
GGML and llama.cpp join Hugging Face
GGML and llama.cpp, the foundational libraries behind local LLM inference, are joining Hugging Face to secure their long-term development and maintenance. The move brings the most widely used local inference stack under Hugging Face's open-source umbrella, with commitments to continued compatibility and community governance. A significant consolidation in the local AI ecosystem.
