To download a model from Hugging Face, you don’t need to do anything special: models are downloaded and cached locally the first time you use them. So, in most cases, all you have to do is run the code provided on the model card.
To do that, you need the Transformers library. First, make sure you have it installed; if not, you can install it with pip. The example below also needs PyTorch as a model backend, so install it alongside:
pip install transformers torch
After installing the transformers library, you can download a model using Python.
Here’s a simple example of downloading and using the popular BERT model for text classification.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Specify the model name you want to download from Hugging Face
model_name = "bert-base-uncased"
# Download the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Tokenize and encode some sample text
text = "This is a sample text."
inputs = tokenizer(text, return_tensors="pt")
# Run the text through the model
output = model(**inputs)
# Get the index of the highest-scoring class
prediction = output.logits.argmax(dim=1).item()
# Print the prediction
print(f"The prediction for the text is: {prediction}")
Running the script prints the predicted class index (0 or 1 with the default two-label head). Be aware that bert-base-uncased is a pretrained base model without a fine-tuned classification head: Transformers will warn that some weights were newly initialized, and the prediction is essentially random until the model is fine-tuned on labeled data.
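If you want meaningful predictions out of the box, download a checkpoint that has already been fine-tuned for classification. Here’s a minimal sketch assuming the widely used sentiment checkpoint distilbert-base-uncased-finetuned-sst-2-english; the same loading code applies, and the model config maps the predicted index to a readable label:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Assumed example checkpoint: DistilBERT fine-tuned on SST-2 sentiment data
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
inputs = tokenizer("This is a sample text.", return_tensors="pt")
output = model(**inputs)
# Map the predicted index to a human-readable label via the model config
prediction = output.logits.argmax(dim=1).item()
print(model.config.id2label[prediction])  # e.g. POSITIVE or NEGATIVE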
Make sure to replace model_name with the name of the model you want to download from the Hugging Face Model Hub. You can browse the available models at https://huggingface.co/models.
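As mentioned at the start, the first from_pretrained call downloads the files and caches them locally, by default under ~/.cache/huggingface/hub (the HF_HOME environment variable moves this). If you’d rather fetch a model ahead of time, say before going offline, a minimal sketch using the huggingface_hub library (installed as a dependency of Transformers) looks like this:
from huggingface_hub import snapshot_download
# Download every file in the repo into the local Hugging Face cache
# and return the path of the cached snapshot
local_path = snapshot_download(repo_id="bert-base-uncased")
print(local_path)
Later calls to from_pretrained("bert-base-uncased") reuse these cached files instead of re-downloading.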
That’s it.