Looking through large image datasets can often feel slow and inefficient, especially when you’re searching for specific types of images. While Hugging Face’s datasets library is typically associated with text, it also supports image datasets in a remarkably practical and flexible way. Instead of manually downloading image collections or writing long preprocessing scripts, you can load, filter, and search for specific images using just a few lines of Python.
Whether you’re training a model, testing outputs, or reviewing a dataset before labeling, having an easy way to search through image data makes a big difference. Here’s a clear, step-by-step guide to performing image search with Hugging Face datasets.
To begin working with images, the load_dataset function from the Hugging Face datasets library is your starting point. It works with many dataset formats, including image-based ones hosted online or stored locally.
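If your images live in a local folder organized into one subfolder per class, the built-in "imagefolder" loader can read them directly (the path below is a placeholder):

from datasets import load_dataset

# Each subfolder name becomes a class label, e.g. data/train/healthy/*.jpg
local_dataset = load_dataset("imagefolder", data_dir="path/to/images")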
Take the “beans” dataset as an example. It’s a collection of bean leaf images categorized by plant diseases. Here’s how you load it:
from datasets import load_dataset
dataset = load_dataset("beans")
This gives you a DatasetDict with separate splits: "train", "validation", and "test". Each sample is a dictionary, typically containing an "image" key (a PIL image object) and a "labels" key for the category.
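A quick way to confirm this structure is to print the dataset object and its features:

print(dataset)                    # DatasetDict with "train", "validation", "test" splits
print(dataset["train"].features)  # column types, including the label class names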
To preview the first image:
dataset["train"][0]["image"].show()
This lets you immediately confirm what the dataset looks like, which is especially helpful if you’re unsure about its structure or quality.
The beauty of this approach is that it abstracts away the file paths and formats. You don’t need to manually handle image directories—everything is structured and ready to use.
Now that the dataset is loaded, you may want to narrow it down. Maybe you’re only interested in images labeled as “healthy” or those meeting a certain size requirement.
First, check the available labels:
labels = dataset["train"].features["labels"].names
print(labels)
You might see something like: ["angular_leaf_spot", "bean_rust", "healthy"].
To get only the images labeled as "healthy," use the filter method:
healthy_images = dataset["train"].filter(lambda x: x["labels"] == 2)
Now, healthy_images contains just those samples. The index 2 comes from the label list printed earlier.
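To avoid hard-coding the index, you can also look it up by name, which keeps the filter readable:

healthy_idx = labels.index("healthy")
healthy_images = dataset["train"].filter(lambda x: x["labels"] == healthy_idx)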
You can also filter by image size. Suppose you want all images with a width of at least 300 pixels:
large_images = dataset["train"].filter(lambda x: x["image"].width >= 300)
This makes it easy to slice through large datasets based on any attribute that PIL exposes—width, height, mode, or anything you can write a condition for.
These filters can be chained or combined, letting you perform targeted image searches without needing external tools or manual sorting.
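For example, a single predicate can combine the label lookup from above with a size check:

# Healthy leaves that are also at least 300 pixels wide
healthy_and_large = dataset["train"].filter(
    lambda x: x["labels"] == healthy_idx and x["image"].width >= 300
)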
To make your workflow cleaner and reusable, it’s a good idea to wrap your filtering logic into a function. That way, you can perform different searches without repeating code.
Here’s a basic image search function:
def search_images(dataset, label=None, min_width=None, min_height=None):
    # Label names for this dataset, used to translate a name into its index
    labels = dataset.features["labels"].names
    def match(example):
        if label is not None and example["labels"] != labels.index(label):
            return False
        if min_width is not None and example["image"].width < min_width:
            return False
        if min_height is not None and example["image"].height < min_height:
            return False
        return True
    return dataset.filter(match)
You can use it like this:
results = search_images(dataset["train"], label="bean_rust", min_width=250)
This returns a filtered dataset of images labeled as “bean_rust” and at least 250 pixels wide. The results are just like the original dataset—only smaller and more relevant to your needs.
You can easily extend this function to handle more complex filters, such as specific image formats or pixel intensity averages. Since Hugging Face datasets are backed by memory-mapped Apache Arrow files, even large image sets won't slow down your notebook or script too much.
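As a sketch of such an extension (the function and parameter names here are illustrative), you could filter on the PIL color mode and on the mean pixel intensity computed with NumPy:

import numpy as np

def search_images_extended(dataset, mode=None, min_mean_intensity=None):
    def match(example):
        image = example["image"]
        if mode is not None and image.mode != mode:  # e.g. "RGB" or "L"
            return False
        if min_mean_intensity is not None and np.asarray(image).mean() < min_mean_intensity:
            return False
        return True
    return dataset.filter(match)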
Basic filters work well for straightforward use cases, but sometimes, you want to search for images that look similar, even if they aren’t labeled the same. For that, you’ll need image embeddings—numerical representations of images that capture their visual content.
You can generate embeddings using a pre-trained vision model, such as CLIP or Vision Transformer (ViT). Hugging Face Transformers makes this accessible:
from transformers import AutoProcessor, AutoModel
import torch
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = AutoModel.from_pretrained("openai/clip-vit-base-patch32")
def get_embedding(image):
    # Preprocess the PIL image into the tensor format the model expects
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return features.squeeze().numpy()
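You can sanity-check the function on a single sample; this CLIP variant produces 512-dimensional image features:

emb = get_embedding(dataset["train"][0]["image"])
print(emb.shape)  # (512,)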
Now, generate an embedding for each image in your dataset:
dataset = dataset.map(lambda x: {"embedding": get_embedding(x["image"])})
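Mapping one image at a time is simple but slow. A sketch of a batched alternative, which hands the processor a list of images per call, might look like this:

def embed_batch(batch):
    # With batched=True, batch["image"] is a list of PIL images
    inputs = processor(images=batch["image"], return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return {"embedding": features.numpy()}

dataset = dataset.map(embed_batch, batched=True, batch_size=32)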
Once embeddings are added, you can perform a similarity search. For example, to find images that are visually close to a query image, calculate its embedding and compare it to others using cosine similarity:
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
query_embedding = get_embedding(query_image).reshape(1, -1)
all_embeddings = np.stack(dataset["train"]["embedding"])
scores = cosine_similarity(query_embedding, all_embeddings)[0]
top_indices = np.argsort(scores)[-5:][::-1]
# Cast the NumPy indices to plain ints before indexing the dataset
similar_images = [dataset["train"][int(i)]["image"] for i in top_indices]
Now, you’ve found the five most visually similar images to your query. This method is useful when labels are inconsistent, missing, or too broad.
For large-scale datasets, you can use more efficient tools like FAISS or Annoy for faster approximate search, but the concept remains the same—compare embeddings instead of labels or file names.
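Conveniently, FAISS support is built into the datasets library (it requires installing faiss-cpu or faiss-gpu separately); a minimal sketch of the same search:

import numpy as np

# Build a FAISS index over the embedding column
dataset["train"].add_faiss_index(column="embedding")

# Retrieve the 5 nearest neighbors of the query embedding
query = get_embedding(query_image).astype(np.float32)
scores, retrieved = dataset["train"].get_nearest_examples("embedding", query, k=5)
similar_images = retrieved["image"]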
Searching images within Hugging Face datasets can be surprisingly practical and efficient. With a few lines of code, you can load image datasets, filter them by labels or properties, and even run similarity searches using embeddings. You don't need to deal with folder structures, filenames, or manual parsing. Whether you're preparing data for training or just trying to understand what's inside a dataset, this approach keeps everything in one place: code, images, and logic. Hugging Face datasets make image search both simple and flexible, letting you focus on what matters most: the images themselves and the visual learning problems they help you solve.
For further reading, check out Hugging Face's official documentation.