How to use mlx-vision/vit_large_patch16_224-mlxim with mlx-image:
from mlxim.model import create_model

model = create_model("mlx-vision/vit_large_patch16_224-mlxim")
How to use mlx-vision/vit_large_patch16_224-mlxim with MLX:
# Download the model from the Hub
pip install huggingface_hub[hf_xet]
huggingface-cli download --local-dir vit_large_patch16_224-mlxim mlx-vision/vit_large_patch16_224-mlxim
A Vision Transformer image classification model trained on ImageNet-1k data.
Disclaimer: this is a port of the torchvision model weights to the Apple MLX framework.
pip install mlx-image
Here is how to use this model for image classification:
import mlx.core as mx

from mlxim.model import create_model
from mlxim.io import read_rgb
from mlxim.transform import ImageNetTransform
transform = ImageNetTransform(train=False, img_size=224)
x = transform(read_rgb("cat.png"))
x = mx.expand_dims(x, 0)
model = create_model("vit_large_patch16_224")
model.eval()
logits = model(x)
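To turn the logits into a prediction, apply a softmax over the 1000 ImageNet-1k classes and take the top entries. A minimal sketch with NumPy (the `logits` array below is a stand-in, not real model output):

```python
import numpy as np

def top_k_predictions(logits: np.ndarray, k: int = 5):
    """Return (class_index, probability) pairs for the k highest-scoring classes."""
    # Numerically stable softmax over the class dimension
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    top = np.argsort(probs[0])[::-1][:k]
    return [(int(i), float(probs[0, i])) for i in top]

# Stand-in logits for a batch of one image over the 1000 ImageNet-1k classes
logits = np.zeros((1, 1000))
logits[0, 281] = 8.0  # index 281 is "tabby cat" in the standard ImageNet-1k ordering
print(top_k_predictions(logits, k=3))
```

The indices map to human-readable labels through the standard ImageNet-1k class list.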
You can also use the embeddings from the layer before the head:
import mlx.core as mx

from mlxim.model import create_model
from mlxim.io import read_rgb
from mlxim.transform import ImageNetTransform
transform = ImageNetTransform(train=False, img_size=224)
x = transform(read_rgb("cat.png"))
x = mx.expand_dims(x, 0)
# Option 1: create the model without the classification head
model = create_model("vit_large_patch16_224", num_classes=0)
model.eval()
embeds = model(x)
# Option 2: extract features from the full model
model = create_model("vit_large_patch16_224")
model.eval()
embeds = model.get_features(x)
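These pre-head embeddings (1024-dimensional for ViT-Large) are useful for tasks like image retrieval. A minimal sketch ranking a gallery by cosine similarity, using random stand-in vectors rather than real model output:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings; in practice these come from model(x) with num_classes=0
rng = np.random.default_rng(0)
query = rng.normal(size=1024)
gallery = [np.random.default_rng(i).normal(size=1024) for i in range(1, 4)]

# Rank gallery entries by similarity to the query, most similar first
ranked = sorted(range(len(gallery)),
                key=lambda i: cosine_similarity(query, gallery[i]),
                reverse=True)
print(ranked)
```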
Explore this model's metrics in the mlx-image model results.