nielsr (HF Staff) committed
Commit 625758b · verified · 1 Parent(s): 7b6ca58

Improve model card: add metadata, paper, project & code links


This PR enhances the model card by:
- Adding `pipeline_tag: zero-shot-image-classification` to enable discovery on the Hub and activate the inference widget.
- Adding `library_name: open_clip` to provide a ready-to-use code snippet.
- Specifying `license: apache-2.0`.
- Including a link to the paper: [Advancing Compositional Awareness in CLIP with Efficient Fine-Tuning](https://huggingface.co/papers/2505.24424).
- Adding a link to the project page: [https://clic-compositional-clip.github.io/](https://clic-compositional-clip.github.io/).
- Adding a link to the GitHub repository: [https://github.com/AmitPeleg/CLIC](https://github.com/AmitPeleg/CLIC).
- Correcting the sample usage snippet by adding the missing `from urllib.request import urlopen` import (a runnable sketch of the corrected snippet follows the diff below).

These additions will make the model more discoverable and provide users with comprehensive information.

Files changed (1)
1. README.md (+14, -3)
README.md CHANGED

````diff
@@ -1,6 +1,15 @@
+---
+license: apache-2.0
+library_name: open_clip
+pipeline_tag: zero-shot-image-classification
+---
 
 # Model Card for CLIC-ViT-B-32-224-CogVLM
 
+This model is presented in the paper [Advancing Compositional Awareness in CLIP with Efficient Fine-Tuning](https://huggingface.co/papers/2505.24424).
+Project Page: [https://clic-compositional-clip.github.io/](https://clic-compositional-clip.github.io/)
+Code: [https://github.com/AmitPeleg/CLIC](https://github.com/AmitPeleg/CLIC)
+
 ## Model Details
 
 <!-- Provide the basic links for the model. -->
@@ -10,10 +19,12 @@
 ## Model Usage
 ### With OpenCLIP
 
-```
+```python
 import torch
 from PIL import Image
 import open_clip
+from urllib.request import urlopen
+
 
 model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:nmndeep/CLIC-ViT-B-32-224-CogVLM')
 
@@ -38,5 +49,5 @@ with torch.no_grad(), torch.autocast("cuda"):
 
 text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
 idx = torch.argmax(text_probs)
-print("Output label:", texts[idx])
-```
+print("Output label:", texts[idx])
+```
````
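For reference, a minimal end-to-end version of the corrected usage snippet might look like the sketch below. The image URL and candidate labels are illustrative placeholders (they are not taken from the model card), and the `torch.autocast("cuda")` context from the original snippet is omitted so the example also runs on CPU:

```python
import torch
from PIL import Image
import open_clip
from urllib.request import urlopen

# Load the model, preprocessing transform, and tokenizer from the Hub.
model, _, image_processor = open_clip.create_model_and_transforms(
    'hf-hub:nmndeep/CLIC-ViT-B-32-224-CogVLM'
)
tokenizer = open_clip.get_tokenizer('hf-hub:nmndeep/CLIC-ViT-B-32-224-CogVLM')
model.eval()

# Illustrative inputs: any image URL and label set work here.
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = image_processor(Image.open(urlopen(url))).unsqueeze(0)
texts = ['a photo of a cat', 'a photo of a dog']
tokens = tokenizer(texts)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(tokens)
    # Normalize so the dot products below are cosine similarities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
    idx = torch.argmax(text_probs)

print("Output label:", texts[idx])
```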