Instructions for using d4data/bias-detection-model with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use d4data/bias-detection-model with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="d4data/bias-detection-model")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("d4data/bias-detection-model")
model = AutoModelForSequenceClassification.from_pretrained("d4data/bias-detection-model")
```

- Notebooks
- Google Colab
- Kaggle
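When the model is loaded directly (rather than through a pipeline), the forward pass returns raw logits, not probabilities. Below is a minimal sketch of turning logits into a predicted class index, using a hypothetical two-class logits list in place of a real forward pass; in real code the logits come from `model(**tokenizer(text, return_tensors="pt")).logits` and the label names should be read from `model.config.id2label`:

```python
import math

# Hypothetical logits for one input; a real run would produce these via
# model(**tokenizer(text, return_tensors="pt")).logits
logits = [0.3, 2.1]

# Softmax converts logits into a probability distribution
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted class is the index with the highest probability;
# map it to a human-readable name via model.config.id2label
predicted_index = probs.index(max(probs))
print(predicted_index)  # → 1
```

The pipeline helper performs this post-processing for you and returns label/score dictionaries directly.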
- Xet hash: `84fc5642399e3f05c906079f7bd985586153ead74a84a76bcb2f5fac6c43dced`
- Size of remote file: 268 MB
- SHA256: `7f447e66a6e76d24d86e5a7e8235b780d69719beefc153149f906feefca9abf1`
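To confirm that a downloaded copy of the file matches the SHA256 listed above, the file can be hashed in streaming fashion with Python's standard library; the path below is a placeholder for wherever the file was saved:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB blocks until EOF
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# Compare the returned hex digest against the SHA256 listed above, e.g.:
# sha256_of_file("path/to/downloaded/file")
```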
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.
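The chunking idea can be illustrated with a toy content-defined chunker: a cut is made wherever a rolling value over recent bytes matches a bit pattern, so chunk boundaries follow the content itself and identical regions tend to yield identical, deduplicable chunks. This is a sketch of the general technique only; Xet's actual algorithm, window, and parameters differ:

```python
import hashlib

def chunk_bytes(data, mask=0x3F, min_size=32, max_size=1024):
    """Toy content-defined chunker: cut where a rolling value matches `mask`,
    with min/max chunk-size guards. The rolling value resets at each cut;
    real chunkers use a true sliding-window hash."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= min_size and (h & mask) == mask) or size >= max_size:
            chunks.append(bytes(data[start:i + 1]))
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(bytes(data[start:]))
    return chunks

# Deduplicate by hashing each chunk, as a content-addressed store would;
# repetitive data produces repeated chunks that need storing only once
data = hashlib.sha256(b"seed").digest() * 512   # 16 KiB of repetitive data
chunks = chunk_bytes(data)
unique = {hashlib.sha256(c).hexdigest() for c in chunks}
```

Because boundaries depend on content rather than fixed offsets, inserting bytes near the start of a file shifts only nearby chunks, leaving later chunks (and their stored copies) unchanged.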