Instructions for using google/tapas-tiny-finetuned-sqa with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use google/tapas-tiny-finetuned-sqa with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("table-question-answering", model="google/tapas-tiny-finetuned-sqa")

# Load the model directly
from transformers import AutoTokenizer, AutoModelForTableQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/tapas-tiny-finetuned-sqa")
model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-tiny-finetuned-sqa")
```

- Notebooks
- Google Colab
- Kaggle
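The table-question-answering pipeline takes the table as a pandas DataFrame of string cells plus a natural-language query. A minimal usage sketch (the table contents and question below are illustrative examples, not from the model card):

```python
from transformers import pipeline
import pandas as pd

pipe = pipeline("table-question-answering", model="google/tapas-tiny-finetuned-sqa")

# TAPAS expects every cell as a string, passed in as a DataFrame.
table = pd.DataFrame(
    {
        "Actor": ["Brad Pitt", "Leonardo DiCaprio", "George Clooney"],
        "Number of movies": ["87", "53", "69"],
    }
)

result = pipe(table=table, query="How many movies does Leonardo DiCaprio have?")
print(result)
```

The pipeline returns a dict containing the predicted answer string and the table coordinates of the selected cells. Note that this is the tiny checkpoint, so answer quality is limited; it is mainly useful for smoke tests.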
First commit with absolute position embeddings
Changed files:
- config.json (+1 -1)
- pytorch_model.bin (+1 -1)
config.json
CHANGED

```diff
@@ -30,7 +30,7 @@
   "num_hidden_layers": 2,
   "pad_token_id": 0,
   "positive_label_weight": 10.0,
-  "reset_position_index_per_cell":
+  "reset_position_index_per_cell": false,
   "select_one_column": true,
   "softmax_temperature": 1.0,
   "type_vocab_size": [
```
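The changed key controls how TAPAS indexes token positions: with `reset_position_index_per_cell` set to `false`, the model uses absolute position embeddings over the whole flattened table instead of restarting the position index inside each cell, which matches the commit message. A minimal sketch of this setting using `TapasConfig` from transformers (constructed locally for illustration, not loaded from the Hub):

```python
from transformers import TapasConfig

# Mirror the setting introduced in this commit: absolute position
# embeddings, i.e. the position index is NOT reset per table cell.
config = TapasConfig(
    num_hidden_layers=2,  # matches the tiny config in the diff above
    reset_position_index_per_cell=False,
)

print(config.reset_position_index_per_cell)  # False
```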
pytorch_model.bin
CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6c229fdfed44e4516f1b109d5f06014d5b58ee4a66ab7607c7c6964cd5488f19
 size 18095095
```
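The pytorch_model.bin entry is a Git LFS pointer file, not the weights themselves: it records the spec version, a sha256 object id (oid), and the byte size (18095095 here). After downloading the actual file, its integrity can be checked by recomputing the digest. A minimal sketch, assuming a local copy of the file (the helper name and path are illustrative):

```python
import hashlib

def lfs_sha256(path: str) -> str:
    """Recompute the sha256 oid of a file, streaming in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The downloaded weights would then be compared against the oid in the pointer:
# lfs_sha256("pytorch_model.bin") == "6c229fdfed44e4516f1b109d5f06014d5b58ee4a66ab7607c7c6964cd5488f19"
```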