Request: DOI (1 comment) · #85 opened over 1 year ago by moh996
The model repeatedly outputs a large amount of text and does not comply with the instructions. (10 comments) · #84 opened over 1 year ago by baremetal
Llama repo access not approved yet · #83 opened over 1 year ago by APaul1
Throwing Error for AutoModelForSequenceClassification (1 comment) · #82 opened over 1 year ago by deshwalmahesh
GSM8K Evaluation Result: 84.5 vs. 76.95 (17 comments) · #81 opened over 1 year ago by tanliboy
Deploying Llama 3.1 to an NVIDIA T4 instance (SageMaker endpoints) (4 comments) · #80 opened over 1 year ago by mleiter
Variable answers are predicted for the same prompt · #79 opened over 1 year ago by sjainlucky
Low efficiency after combining adapter_model.safetensors with the base model · #78 opened over 1 year ago by antony-pk
Minimum GPU RAM capacity (12 comments) · #77 opened over 1 year ago by bob-sj
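A rough sizing note for this thread and the CUDA out-of-memory one below (#57): at inference time the weights dominate, with the KV cache adding a few GB more depending on batch size and context length. A back-of-envelope sketch (my own estimate, not figures from the threads):

```python
# Back-of-envelope VRAM estimate for an 8B-parameter model (weights only;
# activations and KV cache add several GB depending on context and batch).
params = 8e9

for name, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = params * bytes_per_param / 1024**3
    print(f"{name}: ~{gb:.1f} GB for weights alone")

# fp16/bf16 ~14.9 GB, int8 ~7.5 GB, int4 ~3.7 GB: a 16 GB card is marginal
# for bf16 inference, while a 4-bit quantized variant fits comfortably.
```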
Tokenizer padding token (1 comment) · #76 opened over 1 year ago by Rish1
New tokenizer contains the cutoff date and today's date by default (5 comments) · #74 opened over 1 year ago by yuchenlin
Newbie questions (2 comments) · #73 opened over 1 year ago by rkapuaala
Add `base_model` metadata · #72 opened over 1 year ago by sbrandeis
Full SFT training caused the model to lose its foundational capabilities (10 comments) · #71 opened over 1 year ago by sinlew
Wrong number of tensors; expected 292, got 291 (6 comments) · #69 opened over 1 year ago by KingBadger
Fine-tuned Meta-Llama-3.1-8B-Instruct deployment on AWS SageMaker fails (2 comments) · #68 opened over 1 year ago by byamasuwhatnowis
Quick Fix: Rope Scaling or Rope Type Error (4 comments) · #67 opened over 1 year ago by deepaksiloka
Can't reproduce MATH performance (1 comment) · #66 opened over 1 year ago by jpiabrantes
Banned for Iranian People (15 comments) · #65 opened over 1 year ago by MustafaLotfi
Inference endpoint deployment for 'meta-llama/Meta-Llama-3.1-8B-Instruct' fails (6 comments) · #62 opened over 1 year ago by Keertiraj
Meta-Llama-3.1-8B-Instruct deployment on AWS SageMaker fails (3 comments) · #61 opened over 1 year ago by Keertiraj
Error loading the original model file consolidated.00.pth from a local path (3 comments) · #60 opened over 1 year ago by chanduvkp
Unable to deploy the Meta-Llama-3.1-8B-Instruct model on SageMaker (3 comments) · #58 opened over 1 year ago by axs531622
CUDA out of memory on RTX A5000 inference (6 comments) · #57 opened over 1 year ago by RoberyanL
Update README.md to reflect correct transformers version · #56 opened over 1 year ago by priyakhandelwal
Update README.md to reflect correct transformers version · #55 opened over 1 year ago by priyakhandelwal
NotImplementedError: Could not run 'aten::_local_scalar_dense' with arguments from the 'Meta' backend. (3 comments) · #54 opened over 1 year ago by duccio84
Some of you might be interested in my 'silly' experiment. (2 comments) · #52 opened over 1 year ago by ZeroWw
Updated config.json · #51 opened over 1 year ago by WestM
🚀 LMDeploy supports Llama 3.1 and its tool calling. An example of calling "Wolfram Alpha" to perform complex mathematical calculations can be found here! · #50 opened over 1 year ago by vansin
HF Pro subscription for Llama 3.1-8B (4 comments) · #49 opened over 1 year ago by ostoslista
Significant bias (6 comments) · #48 opened over 1 year ago by stutteringp0et
`rope_scaling` must be a dictionary with two fields (4 comments) · #46 opened over 1 year ago by thunderdagger
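This error, like the "Quick Fix" thread #67 above, usually comes down to the installed transformers version: releases that predate Llama 3.1 validate `rope_scaling` as a dictionary with exactly two fields (`type`, `factor`), while the Llama 3.1 config ships the newer extended rope format. A minimal sketch of the usual remedy, assuming upgrading is acceptable in your environment:

```python
# The `rope_scaling` validation error is typically raised by transformers
# versions too old for Llama 3.1's extended rope_scaling config format.
# Assumption: upgrading (pip install -U "transformers>=4.43.0") resolves it.
import transformers
from packaging import version

if version.parse(transformers.__version__) < version.parse("4.43.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} predates Llama 3.1 support; "
        "upgrade with: pip install -U 'transformers>=4.43.0'"
    )

from transformers import AutoModelForCausalLM

# Loads cleanly once the config's rope_scaling format is understood.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct"
)
```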
Unable to load Llama 3.1 into Text-Generation WebUI (4 comments) · #45 opened over 1 year ago by keeeeesz
BUG: Chat template doesn't respect the `add_generation_prompt` flag from the transformers tokenizer (1 comment) · #44 opened over 1 year ago by ilu000
How to use ASR on Llama 3.1 (1 comment) · #43 opened over 1 year ago by andrygasy
Tokenizer `apply_chat_template` issue (1 comment) · #42 opened over 1 year ago by Ksgk-fy
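For anyone comparing against expected behavior in the template threads here (#42, and #44's `add_generation_prompt` bug report), a minimal reference call; the messages are placeholders:

```python
# Minimal chat-template usage for reference (a sketch, not the thread's code).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is RoPE scaling?"},
]

# add_generation_prompt=True appends the assistant header so the model
# starts a reply instead of continuing the user turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```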
Function calling evaluation benchmark Nexus (0-shot) · #41 opened over 1 year ago by WateBear
Error: json: cannot unmarshal array into Go struct field Params.eos_token_id of type int (2 comments) · #40 opened over 1 year ago by SadeghPouriyanZadeh
ValueError: Pipeline with tokenizer without pad_token cannot do batching. You can try to set it with `pipe.tokenizer.pad_token_id = model.config.eos_token_id`. (4 comments) · #39 opened over 1 year ago by jsemrau
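The workaround is quoted in the title itself; spelled out as a runnable sketch (note that Llama 3.1's config stores `eos_token_id` as a list, which is also the root of the Go unmarshal error in #40):

```python
# Sketch of the workaround from the title of #39: give the pipeline's
# tokenizer a pad token so batched generation works. Assumption: reusing one
# of the model's EOS ids as padding is acceptable for causal-LM inference.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    device_map="auto",
)

# Llama 3.1 ships no pad token; its config holds a *list* of EOS ids.
eos = pipe.model.config.eos_token_id
pipe.tokenizer.pad_token_id = eos[0] if isinstance(eos, list) else eos

outputs = pipe(["Hello!", "Name three planets."], max_new_tokens=32, batch_size=2)
```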
Run this on CPU and use tool calling (1 comment) · #38 opened over 1 year ago by J22
Access Problem (13 comments) · #37 opened over 1 year ago by minglingfeng
Llama-3.1-8B generates way too long answers! (3 comments) · #36 opened over 1 year ago by ayyylemao
Tokenizer error and/or `rope_scaling` problem (5 comments) · #35 opened over 1 year ago by fazayjo
Deployment to Inference Endpoints (6 comments) · #34 opened over 1 year ago by stcat
Best practice for tool calling with meta-llama/Meta-Llama-3.1-8B-Instruct (1 comment) · #33 opened over 1 year ago by zzclynn
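One commonly suggested starting point (not necessarily the thread's conclusion) is the `tools` argument of `apply_chat_template` in recent transformers releases, which renders a function's signature and docstring into the model's own tool-calling prompt format. A hedged sketch; the tool function here is a hypothetical stub:

```python
# A sketch of tool calling via the chat template's `tools` argument.
# Assumption: a transformers release recent enough to accept `tools`.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

def get_current_temperature(location: str) -> float:
    """Get the current temperature at a location.

    Args:
        location: The city and country, e.g. "Paris, France"
    """
    return 22.0  # hypothetical stub; a real tool would call a weather API

messages = [{"role": "user", "content": "How warm is it in Paris right now?"}]

prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],  # schema inferred from signature/docstring
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # the rendered prompt embeds the tool definition as JSON
```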
The model often enters infinite generation loops (13 comments) · #32 opened over 1 year ago by sszymczyk
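Runaway generation with Llama 3 family models is frequently a stop-token configuration issue: the assistant turn ends with `<|eot_id|>`, and decoders that only stop on the default EOS keep going. A sketch of passing both terminators to `generate()`, assuming a current transformers release:

```python
# Stop generation at either the default EOS or the end-of-turn token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,                         # the tokenizer's default EOS
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),  # Llama 3.x end-of-turn token
]
output = model.generate(inputs, max_new_tokens=128, eos_token_id=terminators)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```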
Unable to load 4-bit quantized variant with llama.cpp · #31 opened over 1 year ago by sunnykusawa
Garbage output? (10 comments) · #30 opened over 1 year ago by danielus