CUDA error
#1 by jacek2024 - opened
Does it work for you in llama.cpp? I get the following CUDA error and backtrace:
/home/jacek/git/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:97: CUDA error
[New LWP 21316]
[New LWP 21315]
[New LWP 21314]
[New LWP 21313]
[New LWP 21306]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
__syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
warning: 56 ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S: No such file or directory
#0 __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
56 in ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S
#1 0x0000777a6069eb63 in __internal_syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=<optimized out>, a5=0, a6=0, nr=61) at ./nptl/cancellation.c:49
warning: 49 ./nptl/cancellation.c: No such file or directory
#2 __syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=<optimized out>, a5=a5@entry=0, a6=a6@entry=0, nr=61) at ./nptl/cancellation.c:75
75 in ./nptl/cancellation.c
#3 0x0000777a6071ae9f in __GI___wait4 (pid=<optimized out>, stat_loc=<optimized out>, options=<optimized out>, usage=<optimized out>) at ../sysdeps/unix/sysv/linux/wait4.c:30
warning: 30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory
#4 0x0000777a60d72fd3 in ggml_print_backtrace () from /home/jacek/git/llama.cpp/build_2026.02.24/bin/libggml-base.so.0
#5 0x0000777a60d7317b in ggml_abort () from /home/jacek/git/llama.cpp/build_2026.02.24/bin/libggml-base.so.0
#6 0x0000777a5f6c4f17 in ggml_cuda_error(char const*, char const*, char const*, int, char const*) () from /home/jacek/git/llama.cpp/build_2026.02.24/bin/libggml-cuda.so.0
#7 0x0000777a5f6c5103 in ggml_backend_cuda_device_event_synchronize(ggml_backend_device*, ggml_backend_event*) () from /home/jacek/git/llama.cpp/build_2026.02.24/bin/libggml-cuda.so.0
#8 0x0000777a60d8f772 in ggml_backend_sched_graph_compute_async () from /home/jacek/git/llama.cpp/build_2026.02.24/bin/libggml-base.so.0
#9 0x0000777a60ebdcf1 in llama_context::graph_compute(ggml_cgraph*, bool) () from /home/jacek/git/llama.cpp/build_2026.02.24/bin/libllama.so.0
#10 0x0000777a60ec06c2 in llama_context::process_ubatch(llama_ubatch const&, llm_graph_type, llama_memory_context_i*, ggml_status&) () from /home/jacek/git/llama.cpp/build_2026.02.24/bin/libllama.so.0
#11 0x0000777a60ec5dcf in llama_context::decode(llama_batch const&) () from /home/jacek/git/llama.cpp/build_2026.02.24/bin/libllama.so.0
#12 0x0000777a60ec79df in llama_decode () from /home/jacek/git/llama.cpp/build_2026.02.24/bin/libllama.so.0
#13 0x00005c09e927e97e in test_prompt(llama_context*, int, int, int) ()
#14 0x00005c09e927b268 in main ()
[Inferior 1 (process 21305) detached]
Aborted (core dumped)