Doğuş Can Korkmaz (doguscank)
0 followers · 13 following
doguscank · doguscankorkmaz
AI & ML interests
Vision, LLMs, vLLMs, semantic segmentation, forecasting
Recent Activity
reacted to KingNish's post with 🔥 · 1 day ago
"I tested Muon vs MuonClip vs Muon+AdamW for fine-tuning LLMs. Just published a blog on that; read here 👉 https://huggingface.co/blog/KingNish/optimizer-part1"
reacted to Kseniase's post with 👍 · 3 days ago
"15 Outstanding Research Papers from NeurIPS 2025

NeurIPS 2025, a premier annual event in machine learning and computational neuroscience, tackles major topics like the future of AI, current research, and the field's most difficult challenges. While we're not attending this year, we're closely following the updates, and today we pull together a quick, easy-to-digest roundup of a few standout papers so you can jump in without getting overwhelmed.

Here is a list of 15 papers from NeurIPS 2025, including 8 top research papers that received awards, along with 7 others that caught our attention:

1. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks → https://neurips.cc/virtual/2025/loc/san-diego/test-of-time/128328
Test of Time Award winner. Introduces the Region Proposal Network (RPN), a small convnet that predicts objectness and boxes on shared features, enabling Faster R-CNN to share computation and run at around 5 fps on a GPU.

2. Artificial Hivemind: The Open-Ended Homogeneity of LMs (and Beyond) → https://neurips.cc/virtual/2025/loc/san-diego/poster/121421
Releases a huge open-ended prompt dataset, showing that LLMs often fall into an "artificial hivemind" – generating surprisingly similar answers – and measuring diversity collapse.

3. Optimal Mistake Bounds for Transductive Online Learning → https://neurips.cc/virtual/2025/loc/san-diego/poster/119098
Settles a 30-year-old question by showing how much unlabeled data helps in online learning: it gives a precise quadratic advantage, with tight matching bounds.

4. Gated Attention for LLMs: Non-linearity, Sparsity, and Attention-Sink-Free → https://neurips.cc/virtual/2025/loc/san-diego/poster/120216
Demonstrates how gating actually affects attention: a simple sigmoid gate after Scaled Dot-Product Attention (SDPA) boosts performance, stability, and long-context behavior by adding useful nonlinearity and sparse modulation.

Read further below ⬇️ Also, subscribe to the Turing Post: https://www.turingpost.com/subscribe"
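The "sigmoid gate after SDPA" mentioned in item 4 can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the gate placement (query-conditioned, applied elementwise to the attention output) and the parameter names `w_gate` / `b_gate` are assumptions, since the paper evaluates several gating variants.

```python
import numpy as np

def sdpa(q, k, v):
    """Standard scaled dot-product attention (single head, no batching)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def gated_sdpa(q, k, v, w_gate, b_gate):
    """SDPA followed by an elementwise sigmoid gate.

    The gate is conditioned on the query here (one possible placement);
    since sigmoid outputs lie in (0, 1), it can only attenuate each
    dimension of the attention output, adding nonlinearity and letting
    the model sparsely suppress channels.
    """
    out = sdpa(q, k, v)
    gate = 1.0 / (1.0 + np.exp(-(q @ w_gate + b_gate)))
    return gate * out
```

Because the gate is bounded in (0, 1), every component of the gated output is no larger in magnitude than the plain SDPA output; the claimed sparsity effect comes from the gate pushing many channels toward zero.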
liked a dataset · 9 days ago
vanta-research/orbital-mechanics-1
doguscank's models (1)
doguscank/facenet-onnx · Updated Dec 18, 2024