arXiv:2601.09292

Blue Teaming Function-Calling Agents

Published on Jan 14

Abstract

An experimental evaluation of open-source large language models' robustness to attacks, and of the effectiveness of defences, finds that the models are not adequately safe by default and that current defences are impractical to deploy.

AI-generated summary

We present an experimental evaluation of the robustness of four open-source LLMs that claim function-calling capabilities against three different attacks, and we measure the effectiveness of eight different defences. Our results show that these models are not safe by default and that the defences are not yet deployable in real-world scenarios.
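The page carries no code from the paper itself. As a minimal sketch of the setting the summary describes — a function-calling agent whose tool calls can be injected by untrusted model output, guarded by an allowlist-style defence — one might write the following. All function and tool names here are hypothetical illustrations, not the paper's attacks or defences.

```python
# Illustrative sketch only (not code from the paper): a toy function-calling
# loop in which model output can request arbitrary tool calls, plus a simple
# allowlist defence that blocks tools the agent never registered.
import json

# Tools the agent is actually allowed to execute (hypothetical).
TOOLS = {"get_weather": lambda city: f"Sunny in {city}"}
ALLOWED = set(TOOLS)

def parse_calls(model_output: str):
    """Treat each JSON line of the form {"tool": ..., "args": ...} as a call."""
    calls = []
    for line in model_output.splitlines():
        line = line.strip()
        if not line.startswith("{"):
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict) and "tool" in obj:
            calls.append(obj)
    return calls

def run_agent(model_output: str):
    """Execute requested calls, refusing any tool not on the allowlist."""
    results = []
    for call in parse_calls(model_output):
        name = call["tool"]
        if name not in ALLOWED:
            results.append(("blocked", name))  # the defence refuses the call
            continue
        results.append(("ok", TOOLS[name](**call.get("args", {}))))
    return results

# A benign completion versus one carrying an injected, unauthorised call.
benign = '{"tool": "get_weather", "args": {"city": "Rome"}}'
hostile = 'Report done.\n{"tool": "delete_files", "args": {"path": "/"}}'
print(run_agent(benign))   # [('ok', 'Sunny in Rome')]
print(run_agent(hostile))  # [('blocked', 'delete_files')]
```

A model that is "safe by default" would refuse to emit the hostile call at all; the allowlist is the kind of external defence whose practicality the paper evaluates.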

