Agent Skills: A Data-Driven Analysis of Claude Skills for Extending Large Language Model Functionality
Abstract
Agent skills extend large language model (LLM) agents with reusable, program-like modules that define triggering conditions, procedural logic, and tool interactions. As these skills proliferate in public marketplaces, it is unclear what types are available, how users adopt them, and what risks they pose. To answer these questions, we conduct a large-scale, data-driven analysis of 40,285 publicly listed skills from a major marketplace. Our results show that skill publication tends to occur in short bursts that track shifts in community attention. We also find that skill content is highly concentrated in software engineering workflows, while information retrieval and content creation account for a substantial share of adoption. Beyond content trends, we uncover a pronounced supply-demand imbalance across categories, and we show that most skills remain within typical prompt budgets despite a heavy-tailed length distribution. Finally, we observe strong ecosystem homogeneity, with widespread intent-level redundancy, and we identify non-trivial safety risks, including skills that enable state-changing or system-level actions. Overall, our findings provide a quantitative snapshot of agent skills as an emerging infrastructure layer for agents and inform future work on skill reuse, standardization, and safety-aware design.
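The abstract does not specify how intent-level redundancy is measured, so purely as an illustrative sketch: one simple way to surface near-duplicate skill intents is to compare skill descriptions lexically with cosine similarity over bag-of-words vectors. The `redundant_pairs` helper and the example descriptions below are hypothetical and not taken from the paper.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def redundant_pairs(descriptions: list[str], threshold: float = 0.8) -> list[tuple[int, int]]:
    # Flag pairs of skills whose descriptions are near-duplicates
    # at the lexical level (a crude proxy for intent-level overlap).
    vecs = [Counter(d.lower().split()) for d in descriptions]
    return [
        (i, j)
        for i in range(len(vecs))
        for j in range(i + 1, len(vecs))
        if cosine(vecs[i], vecs[j]) >= threshold
    ]

# Hypothetical skill descriptions for demonstration only.
skills = [
    "Summarize a GitHub pull request and draft a review comment",
    "Summarize a GitHub pull request and draft a review comment for the team",
    "Convert CSV files to JSON",
]
print(redundant_pairs(skills))  # → [(0, 1)]
```

A production analysis would more likely use semantic embeddings rather than raw token overlap, since two skills can share an intent while using different wording, but the thresholding logic stays the same.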
Community
Understand Agent Skills at a Glance: The Ecosystem, Opportunities, and Risks Behind 40,000+ Claude Skills
Spanning patterns of explosive growth, a multi-dimensional functional taxonomy, and multi-tier security audits, this data-driven study offers a clear picture of the Agent Skills ecosystem and its current state of development. It provides quantitative grounding for technical implementation, platform building, and applied research, while giving newcomers a realistic overview of the field as a whole.
The following similar papers were recommended by the Semantic Scholar API:
- Jenius Agent: Towards Experience-Driven Accuracy Optimization in Real-World Scenarios (2026)
- Learning to Recommend Multi-Agent Subgraphs from Calling Trees (2026)
- Just Ask: Curious Code Agents Reveal System Prompts in Frontier LLMs (2026)
- AutoRefine: From Trajectories to Reusable Expertise for Continual LLM Agent Refinement (2026)
- CoWork-X: Experience-Optimized Co-Evolution for Multi-Agent Collaboration System (2026)
- ProcMEM: Learning Reusable Procedural Memory from Experience via Non-Parametric PPO for LLM Agents (2026)
- Context as a Tool: Context Management for Long-Horizon SWE-Agents (2025)