News
Large Language Models (LLMs) have a serious "package hallucination" problem that could lead to a wave of maliciously coded packages in the software supply chain, researchers have discovered in one of ...
Flooding public package repositories with malicious packages is not entirely new. Last year researchers detected a group of 186 packages from the same account on the JavaScript npm repository that ...
In an analysis of 16 code-generation models, including GPT-4, GPT-3.5, CodeLlama, DeepSeek, and Mistral, researchers observed that roughly a fifth of the recommended packages were fakes.
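One common mitigation for hallucinated dependencies is to check every LLM-suggested package name against a vetted allowlist before installing anything. The sketch below illustrates that idea; the allowlist contents and the package names are illustrative assumptions, not part of the researchers' study.

```python
# Hypothetical mitigation sketch: flag LLM-suggested package names that are not
# on a vetted allowlist, so hallucinated (nonexistent or attacker-registered)
# packages are surfaced for review instead of installed blindly.

VETTED_PACKAGES = {"requests", "numpy", "flask", "pandas"}  # assumed allowlist

def flag_unvetted(suggested: list[str]) -> list[str]:
    """Return the subset of suggested package names not on the allowlist."""
    return [name for name in suggested if name.lower() not in VETTED_PACKAGES]

if __name__ == "__main__":
    # "easy-http-parser" is a made-up name standing in for a hallucinated package
    llm_suggestions = ["requests", "numpy", "easy-http-parser"]
    print(flag_unvetted(llm_suggestions))  # the made-up name is flagged
```

A production version would typically also query the package registry itself (e.g. the npm or PyPI APIs) and check download counts and publication dates, since attackers can register a hallucinated name the moment it starts appearing in model output.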