Curl author criticizes AI-generated security reports


AI used to detect security issues

A few days ago, Daniel Stenberg (the author of Curl) published a post on his blog in which he not only criticizes the use of artificial intelligence tools, but also complains about the inconvenience that security reports generated by these tools cause for him and his team.

In his post, Daniel Stenberg mentions that for many years the process of reviewing all the reports and separating “junk” from “real” security problems did not require much extra effort, noting that “junk reports have also typically been very easy and quick to detect and discard.”

With the recent rise of artificial intelligence, many tasks that previously required hours of human work have been transformed. Among the cases most frequently covered on this blog are AIs dedicated to programming, image generation, and video editing, such as ChatGPT, Copilot, and Bard, among others.

In the specific area of programming, Copilot drew plenty of criticism, the main concern being the possibility of lawsuits. At the other end of the scale, however, artificial intelligence has significantly transformed various areas. In detecting errors and security issues in code, for example, AIs have played a notable role, and many people have adopted these tools to identify potential bugs and vulnerabilities, often as participants in bug bounty programs.

Curl did not escape this trend, and Daniel Stenberg explained on his blog that, after several months of holding back his opinion, he finally voiced his disagreement with the use of artificial intelligence tools. The reason behind his frustration was the growing number of “junk” reports generated with these tools.

The post highlights that these reports look detailed, are written in natural language, and appear to be of high quality. On closer analysis, however, they turn out to be misleading, since they present low-quality content that merely appears valuable in place of real problems.

The Curl project, which offers rewards for identifying new vulnerabilities, has received a total of 415 reports of potential problems. Of these, only 64 were confirmed as real vulnerabilities, 77 described bugs unrelated to security and, surprisingly, 274 (66%) contained no useful information, eating up developer time that could have been spent on something productive.

Developers are forced to waste a great deal of time analyzing useless reports and re-checking the information they contain, since the polished look of a report lends it extra credibility and creates the feeling that the developer must have misunderstood something.

Generating such a report, on the other hand, requires minimal effort from the reporter, who does not bother to check whether a real problem exists but simply copies the output received from AI assistants, hoping to get lucky in the race for a reward.

Daniel Stenberg shares two examples of this kind of junk report:

  1. In the first case, just before the planned release of information about a critical vulnerability in October, a report was received via HackerOne indicating that a public patch already existed to resolve the issue. However, the report turned out to be "fake," as it contained data on similar problems and snippets of detailed information about past vulnerabilities, compiled by Google's artificial intelligence assistant, Bard. Although the information seemed novel and relevant, it lacked connection to reality.
  2. In the second case, a report was received about a buffer overflow in WebSocket handling. This report came from a user who had already reported vulnerabilities to several projects through HackerOne. To reproduce the issue, the report provided general instructions on how to submit a modified request and an example fix.

Despite triple-checking the code thoroughly, the developer found no issues. However, since the report was written in a way that inspired a certain confidence and even included a proposed fix, the feeling that something did not add up persisted.

When asked to clarify how the user had managed to bypass the size check, the explanations added no new information and only discussed obvious, generic causes of buffer overflows unrelated to the Curl code. The responses felt like talking to an AI assistant, and after futile attempts to figure out exactly how the problem manifested itself, Daniel Stenberg was finally convinced that no vulnerability actually existed and closed the report as “not applicable.”

Finally, if you are interested in learning more about it, you can check the details at the following link.

