AI development keeps accelerating: systems are growing more complex and spreading into ever more fields of application. AI is already used in healthcare, public administration, research, services, and more. But these algorithms are not foolproof when it comes to security; they face threats of their own, and as AI advances, more and more of those threats are being detected. Hence the importance of tools like Counterfit.
Because many of these systems are critical, artificial intelligence needs to be secure. That calls for audit standards and for tools that make auditing easier. One example is the Microsoft tool mentioned in the previous paragraph.
Microsoft has developed this tool to make that work much easier, releasing it under an open-source license (the MIT license) on its GitHub platform so that anyone who wishes can use it and also contribute to its development. Counterfit lets developers test the security of artificial intelligence systems at an early stage of development, helping to ensure there are no critical holes.
If you want to know more about it, you can access its repository on GitHub from this link. There you will find the source code and documentation, and you can download it and try it on multiple platforms. Since it is written in Python, it also runs on Linux without problems.
The tool was born within Microsoft, out of the company's own need to probe its artificial intelligence and machine learning systems for vulnerabilities. It was originally intended for testing deployed systems, but it can now also be used during the AI development stage.
As you can see on the site, you will need to install Anaconda Python locally to be able to run the Python script. Another option is to use it through Azure Shell from a web browser.
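As a rough sketch of the local route, the setup might look like the following. Note that the exact environment name, Python version, and repository URL are taken from the project's public README at the time of writing, so check them against the repository before running:

```shell
# Create and activate an isolated conda environment for the tool
# (requires Anaconda or Miniconda to be installed first)
conda create --yes -n counterfit python=3.8
conda activate counterfit

# Clone the Counterfit repository from GitHub
git clone https://github.com/Azure/counterfit.git
cd counterfit

# Install the Python dependencies and launch the interactive CLI
pip install -r requirements.txt
python counterfit.py
```

Working inside a dedicated conda environment keeps Counterfit's dependencies separate from the rest of your Python installation, which is the main reason the project recommends Anaconda in the first place.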