Cryptocurrency exchange Coinbase tested OpenAI’s ChatGPT as a token verification tool against its standard security procedure. In more than half of the cases, the AI platform produced the same results as the manual review, but it also failed to recognize some high-risk assets.
ChatGPT Approves 5 High-Risk Tokens, but Coinbase May Still Use It for Secondary Checks
Digital asset exchange Coinbase tested the artificial intelligence (AI) chatbot developed by OpenAI to perform automated token reviews. The US-based trading platform said that while ChatGPT was not accurate enough to be immediately integrated into its asset review process, it showed enough potential to warrant further investigation.
The experiment is part of Coinbase’s efforts to apply efficient and effective methods to review token contracts before deciding to list the assets. The exchange noted that its Blockchain Security team employs in-house automation tools developed to help security engineers review ERC20/721 smart contracts and explained the AI initiative, saying:
With the emergence of OpenAI ChatGPT and the buzz about its ability to detect security vulnerabilities, we wanted to test how well it would perform as a front-end tool applied at scale rather than just a single code reviewer.
“ChatGPT has proven beneficial in improving productivity across a wide range of development and engineering tasks,” Coinbase explained, adding that the AI tool can also be used to optimize code and identify vulnerabilities.
The major American crypto exchange conducted the experiment to compare the accuracy of a token security check performed by ChatGPT against that of a standard check performed by a blockchain security engineer using in-house tools. To produce comparable risk scores, the chatbot had to be taught how to identify risks as defined by the platform’s own security review framework.
The researchers compared 20 smart contract risk scores between ChatGPT and a manual security review. The AI tool produced the same results as the manual review 12 times, but of the eight errors, five were cases where ChatGPT mislabeled a high-risk asset as a low-risk one. “Underestimating a risk score is much more damaging than overestimating it,” the exchange noted in a blog post.
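A comparison of this kind can be pictured as tallying agreement between two sets of risk labels and separately flagging the dangerous failure mode, where the AI scores a token as less risky than the manual review did. The sketch below is purely illustrative: the label names, scoring scale, and data are invented, as Coinbase has not published the format of its risk scores.

```python
# Illustrative sketch (hypothetical labels and data): compare AI-generated
# risk labels against manual-review labels, counting exact matches and
# "underestimates" -- cases where the AI rated a token less risky than
# the human reviewer, the worst-case failure the article describes.

def compare_reviews(manual, ai):
    severity = {"low": 0, "medium": 1, "high": 2}
    # Exact agreement between the two reviews.
    matches = sum(m == a for m, a in zip(manual, ai))
    # Underestimates: the AI's severity is strictly below the manual one.
    underestimates = sum(
        severity[a] < severity[m] for m, a in zip(manual, ai)
    )
    return matches, underestimates

# Invented example data for five tokens.
manual_labels = ["high", "low", "low", "high", "medium"]
ai_labels     = ["low",  "low", "low", "high", "medium"]

matches, underestimates = compare_reviews(manual_labels, ai_labels)
print(matches, underestimates)  # prints: 4 1
```

On this toy data the AI agrees on four of five tokens but underestimates one high-risk asset, the same asymmetry Coinbase highlighted: a single false "low" is costlier than several false "high" labels.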
Despite this “worst case failure” and the tool’s tendency to give inconsistent responses when asked the same question multiple times, Coinbase says that the efficiency of the ChatGPT review was remarkable. The company hopes that with better prompt engineering, the tool’s accuracy can be improved.
Currently, the bot alone cannot be trusted to perform a security review, Coinbase concluded. However, it also noted that if its team can increase the accuracy, a “good first use case for the tool would be to serve as a secondary QA check.” That means its engineers could potentially use it for additional control checks to identify risks that may have gone unnoticed.
OpenAI’s ChatGPT platform has been in the spotlight this year amid the growing popularity of AI applications. In early March, the world’s largest cryptocurrency exchange, Binance, announced the launch of a new AI-focused non-fungible token (NFT) platform.
Do you think other cryptocurrency exchanges will soon consider employing AI tools like ChatGPT for their risk assessment procedures? Share your thoughts on the subject in the comments section below.
Image credits: Shutterstock, Pixabay, Wiki Commons
Disclaimer: This article is for informational purposes only. It is not a direct offer or a solicitation of an offer to buy or sell, or a recommendation or endorsement of any product, service or company. bitcoin.com does not provide investment, tax, legal or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.