GitHub's Latest AI Tool Can Automatically Fix Code Vulnerabilities

Bugs are having a rough day. Just hours after Sentry unveiled its AI Autofix tool for debugging production code, GitHub is launching the first beta of its code-scanning autofix feature, which finds and fixes security vulnerabilities as code is being written. The new service combines CodeQL, the company's semantic code analysis engine, with the real-time capabilities of GitHub Copilot. GitHub first previewed the feature in November of last year.

According to GitHub, the new tool can remediate more than two-thirds of the vulnerabilities it discovers, often without developers having to edit any code. The company also promises that code scanning autofix, which currently supports JavaScript, TypeScript, Java, and Python, will cover more than 90% of alert types in those languages.

All users of GitHub Advanced Security (GHAS) can now access this new functionality.

“Just as GitHub Copilot relieves developers of tedious and repetitive tasks, code scanning autofix will help development teams reclaim time formerly spent on remediation,” GitHub writes in today’s announcement. “Security teams will also benefit from a reduced volume of everyday vulnerabilities, so they can focus on strategies to protect the business while keeping up with an accelerated pace of development.”

Under the hood, the new tool uses GitHub’s semantic analysis engine, CodeQL, to find vulnerabilities in code before it is ever executed. CodeQL was originally developed at the code analysis firm Semmle, which GitHub acquired; the company released the first generation of CodeQL to the public in late 2019.
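To make this concrete, here is a hypothetical example (not taken from GitHub’s announcement) of the kind of vulnerability a semantic scanner like CodeQL flags and an autofix tool might patch: a SQL injection in Python, remediated by switching to a parameterized query.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # BAD: user input is interpolated directly into the SQL string,
    # so an attacker can rewrite the query (SQL injection).
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchone()

def find_user_fixed(conn, username):
    # FIXED: a parameterized query keeps the input as data, not SQL.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic injection payload returns a row from the vulnerable version...
print(find_user_vulnerable(conn, "x' OR '1'='1"))  # → (1,)
# ...but not from the fixed one.
print(find_user_fixed(conn, "x' OR '1'='1"))       # → None
```

The fixed version is exactly the shape of change a “no code edits needed” suggestion would propose: the query logic is unchanged, only the way user input reaches the database engine.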

CodeQL has seen many advancements over the years, but one thing has remained constant: it is free to use only for researchers and open-source developers.

GitHub says CodeQL is at the core of the new tool, but that it also relies on “a combination of heuristics and GitHub Copilot APIs” to suggest fixes. To generate the patches and their accompanying explanations, GitHub uses OpenAI’s GPT-4 model. And while GitHub acknowledges that “a small percentage of suggested fixes will reflect a significant misunderstanding of the codebase or the vulnerability,” the company is clearly confident that the vast majority of autofix suggestions will be correct.
