AI Coding Companions: Comparing AWS, GitHub, & Google
Top cloud vendors and software companies are rolling out AI coding companions that use generative AI to speed up and streamline DevOps. In this blog, we take a look at what some of these new tools have in common, where they differ, and what they mean for cybersecurity.
Generative AI has gotten a lot of press this year as a revolutionary, paradigm-shifting technology that could alter the destiny of humankind. Civilization’s fate is a little out of scope for us at the Trend Micro DevOps Resource Centre, but we can say with confidence that generative AI is already having a big impact on software development, especially with the rise of AI coding companions.
Developers have relied on machine intelligence for years: automation, code completion, low-code development, static code analysis, and the like. But generative AI marks a distinct and major leap forward.
Today’s AI coding companions—such as Amazon CodeWhisperer, GitHub Copilot, and Google Bard—can suggest or complete bits of code in nearly any language or framework. Trained on massive databases, they promise to help developers generate code faster, offload routine tasks, and generally lead happier lives. There are many commonalities in how they each do this—and a few notable differences as well. Let’s explore them…
AI code companions in profile
Amazon CodeWhisperer supports developers as they work by suggesting snippets of code, whole functions, and logical blocks of up to 15 lines in multiple languages. It does this inside the code editor environment, analysing code and comments automatically and intelligently matching the developer’s style and naming conventions when it makes suggestions.
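To picture that comment-driven workflow, here is a simplified, hypothetical sketch of the kind of completion a companion might offer when a developer types only a comment and a function signature. The function and comment below are our own illustration, not actual CodeWhisperer output:

```python
# A developer writes the comment and signature; the companion
# proposes the body based on the intent expressed in the comment.

# Return only the even numbers from a list, preserving order.
def filter_even(numbers):
    # Suggested completion: a list comprehension matching the comment.
    return [n for n in numbers if n % 2 == 0]

print(filter_even([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]
```

In practice the developer reviews and accepts, edits, or rejects each suggestion; the tool adapts the proposed code to the surrounding style.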
Not surprisingly, Amazon CodeWhisperer optimises for AWS APIs including Amazon Elastic Compute Cloud (Amazon EC2), AWS Lambda, and Amazon Simple Storage Service (Amazon S3). Its suggestions include relevant cloud services and public software libraries to achieve specified functionalities, and the code snippets it recommends follow AWS best practices.
The tool has a “reference tracker” to identify any potential open-source code that may appear in its suggestions, giving relevant details such as a repository’s URL, file reference, and licence information. Developers can assess instances on a case-by-case basis or choose to filter out any possible open-source code completely.
IDEs and development environments supported: JetBrains IDEs (IntelliJ IDEA, PyCharm, WebStorm, and Rider), Visual Studio Code, AWS Cloud9, the AWS Lambda console, JupyterLab, and Amazon SageMaker Studio.
Like Amazon CodeWhisperer, GitHub Copilot suggests code and whole functions inside the code editor environment. It covers a range of IDEs and languages, and offers an extension specifically for Microsoft Visual Studio Code.
Copilot is powered by OpenAI Codex. It makes inline suggestions automatically as developers work on their code, often offering more than one suggestion at a time. Like Amazon CodeWhisperer, it respects developer style conventions. It also provides a filter that checks suggested code against public open-source code on GitHub so developers can block matching suggestions.
Copilot has both an inline chat that lets developers communicate with the AI while working and a dedicated chat view for asking questions and getting help. The chat responses are context-aware—specific to the code at hand, the developer’s workspace, extensions, settings, and so on.
IDEs supported: Microsoft Visual Studio and Visual Studio Code, Neovim, and JetBrains IDEs.
In April 2023, Google announced its AI chatbot, Bard, was now outfitted to serve as an AI coding companion capable of generating, debugging, and explaining code. Developers who use Google Colaboratory (“Colab”) will be happy to know that Bard exports Python code directly to Colab without cutting and pasting. Bard can also be used to write functions for Google Sheets.
Google highlights Bard’s ability to respond to natural language prompts as a key feature. Developers can ask it to “fix this broken code” when something isn’t working, or to “make my code faster” to uncover hidden efficiencies.
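A hypothetical before-and-after shows what a “fix this broken code” prompt might look like in practice. The snippet and the suggested repair below are our own illustration, not actual Bard output:

```python
# Broken code a developer might paste alongside the prompt:
#
#     def average(values):
#         return sum(values) / len(values)   # ZeroDivisionError on []
#
# A fix a chatbot might suggest, guarding the empty-list case:
def average(values):
    if not values:
        return 0.0
    return sum(values) / len(values)

print(average([2, 4, 6]))  # 4.0
print(average([]))         # 0.0
```

As with any AI-suggested fix, the developer still needs to confirm the repaired behaviour matches the intent—here, whether returning 0.0 for an empty list is actually the right answer for the application.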
Similar to Amazon CodeWhisperer and GitHub Copilot, Bard checks against open-source projects, providing citations any time it “quotes at length” from open-source code.
IDEs supported: Can be incorporated into existing IDEs.
When to use which AI coding companion?
All three of the AI coding companions profiled here suit a broad range of activities, languages, and environments. Choosing amongst them may come down to a dev team’s subjective preferences and where the software is ultimately going to be deployed. As mentioned, Amazon CodeWhisperer optimises applications for AWS APIs. Microsoft owns GitHub and has a partnership with OpenAI, so Copilot is a natural fit for applications destined to run in Azure. Likewise, as a Google product integrated with apps like Google Sheets and Google Colab, Bard makes sense for Google environments and the Google Cloud Platform.
In all cases, one question comes up consistently across blogs, web searches, and message board threads: “Are these tools safe and secure?”
Using AI securely for coding
Security experts have long cautioned that AI could be used to generate highly effective—and destructive—malicious code. But AI security isn’t just about defending against new vulnerabilities and attack types. Bad code is also a concern: code that doesn’t work, has unintended consequences, or inadvertently exposes private information. A previous Trend Micro blog cited a 2021 study that found GitHub Copilot produced security issues around 40% of the time. The bottom line: AI-generated code is fallible, and it doesn’t automatically include security features or follow best practices.
Coding companion providers are aware of the risks. “Responsible AI” is the new catchphrase—meaning “use it wisely”. Google explicitly reminds developers to verify and test any code developed with Bard’s help.
The crucial takeaway here is that software developers and the companies they work for have a proactive role to play in defending against AI coding risks. Best practices strongly recommended by Trend Micro and cybersecurity analyst firms like Gartner include:
- Reviewing and security-testing all AI-generated code
- Treating any AI-generated code as potentially vulnerable
- Not relying exclusively on AI for coding
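To make the review step concrete, here is a hypothetical example of the kind of flaw a reviewer should catch: an AI-suggested database lookup built with string formatting, which is open to SQL injection, and a reviewed version using a parameterised query. The code is our own illustration (using Python’s standard sqlite3 module), not output from any of the tools profiled here:

```python
import sqlite3

# Hypothetical AI-suggested lookup (vulnerable to SQL injection):
#
#     def find_user(conn, name):
#         return conn.execute(
#             f"SELECT id FROM users WHERE name = '{name}'").fetchall()
#
# Reviewed version: the driver binds the value safely via a placeholder.
def find_user(conn, name):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic injection payload now matches no rows instead of everything.
print(find_user(conn, "alice"))        # [(1,)]
print(find_user(conn, "' OR '1'='1"))  # []
```

The vulnerable version compiles and “works” in happy-path testing, which is exactly why treating AI-generated code as potentially vulnerable—and security-testing it—matters.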
While all of the AI coding companion providers we looked at have security and data protection policies, it’s the use of their tools that can create vulnerabilities. That makes it incumbent on developers to be aware of the risks, and for corporate policies to mitigate them.
Software developers have lots to gain from adopting AI coding companions. The ability to automate repetitive tasks, to optimise code with machine learning, to quickly find and correct errors, and to speed up coding overall are undeniably “wins”. They free up more time for developers to bring creativity to their work, add user value, and focus on business logic.
It seems reasonable to expect these tools will become more advanced, secure, and reliable over time. But it will be a long time, if ever, before AI takes over the hard parts of coding—the ones that require imagination, inspiration, and expert judgement.
For now—and maybe forever—that part still falls to people: the developers. And it is people who must bring a responsible, security-minded approach to keep enterprises, their customers, and partners safe and secure wherever AI is used.
For more Trend Micro insights into AI coding and cybersecurity, check out these resources: