Security Risks of Vibe-Coded Apps

When AI writes your code, who checks for vulnerabilities?

The software development landscape has undergone a dramatic transformation. With the rise of large language models like GPT-4, Claude, and Gemini, developers can now generate entire functions, modules, and even applications with simple natural language prompts.

This has given birth to a new phenomenon: "vibe coding" — a practice where developers describe what they want, accept the AI-generated output, and move on without thoroughly reviewing the code. While this approach dramatically speeds up development, it introduces a dangerous blind spot: security vulnerabilities that slip through unnoticed.

In this article, we explore the security implications of vibe coding and provide practical recommendations for maintaining security in an AI-assisted development workflow.

The rise of vibe coding

The term "vibe coding" emerged from developer communities to describe a workflow where the programmer focuses on the high-level intent — the "vibe" — while delegating the actual implementation to AI. The developer's role shifts from writing code to prompting, reviewing output, and iterating.

In theory, this sounds efficient. In practice, the "reviewing output" step often gets minimized or skipped entirely. When code appears to work — when it compiles without errors and produces the expected output — many developers simply accept it and move on.

This is particularly concerning because AI models are trained to produce code that looks correct and functions correctly, but they have no inherent understanding of security best practices. They will happily generate SQL queries vulnerable to injection, authentication flows with subtle flaws, or API endpoints that expose sensitive data.

Common vulnerabilities in AI-generated code

Security researchers have identified several categories of vulnerabilities that frequently appear in AI-generated code:

Injection vulnerabilities

AI models often generate code that concatenates user input directly into SQL queries, shell commands, or template strings. Without explicit prompting about security, the model may produce:

# Vulnerable AI-generated code: user input is interpolated directly into the SQL string
query = f"SELECT * FROM users WHERE username = '{username}'"

Instead of the secure parameterized version that prevents SQL injection.
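For reference, here is what that parameterized version looks like. This sketch assumes Python's built-in sqlite3 module; placeholder syntax varies by driver (psycopg uses %s, for example), but the principle is the same: the input is bound as data, so it can never rewrite the query.

# Secure: the driver binds username as data, not as SQL
import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical database file
username = "alice"  # imagine this arrived from an untrusted request
cursor = conn.execute(
    "SELECT * FROM users WHERE username = ?",
    (username,),
)
rows = cursor.fetchall()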

Broken authentication

AI-generated authentication code may use weak hashing algorithms, implement flawed session management, or skip essential checks like rate limiting. The code works, but it leaves the door open for attackers.
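To make the contrast concrete, here is a minimal password-handling sketch that avoids the weak-hashing pitfall. It assumes the third-party bcrypt package and deliberately omits session management and rate limiting, which a complete login flow also needs.

# Minimal sketch: bcrypt hashing with a per-password salt (assumes the bcrypt package)
import bcrypt

def hash_password(password: str) -> bytes:
    # gensalt() embeds a random salt and a tunable work factor in the hash
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def verify_password(password: str, stored_hash: bytes) -> bool:
    # checkpw re-derives the hash and compares in constant time
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)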

Sensitive data exposure

Models frequently generate code that logs sensitive information, returns excessive data in API responses, or stores secrets in configuration files that end up in version control.
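Two habits close most of these gaps: load secrets from the environment rather than from files that get committed, and return an explicit allowlist of fields from your API. A minimal sketch, where the variable and field names are hypothetical:

# Sketch: secrets come from the environment, responses expose only safe fields
import os

API_KEY = os.environ["PAYMENT_API_KEY"]  # hypothetical secret, never hardcoded

def public_user_view(user: dict) -> dict:
    # Allowlist the fields a client actually needs; never return the raw record
    return {"id": user["id"], "username": user["username"]}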

Insecure dependencies

When asked to implement functionality, AI models may suggest outdated packages with known vulnerabilities. Their training data has a cutoff date, and they cannot know about recently discovered CVEs.
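The practical countermeasure is to audit dependencies against current advisory databases rather than trusting the model. As one example, a CI step can run pip-audit and fail the build when a known CVE turns up; this sketch assumes pip-audit is installed and relies on it exiting non-zero when vulnerabilities are found.

# Sketch: fail the build if installed dependencies have known vulnerabilities
import subprocess
import sys

completed = subprocess.run(["pip-audit"], text=True)
if completed.returncode != 0:
    sys.exit("Vulnerable dependencies found; update before shipping.")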

Not all AI models are equal

An important consideration often overlooked in the vibe coding debate is that different AI models produce code of varying quality. Some models have been fine-tuned with a stronger emphasis on security patterns, while others prioritize brevity or performance.

Before settling on an AI coding assistant, it's worth researching which AI models produce the most secure and high-quality code. The differences can be substantial — some models consistently suggest parameterized queries while others default to string concatenation.

Choosing the right model is your first line of defense. A model that inherently produces more secure code reduces the burden on the developer to catch every vulnerability during review.

Best practices for secure AI-assisted development

The goal isn't to abandon AI-assisted development — the productivity gains are too significant to ignore. Instead, we need to adapt our workflows to account for the security blind spots. Here are our recommendations:

1. Always review security-critical code

Authentication, authorization, input validation, and data handling code must be reviewed line by line. Never vibe code your security layer.

2. Include security context in prompts

Explicitly ask for secure implementations: "Write a login function that uses parameterized queries, implements rate limiting, and uses bcrypt for password hashing."
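Then verify that the output actually delivers all three properties. The result should look roughly like this sketch, which uses sqlite3 and bcrypt and keeps the rate limiter in memory for brevity; production code would back it with a shared store such as Redis.

# Sketch of what the prompt above should yield: parameterized lookup,
# bcrypt verification, and a simple per-username rate limit
import sqlite3
import time
import bcrypt

_ATTEMPTS: dict[str, list[float]] = {}

def login(conn: sqlite3.Connection, username: str, password: str) -> bool:
    # Rate limit: at most 5 attempts per username per minute
    now = time.monotonic()
    recent = [t for t in _ATTEMPTS.get(username, []) if now - t < 60]
    if len(recent) >= 5:
        return False
    _ATTEMPTS[username] = recent + [now]

    # Parameterized query: username is bound as data, never interpolated
    row = conn.execute(
        "SELECT password_hash FROM users WHERE username = ?", (username,)
    ).fetchone()
    if row is None:
        return False

    # Constant-time comparison against the stored bcrypt hash
    return bcrypt.checkpw(password.encode("utf-8"), row[0])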

3. Use automated security scanning

Integrate SAST (Static Application Security Testing) tools into your CI/CD pipeline. Tools like Semgrep, CodeQL, or Snyk can catch common vulnerabilities automatically.
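As a concrete example, a CI gate can invoke Semgrep with its community ruleset and block the merge on findings. This sketch assumes Semgrep is installed and uses its --error flag, which makes the scan exit non-zero when it reports results.

# Sketch: block the pipeline when Semgrep reports findings (assumes semgrep on PATH)
import subprocess
import sys

result = subprocess.run(["semgrep", "scan", "--config", "auto", "--error"])
if result.returncode != 0:
    sys.exit("Semgrep reported findings; failing the build.")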

4. Keep dependencies updated

Don't trust AI suggestions for package versions. Use tools like Dependabot or Renovate to ensure you're using packages without known vulnerabilities.

5. Choose your AI assistant wisely

Research and compare different AI coding models before committing to one. The quality and security of generated code varies significantly between models.

The human factor remains critical

AI coding assistants are powerful tools, but they are exactly that — tools. They amplify the capabilities of the developer using them. A security-conscious developer using AI will produce more secure code than one who blindly accepts every suggestion.

The irony of vibe coding is that it requires more security expertise, not less. When you write code yourself, you think through each line. When AI writes it for you, you need the knowledge to spot what's wrong in code you didn't write — a fundamentally harder task.

Invest in security training. Understand the OWASP Top 10. Learn to recognize vulnerable patterns. This knowledge becomes even more valuable in an AI-assisted workflow.

Conclusion

Vibe coding isn't inherently bad — it's a natural evolution as AI capabilities improve. The danger lies in treating AI-generated code as trusted code. Every line of code in your application is your responsibility, regardless of who or what wrote it.

By choosing high-quality AI models, implementing proper review processes, and leveraging automated security tools, you can enjoy the productivity benefits of AI-assisted development without compromising on security.

The future of secure development isn't AI or humans — it's AI and humans working together, each covering the other's blind spots.