I use AI coding tools every day. Copilot, Claude, Cursor. They make me faster. I ship things in hours that used to take days.
But speed has a cost. The code AI tools produce isn't broken. It works. The problem is structural. Functions grow to 200 lines because the AI keeps adding logic instead of extracting it. Files balloon because every new feature gets appended to whatever's already open. Validation logic gets duplicated across three handlers because the AI doesn't know it already exists somewhere else.
I maintain six iOS apps, a handful of web projects, and various CLI tools. After a few months of heavy AI-assisted development, I noticed the same structural problems in every single one.
The mess nobody talks about
Duplicated code across files. This is the big one. AI tools don't have full context of your codebase. They generate what works for the current prompt. The result is the same block of logic appearing in multiple places, sometimes with tiny variations.
```typescript
// src/api/users.ts
function validateEmail(email: string): boolean {
  const regex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!regex.test(email)) return false;
  if (email.length > 254) return false;
  return true;
}

// src/api/auth.ts (same logic, different file)
function checkEmail(input: string): boolean {
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!pattern.test(input)) return false;
  if (input.length > 254) return false;
  return true;
}
```
When you fix a bug in one copy, the other stays broken. Multiply this across a real codebase and you've got a maintenance nightmare.
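The fix is mechanical once you see it: extract the logic into one shared module and import it from both call sites. A minimal sketch — the module path and export name here are my own, not from either file above:

```typescript
// src/lib/email.ts (illustrative path) — the single source of truth.
export function isValidEmail(email: string): boolean {
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  // RFC 5321 caps an address at 254 characters
  return email.length <= 254 && pattern.test(email);
}

// src/api/users.ts and src/api/auth.ts now both do:
// import { isValidEmail } from "../lib/email";
```

Fix a bug here once and every caller gets the fix.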
Functions that do too much. AI tools write sequentially. They keep adding to the current function rather than splitting responsibilities. You end up with 150-line functions that handle validation, transformation, API calls, and error handling all in one block. Good luck reviewing that in a PR.
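A sketch of the remedy, with an invented order-processing example: each responsibility becomes its own small function, and the top-level function reads as a summary of the steps.

```typescript
// All names here are invented for illustration.
type Order = { id: string; items: string[] };

// One job: report what's wrong, if anything.
function validateOrder(order: Order): string[] {
  const errors: string[] = [];
  if (!order.id) errors.push("missing id");
  if (order.items.length === 0) errors.push("no items");
  return errors;
}

// One job: clean up the data.
function normalizeOrder(order: Order): Order {
  return { ...order, items: order.items.map((item) => item.trim().toLowerCase()) };
}

// The top-level function is now reviewable at a glance.
function processOrder(order: Order): Order {
  const errors = validateOrder(order);
  if (errors.length > 0) throw new Error(errors.join(", "));
  return normalizeOrder(order);
}
```

Each piece can be reviewed, tested, and reused on its own.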
Files that grow unchecked. Same principle. A utils.ts that starts at 100 lines quietly becomes 800. Nobody notices because each addition is small. But the file becomes impossible to navigate and reason about.
Dead code everywhere. AI tools add imports, exports, and helper functions that seem useful at the time. Requirements change, code gets refactored, but the old pieces stay. Half your exports aren't imported anywhere. Entire files sit untouched. New developers (and AI tools reading your codebase) waste time understanding code that does nothing.
Deeply nested logic. Five levels of if statements and loops tangled together. AI models generate these because they solve the problem step by step without refactoring for readability. The code works but reading it requires a stack in your head.
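Guard clauses flatten most of these. A before-and-after sketch with a made-up discount rule:

```typescript
type User = { active: boolean; orders: number };

// What step-by-step generation tends to produce:
function discountNested(user?: User): number {
  if (user) {
    if (user.active) {
      if (user.orders > 10) {
        return 0.2;
      } else {
        return 0.1;
      }
    } else {
      return 0;
    }
  } else {
    return 0;
  }
}

// The same logic with guard clauses: each early return
// removes a level of nesting.
function discountFlat(user?: User): number {
  if (!user) return 0;
  if (!user.active) return 0;
  return user.orders > 10 ? 0.2 : 0.1;
}
```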
Too many parameters. Functions that take six, seven, eight arguments because the AI keeps bolting on options. A clear sign the function needs restructuring or an options object.
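The usual TypeScript fix is an options object with defaults. A sketch, with hypothetical option names — the point is the shape, not the fields:

```typescript
interface RequestOptions {
  retries?: number;
  timeoutMs?: number;
  cache?: boolean;
}

// Callers write resolveOptions({ timeoutMs: 100 }) instead of passing
// (3, 100, true, ...) positionally, so every argument is named at the call site
// and new options don't break existing callers.
function resolveOptions(opts: RequestOptions = {}): Required<RequestOptions> {
  return {
    retries: opts.retries ?? 3,
    timeoutMs: opts.timeoutMs ?? 5000,
    cache: opts.cache ?? true,
  };
}
```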
Why this compounds
Each of these problems is manageable in isolation. But they feed each other.
Duplicated code means bugs get fixed in one place and stay broken in another. Large functions become impossible to review properly, so more issues slip through. Dead code confuses anyone reading the codebase, including the AI tools you're using to write more code. The faster you ship, the faster the mess grows.
The worst part is how invisible it is. The code works. Tests pass. Users are happy. But every week, the codebase gets a little harder to change safely.
Existing tools miss the big picture
ESLint catches syntax issues and some style problems. Prettier formats your code. TypeScript strict mode helps with types. These tools are good at what they do.
But who catches that you've got three copies of the same validation logic spread across different files? Who flags that your utils file hit 800 lines? Who notices that half your exports aren't imported anywhere?
Each tool works in isolation. You'd need to configure and run five or six different tools, then somehow combine their output into a single picture of your codebase health. Nobody does that.
What aislop checks
aislop is a code quality CLI that runs everything in one command and gives you a single score out of 100. Zero config. One scan. Here's what it covers.
Code quality. Functions longer than 80 lines get flagged. Files over 400 lines get flagged. Nesting deeper than 5 levels, functions with more than 6 parameters, and duplicate code blocks of 12 or more lines across files. All configurable, all with sensible defaults.
Dead code. Unused files, unused exports, unused types. This runs via knip under the hood. You don't configure it. aislop handles the integration and rolls results into the score.
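For illustration, this is the shape of what a knip-style pass flags — the file and function here are invented:

```typescript
// src/utils/format.ts — hypothetical file. The export compiles, works,
// and is never imported anywhere, which is exactly what gets reported.
export function formatCurrency(n: number): string {
  return `$${n.toFixed(2)}`;
}
```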
AI slop patterns. The cherry on top. Trivial comments that restate the code. Swallowed exceptions with empty catch blocks. Thin wrappers that add indirection with no value. Generic variable names like data2 and temp. Console leftovers in production code. as any casts that bypass TypeScript's type system.
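Two of these patterns side by side, in invented code: a swallowed exception that hides failures, and a version that keeps them visible.

```typescript
// Swallowed exception: the catch block eats the error, so malformed
// input fails silently and debugging starts from nothing.
function parseSilently(json: string): unknown {
  try {
    return JSON.parse(json);
  } catch {
    return null; // the failure vanishes here
  }
}

// Better: surface the failure, with context attached.
function parseOrThrow(json: string): unknown {
  try {
    return JSON.parse(json);
  } catch (err) {
    const reason = err instanceof Error ? err.message : String(err);
    throw new Error(`invalid JSON payload: ${reason}`);
  }
}
```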
Security. Hardcoded secrets, eval usage, innerHTML assignments, SQL injection patterns. Basic checks that catch common mistakes before they ship.
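A minimal sketch of the hardcoded-secret pattern and one way out, with invented names: read the credential from the environment and fail fast at startup instead of shipping the key in source.

```typescript
// What the check flags: a live credential committed to source.
// const API_KEY = "sk-live-abc123";   // <- hardcoded secret (value invented)

// One way out: load it from the environment and fail loudly when it's missing.
function getApiKey(env: Record<string, string | undefined>): string {
  const key = env.API_KEY;
  if (!key) throw new Error("API_KEY is not set");
  return key;
}

// In production code: getApiKey(process.env)
```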
It also bundles language-specific tools. For JS/TS it includes oxlint, biome, and knip. For Python, ruff. For Go, golangci-lint. You don't configure any of them.
```text
$ npx aislop scan

aislop v1.0.0
Scanning 142 files across 3 languages...

Code Quality
  ⚠ src/utils/helpers.ts       file-too-large (812 lines)
  ⚠ src/api/client.ts:23       function-too-long (147 lines)
  ⚠ src/api/auth.ts:45         duplicate-block (matches src/api/users.ts:12)
  ⚠ src/handlers/order.ts:89   deep-nesting (6 levels)

Dead Code
  ℹ src/utils/format.ts        unused-export (formatCurrency)
  ⚠ src/types/legacy.ts        unused-file

AI Patterns
  ⚠ src/api/client.ts:67       swallowed-exception
  ℹ src/services/cache.ts:4    trivial-comment

Score: 71/100
Issues: 8 (6 warning, 2 info)
Files scanned: 142
```
One number. You can gate your CI pipeline on it. Score below 80? The PR doesn't merge.
How I use it
I run npx aislop scan before every push. It's the last thing I do before code leaves my machine.
Every CI pipeline across my projects includes an aislop check. If the score drops below the threshold, the build fails. This catches things that slip through during development, especially when AI tools generate code in bulk.
I dogfood it aggressively. aislop scans itself. Every bug I find, every false positive, every edge case goes straight into an issue on the repo. The tool gets better because I use it on real projects daily.
The biggest win has been catching duplication and dead code early. Before aislop, these problems grew silently until a refactor forced me to untangle everything. Now they get flagged the moment they appear.
It's open source
I built aislop because I needed it. But the problem isn't unique to me. Anyone shipping code fast, whether with AI tools or not, accumulates structural debt. The difference is whether you catch it early or discover it six months later.
It's MIT licensed, on GitHub, and on npm. Run npx aislop scan on your project and see what score you get.
Issues and PRs are welcome. The patterns keep evolving, so the rules need to evolve too.