# Slop Scoring
Slopcannon assigns a slop score to each PR by detecting patterns commonly found in low-effort AI-generated code. This page explains exactly what we look for and how scores are calculated.
## Philosophy
Our detection is:
- Pattern-based — Rules and heuristics, not ML. Every point has a traceable reason.
- Diff-scoped — We analyze what changed, not the whole repo.
- Transparent — Each finding links to a specific line with an explanation.
- Additive — No maximum score. More issues = higher score.
We're not trying to block or punish. We're providing visibility into code quality signals that humans might miss in review.
## Scoring Overview
| Finding Type | Points | Severity |
|---|---|---|
| Falsy Coalesce Bug | +3 | High |
| Silent Catch | +3 | High |
| Mystery Fallback | +2-3 | Medium-High |
| Nested Ternary | +2 | Medium |
| Unnecessary Abstraction | +1-2 | Low-Medium |
| Uncertainty Masking | +1-2 | Low-Medium |
| Style Drift | +1 | Low |
| Naming Drift | +1 | Low |
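Because scoring is additive, the final number is simply the sum of the points above across all findings. A minimal sketch of the idea (the finding objects and field names here are hypothetical, not Slopcannon's actual data model):

```javascript
// Hypothetical additive scoring: each finding contributes its points, no cap
const findings = [
  { type: 'silent-catch', points: 3 },   // High severity
  { type: 'nested-ternary', points: 2 }, // Medium
  { type: 'style-drift', points: 1 },    // Low
];

const slopScore = findings.reduce((sum, f) => sum + f.points, 0);
console.log(slopScore); // 6
```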
## Detection Patterns
### Silent Catches (+3 points)
Empty or minimal error handling that swallows exceptions without proper logging or rethrowing.
What we flag:
```javascript
// Empty catch block
try {
  await riskyOperation();
} catch (e) {}

// Comment-only catch
try {
  parseData();
} catch (error) {
  // ignore
}

// Console.log only (no structured logging)
try {
  fetchUser();
} catch (e) {
  console.log(e);
}
```

Why it matters: AI assistants often add try/catch blocks defensively without thinking about what should actually happen when errors occur. Real error handling logs structured data, rethrows, or handles specific error types.
Better approach:
```javascript
try {
  await riskyOperation();
} catch (error) {
  logger.error('Operation failed', { error, context: relevantData });
  throw error; // or handle specifically
}
```

### Falsy Coalesce Bug (+3 points)
Using `||` when `??` was intended — a common AI mistake that causes real bugs.
What we flag:
```javascript
// BUG: 0 is a valid count, but || treats it as falsy
const count = response.count || 10;

// BUG: false is a valid setting, but || treats it as falsy
const enabled = config.darkMode || true;
```

Why it matters: AI models often use `||` for defaults without understanding that `0`, `""`, and `false` are valid values that will incorrectly trigger the fallback. This creates subtle bugs.
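The difference is easy to verify at a REPL:

```javascript
// || falls back on ANY falsy value; ?? only on null/undefined
const countOr = 0 || 10;       // 10: the valid count 0 is discarded
const countNullish = 0 ?? 10;  // 0: kept, since 0 is not null/undefined

const flagOr = false || true;      // true: the explicit false is overridden
const flagNullish = false ?? true; // false: kept

console.log(countOr, countNullish, flagOr, flagNullish);
```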
Better approach:
```javascript
// Use ?? for nullish coalescing (only null/undefined trigger fallback)
const count = response.count ?? 10;
const enabled = config.darkMode ?? true;
```

### Mystery Fallbacks (+2-3 points)
Fallbacks that mask uncertainty rather than handling it properly.
What we flag (high confidence only):
```javascript
// Magic string fallbacks - hiding missing data
const name = user.name || "Unknown";
const status = result.status || "N/A";

// Empty fallbacks on response/data variables
const data = response.data || {};
const items = result.items || [];

// Empty fallbacks in catch blocks
catch (err) {
  return cached || {}; // Silently swallows the error
}

// Empty fallbacks after await
const user = await fetchUser() || {}; // Masks fetch failure
```

What we DON'T flag:
- Fallbacks with nearby comments explaining the reason
- Display/UI fallbacks (clearly for presentation)
- Documented optional fields with sensible defaults
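For example, a fallback like the following would pass, because a nearby comment explains why the value can legitimately be missing (the scenario and names here are illustrative):

```javascript
const user = { id: 7 }; // e.g. a service account returned by a legacy endpoint

// Not flagged: the comment documents why displayName can be absent.
// Legacy v1 payloads omit displayName for service accounts; "System" is
// the agreed label for those rows in the admin UI.
const displayName = user.displayName || 'System';
console.log(displayName); // "System"
```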
Why it matters: AI adds defensive fallbacks without understanding the data model. Empty objects and "Unknown" strings mask real errors — you never find out the API failed because the code silently continues.
Better approach:
```javascript
// Throw on unexpected missing data
const data = response.data;
if (!data) {
  throw new Error('API returned no data');
}

// Or log/track when fallback is used
const user = await fetchUser().catch(err => {
  logger.warn('User fetch failed, using cached', { err });
  return cachedUser;
});
```

### Complexity Creep (+2 points)
AI-typical complexity patterns.
What we flag:
#### Nested ternaries (+2 points)

```javascript
const result = condition1
  ? condition2 ? valueA : valueB
  : condition3 ? valueC : valueD;
```

Why it matters: AI models frequently generate chained ternary operators instead of using if/else or switch statements. Humans typically reach for if/else when conditions get complex, but LLMs often produce nested ternaries that are hard to read.
Better approach:
```javascript
// Use if/else for complex conditions
let result;
if (condition1) {
  result = condition2 ? valueA : valueB;
} else {
  result = condition3 ? valueC : valueD;
}

// Or a switch/lookup for multiple cases
const results = {
  [key1]: valueA,
  [key2]: valueB,
};
const result = results[key] ?? defaultValue;
```

Note: We don't flag deep nesting or high branch density — those are code quality issues but not specifically AI patterns. Humans write arrow code too.
### Uncertainty Masking (+1-2 points)
Defensive code that hides uncertainty about data shapes rather than validating properly.
What we flag:
#### Excessive optional chaining (+2 points)

```javascript
const value = response?.data?.user?.profile?.settings?.theme;
```

#### TypeScript any type (+2 points)

```typescript
function process(data: any) {
  return data.items.map((x: any) => x.value);
}
```

#### Multiple null checks in one expression (+1 point)

```javascript
if (user !== null && user !== undefined && user.id !== null) { ... }
```

#### Empty fallback defaults (+1 point)

```javascript
const items = data.items || [];
const config = opts || {};
```

Why it matters: Long optional chains and `any` types are signs that the developer doesn't understand the data shape. This leads to runtime errors that TypeScript was supposed to prevent.
Better approach:
```typescript
// Validate at the boundary
interface UserResponse {
  data: { user: User };
}

function processResponse(response: UserResponse) {
  const { theme } = response.data.user.profile.settings;
  // Now TypeScript knows the shape
}
```

### Unnecessary Abstraction (+1-2 points)
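In plain JavaScript, where no compiler enforces the shape, the same idea can be sketched as a single boundary check that fails loudly instead of chaining `?.` everywhere (`getTheme` is a hypothetical helper):

```javascript
// One guarded access at the boundary instead of optional chains everywhere
function getTheme(response) {
  const settings = response?.data?.user?.profile?.settings;
  if (!settings || typeof settings.theme !== 'string') {
    throw new Error('Malformed user response: missing data.user.profile.settings.theme');
  }
  return settings.theme;
}

const ok = { data: { user: { profile: { settings: { theme: 'dark' } } } } };
console.log(getTheme(ok)); // "dark"
```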
Premature or gratuitous abstraction layers.
What we flag:
#### Generic utility functions in non-utility files (+1 point)

```javascript
// In components/UserCard.jsx
function formatDate(date) { ... }
function processUserData(user) { ... }
```

#### Wrapper/proxy/delegate patterns (+2 points)

```javascript
function handleClickWrapper(e) {
  handleClick(e);
}

function userServiceProxy(method, ...args) {
  return userService[method](...args);
}
```

Why it matters: LLMs love to create abstractions. They'll extract single-use helpers, create unnecessary factories, and add indirection that makes code harder to follow. Good abstraction emerges from duplication; premature abstraction obscures intent.
Better approach: Write the code inline first. Extract when you see actual repetition. Name things for what they do, not for architectural patterns.
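As a tiny illustration of removing a pass-through wrapper (the names are hypothetical):

```javascript
const handleClick = (label) => `clicked:${label}`;

// Before: a wrapper that only delegates adds a hop with no behavior
function handleClickWrapper(label) {
  return handleClick(label);
}

// After: reference the function directly wherever the wrapper was used
const onClick = handleClick;

console.log(handleClickWrapper('save')); // "clicked:save"
console.log(onClick('save'));            // "clicked:save"
```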
### Style Drift (+1 point)
Inconsistent formatting that suggests copy-paste from different sources.
What we flag:
- Trailing whitespace — Often invisible but shows careless editing
- Mixed tabs and spaces — Classic sign of code from multiple sources
- Inconsistent quote style — Mixing `'single'` and `"double"` quotes in the same file
Why it matters: Style inconsistencies make code harder to read and suggest the author isn't familiar with the codebase. They're also a common artifact of AI-assisted coding, where snippets from different training sources get combined.
Better approach: Use a formatter (Prettier, Black, gofmt) and stick to it.
### Naming Drift (+1 point)
Mixed naming conventions within the same PR.
What we flag:
```javascript
// camelCase and snake_case mixed
const userName = 'alice';
const user_email = '[email protected]';
const getUserProfile = () => { ... };
const get_user_settings = () => { ... };
```

Why it matters: Consistent naming is one of the clearest signals of intentional code. Mixing conventions suggests copy-paste from different sources or an AI that doesn't understand your codebase's style.
## File Classification
Not all files get the same analysis:
| File Type | Examples | Analysis |
|---|---|---|
| Executable | .ts, .js, .py, .go, .rs | Full analysis |
| Test | *.test.ts, *_test.go | Style checks only |
| Declarative | .json, .yaml, .toml, .css | No behavioral checks |
| Generated | node_modules/, *.lock, dist/ | Skipped entirely |
We don't flag complexity in config files or run behavioral heuristics on test code (where boilerplate is expected).
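A hypothetical sketch of this kind of classification (the patterns below illustrate the table, not Slopcannon's actual matching rules):

```javascript
// Order matters: generated and test paths would otherwise match 'executable'
function classifyFile(path) {
  if (/node_modules\/|\.lock$|(^|\/)dist\//.test(path)) return 'generated';
  if (/\.test\.ts$|_test\.go$/.test(path)) return 'test';
  if (/\.(json|ya?ml|toml|css)$/.test(path)) return 'declarative';
  if (/\.(ts|js|py|go|rs)$/.test(path)) return 'executable';
  return 'other';
}

console.log(classifyFile('src/app.ts'));      // "executable"
console.log(classifyFile('src/app.test.ts')); // "test"
console.log(classifyFile('dist/bundle.js'));  // "generated"
```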
## Score Interpretation
| Score | What it means |
|---|---|
| 0-10 | Clean — few or no patterns detected |
| 11-30 | Minor issues — worth a quick look |
| 31-60 | Needs attention — multiple quality signals |
| 61+ | Significant concerns — likely needs rework |
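The bands map naturally to a small helper (illustrative only, not part of Slopcannon's API):

```javascript
// Map a slop score to the interpretation bands above
function interpretScore(score) {
  if (score <= 10) return 'clean';
  if (score <= 30) return 'minor issues';
  if (score <= 60) return 'needs attention';
  return 'significant concerns';
}

console.log(interpretScore(7));  // "clean"
console.log(interpretScore(45)); // "needs attention"
```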
Remember: these are signals, not judgments. A high score doesn't mean bad code — it means there are patterns worth reviewing. Some codebases legitimately need defensive fallbacks; some PRs are intentionally adding complexity.
The goal is visibility, not gatekeeping.
## What We Don't Flag
Slopcannon explicitly avoids:
- ML-based "AI detection" — No statistical classifiers, no perplexity analysis
- Blocking PRs — We report, we don't enforce
- Deep semantic analysis — We don't check if your logic is correct
- Hallucinated APIs — We don't verify that methods exist (yet)
- Test quality — Test code has relaxed rules by design
We believe simple, explainable heuristics are more useful than black-box AI detectors. Every point can be traced to a specific line and a specific reason.