Spaghetti or Modular? How to Assess Your Code Quality in 5 Minutes
- Ctrl Man
- Software Development, Productivity, Tools
- 11 Mar, 2026
The Question That Started It All
I’ve been developing trading bots for three months. One strategy is profitable. The rest? Not so much.
Looking at my repository, I had a nagging question: Is my code well-organized, or is it a tangled mess of spaghetti?
I asked an AI assistant to help me figure this out. What followed was a deep dive into code metrics, complexity ratios, and a complete architectural revelation.
Here’s what I learned—and the exact commands you can use to assess your own projects.
Meet scc: Your Code Quality X-Ray
scc (Succinct Code Counter) is a command-line tool that analyzes your codebase and spits out detailed metrics:
scc .
The output looks like this:
───────────────────────────────────────────────────────────────────────────────
Language Files Lines Blanks Comments Code Complexity
───────────────────────────────────────────────────────────────────────────────
Python 13,532 5,947,021 623,050 686,456 4,637,515 421,794
JavaScript 9,755 829,629 13,141 39,958 776,530 160,776
… (remaining languages trimmed from this excerpt) …
───────────────────────────────────────────────────────────────────────────────
Total 34,385 22,785,813 730,742 968,083 21,086,988 641,639
But here’s the thing: those raw numbers can lie.
My initial scan showed 22.7 million lines of code. Sounds massive, right? Well, 12.6 million of those were CSV data files. Another 2.2 million were plain text logs. The tool was counting everything—including my node_modules folder with 9,000+ JavaScript files from dependencies.
First lesson: Before you can assess YOUR code, you need to filter out the noise.
The Clean Analysis: What Did I Actually Write?
Here’s the command that changed everything:
scc . --exclude-dir node_modules,venv,.venv,env,dist,build,data,csv
This tells scc to ignore:
- node_modules — Third-party JavaScript dependencies
- venv, .venv, env — Python virtual environments
- dist, build — Build artifacts
- data, csv — Data files (not code)
The result? My 22.7 million lines dropped to ~11 million. Then I filtered further to see only MY code:
Language Files Lines Code Complexity
───────────────────────────────────────────────────
Python 985 334,794 259,104 83,804
TypeScript 60 5,051 4,225 232
Now we’re talking. This is the code I actually wrote.
The Spaghetti Ratio: Your Most Important Metric
Here’s where it gets interesting.
Cyclomatic Complexity measures the number of decision paths through your code. Every if, else, for, while, or try/except adds complexity.
The key metric is the Complexity-to-Lines ratio:
Complexity Ratio = Code Lines ÷ Complexity
What the Numbers Mean
| Ratio | Status | What It Tells You |
|---|---|---|
| 1:15+ | ✅ Highly Modular | Small functions, single responsibility |
| 1:8-14 | ✅ Healthy | Reasonable function sizes |
| 1:4-7 | ⚠️ Getting Dense | Some functions doing too much |
| 1:3 or lower | ❌ Spaghetti Alert | Deeply nested logic, “God functions” |
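The ratio and the bands above are easy to script. Here is a minimal Python sketch; the cut-offs come straight from the table, while `complexity_ratio` and `classify` are illustrative names, not part of scc:

```python
def complexity_ratio(code_lines: int, complexity: int) -> float:
    """Lines of code per decision point (higher = more modular)."""
    if complexity == 0:
        return float("inf")  # no branches at all: maximally linear
    return code_lines / complexity

def classify(ratio: float) -> str:
    """Map a ratio onto the rough bands from the table above."""
    if ratio >= 15:
        return "Highly Modular"
    if ratio >= 8:
        return "Healthy"
    if ratio >= 4:
        return "Getting Dense"
    return "Spaghetti Alert"

# The article's own numbers:
print(classify(complexity_ratio(259_104, 83_804)))  # Python code: Spaghetti Alert
print(classify(complexity_ratio(4_225, 232)))       # TypeScript code: Highly Modular
```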
My Python Code: 1:3.09 Ratio
Python: 259,104 lines ÷ 83,804 complexity = 1:3.09
Translation: For every 3 lines of code, I’m introducing a new decision path. This is the classic signature of what my AI assistant called a “Trading Bot God Script”—massive functions checking dozens of conditions:
# Spaghetti example (simplified)
if rsi < 30 and macd_crossing and volume > threshold:
    if not has_open_position:
        if websocket_connected:
            if not cooldown_active:
                execute_trade()  # Finally execute the trade
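A common way out of this shape is to flatten the nesting into guard clauses, so each precondition exits early instead of adding another indent level. A hypothetical refactor of the snippet above (the function and argument names are invented for illustration):

```python
def should_enter_trade(rsi: float, macd_crossing: bool, volume: float,
                       threshold: float, has_open_position: bool,
                       websocket_connected: bool, cooldown_active: bool) -> bool:
    """Flat guard clauses: each check exits early instead of nesting."""
    if not (rsi < 30 and macd_crossing and volume > threshold):
        return False  # no entry signal
    if has_open_position:
        return False  # already in the market
    if not websocket_connected:
        return False  # can't trust stale data
    if cooldown_active:
        return False  # still backing off from the last trade
    return True
```

Same logic, same complexity count, but each branch is now one line of intent that a reader (and a test) can hit in isolation.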
My TypeScript Code: 1:18.2 Ratio
TypeScript: 4,225 lines ÷ 232 complexity = 1:18.2
Translation: Highly linear, declarative, clean. Each function does one thing and returns. This is the architecture I should have been using everywhere.
Finding Your Worst Offenders
Want to see which files are the biggest tangles? Run this:
scc . --exclude-dir node_modules,venv,.venv,env,dist,build,data,csv \
--by-file --sort complexity | head -n 30
What the flags do:
- --by-file — Lists every file individually (not grouped by language)
- --sort complexity — Puts the most complex files at the top
- head -n 30 — Shows only the top 30 lines
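If you prefer to post-process the numbers yourself, scc can also emit machine-readable output via --format json. A small sketch that ranks file records by ratio, worst first; it assumes each record carries Location, Code and Complexity keys, which is the shape of scc's per-file JSON:

```python
def worst_offenders(files: list[dict], top: int = 5) -> list[tuple[float, str]]:
    """Rank file records by code-lines-per-decision-point, most tangled first."""
    scored = [
        (f["Code"] / max(f["Complexity"], 1), f["Location"])
        for f in files
    ]
    return sorted(scored)[:top]  # lowest ratio = densest logic

# Sample records using two of the article's files:
sample = [
    {"Location": "god_script.py", "Code": 1195, "Complexity": 419},
    {"Location": "darvas_lib.py", "Code": 90, "Complexity": 3},
]
print(worst_offenders(sample))  # god_script.py ranks first at ~1:2.9
```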
My Top 5 Worst Files
File Lines Code Complexity Ratio
─────────────────────────────────────────────────────────────────────
~p_v6a_pure_towers_v2_ml.py 1,621 1,195 419 1:2.8
~pure_towers_v3_filtered.py 1,486 1,096 389 1:2.8
~_v6a_pure_towers_v2_fix.py 1,438 1,074 376 1:2.8
~rt_server_v7_mirror_fix.py 1,598 987 375 1:2.6
~top_v6a_pure_towers_sui.py 1,387 1,043 369 1:2.8
The pattern: All these files are 1,000+ lines with brutal 1:2.8 ratios. These are “God Scripts” handling API connections, WebSocket state, indicator math, trade execution, and risk management—all in one massive loop.
The irony: None of these strategies were profitable.
The Profitable Strategy: What Made It Different?
I mentioned one strategy was profitable. Let’s find it:
scc . --exclude-dir node_modules,venv,.venv,env,dist,build,data,csv \
--by-file | grep -i "darvas"
The Winner’s Metrics
File Lines Code Complexity Ratio
─────────────────────────────────────────────────────────────────
~art_server_darvas_kas_3.py 445 377 88 1:4.3
~_darvas_hybrid_fix_psar.py 412 296 88 1:3.4
~art_server_darvas_kas_2.py 376 283 88 1:3.2
Key differences:
| Metric | Failed Strategies | Profitable Strategy |
|---|---|---|
| Lines of Code | 1,000-1,600 | 350-450 |
| Complexity Ratio | 1:2.8 | 1:4.3 |
| File Count | 1 giant file | Multiple focused files |
| Architecture | Monolithic | Modular with libraries |
The profitable strategy had:
- A dedicated darvas_lib.py (90 lines, complexity: 3) for pure math
- Separate execution scripts for different timeframes (H1, M15, M5)
- HTML visualization files to actually SEE what the bot was doing
The lesson: Complexity doesn’t equal profitability. My most tangled code was my worst performer.
The Professional Workflow: .sccignore
If you’re going to use scc regularly, create a .sccignore file in your project root:
# Ignore dependencies
node_modules/
venv/
.venv/
# Ignore build artifacts
dist/
build/
# Ignore massive data files
*.csv
*.txt
*.json
# Ignore cache files
__pycache__/
.git/
Then you can just run:
scc .
…and it automatically filters everything out.
From Spaghetti to Tools: Salvaging a Messy Codebase
After analyzing 22 million lines (most of it failed experiments), I wanted to salvage the work. Here’s what I learned:
What I Actually Built
I didn’t build a trading bot. I built a quantitative research pipeline.
The “bot” is the tiny 350-line script at the end. The other 22 million lines are the factory that produced it:
- Historical data downloads
- WebSocket connectors
- HTML charting visualizers
- 1,000 failed strategy variations
Extracting Reusable Tools
Instead of one giant tangled repo, I’m extracting three standalone tools:
1. Market Replay Visualizer — Custom HTML/JS charting engine to watch strategies trade historical data
2. Data Warehouse (ETL Pipeline) — Scripts to fetch, clean, and format market data without relying on rate-limited APIs
3. Technical Analysis Library — Pure math functions (like the darvas_lib.py) that can be reused across projects
The extraction process:
# Create clean workspace
mkdir -p ~/kasp_suite/{kasp_visualizer/public,kasp_data/data,kasp_ta}
# Copy visualizer files
cp legacy_broken_backtests/chart_viewer*.html ~/kasp_suite/kasp_visualizer/public/
cp legacy_broken_backtests/chart_server*.py ~/kasp_suite/kasp_visualizer/
# Copy data fetchers
cp -r src/fetchers/* ~/kasp_suite/kasp_data/
# Copy math libraries
cp legacy_broken_backtests/darvas_lib.py ~/kasp_suite/kasp_ta/
cp -r src/indicators/* ~/kasp_suite/kasp_ta/
Now, instead of opening a 1,500-line God Script, I can:
- Use kasp_data to download historical data
- Pass it through kasp_ta to calculate indicators
- Pipe output into kasp_visualizer to watch it trade in a browser
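As a sketch of how those three pieces compose: the real kasp_* packages are not public, so the functions below are self-contained stand-ins that mimic the fetch → indicators → render flow:

```python
def fetch_candles(symbol: str) -> list[dict]:
    """Stand-in for kasp_data: return OHLC-style rows."""
    return [{"close": 100.0}, {"close": 103.0}, {"close": 101.0}]

def add_sma(rows: list[dict], period: int = 2) -> list[dict]:
    """Stand-in for kasp_ta: attach a simple moving average to each row."""
    closes = [r["close"] for r in rows]
    for i, row in enumerate(rows):
        window = closes[max(0, i - period + 1): i + 1]
        row["sma"] = sum(window) / len(window)
    return rows

def render(rows: list[dict]) -> str:
    """Stand-in for kasp_visualizer: emit something a browser could show."""
    return f"<pre>{rows}</pre>"

html = render(add_sma(fetch_candles("KAS")))
```

Each stage is a pure transformation, so any of the three can be swapped or tested without touching the others — the opposite of the God Script loop.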
Your Action Plan
Step 1: Install scc
# Ubuntu/Debian (scc isn't in the standard apt repos; use snap)
sudo snap install scc
# macOS
brew install scc
# Or download a release binary from: https://github.com/boyter/scc
Step 2: Run the Clean Analysis
cd /path/to/your/project
scc . --exclude-dir node_modules,venv,.venv,env,dist,build,data,csv
Step 3: Check Your Ratio
Complexity Ratio = Code Lines ÷ Complexity
- 1:10+ — Great! Keep doing what you’re doing
- 1:5-9 — Not bad, but watch for creeping complexity
- 1:4 or lower — Time to refactor
Step 4: Find Your Tangled Files
scc . --by-file --sort complexity | head -n 30
Look at the top 3-5 files. These are your refactoring priorities.
Step 5: Refactor with Purpose
For each tangled file:
- Identify the main responsibility
- Extract helper functions (aim for <50 lines each)
- Create separate modules for distinct concerns (config, math, execution)
- Add a lib/ or utils/ folder for reusable logic
Key Takeaways
✅ What I Learned
- Raw line counts lie — Always filter out dependencies and data files first
- Complexity ratio is king — 1:3 means spaghetti, 1:10+ means modular
- Profitability ≠ Complexity — My simplest code made money; my most complex code failed
- Failed experiments have value — They taught me what architecture works
- Visual debugging matters — The profitable strategy had HTML visualizers; the failures didn’t
❌ What I’m Doing Differently Now
- No more copy-paste development — Instead of duplicating 1,500-line files with minor tweaks, I’m using configuration files
- Modular from the start — Separate folders for math, execution, data, and visualization
- Regular complexity checks — Running scc monthly to catch creeping complexity early
- Embracing deletion — Archiving failed strategies instead of keeping them in the main codebase
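The "configuration files instead of copy-paste" point deserves a sketch. Assuming a single shared strategy engine, each variant becomes data rather than a duplicated 1,500-line script; StrategyConfig and its field names are illustrative, not from the actual repo:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StrategyConfig:
    """One strategy variant = one config object, not one copied file."""
    symbol: str
    timeframe: str
    rsi_entry: float
    cooldown_minutes: int

def describe(cfg: StrategyConfig) -> str:
    """The single engine reads parameters from the config."""
    return f"{cfg.symbol} {cfg.timeframe} (RSI<{cfg.rsi_entry})"

# Variants differ only in parameters, so the engine code exists once:
variants = [
    StrategyConfig("KAS/USDT", "H1", rsi_entry=30, cooldown_minutes=60),
    StrategyConfig("KAS/USDT", "M15", rsi_entry=25, cooldown_minutes=15),
]
for cfg in variants:
    print(describe(cfg))
```

Tweaking a variant now means editing four fields, and scc never sees another near-duplicate God Script.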
Final Thoughts
Three months of development. Twenty-two million lines of code. One profitable strategy.
Was it worth it? Absolutely.
Not because of the profitable strategy (though that’s nice). But because I learned to distinguish between writing code and engineering software.
The difference isn’t how much you write. It’s how well you structure what you write.
Your code doesn’t need to be perfect. But it does need to be maintainable. And now you have the tools to measure that.
Want to Try This Yourself?
Quick Challenge: Run scc on one of your projects and share your complexity ratio.
Drop a comment or reach out on social media. I’m curious: Are you in the “Highly Modular” zone, or did you find some spaghetti like I did?
Resources
- scc GitHub: https://github.com/boyter/scc
- Cyclomatic Complexity Explained: https://en.wikipedia.org/wiki/Cyclomatic_complexity
- Refactoring Guru: https://refactoring.guru/
This article is part of the “My AI Journey” series—real lessons from building real projects with AI assistance. No theory, just what actually worked (and what spectacularly failed).