Move Over Moody's, My Python Script Has Opinions
TLDR: I built a credit rating model in Python using only public financial ratios. No black box, no Bloomberg terminal. It called AMD's comeback, Intel's slide, and Red Robin's collapse.
The year is 2026. In the gleaming war rooms of the world’s largest investment firms, fleets of large language models scan corporate filings in real time, predicting default before the first missed coupon payment, flagging distress before management admits it exists. Pre-crime, but for capital markets.
Bron Fruise is one of the good ones. A financial precog detective working the beat where risk goes to hide. Buried in leverage ratios, obscured by optimistic revenue projections, camouflaged in the footnotes of a 10-K that nobody read carefully enough. His job is simple in theory: find the crime before it happens. Stop the default before it detonates.
All ratings produced by this model are based solely on publicly available financial filings. Nothing in this piece constitutes investment advice, and no inference of wrongdoing by any company mentioned should be drawn from the model's output.
His weapon of choice isn’t a neural network with a hundred million parameters or a proprietary black box that nobody can explain to a regulator. It’s a linear model. Four categories: Investment Grade, Low Investment/Upper Speculative, Speculative Grade, and High Risk Approaching Default. Clean, auditable, and brutally honest in a way that most analysts aren’t paid to be.
Bron doesn’t read headlines. He doesn’t follow earnings calls. He reads balance sheets, coverage ratios, and cash flow structures. And he reads them the way a detective reads a crime scene. Not for what’s there, but for what’s missing. Because in corporate finance, the silence in the numbers is usually where the body is buried.
Precrime
The origin of this model is mundane: I was building NPV models and needed reliable discount rates. To get the right discount rate you need to assess credit risk. To assess credit risk you need credit ratings. There was no clean, unified source for them so I built one.
How the Precog Works
There's no single clean source for corporate credit ratings unless you have a Bloomberg terminal, and even then you're trusting someone else's judgment. So I built my own dataset, creating my own training data by working through ratings actions company by company and recording what each was rated and for how long.
I used COVID as the starting line deliberately. In the span of a few months, M2 money supply increased 40% and M1 an almost incomprehensible 400%. That kind of monetary expansion doesn’t just show up in consumer prices; it’s bound to distort corporate balance sheets in ways that take years to fully surface. Pre-COVID and post-COVID are essentially different financial worlds, and a model trained across both without acknowledging that distinction is a model trained on noise.
Which brings me to the ratings agencies. Stellantis carries a BBB- investment grade rating with perpetual non-maturing bonds. Enron was BBB+ (investment grade) until days before its collapse. That’s not a condemnation of the entire system. Most ratings are graded on reasonable axes: growth trajectory and asset quality. But that framework has a blind spot. A company with a high efficiency ratio and persistently poor free cash flow shouldn’t be evaluated the same way as one that actually services its debt comfortably. I built my model around that distinction.
What the Crime Scene Contains
The model doesn’t care how big a company is. A billion-dollar firm with deteriorating coverage ratios and a mid-market company with the same problem look identical to it because they are identical, from a credit risk perspective. Using raw figures like total assets or market cap would just be measuring size. Instead, everything is expressed as proportional ratios: coverage, leverage, margins, turnover. The relationship between two numbers, not the numbers themselves.
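The ratio construction can be sketched as follows. The column names (`ebit`, `interest_expense`, and so on) are hypothetical stand-ins, since the article doesn't publish its schema; the point is only that every feature is a quotient, so firm size divides out:

```python
import pandas as pd

# Hypothetical raw financials for a large firm and a mid-market firm
# (figures in millions; schema is an illustrative assumption).
raw = pd.DataFrame({
    "ebit":             [1200.0, 45.0],
    "interest_expense": [150.0,  6.0],
    "total_debt":       [4000.0, 180.0],
    "total_equity":     [6000.0, 220.0],
    "revenue":          [10000.0, 500.0],
    "net_income":       [900.0,  30.0],
})

def to_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Express every feature as a proportion so absolute size drops out."""
    return pd.DataFrame({
        "interest_coverage": df["ebit"] / df["interest_expense"],
        "debt_to_equity":    df["total_debt"] / df["total_equity"],
        "net_margin":        df["net_income"] / df["revenue"],
    })

ratios = to_ratios(raw)
```

Two firms an order of magnitude apart in revenue land in the same feature space; only the relationships between their line items differ.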
Every company-period is collapsed into a single row: balance sheet, income statement, and cash flow unified into one trailing-twelve-month observation. The ratios are calculated inside the model pipeline itself, which means the source financials stay untouched and every prediction traces back to exactly where it came from. No pre-processing tables silently drifting, no black box. Full lineage to the quarter.
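A minimal sketch of the trailing-twelve-month collapse, assuming quarterly statement rows keyed by ticker and quarter (the real pipeline's shape isn't published). Keeping the quarter in the index is what preserves the lineage back to a specific filing:

```python
import pandas as pd

# Hypothetical quarterly flow-statement rows for one company.
q = pd.DataFrame({
    "ticker":  ["XYZ"] * 5,
    "quarter": pd.period_range("2023Q1", periods=5, freq="Q"),
    "revenue": [100.0, 110.0, 120.0, 130.0, 140.0],
    "ebit":    [10.0, 11.0, 12.0, 13.0, 14.0],
})

# Trailing-twelve-month observation: sum the last four quarters per company.
# The (ticker, quarter) index means every TTM row traces to its end quarter.
ttm = (
    q.sort_values("quarter")
     .set_index("quarter")
     .groupby("ticker")[["revenue", "ebit"]]
     .rolling(4)
     .sum()
     .dropna()  # first three quarters lack a full trailing year
)
```

The source frame `q` is never mutated; the TTM view is derived on the fly, which is the "no pre-processing tables silently drifting" property in miniature.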
The model was trained on 2,727 company-period observations spanning over a decade. On investment grade classification it achieves 81% precision and a 74% F1 score. Random forests and rule-based bucketing were both tested and abandoned; the bucketing approach in particular fell apart on edge cases, which is most of the interesting companies. The linear model, counter-intuitively, was the most stable and the most honest. It also flags low-confidence predictions rather than giving you a wrong answer with a straight face. Not bad for a linear model.
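The low-confidence flagging can be sketched with scikit-learn's logistic regression on synthetic data. The 0.1 band around the decision boundary is an illustrative choice, not the model's actual cutoff:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the ratio features; the real training set isn't public.
X = rng.normal(size=(400, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# Instead of forcing a label, abstain (-1) when the predicted probability
# sits near the 0.5 decision boundary.
proba = clf.predict_proba(X)[:, 1]
low_confidence = np.abs(proba - 0.5) < 0.1
labels = np.where(low_confidence, -1, (proba >= 0.5).astype(int))
```

Abstaining on the ambiguous band is what turns "wrong answer with a straight face" into an explicit "needs a human look" flag.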
Case Files #1: AMD vs Intel
AMD’s rating trajectory is arguably the most compelling validation of the model’s design. It sat at Speculative Grade from 2012 through most of 2017, briefly touching High Risk in early 2015, a period anyone who followed the semiconductor space remembers as genuinely existential for the company. Then AMD began a slow but decisive climb. By 2018 the model recognized the fundamental improvement in AMD’s financials, upgrading it to Low Investment/Upper Speculative, and by Q4 2020 AMD crossed into Investment Grade, where it has remained. That timing is no coincidence: it mirrors the Ryzen-driven revenue recovery and the dramatic improvement in AMD’s debt and coverage ratios that any credit analyst would have flagged.
Intel tells the opposite story, or at least the beginning of one. A rock-solid Investment Grade name for over a decade, the model begins flashing Speculative Grade warnings in Q1 2023 and again more persistently from mid-2024 through early 2025, right as Intel's margin compression and competitive losses were becoming impossible to ignore in the financials. Notably, by Q3 2025 the model pulled it back to Investment Grade, suggesting the ratios stabilized before the headlines did. The model didn't read the headlines either way. It read the ratios.
Case Files #2: Tesla vs Ford
Tesla versus Ford is a case study in how core financials tell a different story than market narrative. Ford, the century-old automaker, never once cracked Investment Grade in this model, sitting stubbornly in Low Investment/Upper Speculative for the entire 13-year window. That’s not entirely surprising when you look at what Ford was actually selling during this period. The F-150 carried the whole company while the rest of the lineup struggled to justify its existence. The Mustang Mach-E pleased nobody, the electric F-150 Lightning launched with great fanfare and then sat on lots, and billions in EV investment produced products that neither EV buyers nor traditional truck buyers were particularly excited about. When your hero product is doing all the heavy lifting and your modernization strategy is mediocre, the ratios reflect it.
Tesla’s journey is more complicated than its fans would admit. Starting as Speculative Grade in 2012, briefly touching High Risk in early 2014 during a period of intense cash burn, Tesla spent the better part of a decade grinding upward through the speculative tiers on the strength of an idea as much as a product. The investment community, and frankly the broader culture, extended Tesla extraordinary goodwill on the promise of an electrified future, and that goodwill translated into access to capital that kept the balance sheet alive long enough for the operations to catch up. By Q1 2022 the model flipped it to Investment Grade, which does align with genuine margin expansion and positive free cash flow. But it’s worth acknowledging that Tesla earned the right to reach that point partly because the market believed in the mission before the mission fully delivered.
The uncomfortable takeaway: Ford bet on legacy and half-measures. Tesla bet on narrative long enough for the fundamentals to arrive.
Case Files #3: Texas Roadhouse vs Red Robin
Texas Roadhouse and Red Robin are both casual dining chains, but their credit trajectories over the past two years could not be more different. Texas Roadhouse sits at Investment Grade for every single period in the dataset. Consistent, stable, and exactly what you’d expect from a concept that invested in operational efficiency through mobile ordering and table-side technology, protected its margins, and kept price increases modest enough to hold traffic.
Red Robin is the cautionary tale. Starting at Investment Grade in late 2023, the model begins downgrading it almost every quarter. Low Investment/Upper Speculative by mid-2024, Speculative by Q3 2024, and by Q1 2025 the model flags it as High Risk Approaching Default, where it has remained. That’s a collapse through all four tiers in roughly six quarters. What the financials are reflecting is a concept caught in the worst possible position in the market. Not cheap enough to compete with Chipotle and Five Guys on value, and not differentiated enough to hold customers who decided to treat themselves somewhere nicer. When consumers got squeezed, they either traded down or traded up. Red Robin was nobody’s first choice in either direction. Heavy lease obligations and a burger category that fast casual effectively colonized made the math nearly impossible to recover from.
Texas Roadhouse picked a lane and executed. Red Robin got caught in the middle of a market that no longer had a middle.
The Case Is Closed
These aren’t back-tested predictions dressed up after the fact. Every rating in this piece was produced by a model trained on historical financials, evaluated on proportional ratios, and applied forward. No hindsight, no narrative overlay, no analyst with a thesis to protect. The model read the balance sheets, the coverage ratios, and the cash flow structures, and it called AMD’s resurrection, Tesla’s legitimacy, Intel’s deterioration, and Red Robin’s collapse before they were consensus opinions.
Good credit analysis finds the body before anyone files a missing persons report.
Building a model that does all that cleanly and transparently, with full rating lineage traceable to the quarter, on a linear classifier, with public data, and without a Bloomberg terminal, is exactly the kind of work that gets dismissed in a world obsessed with black box AI and over-complicated ensembles. It shouldn’t be.
Sometimes the simplest tool, applied with the right features and the right discipline, tells you everything you need to know.
Currently the model runs on one very overworked desktop. Follow along and you'll be the first to know when it makes it to Hugging Face.


