Self-Audit

We Grade Ourselves

Alitheion evaluates DaedArch Corporation, Trellison Institute, LedgerWell Corporation, and all venture output using the same 18-signal framework it applies to external sources. There are zero exceptions.


Commitment

The Self-Audit Commitment

If our work fails our own standards, we publish that finding.

An audit system that exempts itself is not an audit system. Every piece of content produced by our parent organization is subject to the same evaluation rigor we apply to external sources. Self-audit results are published alongside external evaluations with no distinction in methodology.
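
As a minimal sketch of what "no distinction in methodology" means in practice, the entry point below treats internal ownership as publication metadata only; it never selects a different scoring path. The signal names, types, and placeholder scorer are illustrative assumptions, not Alitheion's actual implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Evaluation:
    source_id: str
    internal: bool               # carried only so results can be labeled at publication
    signals: dict[str, float]    # e.g. {"C3": 0.7, "M5": 0.6, ...}
    overall: float

def score_signals(content: str) -> dict[str, float]:
    # Placeholder for the 18-signal scorer; real scoring inspects evidence,
    # methodology transparency, uncertainty quantification, and so on.
    return {"C3": 0.0, "C4": 0.0, "C6": 0.0, "M5": 0.0, "M6": 0.0}

def evaluate(source_id: str, content: str, internal: bool) -> Evaluation:
    # One code path for every source: internal and external content are
    # scored by exactly the same function.
    signals = score_signals(content)
    return Evaluation(source_id, internal, signals, round(mean(signals.values()), 2))
```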


Entities

Organizations Under Self-Audit

Each entity is evaluated independently. Results reflect the quality of output produced, not the importance of the entity.

DaedArch Corporation

Parent Organization

Platform infrastructure, API documentation, public communications, strategic analysis, and technology claims. Every external-facing output is subject to evaluation.

Source Credibility: 0.74
Methodology: 0.79
Claim Validity: 0.76

Trellison Institute

Research Division

Research publications, methodology documentation, TAAS cognitive profiling reports, and academic submissions. Held to research-grade standards.

Source Credibility: 0.81
Methodology: 0.85
Claim Validity: 0.83

LedgerWell Corporation

Financial Services

Carbon verification methodology, agricultural credit calculations, financial documentation, and compliance reports. Methodology transparency is critical for trust.

Source Credibility: Pending
Methodology: Pending
Claim Validity: Pending

Alitheion (Self)

Evaluation Division

The evaluation framework itself, scoring methodology, and this very website. Alitheion evaluates its own methodology documentation and public claims about the framework.

Source Credibility: 0.72
Methodology: 0.80
Claim Validity: 0.77

Audit Trail

Version Progression — Napoleon's March

Every piece of content shows its Alitheion audit trail. This example demonstrates how internal content iterated through the evaluation framework from initial FAIL to CONDITIONAL PASS.

v1
Mar 1, 2026

FAIL — Missing data sources, no uncertainty quantification, claims exceed evidence.

Signals failed: C3 (scope vs evidence), C6 (uncertainty), M5 (data availability)
v2
Mar 3, 2026

FAIL — Data sources added but methodology transparency still insufficient.

Improved: M5 (0.3 → 0.6). Still failing: M6 (methodology transparency), C4 (confidence calibration)
v3
Mar 5, 2026

CONDITIONAL PASS — Core methodology documented. Confidence calibration needs improvement.

Improved: M6 (0.4 → 0.7), C3 (0.5 → 0.7). Conditional on: C4, C6 improvement
v4-v5
Mar 8-12, 2026

CONDITIONAL PASS — Iterative improvements to uncertainty quantification and limitations documentation.

C4 (0.5 → 0.7), C6 (0.4 → 0.7). Approaching threshold.
v6
Mar 15, 2026

CONDITIONAL PASS — All signals above minimum threshold. Overall score 0.78. Further iteration recommended before production use.

All 18 signals above 0.6 minimum. 14 above 0.7. Overall: 0.78
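
A hedged reconstruction of the threshold logic this trail implies: 0.6 is the per-signal floor stated above, while the 0.7 bar for an unconditional PASS and the function itself are assumptions about how the verdict could be derived, not Alitheion's published rules.

```python
from typing import Mapping

FAIL_FLOOR = 0.6   # per-signal minimum stated in the v6 entry above
PASS_BAR = 0.7     # assumed per-signal bar for an unconditional PASS (not stated in the trail)

def verdict(signals: Mapping[str, float]) -> str:
    # Any signal under the floor fails the content outright.
    if any(score < FAIL_FLOOR for score in signals.values()):
        return "FAIL"
    # Every signal clearing the assumed higher bar would be a full PASS.
    if all(score >= PASS_BAR for score in signals.values()):
        return "PASS"
    # Otherwise the content passes conditionally, pending further iteration.
    return "CONDITIONAL PASS"
```

Under these assumed thresholds, v6 remains conditional because four of its 18 signals clear the 0.6 floor but not the 0.7 bar.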

The bar was not lowered. The content improved. This is how self-audit is supposed to work.