CogniMesh Benchmark Report
REST API vs CogniMesh -- Same Data, Two Approaches
Generated 2026-03-27
Tests 90/90 passed
Database Postgres 15
Full governance from query 1 · 3 resilience scenarios · 1 JSON file per new UC · 90 tests
Contents
01 Executive Summary
GOVERNED QUERIES
Every query is tracked and traceable

Every response includes where the data came from (lineage), how fresh it is (freshness), and who asked (audit). Schema changes don't break agents — the Gold layer acts as a buffer. Per-agent access control, per-UC scoping, and row-level data isolation are built in.

DEPENDENCY INTELLIGENCE
The system understands its own data

Ask "what breaks if I change this Silver table?" and get an instant answer. Trace any column back to its source. See the full Silver → Gold → UC dependency graph. REST has none of this.

SMART INFRASTRUCTURE
65% fewer tables, smart refresh

20 UCs consolidated into 7 Gold views (measured). Scheduled refresh (primary mode) checks TTLs and rebuilds only stale views. Real-time mode (optional, Postgres LISTEN/NOTIFY) available for latency-critical UCs. 49% less storage. REST refreshes everything blindly. Supports incremental adoption — connect to existing Silver (Mode 1) or manage the full pipeline with SQLMesh (Mode 2).

ZERO FRICTION
1 JSON file per new use case

REST: design Gold table + write endpoint + write model + write tests (4 files). CogniMesh: write a 12-line JSON definition. System derives everything else. Unsupported questions get T2 answers, not 404s.

02 Glossary -- Key Terms

These terms are used throughout this report. If a concept is unfamiliar, refer back here.

Term | What It Means
Bronze Layer | Raw ingested data. No transformations applied. Direct copy from source systems.
Silver Layer | Cleaned and enriched data. Joins applied, business logic added, duplicates removed.
Gold Layer | Pre-aggregated, query-optimized views built for specific use cases. What agents read from.
UC (Use Case) | A specific question an agent can ask. E.g., "What is the health status of customer X?"
SLOC | Source Lines of Code -- meaningful lines excluding blanks and comments.
LOC | Lines of Code -- total lines including blanks and comments.
T0 | Pre-materialized Gold query. Fastest tier. Direct SELECT from a Gold table.
T1 | Composed from existing Gold views. Joins across UCs. Slightly slower.
T2 | Dynamic query against the Silver layer. Query Composer reads metadata, composes SQL. Guardrails enforced.
T3 | Reject with explanation. Question cannot be safely answered. Returns an ETA for when the UC might be available.
Lineage | Column-level source mapping. For each Gold column, traces back to the Silver/Bronze source.
Freshness | How recently data was refreshed. Measured as age (seconds since last refresh) vs TTL (max allowed staleness).
TTL | Time To Live. The maximum allowed age for a Gold table before it is considered stale.
Audit Trail | Immutable log of every query: who asked, what UC, which tier, latency, row count, cost.
MCP | Model Context Protocol. A standard interface for AI agents to discover and consume data tools. CogniMesh implements an MCP server with 6 tools (query, discover, check_drift, refresh, impact_analysis, provenance) alongside its REST API.
03 The Dataset

Both approaches operate on the same synthetic e-commerce dataset. Three medallion layers, nine tables, 220,500 total rows. The data is realistic: 12 months of orders, 8 product categories, 5 customer regions.

Bronze Layer -- Raw Ingested Data
bronze.customers
10,000 rows

ID, name, email, signup date, region (NA / EMEA / APAC / LATAM / MEA)

bronze.products
500 rows

ID, name, category (8 types), price, supplier

bronze.orders
200,000 rows

Customer -> product, amount, status, timestamp (12 months of history)

Silver Layer -- Cleaned and Enriched
silver.customer_profiles
10,000 rows

+ total_orders, total_spend, days_since_last_order, ltv_segment

silver.product_metrics
500 rows

+ units_sold_30d, revenue_30d, return_rate, stock_status

silver.orders_enriched
200,000 rows

+ customer_region, product_category, amount_usd

Gold Layer -- Query-Optimized Views
gold_*.customer_health
10,000 rows

health_status (healthy / warning / critical) computed from recency + LTV

gold_*.top_products
500 rows

Products ranked by revenue within each category

gold_*.at_risk_customers
~3,000-5,000 rows

Customers likely to churn (risk_score 0-99)

Both approaches create identical Gold tables. The difference: REST requires a developer to hand-write the SQL. CogniMesh derives it from the UC definition automatically.
Why Gold is on a serving database, not Iceberg: Agents do individual lookups — "health of customer X." That needs sub-10ms. Open table formats on object storage (Iceberg/Delta on S3) take 100-1000ms per lookup. CogniMesh keeps Gold on a serving database (Postgres, DuckDB, MongoDB, StarRocks, ClickHouse — any serving database) for instant agent responses. OLAP databases like StarRocks work well for analytical UCs while still serving individual lookups in 10-50ms. Silver and Bronze can live anywhere — lakehouse, warehouse, or database. Within each mode, CogniMesh supports single-engine (all layers on one DB like Postgres) or multi-engine (Silver on lakehouse, Gold on serving DB) configurations.
04 The Three Use Cases
UC-01: Customer Health Check
What is the current health status of customer X?

Individual lookup. Returns one row per customer with computed health status.

Field | Source | How Computed
customer_id | silver.customer_profiles | Passthrough
name | silver.customer_profiles | Passthrough
total_orders | silver.customer_profiles | COUNT from orders
total_spend | silver.customer_profiles | SUM from orders
days_since_last | silver.customer_profiles | NOW() - last_order_date
health_status | Derived | CASE: <30d + high LTV = healthy, >90d = critical, else warning
Access: individual_lookup Freshness TTL: 4 hours
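The health_status derivation above can be expressed as a few lines of Python. This is a sketch of the CASE logic, not the benchmark code; the exact segment set ('high', 'medium') follows the provenance example later in this report:

```python
def health_status(days_since_last: int, ltv_segment: str) -> str:
    """UC-01 derivation sketch: recent + high-value = healthy,
    long-inactive = critical, everything else = warning."""
    if days_since_last < 30 and ltv_segment in ("high", "medium"):
        return "healthy"
    if days_since_last > 90:
        return "critical"
    return "warning"
```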
UC-02: Top Products by Category
What are the best-selling products in category Y?

Bulk query. Returns all products in a category ranked by revenue.

Field | Source | How Computed
product_id | silver.product_metrics | Passthrough
name | silver.product_metrics | Passthrough
category | silver.product_metrics | Passthrough
revenue_30d | silver.product_metrics | SUM from orders last 30d
units_sold_30d | silver.product_metrics | COUNT from orders last 30d
rank | Derived | ROW_NUMBER() OVER (PARTITION BY category ORDER BY revenue_30d DESC)
Access: bulk_query Freshness TTL: 24 hours
UC-03: At-Risk Customers
Which customers are at risk of churning?

Bulk query. Returns customers sorted by churn risk score (0-99, higher = more risk).

Field | Source | How Computed
customer_id | silver.customer_profiles | Passthrough
name | silver.customer_profiles | Passthrough
days_since_last | silver.customer_profiles | NOW() - last_order_date
ltv_segment | silver.customer_profiles | Derived from total_spend
risk_score | Derived | (days_since_last / 365) * 50 + LTV factor
Access: bulk_query Freshness TTL: 4 hours
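The risk_score formula above specifies only "(days_since_last / 365) * 50 + LTV factor"; the LTV factor values below are illustrative assumptions, as is the cap at 99 (implied by the 0-99 range):

```python
def risk_score(days_since_last: int, ltv_segment: str) -> int:
    # The per-segment weights are assumptions for illustration; the
    # report only states "(days_since_last / 365) * 50 + LTV factor".
    ltv_factor = {"high": 0, "medium": 15, "low": 30}.get(ltv_segment, 30)
    score = (days_since_last / 365) * 50 + ltv_factor
    return min(99, round(score))  # clamp to the documented 0-99 range
```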
05 How Each Approach Works
REST API
Direct endpoint per use case
Agent -> HTTP GET -> FastAPI -> SQL query -> Postgres Gold -> JSON response

That's it. No lineage. No freshness. No audit. Just raw data.

Each use case requires a hand-written endpoint, a response model, and a Gold table with hand-written SQL. The developer designs everything. The system executes it.
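The per-UC REST pattern can be sketched as a single hand-written handler. This is a minimal stand-in (the table contents and names are illustrative, not the benchmark code) that shows the key property: raw data out, and a bare 404 when nothing matches:

```python
# In-memory stand-in for the hand-written gold_rest.customer_health table.
GOLD_CUSTOMER_HEALTH = {
    42: {"customer_id": 42, "name": "Acme Corp", "health_status": "healthy"},
}

def get_customer_health(customer_id: int) -> tuple[int, dict]:
    """REST-style handler: returns (status_code, body).
    No lineage, no freshness, no audit -- just the row or a 404."""
    row = GOLD_CUSTOMER_HEALTH.get(customer_id)
    if row is None:
        return 404, {"detail": "Not Found"}  # no fallback, no explanation
    return 200, row
```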

CogniMesh
Governed, observable data mesh
Agent -> HTTP POST -> Gateway
  -> 1. Resolve UC (capability index)
  -> 2. Query Gold table
  -> 3. Attach lineage (column-level source mapping)
  -> 4. Check freshness (age vs TTL)
  -> 5. Return data + metadata
  -> 6. Log audit async (agent, UC, tier, latency, cost)

Every query passes through these six stages. Audit logging (step 6) runs async after the response is returned.
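The gateway stages above can be sketched in miniature. This toy version uses in-memory dicts where CogniMesh reads its internal metadata tables; all names and values are illustrative assumptions:

```python
# Stand-ins for the Gold table and CogniMesh's internal metadata.
GOLD = {("UC-01", 42): [{"customer_id": 42, "health_status": "healthy"}]}
LINEAGE = {"health_status": "silver.customer_profiles"}
FRESHNESS = {"age_seconds": 120, "ttl_seconds": 14400}
AUDIT_LOG = []

def query(uc_id: str, customer_id: int, agent_id: str) -> dict:
    rows = GOLD.get((uc_id, customer_id), [])            # 2. query Gold
    result = {
        "data": rows,
        "lineage": LINEAGE,                              # 3. attach lineage
        "freshness": {**FRESHNESS,                       # 4. age vs TTL
                      "is_stale": FRESHNESS["age_seconds"] > FRESHNESS["ttl_seconds"]},
        "tier": "T0",                                    # 5. data + metadata
    }
    AUDIT_LOG.append({"agent": agent_id, "uc": uc_id,    # 6. audit (async
                      "tier": "T0"})                     #    in the real system)
    return result
```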

Two deployment modes: CogniMesh supports incremental adoption. Mode 1 (Connect): connect to your existing Silver layer — zero disruption to your pipeline. CogniMesh introspects the schema and builds Gold on top. Mode 2 (Manage): CogniMesh + SQLMesh manages Bronze→Silver→Gold with full lineage from source to agent. Teams start with Mode 1 and migrate tables incrementally — each migrated table gains complete lineage. The benchmark demonstrates Mode 1.
06 The Full CogniMesh Request Flow

Every query follows this exact seven-step pipeline. Nothing is optional. The system is always observable.

T0 Flow -- Supported Use Case
STEP 1
RECEIVE
POST /query with uc_id + params. Gateway validates schema and extracts agent identity.
STEP 2
RESOLVE
Match question to UC via keyword index. Confidence > 0.6 -> T0. Below threshold -> fallback to T2.
STEP 3
QUERY
SELECT * FROM gold_cognimesh.customer_health WHERE customer_id = $1. Pre-materialized, indexed, no joins.
STEP 4
LINEAGE
Look up cognimesh_internal.lineage. Map each Gold column to its Silver source. Attach column-level provenance to response.
STEP 5
FRESHNESS
Check cognimesh_internal.freshness. Compute age in seconds. Compare to TTL. Flag stale data.
STEP 6
RESPOND
Return QueryResult with data + lineage + freshness + tier. Agent has full context to explain the answer.
STEP 7
AUDIT
Async — runs in background after response is sent. Does not add latency. INSERT into audit_log: UC-01, T0, 3.2ms, 1 row, cost=1.001. Immutable record.
T2 Flow -- Unsupported Question

When the question does not match any registered UC, the system falls back to composing a dynamic query against the Silver layer.

STEP 1
UC MATCH FAILS
Keyword index returns no match above 0.6 confidence. Question does not map to any registered use case.
STEP 2
COMPOSE
Query Composer reads Silver metadata (table schemas, column types). Matches question keywords to columns. Composes SQL with appropriate aggregations.
STEP 3
GUARDRAILS
Composed SQL is checked: no full table scans, no cross-joins, row limit enforced. If guardrails fail -> T3 reject.
STEP 4
EXECUTE + RESPOND
Execute composed SQL against Silver. Return data as T2 response with composed SQL in metadata. Audit log records the T2 event.
07 Performance Results

Both approaches respond in under 6ms — the latency difference is the cost of governance and is negligible in real-world agent pipelines.

Use Case REST Mean CogniMesh Mean Overhead What the Overhead Buys
UC-01: Customer Health 2.77 ms 3.38 ms +0.61 ms Lineage + freshness on every query
UC-02: Top Products 1.40 ms 3.31 ms +1.91 ms Same + larger lineage for 8 columns
UC-03: At-Risk Customers 2.15 ms 5.48 ms +3.33 ms Same + more rows -> larger lineage payload

The ~2ms overhead covers lineage lookup and freshness check on every query. Audit logging runs async in a background thread and adds zero latency. Measured with pytest-benchmark (50+ iterations, 5 rounds, FastAPI TestClient in-process).
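The measurement loop that pytest-benchmark performs is essentially the following (a simplified stand-in, not the actual benchmark harness): time a callable over several rounds of many iterations and report the mean per-call latency.

```python
import time
from statistics import mean

def bench(fn, iterations: int = 50, rounds: int = 5) -> float:
    """Mean per-call latency in milliseconds, averaged across rounds."""
    per_call_ms = []
    for _ in range(rounds):
        start = time.perf_counter()
        for _ in range(iterations):
            fn()
        elapsed = time.perf_counter() - start
        per_call_ms.append(elapsed / iterations * 1000)
    return mean(per_call_ms)
```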

08 What You Get With Each Approach

These are the things that a production data serving layer needs. REST doesn't include any of them — you'd have to build each one yourself (weeks of work). CogniMesh includes all of them from the first query.

Every property below was tested with automated assertions. Not opinions -- binary pass/fail tests in the benchmark suite.

# Property What It Means REST CogniMesh How Tested
1 Discovery Agent can ask 'what questions can you answer?' and get a list NO YES GET /discover returns UC list with descriptions
2 Lineage Every response shows exactly which database table and column each value came from NO YES Check lineage field in response body
3 Audit Trail Every query is logged: who asked, what they asked, when, how long it took NO YES Check audit_log table after query
4 Cost Attribution You can see how much each use case costs and which agent uses it most NO YES SUM(cost_units) GROUP BY uc_id
5 Change Governance When someone changes a use case definition, the before/after is recorded NO YES Check uc_change_log table
6 Freshness The response tells you how old the data is and whether it's past its expiry NO YES Check freshness field in response
7 Tiered Fallback When an agent asks something new, the system tries to compose an answer from existing data instead of returning an error 404 T2/T3 POST unsupported question, check response tier
8 Schema Drift When upstream data tables change, your agents keep working — the Gold layer acts as a buffer 500 Isolated Rename column, query both systems
9 Impact Analysis Ask "what breaks if I change this Silver table?" and get an instant answer with affected Gold views and UCs NO YES GET /dependencies/impact
10 Provenance Trace any Gold column back to its Silver source table, column, and transformation NO YES GET /dependencies/provenance
11 Smart Refresh Only refresh Gold views whose Silver source actually changed — not all of them NO YES POST /refresh/scheduled (primary), POST /refresh/check (legacy)
12 Access Control Per-agent scoping — which UCs each agent can access, row-level data isolation, role-based UC management NO YES agent_id in audit, UC scoping, approval queue
REST API Score
0 / 12
No system properties
CogniMesh Score
12 / 12
Full observability
09 Resilience Scenarios

Three failure modes that every production data system will eventually face. Each scenario was tested end-to-end in the benchmark suite.

Scenario 1: Schema Drift

Silver column ltv_segment is renamed to lifetime_value_tier. What happens?

REST API
SQL Error -> Stale or Crash

Gold refresh SQL references old column name. SQL fails. Endpoint returns stale data or crashes entirely.

Recovery: Developer discovers broken SQL (when? maybe hours, maybe a user reports it). Fixes the query, rebuilds Gold table, redeploys. Hours to days.

CogniMesh
Gold Isolation -> Continued Service

Gold table was materialized before the rename. Still has valid data. Queries continue serving from Gold (T0). On next refresh, drift is detected and logged.

Recovery: Update UC derivation SQL, re-register, refresh. Agent never sees the error. Automated.

Scenario 2: Unsupported Question

Agent asks: "What is the total revenue by region for the last quarter?" No UC exists for this.

REST API
404 Not Found

No endpoint exists. Response: 404 Not Found. No alternatives. No explanation. The agent is stuck.

To fix: Design Gold table, write endpoint, write response model, write tests, deploy. Days of work.

CogniMesh
T2 Fallback -> Composed Query

Gateway fails UC match. Query Composer reads Silver metadata. Matches 'revenue' -> amount_usd, 'region' -> customer_region. Composes SQL with SUM + GROUP BY + time filter. Checks guardrails. Returns actual data as T2 with composed SQL in metadata.

Scenario 3: Data Staleness

Gold table was refreshed 6 hours ago. TTL is 4 hours. Is the data still valid?

REST API
No Awareness

Response is the same whether data is 1 minute or 1 week old. No freshness field. Agent cannot know. Could be dangerously stale. No way to detect, no way to warn.

CogniMesh
Built-in Freshness

Response includes: freshness: {is_stale: true, age_seconds: 21600, ttl_seconds: 14400}. Agent decides what to do. Audit log records the staleness event. Dashboard shows freshness compliance over time.

10 Developer Effort

Total code written to build each system from scratch for the same 3 use cases.

Metric REST API CogniMesh Notes
Files 9 17 CogniMesh includes the full platform
Python SLOC 227 1,919 CogniMesh includes registry, gateway, lineage, audit
SQL SLOC 59 0 REST has hand-written Gold SQL
JSON SLOC 0 33 CogniMesh UC definitions are JSON
Total SLOC 286 1,952 One-time platform investment
CogniMesh is ~7x more code upfront. This is the platform cost. Every new UC after this costs just one JSON file (~12 lines). REST grows linearly with each UC (~78 SLOC per UC). The investment pays off as UC count grows.
What is SLOC? Source Lines of Code -- lines excluding blanks and comments. 100 SLOC = 100 meaningful lines a developer writes and maintains. It's the standard measure for comparing implementation effort.
11 Marginal Cost -- Adding UC-04

What does it cost to add one more use case? UC-04: "Total revenue by region for the last quarter."

REST UC-04
4 files, 78 SLOC
  • gold_tables_uc04.sql Gold table design -- CREATE TABLE + INSERT with aggregation 25 SLOC
  • revenue_by_region.py FastAPI endpoint -- route, query, response formatting 20 SLOC
  • models_uc04.py Pydantic response model -- field types, validation 10 SLOC
  • test_revenue.py Tests -- happy path, edge cases, validation 23 SLOC

Plus: modify app.py to register the new router.

CogniMesh UC-04
1 file, 12 SLOC
  • uc04_revenue.json UC definition: question, fields, access pattern, TTL, derivation SQL 12 SLOC

That's it. Register -> system derives Gold table, lineage, freshness tracking, audit, discovery entry. Everything automatic.
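A UC definition of roughly this shape is what gets registered. The field names below are assumptions inferred from this report (question, fields, access pattern, TTL, derivation SQL); the real schema may differ:

```python
import json

# Hypothetical UC-04 definition file contents (~12 lines of JSON).
UC04_JSON = """
{
  "uc_id": "UC-04",
  "question": "Total revenue by region for the last quarter",
  "silver_source": "silver.orders_enriched",
  "fields": ["customer_region", "amount_usd"],
  "access_pattern": "bulk_query",
  "ttl_seconds": 86400,
  "derivation": "SELECT customer_region, SUM(amount_usd) AS revenue FROM silver.orders_enriched WHERE order_ts >= NOW() - INTERVAL '3 months' GROUP BY customer_region"
}
"""
uc04 = json.loads(UC04_JSON)
```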

SLOC Growth Projection
UC Count REST Total SLOC CogniMesh Total SLOC
3 (initial) 286 1,952
4 364 1,964
10 832 2,036
25 2,002 2,216
50 3,952 2,516
Total SLOC vs Use Case Count -- Crossover at ~UC-28
The crossover happens around UC-28. Before that, REST has less total code. After that, CogniMesh's flat growth curve wins. At 50 UCs, REST has 57% more code to maintain. And every line of REST code is hand-written, hand-tested, hand-deployed. CogniMesh UCs are 12-line JSON files.
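The crossover point follows directly from the two growth formulas (one-time platform cost plus marginal SLOC per UC beyond the initial three):

```python
def total_sloc(base: int, per_uc: int, n_ucs: int, initial_ucs: int = 3) -> int:
    """Total SLOC = initial build + marginal cost per UC beyond the first 3."""
    return base + (n_ucs - initial_ucs) * per_uc

# REST: 286 SLOC base, 78 SLOC per extra UC.
# CogniMesh: 1,952 SLOC base, 12 SLOC per extra UC.
crossover = next(
    n for n in range(3, 100)
    if total_sloc(286, 78, n) > total_sloc(1952, 12, n)
)
# crossover == 29: the first UC count at which REST's total exceeds
# CogniMesh's (the lines intersect between UC-28 and UC-29).
```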
12 Gold Layer Consolidation at Scale
Why Gold Tables Proliferate in REST

REST creates 1 Gold table per UC, always. Even when UC-01 (Customer Health) and UC-03 (At-Risk Customers) both pull from the same silver.customer_profiles, REST creates two separate Gold tables with overlapping columns. At 10 UCs, you have 10 independent Gold tables with 45 overlapping columns. Each table stores its own copy of shared data, each requires its own refresh cycle, and each competes for database cache space.

How CogniMesh Consolidates

CogniMesh's capability index detects field overlap at UC registration time. It groups UCs by Silver source, takes the field union, and produces consolidated Gold views. UC-01, UC-03, UC-05, UC-09 all pull from silver.customer_profiles — consolidated into one customer_360 Gold view serving all 4 UCs. The result: fewer tables, less storage, fewer refresh cycles, and better cache utilization.
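The consolidation rule above (group by Silver source, take the field union) can be sketched directly. The UC entries below are illustrative, mirroring the 10-UC map in this section:

```python
from collections import defaultdict

ucs = [
    {"uc": "UC-01", "source": "silver.customer_profiles", "fields": {"customer_id", "health_status"}},
    {"uc": "UC-03", "source": "silver.customer_profiles", "fields": {"customer_id", "risk_score"}},
    {"uc": "UC-02", "source": "silver.product_metrics", "fields": {"product_id", "revenue_30d"}},
]

def consolidate(ucs):
    """One Gold view per Silver source, serving every UC on that source."""
    views = defaultdict(lambda: {"ucs": [], "fields": set()})
    for uc in ucs:
        view = views[uc["source"]]
        view["ucs"].append(uc["uc"])
        view["fields"] |= uc["fields"]  # field union across UCs
    return dict(views)

views = consolidate(ucs)
```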

10-UC Consolidation Map
UC Question Silver Source REST Gold Table CogniMesh Gold View
UC-01 Customer Health customer_profiles customer_health customer_360
UC-02 Top Products product_metrics top_products product_catalog
UC-03 At-Risk Customers customer_profiles at_risk customer_360
UC-04 Revenue by Region orders_enriched revenue_region order_analytics
UC-05 Customer LTV customer_profiles customer_ltv customer_360
UC-06 Purchase History orders + profiles purchases customer_orders
UC-07 Regional Distribution customer_profiles regional_dist regional_summary
UC-08 Product Trends products + orders product_trends product_catalog
UC-09 Customer Segments customer_profiles segments customer_360
UC-10 Order Volume orders_enriched order_volume order_analytics
Gold Tables / Views (measured, 20 UCs)
REST: 20 → CogniMesh: 7
65% fewer — consolidation ratio 0.35
Storage (measured, 20 UCs)
REST: 12 MB → CogniMesh: 6.3 MB
About 49% less storage than REST
Stored Rows (measured, 20 UCs)
REST: ~84,000 → CogniMesh: ~34,000
60% less row duplication
Refresh Cycles (measured, 20 UCs)
REST: 20 → CogniMesh: 7
~65% fewer refresh cycles
Growth Projection
UC Count REST Gold Tables CogniMesh Gold Views Consolidation Ratio REST Refresh (ms) CogniMesh Refresh (ms)
3 (measured) 3 3 1.00 360 360
5 5 4 0.80 600 480
10 10 5 0.50 1,200 600
20 (measured) 20 7 0.35 2,400 840
25 25 8 0.32 3,000 960
50 50 12 0.24 6,000 1,440
The consolidation ratio follows a logarithmic decay because most UCs in a domain draw from the same 3-5 Silver tables. After the first 10 UCs cover all Silver sources, new UCs mostly add columns to existing Gold views rather than requiring new ones.
Gold Tables / Views vs UC Count
13 When CogniMesh Wins -- Crossover Points
Multi-Dimensional Crossover
Dimension CogniMesh Wins At Why
Marginal dev hours per UC UC = 1 (always) 12 SLOC JSON vs 78 SLOC code — 15% effort from day one
Governance & observability UC = 1 (always) 12/12 properties built-in: discovery, lineage, audit, cost, governance, freshness, fallback, drift, impact analysis, provenance, smart refresh, access control
Unsupported question handling UC = 1 (always) T2 fallback vs 404 — CogniMesh never hard-fails on new questions
Gold table count UC = 5 First Silver source overlap triggers consolidation
Total refresh time UC = 5 Fewer consolidated views = fewer refresh cycles
Storage cost UC = 5 Consolidated views eliminate duplicate rows across Gold tables
Total maintenance SLOC UC = 28 REST: 286 + (n-3) × 78 overtakes CogniMesh: 1,952 + (n-3) × 12
Query latency (T0) Equivalent (always) Equivalent — both under 6ms (see Performance)
ALL dimensions UC = 5 CogniMesh wins storage, table count, refresh, dev hours, governance. Latency is equivalent.
Where CogniMesh is better — measured at 20 UCs:

Infrastructure: 65% fewer Gold tables (7 vs 20), 49% less storage (6.3 MB vs 12 MB), 65% fewer refresh cycles.
Governance: Every query has lineage, freshness tracking, audit trail, cost attribution. REST has none.
Resilience: Unsupported questions get T2 answers (not 404). Schema changes don't break agents.
Adding UCs: 1 JSON file vs 4 code files per new use case.
Measured Latency at Scale
UC Count REST T0 Latency CogniMesh T0 Latency Source Notes
3 (measured) 2.77 ms 3.38 ms Benchmark run 2 Equivalent (see Performance)
20 (measured) 2.17 ms avg 3.74 ms avg Scale benchmark Equivalent — latency unchanged at scale
14 Self-Improving Data Layer
The T2-to-UC Promotion Cycle

CogniMesh's audit log records every T2 hit — questions answered from Silver because no Gold view exists. When a pattern reaches a threshold (e.g., 10 hits in 7 days), the system generates a UC candidate. A human approves, the Gold view is updated, and the next query is T0 (instant). REST has no equivalent — the agent gets 404s until someone notices and builds an endpoint.
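The promotion rule described above reduces to counting T2 hits per question pattern in the audit log. A minimal sketch (log shape and threshold are illustrative):

```python
from collections import Counter

PROMOTION_THRESHOLD = 10  # T2 hits within the detection window

def uc_candidates(audit_log: list[dict]) -> list[str]:
    """Patterns with enough T2 hits to be proposed as new UCs."""
    t2_hits = Counter(e["pattern"] for e in audit_log if e["tier"] == "T2")
    return [p for p, hits in t2_hits.items() if hits >= PROMOTION_THRESHOLD]
```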

CogniMesh Cycle
Self-Improving Feedback Loop
  1. Agent asks unsupported question → T2 serves answer immediately (340ms)
  2. Audit log records T2 hit pattern
  3. System detects: 47 hits in 7 days → generates UC candidate
  4. Human approves → Gold view updated (seconds)
  5. Next query: T0 (4ms) — 85× faster, zero code changes
REST Cycle
Manual Discovery and Development
  1. Agent asks unsupported question → 404 Not Found
  2. Someone notices agents are failing (days later? support ticket?)
  3. Product manager files ticket
  4. Developer designs Gold table + builds endpoint + writes tests (4 files, 78 SLOC)
  5. PR review → merge → deploy (2-5 business days)
  6. Agent can finally ask the question
Side-by-Side Comparison
Metric CogniMesh REST
Time to first answer Immediate (T2) 2-5 business days
Time to optimized answer Hours (after approval) Same 2-5 days
Code changes 0 4 files, 78 SLOC
Agent downtime 0 seconds 2-5 business days of 404s
Pattern detection Automatic (audit log) Manual (someone has to notice)
Feedback loop Closed (usage → optimization) Open (requires human initiative)
This is the fundamental architectural difference: REST is a static system that only changes when developers change it. CogniMesh is a self-improving system that learns from usage and evolves its Gold layer. The feedback loop is what makes the platform compound in value over time.
15 Dependency Intelligence & Smart Refresh

CogniMesh knows the full dependency graph — which Silver tables feed which Gold views, which Gold views serve which UCs. REST has no awareness of dependencies. This enables four capabilities that REST cannot replicate.

Impact Analysis — "What breaks if I change this?"

Before changing a Silver table, ask CogniMesh what would be affected:

GET /dependencies/impact?table=silver.customer_profiles

Response (measured):
  Affected Gold views: 3 (customer_360, at_risk_customers, customer_health)
  Affected UCs: 10 (UC-01, UC-03, UC-05, UC-06, UC-07, UC-09, UC-11, UC-13, UC-15, UC-20)
  NOT affected: product_catalog, order_analytics (different Silver source)

REST equivalent: nobody knows. A developer has to manually trace which Gold tables reference which Silver columns. If they miss one, it breaks silently.

Provenance — "Where does this number come from?"

Trace any Gold column back to its source:

GET /dependencies/provenance?view=gold_cognimesh.customer_360&column=health_status

Response:
  source_table: silver.customer_profiles
  source_column: days_since_last_order
  transformation: computed (CASE WHEN days < 30 AND ltv IN ('high','medium') → 'healthy' ...)

REST equivalent: read the SQL source code. No API, no metadata, no runtime query.

Smart Refresh — Only refresh what's affected
REST APPROACH
Cron job refreshes everything

A scheduled job runs all 20 Gold table refresh queries, whether the underlying data changed or not. If Silver hasn't changed, you wasted compute. If only customer data changed, you still refreshed product tables.

COGNIMESH APPROACH
Refresh only what changed

CogniMesh supports two refresh modes. Scheduled (primary): a periodic job checks all Gold views, refreshes stale ones, and reports changes — called via API (POST /refresh/scheduled), cron, or Airflow. Real-time (optional): Postgres LISTEN/NOTIFY triggers detect Silver changes and refresh affected views immediately. Most UCs use scheduled refresh. Real-time is for latency-critical use cases like fraud detection. Either way, CogniMesh checks the dependency graph: silver.customer_profiles changed → refresh customer_360 (1 view serving 10 UCs). Leave product_catalog and order_analytics alone. Result: 1 refresh instead of 20.
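The dependency-driven refresh decision above amounts to a set intersection against the graph. A sketch, with an illustrative graph matching this section's 20-UC figures:

```python
# Gold view -> Silver sources (illustrative dependency graph).
DEPS = {
    "customer_360": {"silver.customer_profiles"},
    "product_catalog": {"silver.product_metrics"},
    "order_analytics": {"silver.orders_enriched"},
}

def views_to_refresh(changed_silver: set[str]) -> list[str]:
    """Refresh only Gold views whose Silver sources actually changed."""
    return sorted(v for v, sources in DEPS.items() if sources & changed_silver)
```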

Full Dependency Graph — measured at 20 UCs
Layer | Count | Details
Silver tables | 3 | customer_profiles, product_metrics, orders_enriched
Gold views (CogniMesh) | 7 | Consolidated from 20 UCs (ratio: 0.35)
Gold tables (REST) | 20 | One per UC, no consolidation
Use Cases | 20 | All served through the same gateway

When silver.customer_profiles changes:

REST
Refresh 20 tables
No way to know which tables are affected. Refresh everything or risk stale data.
COGNIMESH
Refresh 3 views
Dependency graph identifies exactly which views source from customer_profiles. The other 4 views are untouched.
API Endpoints — all measured, all working
Endpoint | What It Does | REST Equivalent
GET /dependencies | Full Silver → Gold → UC graph with consolidation ratio | NONE
GET /dependencies/impact | What breaks if this Silver table changes? | NONE
GET /dependencies/provenance | Where does this Gold column come from? | NONE
GET /dependencies/what-if | Change impact estimation with affected UCs | NONE
GET /refresh/status | Freshness of all Gold views with served UC count | NONE
POST /refresh/check | Auto-refresh only stale views | NONE
GET /refresh/plan | Preview what would be refreshed | NONE
This is what "platform" means. REST is a collection of endpoints that serve data. CogniMesh is a system that understands its own data — where it comes from, how fresh it is, what depends on what, and what to do when things change. You can ask it questions about itself, and it answers.
16 Where REST Wins -- Honest Assessment
We will not pretend CogniMesh wins everywhere. REST has genuine advantages in specific scenarios. Here they are.
Advantage
Raw Latency

REST is ~2ms faster per query. In practice, both are under 6ms and the difference is negligible.

Advantage
Initial Simplicity

286 SLOC vs 1,952. If you need 1-3 UCs and will never add more, REST is simpler.

Advantage
Team Familiarity

Every developer knows REST. CogniMesh introduces new concepts (UCs, tiers, lineage). Learning curve is real.

Advantage
Compute Footprint

REST is thinner: just FastAPI + Postgres. CogniMesh maintains capability index, writes audit logs, tracks freshness. More moving parts.

17 Conclusion
Dimension Winner Evidence
Raw T0 latency Equivalent Both under 6 ms (see Performance)
System properties CogniMesh 12/12 vs 0/12
Schema drift CogniMesh Gold isolation vs SQL error
Unsupported questions CogniMesh T2 fallback vs 404
Freshness awareness CogniMesh Built-in vs absent
Marginal cost per UC CogniMesh 12 SLOC vs 78 SLOC
Initial simplicity REST 286 SLOC vs 1,952 SLOC
Gold table consolidation CogniMesh 5 views vs 10 tables at UC=10
Refresh cost at scale CogniMesh 960ms vs 3,000ms at UC=25
Latency at scale (measured, UC=20) Equivalent Equivalent at all measured scales
Self-improving Gold layer CogniMesh T2 patterns auto-promote; REST needs manual endpoint work
Dependency intelligence CogniMesh Impact analysis, provenance, smart refresh — REST has none
Access control CogniMesh Agent scoping, per-UC permissions, row-level isolation
REST gives you endpoints. CogniMesh gives you a platform — governed queries, dependency intelligence, smart refresh, and near-zero cost to add new use cases. The latency is the same. Everything else is different.