The intelligence layer that grows with every deal.

SmartLayer is not a feature. It is what Totus becomes over time — a proprietary database of real estate intelligence that compounds as deal teams use the platform. More deals. Better comparables. Sharper decisions.

Proprietary comps database · Location intelligence · Deal evaluation engine · Geospatial data layer · Transparent logic
The SmartLayer flywheel
1. Deal enters inbox
2. Geocoded + enriched automatically
3. Historical comps retrieved from radius
4. Screened, scored, decided
5. Approved deal feeds back into comps pool
↻ Every deal makes the next decision sharper

Not a black box. A data layer that explains itself.

Structured, transparent, and yours

SmartLayer is the emergent intelligence that builds up inside Totus as your team processes deals. It is not an opaque AI model that produces a number with no explanation. Every score, every band, every comparable reference is derived from real transactions in your database — and can be inspected, traced, and audited.

This is the critical distinction: Totus is a workflow layer first. The intelligence grows from the structured data you generate while using it. You own the data. You control the logic. The system becomes smarter because your team is disciplined about how it captures deals.

For institutional teams and regulated environments, this distinction is fundamental: underwriting outputs remain deterministic and explainable while the platform continuously improves its reference quality.

Proprietary comps database

Every deal your team processes — plus ingested market data from Thomas Daily and external uploads — builds a geospatially indexed comparables database that is private to your organization.

Location intelligence layer

Deterministic macro and micro-location scores computed from transaction density, price trends, recency of activity, and distance-weighted comparable data. No third-party black box required.

Deal evaluation engine

When a new deal arrives, the system fetches nearby comps, computes variance vs. market bands, and flags outliers automatically. Logic is transparent and formulas are configurable.

Continuous feedback loop

Approved deals are promoted back into the comps pool. The database gets denser over time. Scoring bands tighten. Reference quality improves. Network effects within your own data.

Each layer builds on the one before it.

01

Comparables Database

The foundation. A geospatially indexed store of real estate transactions, enriched from multiple sources. Radius queries return nearby deals with similarity-weighted ranking.

  • PostgreSQL + PostGIS geo-indexed
  • Thomas Daily ingestion pipeline
  • CSV / JSON upload pipeline
  • Append-only transaction history
  • Bounding-box and radius queries
  • Source tracking and batch traceability
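In production, radius retrieval runs as a PostGIS query over the GIST index. As a minimal sketch of what that query computes, the pure-Python version below filters comps by haversine distance and ranks them with a distance-weighted similarity score. The function name, field names, and the asset-type boost factor are illustrative assumptions, not Totus internals.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance in km (the work PostGIS does server-side)."""
    dlat, dlng = radians(lat2 - lat1), radians(lng2 - lng1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlng / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def radius_comps(deal, comps, radius_km=2.0):
    """Return comps within radius_km, ranked by distance-weighted similarity."""
    scored = []
    for c in comps:
        d = haversine_km(deal["lat"], deal["lng"], c["lat"], c["lng"])
        if d > radius_km:
            continue                    # outside the search radius
        similarity = 1.0 / (1.0 + d)    # closer comps rank higher
        if c["asset_type"] == deal["asset_type"]:
            similarity *= 1.5           # illustrative same-asset-type boost
        scored.append((similarity, c))
    return [c for _, c in sorted(scored, key=lambda s: -s[0])]
```

The real endpoint delegates the distance filter to `ST_DWithin` so the spatial index is used; the ranking logic stays in the service layer, where its weights are configurable.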
02

Location Intelligence

Deterministic scoring modules computed from structured transaction data. Every score has an explanation payload — inputs are inspectable and formulas are configurable.

  • Macro location score (city/district level)
  • Micro location score (radius-weighted)
  • Liquidity score (transaction frequency)
  • Volatility score (price variance)
  • Demand proxy score (activity recency)
  • Fallback behavior when data is thin
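The "explanation payload" idea can be sketched in a few lines: a deterministic score computed from distance-weighted transaction density, returned together with every input that produced it. The scaling factor, confidence threshold, and field names below are illustrative assumptions, not Totus defaults.

```python
def micro_location_score(comps_with_distance, max_score=100.0):
    """Distance-weighted transaction density score with an explanation payload.

    comps_with_distance: list of (comp_id, distance_km) tuples.
    """
    weights = [1.0 / (1.0 + d_km) for _, d_km in comps_with_distance]
    density = sum(weights)
    # Fallback behavior when data is thin: flag low confidence, never guess.
    confidence = "low" if len(comps_with_distance) < 3 else "normal"
    score = min(max_score, density * 10.0)  # illustrative scaling factor
    return {
        "score": round(score, 1),
        "confidence": confidence,
        "explanation": {
            "comp_count": len(comps_with_distance),
            "distance_weights": [round(w, 3) for w in weights],
            "formula": "min(100, sum(1/(1+d_km)) * 10)",
        },
    }
```

Because the formula travels with the result, a reviewer can recompute the score by hand — the property that makes the layer auditable rather than a black box.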
03

Deal Evaluation Engine

The screening pipeline that orchestrates geocoding, comparable retrieval, and metric computation for each new deal. Transparent, rule-based, and reproducible.

  • Geocode → fetch comps → calculate metrics
  • Avg and median comparable price/m²
  • Variance vs. market band
  • Screening flags (above/below band)
  • Insufficient comps handled cleanly
  • Approved deals feed back into pool
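The screening step itself is small enough to sketch directly: compare a deal's asking price per m² against the comparable band and emit a flag. The band width and minimum-comps threshold below are illustrative defaults, not Totus configuration values.

```python
from statistics import mean, median

def screen_deal(asking_price_sqm, comp_prices_sqm, band_pct=0.15, min_comps=3):
    """Flag a deal relative to the comparable market band (sketch)."""
    if len(comp_prices_sqm) < min_comps:
        # Insufficient comps handled cleanly: explicit flag, no fabricated band.
        return {"flag": "insufficient_comps", "comp_count": len(comp_prices_sqm)}
    med = median(comp_prices_sqm)
    variance_pct = (asking_price_sqm - med) / med
    if variance_pct > band_pct:
        flag = "above_band"
    elif variance_pct < -band_pct:
        flag = "below_band"
    else:
        flag = "within_band"
    return {
        "avg_comp": round(mean(comp_prices_sqm), 2),
        "median_comp": med,
        "variance_pct": round(variance_pct, 4),
        "flag": flag,
    }
```

Every branch is a plain comparison against configurable parameters — the reproducibility the pipeline promises.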

The SmartLayer compounds over time.

SmartLayer is not turned on — it emerges as your team uses Totus. Each stage brings measurable improvement in screening accuracy and reference quality.

Launch — Day 1

Foundation established

Database, geocoding pipeline, and comparables engine go live. Thomas Daily market data begins ingestion. First manual deals enter the system.

Thin comps · External data only · Wide screening bands
Phase 1 — Month 3

Internal data begins accumulating

Screened and approved deals feed back into the comps pool. Location scores start incorporating internal transaction density. CSV uploads enrich coverage by asset type and geography.

Mixed data sources · Narrowing bands · Location scores improving
Phase 2 — Month 9

Proprietary moat forming

Internal transaction density sufficient for reliable micro-location scoring in core markets. Screening accuracy measurably higher than at launch. Comparables coverage expands through team usage alone.

Reliable micro-scoring · Tight market bands · Proprietary reference quality
Phase 3 — Month 18+

Self-sustaining intelligence layer

Scoring is primarily driven by internal data. New deals immediately benefit from dense historical context. The system holds a genuine competitive advantage that external tools cannot replicate without your deal history.

Self-sustaining · Competitive moat · AI layer becomes viable

Map visualization and data enrichment are platform capabilities, not optional add-ons.

🗺️ Map Visualization

Geospatial deal and comparable view

Every property in Totus has geospatial data compatible with map rendering. The backend exposes optimized endpoints for viewport-based and radius-based queries, ready for Mapbox or Leaflet integration.

  • Bounding-box endpoint for map viewport queries
  • Radius endpoint for comparable retrieval
  • Responses include lat, lng, id, price, asset type
  • Ungeocoded records never appear on map
  • Result limits configurable per viewport call
  • Asset type filtering on both endpoints
Query types
GET /map/properties (bbox params) · GET /map/comparables (radius params)
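What the viewport endpoint returns can be sketched as a plain-Python filter (in production the query runs in PostGIS against the spatial index). The function name and record shape below are illustrative; the response fields mirror the list above.

```python
def map_properties(records, min_lat, min_lng, max_lat, max_lng,
                   asset_type=None, limit=200):
    """Sketch of the bounding-box viewport query behind /map/properties."""
    out = []
    for r in records:
        if r.get("lat") is None or r.get("lng") is None:
            continue  # ungeocoded records never appear on the map
        if not (min_lat <= r["lat"] <= max_lat and min_lng <= r["lng"] <= max_lng):
            continue  # outside the current viewport
        if asset_type and r["asset_type"] != asset_type:
            continue  # asset type filtering supported on both endpoints
        out.append({k: r[k] for k in ("id", "lat", "lng", "price", "asset_type")})
        if len(out) >= limit:
            break     # result limit is configurable per viewport call
    return out
```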
📂 Data Enrichment

External upload pipeline for CSV and JSON

All external data enters through the same controlled pipeline as internal deals. No direct writes to normalized tables. Every upload is traceable to source, batch, and file. Deduplication runs before promotion.

  • CSV and JSON file upload via POST /imports/upload
  • Batch ID assigned on upload — full traceability
  • Raw payload preserved, never overwritten
  • Parser extracts: address, city, price, asset type, size, date
  • Duplicate detection flags matching records
  • Review workflow: unreviewed → approved → rejected
Data pipeline
Upload → Raw store → Parse → Normalize → Validate → Persist
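The first stages of that pipeline can be sketched as follows: assign a batch ID on upload, preserve every raw row, and flag duplicates for review rather than silently dropping them. The function name, dedupe key, and field names are illustrative assumptions, not the actual Totus schema.

```python
import csv
import io
import uuid

def ingest_csv(file_text, source_type="external_upload"):
    """Sketch of upload → raw store → parse → dedupe flagging for a CSV file."""
    batch_id = str(uuid.uuid4())  # assigned on upload: full traceability
    seen, records = set(), []
    for row in csv.DictReader(io.StringIO(file_text)):
        key = (row["address"].strip().lower(), row["date"], row["price"])
        records.append({
            "batch_id": batch_id,
            "source_type": source_type,
            "raw": dict(row),             # raw payload preserved, never overwritten
            "duplicate": key in seen,     # dedupe flags the match; review decides
            "review_status": "unreviewed",
        })
        seen.add(key)
    return batch_id, records
```

Normalization, validation, and the unreviewed → approved → rejected workflow run downstream of this step, always against the preserved raw payload.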

Built on non-negotiable architecture principles.

The SmartLayer is only as trustworthy as the data infrastructure beneath it. These architectural decisions are fixed — not because of convention, but because they are the conditions for reliable intelligence.

🗄️

PostgreSQL + PostGIS

Mandatory. All geospatial queries run through PostGIS. No proximity search, no radius comps, no location scoring without it.

📌

Geo-indexed everywhere

Every property record must carry geocoded coordinates. GIST spatial index supports both radius and bounding-box queries efficiently.

🔒

Append-only history

Transaction records are never overwritten. Scoring and screening downstream depend on this immutability. Silent history loss corrupts the feedback loop.

Modular API-first

Geocoding, comparables, scoring, ingestion — each is a discrete service. No monolithic logic. Easily extensible without rewriting the system.

Backend stack

  • Python 3.12 + FastAPI
  • PostgreSQL 16 + PostGIS
  • SQLAlchemy 2.x + Alembic
  • Docker Compose local dev
  • pytest test coverage
  • Nominatim geocoding (switchable interface)

Architecture rules

  • Raw data and normalized data strictly separated
  • No direct writes to normalized tables from uploads
  • Every raw_import record carries batch_id + source_type
  • Map endpoints never return ungeocoded records
  • Scoring formulas are configurable and documented
  • AI layer deferred until data density justifies it
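"Scoring formulas are configurable and documented" can be made concrete with a small sketch: weights live in configuration, not in code paths, so they can be audited and tuned without redeploying. The config keys and weight values below are illustrative, not shipped defaults.

```python
# Illustrative scoring configuration; in practice this would be loaded
# from a versioned config source, not hard-coded.
SCORING_CONFIG = {
    "micro_location": {
        "density_weight": 0.5,
        "recency_weight": 0.3,
        "trend_weight": 0.2,
    },
}

def weighted_score(inputs, config):
    """Combine normalized (0–1) inputs using configurable weights; returns 0–100."""
    weights = config["micro_location"]
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    raw = sum(inputs[k.replace("_weight", "")] * w for k, w in weights.items())
    return round(raw * 100, 1)
```

Because the weights are data, the "formulas are configurable and documented" rule reduces to reviewing one config file.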

Implementation packet sequence — for developers

Foundation
Packet 1 — Backend skeleton + Docker + health endpoints
Packet 2 — Core schema: cities, sources, properties, transactions, raw_imports, geocoding_metadata
Packet 3 — Geocoding service: Nominatim provider, address normalization, PostGIS persistence
Intelligence layer
Packet 4 — Comparables engine: radius queries, ranking, bounding-box queries
Packet 4b — Map query layer: /map/properties and /map/comparables endpoints
Packet 5 — Deal screening pipeline: geocode → comps → metrics → flags
Packet 5b — Deal promotion: approved deals feed back into comps pool
Data ingestion
Packet 6 — Thomas Daily raw import: ingestion, parsing, normalization, review workflow
Packet 6b — External upload pipeline: CSV/JSON, deduplication, batch tracking
Scoring + ops
Packet 7 — Location + risk scoring: macro, micro, liquidity, volatility, demand proxy
Packet 8 — Data quality + ops hardening: validation, logging, retries
Packet 9–10 — API expansion + UI contract layer

Every layer feeds the next.

UI Layer
Deal detail view · Comparables map · Source history · Risk panel
API Layer
FastAPI endpoints · Map queries (bbox + radius) · Screening pipeline · Import review queue
Services
Geocoding service · Comparables engine · Scoring service · Ingestion pipeline · Upload pipeline
Data layer
Thomas Daily ETL · CSV/JSON uploads · Deal promotion loop · Deduplication hooks
Database
PostgreSQL 16 + PostGIS · GIST spatial index · Append-only transactions · Raw / normalized separation

See SmartLayer in action.

Request a demo and see how the data layer builds behind the workflow — and how your team's deal history becomes a compounding competitive asset.

Request Demo →