AI GOVERNANCE

76% of organizations have a Chief AI Officer. Almost none are in property data.

In May 2026 the IBM Institute for Business Value published its CEO Study of 2,000 chief executives across 33 countries. The headline number tripled in a year: AI governance moved from edge case to executive function. Property data, mostly, missed the memo. We think that’s an opportunity and a problem.

CAIO adoption 2026
76%
CAIO adoption 2025
26%
CEOs surveyed
2,000
Countries
33
Source: IBM Institute for Business Value, 2026 CEO Study — Rewiring the C-suite: The fast track to 2030
The shift

From experiment to executive function

A year ago the Chief AI Officer was a curiosity. Today it's a line item. HSBC and Lloyds appointed CAIOs in early 2026, and banks, insurers, and consumer technology firms followed at scale. The IBM data is one snapshot, but other industry surveys (Gartner, McKinsey, Randy Bean's 2026 AI & Data Leadership Executive Benchmark) describe the same trend. The role's mandate varies, but the pattern is consistent: AI is moving out of the lab and into the operating model.

Inside an enterprise, that has a quiet consequence for vendors. When AI ownership rises to the executive floor, vendor due diligence rises with it. The procurement question used to be “does your data fit our schema?” The question is now “can you support our AI risk function in audit, lineage, and explainability?”

Most property data vendors aren’t prepared for that question. We think that’s because property data was built for a different decade — one where data quality was assessed by sample size and the buyer was a marketing team, not a model risk committee.

That assumption is now out of date.

The regulatory landscape

What property data buyers are working under in 2026

| Jurisdiction | Framework | 2026 status | Implication for property data buyers |
| --- | --- | --- | --- |
| Canada | AIDA (proposed) | Died on the order paper January 2025. Successor expected via privacy legislation rather than a standalone AI law. | Plan for AIDA-style obligations on high-impact systems even without an active statute. The Voluntary Code of Conduct is interim guidance. |
| Canada (financial) | OSFI Guideline E-23 (Model Risk Management) | Active. Applies to federally regulated financial institutions. | Banks and lenders must document model lineage, validation, and ongoing monitoring, including input data sources. |
| Canada (federal AI use) | Treasury Board Directive on Automated Decision-Making | Active. Applies to federal institutions. | Government buyers require Algorithmic Impact Assessments for automated decisions affecting Canadians. |
| United Kingdom | UK GDPR Article 22 + ICO AI guidance | Active. Updated ICO automated decision-making guidance published 2026. | Solely automated decisions with legal or similarly significant effects require a lawful basis, transparency, and human review mechanisms. |
| United Kingdom (financial) | FCA AI expectations | Active. 2024 discussion paper plus 2026 update. | FCA-regulated firms must address AI risk governance, model risk management, explainability, and bias testing. |
| European Union | EU AI Act | In force. Risk-tiered obligations apply. | Extraterritorial scope: applies to organizations outside the EU whose AI output is used in the EU. |
| International | OECD AI Principles, ISO/IEC 42001, NIST AI RMF | Voluntary frameworks, widely referenced. | Often adopted by enterprises operating across multiple jurisdictions as a unifying internal standard. |
This is a summary, not legal advice. Buyers should consult counsel for jurisdiction-specific obligations.
The structural risk

When the data layer can’t answer audit questions

An AVM trained on Canadian property data feeds a mortgage underwriting decision. The model risk team at the lender needs to document training-data provenance for OSFI Guideline E-23. They ask their property data vendor four questions:

  1. What was the exact dataset used for training as of build version X?
  2. How were duplicates, relists, and address ambiguities resolved?
  3. Can you reproduce that dataset today for re-validation?
  4. What changed between version X and the version currently in production?
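Mechanically, questions 3 and 4 come down to whether each build was snapshotted with a content hash recorded at build time. A minimal sketch of that idea, using illustrative version labels, field names, and toy data rather than any vendor's actual schema:

```python
# Hypothetical sketch: a build manifest that can answer audit questions
# 3 (reproduce a historical dataset) and 4 (diff two build versions).
# All names and records here are illustrative, not a real vendor schema.
import hashlib
import json

def dataset_hash(records: list[dict]) -> str:
    """Content hash of a dataset snapshot, for audit reproducibility."""
    canonical = json.dumps(sorted(records, key=lambda r: r["property_id"]),
                           sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Each weekly build stores its full record set (or a pointer to it)
# plus a content hash recorded when the build shipped.
builds = {
    "2026-W08": {"records": [{"property_id": "P1", "status": "listed"}]},
    "2026-W09": {"records": [{"property_id": "P1", "status": "sold"},
                             {"property_id": "P2", "status": "listed"}]},
}
for b in builds.values():
    b["hash"] = dataset_hash(b["records"])

def reproduce(version: str) -> list[dict]:
    """Question 3: return the exact dataset as of a prior build,
    verifying it still matches the hash recorded at build time."""
    build = builds[version]
    assert dataset_hash(build["records"]) == build["hash"]
    return build["records"]

def diff(old: str, new: str) -> dict:
    """Question 4: what changed between two build versions."""
    old_map = {r["property_id"]: r for r in builds[old]["records"]}
    new_map = {r["property_id"]: r for r in builds[new]["records"]}
    return {
        "added": sorted(new_map.keys() - old_map.keys()),
        "removed": sorted(old_map.keys() - new_map.keys()),
        "changed": sorted(k for k in old_map.keys() & new_map.keys()
                          if old_map[k] != new_map[k]),
    }
```

A vendor that ships only "the latest file" has no equivalent of `reproduce` or `diff`, which is exactly where the audit conversation stalls.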

Most property data vendors today struggle with question 3 and fail question 4. The data shipped, the buyer trained on it, and the methodology, build versioning, and historical reproducibility were never set up for audit. The data was good. The audit trail was missing.

This is the structural risk: as enterprise AI governance matures, upstream data vendors that don’t version, document, and surface lineage will be replaced — not for accuracy reasons, but for governance reasons. The data could be perfect and still fail a model risk review.

A vendor that supports the buyer’s AI governance team is a vendor that survives the AI governance era.

Our posture

What BrightCat does at the data layer

We don’t make claims about downstream models — that’s the buyer’s job. We make the data layer auditable so the buyer’s AI governance team can do their job. Concretely:

Versioning
Every weekly build is versioned
A buyer can request the dataset as of any prior weekly build for re-validation, retraining, or audit reproducibility. 600+ versions of record since 2014.
Lineage
Every record carries provenance metadata
Which build the record came from, address normalization version applied, persistent identifier version, reclassification rules in effect. Lineage travels with the data into Snowflake or your warehouse.
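A minimal sketch of what per-record lineage can look like in practice. The field names here (`build_version`, `address_norm_version`, and so on) are illustrative assumptions, not BrightCat's actual schema:

```python
# Hypothetical sketch of lineage metadata traveling with each record,
# as opposed to living in a separate methodology PDF. Field names are
# illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lineage:
    build_version: str         # weekly build the record shipped in
    address_norm_version: str  # address normalization rules applied
    identifier_version: str    # persistent-identifier scheme in effect
    reclass_rules_version: str # reclassification rules in effect

@dataclass
class PropertyRecord:
    property_id: str
    status: str
    lineage: Lineage

rec = PropertyRecord(
    property_id="P-0001",
    status="listed",
    lineage=Lineage("2026-W09", "addr-norm-v4", "pid-v2", "reclass-v7"),
)

# Because lineage is per-record, an audit query can filter or group by
# it in the warehouse, e.g. "which records came from build 2026-W09?"
def built_under(records, build_version: str):
    return [r for r in records if r.lineage.build_version == build_version]
```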
Methodology
Documented, not internal
Our methodology and proof pages publish how data is collected, matched, reclassified, and updated — in writing, not buried in internal documents.
Documentation under NDA
For model risk reviews and AI audits
Enterprise customers receive build-level documentation, schema specifications, and lineage reports under NDA for internal model validation or regulator-facing audits.
Persistent identification
Same property across every event
A property carries the same identifier from first listing through every relist, price change, withdrawal, sale, and rental conversion. Joins survive MLS number changes.
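The difference a persistent identifier makes is easy to show with toy data. The IDs, MLS numbers, and field names below are illustrative only:

```python
# Hypothetical sketch: the same property relists under a new MLS number.
# Join on MLS number and the relist looks like a brand-new property;
# join on the persistent identifier and the full history survives.
from collections import defaultdict

events = [
    {"property_id": "P-0001", "mls_number": "X100", "event": "listed"},
    {"property_id": "P-0001", "mls_number": "X100", "event": "withdrawn"},
    {"property_id": "P-0001", "mls_number": "X217", "event": "relisted"},
    {"property_id": "P-0001", "mls_number": "X217", "event": "sold"},
]

by_mls = defaultdict(list)       # fragments: two apparent properties
by_property = defaultdict(list)  # whole: one property, four events
for e in events:
    by_mls[e["mls_number"]].append(e["event"])
    by_property[e["property_id"]].append(e["event"])
```

A model trained on the MLS-keyed view would treat the relist as fresh inventory, which is exactly the wrong signal the due-diligence questions below are probing for.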
Modern delivery
Snowflake, MCP, API
Data and lineage delivered via Snowflake Marketplace, Model Context Protocol for AI agents, or REST API — native to the workflows AI teams already use.
What we don’t claim

We don’t certify your AVM. We don’t validate your underwriting model. We don’t make explainability claims about a downstream LLM that ingested our data. Those are the buyer’s responsibility, governed by their AI risk function, their CAIO, their regulator, and their internal model risk team. We make sure the data layer underneath is auditable so they can do those things.

Vendor due diligence

Five questions to ask any property data vendor in 2026

If you’re building AVMs, pre-mover models, underwriting systems, or any AI workflow that touches Canadian property, these are worth asking before signing:

  1. Where does your data come from and can you document the chain?
    If they can’t answer this in writing, your model risk team will struggle to defend it later.
  2. Do you version your builds and can a buyer reproduce a historical training dataset?
    If the answer is “we ship the latest file every quarter,” you cannot reproduce the dataset your model was trained on.
  3. Do you document your matching, deduplication, and reclassification logic?
    If they treat every new listing as a fresh record, your model is learning the wrong signal from relisted properties.
  4. Do you carry lineage metadata that survives ingest into Snowflake or a warehouse?
    A separate PDF describing methodology is not lineage. Lineage is per-record metadata that travels with the data.
  5. Can you support an internal model risk review or external AI audit if our regulator asks?
    If they have never received this question, you will be the first to ask it, and you may be the one paying for the gap.
Built for the AI governance era

A data layer your AI risk team can defend.

Read the methodology, query a free sample on Snowflake, or talk to us about how BrightCat fits into your AI governance documentation.

See the proof page Talk to us