Three delivery paths into the same pipeline. 5.8M residential properties. 297K commercial properties. All ten provinces. Weekly refresh since 2014.
Most API documentation starts with endpoints. This one starts with the data, because the data is the reason you are here. BrightCat's pipeline covers Canadian residential and commercial property markets at the property level, with weekly refresh, continuous since 2014. The question is how you get it into your application. BrightCat offers three delivery paths. Choose one, or combine them.
BrightCat does not publish a public HTTP REST endpoint. The volumes and query patterns enterprise teams bring to Canadian property data are better served by the three delivery mechanisms below. Each one is an API in the architectural sense: a programmatic interface between the consumer's application and BrightCat's data. Each one sits a layer above raw HTTP.
The preferred delivery path for enterprise data teams. BrightCat publishes five Marketplace listings (Listings, Sold, Rentals, Commercial, Core) as Snowflake Secure Data Shares. The consumer mounts the share into their own Snowflake account; queries run as SQL inside the consumer's warehouse against the live BrightCat data. No file transfer, no ETL pipeline, no scheduled refresh to maintain. When BrightCat updates the pipeline, the change is immediately visible to the same queries.
Best for: analytics workloads, dashboards, scheduled reports, retention/acquisition use cases where the query joins BrightCat data to the consumer's own customer or property tables.
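Mounting the share is a one-time step. A minimal sketch of that first session, assuming snowflake-connector-python and illustrative account and share names (the real share name comes with the Marketplace listing, and accepting the listing in the Snowflake UI creates the database for you):

# Example (illustrative): mount a BrightCat share and run a first query
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",           # placeholder account identifier
    user="your_user",
    authenticator="externalbrowser",  # or key-pair / password auth
)
cur = conn.cursor()

# Manual equivalent of accepting the Marketplace listing; requires the
# IMPORT SHARE privilege. The share name below is hypothetical.
cur.execute(
    "CREATE DATABASE BRIGHTCAT_LISTINGS "
    "FROM SHARE BRIGHTCAT_PROVIDER.LISTINGS_SHARE"
)

# The mounted share queries like any local database
cur.execute("SELECT COUNT(*) FROM BRIGHTCAT_LISTINGS.PRODUCT.listings_weekly")
print(f"Rows in listings_weekly: {cur.fetchone()[0]:,}")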
The delivery path for AI agents and LLMs. The BrightCat MCP Connector implements the Model Context Protocol, letting Claude, custom agents, and any MCP-compatible client query BrightCat data through natural language. Authentication is OAuth-based. The consumer's AI agent issues tool calls; BrightCat's MCP server translates them into SQL, runs the query, and returns structured results. The underlying data is the same Snowflake-hosted pipeline the Marketplace listings expose.
Best for: conversational analytics, agent-driven workflows, retention triggers, next-best-offer routing, natural-language property research.
The delivery path for teams that are not yet on Snowflake, or that have a specific compliance requirement for file-based delivery. BrightCat delivers Parquet or CSV to the consumer's SFTP endpoint on the weekly refresh cadence. Schemas match the Snowflake and MCP delivery paths. Partitioning and chunking are customizable per client.
Best for: Databricks environments, BigQuery pipelines, on-prem data warehouses, regulated workloads requiring file-based audit trails.
Five product lines, each available through all three delivery paths: Listings, Sold, Rentals, Commercial, and Core.
There is no separate BrightCat API key system. Authentication happens at the delivery layer of whichever path the consumer chooses: Snowflake Marketplace shares ride on the consumer's existing Snowflake account credentials and role grants; the MCP Connector authenticates with OAuth; flat-file delivery uses the access controls on the consumer's own SFTP endpoint.
Join the consumer's customer address table against BrightCat Listings by postal code. Filter for properties with a lifecycle state of NEW in the latest weekly file (i.e., the past seven days). Return matched customer IDs for the retention queue.
-- Pre-mover identification: match new listings to customer base
SELECT c.customer_id, c.name, c.postal_code,
       b.address, b.asking_price, b.property_type,
       b.listing_status, b.days_on_market
FROM customer_table c
JOIN BRIGHTCAT_LISTINGS.PRODUCT.listings_weekly b
  ON c.postal_code = b.postal_code
WHERE b.listing_status = 'NEW'
  AND b.file_date = (
    SELECT MAX(file_date)
    FROM BRIGHTCAT_LISTINGS.PRODUCT.listings_weekly
  )
ORDER BY b.asking_price DESC;

"Show me active residential listings in Toronto with price drops of more than 5% in the last 14 days, sorted by original list price." Claude issues the tool call; the MCP server resolves the query against the BrightCat pipeline and returns structured results.
// MCP server configuration for Claude Desktop or agent stack
{
  "mcpServers": {
    "brightcat": {
      "url": "https://<account>.snowflakecomputing.com/api/v2/databases/BRIGHTCAT_DATA_LISTINGS/schemas/PUBLIC/mcp-servers/BRIGHTCAT",
      "auth": {
        "type": "oauth2",
        "clientId": "${SNOWFLAKE_OAUTH_CLIENT_ID}",
        "clientSecret": "${SNOWFLAKE_OAUTH_CLIENT_SECRET}"
      }
    }
  }
}

# Example: Claude MCP tool call (what the agent sends)
{
  "tool": "brightcat_query",
  "input": {
    "query": "Active residential listings in Toronto with price reductions greater than 5% in the last 14 days, sorted by original list price descending"
  }
}

# The MCP server translates to SQL, executes against
# the Snowflake-hosted BrightCat pipeline, and returns
# structured results the agent can reason over.

Consumer's SFTP endpoint receives the weekly Parquet drop. Downstream pipeline loads the files into the consumer's warehouse of choice. Schema is identical to what the Snowflake and MCP paths expose.
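Before the load step, the files have to come off the endpoint. A minimal fetch sketch, assuming key-based auth and the paramiko library; hostname, username, and paths are placeholders:

# Example (illustrative): pulling the weekly drop off the SFTP endpoint
import os
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()  # verify/pin the host key in production
client.connect(
    "sftp.internal.example.com",  # placeholder hostname
    username="brightcat_drop",
    key_filename=os.path.expanduser("~/.ssh/brightcat_ed25519"),
)
sftp = client.open_sftp()
sftp.get(
    "/drops/brightcat_listings_20260428.parquet",  # remote weekly drop
    "brightcat_listings_20260428.parquet",         # local working copy
)
sftp.close()
client.close()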
# Example: loading BrightCat Parquet into a local pipeline
import pandas as pd
# Weekly drop arrives as Parquet, partitioned by product
listings = pd.read_parquet("brightcat_listings_20260428.parquet")
# Schema matches the Snowflake and MCP delivery paths
print(f"Properties: {len(listings):,}")
print(f"Columns: {len(listings.columns)}")
print(f"Provinces: {listings['province'].nunique()}")
# Filter to this week's new listings in Ontario
new_on = listings[
    (listings['listing_status'] == 'NEW') &
    (listings['province'] == 'ON')
]
print(f"New ON listings this week: {len(new_on):,}")

BrightCat licenses the data through a commercial agreement, not through per-call API metering. Pricing is based on scope (which of the five products), delivery path, and term structure (12/24/36-month). Every Snowflake Marketplace listing includes a sample schema that prospective consumers can query before any commercial commitment. The fastest way to evaluate is to accept the Marketplace listing and run queries against the sample.
Start with the Snowflake Marketplace sample, the MCP Connector, or a flat-file extract. Every path includes evaluation data before any commercial agreement.