How-to guide · April 2026 · 10 min read

Connect Claude AI to Canadian property data

A step-by-step integration guide for the BrightCat MCP connector. Read through in ten minutes, wire it up in a working afternoon.

Snowflake + MCP · OAuth 2.0 · Read-only · 1–3 day setup

This guide walks through connecting an AI agent — Claude, ChatGPT, a custom agent stack, anything that speaks the Model Context Protocol — to the Canadian residential and commercial property pipeline operated by BrightCat Data. The target reader is a data engineer or AI builder who needs to move from “we should integrate property data” to “the agent is answering live queries” in a single working day.

BrightCat exposes the same underlying data through two shapes. A Snowflake Marketplace share delivers SQL-driven analytics. An MCP endpoint layered on top of that share delivers agentic AI workflows. Most enterprise deployments use both. The Snowflake share is the durable data surface. The MCP endpoint is the conversational surface layered on top of it.

What follows assumes you already have a Snowflake account, an AI client that supports MCP (Claude Desktop is the reference implementation; Claude in Chrome, Cursor, and custom agent stacks all work), and basic familiarity with OAuth 2.0. The setup pattern follows Snowflake’s standard Marketplace-plus-OAuth flow, which means anything you build here is portable across MCP-compatible clients.

What MCP is, and why it matters here

The Model Context Protocol is an open protocol that lets an AI agent call external data and tools mid-conversation. Introduced by Anthropic in late 2024 and now supported by most major AI stacks, MCP turns a data warehouse into something the agent can query directly. No embeddings pipeline, no retrieval pre-processing, no stale snapshots loaded into context.

For property data specifically, MCP closes a gap that retrieval-augmented generation never closed cleanly. The agent gets the live weekly data as of today, not a static embedding of last quarter. It can issue exact SQL-equivalent queries instead of fuzzy vector retrieval. And because the data stays inside the buyer’s Snowflake environment, it never crosses a third-party boundary.

The practical effect is that an agent can answer something like “which active listings in Burlington crossed $1.2M this week and sat more than 30 days?” by issuing a real query, not by guessing from pre-loaded context.
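Under the hood, that request becomes an MCP tool call, which is a JSON-RPC 2.0 message. A minimal sketch of what a client might send for the Burlington query above; the tool name `query_listings` and its argument shape are illustrative assumptions, since the real tool catalog is published by the BrightCat MCP endpoint itself:

```python
import json

# Hypothetical MCP tool call for the Burlington query above.
# "query_listings" and its arguments are placeholders; the actual
# tool names and schemas come from the endpoint's tool listing.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_listings",
        "arguments": {
            "city": "Burlington",
            "min_price": 1_200_000,
            "min_days_on_market": 30,
            "status": "active",
        },
    },
}

payload = json.dumps(request)
print(payload)
```

The agent builds this call itself from the natural-language question; the sketch only shows the wire shape so you can recognize it in MCP client logs.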

What’s in the BrightCat data surface

BrightCat ships five product families, each exposed as a separate Snowflake database with a SAMPLE and PRODUCT schema. The sample schema is the public-facing evaluation slice. The product schema is the licensed production data. MCP access is scoped to whichever databases the client has licensed.

Product | Coverage | Typical query
Listings | Weekly active residential listings across all 10 Canadian provinces; 5.8M+ unique properties tracked. | Territory and lifecycle filters. “Active listings in FSA M5V priced over $900K, under 45 days on market.”
Sold | Completed residential sales: 899K events across 751K unique properties, continuous since 2014. | Sold-price and days-on-market analytics. “Median sold price in this postal code over the last eight weeks.”
Rentals | Weekly rental listings, July 2021 forward; 233 weekly snapshots as of this writing. | Rental-rate tracking. “Asking-rent distribution for 2-bedroom GTA units this week vs the same week last year.”
Commercial | ~297K unique commercial properties; dual-listing detection for sale + lease flags. | CRE cross-lifecycle queries. “Commercial properties currently dual-listed for sale and lease in Ontario.”
Core | Unified enrichment layer including the Home Price Index; 194K confirmed repeat-sale pairs. | Repeat-sale and HPI analytics. “Average HPI-adjusted appreciation by FSA over 12 years.”

Most integrations start with Listings and Core. Rentals and Commercial are added when the use case matures past the initial retention or acquisition workflow.

Sample SQL

Below each step in the integration, you’ll find an abstract sample query. These show the shape of a typical request against each product. Exact column names and table schemas are delivered with access provisioning and may evolve. The queries are illustrative, not executable against a public endpoint.

What you need before starting

A Snowflake account, an MCP-capable AI client (Claude Desktop is the reference implementation), basic OAuth 2.0 familiarity, and a signed MDLA. None of these prerequisites are heavy. The bottleneck in practice is the MDLA review inside the buyer’s procurement team, which typically takes five to ten business days in a regulated enterprise and under three at a startup.

Step 1 — Request access

Access is gated by an explicit request, not a self-service signup. This is intentional. It lets BrightCat scope the Snowflake share, the MCP endpoint, and the network allowlist to your environment before any credentials exist.

Submit a request with the following:

- Your Snowflake account identifier and region, so the share can be scoped to your environment
- The product families you want licensed
- The egress IP ranges for the network allowlist
- A signed MDLA

BrightCat responds with a share provisioning confirmation, an OAuth integration ID, and the MCP endpoint URL scoped to your account. Typical turnaround is one to three business days from receipt of the signed MDLA.

Step 2 — Accept the Snowflake share

Once provisioning completes, the BrightCat data appears as a pending Snowflake Marketplace share inside your Snowflake account. Accept the share under an ACCOUNTADMIN session. Each licensed product family appears as a separate database.

-- Run as ACCOUNTADMIN. Replace placeholder database names with
-- the provider-supplied identifiers from your provisioning email.

USE ROLE ACCOUNTADMIN;

CREATE DATABASE BRIGHTCAT_LISTINGS
  FROM SHARE <provider-share-id>.BRIGHTCAT_LISTINGS;

CREATE DATABASE BRIGHTCAT_CORE
  FROM SHARE <provider-share-id>.BRIGHTCAT_CORE;

-- Grant read access to the role your team will query under.
GRANT IMPORTED PRIVILEGES ON DATABASE BRIGHTCAT_LISTINGS
  TO ROLE DATA_ANALYST;

After this step, your data team can query BrightCat tables inside Snowflake exactly as they would any other Snowflake table. A quick validation:

-- Abstract example. Exact column names provided with access.
SELECT province, COUNT(*) AS active_listings
FROM BRIGHTCAT_LISTINGS.PRODUCT.listings_weekly
WHERE listing_status = 'active'
  AND file_date = (
    SELECT MAX(file_date)
    FROM BRIGHTCAT_LISTINGS.PRODUCT.listings_weekly
  )
GROUP BY province
ORDER BY active_listings DESC;

If this returns a row per province with counts adding to roughly the expected Canadian active-market size, the share is live and the data is current. If it returns empty, the most common cause is that the share was granted to a different role — check SHOW SHARES as ACCOUNTADMIN.
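The same validation can be scripted from Python for a CI-style smoke test. A sketch assuming the `snowflake-connector-python` package; the table and column names mirror the abstract example above and remain placeholders until your provisioning email arrives:

```python
# Sketch: run the Step 2 validation query from Python.
# Table/column names are the abstract placeholders from the guide.

def validation_query(database: str = "BRIGHTCAT_LISTINGS") -> str:
    """Build the per-province active-listings count query."""
    table = f"{database}.PRODUCT.listings_weekly"
    return (
        f"SELECT province, COUNT(*) AS active_listings "
        f"FROM {table} "
        f"WHERE listing_status = 'active' "
        f"AND file_date = (SELECT MAX(file_date) FROM {table}) "
        f"GROUP BY province ORDER BY active_listings DESC"
    )

def run_validation(conn) -> list:
    """Execute the query on an open Snowflake connection."""
    with conn.cursor() as cur:
        cur.execute(validation_query())
        return cur.fetchall()

# Usage (requires network access and your own credentials):
#   import snowflake.connector  # pip install snowflake-connector-python
#   conn = snowflake.connector.connect(
#       account="<your-account>", user="<service-user>",
#       role="DATA_ANALYST", authenticator="externalbrowser")
#   rows = run_validation(conn)
```

An empty `rows` here points to the same cause as the SQL check: the share was granted to a different role.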

Step 3 — Configure OAuth for MCP

The MCP endpoint uses Snowflake OAuth for authentication. This keeps the AI client’s access aligned with Snowflake’s existing role-based security. The agent can only see what the read-only role can see, regardless of what the agent is asked to do.

Two things have to exist inside your Snowflake account for MCP to work: a dedicated read-only role scoped to the licensed databases, and an OAuth security integration.

-- Dedicated read-only role for MCP access.
CREATE ROLE CLAUDE_MCP_READONLY;

GRANT IMPORTED PRIVILEGES ON DATABASE BRIGHTCAT_LISTINGS
  TO ROLE CLAUDE_MCP_READONLY;
GRANT IMPORTED PRIVILEGES ON DATABASE BRIGHTCAT_CORE
  TO ROLE CLAUDE_MCP_READONLY;

GRANT ROLE CLAUDE_MCP_READONLY TO USER <service-user>;

-- OAuth security integration.
CREATE SECURITY INTEGRATION CLAUDE_MCP_INTEGRATION
  TYPE = OAUTH
  ENABLED = TRUE
  OAUTH_CLIENT = CUSTOM
  OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
  OAUTH_REDIRECT_URI = 'https://claude.ai/api/mcp/auth_callback'
  OAUTH_ISSUE_REFRESH_TOKENS = TRUE
  OAUTH_REFRESH_TOKEN_VALIDITY = 7776000
  OAUTH_USE_SECONDARY_ROLES = IMPLICIT;

-- Pre-OAuth: set the default role so the handshake
-- lands on the correct scope.
ALTER USER <service-user>
  SET DEFAULT_ROLE = 'CLAUDE_MCP_READONLY';

Gotcha

The default role has to be set to CLAUDE_MCP_READONLY before the first OAuth handshake, or the AI client authenticates under the wrong role and sees either nothing or too much. After the first successful connection, the default role can be reset. This single step causes more BrightCat MCP support tickets than anything else combined.
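Because this misconfiguration is so common, a preflight check before the first handshake is cheap insurance. A sketch that inspects the rows returned by `DESC USER <service-user>`, assuming Snowflake's usual (property, value, ...) row shape; verify the shape against your own output:

```python
def default_role_ok(desc_user_rows, expected="CLAUDE_MCP_READONLY"):
    """Scan DESC USER output rows, shaped (property, value, ...),
    and confirm DEFAULT_ROLE matches the MCP read-only role."""
    for row in desc_user_rows:
        prop, value = row[0], row[1]
        if prop.upper() == "DEFAULT_ROLE":
            return (value or "").upper() == expected
    return False

# Illustrative rows; real ones come from `DESC USER <service-user>`.
rows = [("NAME", "MCP_SVC"), ("DEFAULT_ROLE", "CLAUDE_MCP_READONLY")]
print(default_role_ok(rows))  # prints True
```

Run the check against the service user, and only then kick off the OAuth flow.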

Retrieve the OAuth client ID and client secret:

SELECT SYSTEM$SHOW_OAUTH_CLIENT_SECRETS('CLAUDE_MCP_INTEGRATION');

Copy the OAUTH_CLIENT_ID and either OAUTH_CLIENT_SECRET value. You’ll need both in Step 4.
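The function returns a single JSON string rather than separate columns, so most automation parses it. A sketch with a stand-in string; the key names below match current Snowflake documentation, but confirm them against your own output:

```python
import json

# Stand-in for the SYSTEM$SHOW_OAUTH_CLIENT_SECRETS result; the
# real string comes back from the Snowflake query in Step 3.
raw = (
    '{"OAUTH_CLIENT_ID": "example-client-id", '
    '"OAUTH_CLIENT_SECRET": "example-secret-1", '
    '"OAUTH_CLIENT_SECRET_2": "example-secret-2"}'
)

secrets = json.loads(raw)
client_id = secrets["OAUTH_CLIENT_ID"]
client_secret = secrets["OAUTH_CLIENT_SECRET"]  # either secret works
```

Snowflake issues two secrets so you can rotate one while the other stays live.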

Step 4 — Register the MCP server with Claude

Register BrightCat as an MCP server inside your AI client. For Claude, this is done either through the Claude Desktop settings panel (for interactive use) or through a programmatic config file (for custom agent stacks).

Interactive setup (Claude Desktop)

In Claude Desktop → Settings → Connectors → Add custom connector, enter the following:

- The MCP endpoint URL from your provisioning email
- The OAuth client ID and client secret retrieved in Step 3

Claude performs the OAuth handshake, asks for confirmation of the requested Snowflake role, and establishes the connection. The first request can take 30–60 seconds as Snowflake cold-starts the virtual warehouse. Subsequent requests are fast.

Programmatic setup (agent stacks)

Register the MCP server in your agent config. The shape varies by stack — below is the standard MCP client config block.

{
  "mcpServers": {
    "brightcat": {
      "url": "https://<your-account>.snowflakecomputing.com/api/v2/databases/BRIGHTCAT_LISTINGS/schemas/PUBLIC/mcp-servers/BRIGHTCAT",
      "auth": {
        "type": "oauth2",
        "clientId": "${SNOWFLAKE_OAUTH_CLIENT_ID}",
        "clientSecret": "${SNOWFLAKE_OAUTH_CLIENT_SECRET}"
      }
    }
  }
}

Load your credentials from environment variables or a secret manager; never commit them to source control. With the configuration above, Snowflake OAuth refresh tokens are valid for 90 days, so the first forced re-authentication will not occur until roughly three months into operation. Plan refresh-token rotation before then.
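A minimal sketch of the fail-fast environment loading; the variable names match the `${...}` placeholders in the config block above:

```python
import os

def load_oauth_credentials():
    """Read the Step 3 OAuth credentials from the environment,
    raising a clear error at startup if either is missing."""
    try:
        return (
            os.environ["SNOWFLAKE_OAUTH_CLIENT_ID"],
            os.environ["SNOWFLAKE_OAUTH_CLIENT_SECRET"],
        )
    except KeyError as missing:
        raise RuntimeError(
            f"Set {missing} before starting the agent"
        ) from None
```

Failing at startup beats a silent 401 deep inside the first tool call.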

Step 5 — First query and validation

Once the MCP server is registered, the agent can issue natural-language queries that translate to SQL under the hood. The agent chooses the right product database, constructs the query, executes it through the MCP tool call, and returns a structured answer.

A good first query is scoped tight. The goal is to confirm the pipe is open, not to stress-test the model.

Ask Claude

“Using BrightCat, how many active residential listings are in the province of Ontario this week, broken out by FSA? Return the top 10 FSAs only.”

If the connector is wired correctly, Claude will produce a tool call against the BrightCat MCP endpoint, issue a SQL query along the lines of the Listings validation example shown in Step 2, and return the result as a structured table in the conversation.

Common validation failures and their causes:

- The query returns an empty result set: the share was granted to a different role. Check SHOW SHARES as ACCOUNTADMIN.
- The agent authenticates but sees nothing, or too much: the service user’s default role was not CLAUDE_MCP_READONLY at the first OAuth handshake (see the Step 3 gotcha).
- The first request hangs or times out: Snowflake is cold-starting the virtual warehouse. Allow 30–60 seconds and retry.

Agent patterns worth copying

Once the connector is live, the useful question becomes what to do with it. Three patterns come up repeatedly in early-production BrightCat deployments.

Weekly retention trigger

An agent runs every Sunday after the BrightCat refresh. It queries the Listings product for newly-active listings and joins the output to the bank or telecom’s customer table on postal code. Matched customers are flagged for the retention team’s Monday queue. The agent handles the query, the join, the threshold filtering, and the delivery — typically into a Slack channel or a Salesforce task queue.
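The join-and-threshold core of that trigger is simple enough to sketch in pure Python. The Snowflake query and the Slack/Salesforce delivery are stubbed out here, and the field names are illustrative assumptions:

```python
# Sketch of the Sunday retention trigger's join logic. Listings
# come from the BrightCat query; customers from the buyer's CRM.

def flag_retention_candidates(new_listings, customers, min_price=0):
    """Join newly-active listings to customers on postal code and
    return the customers flagged for Monday's retention queue."""
    listed_postcodes = {
        row["postal_code"]
        for row in new_listings
        if row["price"] >= min_price
    }
    return [c for c in customers if c["postal_code"] in listed_postcodes]

listings = [
    {"postal_code": "L7L 5R9", "price": 1_250_000},
    {"postal_code": "M5V 2T6", "price": 640_000},
]
customers = [
    {"customer_id": "C-001", "postal_code": "L7L 5R9"},
    {"customer_id": "C-002", "postal_code": "K1A 0B1"},
]
flagged = flag_retention_candidates(listings, customers, min_price=700_000)
print(flagged)  # only C-001: its postal code matches a >= $700K listing
```

In production the agent runs this as a scheduled tool-call chain; the logic itself stays this small.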

Conversational territory analysis

A sales ops lead or analyst asks natural-language questions: “How has inventory in Mississauga changed over the last six weeks?” “What’s the median sold price trend in L6L this quarter vs last?” The agent issues the SQL, returns the numbers, and can also produce a written summary in the same turn. Replaces the static weekly dashboard with an interactive one.

Next-best-offer for telecom churn

The agent combines the pre-mover signal (household-level, derived from the BrightCat Listings feed) with the telecom’s own usage and billing data to score each flagged household for the right retention offer. Pricing hold for high-value customers, plan-downgrade protection for price-sensitive ones, product upgrade for upwardly mobile households. The MCP call is one input; the decision logic is entirely inside the buyer’s stack.
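The decision layer can be as plain as a rule table. A sketch with illustrative thresholds and field names; the real scoring logic is the buyer's own, with the BrightCat pre-mover flag as one input:

```python
# Illustrative next-best-offer rules for a flagged household.
# Thresholds and field names are placeholders, not BrightCat logic.

def next_best_offer(household):
    """Map a pre-mover-flagged household to a retention offer."""
    if household["lifetime_value"] >= 5_000:
        return "pricing_hold"            # protect high-value customers
    if household["price_sensitive"]:
        return "plan_downgrade_protection"
    return "product_upgrade"             # upwardly mobile households

h = {"lifetime_value": 6_200, "price_sensitive": False}
print(next_best_offer(h))  # prints pricing_hold
```

Swapping the rule table for a trained churn model changes nothing about the MCP side of the pipeline.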

Frequently asked questions

What is MCP and why does it matter for Canadian property data?
MCP (Model Context Protocol) is an open protocol for connecting AI agents to external data sources as tool calls. For Canadian property data, MCP lets Claude, ChatGPT, and other agents query BrightCat’s full residential and commercial pipeline directly during a conversation or workflow, rather than requiring the data to be loaded into the model’s context window in advance.
Can Claude query Canadian real estate data in real time?
Yes. Through BrightCat’s MCP connector, Claude can issue SQL queries against BrightCat’s weekly-refreshed Snowflake data and receive structured results inside a conversation. The data covers 5.8 million Canadian residential properties and 297,000 commercial properties, updated weekly.
Do I need a Snowflake account to use BrightCat’s MCP?
A Snowflake account is the preferred delivery channel because MCP access is provisioned through Snowflake’s role-based security. Organizations without Snowflake can request a dedicated BrightCat-hosted endpoint with equivalent access controls, though turnaround is longer.
How long does BrightCat MCP setup take?
Typical setup is one to three business days from the initial request to the first successful query. The slow steps are the IP allowlist handshake and the OAuth integration provisioning on BrightCat’s side. Once provisioned, the AI-client configuration takes under 30 minutes.
Is MCP access read-only?
Yes. BrightCat provisions MCP access under a read-only Snowflake role (CLAUDE_MCP_READONLY) with access restricted to the specific product databases the client has licensed. Write access is not available through MCP under any configuration.
What agents and clients work with BrightCat MCP?
Any client that speaks the Model Context Protocol specification. Claude Desktop and Claude in Chrome are the most common for interactive use. LangChain, LlamaIndex, and custom-built agent stacks work for programmatic deployments. ChatGPT’s MCP support is maturing — check the current Anthropic MCP documentation for client compatibility.
Can the MCP connector be used with Snowflake Cortex?
Yes. BrightCat data is exposed as Snowflake-native tables, which means Cortex functions (AI_COMPLETE, AI_CLASSIFY, semantic views, Cortex Analyst) can be used against the data without any additional setup. This is the recommended pattern for Snowflake-centric teams that want AI-augmented analytics without adding a separate MCP client.
How does data refresh work through MCP?
BrightCat refreshes the Snowflake share weekly, every Sunday. The MCP endpoint queries the underlying Snowflake tables live, so each MCP request returns whatever the share currently holds. There is no separate MCP-side cache. When BrightCat publishes the weekly refresh, the MCP endpoint reflects the new data on the next query.
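Given the Sunday refresh cadence, a freshness check reduces to date arithmetic: the latest `file_date` should be the most recent Sunday. A small sketch, assuming the weekly schedule described above:

```python
import datetime as dt

def latest_refresh(today: dt.date) -> dt.date:
    """Most recent Sunday on or before `today`, the file_date the
    MCP endpoint should reflect given the Sunday refresh cadence."""
    # Python weekday(): Monday=0 ... Sunday=6.
    return today - dt.timedelta(days=(today.weekday() + 1) % 7)

print(latest_refresh(dt.date(2026, 4, 15)))  # a Wednesday -> 2026-04-12
```

Comparing this value against `MAX(file_date)` from the validation query is a one-line staleness alarm.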
One guide. Five steps. The connector is not the work — the workflows you wire on top of it are. Start with the validation query, then move to the retention trigger or the territory analysis. Read-only, scoped, audited.
BrightCat how-to · April 2026 · Canadian property intelligence since 2014


Ready to integrate

Request access to the BrightCat MCP connector

Snowflake share and MCP endpoint provisioned in one to three business days. Read-only, scoped, audited. Includes a free 25,000-record sample against your own conversion data before any production contract.
