Table Explorer

Load tables in the Environments tab.

Multi-Table OData / FetchXML Query & Visualization

Write FetchXML that joins multiple tables. The system auto-detects entity sets and visualizes results.

Query Results

No results yet.

Visualization

Universal Query Studio (FetchXML / OData / Multi-Table)

Paste any Dataverse Web API OData path or FetchXML query. No table selection is required – the query controls everything. You can use joins, multi-table expansions and advanced filters.

Query Results

No results yet.

Visualization

Summary

No summary yet.

Visual Query Studio Pro (Multi-Environment • Multi-Table • Pivot • Visuals)

Build Excel-style pivots across multiple environments and multiple tables. Add sources, load fields, drag fields into Rows/Columns/Measures, choose aggregations, then run a pivot and visualise.

Tip: Table scope (Custom/System/All) is controlled in Environments → Table scope. Load tables there first, then come back here.

No sources yet. Click ➕ Add Source.

Available Fields (from all sources)

Add sources then click Load Fields.

Drag fields into the Pivot tab zones.

Rows

Drop row fields here

Columns

Drop column fields here

Measures (Values)

Drop numeric fields here

If you use a date field in Rows/Columns, this option controls how it is bucketed.

Calculated Measures

Create Excel-style measures like SUM(revenue), COUNT(*), % of total, or Running total. Drag measures into Measures (Values).

Pivot Result

No pivot yet.

Compare (same table across environments)

Add multiple sources pointing to the same table in different environments, set Mode → Compare, then run Pivot. The system adds a special field __env that you can drag into Rows/Columns to compare counts/sums across environments.

Use the Pivot tab. Tip: Put __env into Columns and a measure into Values.
Saved configs are stored locally in your browser (safe, small).

AI Assist (optional)

Ask for help building a pivot (e.g. "Show total budget by status and month across PROD+UAT"). If your build has AI keys configured, it will use them; otherwise it provides guidance.

No AI output yet.

Manage Environments (Unlimited)

Table scope:

🔗 Need to log in to Microsoft Dataverse? Open Microsoft Device Login

Single Record CRUD - ✅ **Fully Functional**

Perform Read, Create, Update, and Delete operations on a single Dataverse record.

Results

No operation performed yet.

Bulk Data Operations (Batch API) - ✅ **Fully Functional**

Use the Batch API for bulk creation (POST) by uploading JSON files or providing a manual payload.

No files loaded.

Bulk Results

Ready to execute batch request.

AI Voice & Prompted Data Ingestion

Use a natural language prompt or voice command to describe the data you want to create.

AI-Generated Payload (JSON)

AI-generated JSON will appear here.

Multi-Target Ingestion (Many Files → Many Tables)

Configure multiple ingestion groups. Each group can point a different uploaded file or pasted JSON to a different Dataverse table/environment. The app will process each group in batches and show per-group statistics and a summary at the end.

No ingestion groups defined yet. Click "Add Ingestion Group" to start.

Multi-Target Ingestion Results

No multi-target ingestion executed yet.

Multi-Target Ingestion with Lookups (Wizard-Assisted)

Configure multiple ingestion groups where each group ingests data into a specific table/environment and resolves multiple lookup columns (e.g. contact, owner, parent account) automatically before ingesting. Ideal when you have 50+ different files that each need to go into a different table with lookups.

Wizard Mode (Guided Steps for Non-Technical Users)
  1. Click "Add Lookup Ingestion Group" for each file/table you want to load.
  2. For each group, select Environment and Target Table.
  3. Upload your JSON/TXT file or paste JSON data for that specific group only.
  4. Under Lookup Columns, add one row for each lookup (e.g. Owner, Parent Account, Contact).
  5. For each lookup row, fill:
    • Source data column (where the value comes from in your file, e.g. emailaddress1)
    • Lookup target table (e.g. contact, account, systemuser)
    • Match field on lookup table (e.g. emailaddress1, name, fullname)
    • Binding field on target table (lookup field to bind, e.g. primarycontactid, ownerid)
  6. Click "🚀 Run All Lookup Ingestion Groups". The app will:
    • Download lookup rows from Dataverse for each lookup mapping.
    • Match your file values to existing Dataverse records.
    • Attach the correct @odata.bind properties for each lookup.
    • Ingest all records in batches of 100 with full per-group results.
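The match-and-bind step above can be sketched in plain JavaScript. This is only an illustration of the idea, not the app's internal code; the field and table names (contactEmail, primarycontactid, contacts) are example choices.

```javascript
// Index the downloaded lookup rows by the match field, e.g. contacts by email.
function buildLookupIndex(rows, matchField, idField) {
  const index = new Map();
  for (const row of rows) {
    const key = String(row[matchField] ?? '').toLowerCase();
    if (key) index.set(key, row[idField]);
  }
  return index;
}

// Attach an @odata.bind property when the source value matches a lookup row.
function attachLookup(record, sourceColumn, bindField, entitySet, index) {
  const key = String(record[sourceColumn] ?? '').toLowerCase();
  const id = index.get(key);
  const out = { ...record };
  delete out[sourceColumn]; // the raw value is replaced by the binding
  if (id) out[`${bindField}@odata.bind`] = `/${entitySet}(${id})`;
  return out;
}

// Example: bind primarycontactid on an account via emailaddress1.
const contacts = [
  { contactid: '11111111-1111-1111-1111-111111111111', emailaddress1: 'a@b.com' },
];
const idx = buildLookupIndex(contacts, 'emailaddress1', 'contactid');
const bound = attachLookup(
  { name: 'Contoso', contactEmail: 'a@b.com' },
  'contactEmail', 'primarycontactid', 'contacts', idx
);
// bound now carries "primarycontactid@odata.bind": "/contacts(11111111-1111-1111-1111-111111111111)"
```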
No lookup ingestion groups defined yet. Click "Add Lookup Ingestion Group" to start.

Lookup Ingestion Results

No lookup-based multi-target ingestion executed yet.

Bulk Delete Wizard (Multi-Environment / Multi-Table / Lookups)

โš ๏ธ WARNING: Bulk delete is permanent. Always test on non-production environments first.

Use this wizard to delete records across many environments and tables in a single run. You can paste JSON or text, or upload files for each delete group. Each group targets a specific Environment + Table and can use a key field or lookup fields.

No delete groups defined yet. Click "Add Delete Group" to start the wizard.

Bulk Delete Summary & Audit Trail

No bulk delete run yet.

Staging Table Management (Dataverse)

Create real staging tables in Dataverse directly from your uploaded JSON/text data. The app will create a custom table, add columns based on your data, publish it, and insert all records.

Staging Settings

Auto-split large tables
When enabled, very wide tables are split into parts so Dataverse column limits are not exceeded.
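The auto-split idea can be sketched as follows. The real Dataverse column limit and key handling are app specifics; the limit of 3 below is only for illustration, and each part keeps the key column so the parts can later be joined back together.

```javascript
// Split a wide table's columns into parts that each fit the column limit,
// repeating the key column in every part.
function splitWideTable(columns, keyColumn, maxColumns) {
  const others = columns.filter(c => c !== keyColumn);
  const perPart = maxColumns - 1; // reserve one slot for the key column
  const parts = [];
  for (let i = 0; i < others.length; i += perPart) {
    parts.push([keyColumn, ...others.slice(i, i + perPart)]);
  }
  return parts;
}

const parts = splitWideTable(['id', 'a', 'b', 'c', 'd', 'e'], 'id', 3);
// → [['id','a','b'], ['id','c','d'], ['id','e']]
```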
No files loaded.
No staging table created yet.

File Statistics (this run)

No file statistics yet.

Staging Batch Progress & Errors (this run)

No batch activity yet.

Staging History / Audit Trail

No staging history yet.

API Ingest → Staging Table

Connect to any JSON-based API, pull all records (with automatic paging where supported), preview the data, and then create a real Dataverse staging table from the result.

1๏ธโƒฃ Define or Reuse an API

2๏ธโƒฃ Run API & Load Data

The app will automatically handle common paging patterns (OData @odata.nextLink and simple arrays) so you can pull hundreds of thousands or millions of rows safely in chunks.
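The @odata.nextLink paging pattern mentioned above works roughly like this. This is a sketch: fetchJson stands in for an authenticated request helper, and Dataverse returns each page as { value: [...], "@odata.nextLink": "..." }.

```javascript
// Follow @odata.nextLink until it is absent, accumulating all rows.
async function fetchAllPages(url, fetchJson) {
  const rows = [];
  let next = url;
  while (next) {
    const page = await fetchJson(next);
    rows.push(...(page.value ?? []));
    next = page['@odata.nextLink']; // undefined on the last page
  }
  return rows;
}
```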

No API executed yet.
No data loaded yet.

3๏ธโƒฃ Create Dataverse Staging Table from API Data

Use the Target Environment and Staging Table Name at the top of this tab. The app will create a custom table, add columns from the API payload, and insert all records in safe batches respecting Dataverse API limits.

โฑ Recurring API Group Scheduler

Automatically run groups of saved APIs on a schedule and create fresh staging tables every run. Uses the same paging engine and auto-split staging tables.

Use {apiName} and {yyyyMMddHHmmss} to inject API name and timestamp.

Every

No saved APIs yet. Use the API Ingest form above and click Save API.

No schedule groups defined yet.

Scheduler behaviour:

  • While this page is open, the scheduler checks every 60 seconds.
  • When Next run time is reached for an active group, it pulls all pages for each API and creates new staging table(s) using your pattern.
  • All runs respect Dataverse batch limits and are logged in History.
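A minimal sketch of that check-every-minute logic. The property names (active, nextRun, intervalMs) are assumptions for illustration, not the app's actual state shape.

```javascript
// Return the groups whose next run time has been reached.
function findDueGroups(groups, now) {
  return groups.filter(g => g.active && now >= g.nextRun);
}

// Advance nextRun past "now", skipping any missed intervals
// (e.g. the laptop was asleep while the page was open).
function advanceNextRun(group, now) {
  let next = group.nextRun;
  while (next <= now) next += group.intervalMs;
  return { ...group, nextRun: next };
}

// In the page itself this would run via setInterval(checkSchedules, 60_000).
```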

API Ingest Groups → Staging

Run many API → Staging operations together. Each group uses a saved API, a target environment, and its own staging table name. Perfect for 50–200+ APIs in one go.

No API groups defined yet. Click ➕ Add API Group to begin.

How it works:

  • First, go to API Ingest → Staging Table and save each API.
  • Then, come here and create one group for each saved API.
  • Each group will auto-page API results and auto-split staging tables when there are too many columns.
  • All activity is stored in History so you can review and export later.

Staging & Cross-Environment Comparison Reports

Compare data from unlimited tables across unlimited environments using FetchXML or OData queries. Results show duplicates and unique differences by a chosen key field.

No sources added yet. Click "Add Table / Query Source" to start.

No comparison executed yet.

Bulk Group Comparison (Many Table/Environment Pairs)

Configure multiple comparison groups. Each group compares one table in one environment against another table in another environment using a chosen key field. Ideal when you need to compare tens of tables in one go (e.g. 50+ groups).

No bulk comparison groups defined yet. Click "Add Comparison Group" to start.

Bulk Group Comparison Results

No bulk group comparison executed yet.

Data Deduplication & Smart Ingestion

Upload files, check for duplicates in Dataverse, and generate ingestion code for new records.

Enter the field name that should be unique across records

No files loaded.

Generated Ingestion Code

Code will appear here after generation.

Data Analysis & Visualization - โœ… **Fully Functional**

Fetch and visualize data using advanced OData queries.

Analysis Summary

No data loaded.

Visualization

No chart data yet.

FetchXML & Advanced Web API Queries - โœ… **Fully Functional**

Execute complex, SQL-like queries using Dataverse **FetchXML**.

Query Results

Ready to execute query.

SQL Query (Experimental) - โœ… **Functional with Joins**

Raw SQL/TSQL Query Editor

Run standard SQL/TSQL against Dataverse tables.

Converted Dataverse API Query (OData Path)

Conversion Pending...

Query Results

Ready to execute TSQL.

AI Support & Analysis - โœ… **Fully Functional**

Get AI-powered insights about your Dataverse environments using various AI providers.

Drag tables from the sidebar here to provide context.

AI Results

AI response will appear here. Configure your AI provider API keys in Settings first.

🧪 API Advanced – Custom API Scripts & Ingest

Write your own JavaScript fetch scripts to call any API. The script must return JSON data. Preview results in an Excel-style grid, download them, or ingest into Dataverse as a new staging table.

Example Scripts (10+)

Click to load an example into the editor. You can customise and run it.

Examples use the built-in Secrets Vault: getSecret('name').

Power Tools

🧠 AI Helpers
Tip: Save tokens in the vault and reference them in scripts with getSecret('name').
๐Ÿ” Enterprise Secret Vault (Key Vaultโ€‘style)
Saved secrets
Locked.
โฑ Scheduler
No schedules.
๐Ÿ” Retry & Backโ€‘off
Your script can call fetchWithRetry(url, options) instead of fetch().
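The fetchWithRetry helper is provided by the tool; its actual signature and defaults may differ from this sketch of the general retry-and-back-off idea. The fetchImpl option here is an illustrative injection point, not part of the real helper.

```javascript
// Retry failed requests with exponential back-off, honouring a numeric
// Retry-After header on 429 responses when present.
async function fetchWithRetry(url, options = {},
    { maxRetries = 3, baseDelayMs = 1000, fetchImpl = fetch } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await fetchImpl(url, options);
    const retryable = res.status === 429 || res.status >= 500;
    if (!retryable || attempt >= maxRetries) return res;
    const retryAfter = Number(res.headers.get('Retry-After'));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : baseDelayMs * 2 ** attempt;
    await new Promise(r => setTimeout(r, delayMs));
  }
}
```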
📡 Webhook Listener (Browser mode)
A pure HTML file cannot host an inbound HTTP webhook, but you can listen via WebSocket/SSE or poll an endpoint. Incoming events can be previewed and ingested.
Not running.
๐Ÿ” Schema Autoโ€‘Compare (API vs Dataverse)
Compare keys in the last API result to columns in the selected Dataverse table. See missing fields and differences, then generate a recommended transform.
No comparison yet.
📦 Templates
No templates saved.
📊 Pivot & Charts
Use the built-in pivot builder on the API result grid: drag fields into Rows/Columns/Metrics and visualise.

You only need this for Dataverse ingestion. For pure API calls, leave it blank.

If empty, a default name will be generated.

Results Preview (Excel-style grid)

No results yet.

Run Log

No activity yet.

📚 Dataverse Metadata Explorer (Tables • API Links • Inline Relationships)

๐Ÿ Python & R Studio โ€“ Run Scripts, Transform Data, Ingest to Dataverse

Run Python or R against uploaded data or live Dataverse tables, then download/email the result or ingest it as a Dataverse staging table. This uses a secure execution backend (Azure Functions or Docker) that you can switch below.

Execution Backend

Mode

The same backend image works for both; toggle the mode to switch the endpoint used by this tab.

Endpoints

Your backend must expose /execute and /health.

Backend Auth (optional)

Script


Upload Input Data (optional)

Drop .json / .txt / .csv / .xlsx here, or click to browse.

Result

No results yet. Run a script to preview.

📚 Dataverse Pro Manager Guide & Instructions

Getting Started

Step 1: Environments

Use the Environments tab to add one or many Dataverse environments. Click "Add New Environment", enter a URL and name, then connect with device-code sign-in.

Step 2: Load Tables

Still in Environments, click "Load All Tables" to fetch all tables across your selected environments. The Table Explorer on the left will then show all tables.

Step 3: Pick the Right Tab

Each main tab focuses on a specific job: single-record work, bulk ingestion, staging & comparison, reports, bulk delete, queries, AI, etc.

Main Tabs Overview

๐ŸŒ Environments

Manage all Dataverse environments. Add, connect, remove, and select which environments are active. You can also reload tables and see connection status.

๐Ÿ” CRUD Operations

Work with single records. Use the environment and table dropdowns, then type a record ID, OData query, or JSON payload to create, read, update, or delete.

📦 Bulk Data

All bulk ingestion tools for creating/updating data:

  • Manual/File Ingestion – upload JSON or paste JSON and send bulk create operations via Dataverse Batch API.
  • AI Voice/Prompt Ingestion – describe what you want in natural language, then generate or adjust payloads.
  • Multi-Target Ingestion – run many ingestion jobs in one go (multiple environments/tables).
  • Multi-Target Ingestion (Lookups Wizard) – map lookup fields so related records are linked correctly.
๐Ÿ—‘๏ธ Bulk Delete (NEW)

Use the Bulk Delete Wizard to delete records (or whole tables) across many environments and tables in a single run.

  • Create multiple Delete Groups, each targeting an Environment + Table.
  • Choose a Key Field (e.g. contactid, emailaddress1) or let the wizard build filters from your JSON.
  • Upload JSON/TXT/CSV or paste JSON/IDs for each group.
  • (Optional) Delete all records in a table (use only for non-production or where you are sure).
๐Ÿ“ Staging Tables

Create and manage staging tables in Dataverse. You can create new entities based on uploaded/pasted data structure, load data, and view per-upload statistics (rows, columns, data types, timestamps).

📊 Reports & Comparison

Compare staging vs production or any other table/environment combinations. Includes:

  • Standard comparison – pick two sources and a key field.
  • Bulk group comparison – run many pairs at once.
  • Detailed statistics and differences ready for ingestion or cleanup.
🔄 Data Deduplication

Upload/paste data and check against Dataverse tables for duplicates. Results are split into Duplicates and New data with statistics and tables. Filtered dedupe allows OData-style pre-filters.

📊 Data Analysis

Simple analysis and charting. Choose a table, environment and field, then build OData queries and see charts (bar, etc.) based on value counts.

📋 FetchXML / Advanced Query

Run raw FetchXML or OData queries. Paste full FetchXML or OData paths and execute directly.

🧩 Multi-Table OData / FetchXML Query & Visualization

Write advanced multi-table queries and visualize results. The tool auto-detects involved entity sets and shows tables and charts.

🧠 Universal Query Studio

Paste any Web API path or FetchXML. No table selection is required; you can query multiple tables and joins in one statement.

📈 Visual Query Studio

Drag-and-drop style query builder. Point at tables and fields to build up queries visually and then execute them.

🔧 SQL Query (Experimental)

Build SQL-like queries visually, or type SQL directly (SELECT/INSERT) and let the app convert and run them as Dataverse operations.
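As a rough illustration of the SQL-to-OData conversion idea, here is a toy converter that handles only one narrow SELECT pattern. It is not the app's actual engine; the regex and operator mapping are assumptions for demonstration.

```javascript
// Convert "SELECT cols FROM table [WHERE simple-comparison]" into an
// OData path like "/accounts?$select=...&$filter=...".
function sqlToOData(sql) {
  const m = /^SELECT\s+(.+?)\s+FROM\s+(\w+)(?:\s+WHERE\s+(.+))?$/i.exec(sql.trim());
  if (!m) throw new Error('Unsupported SQL shape');
  const [, cols, table, where] = m;
  let path = `/${table}?$select=${cols.replace(/\s/g, '')}`;
  if (where) {
    // translate a single simple comparison, e.g. "revenue > 100000"
    const filter = where
      .replace(/\s*>\s*/, ' gt ')
      .replace(/\s*<\s*/, ' lt ')
      .replace(/\s*=\s*/, ' eq ');
    path += `&$filter=${filter}`;
  }
  return path;
}

const odataPath = sqlToOData('SELECT name, revenue FROM accounts WHERE revenue > 100000');
// → "/accounts?$select=name,revenue&$filter=revenue gt 100000"
```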

🤖 AI Support

Ask an AI model to generate queries, explain results, and help with complex Dataverse operations. Tables from the sidebar can be added as context.

📚 Instructions

This Instructions tab explains complex ingestion scenarios, sample codes, and API reference notes.

📜 History

Full audit trail of API calls sent through the app, including favorites for quick reuse.

โš™๏ธ Settings

Manage local storage and configure AI provider API keys. These settings control how the app talks to outside services.

Complex Data Ingestion Patterns

Batch API with Relationships

Create records with related entities in a single batch operation:

{
  "requests": [
    {
      "id": "1",
      "method": "POST",
      "url": "/api/data/v9.2/accounts",
      "headers": {"Content-Type": "application/json"},
      "body": {
        "name": "Contoso Ltd",
        "revenue": 5000000
      }
    },
    {
      "id": "2", 
      "method": "POST",
      "url": "/api/data/v9.2/contacts",
      "headers": {"Content-Type": "application/json"},
      "body": {
        "firstname": "John",
        "lastname": "Doe",
        "parentcustomerid_account@odata.bind": "$1"
      }
    }
  ]
}
Upsert Operations

Update existing records or create new ones if they don't exist:

// PATCH to a record URL performs an upsert by default
PATCH /api/data/v9.2/accounts(accountid)

// The record is updated if it exists and created if it does not.
// Control this with a concurrency header:
//   If-Match: *       (update only; fails with 404 if the record is missing)
//   If-None-Match: *  (create only; fails with 412 if the record exists)
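For reference, the Dataverse Web API supports server-side upsert directly: a PATCH to a record URL creates the record when it does not exist, and the If-Match / If-None-Match headers restrict that behaviour. A sketch that builds such a request (the org URL, entity set, mode names and GUID below are placeholders):

```javascript
// Build an upsert request descriptor; pass the result to
// fetch(req.url, req) after adding an Authorization header.
function buildUpsertRequest(orgUrl, entitySet, id, body, mode) {
  const headers = { 'Content-Type': 'application/json' };
  if (mode === 'update-only') headers['If-Match'] = '*';      // never create
  if (mode === 'create-only') headers['If-None-Match'] = '*'; // never update
  return {
    url: `${orgUrl}/api/data/v9.2/${entitySet}(${id})`,
    method: 'PATCH', // plain PATCH with neither header upserts
    headers,
    body: JSON.stringify(body),
  };
}

const req = buildUpsertRequest(
  'https://org.crm.dynamics.com', 'accounts',
  '12345678-1234-1234-1234-123456789012', { name: 'Contoso' }, 'create-only'
);
```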
Complex Filtering for Distinct Operations

Use OData queries to find distinct records before ingestion:

// Get unique accounts by name
GET /api/data/v9.2/accounts?$select=name&$filter=statecode eq 0&$count=true

// Use $apply for grouping (Dataverse Web API)
GET /api/data/v9.2/accounts?$apply=groupby((name))

Start Guide – Running This Tool Safely

Step 1: Disable Browser CORS (for local HTML use)

This tool runs completely in your browser from a local .html file and needs to call Dataverse and Microsoft login endpoints directly. Most browsers block this by default (CORS). For this reason you must either run a dedicated development browser with web security disabled, or use a CORS helper extension.

โš ๏ธ Security Warning: Only use a dedicated development browser profile. Do not use this profile for normal web browsing, online banking, or production work. Close the browser when you are finished.

Option A โ€“ Use a CORS helper extension:
Install the following extension and enable it only while using this tool:
https://mybrowseraddon.com/access-control-allow-origin.html

Option B โ€“ Run a dedicated development browser instance (advanced users):

Browser / OS – Command
  • Windows (Chrome): chrome.exe --user-data-dir="C:\ChromeDev" --disable-web-security
  • macOS (Chrome): open -n -a "Google Chrome" --args --user-data-dir="/tmp/ChromeDev" --disable-web-security
  • Windows (Edge): msedge.exe --user-data-dir="C:\EdgeDev" --disable-web-security
Step 2: Open the HTML file

Save the Microsoft Dynamics 365 – Dataverse Multi-Environment Pro Manager HTML file on your machine, then open it using the dedicated development browser you configured in Step 1. You should now be able to authenticate against Dataverse and call the Web API from this tool.

Step 3: Connect an environment

Go to the Environments tab, add an environment URL (for example https://<org>.crm.dynamics.com), pick a friendly name, then authenticate using the device login flow.

Step 4: Explore tables safely

Once connected, use Load All Tables to view available Dataverse tables. For your first tests, work in a sandbox or non-production environment, and start with read-only operations (GET/queries) before you try bulk updates or deletes.

Complex Ingestion 2 – Real-World JSON Ingestion Workflow

This guide walks a non-technical user through a complete end-to-end flow: receiving a large JSON file from another system, checking for duplicates against an existing Dataverse table, and ingesting only the unique records using the features already built into this application.

Example problem statement

"Our company receives a weekly JSON file containing thousands of new customer onboarding records from an external system. Some customers may already exist in our Dataverse Contacts table. We want to: (1) detect which rows are duplicates, (2) review the unique records, and (3) ingest only those unique records into Dataverse using this Microsoft Dynamics 365 – Dataverse Multi-Environment Pro Manager tool."

Step 1: Land the JSON file into a staging table

Go to the Staging Tables tab and upload the JSON file using the upload area. The tool will:

  • Read the JSON file structure.
  • Create (or reuse) a Dataverse staging table that matches the file columns.
  • Insert all rows from the JSON file into the staging table.
  • Show basic statistics: number of rows, columns and upload time.
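The "create a staging table that matches the file columns" step above can be sketched as column inference over the JSON rows. The type names here are illustrative; the app maps them to actual Dataverse column types.

```javascript
// Widen two inferred types to a common one; conflicting types fall back to text.
function widenType(a, b) {
  if (a === b) return a;
  const pair = new Set([a, b]);
  if (pair.has('integer') && pair.has('decimal')) return 'decimal';
  return 'text';
}

// Scan all rows and infer one column per key, with a simple type per value.
function inferColumns(rows) {
  const cols = new Map();
  for (const row of rows) {
    for (const [key, value] of Object.entries(row)) {
      const t = typeof value === 'number'
        ? (Number.isInteger(value) ? 'integer' : 'decimal')
        : typeof value === 'boolean' ? 'boolean' : 'text';
      cols.set(key, cols.has(key) ? widenType(cols.get(key), t) : t);
    }
  }
  return Object.fromEntries(cols);
}

const cols = inferColumns([
  { name: 'A', revenue: 100, active: true },
  { name: 'B', revenue: 99.5, active: false },
]);
// → { name: 'text', revenue: 'decimal', active: 'boolean' }
```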
Step 2: Configure duplicate detection

Move to the Data Deduplication & Smart Ingestion area (Dedupe tab). Choose:

  • The target Dataverse table (for example contacts).
  • The staging table you created in Step 1.
  • The key fields to match on (for example: email address, phone number or national ID).

Then run the deduplication check. The tool will compare each row in the staging table against the target Dataverse table and mark rows as either duplicate or new/unique.
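The duplicate check described above boils down to indexing the existing Dataverse rows by the chosen key field and splitting the incoming rows into duplicates and new records. A sketch (single key field, case-insensitive matching assumed):

```javascript
// Split incoming rows into { duplicates, unique } against existing rows,
// comparing a normalised key field.
function splitByKey(incoming, existing, keyField) {
  const norm = v => String(v ?? '').trim().toLowerCase();
  const seen = new Set(existing.map(r => norm(r[keyField])));
  const duplicates = [], unique = [];
  for (const row of incoming) {
    (seen.has(norm(row[keyField])) ? duplicates : unique).push(row);
  }
  return { duplicates, unique };
}

const { duplicates, unique } = splitByKey(
  [{ emailaddress1: 'A@b.com' }, { emailaddress1: 'new@b.com' }],
  [{ emailaddress1: 'a@b.com' }],
  'emailaddress1'
);
// one duplicate (case-insensitive match) and one unique row
```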

Step 3: Review duplicates vs unique rows

Still in the deduplication area, review the results:

  • Duplicates โ€“ rows that match existing Dataverse records on the selected key fields.
  • Unique โ€“ rows that do not match and are safe candidates for ingestion.

You can export the duplicate set if you want another team to review it, or you can safely ignore them and focus on the unique set.

Step 4: Ingest only the unique records

Open the Bulk Data tab and choose the sub-tab for bulk ingestion. Configure:

  • The same target Dataverse table used in Step 2.
  • The unique-records view or export from the deduplication step as the source.
  • Any required lookup mappings (for example linking to accounts or existing contacts).

Run the bulk ingestion. The app will send the data to Dataverse in batches, respect platform limits, and display progress and error statistics so you can see how many records were created successfully.

Step 5: Confirm results and audit history

Finally, go to the History tab to review all calls that were executed as part of the ingestion. Here you can:

  • See timestamps, endpoints and status codes for each bulk batch.
  • Mark important runs as favourites.
  • Export selected history entries to CSV or PDF for audit or handโ€‘over.
Summary for non-technical staff

You do not need to write any code. The process is: stage the JSON file → run deduplication → ingest only unique rows → review history. If you follow the steps above in order, the tool will guide you, and the Explain button can read out what each tab does.

Sample Code Repository (20+ Examples)

๐Ÿ” CRUD Operations (5 Examples)

1. Read Single Record
Retrieve a specific record by ID with selected fields
GET /api/data/v9.2/accounts(12345678-1234-1234-1234-123456789012)?$select=name,revenue,accountid
2. Create New Record
Create a new account record with basic information
POST /api/data/v9.2/accounts { "name": "New Customer Inc", "revenue": 1000000, "numberofemployees": 50, "address1_city": "Seattle" }
3. Update Record
Update an existing account record
PATCH /api/data/v9.2/accounts(12345678-1234-1234-1234-123456789012) { "name": "Updated Customer Inc", "revenue": 1500000 }
4. Delete Record
Delete a record by ID
DELETE /api/data/v9.2/accounts(12345678-1234-1234-1234-123456789012)
5. Read Multiple Records with Filter
Retrieve multiple accounts with specific criteria
GET /api/data/v9.2/accounts?$select=name,accountid,revenue&$filter=revenue gt 100000 and statecode eq 0&$orderby=revenue desc&$top=10

📊 Bulk Operations (5 Examples)

6. Batch Create Multiple Records
Create multiple contacts in a single batch operation
POST /api/data/v9.2/$batch { "requests": [ { "id": "1", "method": "POST", "url": "/api/data/v9.2/contacts", "headers": {"Content-Type": "application/json"}, "body": { "firstname": "John", "lastname": "Doe", "emailaddress1": "john.doe@example.com" } }, { "id": "2", "method": "POST", "url": "/api/data/v9.2/contacts", "headers": {"Content-Type": "application/json"}, "body": { "firstname": "Jane", "lastname": "Smith", "emailaddress1": "jane.smith@example.com" } } ] }
7. Batch Update Records
Update multiple accounts in a batch
POST /api/data/v9.2/$batch { "requests": [ { "id": "1", "method": "PATCH", "url": "/api/data/v9.2/accounts(account-id-1)", "headers": {"Content-Type": "application/json"}, "body": { "name": "Updated Account 1", "revenue": 500000 } }, { "id": "2", "method": "PATCH", "url": "/api/data/v9.2/accounts(account-id-2)", "headers": {"Content-Type": "application/json"}, "body": { "name": "Updated Account 2", "revenue": 750000 } } ] }
8. Mixed Batch Operations
Create, update, and delete in a single batch
POST /api/data/v9.2/$batch { "requests": [ { "id": "1", "method": "POST", "url": "/api/data/v9.2/contacts", "headers": {"Content-Type": "application/json"}, "body": { "firstname": "New", "lastname": "Contact" } }, { "id": "2", "method": "PATCH", "url": "/api/data/v9.2/accounts(existing-id)", "headers": {"Content-Type": "application/json"}, "body": { "name": "Updated Name" } }, { "id": "3", "method": "DELETE", "url": "/api/data/v9.2/contacts(old-id)" } ] }
9. Batch with ChangeSet
Use a changeset for transactional batch operations
POST /api/data/v9.2/$batch
Content-Type: multipart/mixed; boundary=batch_1

--batch_1
Content-Type: multipart/mixed; boundary=changeset_1

--changeset_1
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 1

POST /api/data/v9.2/accounts HTTP/1.1
Content-Type: application/json

{"name": "Account 1"}
--changeset_1
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 2

POST /api/data/v9.2/contacts HTTP/1.1
Content-Type: application/json

{"firstname": "John", "lastname": "Doe"}
--changeset_1--
--batch_1--
10. Large Batch Processing
Process large datasets in batches of 100
// Process records in batches of 100
for (let i = 0; i < records.length; i += 100) {
  const batch = records.slice(i, i + 100);
  const batchPayload = {
    requests: batch.map((record, index) => ({
      id: (i + index + 1).toString(),
      method: "POST",
      url: "/api/data/v9.2/contacts",
      headers: { "Content-Type": "application/json" },
      body: record
    }))
  };
  // Execute the batch here, then continue with the next slice
}

🔄 FetchXML & Advanced Queries (5 Examples)

11. Complex FetchXML with Joins
Retrieve accounts with their primary contacts
12. FetchXML with Aggregation
Get count of opportunities by status
13. FetchXML with Date Filtering
Get accounts created in the last 30 days
14. FetchXML with Multiple Conditions
Complex filtering with AND/OR conditions
15. FetchXML with Linked Entity Aggregation
Get account with count of related contacts

🔧 SQL & OData Queries (5 Examples)

16. Basic OData Query
Simple OData query with filtering and ordering
GET /api/data/v9.2/accounts?$select=name,accountid,revenue&$filter=revenue gt 100000 and statecode eq 0&$orderby=revenue desc&$top=10
17. OData with Expand
Include related entity data in query
GET /api/data/v9.2/accounts?$select=name,accountid&$expand=primarycontactid($select=fullname,emailaddress1)&$filter=statecode eq 0
18. OData with Complex Filtering
Advanced filtering with multiple conditions
GET /api/data/v9.2/contacts?$select=fullname,emailaddress1,address1_city&$filter=((address1_city eq 'Seattle' or address1_city eq 'New York') and createdon ge 2023-01-01T00:00:00Z and statecode eq 0)&$orderby=createdon desc
19. OData with Count
Get total count of records with data
GET /api/data/v9.2/accounts?$select=name,accountid&$filter=statecode eq 0&$count=true&$top=5
20. OData with Aggregation
Use OData aggregation functions
GET /api/data/v9.2/opportunities?$apply=filter(statecode eq 0)/groupby((statuscode),aggregate(actualvalue with sum as totalvalue))&$orderby=totalvalue desc

Dataverse Web API Reference

Base URL Format

All API calls use this base format:

https://[org-name].crm.dynamics.com/api/data/v9.2/
Common HTTP Methods
  • GET - Retrieve records
  • POST - Create records
  • PATCH - Update records
  • DELETE - Delete records
Key OData Query Parameters
$select

Specify fields to return

?$select=name,accountid
$filter

Filter results

?$filter=revenue gt 100000
$orderby

Sort results

?$orderby=createdon desc
$top

Limit results

?$top=10
Batch Operations

Execute multiple operations in a single request:

POST /api/data/v9.2/$batch
Content-Type: multipart/mixed; boundary=batch

--batch
Content-Type: application/http
Content-Transfer-Encoding: binary

GET /api/data/v9.2/accounts?$top=1 HTTP/1.1

--batch--
Error Responses

Common HTTP status codes:

  • 200 - Success
  • 201 - Created
  • 204 - No Content (Delete success)
  • 400 - Bad Request
  • 401 - Unauthorized
  • 403 - Forbidden
  • 404 - Not Found
  • 429 - Too Many Requests

API History & Audit Log

No API calls logged yet.

No favorite API calls yet.

Application Settings

Local Storage Management

This will clear all cached environments, history, AI settings and theme preferences stored in this browser. Use this if you want to completely reset the tool for a new organisation or user.

Configure AI Service Access

Google Gemini API

OpenAI API (GPT)

Microsoft Copilot API

Anthropic Claude API

DeepSeek API

Other / Custom Provider

🎨 App Appearance & Theme

Theme Presets

Quickly switch the overall look and feel of the tool.

Custom Colors

Fine-tune the colours to match your organisation or personal preference.

App Font

Choose a font that matches your organisation or personal taste.

Preview – this is how your text will appear with the selected font and colours:

The quick brown fox jumps over the lazy dog.

Note: Theme and font preferences are stored only in this browser and do not affect Dataverse itself.