Multi-Table OData / FetchXML Query & Visualization
Write FetchXML that joins multiple tables. The system auto-detects entity sets and visualizes results.
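As an illustrative starting point, a two-table join might look like the following FetchXML (table and column names assume the standard account and contact tables; adjust to your own schema):

```xml
<fetch top="50">
  <entity name="account">
    <attribute name="name" />
    <attribute name="revenue" />
    <link-entity name="contact" from="parentcustomerid" to="accountid" alias="c">
      <attribute name="fullname" />
      <filter>
        <condition attribute="statecode" operator="eq" value="0" />
      </filter>
    </link-entity>
  </entity>
</fetch>
```

The link-entity element is what triggers the multi-table detection: each joined entity produces its own set of aliased columns in the results.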
Query Results
Visualization
Universal Query Studio (FetchXML / OData / Multi-Table)
Paste any Dataverse Web API OData path or FetchXML query. No table selection is required; the query controls everything. You can use joins, multi-table expansions and advanced filters.
Query Results
Visualization
Summary
Visual Query Studio Pro (Multi-Environment • Multi-Table • Pivot • Visuals)
Build Excel-style pivots across multiple environments and multiple tables. Add sources, load fields, drag fields into Rows/Columns/Measures, choose aggregations, then run a pivot and visualise.
Tip: Table scope (Custom/System/All) is controlled in Environments → Table scope. Load tables there first, then come back here.
Available Fields (from all sources)
Drag fields into the Pivot tab zones.
Rows
Columns
Measures (Values)
If you use a date field in Rows/Columns, this setting controls how dates are bucketed.
Calculated Measures
SUM(revenue), COUNT(*), % of total, Running total.
Drag measures into Measures (Values).
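As a sketch of how the "% of total" and "Running total" calculated measures can be derived once values are aggregated per pivot row (the function name and row shape here are illustrative assumptions, not the app's internals):

```javascript
// Illustrative sketch (not the app's internals): compute "% of total" and
// "Running total" over pivot rows whose measure values are already aggregated.
function withCalculatedMeasures(rows) {
  const total = rows.reduce((sum, r) => sum + r.value, 0);
  let running = 0;
  return rows.map(r => {
    running += r.value;
    return {
      ...r,
      pctOfTotal: total === 0 ? 0 : (r.value / total) * 100,
      runningTotal: running,
    };
  });
}

// Example: revenue summed per status, in row order.
const demo = withCalculatedMeasures([
  { key: "Open", value: 50 },
  { key: "Won", value: 30 },
  { key: "Lost", value: 20 },
]);
// demo[0].pctOfTotal is 50; demo[2].runningTotal is 100
```

Running totals depend on row order, so sort the pivot rows before applying the measure.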
Pivot Result
Compare (same table across environments)
Add multiple sources pointing to the same table in different environments, set Mode → Compare, then run Pivot.
The system adds a special field __env you can drag into Rows/Columns to compare counts/sums across environments.
For example, drag __env into Columns and a measure into Values.
AI Assist (optional)
Ask for help building a pivot (e.g. "Show total budget by status and month across PROD+UAT"). If your build has AI keys configured, it will use them; otherwise it provides guidance.
Manage Environments (Unlimited)
Need to log in to Microsoft Dataverse? Open Microsoft Device Login
Single Record CRUD - ✅ **Fully Functional**
Perform Read, Create, Update, and Delete operations on a single Dataverse record.
Results
Bulk Data Operations (Batch API) - ✅ **Fully Functional**
Use the Batch API for bulk creation (POST) by uploading JSON files or providing a manual payload.
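For example, a minimal manual payload for bulk creation could be a JSON array of records, one object per row to create (the field names here assume the standard account table):

```json
[
  { "name": "Contoso Ltd", "telephone1": "555-0100" },
  { "name": "Fabrikam Inc", "telephone1": "555-0101" }
]
```

Each element becomes one POST operation inside the batch request.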
Bulk Results
AI Voice & Prompted Data Ingestion
Use a natural language prompt or voice command to describe the data you want to create.
AI-Generated Payload (JSON)
Multi-Target Ingestion (Many Files → Many Tables)
Configure multiple ingestion groups. Each group can point a different uploaded file or pasted JSON to a different Dataverse table/environment. The app will process each group in batches and show per-group statistics and a summary at the end.
Multi-Target Ingestion Results
Multi-Target Ingestion with Lookups (Wizard-Assisted)
Configure multiple ingestion groups where each group ingests data into a specific table/environment and resolves multiple lookup columns (e.g. contact, owner, parent account) automatically before ingesting. Ideal when you have 50+ different files that each need to go into a different table with lookups.
- Click "Add Lookup Ingestion Group" for each file/table you want to load.
- For each group, select Environment and Target Table.
- Upload your JSON/TXT file or paste JSON data for that specific group only.
- Under Lookup Columns, add one row for each lookup (e.g. Owner, Parent Account, Contact).
- For each lookup row, fill:
- Source data column (where the value comes from in your file, e.g. emailaddress1)
- Lookup target table (e.g. contact, account, systemuser)
- Match field on lookup table (e.g. emailaddress1, name, fullname)
- Binding field on target table (lookup field to bind, e.g. primarycontactid, ownerid)
- Click "Run All Lookup Ingestion Groups". The app will:
- Download lookup rows from Dataverse for each lookup mapping.
- Match your file values to existing Dataverse records.
- Attach the correct @odata.bind properties for each lookup.
- Ingest all records in batches of 100 with full per-group results.
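The lookup-resolution step above can be sketched roughly as follows. The function and field names are illustrative assumptions, not the app's actual internals:

```javascript
// Illustrative sketch of the lookup-resolution step; function and field
// names are assumptions, not the app's actual internals.
// lookupRows: records previously downloaded from Dataverse for one mapping.
function resolveLookup(record, mapping, lookupRows) {
  const match = lookupRows.find(
    r => r[mapping.matchField] === record[mapping.sourceColumn]
  );
  if (!match) return record; // leave unresolved rows for review
  const out = { ...record };
  delete out[mapping.sourceColumn];
  // e.g. "primarycontactid@odata.bind": "/contacts(<guid>)"
  out[`${mapping.bindingField}@odata.bind`] =
    `/${mapping.targetEntitySet}(${match[mapping.targetIdField]})`;
  return out;
}

const mapping = {
  sourceColumn: "emailaddress1",    // where the value comes from in your file
  matchField: "emailaddress1",      // match field on the lookup table
  bindingField: "primarycontactid", // lookup field to bind on the target table
  targetEntitySet: "contacts",
  targetIdField: "contactid",
};
const bound = resolveLookup(
  { name: "Contoso", emailaddress1: "jo@contoso.com" },
  mapping,
  [{ contactid: "guid-123", emailaddress1: "jo@contoso.com" }]
);
// bound["primarycontactid@odata.bind"] is "/contacts(guid-123)"
```

The key idea is that the source column is replaced by an @odata.bind property pointing at the matched record's entity-set URL before the record is ingested.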
Lookup Ingestion Results
Bulk Delete Wizard (Multi-Environment / Multi-Table / Lookups)
Use this wizard to delete records across many environments and tables in a single run. You can paste JSON or text, or upload files for each delete group. Each group targets a specific Environment + Table and can use a key field or lookup fields.
Bulk Delete Summary & Audit Trail
Staging Table Management (Dataverse)
Create real staging tables in Dataverse directly from your uploaded JSON/text data. The app will create a custom table, add columns based on your data, publish it, and insert all records.
Staging Settings
File Statistics (this run)
Staging Batch Progress & Errors (this run)
Staging History / Audit Trail
API Ingest → Staging Table
Connect to any JSON-based API, pull all records (with automatic paging where supported), preview the data, and then create a real Dataverse staging table from the result.
1️⃣ Define or Reuse an API
2️⃣ Run API & Load Data
The app will automatically handle common paging patterns (OData @odata.nextLink and simple arrays) so you can pull hundreds of thousands or millions of rows safely in chunks.
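The @odata.nextLink paging pattern can be sketched like this, with the fetch function injected so the loop can be exercised offline (names here are illustrative, not the app's internals):

```javascript
// Sketch of @odata.nextLink paging with the fetch function injected so it can
// be exercised offline; names are illustrative, not the app's internals.
async function fetchAllPages(url, doFetch) {
  const rows = [];
  let next = url;
  while (next) {
    const res = await doFetch(next);
    const page = await res.json();
    if (Array.isArray(page)) {
      rows.push(...page);        // simple-array APIs: one page, no link
      break;
    }
    rows.push(...(page.value ?? []));
    next = page["@odata.nextLink"] ?? null; // follow OData paging links
  }
  return rows;
}
```

In the app itself the injected function would be a real fetch with auth headers and retry; for testing it can be any function returning a Response-like object.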
3️⃣ Create Dataverse Staging Table from API Data
Use the Target Environment and Staging Table Name at the top of this tab. The app will create a custom table, add columns from the API payload, and insert all records in safe batches respecting Dataverse API limits.
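The "safe batches" step amounts to chunking the record set before sending. A minimal sketch (the batch size of 100 matches the figure used elsewhere in this guide; Dataverse itself allows up to 1,000 operations per $batch):

```javascript
// Minimal sketch of splitting records into safe batch sizes. Dataverse
// allows up to 1,000 operations per $batch; this guide's examples use 100.
function toBatches(records, batchSize = 100) {
  const batches = [];
  for (let i = 0; i < records.length; i += batchSize) {
    batches.push(records.slice(i, i + batchSize));
  }
  return batches;
}
// toBatches(new Array(250).fill({})) yields 3 batches of 100, 100 and 50
```

Each resulting batch would then be posted as one $batch request, keeping every request comfortably inside platform limits.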
⏱ Recurring API Group Scheduler
Automatically run groups of saved APIs on a schedule and create fresh staging tables every run. Uses the same paging engine and auto-split staging tables.
Use {apiName} and {yyyyMMddHHmmss} to inject the API name and timestamp.
No saved APIs yet. Use the API Ingest form above and click Save API.
No schedule groups defined yet.
Scheduler behaviour:
- While this page is open, the scheduler checks every 60 seconds.
- When Next run time is reached for an active group, it pulls all pages for each API and creates new staging table(s) using your pattern.
- All runs respect Dataverse batch limits and are logged in History.
API Ingest Groups → Staging
Run many API → Staging operations together. Each group uses a saved API, a target environment, and its own staging table name. Perfect for 50–200+ APIs in one go.
No API groups defined yet. Click "Add API Group" to begin.
How it works:
- First, go to API Ingest → Staging Table and save each API.
- Then, come here and create one group for each saved API.
- Each group will auto-page API results and auto-split staging tables when there are too many columns.
- All activity is stored in History so you can review and export later.
Staging & Cross-Environment Comparison Reports
Compare data from unlimited tables across unlimited environments using FetchXML or OData queries. Results show duplicates and unique differences by a chosen key field.
No sources added yet. Click "Add Table / Query Source" to start.
Bulk Group Comparison (Many Table/Environment Pairs)
Configure multiple comparison groups. Each group compares one table in one environment against another table in another environment using a chosen key field. Ideal when you need to compare tens of tables in one go (e.g. 50+ groups).
Bulk Group Comparison Results
Data Deduplication & Smart Ingestion
Upload files, check for duplicates in Dataverse, and generate ingestion code for new records.
Enter the field name that should be unique across records
Generated Ingestion Code
Data Analysis & Visualization - ✅ **Fully Functional**
Fetch and visualize data using advanced OData queries.
Analysis Summary
Visualization
FetchXML & Advanced Web API Queries - ✅ **Fully Functional**
Execute complex, SQL-like queries using Dataverse **FetchXML**.
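For instance, FetchXML supports server-side aggregation, similar to SQL's GROUP BY. A sketch summing revenue per status (attribute names assume the standard account table):

```xml
<fetch aggregate="true">
  <entity name="account">
    <attribute name="revenue" alias="total_revenue" aggregate="sum" />
    <attribute name="statuscode" alias="status" groupby="true" />
  </entity>
</fetch>
```

The aliases become the column names in the result set.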
Query Results
SQL Query (Experimental) - ✅ **Functional with Joins**
Raw SQL/TSQL Query Editor
Run standard SQL/TSQL against Dataverse tables.
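The conversion idea can be illustrated with a toy translator for one narrow SELECT shape. This is a sketch of the concept only; a real converter handles far more of the grammar, and the names here are illustrative:

```javascript
// Toy sketch of the SQL -> OData conversion idea for one narrow SELECT shape;
// a real converter handles far more of the grammar. Names are illustrative.
function sqlToODataPath(sql) {
  const m = sql.match(/^SELECT\s+(.+?)\s+FROM\s+(\w+)(?:\s+WHERE\s+(.+))?$/i);
  if (!m) throw new Error("Unsupported SQL shape");
  const [, cols, table, where] = m;
  const params = [];
  if (cols.trim() !== "*") params.push(`$select=${cols.replace(/\s+/g, "")}`);
  if (where) {
    // translate one simple comparison, e.g. "revenue > 100000"
    const filter = where
      .replace(/\s*>\s*/g, " gt ")
      .replace(/\s*<\s*/g, " lt ")
      .replace(/\s*=\s*/g, " eq ");
    params.push(`$filter=${filter}`);
  }
  const path = `/api/data/v9.2/${table}`;
  return params.length ? `${path}?${params.join("&")}` : path;
}
// sqlToODataPath("SELECT name, revenue FROM accounts WHERE revenue > 100000")
// returns "/api/data/v9.2/accounts?$select=name,revenue&$filter=revenue gt 100000"
```

Columns map to $select, the WHERE clause maps to $filter with OData comparison keywords, and the FROM table names the entity set.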
Converted Dataverse API Query (OData Path)
Conversion Pending...
Query Results
AI Support & Analysis - ✅ **Fully Functional**
Get AI-powered insights about your Dataverse environments using various AI providers.
Drag tables from the sidebar here to provide context.
AI Results
API Advanced – Custom API Scripts & Ingest
Write your own JavaScript fetch scripts to call any API. The script must return JSON data.
Preview results in an Excel-style grid, download them, or ingest into Dataverse as a new staging table.
Example Scripts (10+)
Click to load an example into the editor. You can customise and run it.
Store API keys and other secrets in Power Tools, then read them in your script with getSecret('name').
Use fetchWithRetry(url, options) instead of fetch().
You only need this for Dataverse ingestion. For pure API calls, leave it blank.
If empty, a default name will be generated.
Results Preview (Excel-style grid)
Run Log
Dataverse Metadata Explorer (Tables • API Links • Inline Relationships)
Python & R Studio – Run Scripts, Transform Data, Ingest to Dataverse
Run Python or R against uploaded data or live Dataverse tables, then download/email the result or ingest it as a Dataverse staging table. This uses a secure execution backend (Azure Functions or Docker) that you can switch below.
Execution Backend
The same backend image works for both; toggle the mode to switch the endpoint used by this tab.
Your backend must expose /execute and /health.
Script
Drop .json / .txt / .csv / .xlsx here, or click to browse.
Result
Dataverse Pro Manager Guide & Instructions
Getting Started
Use the Environments tab to add one or many Dataverse environments. Click "Add New Environment", enter a URL and name, then connect with device-code sign-in.
Still in Environments, click "Load All Tables" to fetch all tables across your selected environments. The Table Explorer on the left will then show all tables.
Each main tab focuses on a specific job: single-record work, bulk ingestion, staging & comparison, reports, bulk delete, queries, AI, etc.
Main Tabs Overview
Manage all Dataverse environments. Add, connect, remove, and select which environments are active. You can also reload tables and see connection status.
Work with single records. Use the environment and table dropdowns, then type a record id, OData query, or JSON payload to create, read, update, or delete.
All bulk ingestion tools for creating/updating data:
- Manual/File Ingestion → upload JSON or paste JSON and send bulk create operations via the Dataverse Batch API.
- AI Voice/Prompt Ingestion → describe what you want in natural language, then generate or adjust payloads.
- Multi-Target Ingestion → run many ingestion jobs in one go (multiple environments/tables).
- Multi-Target Ingestion (Lookups Wizard) → map lookup fields so related records are linked correctly.
Use the Bulk Delete Wizard to delete records (or whole tables) across many environments and tables in a single run.
- Create multiple Delete Groups, each targeting an Environment + Table.
- Choose a Key Field (e.g. contactid, emailaddress1) or let the wizard build filters from your JSON.
- Upload JSON/TXT/CSV or paste JSON/IDs for each group.
- (Optional) Delete all records in a table (use only for non-production or where you are sure).
Create and manage staging tables in Dataverse. You can create new entities based on uploaded/pasted data structure, load data, and view per-upload statistics (rows, columns, data types, timestamps).
Compare staging vs production or any other table/environment combinations. Includes:
- Standard comparison โ pick two sources and a key field.
- Bulk group comparison โ run many pairs at once.
- Detailed statistics and differences ready for ingestion or cleanup.
Upload/paste data and check against Dataverse tables for duplicates. Results are split into Duplicates and New data with statistics and tables. Filtered dedupe allows OData-style pre-filters.
Simple analysis and charting. Choose a table, environment and field, then build OData queries and see charts (such as bar charts) based on value counts.
Run raw FetchXML or OData queries. Paste full FetchXML or OData paths and execute directly.
Write advanced multi-table queries and visualize results. The tool auto-detects involved entity sets and shows tables and charts.
Paste any Web API path or FetchXML. No table selection is required; you can query multiple tables and joins in one statement.
Drag-and-drop style query builder. Point at tables and fields to build up queries visually and then execute them.
Build SQL-like queries visually, or type SQL directly (SELECT/INSERT) and let the app convert and run them as Dataverse operations.
Ask an AI model to generate queries, explain results, and help with complex Dataverse operations. Tables from the sidebar can be added as context.
This Instructions tab explains complex ingestion scenarios, sample code, and API reference notes.
Full audit trail of API calls sent through the app, including favorites for quick reuse.
Manage local storage and configure AI provider API keys. These settings control how the app talks to outside services.
Complex Data Ingestion Patterns
Create records with related entities in a single batch operation:
{
"requests": [
{
"id": "1",
"method": "POST",
"url": "/api/data/v9.2/accounts",
"headers": {"Content-Type": "application/json"},
"body": {
"name": "Contoso Ltd",
"revenue": 5000000
}
},
{
"id": "2",
"method": "POST",
"url": "/api/data/v9.2/contacts",
"headers": {"Content-Type": "application/json"},
"body": {
"firstname": "John",
"lastname": "Doe",
"parentcustomerid_account@odata.bind": "/accounts(1)"
}
}
]
}
Update existing records or create new ones if they don't exist:
// Use PATCH with the record GUID
PATCH /api/data/v9.2/accounts(accountid)
// The record is updated if it exists. For a true upsert:
// 1. Check existence first with GET
// 2. Then use POST (create) or PATCH (update)
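Dataverse also supports a true single-request upsert when the table has an alternate key defined. Assuming accountnumber were configured as an alternate key on account, a sketch:

```
PATCH /api/data/v9.2/accounts(accountnumber='ACME-001')
Content-Type: application/json

{ "name": "Acme Ltd" }
```

Adding an If-Match: * header restricts this to update-only (fails if the record does not exist), while If-None-Match: * restricts it to create-only (fails if it already exists).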
Use OData queries to find distinct records before ingestion:
// Get unique accounts by name
GET /api/data/v9.2/accounts?$select=name&$filter=statecode eq 0&$count=true

// Use $apply for grouping (Dataverse Web API)
GET /api/data/v9.2/accounts?$apply=groupby((name))
Start Guide – Running This Tool Safely
This tool runs completely in your browser from a local .html file and needs to call Dataverse and Microsoft login endpoints directly. Most browsers block this by default (CORS). For this reason you must either run a dedicated development browser with web security disabled, or use a CORS helper extension.
⚠️ Security Warning: Only use a dedicated development browser profile. Do not use this profile for normal web browsing, online banking, or production work. Close the browser when you are finished.
Option A – Use a CORS helper extension:
Install the following extension and enable it only while using this tool:
https://mybrowseraddon.com/access-control-allow-origin.html
Option B – Run a dedicated development browser instance (advanced users):
| Browser / OS | Command |
|---|---|
| Windows (Chrome) | chrome.exe --user-data-dir="C:\ChromeDev" --disable-web-security |
| macOS (Chrome) | open -n -a "Google Chrome" --args --user-data-dir="/tmp/ChromeDev" --disable-web-security |
| Windows (Edge) | msedge.exe --user-data-dir="C:\EdgeDev" --disable-web-security |
Save the Microsoft Dynamics 365 - Dataverse Multi-Environment Pro Manager HTML file on your machine, then open it using the dedicated development browser you configured in Step 1. You should now be able to authenticate against Dataverse and call the Web API from this tool.
Go to the Environments tab, add an environment URL (for example https://<org>.crm.dynamics.com), pick a friendly name, then authenticate using the device login flow.
Once connected, use Load All Tables to view available Dataverse tables. For your first tests, work in a sandbox or non-production environment, and start with read-only operations (GET/queries) before you try bulk updates or deletes.
Complex Ingestion 2 – Real-World JSON Ingestion Workflow
This guide walks a non-technical user through a complete end-to-end flow: receiving a large JSON file from another system, checking for duplicates against an existing Dataverse table, and ingesting only the unique records using the features already built into this application.
"Our company receives a weekly JSON file containing thousands of new customer onboarding records from an external system. Some customers may already exist in our Dataverse Contacts table. We want to: (1) detect which rows are duplicates, (2) review the unique records, and (3) ingest only those unique records into Dataverse using this Microsoft Dynamics 365 - Dataverse Multi-Environment Pro Manager tool."
Go to the Staging Tables tab and upload the JSON file using the upload area. The tool will:
- Read the JSON file structure.
- Create (or reuse) a Dataverse staging table that matches the file columns.
- Insert all rows from the JSON file into the staging table.
- Show basic statistics: number of rows, columns and upload time.
Move to the Data Deduplication & Smart Ingestion area (Dedupe tab). Choose:
- The target Dataverse table (for example contacts).
- The staging table you created in Step 1.
- The key fields to match on (for example: email address, phone number or national ID).
Then run the deduplication check. The tool will compare each row in the staging table against the target Dataverse table and mark rows as either duplicate or new/unique.
Still in the deduplication area, review the results:
- Duplicates – rows that match existing Dataverse records on the selected key fields.
- Unique – rows that do not match and are safe candidates for ingestion.
You can export the duplicate set if you want another team to review it, or you can safely ignore them and focus on the unique set.
Open the Bulk Data tab and choose the sub-tab for bulk ingestion. Configure:
- The same target Dataverse table used in Step 2.
- The unique-records view or export from the deduplication step as the source.
- Any required lookup mappings (for example linking to accounts or existing contacts).
Run the bulk ingestion. The app will send the data to Dataverse in batches, respect platform limits, and display progress and error statistics so you can see how many records were created successfully.
Finally, go to the History tab to review all calls that were executed as part of the ingestion. Here you can:
- See timestamps, endpoints and status codes for each bulk batch.
- Mark important runs as favourites.
- Export selected history entries to CSV or PDF for audit or hand-over.
You do not need to write any code. The process is: Stage the JSON file → Run deduplication → Ingest only unique rows → Review history. If you follow the steps above in order, the tool will guide you, and the Explain button will read aloud what each tab does.
Sample Code Repository (20+ Examples)
CRUD Operations (5 Examples)
Bulk Operations (5 Examples)
FetchXML & Advanced Queries (5 Examples)
SQL & OData Queries (5 Examples)
Dataverse Web API Reference
All API calls use this base format:
https://[org-name].crm.dynamics.com/api/data/v9.2/
- GET - Retrieve records
- POST - Create records
- PATCH - Update records
- DELETE - Delete records
Specify fields to return
?$select=name,accountid
Filter results
?$filter=revenue gt 100000
Sort results
?$orderby=createdon desc
Limit results
?$top=10
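These query options can be combined in a single request with &, for example:

```
GET /api/data/v9.2/accounts?$select=name,revenue&$filter=revenue gt 100000&$orderby=createdon desc&$top=10
```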
Execute multiple operations in a single request:
POST /api/data/v9.2/$batch
Content-Type: multipart/mixed; boundary=batch

--batch
Content-Type: application/http
Content-Transfer-Encoding: binary

GET /api/data/v9.2/accounts?$top=1 HTTP/1.1

--batch--
Common HTTP status codes:
- 200 - Success
- 201 - Created
- 204 - No Content (Delete success)
- 400 - Bad Request
- 401 - Unauthorized
- 403 - Forbidden
- 404 - Not Found
- 429 - Too Many Requests
API History & Audit Log
No API calls logged yet.
No favorite API calls yet.
Application Settings
Local Storage Management
This will clear all cached environments, history, AI settings and theme preferences stored in this browser. Use this if you want to completely reset the tool for a new organisation or user.
Configure AI Service Access
Google Gemini API
OpenAI API (GPT)
Microsoft Copilot API
Anthropic Claude API
DeepSeek API
Other / Custom Provider
App Appearance & Theme
Theme Presets
Quickly switch the overall look and feel of the tool.
Custom Colors
Fine-tune the colours to match your organisation or personal preference.
App Font
Choose a font that matches your organisation or personal taste.
Preview – this is how your text will appear with the selected font and colours:
The quick brown fox jumps over the lazy dog.
Note: Theme and font preferences are stored only in this browser and do not affect Dataverse itself.