Data Transfer
Move data between databases using your existing noorm configs. Tables are transferred in foreign key order so referential integrity is maintained. Cross-dialect transfers (e.g., PostgreSQL to MySQL) are supported with automatic type conversion.
When You Need This
You have a staging database full of test data that you want to push to a fresh QA environment. Or you need to seed a local dev database from a shared development server. Or you're migrating data between two production instances.
The databases share the same schema—same tables, same columns. You need the data moved, not the structure.
Supported Dialects
| Dialect | Supported | Same-server optimization | Cross-dialect |
|---|---|---|---|
| PostgreSQL | Yes | Only within same database | Yes |
| MySQL | Yes | Yes (cross-database on same host) | Yes |
| MSSQL | Yes | Yes (cross-database on same host) | Yes |
| SQLite | No | — | — |
Same-server optimization uses direct INSERT...SELECT SQL instead of reading data into the application and writing it back. Significantly faster for large datasets.
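The same-server fast path amounts to emitting one statement per table rather than round-tripping rows through the client. A minimal Python sketch of the idea (the helper and schema-qualified form are illustrative, not noorm's internals):

```python
def same_server_copy_sql(table: str, columns: list[str],
                         src_schema: str, dst_schema: str) -> str:
    """Build a direct INSERT...SELECT that copies a table entirely
    inside the database server, never materializing rows client-side."""
    cols = ", ".join(columns)
    return (f"INSERT INTO {dst_schema}.{table} ({cols}) "
            f"SELECT {cols} FROM {src_schema}.{table}")

print(same_server_copy_sql("users", ["id", "email"], "staging", "qa"))
# → INSERT INTO qa.users (id, email) SELECT id, email FROM staging.users
```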
Cross-dialect transfers convert data types automatically through a universal type system. Most common types map cleanly; some dialect-specific features (like PostgreSQL arrays) become JSON in dialects that don't support them natively.
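Conceptually, each source type hops through the universal type on its way to the destination dialect. A hypothetical two-hop table illustrating the shape of that conversion (the entries are examples, not noorm's real mapping):

```python
# Illustrative mappings only: PostgreSQL -> universal -> MySQL.
PG_TO_UNIVERSAL = {"text": "string", "int4": "int32", "_int4": "json"}  # int[] -> json
UNIVERSAL_TO_MYSQL = {"string": "TEXT", "int32": "INT", "json": "JSON"}

def convert_type(pg_type: str) -> str:
    """Resolve a PostgreSQL type to MySQL via the universal type."""
    return UNIVERSAL_TO_MYSQL[PG_TO_UNIVERSAL[pg_type]]

print(convert_type("_int4"))  # a PostgreSQL int[] lands as JSON in MySQL
```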
Interactive Mode
From the home screen:
- Press `d` to enter the database menu
- Select a config
- Choose the transfer option
- Walk through the wizard:
  - Pick destination (another config, export to file, or import from file)
  - Select tables (all or specific)
  - Choose a conflict strategy
  - Review the plan
  - Execute
The TUI shows live progress per table with row counts and batch completion.
Export/Import options appear in the destination selection list:
- "Export to .dt file" — saves data to portable files
- "Import from .dt file" — loads data from previously exported files
Headless Mode
The `--to` flag is required. Source defaults to the active config.
```shell
# Transfer all tables from active config to backup
noorm -H db transfer --to backup

# Specify source explicitly
noorm -H db transfer staging --to production

# Transfer specific tables only
noorm -H db transfer --to backup --tables users,posts,comments
```

Dry Run
Preview what will happen without transferring anything:
```shell
noorm -H db transfer --to backup --dry-run
```

Output shows:
- Whether same-server optimization applies
- Table count and estimated row counts
- Transfer order (FK dependency sorted)
- Warnings (missing destination tables, etc.)
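If you run the dry run with `--json`, the report can gate the real transfer in a script. A small Python sketch, using field names from the dry-run JSON shown later on this page (the helper itself is illustrative, not part of noorm):

```python
import json

def dry_run_ok(payload: str) -> bool:
    """Return True when a dry-run report contains no warnings."""
    report = json.loads(payload)
    for warning in report.get("warnings", []):
        print("warning:", warning)
    return not report.get("warnings")

# Shaped like the documented dry-run output
sample = '{"success": true, "dryRun": true, "tableCount": 5, "estimatedRows": 10000, "warnings": []}'
print(dry_run_ok(sample))  # True
```

A pipeline would run the real transfer only when this check passes.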
Conflict Strategies
When destination tables already contain data with matching primary keys:
```shell
# Abort on first conflict (default)
noorm -H db transfer --to backup --on-conflict fail

# Skip rows that already exist
noorm -H db transfer --to backup --on-conflict skip

# Update existing rows with source data
noorm -H db transfer --to backup --on-conflict update

# Delete and re-insert conflicting rows
noorm -H db transfer --to backup --on-conflict replace
```

| Strategy | What happens |
|---|---|
| `fail` | Stops transfer on first primary key conflict |
| `skip` | Leaves existing rows untouched, inserts new ones |
| `update` | Overwrites non-PK columns on existing rows |
| `replace` | Removes conflicting rows entirely, inserts fresh copies |
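On dialects with native upsert support, these strategies map naturally onto `INSERT` variants. An illustrative PostgreSQL-flavored sketch of that mapping (not noorm's actual SQL generation):

```python
def insert_sql(table: str, cols: list[str], pk: str, strategy: str) -> str:
    """Sketch how the conflict strategies could become INSERT forms."""
    base = (f"INSERT INTO {table} ({', '.join(cols)}) "
            f"VALUES ({', '.join('%s' for _ in cols)})")
    if strategy == "fail":
        return base  # a duplicate key raises, aborting the transfer
    if strategy == "skip":
        return base + f" ON CONFLICT ({pk}) DO NOTHING"
    if strategy == "update":
        sets = ", ".join(f"{c} = EXCLUDED.{c}" for c in cols if c != pk)
        return base + f" ON CONFLICT ({pk}) DO UPDATE SET {sets}"
    # 'replace' has no single-statement form here: delete first, then insert
    raise ValueError("replace = DELETE conflicting row, then plain INSERT")

print(insert_sql("users", ["id", "email"], "id", "update"))
```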
Options
```shell
# Set batch size for cross-server transfers (default: 1000)
noorm -H db transfer --to backup --batch-size 5000

# Clear destination tables before transfer
noorm -H db transfer --to backup --truncate

# Don't disable foreign key checks (risky for dependent tables)
noorm -H db transfer --to backup --no-fk

# Don't preserve identity/auto-increment values
noorm -H db transfer --to backup --no-identity
```

JSON Output
```shell
noorm -H --json db transfer --to backup
```

Transfer result:

```json
{
  "success": true,
  "status": "success",
  "tables": [
    {
      "table": "users",
      "status": "success",
      "rowsTransferred": 1500,
      "rowsSkipped": 0,
      "durationMs": 234
    }
  ],
  "totalRows": 1500,
  "durationMs": 1234
}
```

Dry run result:
```json
{
  "success": true,
  "dryRun": true,
  "sameServer": false,
  "tableCount": 5,
  "estimatedRows": 10000,
  "tables": [
    {
      "name": "users",
      "rowCount": 1000,
      "hasIdentity": true,
      "dependsOn": []
    }
  ],
  "warnings": []
}
```

What Happens During a Transfer
1. Planning — noorm reads the source schema, builds a foreign key dependency graph, and sorts tables so parents are transferred before children.
2. FK checks disabled — Foreign key constraints are temporarily disabled on the destination to avoid ordering issues within batches.
3. Truncate (optional) — If `--truncate` is set, destination tables are cleared first. PostgreSQL uses `TRUNCATE ... CASCADE`, MSSQL uses `DELETE` (it can't truncate tables referenced by FKs), MySQL uses standard `TRUNCATE`.
4. Data transfer — Each table is transferred in order. Same-server transfers use direct SQL; cross-server transfers read batches from the source and write them to the destination.
5. Identity handling — Identity/auto-increment values from the source are preserved by default. Sequences are reset after transfer to continue from the max value.
6. FK checks re-enabled — Constraints are turned back on.
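The dependency sort in the planning step is an ordinary topological sort over the FK graph, which Python's standard library can sketch directly (table names here are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical schema: each table maps to the tables it references via FKs.
deps = {
    "users": set(),
    "posts": {"users"},
    "comments": {"posts", "users"},
}

# static_order() yields referenced (parent) tables before referencing ones,
# so parents are transferred first.
order = list(TopologicalSorter(deps).static_order())
print(order)  # users before posts, posts before comments
```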
Requirements
- Destination tables must already exist with compatible column structure
- The noorm project must have configs for both source and destination databases
- For cross-dialect transfers, column types must be convertible (most are; check the dry-run output for warnings)
Common Patterns
Seed a dev database from staging
```shell
noorm -H db transfer staging --to local --truncate
```

Clears the local database first, then copies everything from staging.
Incremental sync with skip
```shell
noorm -H db transfer --to backup --on-conflict skip
```

Only inserts rows that don't already exist in backup. Existing rows are left untouched.
Upsert from source of truth
```shell
noorm -H db transfer --to target --on-conflict update
```

Updates all existing rows with the latest data from source, inserts new rows.
Transfer specific tables
```shell
noorm -H db transfer --to backup --tables users,user_preferences
```

Only transfers the specified tables. FK dependencies between selected tables are still respected.
CI/CD test data setup
```shell
noorm -H --json db transfer staging --to ci-test --truncate --on-conflict fail
```

Clean transfer for test environments. JSON output for pipeline integration. Fails fast if anything goes wrong.
Cross-dialect migration
```shell
noorm -H db transfer postgres-legacy --to mysql-new --dry-run
noorm -H db transfer postgres-legacy --to mysql-new
```

Migrate from PostgreSQL to MySQL. Run `--dry-run` first to check for type conversion warnings.
File Export/Import
Export data to portable .dt files for backup, sharing, or migration without a live destination database.
File Formats
| Extension | Description |
|---|---|
| `.dt` | Plain text (human-readable) |
| `.dtz` | Compressed (gzip) |
| `.dtzx` | Encrypted + compressed (requires passphrase) |
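Since `.dtz` is the same `.dt` text run through gzip, compressed exports round-trip with standard tooling. A sketch of that relationship (the payload here is placeholder text, not the real `.dt` format):

```python
import gzip

dt_text = b"placeholder .dt export content\n"  # stand-in for a real .dt payload

# A .dtz is just these bytes gzip-compressed; round-trip to show that.
dtz_bytes = gzip.compress(dt_text)
restored = gzip.decompress(dtz_bytes)
print(restored == dt_text)  # True
```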
Export to Files
The `--tables` flag is required for export.
```shell
# Export single table
noorm -H db transfer --export ./backup/users.dt --tables users

# Export multiple tables to a directory
noorm -H db transfer --export ./backup/ --tables users,posts,comments

# Export compressed
noorm -H db transfer --export ./backup/ --tables users,posts --compress

# Export encrypted (implies compression)
noorm -H db transfer --export ./backup/ --tables users --passphrase "my-secret"
```

Path rules:
- Single table → the path is the output file
- Multiple tables → the path is a directory; noorm creates `<table>.dt` per table
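The path rules above can be sketched as a small resolver (an illustrative helper, not noorm's code):

```python
from pathlib import Path

def export_paths(target: str, tables: list[str]) -> dict[str, Path]:
    """One table: the target is the output file itself.
    Several tables: the target is a directory, one <table>.dt each."""
    if len(tables) == 1:
        return {tables[0]: Path(target)}
    return {t: Path(target) / f"{t}.dt" for t in tables}

print(export_paths("./backup/", ["users", "posts"]))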
Import from Files
```shell
# Import from .dt file
noorm -H db transfer --import ./backup/users.dt

# Import with upsert
noorm -H db transfer --import ./backup/users.dtz --on-conflict update

# Import encrypted file
noorm -H db transfer --import ./backup.dtzx --passphrase "my-secret"

# Validate schema compatibility only
noorm -H db transfer --import ./backup/users.dt --dry-run
```

Export/Import JSON Output
Export result (single table):
```json
{
  "success": true,
  "mode": "export",
  "filepath": "./backup/users.dt",
  "tables": [
    { "table": "users", "filepath": "./backup/users.dt", "rowsExported": 1500, "bytesWritten": 45230 }
  ],
  "totalRows": 1500,
  "totalBytes": 45230
}
```

Export result (multiple tables):
```json
{
  "success": true,
  "mode": "export",
  "directory": "./backup/",
  "tables": [
    { "table": "users", "filepath": "./backup/users.dt", "rowsExported": 1500, "bytesWritten": 45230 },
    { "table": "posts", "filepath": "./backup/posts.dt", "rowsExported": 800, "bytesWritten": 23100 }
  ],
  "totalRows": 2300,
  "totalBytes": 68330
}
```

Import result:
```json
{
  "success": true,
  "mode": "import",
  "filepath": "./backup/users.dt",
  "rowsImported": 1500,
  "rowsSkipped": 0
}
```

Common Export/Import Patterns
Backup specific tables before risky operation:
```shell
noorm -H db transfer --export ./pre-migration-backup/ --tables users,orders --compress
# ... run migration ...
# If something goes wrong (--compress produced .dtz files):
noorm -H db transfer --import ./pre-migration-backup/users.dtz --truncate
noorm -H db transfer --import ./pre-migration-backup/orders.dtz --truncate
```

Share test data with team:
```shell
# Export encrypted for sharing
noorm -H db transfer --export ./fixtures.dtzx --passphrase "team-secret" --tables users,posts

# Teammate imports
noorm -H db transfer --import ./fixtures.dtzx --passphrase "team-secret"
```

Cross-dialect migration via file:
```shell
# Export from PostgreSQL
noorm use postgres-source
noorm -H db transfer --export ./migration-data/ --tables users,posts

# Import into MySQL
noorm use mysql-target
noorm -H db transfer --import ./migration-data/users.dt
noorm -H db transfer --import ./migration-data/posts.dt
```