
SQL Insert & Upsert Builder

Paste Excel, CSV, or TSV data and generate bulk SQL INSERT, PostgreSQL/SQLite ON CONFLICT, MySQL ON DUPLICATE KEY UPDATE, or SQL Server MERGE statements locally.

Your data stays private. Pasted rows, column mappings, and generated SQL all stay in your browser. No uploads. No server processing. Safer for customer lists, CRM exports, finance data, and internal admin work — still review organisation data policy before pasting confidential datasets.

Quick examples
Pasted spreadsheet data
Paste straight from Excel, Google Sheets, a CSV export, or a TSV file. Delimiter is auto-detected.
Parsed preview
Parsed rows and columns will appear here as soon as your paste is non-empty.
Generated SQL
Paste spreadsheet data above to generate SQL. The output updates automatically as you tweak settings.
Generation summary

Paste rows and pick a dialect to see a plain-English summary of the output.

Dialect & mode

SQL dialect

Operation mode

Plain bulk INSERT. No conflict handling — duplicates will error if the target has a unique constraint.
Target table
Parsing

Delimiter

Batching & output

Transaction wrapper

Heads-up

This tool escapes literal values. Table, schema, and column identifiers are validated or quoted separately — it is not a substitute for parameterized queries in application code.

Upsert depends on a real PRIMARY KEY or UNIQUE index on the key columns you pick. Without one, MySQL will insert duplicates and PostgreSQL/SQLite will raise a conflict-target error.

Always review the generated SQL and test on a non-production copy before running against live data.

Pasted rows, column mappings, and generated SQL stay in your browser. No uploads, no server processing, no network calls during generation. Still review your organisation's data handling policy before pasting customer, finance, or security-sensitive data into any tool.

Overview

Turning a spreadsheet into a safe SQL script is one of those chores every backend developer hits constantly: a product manager sends a CSV of rows to seed, an analyst shares a Google Sheet with user metadata, a support ticket comes in with a list of records to backfill. In the worst case people hand-write INSERT statements in a text editor; in the slightly better case they paste the data into a one-off script. Both approaches are error-prone: an unescaped apostrophe, a comma inside a quoted field, or a boolean that silently becomes the string "true" are all easy to miss.

This tool removes that chore. Paste rows straight from Excel, Google Sheets, a CSV export, or a TSV file. The delimiter is auto-detected. Columns are scanned to infer types — numbers are emitted unquoted, booleans are normalised (TRUE/FALSE for PostgreSQL, 1/0 for MySQL, SQLite, and SQL Server), JSON is validated and emitted as a proper literal (with a ::jsonb cast for PostgreSQL), and empty cells become NULL. Every literal is escaped using portable rules — single quotes are doubled and backslashes are left alone — the portable default for standard-conforming string literals across every major dialect.
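As a sketch of what that inference produces, suppose you paste two rows destined for a hypothetical users table (the table and column names here are illustrative, not part of the tool):

```sql
-- Hypothetical PostgreSQL output for a two-row paste.
-- Note the doubled quote in 'O''Neill', the unquoted number,
-- the normalised boolean, the ::jsonb cast, and NULL for an empty cell.
INSERT INTO users (id, name, is_active, profile) VALUES
  (1, 'O''Neill', TRUE, '{"tier": "gold"}'::jsonb),
  (2, 'Smith', FALSE, NULL);
```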

When you need more than a plain INSERT, switch modes. Generate PostgreSQL and SQLite INSERT ... ON CONFLICT (keys) DO UPDATE SET col = EXCLUDED.col, MySQL’s modern alias-based ON DUPLICATE KEY UPDATE, or SQL Server MERGE with dedicated source and target aliases. Tick which columns are keys and which should be updated on conflict and the builder assembles the exact clause each engine expects. Large row sets are split into configurable batches, optionally wrapped in transactions, and annotated with a header comment so the person running the script (very possibly future-you) sees the dialect, mode, row count, and any warnings at a glance.
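For example, with a hypothetical users table, id ticked as the key column, and name ticked for update on conflict, the PostgreSQL upsert mode would produce something like:

```sql
-- Sketch of the ON CONFLICT output shape (names are illustrative).
INSERT INTO users (id, name) VALUES
  (1, 'Ada'),
  (2, 'Grace')
ON CONFLICT (id) DO UPDATE SET
  name = EXCLUDED.name;
```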

Everything runs locally in the browser — pasted data, column mappings, and generated SQL never leave your machine. That makes it safe for customer lists, CRM exports, and internal admin workflows where uploading to a third-party service would be uncomfortable. It also makes the output reviewable: you get the SQL as text, not as a one-click “run against production” button, and you're encouraged to test against a non-production copy before running against live data.

Use cases

When to use it

  • Seeding a database from a CSV export: product catalogs, fixture data, or a one-off customer import where you need to run a single SQL script.
  • Idempotent data backfills: upsert rows so re-running the script is safe — ON CONFLICT DO UPDATE, DO NOTHING, or MERGE depending on dialect.
  • Copy-and-paste from Excel or Google Sheets: TSV pastes from spreadsheets are auto-detected; smart quotes, blank rows, and trailing whitespace are cleaned up for you.
  • Quick ticket reproductions: turn a bug report's CSV attachment into a clean INSERT you can run in a local database to reproduce the issue.
  • Migrating between dialects: generate Postgres ON CONFLICT today, then flip the dialect to MySQL or SQL Server MERGE for the same source data.

When it's not enough

  • Streaming production ETL: for scheduled, high-volume loads use COPY / LOAD DATA INFILE, a real ETL tool, or parameterized prepared statements in code.
  • Untrusted user input: this tool escapes literal values, but production apps should use bound parameters — never concatenate user input into SQL.
  • Schema changes: the generator assumes the target table already exists with the right columns, keys, and constraints. It does not emit CREATE TABLE.
  • Extremely wide rows or multi-million-row loads: for very large datasets, use your database's native bulk loader — psql \copy, mysqlimport, bcp, or cloud warehouse ingest.

How to use it

  1. Paste or upload spreadsheet data

    Paste rows straight from Excel, Google Sheets, a CSV export, or a TSV file. Delimiter auto-detection handles commas, tabs, semicolons, and pipes.

  2. Pick a dialect and operation mode

    Choose PostgreSQL, MySQL, SQLite, or SQL Server. Then pick bulk INSERT, ON CONFLICT DO UPDATE / DO NOTHING, ON DUPLICATE KEY UPDATE, or MERGE.

  3. Confirm target schema, table, and columns

    Enter schema + table. Review inferred target column names (normalised to snake_case), types, and null handling. Tick the column(s) that make up the PRIMARY KEY or UNIQUE index you expect to conflict on.

  4. Tune batching, transactions, and identifier quoting

    Split output into batches of N rows for more readable diffs, wrap in a transaction, and force-quote identifiers if your names collide with reserved words.

  5. Copy or download and test on a non-production copy

    Always run the generated SQL against a staging or local copy first. Review warnings about non-numeric values in numeric columns, invalid JSON, or missing key columns before executing against live data.
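Put together, the batching and transaction settings from step 4 shape the output roughly like this (hypothetical table, batch size 2):

```sql
-- Two batches wrapped in a single transaction.
BEGIN;
-- Batch 1 of 2
INSERT INTO users (id, name) VALUES (1, 'Ada'), (2, 'Grace');
-- Batch 2 of 2
INSERT INTO users (id, name) VALUES (3, 'Edsger'), (4, 'Barbara');
COMMIT;
```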

Common errors and fixes

ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification

PostgreSQL requires the ON CONFLICT target columns to match an existing PRIMARY KEY or UNIQUE index. Create the index first, or tick the correct key columns in the builder.
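A minimal sketch of the fix, assuming a hypothetical users table you want to key on email:

```sql
-- Create the unique index the conflict target needs, then upsert.
CREATE UNIQUE INDEX IF NOT EXISTS users_email_key ON users (email);

INSERT INTO users (email, name) VALUES ('ada@example.com', 'Ada')
ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name;
```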

Duplicate entry 'xxx' for key 'PRIMARY'

You're running a plain INSERT against a MySQL table with a duplicate key. Switch to Upsert (ON DUPLICATE KEY UPDATE) mode, or de-duplicate rows before generating SQL.

near "ON": syntax error (SQLite)

ON CONFLICT upsert requires SQLite 3.24+. If you're on an older version, switch to plain INSERT OR REPLACE / INSERT OR IGNORE, or upgrade SQLite.
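The pre-3.24 fallbacks look like this (hypothetical table; note that OR REPLACE deletes and re-inserts the whole row, so any columns you don't supply are reset to their defaults):

```sql
-- Skip rows whose key already exists:
INSERT OR IGNORE INTO users (id, name) VALUES (1, 'Ada');

-- Replace the whole existing row (not a column-level update):
INSERT OR REPLACE INTO users (id, name) VALUES (1, 'Ada Lovelace');
```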

Incorrect syntax near 'MERGE' (SQL Server)

MERGE requires SQL Server 2008 or later and must be terminated with a semicolon. The builder emits the semicolon; the likely cause is a copy-paste issue or a database running at compatibility level 90 or lower, where MERGE is unavailable.

Unterminated string literal

Usually caused by a raw apostrophe in the source data. The builder doubles single quotes automatically; if you still hit this, verify you didn't post-edit the generated SQL and re-introduce an unescaped quote.

Column count doesn't match value count at row N

A CSV row has fewer or more cells than the header. The builder normalises widths and warns you; check the warning banner on the parsed preview panel and fix the source row.

ERROR: invalid input syntax for type jsonb

A cell marked as JSON is not valid JSON. Double-check the cell in the parsed preview or change the column type to String if you don't need JSON semantics.
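For reference, a well-formed JSON cell comes out as a quoted string literal with the PostgreSQL cast (table and column names here are illustrative):

```sql
-- Valid JSON, escaped as a string literal and cast to jsonb.
INSERT INTO events (id, payload)
VALUES (1, '{"type": "signup", "source": "web"}'::jsonb);
```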

Frequently asked questions

Direct answers: INSERT, UPSERT, and MERGE without the footguns

Short, accurate summaries you can apply to real databases. Where dialects diverge, the notes call the differences out so you don't accidentally generate Postgres syntax and run it against MySQL.

How do I insert multiple rows in a single SQL statement?

Every major dialect supports the comma-separated VALUES form: INSERT INTO table (a, b) VALUES (1, 2), (3, 4), (5, 6);. It's a single statement with many tuples, wrapped in a single implicit transaction. This tool splits very large row sets into configurable batches so each statement stays readable and so you can resume safely if one batch fails.

What is an UPSERT and why should I use it?

UPSERT means “insert a row, or update it if a row with the same key already exists”. It's the idempotent form of an INSERT: re-running the same script doesn't duplicate data and doesn't fail with a unique-constraint error. You should use it for backfills, data imports, and any script that needs to be safely re-runnable — which is almost every script that touches production.

PostgreSQL UPSERT: ON CONFLICT DO UPDATE vs DO NOTHING

PostgreSQL (9.5+) and SQLite (3.24+) both use INSERT ... ON CONFLICT (keys) DO UPDATE SET col = EXCLUDED.col. The conflict target must match a real PRIMARY KEY or UNIQUE index. Use DO NOTHING when you want to insert missing rows but silently ignore anything that already exists. Use DO UPDATE with EXCLUDED.col when you want to overwrite existing rows with the values you just tried to insert.
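Side by side, with a hypothetical users table keyed on id:

```sql
-- DO NOTHING: insert missing rows, silently skip keys that already exist.
INSERT INTO users (id, name) VALUES (1, 'Ada')
ON CONFLICT (id) DO NOTHING;

-- DO UPDATE: overwrite the existing row with the would-be-inserted values.
INSERT INTO users (id, name) VALUES (1, 'Ada Lovelace')
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;
```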

MySQL UPSERT: alias-based ON DUPLICATE KEY UPDATE

MySQL 8.0.19 introduced an alias form that replaces the now-deprecated VALUES(col) reference: INSERT INTO t (a, b) VALUES (1, 2) AS new ON DUPLICATE KEY UPDATE a = new.a, b = new.b;. This tool emits the modern alias form. On older MySQL versions you'll need the legacy VALUES(col) reference, which the deprecation notes recommend replacing at your earliest opportunity.
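Both forms, using an illustrative table t:

```sql
-- Modern alias form (MySQL 8.0.19+), which this tool emits:
INSERT INTO t (a, b) VALUES (1, 2) AS new
ON DUPLICATE KEY UPDATE a = new.a, b = new.b;

-- Legacy form for older MySQL and MariaDB (deprecated in recent MySQL):
INSERT INTO t (a, b) VALUES (1, 2)
ON DUPLICATE KEY UPDATE a = VALUES(a), b = VALUES(b);
```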

SQL Server UPSERT: MERGE (with caveats)

SQL Server doesn't have ON CONFLICT. The closest equivalent is MERGE. MERGE is powerful but has historical edge cases (duplicate source rows, concurrency races) that make it easy to get wrong. Always review the match and insert / update clauses, ensure your source data is de-duplicated on the match key, and test against a non-production copy. For straightforward cases many teams prefer an explicit UPDATE followed by an INSERT ... WHERE NOT EXISTS.
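A sketch of the MERGE shape the builder emits, with illustrative names and id as the match key:

```sql
MERGE INTO dbo.users AS tgt
USING (VALUES
    (1, N'Ada'),
    (2, N'Grace')
) AS src (id, name)
    ON tgt.id = src.id
WHEN MATCHED THEN
    UPDATE SET name = src.name
WHEN NOT MATCHED THEN
    INSERT (id, name) VALUES (src.id, src.name);
```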

Escaping single quotes, Unicode, and NULLs correctly

This tool escapes single quotes by doubling them (O'Neill becomes 'O''Neill'), which is portable across every supported dialect. It avoids backslash escaping because standard_conforming_strings in PostgreSQL and NO_BACKSLASH_ESCAPES in MySQL both treat backslashes literally inside regular string literals. Empty cells become NULL by default; you can toggle this per column. SQL Server strings are prefixed with N'...' so Unicode characters round-trip cleanly.
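Concretely, the same source cell rendered for the two dialect families (hypothetical table):

```sql
-- PostgreSQL / MySQL / SQLite: quotes doubled, empty cell as NULL.
INSERT INTO people (name, city) VALUES ('O''Neill', NULL);

-- SQL Server: same escaping, plus the N prefix so Unicode round-trips.
INSERT INTO people (name, city) VALUES (N'O''Neill', N'Zürich');
```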

UPSERT cheat sheet by dialect

  • PostgreSQL 9.5+: INSERT ... ON CONFLICT (keys) DO UPDATE SET col = EXCLUDED.col. Requires an existing PRIMARY KEY or UNIQUE index matching the conflict target; EXCLUDED refers to the would-be-inserted row.
  • SQLite 3.24+: INSERT ... ON CONFLICT (keys) DO UPDATE SET col = excluded.col. Same shape as Postgres; before 3.24, use INSERT OR REPLACE / INSERT OR IGNORE instead.
  • MySQL 8.0.19+: INSERT ... AS new ON DUPLICATE KEY UPDATE col = new.col. Modern alias form that replaces the deprecated VALUES(col); requires MySQL 8.0.19 or later.
  • MySQL <8.0.19: INSERT ... ON DUPLICATE KEY UPDATE col = VALUES(col). Legacy form; still works but is marked deprecated in newer MySQL releases.
  • MariaDB 10.3+: INSERT ... ON DUPLICATE KEY UPDATE col = VALUES(col). MariaDB follows the legacy MySQL syntax; the MySQL 8.0.19 alias form is not supported.
  • SQL Server 2008+: MERGE INTO tgt USING (VALUES ...) AS src (cols) ON match WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT ...; Has historical edge cases: de-duplicate source rows on the match key, always terminate with a semicolon, and test before production.
  • Oracle 19c+: MERGE INTO t USING (...) ON (...) WHEN MATCHED THEN UPDATE ... Oracle's MERGE. Not emitted by this tool yet; use the SQL Server MERGE output as a starting point and adapt syntax.

Working from a messy export? Clean it up first with the CSV ↔ JSON converter and cleaner, then paste the cleaned output into this builder. If your workflow is moving the other way — from a SQL result set into a spreadsheet — pair this tool with the JSON formatter and validator to reshape responses, or the text case converter to normalise column names before generating SQL. For generated identifiers like UUIDs and ULIDs, use the UUID / ULID / Nano ID generator to seed key columns, and convert Unix timestamps to readable dates with the Unix timestamp converter before picking a date column type.