Why I Migrated from Turso to Neon

When business requirements meet database limitations—an indie developer's Serverless database migration story
The Trigger: A Requirement About Time
One of MetaSight's core features is Chinese astrology calculations, which require "True Solar Time"—not the standard timezone time on your phone, but the actual solar time calculated from the user's geographic coordinates.
This means I need to store user geographic coordinates, query the corresponding timezone based on coordinates, and handle various time formats and timezone conversions precisely. When I started evaluating this requirement, I realized the current database choice was becoming a bottleneck.
Looking Back: Why I Chose Turso
When the project started, I chose Turso—a Serverless database built on SQLite. SQLite's lightweight nature means faster cold starts, no need to manage a separate database server, and the free tier was sufficient for early development. For a side project just getting started, these were reasonable considerations.
Limitations Encountered
As the product evolved, problems started piling up.
Compromises on Time Types
SQLite lacks a native timestamp type. In Drizzle ORM, I had to simulate one with integers:
// SQLite: storing timestamps as integers
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

export const users = sqliteTable("users", {
  id: text("id").primaryKey(),
  createdAt: integer("created_at", { mode: "timestamp" }),
  birthTime: integer("birth_time", { mode: "timestamp" }),
});
Every read required manual conversion, and timezone handling was a mess.
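To give a sense of the friction, here is roughly what a display helper ends up looking like when the stored value is a bare unix timestamp. This is an illustrative sketch, not code from MetaSight; the seconds unit and the hard-coded timezone are assumptions:
// Rebuild a Date from a raw integer, then decide which timezone to format it in.
// Every display site has to repeat some version of this dance.
function formatBirthTime(birthTimeSeconds: number, timeZone = "Asia/Shanghai"): string {
  const date = new Date(birthTimeSeconds * 1000);
  return new Intl.DateTimeFormat("zh-CN", {
    timeZone,
    dateStyle: "medium",
    timeStyle: "short",
  }).format(date);
}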
Missing Geographic Data Support
SQLite has no native geographic data types. Storing coordinates meant splitting into two float fields:
// SQLite: split storage for coordinates
import { sqliteTable, real } from "drizzle-orm/sqlite-core";

export const locations = sqliteTable("locations", {
  latitude: real("latitude"),
  longitude: real("longitude"),
});
Want to query "which timezone does this point belong to"? That requires complex geometric calculations in the application layer.
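For reference, this is the kind of code the application layer ends up carrying. A minimal ray-casting sketch with made-up types; real timezone boundaries are huge multi-polygons with holes, which makes the real thing considerably messier:
type Point = { lng: number; lat: number };

// Ray casting: count how many polygon edges a horizontal ray from the point crosses
function pointInPolygon(point: Point, polygon: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const a = polygon[i];
    const b = polygon[j];
    const crosses =
      (a.lat > point.lat) !== (b.lat > point.lat) &&
      point.lng < ((b.lng - a.lng) * (point.lat - a.lat)) / (b.lat - a.lat) + a.lng;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Finding the timezone means shipping boundary data with the app and looping over it
function findTimezone(point: Point, zones: { tzid: string; polygon: Point[] }[]): string | null {
  return zones.find((zone) => pointInPolygon(point, zone.polygon))?.tzid ?? null;
}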
Third-Party Library Adaptation Costs
Auth libraries like Better Auth natively support PostgreSQL, but require extra adapter layers for SQLite:
// SQLite adapter requires manual type conversions
const adapter = new LibsqlAdapter(client, {
  transformRow: (row) => ({
    ...row,
    createdAt: new Date(row.createdAt),
    updatedAt: row.updatedAt ? new Date(row.updatedAt) : null,
  }),
});
Every library upgrade required extra caution, worrying about adapter compatibility.
Serverless Isn't Turso's Ideal Use Case
Turso recently made significant changes: multi-region edge replication has been discontinued, and new users can only deploy to a single region. This means if your users are distributed globally, database latency becomes a bottleneck.
Turso's most powerful performance feature is Embedded Replicas—embedding a database replica directly in your application process, with near-zero read latency. However, this feature requires filesystem support, which serverless platforms like Vercel Functions and Cloudflare Workers don't have.
In other words, Turso's ideal use case is VPS or container deployment (where Embedded Replicas can be used), not pure serverless. If you're deploying on Vercel and need low-latency global access, Turso may not be the optimal choice.
Performance Comparison
Before making the migration decision, I researched third-party benchmark data:
Cold Start Latency
| Region | Neon | Turso |
|---|---|---|
| US East | 449ms | 105ms |
| Europe | 792ms | 482ms |
| Tokyo | 642ms | 856ms |
Hot Query Latency
| Region | Neon | Turso |
|---|---|---|
| US East | 22ms | 14ms |
| Europe | 275ms | 182ms |
| Tokyo | 357ms | 180ms |
The data shows Turso has a clear cold-start advantage in US East and Europe, though in Tokyo the cold-start numbers actually favor Neon while hot queries still favor Turso. Unfortunately, mainstream benchmarks don't cover Singapore or Hong Kong regions.
However, Neon supports connection pooling (PgBouncer), effectively mitigating cold start issues; hot query latency is comparable; and Neon deployed on Vercel can leverage Fluid Compute optimizations. Overall, performance wasn't a blocker for migration.
Why PostgreSQL
What convinced me was PostgreSQL's data type support.
Native Timestamps
// PostgreSQL: native timestamp support
import { pgTable, text, timestamp } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: text("id").primaryKey(),
  createdAt: timestamp("created_at", { withTimezone: true }).defaultNow(),
  birthTime: timestamp("birth_time", { withTimezone: true }),
});
No conversion is needed: timezone information is stored directly and handled automatically during queries.
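For example, reading a row back through Drizzle returns a real Date object with no adapter layer in between. A minimal sketch reusing the users table above, assuming a Drizzle db instance and a made-up id:
import { eq } from "drizzle-orm";

// birthTime comes back as a JavaScript Date; no integer-to-Date conversion anywhere
const [user] = await db.select().from(users).where(eq(users.id, "user_123"));
console.log(user?.birthTime?.toISOString());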
PostGIS Geographic Extension
PostgreSQL's PostGIS extension is the industry standard for geographic data. While I haven't enabled it in production yet, this capability is now readily available:
-- Query timezone by coordinates (planned feature)
SELECT tzid FROM timezones
WHERE ST_Intersects(
  geom,
  ST_SetSRID(ST_MakePoint(116.4074, 39.9042), 4326)
);
One line of SQL solves what previously required extensive application-layer code.
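From the application side, the planned lookup becomes a single query through Drizzle's sql helper. A sketch under a few assumptions: the timezones table would be loaded from a boundary dataset, the function name is mine, and the result shape matches the pooled Neon driver set up later in this post:
import { sql } from "drizzle-orm";

// Planned: resolve an IANA timezone id (e.g. "Asia/Shanghai") from coordinates
async function timezoneForPoint(lng: number, lat: number): Promise<string | null> {
  const result = await db.execute(
    sql`SELECT tzid FROM timezones
        WHERE ST_Intersects(geom, ST_SetSRID(ST_MakePoint(${lng}, ${lat}), 4326))
        LIMIT 1`
  );
  return (result.rows[0]?.tzid as string | undefined) ?? null;
}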
Why Neon Instead of Supabase
There's more than one Serverless PostgreSQL service on the market. Supabase is equally popular, but I chose Neon because it better fits my use case.
Neon's core feature is Scale to Zero—the database automatically suspends compute after 5 minutes of inactivity, with only storage incurring costs. For a side project still validating product-market fit, this means development environments are essentially free. Supabase doesn't support this feature; even idle databases continue to incur charges.
Another decisive factor is Neon's Serverless Driver. @neondatabase/serverless supports HTTP and WebSocket protocols, working directly in edge environments like Vercel Edge Functions and Cloudflare Workers without traditional TCP connections. This pairs well with Vercel's Fluid Compute.
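A minimal setup with the HTTP flavor of the driver looks roughly like this (the pooled WebSocket setup appears in the Connection Pooling section below; the environment variable name is whatever your project uses):
import { neon } from "@neondatabase/serverless";
import { drizzle } from "drizzle-orm/neon-http";

// Each query is a single HTTP request: no TCP handshake, no persistent connection,
// which is why it works in edge runtimes
const sql = neon(process.env.DATABASE_URL!);
export const db = drizzle({ client: sql });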
Supabase is more like a "Firebase alternative," providing Auth, Realtime, Storage, and other full-stack services. If you need these, it's a good choice. But I only need a pure database—the extra middleware adds complexity. Neon is closer to standard Postgres, with lower migration costs.
Migration Strategy Selection
Database migrations have multiple strategies, each with its use cases and risks.
Option 1: Downtime Migration
The simplest approach: stop service → export data → import to new database → restore service.
The risk is unpredictable downtime—large datasets may require hours, rollback costs are high if migration fails, and user experience suffers. This approach only suits very early-stage projects with minimal users where extended downtime is acceptable.
Option 2: Dual-Write + Gradual Cutover
Write to both old and new databases simultaneously, gradually shifting read traffic to the new database.
// Dual-write example
async function createUser(data: UserData) {
  // Write to old database
  await tursoDb.insert(users).values(data);
  // Sync write to new database
  await neonDb.insert(users).values(transformForPg(data));
}
The advantages are clear: zero downtime, rollback anytime, gradual data consistency verification. But the risks are equally significant—maintaining two database connections, transaction consistency during dual-write is hard to guarantee, and code complexity increases significantly. This approach suits high-availability production systems, provided the team has sufficient resources to maintain dual-write logic.
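To make the consistency risk concrete: the two inserts cannot share a transaction, so the moment the second write fails the databases diverge and you need compensation logic. A hedged sketch; the reconciliation helper is hypothetical:
// If the Neon write fails after the Turso write succeeded, all you can do is
// record the divergence and repair it later
async function createUserDualWrite(data: UserData) {
  await tursoDb.insert(users).values(data);
  try {
    await neonDb.insert(users).values(transformForPg(data));
  } catch (err) {
    await enqueueReconciliation("users", data.id, err); // hypothetical repair queue
  }
}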
Option 3: CDC (Change Data Capture)
Use tools like Debezium to capture source database change logs and sync to the target database in real-time. This approach requires no application code changes, supports incremental sync, and suits large dataset migrations. But the architecture is complex—requires Kafka or similar message queues, SQLite's CDC ecosystem is immature, and operational overhead is high. Better suited for large enterprise systems with dedicated data teams.
My Choice: Phased Migration + Idempotent Scripts
Given MetaSight's reality (a manageable user count, modest data volume, but strict data integrity requirements), I adopted a balanced approach: put the service into a brief read-only mode during off-peak hours, use idempotent scripts so the migration can be re-run and can resume after failures, and validate data integrity table by table after migration by comparing record counts and key fields.
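The read-only mode itself can be as simple as a flag checked before every write. A hypothetical sketch; how the error surfaces to users depends on your API layer:
// Flipped on for the migration window, off again once validation passes
const READ_ONLY = process.env.MAINTENANCE_READ_ONLY === "true";

export function assertWritable(): void {
  if (READ_ONLY) {
    throw new Error("Service is temporarily read-only during database migration");
  }
}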
Migration Practice
Idempotent Script Design
The core requirement for migration scripts is repeatability. Even if interrupted, re-running won't create duplicate data:
// sourceTable / targetTable are the Drizzle table definitions on the Turso and
// Neon sides; the logical tableName is used only for progress tracking
async function migrateTable(
  tableName: string,
  sourceTable: any,
  targetTable: any
) {
  // Check migration status
  const status = await getMigrationStatus(tableName);
  if (status?.status === "completed") {
    console.log(`[Skip] ${tableName} already migrated`);
    return;
  }
  // Resume from the last recorded offset
  let offset = status?.lastOffset ?? 0;
  const batchSize = 1000;
  console.log(`[Start] ${tableName} resuming from offset ${offset}`);
  while (true) {
    const rows = await sourceDb
      .select()
      .from(sourceTable)
      .orderBy(sourceTable.id) // stable ordering keeps offsets consistent across runs
      .limit(batchSize)
      .offset(offset);
    if (rows.length === 0) break;
    // Idempotent insert: rows copied by a previous run are skipped, which is safe
    // because the source is read-only during the migration window
    await targetDb
      .insert(targetTable)
      .values(rows.map(transformRow))
      .onConflictDoNothing({ target: targetTable.id });
    offset += rows.length;
    // Record progress for resume capability
    await updateMigrationStatus(tableName, { lastOffset: offset });
    console.log(`[Progress] ${tableName}: ${offset} rows`);
  }
  await updateMigrationStatus(tableName, { status: "completed" });
  console.log(`[Complete] ${tableName}`);
}
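For completeness, the getMigrationStatus and updateMigrationStatus helpers referenced above can be backed by a small tracking table in the target database. One possible shape; the table and column names here are my own:
import { eq } from "drizzle-orm";
import { pgTable, text, integer } from "drizzle-orm/pg-core";

// Progress tracking lives in the target database so it survives script restarts
export const migrationStatus = pgTable("migration_status", {
  tableName: text("table_name").primaryKey(),
  status: text("status").notNull().default("in_progress"),
  lastOffset: integer("last_offset").notNull().default(0),
});

async function getMigrationStatus(tableName: string) {
  const [row] = await targetDb
    .select()
    .from(migrationStatus)
    .where(eq(migrationStatus.tableName, tableName));
  return row ?? null;
}

async function updateMigrationStatus(
  tableName: string,
  patch: { status?: string; lastOffset?: number }
) {
  // Upsert so the first progress update also creates the tracking row
  await targetDb
    .insert(migrationStatus)
    .values({ tableName, ...patch })
    .onConflictDoUpdate({ target: migrationStatus.tableName, set: patch });
}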
Data Transformation Layer
From SQLite to PostgreSQL, type differences need handling:
function transformRow(row: SqliteRow): PostgresRow {
  return {
    ...row,
    // integer (unix seconds) → timestamp
    createdAt: new Date(row.createdAt * 1000),
    updatedAt: row.updatedAt ? new Date(row.updatedAt * 1000) : null,
    // text → jsonb (for JSON fields)
    metadata: row.metadata ? JSON.parse(row.metadata) : null,
  };
}
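One caveat with the metadata field: JSON.parse throws on malformed input, and a single bad row shouldn't abort an entire batch. A small defensive sketch, not something the original script necessarily needs:
// Returns null instead of throwing when a source row contains unparseable JSON
function safeJsonParse(value: string | null): unknown {
  if (!value) return null;
  try {
    return JSON.parse(value);
  } catch {
    console.warn("Skipping unparseable JSON value during migration");
    return null;
  }
}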
Data Validation
After migration, validate data integrity table by table:
import { count } from "drizzle-orm";

async function validateMigration() {
  // tablesToValidate: assumed list of { name, source, target } Drizzle table pairs
  for (const { name, source, target } of tablesToValidate) {
    const [{ count: sourceCount }] = await sourceDb
      .select({ count: count() })
      .from(source);
    const [{ count: targetCount }] = await targetDb
      .select({ count: count() })
      .from(target);
    if (sourceCount !== targetCount) {
      throw new Error(
        `${name} count mismatch: source ${sourceCount}, target ${targetCount}`
      );
    }
    console.log(`[Validated] ${name}: ${sourceCount} rows`);
  }
}
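Counting rows catches missing data but not corrupted values, so on top of that I compare key fields on a sample of rows. A sketch of how that comparison might look; usersSqlite and usersPg stand in for the two schema definitions, and the seconds-to-milliseconds comparison mirrors transformRow:
import { eq } from "drizzle-orm";

async function spotCheckUsers(sampleSize = 100) {
  const sample = await sourceDb.select().from(usersSqlite).limit(sampleSize);
  for (const row of sample) {
    const [migrated] = await targetDb
      .select()
      .from(usersPg)
      .where(eq(usersPg.id, row.id));
    // Compare key fields, not just existence
    if (!migrated || migrated.createdAt.getTime() !== row.createdAt * 1000) {
      throw new Error(`users row ${row.id} failed field comparison`);
    }
  }
  console.log(`[Spot check] users: ${sampleSize} rows compared`);
}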
Connection Pooling
In serverless environments, connection management is crucial. Neon's built-in PgBouncer makes this simple:
import { Pool } from "@neondatabase/serverless";
import { drizzle } from "drizzle-orm/neon-serverless";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,
});

export const db = drizzle({ client: pool });
Using the -pooler suffixed connection string, Neon automatically handles pooling, supporting up to 10,000 concurrent connections.
After the Migration
Code became cleaner. Type conversions that previously needed adapter layers are now natively supported by the database.
Developer experience improved noticeably: Drizzle ORM's PostgreSQL support is more complete, IDE hints are more accurate, and documentation examples work as written.
Future features now have room to grow. True solar time calculations, geofencing, location search—these requirements now only need business layer implementation.
For Those Making Similar Choices
If you're also debating database selection, start from requirements—not "which is faster," but "which meets my data type needs." Mainstream solutions mean better tooling and community support, and technical debt becomes harder to repay as data grows, so migrate early rather than late.
Turso is an excellent product, better suited for scenarios that don't require complex data types and prioritize minimal cold start latency. But for applications like MetaSight that need to handle time and geographic data, PostgreSQL is the better choice.