Problem
I'm facing a strange issue with Alembic's autogenerate command when setting up the initial database migration for a PostgreSQL database that uses the `public` schema.

When I run `alembic revision --autogenerate -m "Initial schema setup"` against a database containing only an empty `public.alembic_version` table, the generated migration script incorrectly includes:
- `op.drop_table('alembic_version')` near the end of the `upgrade()` function
- `op.create_table('alembic_version', ...)` in the `downgrade()` function
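
Trimmed down, the generated revision looks roughly like this (header comments and the `create_table` calls for my actual models omitted; the version-table column and constraint names are what Alembic reflected from the database):

```python
"""Initial schema setup"""
from alembic import op
import sqlalchemy as sa


def upgrade() -> None:
    # ... op.create_table(...) calls for my actual models ...
    op.drop_table('alembic_version')  # <-- the erroneous drop


def downgrade() -> None:
    op.create_table('alembic_version',
        sa.Column('version_num', sa.VARCHAR(length=32), autoincrement=False, nullable=False),
        sa.PrimaryKeyConstraint('version_num', name='alembic_version_pkc')
    )
    # ... op.drop_table(...) calls for my actual models ...
```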
This causes the subsequent `alembic upgrade head` to fail with:

```
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "public.alembic_version" does not exist
[SQL: INSERT INTO public.alembic_version (version_num) VALUES ('xxxxxxxx') ...]
```
This happens because `upgrade()` executes the erroneous `op.drop_table('alembic_version')` before Alembic records the migration's success by inserting the new revision identifier into that same table.
Configuration

Here's my `env.py` setup (relevant portions):
```python
# --- Target Metadata and Schema ---
target_metadata = frontend_metadata  # Use the imported metadata
target_schema = getattr(settings.FRONTEND_DB, 'SCHEMA', None) or 'public'


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode."""
    url = get_url()
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
        include_schemas=True,
        version_table_schema=target_schema,
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online() -> None:
    """Run migrations in 'online' mode."""
    configuration = config.get_section(config.config_ini_section, {})
    configuration["sqlalchemy.url"] = get_url()
    connectable = engine_from_config(
        configuration,
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            include_schemas=True,
            version_table_schema=target_schema,
            compare_type=True,
            compare_server_default=True,
        )

        with context.begin_transaction():
            context.run_migrations()
```
I'm using SQLAlchemy with PostgreSQL. The database models are defined through a shared connection/metadata setup.

In `database.py`:
```python
class DatabaseConnection:
    def __init__(self):
        # Create metadata with proper schema
        self.frontend_metadata = MetaData(schema=settings.FRONTEND_DB.SCHEMA)
        # Create bases - these are the definitive base classes for models
        self.FrontendBase = declarative_base(metadata=self.frontend_metadata)
        # ... other initialization code ...


# Create global database instance
db = DatabaseConnection()

# Export commonly used items
FrontendBase = db.FrontendBase
frontend_metadata = db.frontend_metadata

__all__ = [
    "db",
    "FrontendBase",
    "frontend_metadata",
    # ... other exports ...
]
```
Then in `base.py`:

```python
from src.services.vamos.database.database import FrontendBase


class BaseModel(FrontendBase):
    """Base model for all SQLAlchemy models."""

    __abstract__ = True

    # Model fields...
```
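
All concrete models inherit from `BaseModel`, so they pick up `frontend_metadata` and its `public` schema. For illustration, a typical model looks something like this (`User`/`users` is a made-up example, and the import path assumes `base.py` sits next to `database.py`):

```python
from sqlalchemy.orm import Mapped, mapped_column

from src.services.vamos.database.base import BaseModel


class User(BaseModel):
    """Hypothetical model; the real models follow this pattern."""

    __tablename__ = "users"  # registered as public.users via frontend_metadata

    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(unique=True)
```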
What I've Verified

- `alembic_version` is not defined in my SQLAlchemy models
- I confirmed `target_metadata` does not contain any definition for `alembic_version` (quick check below)
- Both `target_schema` and `version_table_schema` correctly resolve to `'public'`
- If I manually delete the `op.drop_table('alembic_version')` line from the generated script, the migration works perfectly
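
For the metadata check, I ran an interactive session along these lines (the printed table names stand in for my actual models):

```python
# Confirm the version table is absent from the metadata Alembic compares against.
from src.services.vamos.database.database import frontend_metadata

# Keys are schema-qualified because the MetaData was created with schema='public'.
print(sorted(frontend_metadata.tables))                       # e.g. ['public.some_table', ...]
print("public.alembic_version" in frontend_metadata.tables)   # False
print("alembic_version" in frontend_metadata.tables)          # False
```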
Questions

- Is this a known issue with Alembic? Should it be ignoring the `alembic_version` table by default?
- Is there something wrong with my configuration that's causing this behavior?
- What's the proper way to handle this without manually editing the generated migration script? (The only workaround I've found so far is sketched below.)
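
For what it's worth, the workaround I'm currently testing is filtering the table out with an `include_object` hook, which is a documented `context.configure()` parameter. A minimal sketch; I haven't confirmed this is the intended fix rather than just papering over the symptom:

```python
def include_object(object, name, type_, reflected, compare_to):
    """Skip Alembic's own version table during autogenerate comparison."""
    if type_ == "table" and name == "alembic_version":
        return False
    return True

# ...and in env.py, add `include_object=include_object` to both
# context.configure() calls shown above.
```

This suppresses the spurious drop, but it feels like it shouldn't be necessary when `version_table_schema` already points at the right schema.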
Environment
- Alembic version: 1.15.1
- SQLAlchemy version: 2.0.39
- Python: 3.10.11
- Database: PostgreSQL 17.2 (AWS RDS)
- OS: Windows
Any help would be appreciated - I'd like to understand whether I'm doing something wrong or whether this is indeed a bug in Alembic.