
Database / Liquibase interview questions

1. What is Liquibase and what problem does it solve?
2. What is a changeLog in Liquibase?
3. What is a changeSet in Liquibase and how is it identified?
4. What are the DATABASECHANGELOG and DATABASECHANGELOGLOCK tables?
5. What is the difference between Liquibase and Flyway?
6. What are contexts in Liquibase and how do you use them?
7. What are labels in Liquibase and how do they differ from contexts?
8. How does rollback work in Liquibase?
9. What is a Liquibase precondition and why would you use one?
10. How do you integrate Liquibase with Spring Boot?
11. What are Liquibase change types and can you name common ones?
12. What is the Liquibase diff command and when is it useful?
13. What are Liquibase tags and how do you use rollback to a tag?
14. What is runOnChange and runAlways in a Liquibase changeSet?
15. What are the markNextChangeSetRan and markNextChangeSetRanSQL commands used for?
16. How do you manage Liquibase changeLog files in a multi-module Maven project?
17. What is the Liquibase updateSQL command and when is it used?
18. What is a Liquibase snapshot and what commands use it?
19. What is the failOnError attribute in a changeSet and when should you set it to false?
20. How does Liquibase handle transactions during migration?
21. What is the Liquibase Hub and what does it provide?
22. How do you use Liquibase with multiple databases or schemas in a single application?
23. What is Liquibase checksum validation and how do clearCheckSums and validChecksums work?
24. What are Liquibase custom change types and how do you create one?
25. How do you handle large table migrations in Liquibase without causing downtime?
26. What is the Liquibase Maven plugin and how do you run migrations with it?
27. What is the Liquibase status command and how does it differ from validate?
28. What is Liquibase property substitution and how does it work?
29. How do you write Liquibase changeLogs in YAML format?
30. What is the Liquibase formatted SQL changeLog format?
31. What is the Liquibase updateTestingRollback command?
32. How do you use Liquibase with Docker and Testcontainers in integration tests?
33. What is the difference between addColumn with a default value and a nullable column in Liquibase for large tables?
34. What is liquibase.properties and what configuration belongs in it?

1. What is Liquibase and what problem does it solve?

Liquibase is an open-source database schema change management tool that tracks, versions, and deploys changes to a database in a controlled, repeatable way. It solves the problem of managing database migrations alongside application code — ensuring every environment (dev, test, staging, production) runs against the exact same schema version.

Without a tool like Liquibase, teams often rely on a shared folder of SQL scripts, a README with manual steps, or tribal knowledge about what has been run where. This leads to drift: production might be missing a column that staging has, or a developer's local database has leftover tables from an abandoned feature branch. Liquibase eliminates this drift by maintaining a change log that records every schema change ever applied, along with a tracking table (DATABASECHANGELOG) in the database itself that records exactly which changes have been applied.

When Liquibase runs, it compares the change log against the tracking table and applies only the changes that haven't been executed yet in that database. This makes deployments idempotent — you can run Liquibase on any environment without fear of re-applying already-executed changes or missing ones. It also supports rollback for changes that define an undo operation, enabling controlled recovery from bad deployments.

Which table does Liquibase maintain in the database to track applied changes?
What does Liquibase do when it runs against a database that already has some changes applied?
2. What is a changeLog in Liquibase?

A changeLog is the master configuration file (or set of files) that contains all the database changes Liquibase should manage. It acts as the single source of truth for every schema modification your application has ever needed — from the initial table creation to the latest column rename. Liquibase reads this file on every run and decides which changeSets to apply based on what's already recorded in the database tracking table.

A changeLog can be written in four formats: XML, YAML, JSON, or formatted SQL. XML is the most common in enterprise Java projects. Here's a minimal example:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.9.xsd">

    <changeSet id="1" author="alice">
        <createTable tableName="customer">
            <column name="id" type="BIGINT" autoIncrement="true">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="email" type="VARCHAR(255)"/>
        </createTable>
    </changeSet>
</databaseChangeLog>

For large projects it is common to split the changeLog into multiple files organised by version or feature, and then include them from a root changeLog using the <include> or <includeAll> tags. This keeps individual files manageable and makes code reviews easier — each pull request only adds a new changeSet file rather than modifying a single growing file.
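As an illustrative sketch (the file names are hypothetical), a root changeLog composed from per-release files could look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.9.xsd">

    <!-- Included files run in the order they are listed -->
    <include file="db/changelog/v1.0-initial-schema.xml"/>
    <include file="db/changelog/v1.1-add-orders.xml"/>

    <!-- includeAll runs every changeLog in the directory, alphabetically -->
    <includeAll path="db/changelog/v2.0/"/>
</databaseChangeLog>
```

With this layout, each pull request adds one new file under the appropriate release directory and (if using explicit includes) one line in the root changeLog.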

Which Liquibase changeLog format is most common in enterprise Java projects?
How do you split a large Liquibase changeLog across multiple files?
3. What is a changeSet in Liquibase and how is it identified?

A changeSet is the atomic unit of change in Liquibase — it contains one or more related database operations (create table, add column, insert data, etc.) that should be applied together as a single migration step. Each changeSet is uniquely identified by the combination of three attributes: id, author, and the changeLog file path. Liquibase uses this three-part key to track whether a given changeSet has already been applied in a particular database.

The id is a free-form string — it can be a sequential number (1, 2, 3), a timestamp (20240115-001), or a descriptive slug (add-email-column). The author is usually the developer's name or username. The file path is recorded automatically from the location of the changeLog file on the classpath or filesystem.

<changeSet id="20240115-add-phone" author="bob">
    <addColumn tableName="customer">
        <column name="phone" type="VARCHAR(20)"/>
    </addColumn>
    <rollback>
        <dropColumn tableName="customer" columnName="phone"/>
    </rollback>
</changeSet>

A critical rule: once a changeSet has been applied to any database (especially production), you must never modify its content. If the content of an applied changeSet changes, Liquibase will detect a checksum mismatch on the next run and fail with an error, preventing the migration from running. To make a correction, you always add a new changeSet rather than editing an existing one. This immutability guarantee is what makes the change history trustworthy.

What three attributes uniquely identify a Liquibase changeSet?
What happens if you modify the content of an already-applied changeSet?
4. What are the DATABASECHANGELOG and DATABASECHANGELOGLOCK tables?

Liquibase creates and manages two system tables in the target database. These tables are the backbone of how Liquibase tracks state and prevents concurrent deployments from corrupting the database.

DATABASECHANGELOG is the audit trail of every applied changeSet. Each row records the changeSet ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, EXECTYPE (EXECUTED or MARK_RAN), MD5SUM (the checksum), DESCRIPTION, COMMENTS, TAG, LIQUIBASE version, CONTEXTS, and LABELS. On every run, Liquibase reads this table to determine which changeSets have already been applied.
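Because it is an ordinary table, you can inspect the migration history directly with a read-only query (column names as listed above):

```sql
-- Inspect the migration history, most recently executed first
SELECT ID, AUTHOR, FILENAME, DATEEXECUTED, EXECTYPE, MD5SUM
FROM DATABASECHANGELOG
ORDER BY ORDEREXECUTED DESC;
```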

DATABASECHANGELOGLOCK is a single-row locking table. Before Liquibase begins applying any changes, it acquires a lock by setting LOCKED=1 along with the hostname and timestamp. This prevents two Liquibase processes (for example, two application pods starting simultaneously in Kubernetes) from running migrations concurrently and causing race conditions. Once migrations complete, the lock is released. If a Liquibase process crashes mid-migration, the lock remains set. You can release it manually with the releaseLocks command or by directly updating the table.

-- Manually release a stuck lock
UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKEDBY=NULL, LOCKGRANTED=NULL WHERE ID=1;

Both tables are created automatically on the first Liquibase run if they do not exist. They should never be dropped or manually edited in normal operations — doing so can desynchronise Liquibase's view of the database from its actual state.

Why does Liquibase need the DATABASECHANGELOGLOCK table?
What column in DATABASECHANGELOG stores a hash to detect if an applied changeSet was modified?
5. What is the difference between Liquibase and Flyway?

Liquibase and Flyway are the two dominant Java database migration tools, and they share the same core goal — versioned, automated schema management — but differ in philosophy, flexibility, and feature set.

Liquibase vs. Flyway Comparison

Aspect                  | Liquibase                                                    | Flyway
Migration format        | XML, YAML, JSON, or formatted SQL                            | SQL or Java-based migrations
Rollback support        | Built-in, per changeSet                                      | Manual — requires writing separate undo scripts (auto-undo is a paid feature)
Diff/comparison         | diff command compares two databases or snapshots             | No built-in diff; requires external tooling
Conditional logic       | preconditions, contexts, labels, runOnChange, runAlways      | Limited — mostly relies on versioned file naming
Checksum handling       | Strict — fails on checksum mismatch                          | Strict for versioned migrations; repeatable migrations re-run on checksum change
Tracking table name     | DATABASECHANGELOG                                            | flyway_schema_history
Spring Boot integration | liquibase-core dependency plus spring.liquibase.* properties | flyway-core dependency plus spring.flyway.* properties

Flyway is often praised for its simplicity — you name your files V1__create_table.sql and Flyway handles the rest. Liquibase is more powerful but has a steeper learning curve. Teams that need fine-grained rollback, database-agnostic migrations (writing once and targeting Oracle, PostgreSQL, MySQL), or complex conditional logic tend to prefer Liquibase. Teams that want minimal configuration with plain SQL scripts often prefer Flyway.

Which feature does Liquibase offer that Flyway lacks in its Community edition?
What is the name of Flyway's tracking table?
6. What are contexts in Liquibase and how do you use them?

Contexts in Liquibase are tags you attach to a changeSet to control which environments it runs in. When you execute Liquibase and pass a context value (via the command line, a properties file, or Spring configuration), only the changeSets whose contexts match the active context are executed. ChangeSets without a context attribute always run, regardless of which context is active.

A common use case is seeding test or development data that should never touch production:

<changeSet id="seed-test-users" author="alice" context="test,dev">
    <insert tableName="user">
        <column name="id" value="1"/>
        <column name="email" value="testuser@example.com"/>
    </insert>
</changeSet>

<changeSet id="create-admin" author="alice" context="prod">
    <insert tableName="user">
        <column name="id" value="100"/>
        <column name="email" value="admin@company.com"/>
    </insert>
</changeSet>

In Spring Boot you configure the active context with:

spring.liquibase.contexts=dev

Contexts support logical expressions like context="test or dev" and context="!prod". The ! (not) operator is useful for changeSets you want to run everywhere except production. This gives you a powerful mechanism for maintaining one changeLog that covers all environments without duplicating files or maintaining separate changeLog trees.
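For instance, a changeSet intended for every environment except production could be written with the negation operator (the table and constraint here are illustrative):

```xml
<!-- Relax a constraint for local/test databases only;
     skipped whenever the active context is prod -->
<changeSet id="loosen-email-check" author="alice" context="!prod">
    <dropNotNullConstraint tableName="customer"
                           columnName="email"
                           columnDataType="VARCHAR(255)"/>
</changeSet>
```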

What happens to a changeSet that has NO context attribute when Liquibase runs with --contexts=prod?
Which context expression runs a changeSet in all environments except production?
7. What are labels in Liquibase and how do they differ from contexts?

Labels in Liquibase look superficially similar to contexts — both are string tags on a changeSet used to filter which changeSets run. The key difference is in where the expression logic lives. With contexts, the filtering expression is written on the changeSet itself (e.g., context="!prod"). With labels, the expression is passed at runtime (e.g., --label-filter="release-2.0 and !experimental") — the changeSet just lists its labels, and the caller decides the filter logic.

Contexts vs. Labels

Aspect                   | Contexts                              | Labels
Where filter logic lives | On the changeSet attribute            | Passed at runtime via --label-filter
Typical use              | Environment targeting (dev/test/prod) | Feature flags, release batches, sprint grouping
Boolean expressions      | On the changeSet (e.g., !prod)        | On the command line (e.g., release-2.0 and !experimental)
No-match behavior        | changeSet with no context runs always | changeSet with no labels runs always (even when a filter is active)

Example: A team tags each changeSet with a sprint or release label:

<changeSet id="add-audit-columns" author="carol" labels="release-3.1,audit-feature">
    <addColumn tableName="orders">
        <column name="created_by" type="VARCHAR(100)"/>
    </addColumn>
</changeSet>

To deploy only release-3.1 changes: liquibase --label-filter="release-3.1" update. Labels give operations teams flexibility to apply subsets of a changeLog — for phased rollouts, feature deployments, or running only a specific batch of changes in a maintenance window.

Where is the boolean filter expression placed when using Liquibase labels (as opposed to contexts)?
What is the primary semantic difference between contexts and labels?

8. How does rollback work in Liquibase?

Liquibase rollback allows you to undo previously applied changeSets, reverting the database to an earlier state. There are two ways to define rollback behavior for a changeSet: automatic rollback and explicit rollback.

For many built-in Liquibase change types — createTable, addColumn, createIndex, addForeignKeyConstraint — Liquibase can automatically generate the inverse operation (DROP TABLE, DROP COLUMN, etc.) without you writing anything. These are called automatically reversible changes.

For changes that are not automatically reversible — dropTable, insert, sql, any custom SQL — you must provide an explicit rollback block:

<changeSet id="add-status-column" author="dave">
    <addColumn tableName="order">
        <column name="status" type="VARCHAR(20)" defaultValue="PENDING"/>
    </addColumn>
    <!-- Explicit rollback not needed here - addColumn is auto-reversible -->
</changeSet>

<changeSet id="populate-status" author="dave">
    <sql>UPDATE "order" SET status = 'PENDING' WHERE status IS NULL</sql>
    <rollback>
        <!-- We cannot un-update rows without knowing old values, so mark empty -->
        <sql>-- intentionally empty: data rollback not feasible</sql>
    </rollback>
</changeSet>

Rollback commands available in Liquibase:

  • rollback <tag> — rolls back all changeSets applied after the specified tag.
  • rollbackCount <N> — rolls back the last N changeSets.
  • rollbackToDate <datetime> — rolls back to a specific point in time.

You can also preview what would be rolled back without actually executing it using rollbackSQL, rollbackCountSQL, etc. — these generate the SQL that would be run, letting you review before committing.

For which Liquibase change type is rollback NOT automatic and must be specified explicitly?
Which Liquibase command shows the SQL that WOULD be run during rollback without actually executing it?
9. What is a Liquibase precondition and why would you use one?

A precondition is a condition that Liquibase checks before executing a changeSet or the entire changeLog. If the condition is not met, Liquibase can either halt execution, skip the changeSet, mark it as run, or issue a warning — depending on the onFail and onError settings. Preconditions let you write defensive migrations that check the actual database state before blindly executing DDL.

Common use cases:

  • Verify a table or column already exists before trying to modify it (useful for changeSets that might have been applied manually in some environments).
  • Check that a specific database type or version is in use before executing vendor-specific SQL.
  • Confirm a required extension or configuration exists before proceeding.
<changeSet id="add-index-if-table-exists" author="eve">
    <preConditions onFail="MARK_RAN">
        <tableExists tableName="order_item"/>
        <not>
            <indexExists indexName="idx_order_item_product"/>
        </not>
    </preConditions>
    <createIndex tableName="order_item" indexName="idx_order_item_product">
        <column name="product_id"/>
    </createIndex>
</changeSet>

The onFail attribute controls what happens if the precondition is not met: HALT (stop everything), CONTINUE (skip this changeSet), MARK_RAN (record it as executed without running it), or WARN (log a warning and continue). Choosing the right onFail behavior is important — MARK_RAN is useful when a change may have been applied manually and you want Liquibase to catch up without re-applying, while HALT is appropriate for critical prerequisites.
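The database-type check mentioned earlier can be expressed with the built-in <dbms> precondition; the following is a sketch with an illustrative table and column:

```xml
<changeSet id="pg-only-jsonb-column" author="eve">
    <preConditions onFail="MARK_RAN">
        <!-- Only run on PostgreSQL; on other databases,
             record the changeSet as ran and move on -->
        <dbms type="postgresql"/>
    </preConditions>
    <sql>ALTER TABLE event ADD COLUMN payload JSONB</sql>
    <rollback>
        <sql>ALTER TABLE event DROP COLUMN payload</sql>
    </rollback>
</changeSet>
```

Here MARK_RAN is the right choice: on a non-PostgreSQL database the changeSet is irrelevant, so it should be recorded as handled rather than halting the whole migration.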

Which onFail value records a changeSet as executed in DATABASECHANGELOG without actually running it?
Where can a precondition block be placed in a Liquibase changeLog?
10. How do you integrate Liquibase with Spring Boot?

Spring Boot has first-class auto-configuration for Liquibase. When you add the liquibase-core dependency to a Spring Boot project, the auto-configuration detects it on the classpath and runs migrations automatically at application startup, before the application context finishes initialising. This means your schema is always in sync before your application starts serving requests.

Maven dependency:

<dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
</dependency>

By default, Spring Boot looks for the changeLog at classpath:db/changelog/db.changelog-master.yaml. You can override this and other settings in application.properties:

spring.liquibase.change-log=classpath:db/changelog/db.changelog-master.xml
spring.liquibase.enabled=true
spring.liquibase.contexts=dev
spring.liquibase.default-schema=myapp
# drop-first wipes the schema before migrating - NEVER true in production
spring.liquibase.drop-first=false
# dedicated migration user with DDL rights (the app's datasource user keeps DML only)
spring.liquibase.user=migration_user
spring.liquibase.password=secret

A good practice is running Liquibase with a separate database user (spring.liquibase.user) that has DDL permissions, while the main application datasource user has only DML rights. This limits the blast radius if the application is compromised — the app can read/write rows but cannot drop tables. The drop-first=true option drops the entire schema before running migrations and should never be enabled in production; it is occasionally useful in isolated test environments.

Where does Spring Boot look for the Liquibase changeLog file by default?
What is the purpose of setting spring.liquibase.user separately from the main datasource user?
11. What are Liquibase change types and can you name common ones?

A change type is the specific DDL or DML operation that a changeSet performs. Liquibase provides over 40 built-in change types that are database-agnostic — you write the intent (add a column, create an index) and Liquibase generates the correct SQL for the target database. This is one of Liquibase's biggest advantages over raw SQL migrations: the same changeLog can deploy to PostgreSQL, Oracle, MySQL, and SQL Server without modification.

Common structural change types:

  • createTable / dropTable — create or drop a table with columns and constraints.
  • addColumn / dropColumn — add or remove a column from an existing table.
  • renameColumn / renameTable — rename columns or tables without recreating them.
  • modifyDataType — change a column's data type.
  • addNotNullConstraint / dropNotNullConstraint — add or remove NOT NULL constraints.
  • addUniqueConstraint / dropUniqueConstraint — manage unique constraints.
  • addForeignKeyConstraint / dropForeignKeyConstraint — manage foreign key relationships.
  • createIndex / dropIndex — manage indexes.
  • createSequence / dropSequence — manage sequences (for Oracle, PostgreSQL).
  • createView / dropView — manage views.

Data change types:

  • insert — insert rows into a table.
  • update — update existing rows.
  • delete — delete rows matching a condition.
  • loadData / loadUpdateData — bulk load from a CSV file.

There is also sql and sqlFile for arbitrary SQL when no built-in type covers your needs, though using them gives up the database-agnostic benefit.
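The loadData change type mentioned above can be sketched like this (the table, columns, and CSV path are illustrative):

```xml
<!-- Bulk-load reference data from a CSV file on the classpath -->
<changeSet id="load-country-codes" author="alice">
    <loadData tableName="country" file="db/data/countries.csv">
        <column name="code" type="STRING"/>
        <column name="name" type="STRING"/>
    </loadData>
</changeSet>
```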

What is the main advantage of using built-in Liquibase change types (like addColumn) over raw <sql> tags?
Which Liquibase change type allows bulk loading rows from a CSV file?
12. What is the Liquibase diff command and when is it useful?

The diff command compares two database connections (or a database against a snapshot) and reports differences in schema objects — tables, columns, indexes, foreign keys, views, sequences, and stored procedures. It is one of Liquibase's most powerful tools for identifying schema drift and for bootstrapping a changeLog from an existing database.

Common uses:

  • Detecting drift — Compare production against staging to find schema objects that exist in one but not the other, often caused by hotfixes applied directly to production without updating the changeLog.
  • Bootstrapping — You have an existing database with no changeLog. Run generateChangeLog (which uses the same underlying diff logic) to generate a starting changeLog from the current schema, then begin tracking future changes from that baseline.
  • Pre-deployment verification — Before deploying to production, diff the current production schema against a snapshot of what Liquibase expects after update to catch surprises.

Basic command syntax:

liquibase diff \
  --url=jdbc:postgresql://prod-db:5432/myapp \
  --username=admin --password=secret \
  --reference-url=jdbc:postgresql://staging-db:5432/myapp \
  --reference-username=admin --reference-password=secret

The related diffChangeLog command goes one step further — it not only shows the differences but generates the changeSets needed to bring the target database in line with the reference. This is useful for capturing changes that a developer made directly in the database and needs to codify as a formal Liquibase migration.

What does the Liquibase diffChangeLog command produce?
Which Liquibase command generates a starting changeLog from an existing database with no prior changeLog?
13. What are Liquibase tags and how do you use rollback to a tag?

A tag in Liquibase is a named marker you apply to the current state of DATABASECHANGELOG — a named checkpoint that records exactly which changeSets have been applied at the time of tagging. Tags are used as rollback targets: instead of counting changeSets or specifying a date, you roll back to a named release like v2.3.0.

You create a tag in two ways:

Via the CLI before a deployment:

liquibase tag v2.3.0

Inline in the changeLog using the tagDatabase change type:

<changeSet id="tag-v2.3.0" author="release-bot">
    <tagDatabase tag="v2.3.0"/>
</changeSet>

Including the tag as a changeSet is the preferred approach in automated pipelines because it ensures the tag is always created at exactly the right point in the changeLog regardless of when the deployment runs. The CLI approach requires the operator to run the tag command before the update, which is easy to forget.

To roll back to a tag:

liquibase rollback v2.3.0

This reverses all changeSets applied after the tag point, in reverse order. Before executing, you can preview what would happen:

liquibase rollbackSQL v2.3.0

A good practice for CI/CD pipelines is tagging the database with the release version at the start of every deployment. If the deployment fails partway through, rolling back to the previous tag restores the exact schema that was stable before the release.

Why is the inline <tagDatabase> changeSet approach preferred over the CLI liquibase tag command?
What is the order in which changeSets are reversed when rolling back to a tag?
14. What is runOnChange and runAlways in a Liquibase changeSet?

By default, Liquibase runs a changeSet exactly once — it executes it, records it in DATABASECHANGELOG, and never runs it again on subsequent Liquibase runs. Two attributes override this behavior for specific use cases where idempotent re-execution is desired.

runOnChange="true" — Re-executes the changeSet whenever its checksum changes (i.e., whenever the file content changes). Liquibase detects the change by comparing the current file's checksum against the stored MD5SUM in DATABASECHANGELOG. If they differ, it runs the changeSet again and updates the stored checksum. This is useful for objects that are completely replaced rather than incrementally modified — stored procedures, views, functions, and triggers fall into this category.

<changeSet id="sp-calculate-discount" author="frank" runOnChange="true">
    <createProcedure>
        CREATE OR REPLACE PROCEDURE calculate_discount(...)
        ...
    </createProcedure>
</changeSet>

runAlways="true" — Re-executes the changeSet on every single Liquibase run, regardless of whether the content changed. This is appropriate for operations that must be executed every deployment — refreshing statistics, clearing caches, or running a post-deployment validation query. Use sparingly because it adds overhead to every run.
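A minimal sketch of a runAlways changeSet (the statistics refresh shown is PostgreSQL-specific and purely illustrative):

```xml
<!-- Runs on every Liquibase execution, whether or not the content changed -->
<changeSet id="refresh-stats" author="frank" runAlways="true">
    <sql>ANALYZE</sql>
</changeSet>
```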

The difference: runOnChange triggers on content changes only; runAlways triggers unconditionally. For stored procedures, runOnChange is almost always the better choice because it avoids unnecessary re-execution when nothing changed.

Which attribute re-executes a changeSet only when its file content has changed?
For which type of database object is runOnChange most appropriate?
15. What are the markNextChangeSetRan and markNextChangeSetRanSQL commands used for?

The markNextChangeSetRan command records the next unrun changeSet in DATABASECHANGELOG as executed — without actually running it. This is a rescue command for situations where a change was already applied to a database outside of Liquibase (manually or through some other tool), and you need Liquibase to catch up its tracking without re-executing the DDL that was already performed.

A concrete scenario: a DBA ran an emergency hotfix directly on production — for example, ALTER TABLE payment ADD COLUMN gateway_ref VARCHAR(100); — to address a critical incident. Later, the developer adds the same change as a Liquibase changeSet. When Liquibase runs in production, it would try to add the same column again and fail with a column-already-exists error. Using markNextChangeSetRan tells Liquibase: "I know this change is already done; just record it."

# Mark the next changeSet as ran without executing it
liquibase markNextChangeSetRan

The companion command markNextChangeSetRanSQL shows the SQL that would be inserted into DATABASECHANGELOG without executing it, useful for review and audit purposes.

A related but broader command is changelogSync, which marks ALL pending changeSets as run in one go. This is used when you bootstrap Liquibase onto an existing fully-provisioned database — you generate a changeLog from the current schema, then run changelogSync to register all those changeSets as already applied without actually executing them.

When should you use markNextChangeSetRan?
What does changelogSync do differently from markNextChangeSetRan?
16. How do you manage Liquibase changeLog files in a multi-module Maven project?

In a multi-module Maven project (common in microservices or modular monolith architectures), each module typically manages its own schema. The recommended approach is to have each module own a separate changeLog tree under its src/main/resources, and then compose them from a top-level root changeLog if the modules share a single deployment unit — or treat each module's changeLog independently if they each deploy their own schema.

A typical structure for a shared-schema approach:

src/main/resources/
  db/
    changelog/
      db.changelog-master.xml        <-- root changeLog
      releases/
        v1.0/
          01-create-user-table.xml
          02-create-order-table.xml
        v1.1/
          03-add-status-column.xml
          04-add-index-on-email.xml
        v2.0/
          05-create-payment-table.xml

The root changeLog uses <includeAll> or ordered <include> tags:

<databaseChangeLog ...>
    <includeAll path="db/changelog/releases/v1.0/"/>
    <includeAll path="db/changelog/releases/v1.1/"/>
    <includeAll path="db/changelog/releases/v2.0/"/>
</databaseChangeLog>

For microservices where each service has its own database, each service's changeLog is entirely independent and deployed only with that service. There is no shared root changeLog. In this case, each service configures its own spring.liquibase.change-log pointing to its own changeLog file.

One important consideration with includeAll: Liquibase processes files alphabetically within the included directory. Use a numeric prefix on file names to ensure deterministic ordering. Do not rely on filesystem order — it can vary across operating systems and CI environments.

When using <includeAll> in a Liquibase root changeLog, what determines the processing order of the included files?
In a microservices architecture, should each service share a root Liquibase changeLog?
17. What is the Liquibase updateSQL command and when is it used?

The updateSQL command generates the SQL that Liquibase would execute during an update run — without actually applying any changes to the database. It writes the complete SQL to standard output or a file, including the INSERT statements that would update DATABASECHANGELOG and the actual DDL/DML from your changeSets.

liquibase updateSQL > pending-migrations.sql

# Or via the Liquibase Maven plugin
mvn liquibase:updateSQL

Common scenarios where updateSQL is invaluable:

  • Pre-deployment review — In regulated environments or strict change-management processes, the generated SQL is submitted to a DBA or change review board for approval before the deployment runs. This is common in banks, healthcare, and government projects where direct application access to production databases is prohibited.
  • DBAs execute migrations — Some organisations require that only DBAs run scripts against production. The generated SQL file is handed to the DBA, who reviews and executes it manually.
  • Debugging — Developers use updateSQL to verify that Liquibase is generating the expected SQL before running it against a shared development database.
  • CI pipeline artifact — Generate the SQL as a pipeline artifact attached to each release so there is an auditable record of exactly what was applied to each environment.

The output includes comments that identify each changeSet by ID, author, and file, making it straightforward to trace which part of your changeLog produced which SQL statement.
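As a rough illustration (the exact header comments vary by Liquibase version), the generated file interleaves a trace comment for each changeSet with its SQL, followed by the corresponding DATABASECHANGELOG bookkeeping insert:

```sql
-- Changeset db/changelog/db.changelog-master.xml::20240115-add-phone::bob
ALTER TABLE customer ADD phone VARCHAR(20);

-- ...followed by an INSERT INTO DATABASECHANGELOG row recording
-- the id, author, filename, and checksum for this changeSet.
```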

What does the Liquibase updateSQL command do?
Why is updateSQL particularly valuable in regulated industries like banking or healthcare?
18. What is a Liquibase snapshot and what commands use it?

A snapshot in Liquibase is a serialised representation of a database's schema state captured at a specific point in time. It records tables, columns, indexes, foreign keys, views, sequences, and other schema objects as a JSON or YAML file. Unlike a database backup, it captures structure — not data.

Creating a snapshot:

liquibase snapshot --snapshot-format=json --output-file=prod-snapshot-2024-01-15.json \
  --url=jdbc:postgresql://prod-db:5432/myapp \
  --username=admin --password=secret

Snapshots are used in several commands:

  • diff with a snapshot — Compare the current live database against a saved snapshot to see what changed between the snapshot timestamp and now, without needing two live databases available simultaneously. This is useful for scheduled drift detection in CI.
  • diffChangeLog with a snapshot — Generate changeSets that represent the delta between a saved snapshot and the current database, useful for creating a changeLog that captures schema evolution over time.
  • Baseline documentation — Archive snapshots as part of each release process to have a historical record of what the schema looked like at each release boundary.

Snapshots can also be used as the reference in a diff, pointing --reference-url to a local snapshot file rather than a live database:

liquibase diff \
  --url=jdbc:postgresql://staging-db:5432/myapp \
  --reference-url=offline:postgresql?snapshot=prod-snapshot-2024-01-15.json

This offline diff capability is particularly useful in environments where direct connections to production databases from CI are not permitted for security reasons.

What does a Liquibase snapshot capture?
How do you diff against a saved snapshot instead of a live database?
19. What is the failOnError attribute in a changeSet and when should you set it to false?

The failOnError attribute on a changeSet controls what Liquibase does when an error occurs during that changeSet's execution. By default, failOnError="true" — any error causes Liquibase to stop, roll back the transaction (if within one), and report a failure. Setting failOnError="false" tells Liquibase to log the error as a warning and continue with the next changeSet.

This sounds useful but it is almost always the wrong choice in production. Silent failures mean your schema is in an unknown state — some changes succeeded, some didn't — while Liquibase reports overall success. This can lead to application startup errors, silent data corruption, or security vulnerabilities (like a permission grant that silently failed).

Legitimate uses for failOnError="false":

  • Idempotent data cleanup — A changeSet that deletes orphan or legacy rows where the data differs per environment and a failure (for example, because a legacy table was never created in some environments) is harmless to skip.
  • Database object creation when existence is uncertain — A changeSet that tries to create an index that might already exist in some environments (though preconditions are the cleaner solution here).
  • Known cross-database compatibility issues — Running optional changeSets that work on PostgreSQL but produce benign errors on MySQL, where the feature is optional.
<changeSet id="cleanup-orphan-sessions" author="grace" failOnError="false">
    <sql>DELETE FROM session WHERE user_id NOT IN (SELECT id FROM "user")</sql>
</changeSet>

Even in these cases, add a comment explaining why failOnError=false is set. Future developers will thank you.
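For the index case above, the precondition-based alternative might look like this sketch (table and index names are illustrative):

```xml
<changeSet id="add-email-index" author="grace">
    <preConditions onFail="MARK_RAN">
        <not><indexExists indexName="idx_user_email"/></not>
    </preConditions>
    <createIndex tableName="user" indexName="idx_user_email">
        <column name="email"/>
    </createIndex>
</changeSet>
```

With onFail="MARK_RAN", a skipped changeSet is recorded in DATABASECHANGELOG as executed, so the schema state stays known — unlike a silently swallowed error.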

Why is setting failOnError="false" risky in production migrations?
What is the cleaner alternative to failOnError="false" when a changeSet should be skipped if an object already exists?
20. How does Liquibase handle transactions during migration?

By default, Liquibase wraps each changeSet in a database transaction. If any change within a changeSet fails, the entire changeSet is rolled back and Liquibase stops. This transactional wrapper ensures that a changeSet either succeeds completely or leaves the database unchanged — partial application of a changeSet is prevented.

However, not all SQL statements can run inside a transaction in all databases. For example, CREATE INDEX CONCURRENTLY in PostgreSQL, DDL statements in Oracle (which auto-commit), and some MySQL table operations are not transactional. For these cases, Liquibase provides the runInTransaction attribute on a changeSet:

<changeSet id="create-index-concurrently" author="henry" runInTransaction="false">
    <sql>CREATE INDEX CONCURRENTLY idx_order_created ON "order"(created_at)</sql>
    <rollback>
        <sql>DROP INDEX IF EXISTS idx_order_created</sql>
    </rollback>
</changeSet>

Setting runInTransaction="false" means that if the changeSet fails mid-execution, the already-executed statements within it are not automatically rolled back — you may be left with a partially applied changeSet. This is an acceptable trade-off for operations that genuinely cannot run in a transaction (like concurrent index creation, which cannot hold a table lock for the duration of a transaction in PostgreSQL).

A changeSet with runInTransaction="false" should contain the minimum number of operations necessary, and the rollback block becomes critical since Liquibase will not automatically undo partial work if the changeSet fails.

What does Liquibase do by default when a changeSet fails mid-execution?
In which scenario is runInTransaction="false" necessary on a PostgreSQL database?
21. What is the Liquibase Hub and what does it provide?

Liquibase Hub was a cloud-based observability service that connected to your local or CI Liquibase runs and provided a centralised dashboard showing deployment history, changeSet execution timelines, and operational statistics across all your environments and databases. The hosted Hub service has since been retired, with its capabilities absorbed into the commercial Liquibase Pro offering, but it still comes up in interviews.

At its core, Hub provided these capabilities:

  • Deployment tracking — Every Liquibase update run sends metadata (which changeSets ran, on which database, at what time, from which pipeline) to a centralised hub, giving a unified view across all environments.
  • Audit trail — Who deployed what, when, and to which database — all queryable from the Hub dashboard, supplementing the per-database DATABASECHANGELOG with a cross-environment view.
  • Operation reports — Identify slow changeSets, failed deployments, and environments that are behind the expected schema version.
  • Team collaboration — Multiple team members working on different projects can all send their deployment events to the same Hub organisation, providing visibility without everyone needing access to every database.

In Liquibase 4.x, Hub integration was configured via the hubApiKey property. Liquibase retired the hosted Hub service in 2023; its deployment-tracking, reporting, and governance features have been folded into the commercial Liquibase Pro tier, and newer Liquibase releases have removed the Hub integration from the open-source Community edition.

For teams already using a centralised CI/CD platform (Jenkins, GitHub Actions with deployment logs), this kind of tooling adds database-specific observability that general CI dashboards lack.

What is the primary benefit Liquibase Hub provides beyond what DATABASECHANGELOG already offers?
22. How do you use Liquibase with multiple databases or schemas in a single application?

When an application uses multiple databases or schemas — for example, a multi-tenant SaaS app with one schema per tenant, or an application split across a billing schema and a product schema — Liquibase can manage each independently. The key is that each schema/database gets its own DATABASECHANGELOG table and its own Liquibase execution context.

In Spring Boot with multiple data sources, you configure multiple Liquibase beans manually since Spring Boot's auto-configuration only handles one:

@Configuration
public class LiquibaseConfig {

    @Bean
    @Primary
    public SpringLiquibase primaryLiquibase(
            @Qualifier("primaryDataSource") DataSource primaryDataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(primaryDataSource);
        liquibase.setChangeLog("classpath:db/changelog/primary/db.changelog-master.xml");
        return liquibase;
    }

    @Bean
    public SpringLiquibase billingLiquibase(
            @Qualifier("billingDataSource") DataSource billingDataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(billingDataSource);
        liquibase.setChangeLog("classpath:db/changelog/billing/db.changelog-master.xml");
        return liquibase;
    }
}

For per-tenant schema provisioning in multi-tenant applications, a common pattern is running Liquibase programmatically when a new tenant is created, using the tenant's schema name:

public void provisionTenantSchema(String schemaName) throws LiquibaseException {
    SpringLiquibase liquibase = new SpringLiquibase();
    liquibase.setDataSource(sharedDataSource);
    liquibase.setChangeLog("classpath:db/changelog/tenant/db.changelog-master.xml");
    liquibase.setDefaultSchema(schemaName);
    liquibase.afterPropertiesSet();  // triggers migration immediately
}

Each schema gets its own DATABASECHANGELOG in that schema, keeping migrations isolated even when sharing a single database server.
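On each application deploy, the same method can be looped over every existing tenant so all schemas stay in step. This is a sketch; the tenantRepository lookup is hypothetical:

```java
// Hypothetical: iterate all known tenant schemas and migrate each one.
// Re-running is safe because already-applied changeSets are skipped.
public void migrateAllTenants() throws LiquibaseException {
    for (String schemaName : tenantRepository.findAllSchemaNames()) {
        provisionTenantSchema(schemaName);
    }
}
```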

When configuring two separate Liquibase beans in Spring Boot for two different data sources, where does each database's migration history get stored?
Which SpringLiquibase method triggers migration execution programmatically outside the normal Spring lifecycle?
23. What is the Liquibase Pro checksum validity and how do clearCheckSums and validChecksums work?

Liquibase computes an MD5 checksum for each changeSet when it first runs and stores it in DATABASECHANGELOG. On every subsequent run, Liquibase recomputes the checksum from the current file and compares it against the stored value. If they differ, Liquibase throws a ValidationFailedException and refuses to proceed.

This is a deliberate safety mechanism — if someone edits an applied changeSet, you know immediately. However, there are legitimate cases where you need to update a checksum without re-running the migration:

clearCheckSums — Clears all stored checksums in DATABASECHANGELOG, setting them to NULL. On the next run, Liquibase recomputes checksums from the current files and stores them again. This is a brute-force fix but it means Liquibase loses its ability to detect modifications until the checksums are re-stored on the next run.

liquibase clearCheckSums

validCheckSum — A more targeted approach. You declare additional valid checksums for a specific changeSet in the changeLog itself, telling Liquibase to accept any of those checksum values as legitimate. This is used when a changeSet's content changed due to whitespace normalisation, encoding changes, or Liquibase version upgrades that changed how checksums are computed:

<changeSet id="1" author="alice">
    <validCheckSum>8:abc123...</validCheckSum>  <!-- previous computed checksum -->
    <createTable tableName="customer">
        ...
    </createTable>
</changeSet>

The validCheckSum child element accepts the old checksum string and tells Liquibase that this old hash is also acceptable. This is less disruptive than clearCheckSums because it only affects the specific changeSet, not the entire migration history.

What is the targeted alternative to clearCheckSums when only one changeSet has a checksum issue?
When would a Liquibase version upgrade legitimately cause checksum mismatches on unchanged changeSets?
24. What are Liquibase custom change types and how do you create one?

When Liquibase's built-in change types do not cover a specific requirement, you can create a custom change by implementing the CustomChange (or CustomSqlChange for SQL-generating changes) interface. This lets you write Java logic that runs as part of a migration — useful for complex data transformations, calling external APIs during migration, or any operation that requires conditional logic beyond what Liquibase XML can express.

Implementing a custom change:

public class MigrateEncryptedPasswordsChange implements CustomChange {

    private String tableName;

    // Liquibase sets this via reflection using <param name="tableName" value="user"/>
    public void setTableName(String tableName) {
        this.tableName = tableName;
    }

    @Override
    public void execute(Database database) throws CustomChangeException {
        try {
            Connection conn = ((JdbcConnection) database.getConnection()).getUnderlyingConnection();
            // Custom Java logic: read rows, re-encrypt passwords, write back.
            // Do not close this connection — Liquibase owns it and continues
            // using it for the rest of the migration run.
            // ...
        } catch (Exception e) {
            throw new CustomChangeException("Password migration failed", e);
        }
    }

    @Override
    public String getConfirmationMessage() {
        return "Encrypted passwords migrated for table " + tableName;
    }

    @Override
    public void setUp() throws SetupException {}

    @Override
    public void setFileOpener(ResourceAccessor resourceAccessor) {}

    @Override
    public ValidationErrors validate(Database database) { return new ValidationErrors(); }
}

In the changeLog:

<changeSet id="migrate-passwords" author="ivan">
    <customChange class="com.example.MigrateEncryptedPasswordsChange">
        <param name="tableName" value="user"/>
    </customChange>
</changeSet>

To support rollback, a custom change can additionally implement CustomTaskRollback (for task-style changes) or CustomSqlRollback (for SQL-generating changes). For SQL-generating changes, implementing CustomSqlChange is preferable because Liquibase can then render the SQL for the updateSQL preview command.
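A minimal CustomSqlChange sketch — the class and SQL are illustrative, and the remaining CustomChange boilerplate methods are the same as in the example above:

```java
import liquibase.change.custom.CustomSqlChange;
import liquibase.database.Database;
import liquibase.exception.SetupException;
import liquibase.exception.ValidationErrors;
import liquibase.resource.ResourceAccessor;
import liquibase.statement.SqlStatement;
import liquibase.statement.core.RawSqlStatement;

public class ArchiveOldSessionsChange implements CustomSqlChange {

    @Override
    public SqlStatement[] generateStatements(Database database) {
        // Because the change is expressed as SqlStatement objects rather than
        // executed directly, Liquibase can render it in updateSQL previews.
        return new SqlStatement[] {
            new RawSqlStatement(
                "DELETE FROM session WHERE last_seen < NOW() - INTERVAL '90 days'")
        };
    }

    @Override
    public String getConfirmationMessage() { return "Old sessions archived"; }

    @Override
    public void setUp() throws SetupException {}

    @Override
    public void setFileOpener(ResourceAccessor resourceAccessor) {}

    @Override
    public ValidationErrors validate(Database database) { return new ValidationErrors(); }
}
```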

Which interface should you implement to create a Liquibase custom change that also supports the updateSQL preview command?
How does Liquibase pass configuration parameters to a CustomChange class?
25. How do you handle large table migrations in Liquibase without causing downtime?

Large table migrations — adding a non-nullable column to a 500 million row table, back-filling data, adding an index on a frequently queried column — are among the most challenging database operations to perform without application downtime. The raw DDL that Liquibase would generate often acquires table locks that block reads and writes for the duration.

The key principle is the expand/contract pattern (also called multi-phase migration or online schema change):

Phase 1 — Expand (backward compatible): Add the new column as nullable with no default, or create the new index concurrently. At this point, both old and new code can coexist with the schema.

<changeSet id="phase1-add-nullable-status" author="jane">
    <addColumn tableName="order">
        <column name="status_v2" type="VARCHAR(30)"/>  <!-- nullable initially -->
    </addColumn>
</changeSet>

Phase 2 — Backfill in batches (via application code or chunked SQL): Update existing rows in batches rather than with a single UPDATE, which would hold locks on every row for the duration. PostgreSQL has no UPDATE ... LIMIT, so each batch selects a bounded set of keys:

<changeSet id="phase2-backfill-status" author="jane" runInTransaction="false">
    <sql>
        DO $$
        DECLARE batch_size INT := 10000;
        BEGIN
            LOOP
                UPDATE "order" SET status_v2 = status
                WHERE id IN (
                    SELECT id FROM "order" WHERE status_v2 IS NULL
                    LIMIT batch_size
                );
                EXIT WHEN NOT FOUND;
                COMMIT; -- release row locks between batches (PostgreSQL 11+)
                PERFORM pg_sleep(0.01); -- brief pause to reduce contention
            END LOOP;
        END $$;
    </sql>
</changeSet>

Note that runInTransaction="false" is what makes the COMMIT inside the DO block legal: the block then runs in autocommit mode instead of inside Liquibase's wrapping transaction.

Phase 3 — Contract (after all code using old column is retired): Add NOT NULL constraint, drop the old column, rename if needed. By this point no active code references the old column.

For index creation on large tables in PostgreSQL, always use CREATE INDEX CONCURRENTLY in a changeSet with runInTransaction="false" to build the index without a full table lock.

Why should you avoid a single UPDATE statement to back-fill a new column on a 500 million row table in one Liquibase changeSet?
What runInTransaction setting is required when using CREATE INDEX CONCURRENTLY in PostgreSQL?
26. What is the Liquibase Maven plugin and how do you run migrations with it?

The Liquibase Maven plugin integrates Liquibase commands directly into the Maven build lifecycle, allowing you to run database migrations as part of your build without needing a separate CLI installation. It is the standard approach for Java projects that use Maven as their build tool and want to apply migrations outside of a running Spring Boot application — for example, in a CI pipeline or as a pre-integration-test step.

Add the plugin to pom.xml:

<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <version>4.25.0</version>
    <configuration>
        <changeLogFile>src/main/resources/db/changelog/db.changelog-master.xml</changeLogFile>
        <url>jdbc:postgresql://localhost:5432/myapp</url>
        <username>${db.username}</username>
        <password>${db.password}</password>
        <driver>org.postgresql.Driver</driver>
    </configuration>
</plugin>

Common Maven goals:

  • mvn liquibase:update — Apply pending changeSets.
  • mvn liquibase:updateSQL — Generate pending SQL without applying.
  • mvn liquibase:rollback -Dliquibase.rollbackTag=v1.5.0 — Rollback to a tag.
  • mvn liquibase:diff — Compare two database schemas.
  • mvn liquibase:clearCheckSums — Clear all checksums.
  • mvn liquibase:status — Show pending changeSets without running them.

For integration tests, bind the update goal to the pre-integration-test phase so the schema is prepared before tests run:

<executions>
    <execution>
        <phase>pre-integration-test</phase>
        <goals><goal>update</goal></goals>
    </execution>
</executions>
Which Maven goal shows pending changeSets without applying them?
In what Maven lifecycle phase should you bind liquibase:update for integration tests?
27. What is the Liquibase status command and how does it differ from validate?

The status command queries DATABASECHANGELOG and compares it against the changeLog file to report which changeSets are pending — not yet applied to the database. It shows each pending changeSet's ID, author, filename, and whether a rollback is defined.

liquibase status --verbose

Example output:

3 changesets have not been applied to jdbc:postgresql://localhost:5432/myapp
     db/changelog/releases/v2.0/05-add-audit-columns.xml::add-audit-columns::carol
     db/changelog/releases/v2.0/06-add-index.xml::add-email-index::carol
     db/changelog/releases/v2.0/07-seed-permissions.xml::seed-permissions::release-bot

The validate command, by contrast, checks the changeLog file itself for structural correctness — XML validity, schema compliance, duplicate changeSet IDs, missing referenced files, and checksum mismatches against the database. It does not tell you what is pending; it tells you whether the changeLog is safe to run.

In a CI/CD pipeline, a good practice is to run both in sequence:

  1. liquibase validate — Fail fast if the changeLog has errors, before attempting any database access.
  2. liquibase status — Confirm what will be applied and fail if no changes are pending when a deployment is expected to carry changes.
  3. liquibase update — Apply the changes.

Some teams parse the status output to gate deployments — if status shows 0 pending changes but the pipeline expects schema changes, something is wrong (perhaps the changeLog file was not updated before the deploy).
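A sketch of such a gate in a pipeline script. The grep pattern assumes the file::id::author lines shown in the example output above; status output formatting varies between Liquibase versions, so adjust accordingly:

```shell
#!/bin/sh
# Fail the deploy if no changeSets are pending when schema changes were expected.
PENDING=$(liquibase status --verbose | grep -c '::' || true)
if [ "$PENDING" -eq 0 ]; then
    echo "Expected pending changeSets but found none - was the changeLog updated?"
    exit 1
fi
```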

What does the Liquibase validate command check?
In a CI/CD pipeline, which command should run first — validate or status?
28. What is Liquibase property substitution and how does it work?

Liquibase property substitution allows you to use placeholders in your changeLog files that are replaced with values at runtime. Placeholders follow the syntax ${propertyName}. This makes your changeLogs more portable — table prefixes, schema names, default values, and environment-specific settings can be externalised rather than hardcoded.

Properties can be defined in three ways, in decreasing priority order:

  1. Command-line parameter (highest priority): --changelog-variables=tablePrefix=app_
  2. liquibase.properties file: tablePrefix=app_
  3. Inside the changeLog itself using <property> tags (lowest priority)

Declaring a property in the changeLog:

<databaseChangeLog ...>
    <property name="tablePrefix" value="prod_" dbms="postgresql"/>
    <property name="tablePrefix" value="dev_" dbms="h2"/>

    <changeSet id="1" author="kate">
        <createTable tableName="${tablePrefix}customer">
            <column name="id" type="BIGINT"/>
        </createTable>
    </changeSet>
</databaseChangeLog>

The dbms attribute on <property> makes the value database-type-specific — a different prefix for PostgreSQL vs H2 in tests. This is a handy pattern for managing schema name or type differences without duplicating entire changeLogs.

In Spring Boot, you can pass Liquibase-specific properties through:

spring.liquibase.parameters.tablePrefix=app_

Property substitution happens at changeLog parsing time, before any checksum computation. The checksum is computed on the final resolved content, not the template with placeholders.
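Conceptually, the substitution step is plain string replacement performed while parsing, before any checksum is computed. The following is a toy sketch of that idea, not Liquibase's actual implementation:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderDemo {

    // Replace ${name} tokens with values from the map, mimicking
    // changeLog parameter substitution. Unknown names are left untouched.
    static String substitute(String template, Map<String, String> props) {
        Matcher m = Pattern.compile("\\$\\{([^}]+)}").matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            m.appendReplacement(out, Matcher.quoteReplacement(
                    props.getOrDefault(m.group(1), m.group(0))));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String resolved = substitute(
                "CREATE TABLE ${tablePrefix}customer (id BIGINT)",
                Map.of("tablePrefix", "app_"));
        System.out.println(resolved); // CREATE TABLE app_customer (id BIGINT)
    }
}
```

The checksum would then be computed on the resolved string, which is why changing a property value in a later run triggers a checksum mismatch.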

What syntax does Liquibase use for property placeholders in changeLog files?
Which property source has the highest priority and overrides all others?
29. How do you write Liquibase changeLogs in YAML format?

YAML is a supported Liquibase changeLog format that many teams prefer for its readability compared to verbose XML. The structure maps directly to the XML model — databaseChangeLog is the root, containing a list of changeSet entries, each with an id, author, and a list of changes.

databaseChangeLog:
  - changeSet:
      id: 1
      author: lena
      changes:
        - createTable:
            tableName: product
            columns:
              - column:
                  name: id
                  type: BIGINT
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: name
                  type: VARCHAR(200)
                  constraints:
                    nullable: false
              - column:
                  name: price
                  type: DECIMAL(10, 2)

  - changeSet:
      id: 2
      author: lena
      context: prod
      changes:
        - addColumn:
            tableName: product
            columns:
              - column:
                  name: sku
                  type: VARCHAR(50)
      rollback:
        - dropColumn:
            tableName: product
            columnName: sku

YAML changeLogs support all the same features as XML: preconditions, contexts, labels, rollback, include/includeAll. The root changeLog file uses the same include mechanism:

databaseChangeLog:
  - includeAll:
      path: db/changelog/releases/v1.0/
  - includeAll:
      path: db/changelog/releases/v1.1/

One common pitfall with YAML changeLogs: YAML is whitespace-sensitive, and incorrect indentation causes silent parsing errors or wrong structure interpretation. Use a YAML linter or IDE plugin that validates structure. Also note that the default Spring Boot changeLog file is YAML format (db.changelog-master.yaml), so new Spring Boot projects automatically use YAML unless configured otherwise.

What is the default Liquibase changeLog format for new Spring Boot projects?
What is a common pitfall when writing Liquibase changeLogs in YAML format?
30. What is the Liquibase formatted SQL changeLog format?

Formatted SQL changeLog is the option for teams that want to write pure SQL migrations while still using Liquibase's change tracking, rollback, and contextual features. Instead of XML or YAML wrappers, changeSet metadata is embedded as SQL comments in a specific format that Liquibase parses.

A formatted SQL changeLog file:

--liquibase formatted sql

--changeset mike:20240201-create-invoice
CREATE TABLE invoice (
    id          BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL,
    total       DECIMAL(12, 2) NOT NULL,
    created_at  TIMESTAMP NOT NULL DEFAULT NOW()
);
--rollback DROP TABLE invoice;

--changeset mike:20240202-add-invoice-status context:prod,staging
ALTER TABLE invoice ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'DRAFT';
--rollback ALTER TABLE invoice DROP COLUMN status;

--changeset mike:20240203-seed-test-data context:dev runOnChange:true
INSERT INTO invoice(id, customer_id, total, created_at)
VALUES (1, 100, 299.99, NOW());
--rollback DELETE FROM invoice WHERE id = 1;

The metadata in the --changeset comment supports the same attributes as XML: context, labels, runOnChange, runAlways, failOnError, runInTransaction. Rollback SQL goes in a --rollback comment immediately after the forward change.

The formatted SQL approach has a significant trade-off: it is not database-agnostic. Because you are writing raw SQL, you must write PostgreSQL SQL for PostgreSQL databases and Oracle SQL for Oracle. You lose Liquibase's cross-database abstraction. This is acceptable for projects locked to a single database but defeats one of Liquibase's core advantages for polyglot teams.

How is changeSet metadata specified in a formatted SQL Liquibase changeLog?
What is the main trade-off of using formatted SQL changeLogs compared to XML or YAML?
31. What is the Liquibase updateTestingRollback command?

The updateTestingRollback command is a quality assurance command that tests whether your rollback definitions are actually correct and complete. It performs three steps in sequence on the target database:

  1. Applies all pending changeSets (like a normal update).
  2. Immediately rolls back all the changeSets it just applied (in reverse order).
  3. Applies the same changeSets again.

The goal is to verify that your rollback blocks actually return the database to the pre-migration state, and that the migration can then be successfully re-applied after rolling back. If either the rollback step or the second apply fails, you know your rollback definitions are broken.

mvn liquibase:updateTestingRollback

This command is most valuable as a pre-production gate in CI pipelines. If updateTestingRollback passes against an integration test database, you have confidence that:

  • The migration applies successfully.
  • The rollback does what you expect — not a silent no-op or a partially-reversing operation.
  • The migration is idempotent enough to be re-applied after a rollback.

It is not appropriate for production runs (it applies, rolls back, and re-applies all pending changes — three times the operations). Use it in a dedicated test environment where temporary schema thrashing is acceptable. For production safety testing, use updateSQL and rollbackSQL to review the generated SQL manually instead.

What three operations does updateTestingRollback perform in sequence?
Why is updateTestingRollback not suitable for direct use against a production database?
32. How do you use Liquibase with Docker and Testcontainers in integration tests?

Testcontainers is a Java library that spins up real Docker containers during JUnit tests, providing actual database instances instead of in-memory databases. Combining Testcontainers with Liquibase gives you integration tests that run against a realistic database — same engine type, same version — with an auto-migrated schema, without any manual database setup.

Maven dependency:

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <version>1.19.3</version>
    <scope>test</scope>
</dependency>

JUnit 5 test with Spring Boot and Testcontainers:

@SpringBootTest
@Testcontainers
class OrderRepositoryIntegrationTest {

    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16")
        .withDatabaseName("testdb")
        .withUsername("test")
        .withPassword("test");

    @DynamicPropertySource
    static void postgresProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    // Spring Boot auto-runs Liquibase on startup using the dynamic datasource URL
    // By the time @Test methods run, all changeSets are applied

    @Autowired
    OrderRepository orderRepository;

    @Test
    void shouldSaveAndRetrieveOrder() {
        Order order = new Order("customer-1", BigDecimal.valueOf(99.99));
        orderRepository.save(order);
        assertThat(orderRepository.findById(order.getId())).isPresent();
    }
}

When Spring Boot starts with the dynamic Testcontainers URL, it auto-runs Liquibase migrations before the test context is ready. By the time @Test methods execute, the schema is fully migrated. The container is stopped and removed after the test class completes.

When in the Spring Boot application lifecycle does Liquibase run migrations in a Testcontainers integration test?
What annotation makes Spring Boot read the dynamic Testcontainers JDBC URL at startup?
33. What is the difference between addColumn with a default value and a nullable column in Liquibase for large tables?

Adding a column to a large table is a common migration, but the choice between nullable (no default) and non-nullable (with a default value) has dramatic performance differences depending on the database engine, and Liquibase's generated SQL directly reflects this.

Adding a nullable column with no default — This is the safest and fastest operation on nearly all databases. Because existing rows need no update (NULL is a valid default), most modern databases (PostgreSQL 11+, MySQL 8.0+) complete this as an instant metadata-only operation. No rows are touched.

<changeSet id="add-nullable-notes" author="nina">
    <addColumn tableName="order">
        <column name="notes" type="TEXT"/>  <!-- nullable, no default -->
    </addColumn>
</changeSet>

Adding a NOT NULL column with a constant default value — On PostgreSQL 11+ this is also instant (the default is stored as a catalog entry and applied lazily). On older PostgreSQL (before 11), MySQL (before 8.0), and most other databases, this rewrites every row in the table, acquiring a full table lock for the duration. On a 500 million row table, this is minutes of downtime.

<changeSet id="add-status-with-default" author="nina">
    <addColumn tableName="order">
        <column name="status" type="VARCHAR(20)" defaultValue="PENDING">
            <constraints nullable="false"/>
        </column>
    </addColumn>
</changeSet>

The safest pattern for large tables across all database versions is the expand/contract approach: add the column as nullable → backfill in batches using the application or a chunked UPDATE → add NOT NULL constraint once all rows are populated. This avoids any single operation that locks the full table.
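The final contract step of that pattern, once the backfill is complete, can be expressed with addNotNullConstraint. A sketch, with illustrative names:

```xml
<changeSet id="phase3-enforce-not-null" author="nina">
    <addNotNullConstraint tableName="order"
                          columnName="status"
                          columnDataType="VARCHAR(20)"/>
</changeSet>
```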

Why is adding a NOT NULL column with a default value risky on MySQL versions before 8.0?
On PostgreSQL 11+, what makes adding a NOT NULL column with a constant default value an instant operation?
34. What is liquibase.properties and what configuration belongs in it?

The liquibase.properties file is the default configuration file for the Liquibase CLI and Maven plugin. When Liquibase runs, it looks for this file in the current working directory (or on the classpath) and reads database connection settings, changeLog location, and default command options. You avoid having to repeat these values on every command-line invocation.

Typical content:

# Database connection
url=jdbc:postgresql://localhost:5432/myapp
username=liquibase_user
password=secret
driver=org.postgresql.Driver

# ChangeLog location
changeLogFile=src/main/resources/db/changelog/db.changelog-master.xml

# Default context (override with --contexts on CLI)
contexts=dev

# Logging
logLevel=INFO

# Liquibase Hub API key (legacy; the hosted Hub service has been retired)
hubApiKey=abc123...

Properties in this file map directly to command-line options. The command-line equivalent of changeLogFile=... is --changeLogFile=.... When both are present, the command-line option takes precedence.

Important security consideration: liquibase.properties should never be committed to version control when it contains real database credentials. Common approaches:

  • Commit a liquibase.properties.example with placeholder values and add the real file to .gitignore.
  • Use environment variable substitution: password=${LIQUIBASE_DB_PASSWORD} — Liquibase resolves ${VAR} from environment variables.
  • Pass sensitive values via CI/CD pipeline secrets injected as CLI arguments: mvn liquibase:update -Dliquibase.password=$DB_PASS.
When both liquibase.properties and a command-line flag specify the same option, which takes precedence?
How should database passwords be handled in liquibase.properties for a team project?