CUBRID Transaction & Recovery — Section Overview

This section is the part of the CUBRID code-analysis tree that implements the transactional contract a relational engine sells to its clients: ACID. Every doc in txn-recovery/ is a piece of that contract.

  • Atomicity is the promise that a transaction either commits in full or leaves no trace; the engine keeps it through undo records in the log and the recovery manager’s undo pass.
  • Durability is the promise that once COMMIT returns, the change survives a crash; the engine keeps it through write-ahead logging, the prior-list commit pipe, periodic checkpoints, and the double-write buffer (which lives in the Storage Engine section but is named here because durability cannot be discussed without it).
  • Isolation is the promise that concurrent transactions see a well-defined order of effects; the engine keeps it through MVCC snapshots for reads, the lock manager for write-write conflicts, and the vacuum subsystem to reclaim the dead versions that MVCC leaves behind.
  • Consistency — preserving every constraint the schema declares — is enforced above this layer (DDL, schema manager, integrity rules), but every constraint check that fires during DML emits log records through the same log manager and prior list, so the txn-recovery section is where the bookkeeping lives even if the rules are owned elsewhere.

Three docs extend the same machinery beyond local ACID. 2PC adds atomicity across multiple servers — the same TDES, the same log records, plus a coordinator/participant FSM and a prepared state that survives crash. Flashback turns the WAL into a queryable history: instead of replaying the log to recover, it replays the log to report what happened in a time window. Backup and restore packages the data volumes and a bracketed log range into a self-contained snapshot and uses the recovery manager’s redo pass to converge on any commit boundary inside the bracket — point-in-time recovery (PITR).

This overview is a router. It names which doc owns which piece of the contract, sketches how those pieces compose at run time, and points at the cross-section dependencies (the page buffer and the DWB live in Storage Engine; the boot path lives in Server Architecture; the locator lives in Storage Engine but is the caller of the log manager). The eleven detail docs hold the actual code-level analysis; this doc does not duplicate it.

The storage-engine section ahead of this one organises its modules as a strict bottom-up stack (volumes → buffer → records → keys). The transaction-and-recovery section is harder to pin down that way because its modules cluster around a state hub (the TDES) and a substrate (the WAL) rather than a vertical layering of storage abstractions. Even so, a structural diagram of who-talks-to-whom helps before the ACID-property carve-up below — it answers what this section’s perimeter is and how its eleven detail docs reach each other and the storage engine.

```mermaid
flowchart TB
  subgraph CALLERS["Callers above this section"]
    LOC["locator / DML entry<br/>(cubrid-locator.md)"]
    DDL["DDL / schema<br/>(DDL & Schema section)"]
    BOOT["boot / startup<br/>(cubrid-boot.md)"]
    XA["XA client / coordinator"]
    UTIL["backup / restore CLI<br/>(util_sa.c, util_cs.c)"]
  end

  subgraph HUB["Per-transaction state hub"]
    TX["cubrid-transaction.md<br/>(TDES, trantable, savepoints,<br/>isolation-level dispatch)"]
  end

  subgraph ISO["Isolation plane"]
    MVCC["cubrid-mvcc.md<br/>(MVCCIDs, snapshot,<br/>active-MVCCID table)"]
    LM["cubrid-lock-manager.md<br/>(lock table, intention modes,<br/>waits-for graph)"]
    VAC["cubrid-vacuum.md<br/>(WAL-driven dead-version GC)"]
  end

  subgraph WAL["WAL pipeline (atomicity + durability)"]
    PRIOR["cubrid-prior-list.md<br/>(producer queue,<br/>group commit)"]
    LOGM["cubrid-log-manager.md<br/>(WAL append, LSAs,<br/>log reader)"]
    CHKPT["cubrid-checkpoint.md<br/>(fuzzy chkpt,<br/>redo-LSA hint)"]
    REC["cubrid-recovery-manager.md<br/>(ARIES analysis /<br/>redo / undo)"]
  end

  subgraph EXT["Extensions"]
    P2C["cubrid-2pc.md<br/>(coord/participant FSM)"]
    FLASH["cubrid-flashback.md<br/>(log mining for time travel)"]
    BR["cubrid-backup-restore.md<br/>(online backup + PITR)"]
  end

  subgraph SE["Storage engine (separate section, below)"]
    PB["page-buffer + DWB<br/>(cubrid-page-buffer-manager.md,<br/>cubrid-double-write-buffer.md)"]
    HEAP["heap / btree / ehash<br/>(record + key on-page)"]
    DM["disk-manager / volumes"]
  end

  LOC --> TX
  LOC --> HEAP
  DDL --> TX
  DDL --> LM
  BOOT --> REC
  XA --> P2C
  UTIL --> BR

  TX --> MVCC
  TX --> LM
  TX -- "commit / abort log node" --> PRIOR

  HEAP -- "page-mutation log node" --> PRIOR
  PRIOR --> LOGM
  CHKPT --> LOGM
  LOGM --> PB

  MVCC -. reads MVCC headers .-> HEAP
  VAC -. reads WAL forward .-> LOGM
  VAC -. reclaims dead versions .-> HEAP

  REC -. reads WAL .-> LOGM
  REC -. anchors at chkpt_lsa .-> CHKPT
  REC -. rebuilds trantable .-> TX
  REC -- "redo replays into" --> PB

  P2C -. extends .-> TX
  P2C --> LOGM
  FLASH -. reads archived WAL via .-> LOGM
  BR -. drives .-> REC
  BR -. brackets with .-> CHKPT

  PB --> DM
```

Reading the diagram top-down:

  • Callers above the section. Five entry points drive everything in txn-recovery/. The locator (cubrid-locator.md, in Storage Engine) is the DML hot path: it routes through the TDES for transaction state and reaches into the heap / B+Tree directly for the page mutation. DDL & Schema reaches the same hub plus the lock manager for schema-stability locks. Boot (cubrid-boot.md) is the cold path that runs the recovery manager at startup, before the server accepts clients. XA clients drive 2PC externally. Backup / restore utilities reuse the recovery manager in standalone mode.
  • Per-transaction state hub (cubrid-transaction.md). Every later doc in this section reads or writes fields on the TDES — trid, MVCCID, isolation level, savepoint chain, undo-next / postpone-next LSAs, the per-transaction lock-table cursor. This is why cubrid-transaction.md is the “read first” doc in the reading order: holding the descriptor in your head makes every later module fall into place.
  • Isolation plane. MVCC for read visibility, the lock manager for write-write conflicts and schema stability, vacuum to reclaim what MVCC leaves behind. The three are linked by the oldest-visible-MVCCID watermark (vacuum’s stop marker, derived from MVCC’s active table) and by the TDES, which carries both the snapshot identity and the lock-table cursor.
  • WAL pipeline (atomicity + durability). The prior list buffers LOG_PRIOR_NODE records produced by the heap, by the TDES (commit / abort), and (where applicable) by the lock manager. The log manager drains the queue, fsyncs, and exposes the durable-LSN watermark that group-commit waiters block on. Checkpoint anchors how far back recovery has to look. The recovery manager runs the ARIES three-pass restart by walking the same WAL forward (analysis + redo) and then backward (undo).
  • Extensions. 2PC, flashback, and backup / restore reuse the TDES + WAL machinery without adding new substrates of their own — they extend the same trantable, the same log records, the same recovery passes.
  • Storage engine (below). Three boundary points cross the section line. Heap pages emit log records up into the prior list. The log manager (and the recovery manager’s redo pass) push log / data pages down through the page buffer (which routes through the DWB before reaching the disk manager). MVCC reads down into heap headers without going through the page-buffer flush path.

The next section, “The ACID lens”, revisits the same eleven modules from a different angle — which ACID property each one primarily supports.

The ACID lens

A useful frame for navigating the eleven docs is to ask which ACID property each one primarily supports, while being honest that almost all of them touch more than one. The mapping below is the dominant duty, not the only one.

Atomicity is owned by the recovery manager and the log manager together. Every undo-eligible mutation emits a log record carrying enough information to reverse itself (physiological for the data side, logical for B+Tree operations); rollback walks the per-transaction LSA chain backward from the TDES head and applies each record’s undo function, emitting a compensation log record (CLR) so the undo itself is restartable. Restart-time atomicity uses the same mechanism — the analysis pass classifies losers, the undo pass unwinds them — but with the trantable rebuilt from the log rather than carried forward. See cubrid-log-manager.md (record shapes, LSA naming, append discipline) and cubrid-recovery-manager.md (the three-pass restart, with the undo pass driving atomicity at recovery time). cubrid-transaction.md holds the per-transaction state that both rollback and recovery walk: the TDES with its head/tail/undo-next/postpone-next chains.
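A minimal sketch of that backward walk, with invented names (tdes_t, log_fetch, rcv_undo_fn) standing in for the real structures the detail docs analyse:

```c
#include <stdbool.h>

/* Hypothetical, simplified shapes; the real ones are larger. */
typedef struct { long page_id; short offset; } lsa_t;
typedef struct log_rec {
    lsa_t prev_tran_lsa;    /* backward link in this transaction's chain */
    int   rcv_index;        /* selects the undo/redo function pair       */
} log_rec_t;
typedef struct { lsa_t tail_lsa; } tdes_t;

extern log_rec_t *log_fetch(lsa_t lsa);                     /* assumed reader    */
extern void (*rcv_undo_fn[])(const log_rec_t *rec);         /* assumed dispatch  */
extern void log_append_compensate(const log_rec_t *undone); /* assumed CLR hook  */
static bool lsa_is_null(lsa_t l) { return l.page_id < 0; }

/* Roll back one transaction: walk its LSA chain backward from the TDES
 * tail, undo each record, and emit a CLR so a crash mid-rollback can
 * resume instead of re-undoing already-undone work. */
void rollback(tdes_t *tdes)
{
    lsa_t cur = tdes->tail_lsa;
    while (!lsa_is_null(cur)) {
        log_rec_t *rec = log_fetch(cur);
        rcv_undo_fn[rec->rcv_index](rec);  /* reverse the mutation          */
        log_append_compensate(rec);        /* CLR: undo is now restartable  */
        cur = rec->prev_tran_lsa;          /* hop to the previous record    */
    }
}
```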

Consistency is mostly enforced above this section. DDL constraints, foreign keys, unique indexes, and the schema manager’s integrity rules belong to Storage Engine and DDL & Schema. But every constraint check that fires during DML emits log records through cubrid-log-manager.md and cubrid-prior-list.md, and every aborted constraint violation is rolled back through the same atomicity machinery. The txn-recovery section is therefore consistency’s bookkeeping plane: the rules are owned elsewhere, but their effects are recorded and recovered here.

Isolation is the joint operation of MVCC and the lock manager, with vacuum cleaning up afterward. CUBRID’s default isolation is snapshot isolation built on monotonic MVCCIDs and an active-MVCCID table; readers consult the snapshot, never block writers, and never get blocked by writers — that is the core MVCC promise. Writers, however, still need exclusive locks to serialise write-write conflicts, prevent lost updates, and respect schema-stability requirements; the lock manager handles this with intention modes (IS, IX, SIX, plus BU, SCH-S, SCH-M, NON_2PL), a square conversion table, and a waits-for graph deadlock detector. Vacuum is the gardener: every UPDATE leaves a previous version, every DELETE leaves a tombstone, and the engine cannot reuse the space until no live snapshot can see them. CUBRID’s vacuum walks the WAL forward in fixed-size blocks below the oldest-visible-MVCCID watermark and dispatches per-block jobs to a worker pool. See cubrid-mvcc.md, cubrid-lock-manager.md, and cubrid-vacuum.md.
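A sketch of the visibility decision the snapshot promise rests on. All names (snapshot_t fields, version_visible) are invented for illustration; the real structures and routines are analysed in cubrid-mvcc.md:

```c
#include <stdbool.h>

typedef unsigned long long mvccid_t;

/* Hypothetical snapshot shape: everything below lowest_active committed
 * before the snapshot was taken; ids in active[] were still in flight. */
typedef struct {
    mvccid_t lowest_active;      /* oldest MVCCID active at snapshot time  */
    mvccid_t highest_completed;  /* newest MVCCID completed at that moment */
    const mvccid_t *active;
    int n_active;
} snapshot_t;

static bool in_active_set(const snapshot_t *s, mvccid_t id)
{
    for (int i = 0; i < s->n_active; i++)
        if (s->active[i] == id) return true;
    return false;
}

static bool committed_before(const snapshot_t *s, mvccid_t id)
{
    if (id < s->lowest_active)     return true;   /* finished long ago      */
    if (id > s->highest_completed) return false;  /* started after snapshot */
    return !in_active_set(s, id);                 /* in window: check set   */
}

/* Is a row version whose header carries (ins_id, del_id) visible? */
bool version_visible(const snapshot_t *s, mvccid_t ins_id,
                     bool deleted, mvccid_t del_id)
{
    if (!committed_before(s, ins_id)) return false;          /* insert unseen */
    if (deleted && committed_before(s, del_id)) return false; /* already gone */
    return true;
}
```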

Durability is the longest pipeline. A mutation produces a LOG_PRIOR_NODE outside the global log latch, hands it to the prior list, and (at commit) waits on a condition variable for the durable-LSN watermark to advance past its tail; the log-flush daemon drains the prior list under the log critical section, copies bytes into the authoritative log pages, and fsyncs. Group commit emerges naturally from many committers sleeping on one CV. The page side is bracketed by checkpoints: a periodic daemon emits LOG_START_CHKPT / LOG_END_CHKPT carrying an active-transaction snapshot and a redo-LSA hint, which advances log_Gl.hdr.chkpt_lsa so the next analysis pass can skip everything below it. The data-page durability story adds the double-write buffer (covered in Storage Engine, cubrid-double-write-buffer.md): every dirty page lands in the DWB before its home, so a torn write at the home is recoverable from the DWB copy. See cubrid-log-manager.md, cubrid-prior-list.md, cubrid-checkpoint.md, and cubrid-recovery-manager.md.
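The group-commit mechanics reduce to one condition variable keyed on the durable-LSN watermark. A minimal sketch with invented names; the real pipe is split across cubrid-prior-list.md and cubrid-log-manager.md:

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct { long page_id; short offset; } lsa_t;

static pthread_mutex_t flush_mtx  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  flushed_cv = PTHREAD_COND_INITIALIZER;
static lsa_t durable_lsa;          /* advanced only by the flush daemon */

static bool lsa_le(lsa_t a, lsa_t b)
{
    return a.page_id < b.page_id ||
          (a.page_id == b.page_id && a.offset <= b.offset);
}

/* Committer side: block until the flush daemon has fsynced past our
 * commit record. Many committers sleeping here is group commit. */
void wait_until_durable(lsa_t my_commit_lsa)
{
    pthread_mutex_lock(&flush_mtx);
    while (!lsa_le(my_commit_lsa, durable_lsa))
        pthread_cond_wait(&flushed_cv, &flush_mtx);
    pthread_mutex_unlock(&flush_mtx);
}

/* Flush-daemon side, after one drain + fsync of the prior list. */
void publish_durable(lsa_t new_watermark)
{
    pthread_mutex_lock(&flush_mtx);
    durable_lsa = new_watermark;
    pthread_cond_broadcast(&flushed_cv);  /* wake the whole commit group */
    pthread_mutex_unlock(&flush_mtx);
}
```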

Cross-server atomicity is cubrid-2pc.md. The same TDES and log machinery, plus a LOG_2PC_EXECUTE enum that dispatches by role (coordinator vs. participant), plus a prepared-state log record that survives crash. The ARIES analysis pass picks up in-doubt transactions and rebuilds the gtrid → tid map; XA clients drive the FSM externally.
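A compact sketch of the coordinator’s phase-1 logic, all names invented; the real dispatch goes through the LOG_2PC_EXECUTE enum analysed in cubrid-2pc.md:

```c
/* Illustrative decision type; real enum values differ. */
typedef enum { TPC_COMMIT_DECISION, TPC_ABORT_DECISION } tpc_decision_t;

typedef struct {
    int n_participants;
    int (*send_prepare)(int participant); /* returns 1 on a PREPARED vote */
} coordinator_t;

extern void log_append_2pc_start(void);          /* assumed log hooks */
extern void log_append_decision(tpc_decision_t d);

/* Coordinator phase 1: collect votes, then log the decision. The
 * decision record is the crash-survivable truth; phase 2 merely
 * propagates it, and in-doubt recovery re-reads it after a crash. */
tpc_decision_t coordinate(coordinator_t *c)
{
    log_append_2pc_start();
    for (int p = 0; p < c->n_participants; p++) {
        if (!c->send_prepare(p)) {
            log_append_decision(TPC_ABORT_DECISION);
            return TPC_ABORT_DECISION;
        }
    }
    log_append_decision(TPC_COMMIT_DECISION);
    return TPC_COMMIT_DECISION;
}
```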

Time-travel queries are cubrid-flashback.md. Two-phase forward log walk: phase 1 builds a per-transaction summary (trid, user, time, INSERT/UPDATE/DELETE counts, classes touched), phase 2 materialises a chosen transaction’s row images. The wire format is shared with CDC, but flashback reads against archived log volumes rather than the live tail.
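Phase 1 reduces to a single forward pass that buckets records by trid. A sketch with invented names (fb_rec_t, log_next_in_window); the real summary also carries user, time, and the classes touched:

```c
#include <string.h>

typedef struct {
    int trid;
    int n_insert, n_update, n_delete;
} fb_summary_t;

typedef struct { int trid; int op; /* 0=ins 1=upd 2=del */ } fb_rec_t;

/* Assumed forward reader over the requested time window; returns 0 at end. */
extern int log_next_in_window(fb_rec_t *out);

/* Accumulate per-transaction counts; phase 2 would revisit one chosen
 * trid to materialise its row images. */
int flashback_phase1(fb_summary_t *tab, int cap)
{
    int n = 0;
    fb_rec_t r;
    while (log_next_in_window(&r)) {
        int i = 0;
        while (i < n && tab[i].trid != r.trid) i++;   /* find or append */
        if (i == n) {
            if (n == cap) continue;                   /* table full: skip */
            memset(&tab[n], 0, sizeof tab[n]);
            tab[n++].trid = r.trid;
        }
        if      (r.op == 0) tab[i].n_insert++;
        else if (r.op == 1) tab[i].n_update++;
        else                tab[i].n_delete++;
    }
    return n;
}
```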

Backup is cubrid-backup-restore.md. Online physical backup snapshots data volumes while bracketing them with start_lsa (current checkpoint) and the LOG_END_CHKPT after the copy; restore mounts the page images and runs the recovery manager’s redo pass forward to a user-supplied stop time. PITR is by wall-clock timestamp (resolved internally to an LSA-based stop point during redo). Three incremental levels chain by skipping pages whose prv.lsa is older than the parent backup’s start LSA.
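The incremental-level chaining reduces to a per-page LSA comparison. A sketch with invented names; the exact boundary condition and the real page-header field are in cubrid-backup-restore.md:

```c
#include <stdbool.h>

typedef struct { long page_id; short offset; } lsa_t;

static bool lsa_lt(lsa_t a, lsa_t b)
{
    return a.page_id < b.page_id ||
          (a.page_id == b.page_id && a.offset < b.offset);
}

/* A page belongs in a level-N backup only if it changed since the
 * parent (level N-1) backup began: its last-change LSA (prv.lsa in
 * the page header) must not be older than the parent's start LSA. */
bool page_needed_in_incremental(lsa_t page_prv_lsa, lsa_t parent_start_lsa)
{
    return !lsa_lt(page_prv_lsa, parent_start_lsa);
}
```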

```mermaid
flowchart TB
  subgraph A["Atomicity"]
    A1["cubrid-transaction.md<br/>(TDES, savepoints, LSA chains)"]
    A2["cubrid-log-manager.md<br/>(undo records, LSAs)"]
    A3["cubrid-recovery-manager.md<br/>(undo pass, CLRs)"]
  end

  subgraph I["Isolation"]
    I1["cubrid-mvcc.md<br/>(MVCCIDs, snapshots,<br/>active-MVCCID table)"]
    I2["cubrid-lock-manager.md<br/>(intention modes,<br/>waits-for graph)"]
    I3["cubrid-vacuum.md<br/>(WAL-driven dead-version GC)"]
  end

  subgraph D["Durability"]
    D1["cubrid-log-manager.md<br/>(WAL, append discipline)"]
    D2["cubrid-prior-list.md<br/>(commit pipe, group commit)"]
    D3["cubrid-checkpoint.md<br/>(fuzzy chkpt, redo-LSA hint)"]
    D4["cubrid-recovery-manager.md<br/>(redo pass)"]
    DWB["cubrid-double-write-buffer.md<br/>(torn-write protection)<br/><i>(in Storage Engine)</i>"]
  end

  subgraph C["Consistency"]
    CC["enforced above<br/>(DDL & Schema, integrity rules)<br/>but bookkeeping flows through D1+D2"]
  end

  subgraph X["Extensions"]
    X1["cubrid-2pc.md<br/>(cross-server atomicity)"]
    X2["cubrid-flashback.md<br/>(time-travel queries)"]
    X3["cubrid-backup-restore.md<br/>(PITR via redo replay)"]
  end

  TDES["TDES<br/>(per-transaction state hub)"]

  TDES --> A1
  TDES --> I1
  TDES --> X1

  A2 -. emits .-> D1
  I3 -. walks WAL .-> D1
  D2 -. drains into .-> D1
  D3 -. anchors .-> D4
  D4 -. uses .-> A3
  DWB -. brackets page writes .-> D1

  X1 -. extends .-> A1
  X2 -. reads archives via .-> D1
  X3 -. uses .-> D4
```

A reader new to CUBRID’s transaction stack should not start with the recovery manager, even though it is the most theatrical of the eleven. Recovery makes sense only when you already know what is being recovered — what a transaction is in CUBRID, how its state is laid out, what an MVCC snapshot looks like, and what the log records that recovery walks contain. Read in this order:

  1. cubrid-transaction.md first. It defines the TDES, the trantable, the savepoint and system-op nesting model, and the dispatch from isolation level to either snapshot-based or lock-based reads. Every later doc reads or writes fields on the TDES; you need the descriptor in your head before anything else.

  2. cubrid-mvcc.md next. It explains how MVCCIDs are assigned, how the active-MVCCID table is maintained, how a snapshot is constructed at statement (or transaction) start, and how visibility is decided per row version. This is where the “snapshot for reads” half of CUBRID’s isolation story lives.

  3. cubrid-log-manager.md follows. WAL is the substrate everything below depends on: durability, recovery, replication, vacuum, flashback, backup. Read it for the record shapes (LOG_RECORD_HEADER, the union of record-type-specific bodies), the LSA decomposition into (page_id, offset) — sketched in code after this list — and the append discipline that hands off bytes from prior list to log page buffer.

  4. cubrid-prior-list.md is the tight companion. It explains how the producer side stays out of the log critical section — LOG_PRIOR_NODE records built outside the latch, queued under a small mutex, drained by the log-flush daemon. Group commit and the durable-LSN watermark live here.

  5. cubrid-checkpoint.md comes next because it is the recovery boundary that the next two docs presume. The fuzzy ARIES protocol, the active-transaction snapshot, the redo-LSA hint derived from the page-buffer dirty list, and the way log_Gl.hdr.chkpt_lsa advances are all here.

  6. cubrid-recovery-manager.md is the payoff. ARIES three-pass restart — analysis from chkpt_lsa forward to end-of-log, redo from min-redo-LSA forward (parallelised by a per-page worker pool through RV_fun[]), undo backward per loser. With docs 1–5 in your head, every step has a familiar data structure underneath it.

  7. cubrid-lock-manager.md is the “when MVCC isn’t enough” doc. Write-write conflicts, schema-stability locks, intention modes, and deadlock detection via the waits-for graph. CUBRID’s mode set extends the textbook with BU, SCH-S, SCH-M, NON_2PL.

  8. cubrid-vacuum.md closes the MVCC loop. WAL-forward replay below the oldest-visible-MVCCID watermark, fixed-size vacuum blocks, master/worker dispatch, dropped-files tracking. Read after both cubrid-mvcc.md (you need the watermark concept) and cubrid-log-manager.md (vacuum reads the same WAL recovery does).

  9. cubrid-2pc.md is for distributed-commit needs. Coordinator/participant FSM via LOG_2PC_EXECUTE, prepared-state log records, in-doubt recovery folded into the analysis pass.

  10. cubrid-flashback.md is for log-mining needs. Two-phase forward walk, summary then detail, archived-log access via the same log reader CDC uses.

  11. cubrid-backup-restore.md is for operational PITR needs. The on-disk backup format, the LSA bracket, the three incremental levels, and the timestamp-to-LSA resolution at restore time.

Docs 9, 10, 11 can be read independently once 1–8 are in place; they are extensions of the core machinery rather than prerequisites for one another.
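Several items above lean on the LSA without showing it. A minimal sketch of the (page_id, offset) shape and its total order, with invented names standing in for the real LOG_LSA:

```c
#include <stdio.h>

/* An LSA names a byte position in the WAL: which log page, and where
 * inside it. The total order over LSAs is what "before" means all over
 * this section (undo chains, durable watermark, chkpt_lsa). */
typedef struct {
    long long page_id;   /* log page number             */
    short     offset;    /* byte offset inside the page */
} lsa_t;

static int lsa_cmp(lsa_t a, lsa_t b)
{
    if (a.page_id != b.page_id) return a.page_id < b.page_id ? -1 : 1;
    if (a.offset  != b.offset)  return a.offset  < b.offset  ? -1 : 1;
    return 0;
}

int main(void)
{
    lsa_t commit = { 42, 128 }, durable = { 42, 512 };
    /* a commit is durable once the watermark has advanced past it */
    printf("durable? %s\n", lsa_cmp(commit, durable) <= 0 ? "yes" : "no");
    return 0;
}
```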

Three integrations recur across the eleven docs and are easier to internalise as named patterns than to rediscover each time.

MVCC + lock manager: snapshot for reads, X-lock for writes

CUBRID’s isolation is not “MVCC alone” or “locks alone” but a specific division: readers go through MVCC visibility (no blocking, no deadlock against writers), writers acquire exclusive locks for write-write conflicts, lost-update prevention, and schema stability. The TDES carries both the MVCCID for snapshot identity and the lock-table cursor, and the isolation-level dispatch in cubrid-transaction.md decides which path a given access uses. This is why CUBRID supports both read-committed and snapshot-isolation modes without two separate access paths — the same code reads mvcc_snapshot for visibility and consults the lock manager for writes; the isolation level just changes when a fresh snapshot is taken. See cubrid-transaction.md (dispatch), cubrid-mvcc.md (snapshot), cubrid-lock-manager.md (writer locks).
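A sketch of that dispatch with invented names. The point it illustrates: the isolation level changes only the snapshot-refresh policy, never the access path:

```c
#include <stdbool.h>

typedef enum { ISO_READ_COMMITTED, ISO_SNAPSHOT } iso_level_t;
typedef struct snapshot snapshot_t;

typedef struct {
    iso_level_t iso;
    snapshot_t *snapshot;   /* identity of what this txn can see */
    bool        has_snapshot;
} tdes_t;

/* Assumed: builds a snapshot by reading the active-MVCCID table. */
extern snapshot_t *build_snapshot(void);

/* Same read path for both levels; the level only decides *when* the
 * snapshot is (re)taken: per statement vs. once per transaction. */
snapshot_t *snapshot_for_statement(tdes_t *tdes, bool statement_start)
{
    if (tdes->iso == ISO_READ_COMMITTED && statement_start)
        tdes->has_snapshot = false;        /* fresh view each statement */
    if (!tdes->has_snapshot) {
        tdes->snapshot = build_snapshot();
        tdes->has_snapshot = true;
    }
    return tdes->snapshot;
}
```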

Log + prior list + checkpoint + DWB: the durability pipeline

Durability is not one component; it is a four-stage pipeline where each stage has a distinct invariant and removing any stage breaks the contract.

  1. Prior list (cubrid-prior-list.md). Producers build LOG_PRIOR_NODE records outside the log critical section and queue them with a short-held mutex. Producers’ work stays off the global hot path.
  2. Log manager (cubrid-log-manager.md). The log-flush daemon drains the prior list under the log critical section, copies bytes into the authoritative log page buffer, and fsyncs. Group commit emerges naturally from many committers sleeping on the durable-LSN CV.
  3. Checkpoint (cubrid-checkpoint.md). Periodic LOG_START_CHKPT / LOG_END_CHKPT brackets carry the active-transaction snapshot and the redo-LSA hint that bounds the next analysis pass.
  4. Double-write buffer (cubrid-double-write-buffer.md, in Storage Engine). Dirty data pages land in the DWB before their home. A torn write at the home is recovered from the DWB copy on restart.

The contract: a committed transaction’s log records are durable before commit returns (stages 1–2); the corresponding data pages need not be on disk yet (recovery’s redo pass will catch them up — stage 3 bounds how much it must replay); and no half-written page can corrupt either the home or the log (stage 4). Remove the DWB and a torn write silently corrupts a page; remove the checkpoint and recovery walks the entire log; remove the prior list and every commit serialises through the log latch.
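Stage 1’s producer discipline, sketched with invented names: record construction happens with no shared lock held, and only the pointer swap takes the mutex:

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

typedef struct prior_node {
    struct prior_node *next;
    size_t len;
    char   data[];           /* serialized log record body */
} prior_node_t;

static pthread_mutex_t prior_mtx = PTHREAD_MUTEX_INITIALIZER;
static prior_node_t *prior_head, *prior_tail;

/* Producer: build the node with no lock held... */
prior_node_t *prior_build(const void *rec, size_t len)
{
    prior_node_t *n = malloc(sizeof *n + len);
    n->next = NULL;
    n->len  = len;
    memcpy(n->data, rec, len);
    return n;
}

/* ...then publish it under a mutex held only for the pointer swap. */
void prior_enqueue(prior_node_t *n)
{
    pthread_mutex_lock(&prior_mtx);
    if (prior_tail) prior_tail->next = n; else prior_head = n;
    prior_tail = n;
    pthread_mutex_unlock(&prior_mtx);
}

/* Flush daemon: detach the whole list in O(1), then drain it into the
 * log page buffer under the log critical section (not shown). */
prior_node_t *prior_detach_all(void)
{
    pthread_mutex_lock(&prior_mtx);
    prior_node_t *batch = prior_head;
    prior_head = prior_tail = NULL;
    pthread_mutex_unlock(&prior_mtx);
    return batch;
}
```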

Vacuum + MVCC + log manager: the WAL as garbage-collection input

Vacuum’s input is the WAL itself, not the heap. Every LOG_MVCC_* record describes one MVCC operation; vacuum walks those records forward in fixed-size blocks below the oldest-visible-MVCCID watermark and applies the appropriate clean-up to the target page (heap row reclaim, B+Tree leaf entry removal, OID list compaction). The watermark comes from cubrid-mvcc.md — the smallest snapshot lower bound across all live transactions, recomputed as transactions start and finish. The log records come from cubrid-log-manager.md — the same WAL that recovery uses, read forward for clean-up instead of for redo. The dispatch comes from cubrid-vacuum.md — a master daemon partitions the log into blocks and a worker pool consumes them. The trade-off (vs. PostgreSQL’s heap-scan autovacuum or InnoDB’s undo-log purge) is that workloads with many never-updated tuples save the scan cost a heap-walking vacuum would pay, while read-mostly workloads that produce few versions gain nothing either way.
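The watermark that gates vacuum reduces to a minimum over live transactions. A sketch with invented names; the real recomputation lives in the MVCC-table code analysed in cubrid-mvcc.md:

```c
typedef unsigned long long mvccid_t;

typedef struct {
    mvccid_t snapshot_lower_bound; /* oldest id this txn's snapshot may see */
    int      active;
} txn_slot_t;

/* Oldest-visible-MVCCID: nothing at or above this may be vacuumed,
 * because some live snapshot might still need it. Recomputed whenever
 * transactions start or finish. */
mvccid_t oldest_visible(const txn_slot_t *tab, int n, mvccid_t next_mvccid)
{
    mvccid_t low = next_mvccid;  /* no active txns: everything older is dead */
    for (int i = 0; i < n; i++)
        if (tab[i].active && tab[i].snapshot_lower_bound < low)
            low = tab[i].snapshot_lower_bound;
    return low;
}
```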

| Doc | What it owns | One-line summary |
| --- | --- | --- |
| cubrid-transaction.md | TDES, savepoints, isolation dispatch | Per-transaction state hub: trid, lifecycle, isolation level, LSA chains, side-state registries; nests partial rollback via system ops and savepoints. |
| cubrid-mvcc.md | MVCCIDs, snapshots, active-MVCCID table | Snapshot construction with monotonic MVCCIDs and an in-memory active-MVCCID table; coordinates with vacuum through the oldest-visible watermark. |
| cubrid-lock-manager.md | Locks, intention modes, deadlock detection | Multi-granularity locks per OID with intention modes (IS, IX, SIX, BU, SCH-S, SCH-M), conversion table, and a waits-for-graph deadlock detector. |
| cubrid-vacuum.md | Dead-version reclamation | Walks WAL forward in fixed-size blocks below the oldest-visible-MVCCID watermark; master/worker dispatch; tracks dropped files separately. |
| cubrid-log-manager.md | WAL, LSAs, append discipline | Lays out write-ahead-log records, names them with (page_id, offset) LSAs, disciplines the prior-list / append-page pipeline that flushes to disk. |
| cubrid-prior-list.md | Lock-free producer queue | Singly-linked LOG_PRIOR_NODE queue with one short-held mutex; drained by the log-flush daemon under the log critical section; group commit emerges from the queue. |
| cubrid-checkpoint.md | Fuzzy ARIES checkpoint | Periodic LOG_START_CHKPT / LOG_END_CHKPT bracket carrying an active-transaction snapshot and a redo-LSA hint; advances chkpt_lsa to bound restart work. |
| cubrid-recovery-manager.md | ARIES three-pass restart | Analysis from chkpt_lsa forward, redo from min-redo-LSA forward (parallelised per page through RV_fun[]), undo backward per loser; CLRs make undo restartable. |
| cubrid-2pc.md | Two-phase commit, in-doubt recovery | Coordinator/participant FSM via LOG_2PC_EXECUTE; prepared-state log records survive crash; in-doubt recovery folded into the ARIES analysis pass. |
| cubrid-flashback.md | Log mining for time travel | Two-phase forward log walk: per-transaction summary (counts, classes), then per-transaction detailed log-info pull; reads archived log volumes via the CDC log reader. |
| cubrid-backup-restore.md | Online backup, PITR | Snapshots data volumes bracketed by start_lsa and a checkpoint; restore mounts pages and replays log forward to a user timestamp; three incremental levels chain by skipping pages with prv.lsa <= parent.start_lsa. |

Three other code-analysis subcategories are tightly coupled to this one and the cross-references run in both directions.

Storage Engine. The page buffer is the layer between the log manager and the disk. Every dirty page goes through the page buffer’s three-zone LRU and through the double-write buffer before reaching its home volume; the DWB is what makes a torn write at the home recoverable from the DWB copy. The log manager’s “no dirty page may reach disk before its log records do” invariant is enforced at the page buffer, which consults the per-BCB oldest_unflush_lsa against the durable LSN before flushing. Read cubrid-page-buffer-manager.md and cubrid-double-write-buffer.md after the durability pipeline in this section. The locator (cubrid-locator.md) is the caller into the log manager — every DML routes through locator_*_force which produces the log records the prior list and the recovery manager later walk.
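The enforcement point is a small guard at flush time. A sketch with invented names, using the generic ARIES form of the check (the log must be forced up to the page’s last-change LSA before the page is written):

```c
#include <stdbool.h>

typedef struct { long page_id; short offset; } lsa_t;

static bool lsa_le(lsa_t a, lsa_t b)
{
    return a.page_id < b.page_id ||
          (a.page_id == b.page_id && a.offset <= b.offset);
}

extern lsa_t log_durable_lsa(void);           /* assumed watermark accessor */
extern void  log_flush_up_to(lsa_t lsa);      /* assumed: force WAL first   */
extern void  dwb_then_home_write(void *page); /* assumed DWB route          */

typedef struct { void *page; lsa_t page_lsa; /* LSA of last change */ } bcb_t;

/* WAL rule at the flush point: the log describing this page's latest
 * change must be durable before the page itself may hit disk. */
void flush_dirty_page(bcb_t *bcb)
{
    if (!lsa_le(bcb->page_lsa, log_durable_lsa()))
        log_flush_up_to(bcb->page_lsa);  /* force the log first        */
    dwb_then_home_write(bcb->page);      /* then DWB, then home volume */
}
```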

Server Architecture. The boot path (cubrid-boot.md) is the dispatcher that runs the recovery manager at startup. It loads the log header, finds log_Gl.hdr.chkpt_lsa, and hands control to log_recovery_analysis → log_recovery_redo → log_recovery_undo before the server accepts any client. Backup and restore enter through the SA-mode utility entry points (util_cs.c, util_sa.c) and into boot_sr.c to reuse the same machinery in standalone mode. Read cubrid-boot.md and cubrid-sa-cs-runtime.md to see how recovery is dispatched and how the SA-mode utilities reuse the engine in-process.

HA & Replication. Replication consumes the same log the recovery manager replays. copylogdb ships log records to a slave and applylogdb (and the newer page-server replication path) replays them. Flashback (cubrid-flashback.md) and CDC share the log-reader infrastructure. The 2PC FSM (cubrid-2pc.md) is the bridge between local commit and external coordinators; XA clients drive it through the same log records, and HA promotion uses the same in-doubt recovery to clean up after a coordinator failure. Read cubrid-ha-replication.md, cubrid-cdc.md, and cubrid-heartbeat.md to see how the WAL becomes a network artefact.

DDL & Schema. The “Consistency” gap in this section’s ACID mapping is filled by DDL & Schema. Constraint definitions, foreign-key enforcement, schema-modification locks (which use the lock manager’s SCH-M mode), and the integrity-check rules that fire from the locator are owned there. Every constraint check that fires emits log records through this section’s machinery, but the rules — what counts as a violation — are external. Read cubrid-ddl-execution.md and cubrid-class-object.md to see the rules; read this section to see how their effects are recorded and recovered.