Code Analysis

Open-source DBMS internals — currently a line-by-line read of the CUBRID codebase, broken down by storage, MVCC, lock manager, and the layers around them.

Jump to: Overview & Reading Paths · Base / Infrastructure · Storage Engine · Transaction & Recovery · Query Processing · DDL & Schema · Replication & HA · Procedural Language · Server Architecture · Internationalization & Specialty

Overview & Reading Paths (18)

  • CUBRID Architecture Overview — Process Model, Layered Stack, and the Map Into the Detail Docs. Front-door router for the CUBRID code-analysis tree — names the four long-lived processes (`cub_master`, `cub_server`, `cub_pl`, `cub_broker`+`cub_cas`) and their IPC, the layered storage stack (disk → page buffer + DWB → heap/B+Tree/extendible-hash → catalog → class-object → workspace), the query pipeline (parser → semantic check → rewrite → optimizer → XASL → executor → scan manager → access methods → list-file), the concurrency/logging/recovery axis (MVCC + lock + transaction + log + prior-list + checkpoint + recovery + DWB), the distribution layer (heartbeat + HA replication + CDC + 2PC + flashback + backup), the PL family (`pl_server` JVM, JavaSP, PL/CSQL), and the cross-cutting infrastructure (boot, sessions, thread pools, network protocol, broker, errors, parameters, monitoring, DBI/CCI, SA/CS) — with one Mermaid diagram per axis and direct cross-refs into ~70 detail docs across eight subcategories.
  • CUBRID Code-Analysis Coverage — Map of What's Documented and What's Open. Overview-level coverage map for the cubrid code-analysis tree — groups the existing docs by subsystem, names the still-open gaps at the same grouping, and records intentional non-goals. Not a per-doc catalog (see README.md) and not an architecture map (see cubrid-architecture-overview.md); this is the answer to 'what's left to write?'
  • CUBRID Design Philosophy — Why the Codebase Looks the Way It Does. The thirteen architectural decisions that explain the shape of the CUBRID codebase — OODB heritage from UniSQL, ARIES-by-the-book recovery, MVCC paired with a lock manager, Volcano executor, Selinger-style optimizer, lock-free prior list, double-write buffer, separate JVM for PL, broker process pool, local-only HA liveness, page-based storage, SA/CS dual-build utilities, and the deliberate non-goals — each traced to its historical and academic origin.
  • CUBRID Base / Infrastructure — Section Overview. Router for the base/infra subcategory — the `src/base/` substrate every layer composes with. Two families: custom memory allocators (AREA slab pool for fixed-size objects, per-thread Lea-heap private allocator with C++ STL wrapper) and the lock-free primitives spine (legacy C plus modern C++ generations, sharing one transactional reclamation table). Storage, query, recovery, and PL all sit on top of this.
  • CUBRID DDL & Schema — Section Overview. Router for the DDL & schema layer — DDL pipelines stage through SM_TEMPLATE, write the on-disk catalog (catcls system classes), rebuild the in-memory SM_CLASS graph, and feed back into authorization, triggers, and statistics that gate every later DML and plan compile.
  • CUBRID Internationalization — Section Overview. Router for the i18n bucket — two cross-cutting primitives (charset+collation, timezone) shared by every string operator, every comparison, and every date-arithmetic call. Both compile external standards data (LDML for collation, IANA tzdata for timezone) into per-platform shared libraries that the server `dlopen`s at startup, then surface results through small per-record encoded IDs.
  • CUBRID Procedural Language — Section Overview. One external JVM (cub_pl) hosts two language frontends — JavaSP and PL/CSQL — that ride a single transport, share one catalog, and dispatch through one executor.
  • CUBRID Query Processing — Section Overview. Router for the largest CUBRID code-analysis subcategory — the parse → execute pipeline organised into a front-end (parser, semantic check, query rewrite), a middle-end (cost-based optimizer, XASL generator, XASL cache), and a back-end (executor, scan manager, list-file) plus runtime helpers (predicate evaluator, scalar functions, external sort, post-processing, hash join, runtime memoization) and specialised features (partition, cursor, serial, parallel query); links nineteen detail docs without duplicating their content.
  • CUBRID Replication & HA — Section Overview. Router for the replication-ha subcategory: primary/standby replication via logical-log streaming on top of the WAL, leader election by local liveness scoring (no global consensus), and CDC piggy-backing on the same supplemental log stream.
  • CUBRID Server Architecture — Section Overview. Process-level shape of CUBRID's server tier — JDBC/CCI clients reach a `cub_broker` listener, are routed to a forked `cub_cas` worker, which proxies CSS-framed NRP traffic to `cub_server`, where per-thread workers driven by either the legacy worker pool or the CBRD-26177 NG redesign land on a per-client `SESSION_STATE`, run through the locator OID workspace into storage, and share cross-cutting sysparam, error, and monitoring infrastructure.
  • CUBRID Storage Engine — Section Overview. Router for the storage-engine subcategory — names the layered stack from OS files (volumes / sectors / pages) up through the page buffer and double-write buffer, the on-page record organisations (heap, B+Tree, extendible hash) plus their overflow chains, the out-of-band path (LOB on the host filesystem), and the page-level encryption layer; explains how the 9 detail docs in `subcategory: storage-engine` fit together and in what order to read them. The AREA slab pool, formerly listed here, has moved to `cubrid-overview-base-infra.md` since it is a memory-allocator concern that any layer can use, not a storage-layer one.
  • CUBRID System Catalog — Section Overview. Router for the system-catalog bucket — the engine's SQL-visible self-description surface. Two complementary docs: cubrid-system-catalog-classes covers the *static* surface (the data-driven framework that defines and installs _db_class, _db_attribute, _db_index, ... and the 22 system views layered on top) and cubrid-show-commands covers the *dynamic* surface (SHOW commands rewritten to virtual scans over server runtime state — threads, page buffer, log header, transaction tables, locks).
  • CUBRID Transaction & Recovery — Section Overview. Router for the txn-recovery subcategory — names how CUBRID realises ACID through MVCC + lock manager for isolation, log manager + prior list + checkpoint + recovery manager (+ DWB cross-section) for atomicity and durability, with 2PC, flashback, and backup-restore extending the same machinery to distributed commit, time travel, and point-in-time recovery — and points at the eleven detail docs that explain each piece.
  • CUBRID Reading Path — How a Stored Procedure Call Executes End-to-End (JavaSP / PL/CSQL with Embedded SQL Callback). End-to-end synthesis of one `CALL my_sp(...)` — a JavaSP whose body issues an embedded `SELECT` via JDBC — from JDBC `CallableStatement.execute` through cub_broker → cub_cas → cub_server's CALL-statement compile → cubpl::executor::request_invoke_command shipping `SP_CODE_INVOKE` to cub_pl → ListenerThread/ExecuteThread dispatch → reflective TargetMethod.invoke → user code's CUBRIDServerSidePreparedStatement embedded SELECT → JVM packing METHOD_CALLBACK_QUERY_PREPARE inside an SP_CODE_INTERNAL_JDBC envelope → server's response_callback_command → callback_prepare/_execute/_fetch invoking the normal compile-and-execute pipeline recursively under METHOD_MAX_RECURSION_DEPTH = 15 — and back. Threads ~15 detail docs (broker, network-protocol, server-session, transaction, parser, semantic-check, optimizer, xasl-generator, query-executor, list-file, mvcc, pl-javasp, pl-plcsql, pl-server-bridge, scan-manager) into one trip and is the natural fourth member of the rpath family alongside cubrid-rpath-select.md / cubrid-rpath-write.md / cubrid-rpath-recovery.md.
  • CUBRID Reading Path — How a Server Restart Recovers. Reading-path through a `cub_server` cold-start: process boot opens volumes and dispatches to recovery, DWB heals torn pages, the log header points at the most-recent checkpoint, ARIES analysis/redo/undo restore committed work and erase losers, vacuum and HA catch up, then the network listener begins accepting client traffic.
  • CUBRID Reading Path — How a SELECT Executes End-to-End. End-to-end synthesis of a single SELECT — JDBC → broker → cub_cas → cub_server → parser → semantic-check → rewrite → optimizer → XASL generator → XASL cache → executor → scan-manager → heap/B+Tree → predicate evaluator → MVCC visibility → list-file → cursor → broker → JDBC, threading roughly twenty detail docs into one trip.
  • CUBRID Reading Path — How a Write Commits End-to-End. INSERT INTO t VALUES (...) followed by COMMIT — parse, locator force fan-in, heap slotted-page write with MVCC stamp, btree key||OID insert, locator constraint and FK checks, BEFORE/AFTER triggers, prior-list WAL append, optional repl record, X-locks via locator, log_commit_local force-flush, log-flush daemon fsync, eventual dirty-page flush via DWB, and vacuum reclaiming dead versions.
  • CUBRID KO Translation Status — Per-Document Phase Board. Per-document phase board for the 103 KO mirrors under `knowledge/ko/code-analysis/cubrid/`. Derived state — each mirror's phase tag (`p1`/`p2`/`p3`/`p4`) in its frontmatter is the SSOT. The general framework (4-phase model, per-doc phase tag, promote/demote rules, model routing) lives in `knowledge/methodology/korean-translation.md`; this doc is cubrid's applied instance, holding only project-specific gates and the aggregated status view.

Base / Infrastructure (8)

  • CUBRID AREA Allocator — Slab-Style Pool Allocator With Free Lists for Same-Size Objects. How CUBRID's AREA module slabs-out same-size objects (DB_VALUE, TP_DOMAIN, OBJ_TEMPLATE, DB_OBJLIST, set objects, …) by chaining 256-block BLOCKSET arrays of fixed-cell blocks, fronts each block with a lock-free bitmap, and short-cuts the common case through a single hint pointer.
  • CUBRID Lock-Free Bitmap — Chunked Atomic Allocator for Per-Thread Indexes and Slot Pools. How CUBRID allocates and recycles small integer slots concurrently — a chunked array of `std::atomic<unsigned int>` words, two chunking styles (one-chunk full-usage and list-of-chunks usage-bounded), CAS bit-flip, and a round-robin start hint that under SERVER_MODE bumps atomically per `get_entry`.
  • CUBRID Lock-Free Circular Queue — Bounded MPMC Ring with Per-Slot Block Flags. How CUBRID hands work between threads on hot paths — a bounded multi-producer multi-consumer ring buffer with two cursor atomics and a per-slot block-flag word, used for vacuum log-block dispatch, page-buffer victim hand-off, and CDC log-info forwarding.
  • CUBRID Lock-Free Freelist — Typed Node Pool with Back-Buffer Block Allocator. How CUBRID recycles typed lock-free nodes between operations — a typed `freelist<T>` with a single available stack, a one-block back-buffer that swaps in lazily so concurrent claimers do not race to allocate, an `on_reclaim` hook the payload type implements, and a clearly-documented ABA window in the pop path bounded by the back-buffer time.
  • CUBRID Lock-Free Hash Map — Legacy C, Modern C++, the Bridge, and the Consumers. How CUBRID implements its main concurrent associative table — a Harris–Michael chained hash with optional per-entry mutex, in two parallel implementations (legacy C `lf_hash_*` and modern C++ `lockfree::hashmap<K,T>`) bridged by `cubthread::lockfree_hashmap<K,T>` whose `m_type ∈ {OLD, NEW}` is decided at init by `PRM_ID_ENABLE_NEW_LFHASH`, with `lf_entry_descriptor` as the shared type that lets one entry layout drive both code paths.
  • CUBRID Lock-Free Primitives — Overview, Two Generations, and Reclamation Spine. Map of CUBRID's lock-free primitives — the legacy C `lock_free.{h,c}` family and the modern C++ `lockfree::*` namespace — anchored on a single transactional reclamation spine that every other lock-free structure in the engine sits on.
  • CUBRID Lock-Free Transactional Reclamation — System, Table, Descriptor, and Address Marker. How CUBRID reclaims retired nodes from lock-free data structures safely — a per-data-structure transaction id, per-thread descriptors that bracket reads, and a periodic minimum-active-id scan that tells the freelist when a retired node is no longer reachable from any live reader.
  • CUBRID Private Allocator — Per-Thread Lea Heap, C++ STL Allocator Wrapper, and Build-Mode Routing. Per-thread Lea-heap arena (Doug Lea's `dlmalloc` vendored under `customheaps`) instantiated once per `THREAD_ENTRY`, fronted by `db_private_alloc / _free / _realloc` macros that route SERVER_MODE allocations to the thread's heap, CS_MODE to the client workspace, and SA_MODE through a `PRIVATE_MALLOC_HEADER`-tagged dispatch that remembers whether the block came from the Lea heap or the workspace. C++ STL wrapper `cubmem::private_allocator<T>` lets STL containers participate; `private_unique_ptr<T>` and `PRIVATE_BLOCK_ALLOCATOR` are the convenience layers; `switch_to_global_allocator_and_call` is the escape hatch for cross-thread or process-global allocations.
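
The CAS-bit-flip-plus-start-hint pattern named in the lock-free bitmap entry can be sketched in a few dozen lines. This is a minimal illustration, not CUBRID source: a single flat array of `std::atomic<unsigned int>` words stands in for the chunked layouts, `claim` scans from a round-robin hint and flips the first zero bit it can win, and `release` clears the bit. It assumes 32-bit `unsigned int` words.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

static_assert (sizeof (unsigned int) == 4, "sketch assumes 32-bit words");

class slot_bitmap
{
  public:
    explicit slot_bitmap (std::size_t nwords) : m_words (nwords), m_hint (0)
    {
      for (auto &w : m_words)
	{
	  w.store (0);
	}
    }

    // Claim a free slot: scan words from a round-robin start hint and
    // CAS-flip the lowest zero bit of the first non-full word we can win.
    // Returns the slot index, or -1 when every bit is already set.
    int claim ()
    {
      const std::size_t n = m_words.size ();
      std::size_t start = m_hint.fetch_add (1) % n;   // hint bumps per call
      for (std::size_t i = 0; i < n; i++)
	{
	  std::size_t w = (start + i) % n;
	  unsigned int cur = m_words[w].load ();
	  while (cur != ~0u)                          // word has a free bit
	    {
	      unsigned int mask = ~cur & (cur + 1);   // lowest zero bit
	      if (m_words[w].compare_exchange_weak (cur, cur | mask))
		{
		  return (int) (w * 32 + bit_index (mask));
		}
	      // CAS failure reloaded cur; retry against the fresh value
	    }
	}
      return -1;
    }

    void release (int slot)
    {
      m_words[slot / 32].fetch_and (~(1u << (slot % 32)));
    }

  private:
    static int bit_index (unsigned int mask)
    {
      int i = 0;
      for (; (mask & 1u) == 0; mask >>= 1)
	{
	  i++;
	}
      return i;
    }

    std::vector<std::atomic<unsigned int>> m_words;
    std::atomic<std::size_t> m_hint;
};
```

The hint spreads concurrent claimers across different words so they rarely contend on the same CAS; correctness never depends on it, since a full scan still runs from whatever word the hint picked.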

Storage Engine (9)

  • CUBRID B+Tree — Layout, Latch-Coupling, and Unique-Key Suffixing. How CUBRID lays out a B+Tree index — slotted-page nodes with key||OID concatenation, separate non-leaf and leaf records, and overflow OID pages — and how insert / delete / scan walk it under latch-coupling discipline with unique-constraint enforcement at the OID-suffix level.
  • CUBRID Disk Manager and File Manager — Volumes, Sectors, Files, Page Allocation, and Extension. How CUBRID layers a four-level hierarchy — OS file as a volume, 64-page sector as the disk-manager allocation unit, logical file as a sector bundle, page as the I/O unit — under everything else; how the disk cache splits permanent and temporary purposes to drive a two-step sector reservation and adaptive volume extension; and how the file manager turns reserved sectors into pages via three extensible-data tables (Partial / Full / User).
  • CUBRID Double Write Buffer — Torn-Page Protection Between Page Buffer and Data Files. How CUBRID protects against torn writes by staging every dirty data page first into a sequential, fixed-size DWB volume — fsync'd before the home write — so that a crash mid-flush always leaves either a clean home page or a clean DWB copy that recovery can use to replace it.
  • CUBRID Extendible Hash — Disk-Resident Hash File With Doubling Directory and Local Depth. How CUBRID realizes Fagin et al.'s extendible hashing on top of the page buffer — an EHID-rooted directory file whose pointer count doubles when a bucket overflows, slotted bucket pages with binary search and per-bucket local depth, system-op-bracketed splits/merges, RVEH_* WAL records for redo and logical undo, and a small set of internal callers (class-name → OID, catalog → repr-id, UPDATE/DELETE OID dedup).
  • CUBRID Heap Manager — Slotted Pages, Record Layout, Operations, MVCC, and Caches. How CUBRID stores variable-length records in slotted heap pages, how INSERT / UPDATE / DELETE / READ flow through the nine record types, how MVCC versioning lives inside the record header, and which caches keep the hot paths short.
  • CUBRID LOB — External Storage, Locator Lifecycle, and Transactional Cleanup. How CUBRID stores BLOB/CLOB data as files outside the data volume, names them with locator URIs, tracks per-transaction state in a red-black tree on the TDES, and reconciles file-system reality with transaction commit / rollback through a single dispatch point.
  • CUBRID Overflow File — Heap Big-Record and B+Tree Overflow-OID Page Chains. How CUBRID spills oversized records out of slotted heap pages and oversized OID lists out of B+Tree leaves into chained overflow pages, with two distinct on-page formats sharing one underlying file abstraction (`FILE_MULTIPAGE_OBJECT_HEAP` / `FILE_BTREE_OVERFLOW_KEY` / per-tree OID overflow) and the WAL discipline that keeps the chains crash-safe.
  • CUBRID Page Buffer Manager — BCB, Three-Zone LRU, Private Quotas, Direct Victim Handoff, and Custom Latches. How CUBRID maps disk pages to memory via BCBs (Buffer Control Blocks), evicts under a three-zone LRU split into per-thread private and shared lists with adjustable quotas, hands off victims directly to sleeping waiters via lock-free queues, and protects each BCB with a custom read/write/flush latch.
  • CUBRID TDE — Transparent Page-Level Encryption With Master-Key-Wrapped DEK. How CUBRID realizes transparent data encryption — a two-level key hierarchy (master key wraps three per-database data keys), AES-256-CTR or ARIA-256-CTR with per-page nonces (LSA for permanent pages, atomic counter for temp, logical pageid for log), encrypt-on-flush hooks in the page buffer and log page buffer, decrypt-on-read hooks at the same boundaries, a separate `<db>_keys` master-key file held outside the database, and a per-file tablespace-style TDE flag that propagates down to each page's `pflag` bits.
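
The slotted-page organisation that the heap-manager entry describes reduces to a simple invariant: record bytes grow from the front of a fixed-size page, a slot directory grows from the back, and a record is always addressed by its stable slot id rather than its byte offset. The sketch below illustrates only that shape — it is not CUBRID source; the directory is a plain `std::vector` stand-in for the on-page slot array, and compaction, free-space reuse, and the nine record types are all omitted.

```cpp
#include <cstring>
#include <string>
#include <vector>

struct slotted_page
{
  static constexpr int PAGE_SIZE = 4096;

  struct slot { int offset; int length; };

  char data[PAGE_SIZE];          // fixed-size page image
  int record_end = 0;            // next free byte for record storage
  std::vector<slot> slots;       // stand-in for the on-page slot directory

  // Insert a variable-length record; returns the slot id, or -1 when the
  // record plus one more directory entry would not fit in the page.
  int insert (const std::string &rec)
  {
    int dir_bytes = (int) ((slots.size () + 1) * sizeof (slot));
    if (record_end + (int) rec.size () + dir_bytes > PAGE_SIZE)
      {
	return -1;
      }
    std::memcpy (data + record_end, rec.data (), rec.size ());
    slots.push_back ({record_end, (int) rec.size ()});
    record_end += (int) rec.size ();
    return (int) slots.size () - 1;
  }

  // Fetch by slot id: the indirection through the directory is what lets
  // record bytes move inside the page without changing the record's address.
  std::string fetch (int slotid) const
  {
    const slot &s = slots[slotid];
    return std::string (data + s.offset, s.length);
  }
};
```

The same indirection is why a heap OID (volume, page, slot) survives in-page reorganisation: only the directory entry is updated when bytes move.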

Transaction & Recovery (11)

Query Processing (20)

  • CUBRID Cursor — Client-Side Fetch Handle Over a Server List-File With Holdability and Scroll State. How CUBRID realises an ANSI-style fetch cursor as a client-side `CURSOR_ID` that locks onto a server-side `QFILE_LIST_ID`, paging tuples one network-page at a time across a `qfile_get_list_file_page` round-trip, decoding length-prefixed packed rows into `DB_VALUE`s, prefetching dereferenced object identifiers in vector form, and surviving COMMIT through the session-scoped holdable-cursor list when the broker requests `RESULT_HOLDABLE`.
  • CUBRID External Sort — Run Generation, Multi-Way Merge, and the Sort Substrate. How CUBRID performs disk-backed sorting through a two-phase replacement-selection-style run generator (`sort_inphase_sort`) and a balanced k-way merge (`sort_exphase_merge`) over `FILE_TEMP` runs, exposing a single callback-driven entry point (`sort_listfile`) used by ORDER BY / sort GROUP BY / DISTINCT, B+Tree bulk load, and parallel index build.
  • CUBRID Hash Join — Build/Probe Pattern, Hash-Scan Primitives, and Spill Behaviour. How CUBRID realises hash join as a Build/Probe driver in `query_hash_join.c` that reuses the `HASH_LIST_SCAN` primitive of `query_hash_scan.c`, picks one of three table layouts (in-memory `mht_hls`, hybrid memory-index-plus-file-tuples, or extendible `FHS` hash file) based on the `max_hash_list_scan_size` budget, escalates to grace-style equi-hash partitioning when the build side spills, and is admitted by the optimizer through a deliberately conservative `qo_examine_hash_join` gate keyed on the `USE_HASH` hint.
  • CUBRID JSON_TABLE — Table Function Turning JSON Documents Into Virtual Rows. One C++ scanner whose cursor stack walks a parser-built tree of `cubxasl::json_table::node` objects, expanding the input JSON document with `db_json_iterator_*` per NESTED PATH and emitting rows at the leaves — a SCAN_TYPE that lets CUBRID promote a JSONPath plus a column spec into a row source for the executor's `scan_next_scan` loop.
  • CUBRID List-File — Spillable Tuple-Stream Inter-Operator Pipe and Materialisation Substrate. How CUBRID realises every materialised tuple stream — sub-query result, sort output, hash-build side, group-by accumulator, and final query result — as a single `QFILE_LIST_ID` linked-page abstraction backed by a per-query `QMGR_TEMP_FILE` membuf-then-`FILE_TEMP` substrate, and how the executor and scan layer pipe data through it via a uniform open / add / scan / close contract.
  • CUBRID Parallel Query — Intra-Query Parallelism Across Heap Scan, Hash Join, and Query Execute. One global parallel-query worker pool, a `compute_parallel_degree()` policy keyed on page count, a `worker_manager` reservation handle, and three operator-specific orchestrators — `parallel_heap_scan::manager` (block-range partitioning of heap sectors with mergeable-list / xasl-snapshot / buildvalue result handlers), `parallel_query::hash_join::{build_partitions,execute_partitions}` (shared partition fan-out then per-partition build+probe), and `parallel_query_execute::query_executor` (uncorrelated-aptr fan-out for `BUILDLIST_PROC` / `BUILDVALUE_PROC` / `UNION_PROC` / `HASHJOIN_PROC` / `MERGELIST_PROC`) — sitting on top of the `cubthread::worker_pool` named "parallel-query" and threaded through XASL via `px_executor`, `m_px_orig_thread_entry`, and the `S_PARALLEL_HEAP_SCAN` switch arm.
  • CUBRID Parser — Flex/Bison Pipeline, PT_NODE Tree, and the Parser Memory Model. How CUBRID turns SQL text into a `PT_NODE` parse tree — a Flex lexer driven by a single-buffer `YY_INPUT`, a GLR Bison grammar that builds the tree through reduce-action calls to `parser_new_node`, a polymorphic-tagged-union `PT_NODE` whose per-type child layout is encoded in three function-pointer arrays (`pt_apply_f`, `pt_init_f`, `pt_print_f`), and a per-`PARSER_CONTEXT` block allocator that lets the whole tree be freed in one pass.
  • CUBRID Partitioning — Range/Hash/List Strategies, Partition Pruning, and Per-Partition Execution. How CUBRID partitions a logical table into a master class plus N child classes, encodes the per-partition rule (range bounds, hash modulus, list values) on the master class via SM_PARTITION, and uses a server-side PRUNING_CONTEXT to eliminate partitions at optimize time, route each insert/update record to its target partition heap, and dispatch a per-partition scan list at execute time.
  • CUBRID Post-Processing — Aggregation, Window/Analytic Functions, and Sort vs Hash GROUP BY. How CUBRID's query executor turns a sorted (or hash-accumulated) list file into grouped, aggregated, and window-framed output through `qexec_groupby` and `qexec_execute_analytic`, choosing between sort-based and hash-based GROUP BY at runtime, and falling back to external sort when the hash table outgrows `max_agg_hash_size`.
  • CUBRID Query Evaluator — PRED_EXPR Walking, regu_variable Fetch, and the Row-Level Filter Engine. How CUBRID turns each pulled tuple into a keep/skip verdict — `eval_pred` walks a `PRED_EXPR` tree of `T_PRED` boolean nodes and `T_EVAL_TERM` leaves under three-valued logic (`V_TRUE` / `V_FALSE` / `V_UNKNOWN` / `V_ERROR`), every leaf calls `fetch_peek_dbval` which dispatches on `REGU_VARIABLE::type` (constant, attribute fetch, list-file position, arithmetic expression, function call, host variable, OID, list-id) into a path-specific resolver, and `eval_fnc` pre-compiles a fast single-shape predicate to bypass the recursion when possible.
  • CUBRID Query Executor — XASL Interpretation, Iterator Model, and Heap/Index Scan Operators. How CUBRID interprets a serialized XASL plan tree as a Volcano-style operator tree — `qexec_execute_mainblock_internal` dispatches by `xasl->type`, drives a uniform open/next/close loop over `SCAN_ID` operators (heap, index, list, set, value, JSON, dblink, parallel-heap), and pushes results into per-XASL list files that downstream operators read back as plain list scans.
  • CUBRID Query Optimizer — Query Graph, Cost Model, Join Enumeration, and Compiled Plan. How CUBRID lowers a semantically-checked PT_NODE into a `QO_ENV` query graph of `QO_NODE`/`QO_SEGMENT`/`QO_TERM`, runs partial-then-total dynamic-programming join enumeration over a 2^N `join_info` vector with a System R-style fixed-cpu/io + variable-cpu/io cost model, and finalises the surviving `QO_PLAN` tree into the XASL access-spec tree shipped to the server.
  • CUBRID Query Rewrite — Pre-Optimization Tree Transformations and the LIMIT-Clause Case Study. How CUBRID lowers the LIMIT clause into INST_NUM/ORDERBY_NUM/GROUPBY_NUM predicates during semantic checking, then re-rewrites surviving LIMITs in mq_rewrite, and how that lowering interacts with CNF conversion, predicate reduction, view inlining, subquery flattening, auto-parameterization, and the plan-generation-time multi-range LIMIT optimization.
  • CUBRID Runtime Memoization — Subquery Cache, Filter-Predicate Cache, and Per-Query Memoize Helpers. How CUBRID avoids redundant per-row work through three independent caches sharing one playbook — DB_VALUE-array hash key, fail-on-full memory budget, hit-ratio guard — but operating at three different lifecycle scopes: per-XASL `sq_cache` for uncorrelated scalar-subquery results, server-wide per-BTID `fpcache` for deserialised function-index predicates, and per-XASL `memoize::storage` for nested-loop-join inner-side tuple sets.
  • CUBRID Scalar Functions — Arithmetic, String, Numeric, JSON, Regex, and Cryptographic Operator Primitives. How CUBRID's scalar function library — `arithmetic.c`, `numeric_opfunc.c`, `string_opfunc.c`, `query_opfunc.c`, `crypt_opfunc.c`, and the `string_regex_*` family — implements the operator-primitive layer underneath the regu-variable evaluator, with `fetch_peek_arith` driving a per-`OPERATOR_TYPE` switch into `qdata_*_dbval` arithmetic dispatchers (which fan out by `DB_TYPE` into the per-pair `qdata_add_int_to_dbval`-style variants), `qdata_evaluate_function` driving a per-`FUNC_CODE` switch into JSON / regex / list / generic handlers, BCD arithmetic on `DB_NUMERIC` via `numeric_db_value_{add,sub,mul,div}` walking byte-wise binary digits in `numeric_{add,sub,mul,long_div}`, collation-aware string ops (`db_string_substring`, `db_string_lower`, `db_string_like`, `db_string_concatenate`) honoring `INTL_CODESET`, and a regex façade (`cubregex::compile / search / count / instr / replace / substr`) that routes to either RE2 or `std::regex` per the `regexp_engine` system parameter.
  • CUBRID Scan Manager — SCAN_ID Dispatch, Open/Next/Close Protocol, and the Access-Method Catalogue. One polymorphic SCAN_ID handle, a switch-driven open/start/next/end/close protocol, and a per-`SCAN_TYPE` dispatch into heap, B+Tree, list-file, set, value, JSON-table, dblink, show, parallel-heap, and method scans — the access-method catalogue that the executor's `scan_next_scan` loop sits directly on top of.
  • CUBRID Semantic Check — Name Resolution, Type Checking, Constant Folding, and Statement-Specific Validation. How CUBRID's `pt_check_with_info` driver turns a freshly parsed `PT_NODE` tree into an analyzed, type-checked, constant-folded, CNF-normalized intermediate form by chaining four passes — name resolution, where-clause aggregate check, host-variable replacement, and a statement-aware semantic_check_local that internally calls `pt_semantic_type` for type evaluation and constant folding — and finally `pt_cnf` to push the predicate into conjunctive normal form before the optimizer runs.
  • CUBRID SERIAL — Sequence/Auto-Increment Subsystem With Catalogged State and Cached Values. How CUBRID stores every sequence as a row in the `_db_serial` system class, advances it under an exclusive object lock with optional client-side caching, and re-uses the same machinery to drive AUTO_INCREMENT columns through synthesized `<class>_ai_<attr>` serials.
  • CUBRID XASL Cache — Plan Cache Keyed by SHA-1 of SQL Hash Text with RT Recompile and Per-Class Invalidation. How CUBRID short-circuits parse → semantic-check → optimize → XASL-generate on the second execute by keying a server-wide latch-free hashmap on a SHA-1 hash of the rewritten SQL plus a per-entry `time_stored`, refcounting the entries with a single 32-bit `cache_flag`, watching cardinality drift through a recompile-threshold (RT) check, and invalidating dependent entries from a per-class OID list whenever DDL or schema-altering operations fire `xcache_remove_by_oid`.
  • CUBRID XASL Generator — Compiling the Optimized Plan Tree to a Server-Side Execution Tree. How CUBRID compiles a name-resolved, type-checked PT_NODE plus the optimizer's QO_PLAN tree into the procedural XASL_NODE tree the server actually executes — covering the recursive `gen_outer`/`gen_inner` walk, the `aptr/dptr/scan_ptr` slots that hide subqueries and joins inside one node, the REGU_VARIABLE / ACCESS_SPEC / OUTPTR_LIST sub-IRs, and the `xts_*` offset-table serialization that ships the whole tree to the server.
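
The Volcano-style open/next/close contract that the executor and scan-manager entries both lean on is small enough to show whole. The sketch below is an illustration, not CUBRID code: `row_source` plays the role of the `SCAN_ID` protocol, `vector_scan` stands in for a heap scan, and `filter` stands in for a row-level predicate; every operator pulls rows one at a time from its child, so a whole plan tree executes as nested iterators without materialising intermediate results.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct row_source                    // the open/next/close contract
{
  virtual void open () = 0;
  virtual bool next (int &out) = 0;  // false == end of stream
  virtual void close () = 0;
  virtual ~row_source () = default;
};

struct vector_scan : row_source      // leaf: plays the role of a heap scan
{
  const std::vector<int> &rows;
  std::size_t pos = 0;

  explicit vector_scan (const std::vector<int> &r) : rows (r) {}
  void open () override { pos = 0; }
  bool next (int &out) override
  {
    if (pos >= rows.size ())
      {
	return false;
      }
    out = rows[pos++];
    return true;
  }
  void close () override {}
};

struct filter : row_source           // interior node: a row-level predicate
{
  row_source &child;
  std::function<bool (int)> pred;

  filter (row_source &c, std::function<bool (int)> p) : child (c), pred (p) {}
  void open () override { child.open (); }
  bool next (int &out) override
  {
    while (child.next (out))         // pull until the predicate keeps a row
      {
	if (pred (out))
	  {
	    return true;
	  }
      }
    return false;
  }
  void close () override { child.close (); }
};
```

A driver loop — open the root, call `next` until it returns false, close — is all the "executor" this model needs; joins, sorts, and aggregates are just more `row_source` implementations stacked the same way.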
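
The external-sort entry's two-phase shape — bounded-memory run generation followed by a balanced k-way merge — also fits in a short sketch. This is not CUBRID code (real runs live in `FILE_TEMP` pages and the run generator is replacement-selection-style); here runs are plain in-memory vectors and the merge uses a min-heap of (value, run, position) triples, which is the k-way merge in miniature.

```cpp
#include <algorithm>
#include <cstddef>
#include <queue>
#include <tuple>
#include <vector>

// Phase 1: cut the input into sorted runs under a fixed in-memory budget.
std::vector<std::vector<int>> make_runs (const std::vector<int> &input,
					 std::size_t budget)
{
  std::vector<std::vector<int>> runs;
  for (std::size_t i = 0; i < input.size (); i += budget)
    {
      std::vector<int> run (input.begin () + i,
			    input.begin () + std::min (i + budget, input.size ()));
      std::sort (run.begin (), run.end ());
      runs.push_back (run);
    }
  return runs;
}

// Phase 2: k-way merge — the heap always holds the head of every live run,
// so popping the minimum and refilling from that run emits sorted output.
std::vector<int> merge_runs (const std::vector<std::vector<int>> &runs)
{
  typedef std::tuple<int, std::size_t, std::size_t> heap_item;  // (value, run, pos)
  std::priority_queue<heap_item, std::vector<heap_item>,
		      std::greater<heap_item>> heap;
  for (std::size_t r = 0; r < runs.size (); r++)
    {
      if (!runs[r].empty ())
	{
	  heap.push ({runs[r][0], r, 0});
	}
    }
  std::vector<int> out;
  while (!heap.empty ())
    {
      auto [v, r, p] = heap.top ();
      heap.pop ();
      out.push_back (v);
      if (p + 1 < runs[r].size ())
	{
	  heap.push ({runs[r][p + 1], r, p + 1});
	}
    }
  return out;
}
```

With N items and budget B the merge touches each item once per pass, which is why a balanced merge keeps disk traffic near 2N pages regardless of how many runs phase 1 produced.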

DDL & Schema (6)

Replication & HA (3)

  • CUBRID CDC — Streaming DML and DDL Through the WAL. How CUBRID streams DML and DDL changes downstream — the modern `cdc_*` API that walks `LOG_SUPPLEMENTAL_INFO` records forward through `log_reader`, alongside the legacy HA `la_*` log applier that replays log archives onto a slave.
  • CUBRID HA Replication — Logical-Log Based Master/Slave Replication via copylogdb and applylogdb. How CUBRID's master engine emits auxiliary `LOG_REPLICATION_DATA` / `LOG_REPLICATION_STATEMENT` records alongside its physiological WAL during DML, and how a separate `copylogdb` daemon ships log volumes to a slave host where `applylogdb` (`la_apply_log_file`) walks them forward and dispatches per-record-type back into the storage layer for serialised, transactionally consistent replay.
  • CUBRID Heartbeat — Cluster Liveness, Failover and Failback. How CUBRID's `cub_master` peers gossip liveness over UDP, score themselves into a single elected master per node's local view, and turn the resulting state transitions (slave→to-be-master→master, master→slave) into failover and failback through a job-queue FSM driving four worker threads.
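
The "elected master per node's local view" idea is worth seeing in isolation: there is no consensus round, only a per-node table of the last heartbeat state heard from each peer, reduced deterministically to one winner. The toy below is an illustration, not CUBRID code, and its "lowest score wins" rule is an assumption chosen purely for the sketch — the point is that every node runs the same pure function over its own (possibly stale) table.

```cpp
#include <map>
#include <string>

struct local_view
{
  std::map<std::string, int> score;   // node name -> last gossiped score
  std::map<std::string, bool> alive;  // node name -> heard from recently?

  // Deterministic local reduction: every node that holds the same table
  // elects the same master, with no message exchange at decision time.
  // "Lower score wins" is an assumption of this sketch.
  std::string elected_master () const
  {
    std::string best;
    int best_score = 1 << 30;
    for (const auto &p : score)
      {
	if (alive.at (p.first) && p.second < best_score)
	  {
	    best_score = p.second;
	    best = p.first;
	  }
      }
    return best;   // empty string when no live peer is known
  }
};
```

The weakness this buys is also visible here: two nodes with divergent `alive` tables can elect different masters, which is why the real FSM layers to-be-master transitions and fencing on top of the raw local decision.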

Procedural Language (3)

  • CUBRID PL/JavaSP — Java Stored Procedures, JDBC Bridge, and the PL/CSQL-Sibling External PL Engine. How CUBRID runs Java and PL/CSQL stored procedures through a separate JVM process (cub_pl) that shares catalog rows and transport infrastructure with PL/CSQL while JavaSP alone owns the reflective dispatch on user JARs, the classloader hierarchy, and the security sandbox.
  • CUBRID PL/CSQL — Oracle-Compatible Procedural SQL Compiled to Java in the PL Family Runtime. How CUBRID's PL/CSQL — the Oracle-dialect half of the PL family alongside JavaSP — is parsed by an ANTLR 4 grammar inside the shared `pl_server` JVM, lowered to a CUBRID-specific Java AST (DeclProgram / StmtBlock / ExprBinaryOp / loopOpt), translated to Java source by an emitter visitor, compiled by an in-process `javax.tools.JavaCompiler`, packaged as a JAR (Base64), and returned to the C-side `compile_handler` so `sp_add_stored_procedure_code` can persist it next to the same catalog rows JavaSP uses.
  • CUBRID PL Server Bridge — The Mid-Execution Callback Channel That Both PL Runtimes Ride On. Third sibling in the PL family — the shared mid-execution callback channel that both JavaSP and PL/CSQL ride on top of. Two physically distinct paths share the same opcode taxonomy and packed structures: Path A is the legacy `cub_server`→CAS channel for C-method scans (`SCAN_TYPE_METHOD` driven by `cubscan::method::scanner`, dispatched on the CAS side by `cubmethod::callback_handler::callback_dispatch`); Path B is the modern `cub_pl`→`cub_server` channel for JavaSP and PL/CSQL invocations (`SP_CODE_INTERNAL_JDBC` ferrying the same `METHOD_CALLBACK_*` opcodes, dispatched server-side by `cubpl::executor::response_callback_command`). Covers the request taxonomy (`METHOD_REQUEST_INVOKE` / `_CALLBACK` / `_END` / `_ARG_PREPARE` / `_COMPILE` / `_SQL_SEMANTICS` / `_GLOBAL_SEMANTICS`), the response taxonomy (~18 `METHOD_CALLBACK_*` opcodes), the compile-time semantic-check round-trips that PL/CSQL uses to validate embedded SQL, the recursion guard (`METHOD_MAX_RECURSION_DEPTH = 15`), and the `tran_begin/end_libcas_function` bracketing that scopes a callback nest under the parent transaction.
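
The recursion guard named in the bridge entry (`METHOD_MAX_RECURSION_DEPTH = 15`) is conceptually a depth counter bracketed around each nested callback. The sketch below is a hypothetical illustration, not CUBRID code: an RAII bracket (loosely in the spirit of the `tran_begin/end_libcas_function` scoping) bumps a depth on entry, refuses to go past the ceiling, and unwinds cleanly, so a procedure-calls-procedure chain fails with an error instead of blowing the stack.

```cpp
#include <stdexcept>

const int METHOD_MAX_RECURSION_DEPTH = 15;   // ceiling quoted in the doc above

// RAII bracket: constructing one opens a callback level, destruction closes it.
struct callback_bracket
{
  int &depth;

  explicit callback_bracket (int &d) : depth (d)
  {
    if (depth >= METHOD_MAX_RECURSION_DEPTH)
      {
	throw std::runtime_error ("max recursion depth exceeded");
      }
    depth++;
  }
  ~callback_bracket () { depth--; }
};

// Hypothetical nested invocation: each level opens one bracket, asks for one
// deeper level, and reports how deep the nest actually got.
int deepest_level_reached (int requested_levels, int &depth)
{
  callback_bracket guard (depth);
  if (requested_levels <= 1)
    {
      return 1;
    }
  try
    {
      return 1 + deepest_level_reached (requested_levels - 1, depth);
    }
  catch (const std::runtime_error &)
    {
      return 1;   // deeper call was refused; this level still completes
    }
}
```

Because the guard lives in a destructor, the counter is restored on every exit path — normal return or exception — which is exactly the property a callback nest under a parent transaction needs.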

Server Architecture (14)

  • CUBRID Boot — Server Startup, First-Time Creation, Restart-Recovery Dispatch, and Client ConnectHow `cub_server` brings every subsystem online — first-time `createdb` formats volumes and bootstraps the root-class catalog, restart hands off to `log_recovery`'s three-pass replay, and the client side wires `boot_restart_client` to `xboot_register_client` over the network.
  • CUBRID Broker — CAS Process Pool, Connection Routing, and the Client-Facing Front-EndHow CUBRID's `cub_broker` parent forks a fixed pool of `cub_cas` worker processes, exposes a single TCP listener, hands accepted client sockets to an idle CAS through a Unix-domain rendezvous channel using SCM_RIGHTS file-descriptor passing, and lets each CAS proxy the client's CSS-framed traffic upstream to `cub_server` — all coordinated through a single SysV shared-memory segment that also carries job queues, ACL state, monitoring counters, and the broker administration interface.
  • CUBRID DBI and CCI — Client API Surface, Statement Lifecycle, and Wire-Driver Façade. How CUBRID layers a single client-side `db_*` C API on top of `boot_cl` and `network_cl` — `db_open_buffer` / `db_compile_statement_local` / `db_execute_statement` / `db_query_first_tuple` / `db_query_get_tuple_value` / `db_close_session` walk every statement through a four-stage FSM (Initial → Compiled → Prepared → Executed) inside a `DB_SESSION`, and how the broker's CAS process wraps that same surface in a `T_SRV_HANDLE`-keyed wire driver (`ux_database_connect`, `ux_prepare`, `ux_execute`, `ux_fetch`, `ux_end_tran`) dispatched by a flat `server_fn_table` so JDBC, CCI, ODBC, Python, and PHP all reach the engine through the same `db_*` core.
  • CUBRID Error Management — Per-Thread Error Context, Stack, Message Catalog, and Wire Propagation. How CUBRID reports failures from any subsystem — the global `ER_*` error-code enum (`error_code.h`), the per-thread `cuberr::context` that owns a base `er_message` plus a `std::stack<er_message>` for nested errors, the `er_set` family that formats arguments through a printf-style spec compiled by `er_study_fmt`, the localised `cubrid.msg` / `csql.msg` / `utils.msg` catalogs loaded by `msgcat_init` in NetBSD/FreeBSD `nl_catd` format, the `cubrid_*.err` log file with size-based rotation and a `_latest` symlink, and the wire format (`er_get_area_error` / `er_set_area_error`) that flattens an error to three `OR_INT` fields plus the message string for client-server propagation.
  • CUBRID loaddb — Bulk Loader, Direct-Path Heap+B+Tree Insert, and Post-Load Statistics Rebuild. How CUBRID's loaddb utility tokenises a CUBRID-format object file, splits it into batches, ships each batch to a server-side worker pool that holds a Bulk-Update lock and writes through `locator_multi_insert_force` directly into heap pages, then closes the load with a class-by-class statistics rebuild.
  • CUBRID Locator — OID Workspace, Bulk Fetch/Flush, and the Server-Side Insert/Update/Delete Bridge. How CUBRID translates between in-memory objects and on-disk OIDs — a client-side workspace that batches dirty objects into LC_COPYAREA buffers and a server-side `locator_*_force` family that fans out into heap, btree, lock, log, FK, and replication paths through one canonical entry point.
  • CUBRID cub_master Process — Daemon Lifecycle, Connection Registry, Request Dispatch, and the Auto-Restart Server Monitor. End-to-end analysis of `cub_master` — the long-lived daemon that owns the per-host CUBRID service registry. Covers the boot sequence (`master_util_config_startup` → `css_does_master_exist` duplicate check → `css_daemon_start` fork → `css_master_init` socket binding + signal handlers → optional `hb_master_init` for HA → optional `server_monitor` instantiation when `auto_restart_server = on`); the `select()` loop that multiplexes the listening socket with every registered child connection in `css_Master_socket_anchor`; the `process_master_request` opcode taxonomy (~30 opcodes split across status / shutdown / HA-process registration / HA-info-query families) implementing what `commdb` and `cubrid commdb` send; the `server_monitor` C++ subsystem that runs a producer-consumer job queue (REGISTER / UNREGISTER / REVIVE / CONFIRM_REVIVE / SHUTDOWN) on a dedicated `std::thread` and uses `m_server_entry_map` to track every registered cub_server's PID + argv so it can re-fork on abnormal exit. Distinct from `cubrid-heartbeat.md` which covers the HA-replication subsystem layered on top of master.
  • CUBRID Monitoring — Perfmon Counters, Statistics Aggregation, and Per-Subsystem Monitors. How CUBRID instruments hot paths with two layered counter systems — a C++ template-based `cubmonitor` library that registers groups of statistics and supports per-transaction sheets, and the older C `perf_monitor`/`pstat_Metadata` array used by SHOW STATS and statdump — plus per-subsystem monitors such as the per-vacuum-worker overflow-page threshold tracker that keep their own non-counter state.
  • CUBRID Network Protocol — Connection Accept, NRP Dispatch, and Server-Side Request Handlers. How CUBRID frames every server entry point as one NET_SERVER_* opcode dispatched through a static table of `(action_attribute, handler)` records — connections accepted by `cub_master`, handed to `cub_server` workers via `master::connector` over a Unix-domain socket, then driven by an epoll-based `cubconn::connection::worker` that reads CSS-framed packets and delegates to symmetric `or_pack_*` / `or_unpack_*` request marshalling on both sides of the wire.
  • CUBRID SA vs CS Runtime — Standalone (linked-in server) vs Client-Server (over the wire) Modes. How CUBRID compiles the same source tree three times — `cub_server` (SERVER_MODE), `libcubridsa` (SA_MODE), `libcubridcs` (CS_MODE) — so admin utilities can either embed the entire engine in-process and operate on the on-disk database directly, or talk over CSS to a separately running daemon, with the choice driven by per-utility classification (SA_ONLY / CS_ONLY / SA_CS) and a runtime `dlopen` of either `libcubridsa.so` or `libcubridcs.so`.
  • CUBRID Server Session — Per-Client State, Prepared-Statement Registry, and TDES Binding. How CUBRID maintains a per-client server-side state container — the `SESSION_STATE` — keyed by an integer session id in a lock-free hash, cached on the connection entry for O(1) request lookup, and bound to the per-thread TDES so that every server request lands on its rightful transaction descriptor, prepared-statement cache, and parameter set.
  • CUBRID System Parameters — Tunable Registry, Conf/Env/URL Parsing, and Per-Session Scoping. How CUBRID's `prm_Def[]` registry, the `cubrid.conf` INI parser with section selection, environment-variable overrides, the `db_set_system_parameters` SQL path, and the per-session `SESSION_PARAM` array combine into one ordered resolution flow that every other subsystem reads through `prm_get_*_value`.
  • CUBRID Thread Manager NG — Connection/Worker Pool Redesign for High-Concurrency (CBRD-26177). Guava-version redesign of CUBRID's connection/worker pool — bounded epoll-driven connection workers, a coordinator brokering rebalancing and auto-scaling, send/recv budgets, per-worker context freelists, and atomic-free statistics — replacing the legacy thread-per-connection plus max_clients-task-worker layout described in cubrid-thread-worker-pool.md.
  • CUBRID Thread and Worker Pool — Workers, Daemons, Lock-Free Primitives, and Critical Sections. How CUBRID structures every server thread of execution — the per-thread `cubthread::entry` context, the `worker_pool` template (cores → workers → task queue) that runs queries / vacuum / loaddb / parallel-redo, the `daemon` + `looper` pattern that drives every periodic background flush and detection task, the lock-free hashmap shared by lock manager and page buffer, and the heavyweight `csect` RW primitive with its per-thread tracker.

Internationalization & Specialty (2)

  • CUBRID Charset and Collation — Codeset Conversion, Locale-Aware Comparison, and Multi-Encoding Support. How CUBRID encodes text in four codesets (binary, ISO-8859-1, EUC-KR, UTF-8), compiles LDML locale rules into shared libraries of UCA weights, and dispatches collation-aware comparison through a function-pointer LANG_COLLATION vtable consumed by B+Tree, sort, and string operators.
  • CUBRID Timezone — IANA Data Compilation, tz_id Resolution, and DATETIMETZ/TIMESTAMPTZ Conversion. How CUBRID compiles raw IANA tzdata files into a generated `timezones.c` and shared library `libcubrid_timezones.so`, loads the TZ_DATA blob at runtime via `dlsym`, packs a (zone, gmt-offset-rule, ds-rule) triple into a 32-bit `TZ_ID` with two reserved high bits distinguishing zone IDs from raw offsets, resolves wall-clock to UTC through `tz_datetime_utc_conv` walking the zone's offset-rule list and a daylight-saving ruleset while honouring the `LOCAL_STD` / `LOCAL_WALL` / `UTC` "AT" qualifier and overlap intervals, and exposes `DATETIMETZ`, `TIMESTAMPTZ`, `DATETIMELTZ`, and `TIMESTAMPLTZ` to the SQL layer through `tz_create_datetimetz`, `tz_conv_tz_datetime_w_region`, and `tz_explain_tz_id`.

Other (10)

  • CUBRID checksumdb — HA Replica vs Master Row-Checksum Verifier with Chunked, Replication-Replayed Comparison. End-to-end analysis of `checksumdb` — the HA-cluster integrity verifier that detects silent divergence between a master and its slaves by chunking each table along the primary key, computing per-chunk checksums on the master, replicating those checksums through the same WAL stream that ferries the data, and letting the slave recompute and compare. Covers the on-disk artefacts (`db_ha_apply_info_chksum_*` result table + `_schema` table) and the chunked walk (lower-bound iteration via the PK ordering, `--chunk-size` rows per chunk, `chksum_get_next_lower_bound` driving the cursor); the schema-checksum side that compares serialised class definitions; the include/exclude list filtering; the resume-from-prior-run mode; the report formatter (`chksum_report_summary` / `_diff` / `_schema_diff`); the SHARED lock acquired during chunk computation that allows reads but blocks writes long enough to make the chunk consistent.
  • CUBRID compactdb — Offline Database Compaction and Page Defragmentation Utility. How CUBRID's compactdb utility complements the online MVCC vacuum by walking each class heap, NULL-ing dangling OID references, reclaiming empty heap pages, dropping obsolete catalog representations, and defragmenting heap files — driven from the client side, scoped per class, and run in three numbered passes against an unmounted-but-restartable database.
  • CUBRID csql — Interactive SQL Client, Two-Binary Launcher Split, Session-Command Prefix, and Single-Line Execution Mode. End-to-end analysis of the `csql` interactive SQL client — the two-binary launcher pattern (`csql_launcher.c` parses argv, then `dlopen`s `cubridsa` or `cubridcs` and `dlsym`s into the `csql` entry depending on `--SA-mode` / `--CS-mode`); the CSQL_ARGUMENT option table and the validation matrix that rejects invalid combinations (`-p` / `-q` / `--loaddb-output`, `--write-on-standby` without `--sysadm`, `--skip-vacuum` only in SA); the `start_csql` read-execute-print loop with single-line-execution detection via `csql_walk_statement` / `csql_is_statement_complete` for in-block (string / comment / identifier) tracking; the session-command (`;`-prefixed) dispatch table with 47 entries split across file, edit, command, environment, help, and history families plus the `CMD_CHECK_CONNECT` flag that gates DDL-class commands when the session is offline; the readline / `stifle_history` integration and the `.hist` set of histogram commands; the DDL audit logging hookup (`logddl_*`) and the four output styles (column, line-output, plain, query, loaddb).
  • CUBRID cubrid Admin CLI — Verb Dispatcher, SA/CS-Routed Library Loading, and the Service · Server · Broker · Heartbeat Family. End-to-end analysis of the unified `cubrid` admin CLI — the two-axis verb taxonomy (service-family verbs `service|server|broker|manager|heartbeat|pl|gateway` × command-family verbs `start|stop|restart|status|reload|on|off|acl|reset|info|deregister|list|getid|test|copylogdb|applylogdb|replication`, plus the database-admin verbs `createdb|backupdb|loaddb|unloaddb|...|memmon`) backed by `us_Service_map` + `us_Command_map` + `ua_Utility_Map` tables; the bitmask cross-check (`MASK_SERVICE`/`MASK_SERVER`/`MASK_BROKER`/`MASK_HEARTBEAT`/`MASK_PL`/`MASK_GATEWAY`) that rejects nonsensical (verb, command) pairs at parse time; the SA_ONLY/CS_ONLY/SA_CS routing that picks `libcubridsa` vs `libcubridcs` per verb; the `dlsym` of a verb-specific entry function with the standard `UTIL_FUNCTION_ARG` signature; the legacy compatibility shim in `util_front.c` that translates old short-arg invocations (`createdb -p 1000 ...`, `loaddb -u dba ...`) into modern `cubrid <verb> --long-arg ...` and `execvp`s into the unified entry.
  • CUBRID loadjava — JAR · .class Installer for the JavaSP Classloader Tree. End-to-end analysis of `loadjava` — the small standalone utility that installs a `.class` or `.jar` into the database's per-database Java classloader root so JavaSP can find it. The whole utility is one C++17 file using `<filesystem>`. Covers the path resolution: `$CUBRID_DATABASES/<db>/java/<package>/<file>` for the dynamic (default) tree that the `cub_pl` JVM's `ContextClassLoader` watches, or `$CUBRID_DATABASES/<db>/java_static/<package>/<file>` (with `--jni`) for the static tree that is loaded once at JVM start; the `--package` flag with regex-validated dot notation that becomes the directory hierarchy under the install root; the `--overwrite` / `-y` flag for non-interactive overwrite; the deliberate `fs::remove` before copy that updates the directory's mtime so the JVM's classloader-manager picks up the change without restart (CBRD-24695); the lack of any database-server connection — the install is purely filesystem-side, with the JVM-side classloader manager doing the discovery.
  • CUBRID migrate — One-Shot 9.1→9.2 In-Place Format Upgrader for Volume Headers, Active Log Codeset, and Collation Sync. End-to-end analysis of `migrate` — the version-locked one-shot in-place format upgrader from CUBRID 9.1 to 9.2 disk format. Covers the four-phase sequence: (1) per-volume header rewrite that converts the v9.1 disk-var-header layout to v9.2 (with an undo journal so a mid-migration crash can be rolled back via `undo_fix_volume_header`); (2) active-log codeset patch (`fix_codeset_in_active_log`) that retags the log header's codeset field consistently with the catalog; (3) `db_restart` boot, `synccoll_force` to refresh the catalog's collation rows from the new locale library, with strict per-collation compatibility checking (codeset match, checksum match unless contractions present); (4) `file_update_used_pages_of_vol_header` to refresh used-pages statistics. The lockstep undo discipline and the explicit `rel_disk_compatible() != V9_2_LEVEL` guard make migrate the canonical reference for how CUBRID handles disk-format upgrades.
  • CUBRID SHOW Commands — System-Introspection Virtual Scans Over Server Runtime State. How CUBRID exposes server-internal state — volume headers, log headers, heap and B+Tree capacity, critical sections, threads, page buffer, transaction tables, timezones — as queryable virtual tables by rewriting `SHOW <name>` into `SELECT * FROM (PT_SHOWSTMT)`, dispatching the resulting access spec through `S_SHOWSTMT_SCAN`, and synthesising tuples on demand from per-`SHOWSTMT_TYPE` start/next/end function pointers.
  • CUBRID System Catalog Classes — Data-Driven Definition, Bootstrap Install, and System View Query Specs. How CUBRID defines and bootstraps the SQL-visible system class family — `_db_class`, `_db_attribute`, `_db_index`, `_db_auth`, ... — through a data-driven framework. Each class is a `cubschema::system_catalog_definition` (attributes + constraints + grants + optional row initializer) registered into a global vector at `catcls_init`; a single `system_catalog_builder` walks the vector at `catcls_install`, issuing `db_create_class` → `smt_add_attribute` → `sm_update_class` → constraints/grants for every class, then a second pass builds the system views from their query-spec strings. Separated from the disk-layer `catalog_manager` and from the in-memory `schema_manager`, this module owns the *meta-schema of the SQL-visible system catalog itself*.
  • CUBRID unloaddb — Schema and Data Export, Four-File Output Layout, and the Per-Class Multi-Thread · Multi-Process Driver. End-to-end analysis of `unloaddb` — the schema-and-data exporter that is the natural inverse of `loaddb`. Covers the four-file output layout (`<prefix>_schema`, `<prefix>_trigger`, `<prefix>_indexes`, `<prefix>_objects`) emitted in a fixed order so a downstream `loaddb` can replay schema → triggers → data → indexes; the `extract_context` carrier struct that threads user / auth / storage-order / split-files / dba-bypass settings into every emitter; the `extract_classes_to_file` → `emit_schema(EXTRACT_CLASS)` → `emit_schema(EXTRACT_VCLASS)` ordering that resolves table-before-view dependencies; the `extract_objects` driver with `--thread-count` (per-class concurrent fetch, ≤127) and `--mt-process N/M` (mutually-exclusive multi-process partitioning across ≤36 processes by class); the `--datafile-per-class` mode that writes one data file per class instead of a single combined file; the `--latest-image` MVCC snapshot toggle; the `--keep-storage-order` vs default `FOLLOW_ATTRIBUTE_ORDER` axis; the `--as-dba` group bypass for cross-owner extracts; the `--split-schema-files` and `--input-class-file` selective-extract path.
  • CUBRID Utilities Miscellany — commdb, gencat, generate_timezone, daemon, cubrid_version, pl Bootstrap Helper. Omnibus coverage of the small `src/executables/` utilities that don't warrant a dedicated doc each. Six tools span three categories: (1) **runtime probes / helpers** — `commdb` (the standalone `cub_master` RPC client predating `cubrid commdb`; sends MASTER_REQ_* opcodes), `pl` (cub_pl JVM bootstrap helper used by `cubrid pl ping/start/stop/restart/status` reading `$CUBRID/var/pl_<db>.info`), `daemon` (tiny `fork`/`setsid` double-fork wrapper used by other binaries that don't already detach themselves); (2) **build-time generators** — `gencat` (NetBSD-derived POSIX message-catalog compiler that produces the binary `.cat` files for `MSGCAT_CATALOG_*` lookups), `generate_timezone` (62-line wrapper around `timezone_compile_data` that turns IANA `tzdata` into the C source the `cubrid_timezones` library is built from); (3) **trivia** — `cubrid_version` (37-line program that prints `PRODUCT_STRING` and exits).