
CUBRID SA vs CS Runtime — Standalone (linked-in server) vs Client-Server (over the wire) Modes


A modern relational engine is not a single program. It is a protocol boundary with a client side, a server side, and a wire format that glues them. The defining choice that an engine makes early — usually in its first commit, almost never reversed — is whether that boundary is a runtime boundary (two processes, an actual TCP socket, real serialisation in both directions) or a compile-time boundary (one process, a function call, a memcpy). The two stances are not interchangeable: each makes one class of operation easy and the other class hard.

The two-process stance — what most production engines look like — puts the storage manager, the transaction manager, the lock manager, the buffer pool, and the recovery manager into a long-lived daemon process. Every client tool, every application, every administrative utility connects to that daemon over a network protocol; even a tool running on the same host as the daemon goes through the socket. The daemon is the single mediator of the on-disk database state. Two clients writing the same row see consistent locking because both go through the same in-memory lock table. The buffer pool is shared because it lives in the daemon’s heap. Crash recovery happens once, when the daemon starts, and the only mode in which the bytes on disk are touched without going through the daemon is a backup or a filesystem-level copy.

The two-process stance has two costs. The first is operational: the daemon must be running before anything else works. The second is administrative: a class of operations — “rebuild the database from scratch”, “compact the heap by physically moving every record”, “roll the WAL forward and stop”, “load a billion rows as fast as the disk can write them” — wants exclusive access to the database. Doing those operations through the daemon means convincing the daemon to stop accepting other connections, doing the work, and convincing it to resume. Doing them without the daemon is faster, simpler, and less error-prone if the engine can be made to run in the utility’s own process.

This is where the in-process stance appears, and almost every engine has some version of it. The same code that the daemon runs to manage pages, log records, locks, and B+Trees is compiled into a library. A standalone utility links that library into its own binary, opens the database files itself, runs whatever recovery is needed, performs its operation, flushes pages, and exits. There is no socket. There is no daemon. The utility is the engine for as long as it runs. The cost of this stance is also operational: the utility must hold an exclusive lock on the database files for its entire lifetime, because two such utilities running concurrently — or one such utility running while a daemon is also running — would corrupt the database.

The third axis is what the engine does with its parser, optimiser, and query executor. These components are conceptually “client side” in the SQL sense — they translate SQL text into an executable plan — but they need the schema, statistics, and class metadata that live on disk. An engine can put them anywhere on the boundary: in the daemon (everything is centralised, the wire format is SQL text); in the client library (the daemon never sees SQL, the wire format is already-planned executable bytecode); or in a hybrid (the parser runs client-side, the optimiser runs server-side). CUBRID picks the client-side placement: SQL is parsed, name-resolved, and translated to XASL on whichever side of the boundary the user code is running, and the XASL stream is what crosses the wire.

The combination — exclusive-access utilities that compile the entire engine into their own process, plus a parser that lives on the client side regardless — is what CUBRID calls SA mode (“standalone”). The opposite combination — utilities and applications that link only the client side and reach the storage manager through a network protocol to a separately running cub_server — is CS mode (“client-server”). The choice is made at compile time (different preprocessor symbols, different libraries) and resolved at runtime by which library the utility’s launcher chooses to dlopen.

Every mature engine has had to answer the same question — “what does the user run when there is no daemon to talk to?” — and the answers differ in revealing ways.

PostgreSQL keeps a one-process mode reachable through postgres --single, the so-called single-user backend. This mode exists primarily for crash recovery scenarios where the postmaster will not start: the user invokes the postgres binary directly, and that binary opens the data directory, runs WAL recovery, and presents a SQL prompt on stdin/stdout. Every layer above storage — parser, planner, executor — runs in the same process. The on-disk format is identical to multi-user mode; the difference is only that no postmaster, no shared memory, and no other backends are present. The recovery utility pg_resetwal is a separate strand of the same idea: a binary that reads PostgreSQL’s WAL file format directly, without going through any backend, and rewrites the control file.

SQLite is the extreme case of only the in-process stance. SQLite has no daemon. The library libsqlite3.so is the engine, and every program that uses SQLite links it directly. There is no “client mode” because there is no server. Concurrency between processes is handled by file locking on the database file itself (POSIX advisory locks or, on later versions, WAL-mode shared memory). The price of this design is that two SQLite “clients” cannot share a buffer pool — each process has its own page cache — and writers must serialise via the file lock. The benefit is that the engine starts in microseconds and has no operational footprint.

Oracle uses a different split. The default Oracle binary connects to a running instance (the shadow process model), but Oracle also ships a Direct Path API for SQL*Loader and Data Pump that lets a client process bypass the SQL layer and write blocks directly into data files. The blocks are formatted in user space and either streamed through the server (which still mediates the disk write) or, with parallel direct path, written by client processes that hold appropriate locks at the segment level. The architectural lesson is that even an otherwise pure two-process engine needs a code path where the client can construct on-disk structures itself.

MySQL has historically shipped an embedded server library (libmysqld) that linked the entire MySQL server into a client binary. The server-mode and embedded-mode binaries used the same storage and SQL code paths but differed in transport: embedded mode called server functions directly, while normal mode used the client protocol. The library was effectively deprecated in MySQL 8 because maintaining two transport paths through the same code base was an ongoing source of bugs — the SA-vs-CS divergence is a real and recurring engineering tax.

CUBRID sits closest to the MySQL embedded model, with one important refinement that PostgreSQL’s postgres --single lacks: the SA-mode utilities are not “an extra mode of the daemon binary” but a separate library (libcubridsa.so) that the utility’s launcher loads on demand. The same source tree compiles three times — once with SERVER_MODE for cub_server, once with SA_MODE for libcubridsa.so, once with CS_MODE for libcubridcs.so — and the admin utility (cub_admin, csql, cub_compactdb, cub_loaddb) chooses which to load at runtime based on a per-utility classification table.

CUBRID’s SA/CS distinction is layered. There are five layers of mechanism — preprocessor flags, library wiring, per-utility classification, runtime library selection, and per-call dispatch divergence — and each one builds on the one below.

The same .c and .cpp files appear in both sa/CMakeLists.txt and cs/CMakeLists.txt (and many of them in cubrid/CMakeLists.txt too), but the three CMake targets pass different target_compile_definitions so the preprocessor produces three different translation units. The relevant lines are:

# cubrid/CMakeLists.txt — the cub_server daemon
target_compile_definitions(cubrid PRIVATE SERVER_MODE EnableThreadMonitoring ${COMMON_DEFS})
# sa/CMakeLists.txt — the standalone library
target_compile_definitions(cubridsa PRIVATE SA_MODE CUBRID_EXPORTING ${COMMON_DEFS})
# cs/CMakeLists.txt — the client-server library
target_compile_definitions(cubridcs PRIVATE CS_MODE CUBRID_EXPORTING ${COMMON_DEFS})

Three flags, mutually exclusive. Every conditional compilation block across the source tree branches on #if defined(CS_MODE), #if defined(SA_MODE), or #if defined(SERVER_MODE). The working convention is that SA_MODE or SERVER_MODE means “the server side is compiled into this binary”, while CS_MODE means “the server side is reachable only over the wire”.

The composition of source files differs between the three libraries and reveals the architecture:

  • cs/CMakeLists.txt builds libcubridcs.so from the client-side files only. The transaction directory contributes boot_cl.c, transaction_cl.c, locator_cl.c, but not boot_sr.c, transaction_sr.c, or locator_sr.c. The communication directory contributes network_cl.c, network_interface_cl.c, and network_callback_cl.cpp — the file network_cl.c carries an explicit #error Does not belong to cs module guard at line 65 that fires if the file is ever pulled into the wrong target. The storage directory contributes only the small client-side stubs (statistics_cl.c, storage_common.c) — there is no btree.c, no heap_file.c, no page_buffer.c in the CS build. The result is a library that knows how to parse and plan SQL and how to talk to a server, but cannot open a database file by itself.

  • sa/CMakeLists.txt builds libcubridsa.so from everything. Every server-side file (boot_sr.c, locator_sr.c, btree.c, heap_file.c, page_buffer.c, log_manager.c, lock_manager.c, query_executor.c, vacuum.c, …) is compiled in, alongside every client-side file. The communication directory contributes network_interface_cl.c (the dispatch layer) but not network_cl.c — there is no socket transport in SA mode. The resulting library is large (the whole engine) and self-sufficient: it can open the on-disk database, run recovery, execute queries, and shut down without ever opening a socket.

  • cubrid/CMakeLists.txt builds cub_server, the daemon binary. It has every server-side file plus network_sr.c and network_interface_sr.cpp (the server-side transport) but none of the *_cl.c files. There is no parser in cub_server. Parsing happens client-side; the daemon receives compiled XASL and executes it.

The two libraries and the daemon binary are deployed side by side. A typical CUBRID installation has lib/libcubridcs.so, lib/libcubridsa.so, and bin/cub_server. Each utility binary picks one of the two libraries at startup.

The choice between SA and CS for an administrative utility is encoded in src/executables/util_admin.c as a static table:

// ua_Utility_Map — src/executables/util_admin.c
static UTIL_MAP ua_Utility_Map[] = {
{CREATEDB, SA_ONLY, 2, UTIL_OPTION_CREATEDB, "createdb", ...},
{DELETEDB, SA_ONLY, 1, UTIL_OPTION_DELETEDB, "deletedb", ...},
{BACKUPDB, SA_CS, 1, UTIL_OPTION_BACKUPDB, "backupdb", ...},
{RESTOREDB, SA_ONLY, 1, UTIL_OPTION_RESTOREDB, "restoredb", ...},
{ADDVOLDB, SA_CS, 2, UTIL_OPTION_ADDVOLDB, "addvoldb", ...},
{SPACEDB, SA_CS, 1, UTIL_OPTION_SPACEDB, "spacedb", ...},
{LOCKDB, CS_ONLY, 1, UTIL_OPTION_LOCKDB, "lockdb", ...},
{KILLTRAN, CS_ONLY, 1, UTIL_OPTION_KILLTRAN, "killtran", ...},
{OPTIMIZEDB, SA_ONLY, 1, UTIL_OPTION_OPTIMIZEDB, "optimizedb", ...},
{INSTALLDB, SA_ONLY, 1, UTIL_OPTION_INSTALLDB, "installdb", ...},
{DIAGDB, SA_ONLY, 1, UTIL_OPTION_DIAGDB, "diagdb", ...},
{CHECKDB, SA_CS, 1, UTIL_OPTION_CHECKDB, "checkdb", ...},
{LOADDB, SA_CS, 1, UTIL_OPTION_LOADDB, "loaddb_user", ...},
{UNLOADDB, SA_CS, 1, UTIL_OPTION_UNLOADDB, "unloaddb", ...},
{COMPACTDB, SA_CS, 1, UTIL_OPTION_COMPACTDB, "compactdb", ...},
{STATDUMP, CS_ONLY, 1, UTIL_OPTION_STATDUMP, "statdump", ...},
{CHANGEMODE, CS_ONLY, 1, UTIL_OPTION_CHANGEMODE, "changemode", ...},
{COPYLOGDB, CS_ONLY, 1, UTIL_OPTION_COPYLOGDB, "copylogdb", ...},
{APPLYLOGDB, CS_ONLY, 1, UTIL_OPTION_APPLYLOGDB, "applylogdb", ...},
{VACUUMDB, SA_CS, 1, UTIL_OPTION_VACUUMDB, "vacuumdb", ...},
{CHECKSUMDB, CS_ONLY, 1, UTIL_OPTION_CHECKSUMDB, "checksumdb", ...},
{FLASHBACK, CS_ONLY, 2, UTIL_OPTION_FLASHBACK, "flashback", ...},
// ... condensed ...
{-1, -1, 0, 0, 0, 0, 0}
};

Each row tags a utility with one of three classes:

  • SA_ONLY — must run with no cub_server attached. These are the operations that need exclusive access to the on-disk state: createdb (the database does not exist yet), deletedb (the database is about to disappear), restoredb (the on-disk state is being overwritten from a backup), optimizedb (rebuilds statistics by scanning every heap page directly), installdb (installs system catalog from scratch), genlocale / dumplocale (rewrites locale data files), synccolldb (rewrites collation tables). Allowing CS mode here would either be incorrect (two writers to the same files) or impossible (the daemon does not run yet because the database does not exist).

  • CS_ONLY — must run with a cub_server attached. These operations interact with a live server: lockdb (dumps the in-memory lock table), killtran (kills a running transaction), statdump (dumps in-memory performance counters), changemode (HA state transitions), copylogdb / applylogdb (HA replication daemons), flashback (reads online change history), checksumdb (HA-mode consistency checks). None of these operate on a quiescent on-disk database — they all need the daemon’s in-memory state.

  • SA_CS — runs in either mode, user picks. These are the utilities where both stances make sense: backupdb (online backup through the server, or offline backup with exclusive access), loaddb (load through the server while applications still query, or load directly into a quiescent database for maximum throughput), unloaddb, compactdb, checkdb, vacuumdb, addvoldb, spacedb, paramdump, tde. The -S / -C command-line flag picks the mode; the default depends on the utility.

Once util_admin.c knows which mode the user requested, it has to load the matching shared library. The selector is util_get_library_name:

// util_get_library_name — src/executables/util_admin.c
static const char *
util_get_library_name (int utility_index)
{
  int utility_type = ua_Utility_Map[utility_index].utility_type;
  UTIL_ARG_MAP *arg_map = ua_Utility_Map[utility_index].arg_map;

  switch (utility_type)
    {
    case SA_ONLY:
      return LIB_UTIL_SA_NAME;
    case CS_ONLY:
      return LIB_UTIL_CS_NAME;
    case SA_CS:
      // SA_CS utilities accept -S, -C, or HIDDEN_CS_MODE_S as a flag.
      for (int i = 0; arg_map[i].arg_ch; i++)
	{
	  int key = arg_map[i].arg_ch;
	  if (key == 'C' && arg_map[i].arg_value.p != NULL)
	    return LIB_UTIL_CS_NAME;
	  if (key == HIDDEN_CS_MODE_S && arg_map[i].arg_value.p != NULL)
	    return LIB_UTIL_CS_NAME;
	  if (key == 'S' && arg_map[i].arg_value.p != NULL)
	    return LIB_UTIL_SA_NAME;
	}
      break;
    }

  if (utility_index == VACUUMDB || utility_index == TDE)
    return LIB_UTIL_SA_NAME;
  return LIB_UTIL_CS_NAME;	// SA_CS default for everything else: CS
}

The library names are resolved at compile time by utility.h:

// LIB_UTIL_*_NAME — src/executables/utility.h
#if defined(WINDOWS)
#define LIB_UTIL_CS_NAME "cubridcs.dll"
#define LIB_UTIL_SA_NAME "cubridsa.dll"
#elif defined(__APPLE__)
#define LIB_UTIL_CS_NAME "libcubridcs.dylib"
#define LIB_UTIL_SA_NAME "libcubridsa.dylib"
#else
#define LIB_UTIL_CS_NAME "libcubridcs.so"
#define LIB_UTIL_SA_NAME "libcubridsa.so"
#endif

The library is then dlopen’d, and a function pointer to the utility’s entry point is looked up by name with dlsym:

// util_admin.c — main path through util_get_library_name -> utility_load_library
library_name = util_get_library_name (utility_index);
status = utility_load_library (&library_handle, library_name);
// ... error handling ...
utility_load_symbol (library_handle, &symbol, function_name);
// ... call (*func) (arg) ...

The same pattern repeats in csql_launcher.c for the SQL shell:

// csql_launcher.c — runtime library choice
if (csql_arg.sa_mode)
utility_load_library (&util_library, LIB_UTIL_SA_NAME);
else
utility_load_library (&util_library, LIB_UTIL_CS_NAME);
utility_load_symbol (util_library, (DSO_HANDLE *) (&csql), "csql");
error = (*csql) (argv[0], &csql_arg);

The architectural consequence is that the binaries on disk (cub_admin, csql, cub_compactdb, …) are very thin. They are launchers. They parse command-line flags, classify the request, load the right .so, and call into it. All of the actual database logic — including parsing — lives in the shared library that the launcher chose at runtime.

flowchart TD
    A[user runs cub_admin compactdb -S mydb] --> B[util_admin.c main]
    B --> C{lookup ua_Utility_Map\nfor compactdb}
    C -->|SA_CS, -S given| D[util_get_library_name -> LIB_UTIL_SA_NAME]
    C -->|SA_CS, -C given| E[util_get_library_name -> LIB_UTIL_CS_NAME]
    C -->|SA_ONLY| D
    C -->|CS_ONLY| E
    D --> F[dlopen libcubridsa.so]
    E --> G[dlopen libcubridcs.so]
    F --> H[dlsym compactdb -> sa-mode entry]
    G --> I[dlsym compactdb -> cs-mode entry]
    H --> J[in-process: open DB,\nrun recovery, compact, exit]
    I --> K[CSS connect to cub_server,\nsend RPC, server compacts]

The single most-edited pattern in the CUBRID source tree is the client-side dispatch function whose body is split by #if defined(CS_MODE) / #else / #endif. The CS branch packs arguments into a wire buffer and sends them across CSS to the server; the SA branch calls the corresponding xfoo_* server-side implementation directly. The same public symbol — locator_fetch, boot_register_client, heap_create, … — is available to the layer above in both modes.

The canonical example is locator_fetch in src/communication/network_interface_cl.c:

// locator_fetch — src/communication/network_interface_cl.c
int
locator_fetch (OID * oidp, int chn, LOCK lock,
	       LC_FETCH_VERSION_TYPE fetch_version_type,
	       OID * class_oid, int class_chn, int prefetch,
	       LC_COPYAREA ** fetch_copyarea)
{
#if defined(CS_MODE)
  // ... pack OID, chn, lock, fetch_version_type, class_oid into request ...
  req_error =
    net_client_request_recv_copyarea (NET_SERVER_LC_FETCH, request,
				      OR_ALIGNED_BUF_SIZE (a_request),
				      reply, OR_ALIGNED_BUF_SIZE (a_reply),
				      fetch_copyarea);
  // ... unpack reply, return success code ...
  return success;
#else /* CS_MODE */
  int success = ER_FAILED;
  THREAD_ENTRY *thread_p = enter_server ();

  success =
    xlocator_fetch (thread_p, oidp, chn, lock, fetch_version_type,
		    class_oid, class_chn, prefetch, fetch_copyarea);

  exit_server (*thread_p);
  return success;
#endif /* !CS_MODE */
}

In CS mode the function packs its arguments, calls net_client_request_recv_copyarea (which sits on top of CSS — the CUBRID Socket Service — and ultimately writes to a TCP socket), and unpacks the reply. In SA mode the function obtains a thread entry from the embedded thread manager, calls the server-side implementation xlocator_fetch directly through a normal C call, and releases the thread entry. The two paths converge on the same return type and the same error reporting; callers above this layer cannot tell which branch they are in.

The thread-entry plumbing in SA mode is interesting. SA mode has no real thread pool — the utility is a single thread of control — but the server-side code paths assume they are reached by a worker thread that owns a THREAD_ENTRY *. SA mode fakes this with enter_server / exit_server:

// enter_server, exit_server — src/communication/network_interface_cl.c
unsigned int db_on_server = 0;

#if defined (SA_MODE)
static void
enter_server_no_thread_entry (void)
{
  db_on_server++;
  er_stack_push_if_exists ();
  if (private_heap_id == 0)
    {
      assert (db_on_server == 1);
      private_heap_id = db_create_private_heap ();
    }
}

static THREAD_ENTRY *
enter_server ()
{
  enter_server_no_thread_entry ();
  return thread_get_thread_entry_info ();
}

static void
exit_server_no_thread_entry (void)
{
  if ((db_on_server - 1) == 0 && private_heap_id != 0)
    {
      db_clear_private_heap (NULL, private_heap_id);
    }
  er_restore_last_error ();
  db_on_server--;
}
#endif // SA_MODE

The counter db_on_server tracks recursion depth: SA-mode call sequences can re-enter “server space” (a query inside a stored procedure that itself triggers another query), and the private heap must be cleared only on the outermost return. The function thread_get_thread_entry_info returns the singleton thread entry that the SA-mode thread manager allocates at boot. From the perspective of the server-side code this is indistinguishable from being called by cub_server’s worker pool.

Boot is where the SA/CS difference is most visible. In CS mode boot_restart_client opens a TCP connection to a cub_server that is already running and asks it to register a new client. In SA mode the same function calls into the server-side boot_restart_server in the same process, which performs full crash recovery if needed.

The key fragment of boot_initialize_client (the createdb path) is:

// boot_initialize_client — src/transaction/boot_cl.c
int
boot_initialize_client (BOOT_CLIENT_CREDENTIAL * client_credential, ...)
{
  // ... lang_init, msgcat_init, sysprm load, area_init ...

#if defined(CS_MODE)
  /* Initialize the communication subsystem */
  error_code =
    boot_client_initialize_css (db, client_credential->client_type, false,
				BOOT_NO_OPT_CAP, false,
				DB_CONNECT_ORDER_SEQ, false);
  if (error_code != NO_ERROR)
    goto error_exit;
#endif /* CS_MODE */

  // ... tp_init, perfmon_initialize ...

  /* Initialize the disk and the server part */
  tran_index =
    boot_initialize_server (client_credential, db_path_info, db_overwrite,
			    file_addmore_vols, npages, db_desired_pagesize,
			    log_npages, db_desired_log_page_size,
			    &rootclass_oid, &rootclass_hfid,
			    tran_lock_wait_msecs, tran_isolation);
  // ...
}

The function boot_initialize_server always exists — it is the public API. But its implementation in the dispatch layer (network_interface_cl.c) differs by mode:

// boot_initialize_server (CS dispatch / SA pass-through)
// — src/communication/network_interface_cl.c
int
boot_initialize_server (const BOOT_CLIENT_CREDENTIAL * client_credential,
			BOOT_DB_PATH_INFO * db_path_info, ...)
{
#if defined(CS_MODE)
  /* Should not called in CS_MODE */
  assert (0);
  return NULL_TRAN_INDEX;
#else /* CS_MODE */
  int tran_index = NULL_TRAN_INDEX;

  enter_server_no_thread_entry ();

  tran_index =
    xboot_initialize_server (client_credential, db_path_info, db_overwrite,
			     file_addmore_vols, db_npages,
			     db_desired_pagesize, log_npages,
			     db_desired_log_page_size, rootclass_oid,
			     rootclass_hfid, client_lock_wait,
			     client_isolation);

  exit_server_no_thread_entry ();

  return (tran_index);
#endif /* !CS_MODE */
}

This is one of the few places where the CS-mode body is assert(0). The reason is that creating a brand-new database in CS mode would require a cub_server to start up against a non-existent database — a contradiction. createdb is therefore SA_ONLY in ua_Utility_Map, and the assert fires only if a developer accidentally wires a CS-mode caller into the createdb path.

The restart path branches more interestingly. boot_restart_client in boot_cl.c is called by every client (every csql session, every loaddb, every JDBC connection — but not every utility, because some utilities skip the registration phase). In SA mode it ends up calling into boot_restart_server directly through a chain that, for the embedded build, performs full disk-side initialisation:

sequenceDiagram
    participant U as utility
    participant CL as boot_cl.c (boot_restart_client)
    participant DISP as network_interface_cl.c
    participant SR as boot_sr.c (xboot_register_client / boot_restart_server)
    participant DAEMON as cub_server (CS only)

    rect rgb(230, 240, 255)
    Note over U,SR: SA mode
    U->>CL: boot_restart_client(creds)
    CL->>DISP: boot_register_client(creds)
    DISP->>DISP: enter_server_no_thread_entry()
    DISP->>SR: xboot_register_client(...)
    SR->>SR: boot_restart_server(...) if needed
    SR->>SR: log_recovery, locator_initialize, ...
    SR-->>DISP: tran_index
    DISP->>DISP: exit_server_no_thread_entry()
    DISP-->>CL: tran_index
    CL-->>U: NO_ERROR
    end

    rect rgb(255, 240, 230)
    Note over U,DAEMON: CS mode
    U->>CL: boot_restart_client(creds)
    CL->>CL: boot_client_initialize_css(db, ...)
    CL->>DAEMON: NET_SERVER_BO_REGISTER_CLIENT (over CSS)
    DAEMON->>SR: xboot_register_client(...)
    Note right of SR: cub_server already booted;<br/>recovery happened at server start
    SR-->>DAEMON: BOOT_SERVER_CREDENTIAL
    DAEMON-->>CL: response packet
    CL-->>U: NO_ERROR
    end

The diagram makes the structural point: in SA mode boot_sr.c runs in the utility’s own process, and crash recovery (boot_restart_server calling into log_recovery) is triggered the first time the utility “opens” the database. In CS mode boot_sr.c runs in the daemon, and recovery already happened when the daemon started.

An SA-mode utility holds the database files exclusively for its lifetime, because the same files cannot be safely opened by two processes that both believe they own the buffer pool, the lock table, and the WAL writer. CUBRID enforces this through cooperative mechanisms — an SA-mode utility refuses to start if cub_server is already running on the same database, and cub_server refuses to start if an SA-mode utility is currently using the database. The mechanism is the database volume lock: boot_restart_server attempts to acquire an exclusive flock-style lock on the volume information file, and if that fails it reports ER_BO_CWD_FAIL / ER_BO_CANNOT_FINE_VOLINFO or ER_LOG_DOESNT_CORRESPOND_TO_DATABASE depending on the failure mode. In production the rule is simply “stop the server before running an SA-only utility”.

Recovery in SA mode is literally the same code as recovery in cub_server, compiled into a different binary. When compactdb -S opens a database that was not cleanly shut down last time, the boot_restart_server path inside the standalone process detects the unclean shutdown via the log control record, runs log_recovery (analysis, redo, undo phases) before the utility’s first read, and only then performs the compaction. The utility does not know that recovery happened — it requested an ordinary database open and got one. The price is that an SA-mode utility that inherits a damaged database can take a long time to start; the benefit is that the operator does not have to remember to “start the server first to recover, then stop it, then run the utility”.

The error symbols ER_NOT_IN_STANDALONE and ER_ONLY_IN_STANDALONE (defined at src/base/error_code.h:734) are the runtime guard rails. Code that absolutely requires CS mode raises ER_NOT_IN_STANDALONE when called in SA mode (e.g. a db@host syntax in boot_restart_client line 847 — connecting to a remote host is meaningless when the server is in your own process). Code that absolutely requires SA mode raises ER_ONLY_IN_STANDALONE (e.g. log_manager.c line 1018, where the no-logging mode is only acceptable in a single-process build).

The mapping between utilities and modes is the single most important table in this analysis. The shape of the table — many SA_ONLY, several CS_ONLY, a smaller SA_CS middle — reflects a design rule that is worth stating explicitly:

A utility is SA_ONLY when it operates on the on-disk state of a quiescent database. A utility is CS_ONLY when it operates on the runtime state of a live server. A utility is SA_CS when the same logical operation makes sense in either context, and the operator chooses the trade-off (throughput vs concurrency).

Translated into specific utilities:

| Utility | Class | Why |
| --- | --- | --- |
| createdb | SA_ONLY | Database does not exist yet; daemon cannot connect to nothing. |
| deletedb | SA_ONLY | Files about to vanish; daemon must not hold them open. |
| restoredb | SA_ONLY | Bytes are being overwritten from a backup image; exclusive access required. |
| installdb | SA_ONLY | Installs system catalog into a freshly created database. |
| optimizedb | SA_ONLY | Rebuilds statistics by direct heap scan. |
| diagdb | SA_ONLY | Reads on-disk pages directly for diagnostic dumping. |
| patchdb | SA_ONLY | Forcibly mutates control structures; cannot share with a live writer. |
| alterdbhost | SA_ONLY | Rewrites database location metadata. |
| genlocale | SA_ONLY | Builds locale data files; standalone tool. |
| dumplocale | SA_ONLY | Dumps locale data files; standalone tool. |
| synccolldb | SA_ONLY | Rewrites collation tables. |
| gen_tz | SA_ONLY | Builds timezone library; standalone tool. |
| dump_tz | SA_ONLY | Dumps timezone library; standalone tool. |
| restoreslave | SA_ONLY | Sets up a slave from a backup; same trust model as restoredb. |
| lockdb | CS_ONLY | Dumps the daemon’s in-memory lock table. |
| killtran | CS_ONLY | Kills a transaction running inside the daemon. |
| plandump | CS_ONLY | Dumps the daemon’s plan cache. |
| statdump | CS_ONLY | Dumps the daemon’s performance counters. |
| changemode | CS_ONLY | HA state transition; only meaningful with a live daemon. |
| copylogdb | CS_ONLY | HA replication daemon that pulls log records from the master. |
| applylogdb | CS_ONLY | HA replication daemon that applies pulled log to the slave. |
| applyinfo | CS_ONLY | Reads HA replication state. |
| acldb | CS_ONLY | Reloads the daemon’s IP ACL. |
| tranlist | CS_ONLY | Lists transactions running in the daemon. |
| checksumdb | CS_ONLY | HA-mode page-by-page consistency check between master and slave. |
| flashback | CS_ONLY | Reads online change history kept by the running server. |
| memmon | CS_ONLY | Reads the daemon’s memory monitor. |
| backupdb | SA_CS | Online (CS) or offline (SA) backup; trade-off is concurrency vs simplicity. |
| addvoldb | SA_CS | Add a volume online (CS) or offline (SA). |
| spacedb | SA_CS | Report space usage; CS reads from live server, SA scans on-disk metadata. |
| cleanfiledb | SA_CS | Clean dangling files; either online or offline. |
| checkdb | SA_CS | Consistency check; SA mode is more thorough but blocks the database. |
| loaddb | SA_CS | SA = direct heap insert with class-exclusive lock; CS = ordinary inserts. |
| unloaddb | SA_CS | SA = direct heap scan; CS = ordinary SELECTs. |
| compactdb | SA_CS | SA = exclusive compaction; CS = online compaction. |
| paramdump | SA_CS | Dump parameters from disk (SA) or from running server (CS). |
| vacuumdb | SA_CS | Default SA: full vacuum offline. CS: trigger vacuum on running server. |
| tde | SA_CS | Transparent Data Encryption key management. |
The table is the whole contract between users and the engine for administrative work. Anything not on the table is a SQL command and goes through csql (which is itself SA_CS).

Loaddb — a concrete walk-through of SA_CS

load_db.c is the launcher for cub_loaddb and demonstrates the SA_CS pattern in real code. The same source compiles into both libcubridsa.so and libcubridcs.so, and the function loaddb that the launcher calls dispatches on the compile-time mode flag:

// load_db.c — split paths
#if defined (SA_MODE)
#include "load_sa_loader.hpp"
#endif // SA_MODE
// ...
#if defined (SA_MODE)
/* to avoid compiler warning (clobbered by longjump) */
volatile bool interrupted = false;
#else
bool interrupted = false;
#endif
// ...
#if defined (SA_MODE)
// load_sa: open class with class-exclusive BU lock,
// insert directly through locator_*, no per-row WAL
#else // !SA_MODE = CS_MODE
// load_cs: send batches to the server through CSS,
// server runs the worker pool
#endif // !SA_MODE = CS_MODE

The user-level interface presents the same flag to the operator: -S selects SA mode (args->sa_mode = ...), -C selects CS mode (args->cs_mode = utility_get_option_bool_value (arg_map, LOAD_CS_MODE_S), line 1281). The launcher in util_admin.c notices that flag, loads the matching .so, and the loaddb symbol resolved out of the chosen library executes the corresponding code.

CSQL — the same pattern for the SQL shell

csql_launcher.c implements cub_csql and follows the same template:

  • The launcher is a tiny binary; it links against neither libcubridsa.so nor libcubridcs.so directly.
  • It parses --SA-mode / -S and --CS-mode / -C flags into csql_arg.sa_mode and csql_arg.cs_mode.
  • It validates the combination (-S cannot coexist with -C or with --write-on-standby).
  • It calls utility_load_library with LIB_UTIL_SA_NAME or LIB_UTIL_CS_NAME, looks up the csql entry point with dlsym, and calls into it.
  • Inside the library, csql.c parses SQL, generates XASL, and either ships it across CSS (CS_MODE) or executes it in-process by calling the server-side query executor directly (SA_MODE).

The implication is that csql -S mydb is effectively a different binary from csql mydb even though they share an executable on disk. The first opens mydb’s files in the csql process; the second connects to a cub_server that is already serving mydb.
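That "two binaries from one source" effect is purely a preprocessor one. A minimal illustration, in which defining SA_MODE by hand simulates what sa/CMakeLists.txt does with target_compile_definitions; the function and its return strings are illustrative, not CUBRID's:

```c
#include <string.h>

/* Simulate the SA build: CMake passes -DSA_MODE; we define it by hand. */
#define SA_MODE

/* One source, two behaviors: the CS compile of this same file would take
   the #else branch and ship the work across CSS instead. */
const char *csql_exec_strategy (void)
{
#if defined (SA_MODE)
  return "in-process";          /* call the server-side executor directly */
#else
  return "over-the-wire";       /* pack XASL, send via the CS transport */
#endif
}
```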

flowchart LR
    subgraph SRC[Source tree - same .c/.cpp files compiled three times]
        BC[boot_cl.c]
        BS[boot_sr.c]
        NC[network_cl.c]
        NIC[network_interface_cl.c]
        NS[network_sr.c]
        BTREE[btree.c, heap_file.c,<br/>page_buffer.c, log_*.c, ...]
        PARSER[parser/, optimizer/,<br/>query/, compat/, object/]
    end

    subgraph SERVER[cub_server binary - SERVER_MODE]
        BS2[boot_sr.c]
        NS2[network_sr.c]
        BTREE2[storage + transaction]
    end

    subgraph SA[libcubridsa.so - SA_MODE]
        BC1[boot_cl.c]
        BS1[boot_sr.c]
        NIC1[network_interface_cl.c<br/>SA branches: enter_server,<br/>direct xfoo calls]
        BTREE1[storage + transaction]
        PARSER1[parser + optimizer + query]
    end

    subgraph CS[libcubridcs.so - CS_MODE]
        BC2[boot_cl.c]
        NC2[network_cl.c]
        NIC2[network_interface_cl.c<br/>CS branches: pack args,<br/>net_client_request]
        PARSER2[parser + optimizer]
    end

    BC --> BC1
    BC --> BC2
    BS --> BS1
    BS --> BS2
    NC --> NC2
    NIC --> NIC1
    NIC --> NIC2
    NS --> NS2
    BTREE --> BTREE1
    BTREE --> BTREE2
    PARSER --> PARSER1
    PARSER --> PARSER2

    LAUNCHER[cub_admin / csql / cub_loaddb / cub_compactdb] -->|dlopen| SA
    LAUNCHER -->|dlopen| CS
    CS -. CSS / TCP .-> SERVER

flowchart TD
    A[utility starts] --> B{which library was<br/>loaded by launcher?}
    B -->|libcubridsa.so| SA1[boot_initialize_client / boot_restart_client<br/>-- SA_MODE compile]
    B -->|libcubridcs.so| CS1[boot_initialize_client / boot_restart_client<br/>-- CS_MODE compile]

    SA1 --> SA2[skip boot_client_initialize_css<br/>guarded by #if defined CS_MODE]
    SA2 --> SA3[boot_initialize_server in network_interface_cl.c<br/>SA branch -> enter_server_no_thread_entry]
    SA3 --> SA4[xboot_initialize_server in boot_sr.c<br/>opens volumes, runs log_recovery,<br/>locator_initialize]
    SA4 --> SA5[utility code runs in same process,<br/>direct calls to xfoo_*]
    SA5 --> SA6[boot_shutdown_client -> close volumes,<br/>log_final, exit]

    CS1 --> CS2[boot_client_initialize_css opens TCP to cub_server,<br/>NET_SERVER_PING handshake]
    CS2 --> CS3[boot_initialize_server in network_interface_cl.c<br/>CS branch -> assert 0]
    CS3 --> CS4{is this createdb?}
    CS4 -->|yes, SA_ONLY| CS5[never reached -- launcher chose SA]
    CS4 -->|no| CS6[boot_register_client -> NET_SERVER_BO_REGISTER_CLIENT<br/>over CSS]
    CS6 --> CS7[utility code runs locally,<br/>each xfoo_* call becomes packed RPC]
    CS7 --> CS8[boot_shutdown_client -> NET_SERVER_BO_UNREGISTER_CLIENT,<br/>close socket, exit]

Functions and structures grouped by subsystem and call-flow.

Build wiring

  • cubrid/CMakeLists.txt — add_executable(cubrid …), target_compile_definitions(cubrid PRIVATE SERVER_MODE …)
  • sa/CMakeLists.txt — add_library(cubridsa SHARED …), target_compile_definitions(cubridsa PRIVATE SA_MODE CUBRID_EXPORTING …)
  • cs/CMakeLists.txt — add_library(cubridcs SHARED …), target_compile_definitions(cubridcs PRIVATE CS_MODE CUBRID_EXPORTING …)

Library / launcher selection

  • LIB_UTIL_SA_NAME (macro) — src/executables/utility.h
  • LIB_UTIL_CS_NAME (macro) — src/executables/utility.h
  • ua_Utility_Map (table) — src/executables/util_admin.c
  • util_get_library_name — src/executables/util_admin.c
  • utility_load_library / utility_load_symbol — src/executables/utility.h (declarations) and library loaders
  • main (cub_admin entry point) — src/executables/util_admin.c
  • main (csql entry point) — src/executables/csql_launcher.c

Boot — client side

  • BOOT_IS_CLIENT_RESTARTED (macro) — src/transaction/boot_cl.h
  • boot_initialize_client — src/transaction/boot_cl.c
  • boot_restart_client — src/transaction/boot_cl.c
  • boot_client_initialize_css (CS only) — src/transaction/boot_cl.c
  • boot_check_locales (CS only) — src/transaction/boot_cl.c
  • boot_check_timezone_checksum (CS only) — src/transaction/boot_cl.c
  • boot_shutdown_client — src/transaction/boot_cl.c
  • boot_Host_connected (CS-only static) — src/transaction/boot_cl.c

Boot — server side

  • BO_IS_SERVER_RESTARTED (macro) — src/transaction/boot_sr.h
  • boot_Server_process_id — src/transaction/boot_sr.c
  • xboot_initialize_server — src/transaction/boot_sr.c
  • boot_restart_server — src/transaction/boot_sr.c
  • xboot_register_client — src/transaction/boot_sr.c
  • log_initialize (recovery driver) — src/transaction/log_manager.c
  • locator_initialize — src/transaction/locator_sr.c

Per-call dispatch (the SA/CS fork in every server-bound call)

  • db_on_server (counter) — src/communication/network_interface_cl.c
  • enter_server — src/communication/network_interface_cl.c (SA only)
  • enter_server_no_thread_entry — src/communication/network_interface_cl.c (SA only)
  • exit_server — src/communication/network_interface_cl.c (SA only)
  • exit_server_no_thread_entry — src/communication/network_interface_cl.c (SA only)
  • boot_initialize_server (dispatch) — src/communication/network_interface_cl.c
  • boot_register_client (dispatch) — src/communication/network_interface_cl.c
  • locator_fetch (canonical example) — src/communication/network_interface_cl.c
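Every function in that list reduces to the same few lines. Below is a sketch of the per-call fork, compiled here as the SA side because CS_MODE is left undefined; enter_server, exit_server, and db_on_server mirror the real names, while xlocator_fetch and the return strings are stubs for illustration:

```c
#include <string.h>

/* SA compile: CS_MODE is not defined, so the direct-call branch is built. */

static int db_on_server = 0;    /* >0 while "inside the server" in SA mode */

static void enter_server (void) { db_on_server++; }
static void exit_server (void)  { db_on_server--; }

/* Stub standing in for the server-side xlocator_fetch. */
static const char *xlocator_fetch (void) { return "fetched-direct"; }

/* Canonical network_interface_cl.c shape: pack-and-send under CS_MODE,
   wrap the direct xfoo_* call in enter_server/exit_server otherwise. */
const char *locator_fetch (void)
{
#if defined (CS_MODE)
  /* pack arguments, net_client_request (...), unpack the reply */
  return "fetched-rpc";
#else
  const char *r;
  enter_server ();
  r = xlocator_fetch ();
  exit_server ();
  return r;
#endif
}
```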

CS-only transport

  • set_server_error — src/communication/network_cl.c
  • net_client_request — src/communication/network_cl.c
  • net_client_request_internal — src/communication/network_cl.c
  • net_client_request_no_reply — src/communication/network_cl.c
  • net_client_request2 — src/communication/network_cl.c
  • net_client_request_send_large_data — src/communication/network_cl.c
  • net_client_request_recv_large_data — src/communication/network_cl.c
  • net_client_request_recv_copyarea — src/communication/network_cl.c
  • net_Server_host, net_Server_name (statics) — src/communication/network_cl.c
  • The compile-time guard #if !defined (CS_MODE) / #error Does not belong to cs module — src/communication/network_cl.c

Loaddb fork

  • loaddb (entry point, exported from sa/cs library) — src/loaddb/load_db.c
  • load_args (struct) — src/loaddb/load_common.hpp
  • LOAD_CS_MODE_S (option char) — src/executables/utility.h
  • load_sa_loader.cpp (SA-only file in sa/CMakeLists.txt only) — src/loaddb/
  • ldr_validate_object_file — src/loaddb/load_db.c

CSQL launcher

  • main — src/executables/csql_launcher.c
  • CSQL_SA_MODE_S, CSQL_CS_MODE_S (option chars) — src/executables/csql_launcher.c

Error guards

  • ER_NOT_IN_STANDALONE (macro) — src/base/error_code.h
  • ER_ONLY_IN_STANDALONE (macro) — src/base/error_code.h
  • emit site for ER_NOT_IN_STANDALONE — src/transaction/boot_cl.c
  • emit site for ER_ONLY_IN_STANDALONE — src/transaction/log_manager.c
Symbol — File:Line

target_compile_definitions(cubrid PRIVATE SERVER_MODE …) — cubrid/CMakeLists.txt:675
target_compile_definitions(cubridsa PRIVATE SA_MODE …) — sa/CMakeLists.txt:718
target_compile_definitions(cubridcs PRIVATE CS_MODE …) — cs/CMakeLists.txt:577
add_library(cubridsa SHARED …) — sa/CMakeLists.txt:671
add_library(cubridcs SHARED …) — cs/CMakeLists.txt:536
LIB_UTIL_SA_NAME (linux) — src/executables/utility.h:1809
LIB_UTIL_CS_NAME (linux) — src/executables/utility.h:1808
ua_Utility_Map[] — src/executables/util_admin.c:966
util_get_library_name — src/executables/util_admin.c:1168
csql_launcher main, dlopen of util library — src/executables/csql_launcher.c:466
boot_initialize_client — src/transaction/boot_cl.c:275
boot_initialize_client CS-mode CSS init — src/transaction/boot_cl.c:509
boot_restart_client — src/transaction/boot_cl.c:690
boot_restart_client CS-mode db@host parsing — src/transaction/boot_cl.c:824
ER_NOT_IN_STANDALONE emit site — src/transaction/boot_cl.c:847
boot_Host_connected (CS-only) — src/transaction/boot_cl.c:150
xboot_initialize_server — src/transaction/boot_sr.c:1385
boot_restart_server — src/transaction/boot_sr.c:1969
db_on_server — src/communication/network_interface_cl.c:103
enter_server_no_thread_entry — src/communication/network_interface_cl.c:124
enter_server — src/communication/network_interface_cl.c:142
exit_server_no_thread_entry — src/communication/network_interface_cl.c:152
exit_server — src/communication/network_interface_cl.c:168
locator_fetch (CS/SA fork) — src/communication/network_interface_cl.c:270
boot_initialize_server (dispatch) — src/communication/network_interface_cl.c:3919
#error Does not belong to cs module — src/communication/network_cl.c:65
set_server_error — src/communication/network_cl.c:148
net_client_request — src/communication/network_cl.c:587
loaddb SA include of load_sa_loader.hpp — src/loaddb/load_db.c:28
loaddb SA volatile bool interrupted — src/loaddb/load_db.c:543
loaddb SA/CS fork — src/loaddb/load_db.c:823
LOAD_CS_MODE_S cs_mode option — src/loaddb/load_db.c:1281
ER_ONLY_IN_STANDALONE emit site — src/transaction/log_manager.c:1018
ER_ONLY_IN_STANDALONE (#define) — src/base/error_code.h:734

The SA/CS distinction has been a stable architectural feature of CUBRID since the first open-source release; it predates the move of broker/CAS into the engine and has survived multiple HA, MVCC, and TDE refactors. A few points where the layering is less clean than it appears:

  1. network_interface_cl.c carries both code paths. Despite the name, the file is compiled into both libcubridsa.so and libcubridcs.so. Inside, every server-bound call has the #if defined (CS_MODE)#else#endif shape. The size of this file (≈278 KB at the time of this writing) is directly attributable to that doubled body. A previous attempt to factor the dispatch into separate *_cs.c and *_sa.c files would have meant duplicating the function signatures, the argument packing logic, and the public-API boilerplate without reducing the total source mass — the per-function fork is the lesser evil.

  2. Some files are “client” by name but carry SA_MODE-only server-side code. For example, locator_cl.c has blocks guarded by #if defined (SA_MODE) && !defined (CUBRID_DEBUG) (lines 1011, 1713, 1872) — these are paths that cross the boundary in SA-mode-only debug builds. The structural rule “*_cl.c is the client, *_sr.c is the server” is not absolute; it is “the *_cl.c file is the entry point that dispatches into either the wire or the server”.

  3. The SERVER_MODE flag is not the same as SA_MODE, even though both pull in the server-side translation units. The distinguishing axes are: (a) SERVER_MODE enables the real thread pool, the connection acceptor, and the master process protocol; (b) SA_MODE short-circuits these with single-thread stubs (enter_server_no_thread_entry, thread_get_thread_entry_info returns the singleton). Code that is gated by #if defined (SERVER_MODE) runs only in cub_server; code gated by #if !defined (CS_MODE) runs in both cub_server and libcubridsa.so.

  4. The classification table is a single point of truth, but the table is not exhaustive. A few utilities have hard-coded defaults inside util_get_library_name (the trailing if (utility_index == VACUUMDB || utility_index == TDE) return LIB_UTIL_SA_NAME;) that override the SA_CS default. New utilities added to the table without thinking about that trailing block can silently end up choosing the wrong library.

  5. Recovery in SA mode is not a no-op even on a clean shutdown. The boot_restart_server path in SA mode still runs the analysis pass of recovery to reconstruct in-memory state from the log control record. The redo and undo phases are skipped if the log says “shut down cleanly”, but the analysis pass always runs. Operators sometimes attribute slow compactdb -S startup on a large database to “recovery”; in practice the slow part is volume open and locator_initialize building the class cache, not log replay.
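The rule in point 3 can be made concrete with a compile-time sketch. Defining SA_MODE by hand simulates the libcubridsa.so compile; the bitmask names and the client-side guard are illustrative, only the SERVER_MODE and !CS_MODE guards are taken directly from the text above:

```c
/* Simulate the libcubridsa.so compile: -DSA_MODE, no SERVER_MODE/CS_MODE. */
#define SA_MODE

enum {
  BUILT_SERVER_ONLY = 1,  /* #if defined (SERVER_MODE): thread pool, acceptor */
  BUILT_SERVER_SIDE = 2,  /* #if !defined (CS_MODE): storage, transaction */
  BUILT_CLIENT_SIDE = 4   /* illustrative guard for parser/API code */
};

/* Report which guarded blocks this compile actually built. */
int compiled_blocks (void)
{
  int mask = 0;
#if defined (SERVER_MODE)
  mask |= BUILT_SERVER_ONLY;    /* only in cub_server */
#endif
#if !defined (CS_MODE)
  mask |= BUILT_SERVER_SIDE;    /* in cub_server and libcubridsa.so */
#endif
#if !defined (SERVER_MODE)
  mask |= BUILT_CLIENT_SIDE;    /* in libcubridsa.so and libcubridcs.so */
#endif
  return mask;
}
```

Under this simulated SA compile the server-side and client-side blocks are both built, and the SERVER_MODE-only block is not, which is exactly why libcubridsa.so contains the storage engine but not the thread pool.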

  • Whether the SA/CS split is worth keeping in the long term, or whether — as MySQL did with libmysqld — CUBRID would eventually consolidate around cub_server plus a “single-user daemon mode” similar to PostgreSQL’s postgres --single. The argument for keeping SA mode is that some operations (createdb, restoredb, installdb) genuinely cannot be done through a daemon. The argument against is the doubled translation-unit count, the doubled binary size, and the ongoing maintenance tax of the #if defined (CS_MODE) blocks.

  • Whether cub_admin’s dlopen-of-cubrid{sa,cs}.so model could be replaced by static linking, given that no current utility actually loads both libraries in the same process. The model was originally chosen to allow a single binary to handle both cases (e.g. cub_admin compactdb -S vs -C from the same binary), but in practice every utility decides at startup and never changes its mind.

  • How the SA-mode buffer pool sizing interacts with the operator’s expectation that “the daemon is off”. A SA-mode utility allocates the same buffer pool that cub_server would have allocated (prm_get_integer_value (PRM_ID_PB_NBUFFERS) * page_size), which on a production database can be tens of gigabytes. The error reporting if that allocation fails is not as clean as the “cannot start, please reduce buffer pool” message that cub_server would print.

  • Whether ER_NOT_IN_STANDALONE and ER_ONLY_IN_STANDALONE should be promoted to a single error with a mode parameter. They are essentially the same logical error (“you used a CS-only feature in SA mode” / vice versa) and the duplicated error code makes message-catalogue maintenance noisier than necessary.

  • src/transaction/boot_cl.c — client-side boot, mode-aware.
  • src/transaction/boot_sr.c — server-side boot and recovery.
  • src/communication/network_cl.c — CS-only socket transport.
  • src/communication/network_interface_cl.c — per-call dispatch layer with #if defined (CS_MODE)#else#endif.
  • src/executables/util_admin.c — utility classification table and runtime library loader.
  • src/executables/utility.h — LIB_UTIL_*_NAME macros.
  • src/executables/csql_launcher.c — csql library selection.
  • src/loaddb/load_db.c — SA_CS fork at the loader entry point.
  • src/base/error_code.h — ER_NOT_IN_STANDALONE, ER_ONLY_IN_STANDALONE.
  • sa/CMakeLists.txt — cubridsa shared library composition; target_compile_definitions(cubridsa PRIVATE SA_MODE …).
  • cs/CMakeLists.txt — cubridcs shared library composition; target_compile_definitions(cubridcs PRIVATE CS_MODE …).
  • cubrid/CMakeLists.txt — cub_server executable; target_compile_definitions(cubrid PRIVATE SERVER_MODE …).