CUBRID DBI and CCI — Client API Surface, Statement Lifecycle, and Wire-Driver Façade
Contents:
- Theoretical Background
- Common DBMS Design
- CUBRID’s Approach
- Source Walkthrough
- Cross-check Notes
- Open Questions
- Sources
Theoretical Background
A relational engine is reached through three concentric APIs, and
the most common source of confusion in DBMS client-driver code is
collapsing them. The outermost layer is the language binding — JDBC,
Python DB-API, ODBC, REST front-ends. The middle layer is the
call-level interface (CLI) in the X/Open sense: a procedural C
surface where every operation is a function call (SQLPrepare,
SQLBindParameter, SQLExecute, SQLFetch, SQLFreeStmt) that owns
an opaque handle and returns a status code. The innermost layer is the
embedded SQL layer — what Database System Concepts (Silberschatz
et al., ch. 5 §“Embedded SQL”) describes as the host-language
preprocessor that turns EXEC SQL SELECT ... into a series of CLI
calls. Every modern engine — Oracle OCI, PostgreSQL libpq, MySQL
libmysqlclient, SQLite — implements the CLI layer and lets bindings sit
on top.
Three independent design choices then shape the resulting C API:
- Session-as-handle versus statement-as-handle. ODBC has separate HENV, HDBC, and HSTMT handles. PostgreSQL’s libpq collapses connection and statement: every PQexec is a one-shot text execution against the PGconn. Oracle’s OCI gives OCIStmt as a free-standing statement object later associated with a service context. CUBRID sits closest to ODBC: the DB_SESSION * is the statement handle, and the connection lives implicitly in module-scope globals (db_Connect_status, db_Database_name) populated by db_restart. Many DB_SESSION objects per process; exactly one connected database at a time.
- Compile-then-execute versus prepare-and-execute fused. A CLI can offer a one-shot text path (PQexec, mysql_query), a separate prepare/execute pair (SQLPrepare/SQLExecute), or both. CUBRID picks the split form internally — db_compile_statement_local and db_execute_statement_local are distinct calls — and exposes the fused form as a wrapper (db_open_buffer_and_compile_first_statement plus an execute loop in db_compile_and_execute_queries_internal). The split is what lets the broker’s CAS layer reuse one compiled DB_SESSION across many ux_execute calls when the JDBC client uses a PreparedStatement.
- Cursor as iterator versus cursor as random-access view. A result-set API is either forward-only (cursor_next_tuple, end of stream) or random-access with seek_tuple, prev_tuple, last_tuple. CUBRID picks random-access, backed by the server-side QFILE list file: every DB_QUERY_RESULT of type T_SELECT owns a CURSOR_ID and a QFILE_LIST_ID and supports the full ODBC-style scrollable cursor surface.
After these three are named, the rest of the CUBRID DBI is a direct
consequence of taking the (session-as-statement-handle, split
compile/execute, scrollable-cursor) corner of the design space. The
broker’s CCI driver inverts the first choice — it externalises a
per-CAS handle (T_SRV_HANDLE) so a JDBC client can hold many
statement handles per connection — but keeps the other two unchanged
because the underlying executor is the same db_* core.
Common DBMS Design
Below the textbook CLI layer, every major client/server DBMS ships a similar handful of patterns. They are not in the X/Open spec; they are the engineering vocabulary that lives between the abstract API and the source.
A statement object that is also the parser context
Every CLI needs somewhere to keep the parsed AST, the host-variable
array, the column type list, and the server-side plan handle between
prepare and execute. CUBRID’s DB_SESSION carries parser (the
PARSER_CONTEXT), statements (the PT_NODE ** array of parse
trees), type_list (per-statement column descriptions), and stage
(a one-byte-per-statement FSM array). One DB_SESSION can carry
several statements separated by ; because
parser_parse_string_with_escapes returns an array; dimension
records how many. Oracle’s OCIStmt is the closest analogue;
PostgreSQL’s PGresult is purely a result and the prepared statement
lives on the server.
A four-stage statement FSM with one byte per stage
The CLI must reject double-execution, double-compile, and execute-
before-compile. db_vdb.c enforces this with one byte per statement
in session->stage[] taking values StatementInitialStage (0),
StatementCompiledStage, StatementPreparedStage,
StatementExecutedStage. db_compile_statement_local advances
Initial → Compiled → Prepared (the Prepared stage means
do_prepare_statement has populated statement->xasl_id from the
XASL cache); db_execute_and_keep_statement_local advances Prepared
→ Executed. The byte is consulted at every entry point — for
example, the executor re-runs the compiler if
session->stage[stmt_ndx] < StatementPreparedStage. CUBRID never
exposes the Prepared stage as a distinct API call; the FSM is
purely internal.
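The guard logic above can be sketched in a few lines of C. This is an illustration of the one-byte-per-statement FSM, not CUBRID's actual code: the enum values echo the StatementInitialStage family and the "re-run the compiler if the stage byte is too low" check, but the struct and function names are invented for the sketch.

```c
#include <assert.h>

/* Hypothetical sketch of the per-statement stage FSM. */
enum stmt_stage {
  STMT_INITIAL = 0,   /* parsed only */
  STMT_COMPILED,      /* semantic check + mq_translate done */
  STMT_PREPARED,      /* XASL_ID obtained from the server cache */
  STMT_EXECUTED       /* result descriptor built */
};

struct session {
  char stage[8];      /* one byte of FSM state per statement */
  int dimension;      /* number of statements in the buffer */
};

/* compile advances Initial -> Prepared and rejects double-compile */
static int compile_stmt (struct session *s, int ndx)
{
  if (s->stage[ndx] >= STMT_PREPARED)
    return -1;                  /* double-compile rejected */
  s->stage[ndx] = STMT_PREPARED;
  return ndx + 1;               /* 1-indexed statement id */
}

/* execute re-runs the compiler when the stage byte is too low */
static int execute_stmt (struct session *s, int ndx)
{
  if (s->stage[ndx] < STMT_PREPARED && compile_stmt (s, ndx) < 0)
    return -1;
  s->stage[ndx] = STMT_EXECUTED;
  return 0;
}
```

The point of the pattern is that every entry point consults one byte rather than a richer object, which keeps the check cheap enough to run on every call.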
A separate result-set object with its own type tag
Once a statement executes, the column list and the cursor have to
live somewhere outside the statement because the statement handle
(DB_SESSION) can be closed before all results are consumed. The
result object — DB_QUERY_RESULT in CUBRID — carries both the
column type list and the cursor. The type tag
(T_SELECT/T_CALL/T_OBJFETCH/T_GET/T_CACHE_HIT) selects
which arm of every cursor function does the work. PostgreSQL’s
PGresult and MySQL’s MYSQL_RES follow the same pattern.
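The tagged-union shape can be sketched as follows. The tags and fields here are illustrative stand-ins (the real layout lives in CUBRID's headers); the sketch only shows the dispatch pattern every cursor function follows.

```c
#include <assert.h>

/* Sketch of a DB_QUERY_RESULT-style tagged union; names invented. */
enum result_type { T_SELECT_K, T_CALL_K, T_OBJFETCH_K };

struct query_result {
  enum result_type type;
  union {
    struct { int query_id; int cursor_pos; } s;  /* list-file cursor */
    struct { double val; } c;                    /* single CALL value */
  } res;
};

/* every cursor function switches on the tag and picks one arm */
static int next_tuple (struct query_result *r)
{
  switch (r->type) {
    case T_SELECT_K:               /* real scrollable cursor */
      return ++r->res.s.cursor_pos;
    case T_CALL_K:                 /* one-row pseudo cursor */
      return -1;                   /* already past the only row */
    default:
      return -1;
  }
}
```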
A wire-driver façade for non-embedded clients
The DBI surface above is embedded — it links into a process that
runs the CUBRID client library directly. JDBC, ODBC, Python, and
PHP all live in other processes that talk to a broker daemon
whose CAS workers embed the CUBRID client library and translate
the wire calls back to db_* calls. The broker calls this layer
CCI — the CUBRID Call Interface — and its server side lives in
src/broker/cas_execute.c as a series of ux_* functions
(ux_database_connect, ux_prepare, ux_execute, ux_fetch,
ux_end_tran). The shape mirrors db_* but adds CCI-specific
concerns: server-handle ids (T_SRV_HANDLE), CCI flags
(CCI_PREPARE_UPDATABLE, CCI_PREPARE_HOLDABLE,
CCI_EXEC_QUERY_INFO), broker log streams, and an autocommit hint
that triggers db_commit_transaction inside the same call.
Schema operations and value primitives stay procedural
Schema definition (db_create_class, db_add_attribute,
db_add_method) and value construction (db_make_int,
db_make_string, db_make_object) are not part of the X/Open CLI
spec but are present in every C client API for an OO-flavoured
RDBMS. CUBRID inherits a heavy schema surface from its OODB
ancestry — MOP (memory object pointer) is the universal class or
instance handle, and db_create / db_put / db_get work on
instances directly, bypassing SQL.
CUBRID’s Approach
CUBRID’s client API is a layered system with one rule: the layer
above always speaks the layer below; nothing skips. From the top:
(1) language bindings — JDBC (cubrid-jdbc/), CCI native C
driver (cubrid-cci/, a separate repo), Python, PHP, Perl DBD,
CSQL (src/executables/csql.c); (2) CCI wire — flat opcode
space (CAS_FC_* in cas_protocol.h) dispatched through
server_fn_table in broker/cas.c; (3) CAS-side ux_* —
broker-side façade in broker/cas_execute.c, each a thin wrapper
that calls db_* and serialises back through T_NET_BUF;
(4) DBI db_* — public C client API in src/compat/; every
function either runs purely client-side or issues an NRP request via
network_cl; (5) network_cl — the symmetric wire-marshalling
layer (see cubrid-network-protocol.md); (6) boot_cl —
connection-lifecycle layer; db_restart delegates to
boot_restart_client.
flowchart TD
JDBC[JDBC / Python / ODBC client]
CCI_drv[CCI native driver<br/>cubrid-cci/]
CSQL[CSQL command line<br/>src/executables/csql.c]
Broker[Broker daemon<br/>src/broker/broker.c]
CAS[CAS worker process<br/>src/broker/cas.c]
UX[ux_prepare / ux_execute / ux_fetch<br/>src/broker/cas_execute.c]
DBI[db_compile_statement / db_execute_statement<br/>src/compat/db_vdb.c]
Boot[boot_restart_client<br/>src/transaction/boot_cl.c]
NET[network_cl<br/>src/communication/network_cl.c]
Server[cub_server<br/>NET_SERVER_QM_QUERY_PREPARE]
JDBC -->|CCI wire| Broker
CCI_drv -->|CCI wire| Broker
Broker --> CAS
CAS --> UX
UX --> DBI
CSQL --> DBI
DBI --> Boot
DBI --> NET
Boot --> NET
NET --> Server
The figure above is the load-bearing one for this whole document:
every client path eventually reaches db_* in src/compat/. The
broker exists because the CUBRID server is single-threaded per
connection and connecting is heavy — boot_restart_client does locale
init, timezone load, parameter sync, host failover, and schema-version
negotiation — so pooling that work in a CAS process is the only way
to serve thousands of short-lived JDBC connections at acceptable
latency.
Connection: db_login, db_restart, db_shutdown
The DBI’s connection lifecycle is intentionally minimalist. There is
no connection handle. The connection is a process-global state
machine recorded in db_Connect_status and db_Database_name. Any
db_* function checks it through CHECK_CONNECT_* macros and
short-circuits if the process is not connected.
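The global-status-plus-guard-macro pattern can be modeled in a few lines. This is an assumption-laden sketch — the macro body and helper names (my_connect_status, my_restart, my_compile) are invented to mirror db_Connect_status, db_restart, and the CHECK_CONNECT_MINUSONE short-circuit, not CUBRID's real definitions.

```c
#include <assert.h>

/* Illustrative model of the process-global connection state machine. */
enum { STATUS_NOT_CONNECTED = 0, STATUS_CONNECTED = 1 };
static int my_connect_status = STATUS_NOT_CONNECTED;

/* guard macro: every API entry short-circuits when offline */
#define CHECK_CONNECT_MINUSONE_SKETCH() \
  do { if (my_connect_status != STATUS_CONNECTED) return -1; } while (0)

static int my_restart (void)            /* stands in for db_restart */
{
  my_connect_status = STATUS_CONNECTED; /* the real call does far more */
  return 0;
}

static int my_compile (const char *sql) /* any db_* entry point */
{
  CHECK_CONNECT_MINUSONE_SKETCH ();
  return sql != 0;                      /* pretend-compile */
}
```

Because the status word is process-global rather than a handle, there is exactly one connected database per process — the design choice the surrounding text describes.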
db_login is a one-liner that defers entirely to au_login and
records the credential in the authorization module; it does not
touch the network. The two-step exists because CUBRID utilities can
run standalone and call db_login first, then db_init (create) or
db_restart (open).
db_restart populates a BOOT_CLIENT_CREDENTIAL from module-scope
variables (db_Client_type, db_Preferred_hosts, db_Connect_order,
db_Client_ip_addr) and calls boot_restart_client. On success it
sets db_Connect_status = DB_CONNECTION_STATUS_CONNECTED, copies
volume into db_Database_name, runs install_static_methods, and
installs a SIGFPE handler. The credential is a flat struct, not an
object; the caller is given no handle for it. The heavy work is in
boot_restart_client (see cubrid-boot.md).
db_restart_ex is the form most modern code uses because it combines
au_login, db_set_client_type, and db_restart into one call:
```c
// db_restart_ex — db_admin.c (condensed)
int
db_restart_ex (const char *program, const char *db_name,
               const char *db_user, const char *db_password,
               const char *preferred_hosts, int client_type)
{
  int retval = au_login (db_user, db_password, false);
  if (retval != NO_ERROR)
    return retval;
  db_set_client_type (client_type);
  if (preferred_hosts != NULL)
    db_set_preferred_hosts (preferred_hosts);
  return db_restart (program, false, db_name);
}
```

This is the function the broker’s CAS worker calls inside
ux_database_connect to open the database the first time a JDBC
client requests it. The client_type is significant: it tells the
server whether this connection is a normal broker, a read-only
broker, a slave-only broker, a standalone CSQL, an admin utility,
etc., which determines what HA roles, lock modes, and read-from-
secondary capabilities the connection inherits.
db_shutdown ends the session and disconnects: it calls
db_end_session (to flush the server-side SESSION_STATE — see
cubrid-server-session.md — so prepared statements and SET-variable
bindings on the server don’t leak), then boot_shutdown_client (true),
then clears the connection-status globals, restores the SIGFPE
handler, and frees any cached execution plan.
Statement compile: db_open_buffer → db_compile_statement
Compilation is split across two API calls — db_open_buffer to set
up the parser context and parse, db_compile_statement to drive the
semantic check, view translation, and XASL prepare. The session
struct that ties them together is small:
```c
// db_session struct — db_session.h
struct db_session
{
  char *stage;                  /* per-statement FSM state */
  char include_oid;             /* NO_OIDS, ROW_OIDS */
  int dimension;                /* number of statements */
  int stmt_ndx;                 /* next statement to compile */
  int line_offset;
  int column_offset;
  PARSER_CONTEXT *parser;       /* parser context */
  DB_QUERY_TYPE **type_list;    /* nice column headings, per stmt */
  PT_NODE **statements;         /* parse trees, per stmt */
  bool is_subsession_for_prepared;
  DB_SESSION *next;             /* sub-sessions for prepared stmts */
};
```

db_open_buffer parses; it does not lock classes, does not call the
server, and does not produce a stmt_id. It calls db_open_local to
allocate the session and a fresh PARSER_CONTEXT, then
parser_parse_string_with_escapes to drive Bison/Flex over the
buffer. On success initialize_session records dimension = count(statements) and returns. The db_open_file and
db_open_file_name variants accept a FILE * instead.
db_compile_statement is where the work happens. It is the same
function name referenced from cubrid-query-rewrite.md and
cubrid-semantic-check.md — every CUBRID client SQL request goes
through this function on the way to the server.
```c
// db_compile_statement — db_vdb.c
int
db_compile_statement (DB_SESSION * session)
{
  int statement_id;

  er_clear ();
  CHECK_CONNECT_MINUSONE ();
  statement_id = db_compile_statement_local (session);
  return statement_id;
}
```

db_compile_statement_local runs the full client-side SQL pipeline
on one statement at index stmt_ndx. Its phases match the modules
documented in their own analysis files:
sequenceDiagram
participant Caller
participant DB as db_compile_statement_local
participant Parser as PARSER (PT_*)
participant DBlink as pt_rewrite_for_dblink
participant Cls as pt_class_pre_fetch
participant Sem as pt_compile (semantic)
participant View as mq_translate
participant Plan as do_prepare_statement
participant Server as cub_server
Caller->>DB: db_compile_statement(session)
DB->>Parser: pt_reset_error
DB->>DBlink: pt_rewrite_for_dblink(parser, statement)
DB->>Parser: pt_get_titles (qtype for SELECT)
DB->>Cls: pt_class_pre_fetch (lock classes)
Cls->>Server: locator_fetch_class (NRP)
DB->>Sem: pt_compile(parser, statement)
DB->>View: mq_translate(parser, statement)
DB->>Cls: pt_class_pre_fetch again (post-mq)
DB->>Plan: do_prepare_statement(parser, statement)
Plan->>Server: NET_SERVER_QM_QUERY_PREPARE
Server-->>Plan: XASL_ID
Plan-->>DB: NO_ERROR
DB->>DB: stage[ndx] = StatementPreparedStage
DB-->>Caller: stmt_id (1-indexed)
The headline rule is that compilation hits the server twice: once
during pt_class_pre_fetch (to lock referenced classes and avoid
deadlock at execute time) and once during do_prepare_statement (to
ship the XASL stream to the server’s XASL cache and get back an
XASL_ID). Every other phase is purely client-side. This is why a
db_compile_statement is non-trivial work and why CUBRID’s broker
keeps the resulting DB_SESSION alive across multiple
ux_execute calls when a JDBC PreparedStatement is in use.
The function returns the 1-indexed statement number on success — an
ODBC convention — and 0 if the session has no more statements left
to compile. A negative return is an error code. Multi-statement input
(SELECT ... ; UPDATE ... ; INSERT ... ;) compiles one statement per
call so the caller can interleave compilation with execution. Internally
the body of db_compile_statement_local walks the phases listed in the
sequence diagram above, calling pt_rewrite_for_dblink,
pt_class_pre_fetch, pt_compile, mq_translate, and finally
do_prepare_statement (only when the XASL cache is enabled and the
statement is cacheable). The stage[] byte advances Compiled →
Prepared at the end of the call.
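The return-value convention (1-indexed id, 0 when done, negative on error) implies a simple caller loop. The mock below is a sketch of that contract only — mock_session, mock_compile, and run_all are invented names modeling the compile-one-statement-per-call behavior, not CUBRID code.

```c
#include <assert.h>

/* Sketch of the db_compile_statement return convention. */
struct mock_session { int dimension; int stmt_ndx; };

static int mock_compile (struct mock_session *s)
{
  if (s->stmt_ndx >= s->dimension)
    return 0;                    /* no statements left: 0, not error */
  return ++s->stmt_ndx;          /* 1-indexed id of the compiled stmt */
}

/* caller pattern: compile (and execute) each statement in turn */
static int run_all (struct mock_session *s)
{
  int stmt_id, n = 0;
  while ((stmt_id = mock_compile (s)) > 0)
    n++;                         /* execute stmt_id here */
  return (stmt_id < 0) ? stmt_id : n;
}
```

The 0-means-done convention is what lets a caller interleave compilation with execution over multi-statement input without knowing the statement count up front.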
db_open_buffer_and_compile_first_statement is the convenience
wrapper for one-shot SQL — the path CSQL takes for utility commands:
```c
// db_open_buffer_and_compile_first_statement — db_vdb.c, condensed
int
db_open_buffer_and_compile_first_statement (const char *CSQL_query,
                                            DB_QUERY_ERROR *query_error,
                                            int include_oid,
                                            DB_SESSION **session,
                                            int *stmt_no)
{
  CHECK_CONNECT_ERROR ();

  *session = db_open_buffer_local (CSQL_query);
  if (*session == NULL)
    return er_errid ();

  db_include_oid (*session, include_oid);
  *stmt_no = db_compile_statement_local (*session);

  errs = db_get_errors (*session);
  if (errs != NULL)
    {
      int line, col;
      (void) db_get_next_error (errs, &line, &col);
      error = er_errid ();
      if (query_error)
        {
          query_error->err_lineno = line;
          query_error->err_posno = col;
        }
    }

  if (*stmt_no < 0 || error < 0)
    {
      db_close_session_local (*session);
      *session = NULL;
      return er_errid ();
    }
  return error;
}
```

Bind and execute: db_push_values, db_execute_statement
Host variables are pushed into the parser’s host_variables array
before execute. The parser already knows how many slots it needs
(parser->host_var_count) from the ? markers it counted during
parse, so the API contract is “give me an array of exactly this size”.
db_push_values calls pt_set_host_variables (which copies the
DB_VALUE array) and reports any pre-existing parser semantic error
through pt_report_to_ersys. pt_set_host_variables does not do
type coercion — that happens inside
db_execute_and_keep_statement_local via
do_cast_host_variables_to_expected_domain once the parser has
inferred the expected types from semantic check.
db_execute_statement is the public entry; it differs from
db_execute_and_keep_statement in that it frees the statement’s
parse tree after the call, so the statement cannot be re-executed.
The “and-keep” form preserves the parse tree for repeated use. Both
wrappers do the same three things: check connection state, call
db_invalidate_mvcc_snapshot_before_statement, and call the
matching _local worker — then set
db_set_read_fetch_instance_version (LC_FETCH_MVCC_VERSION) on the
way out.
Both eventually run db_execute_and_keep_statement_local, which is
the heart of execution. After validating that the host-variable array was
populated (set_host_var == 1 when host_var_count > 0) and that
the statement is at least at Prepared stage (otherwise re-running
db_compile_statement_local automatically), the function:
- Pulls server time / transaction-id values via qp_get_server_info when the statement references SYSTIMESTAMP or the local transaction id.
- Pre-executes any cached CTE sub-queries via do_execute_subquery_pre.
- Branches to do_process_prepare_statement, do_recompile_and_execute_prepared_statement, or do_process_deallocate_prepare for the SQL PREPARE/EXECUTE/DEALLOCATE PREPARE family.
- Calls do_execute_statement (XASL-cache fast path) or, if the cache is disabled or the statement is not cacheable, copies the parse tree, runs pt_bind_values_to_hostvars / pt_resolve_names / pt_semantic_type, and calls do_statement.
- Sets stage[stmt_ndx] = StatementExecutedStage.
- Builds the DB_QUERY_RESULT descriptor: for CUBRID_STMT_SELECT and CUBRID_STMT_EXECUTE_PREPARE it calls pt_new_query_result_descriptor (which wraps the server-side list file in a cursor); for CUBRID_STMT_INSERT / CUBRID_STMT_CALL it pulls the row count or generated keys from the AST’s pt_node_etc payload.
- Calls update_execution_values to refresh the parser’s row_count cache.
Three points from this routine are easy to miss but matter:
MVCC snapshot invalidation is at the API boundary, not the
statement boundary. db_execute_statement calls
db_invalidate_mvcc_snapshot_before_statement before delegating
to _local, and resets LC_FETCH_MVCC_VERSION after. This is
what makes READ COMMITTED isolation work — every top-level API
execute call gets a fresh snapshot — but it also explains why
calling db_execute_statement_local directly from inside the
broker without the wrapper would silently observe stale data.
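The wrapper-versus-_local split can be reduced to a toy model. Everything here is invented for illustration: a counter stands in for the MVCC snapshot, and the two functions mirror the shape of db_execute_statement delegating to its _local worker after invalidating the snapshot.

```c
#include <assert.h>

/* Toy model: only the public wrapper refreshes the snapshot. */
static int snapshot_version = 0;

static int execute_local (int stmt_id)   /* _local worker analogue */
{
  (void) stmt_id;
  return snapshot_version;  /* "reads" under the current snapshot */
}

static int execute_public (int stmt_id)  /* db_execute_statement analogue */
{
  snapshot_version++;       /* invalidate-snapshot-before-statement */
  return execute_local (stmt_id);
}
```

Calling execute_local directly re-reads the old snapshot — the toy version of the stale-data hazard described above.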
The XASL cache can fail underneath you. When the server
evicts a cached plan or detects schema modification mid-execution,
the executor returns ER_QPROC_INVALID_XASLNODE or
ER_QPROC_XASLNODE_RECOMPILE_REQUESTED, the DBI clears the
statement’s xasl_id, and the prepare-then-execute pair runs
again silently. This is invisible to the caller in the success
case but visible in the error case if the schema change cannot
be auto-recovered (DB_CLASS_MODIFIED returns the error to
the caller).
INSERT row counts have a special case for REPLACE. A REPLACE
statement is conceptually DELETE + INSERT. When the executor
returns the insert count, the DBI compares it to the row count
from pt_node_etc(statement) (the value the executor stashes on
the AST node) and may keep the larger count.
Cursor: db_query_first_tuple, db_query_next_tuple, db_query_get_tuple_value
DB_QUERY_RESULT is a tagged union. result->type is one of
T_SELECT, T_CALL, T_OBJFETCH, T_GET, or T_CACHE_HIT, and the
result->res union has one arm per tag: res.s carries
{QUERY_ID query_id; CURSOR_ID cursor_id; CACHE_TIME cache_time; int stmt_id;}
for T_SELECT (the only type with a real list-file cursor); res.c
carries {DB_VALUE *val_ptr; CURSOR_POSITION crs_pos;} for T_CALL;
res.o carries {DB_VALUE **valptr_list; CURSOR_POSITION crs_pos;}
for T_OBJFETCH.
The cursor functions are uniform from the outside but dispatch on the
type tag inside. db_query_first_tuple, db_query_next_tuple,
db_query_prev_tuple, db_query_last_tuple, and
db_query_seek_tuple all share the same shape: validate, switch on
result->type, call into cursor_* for T_SELECT (the list-file
backed cursor), or move a tiny state machine crs_pos ∈ { C_BEFORE,
C_ON, C_AFTER } for T_CALL/T_OBJFETCH. db_query_seek_tuple is
the interesting one: it computes three relative offsets (vs.
beginning, vs. current, vs. end) and picks the smallest absolute one
before invoking cursor_first_tuple / cursor_last_tuple and
stepping.
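The cheapest-path choice in the seek can be sketched directly. The function below is an illustration under stated assumptions (1-indexed rows, known last row); pick_origin and its constants are invented names, not the db_query_seek_tuple implementation.

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: compare the hop from the start, from the current row,
 * and from the end, then step from whichever origin is nearest. */
enum seek_from { FROM_FIRST, FROM_CURRENT, FROM_LAST };

static enum seek_from pick_origin (int target, int current, int last)
{
  int d_first = abs (target - 1);        /* after cursor_first_tuple */
  int d_cur   = abs (target - current);  /* from where we are */
  int d_last  = abs (last - target);     /* after cursor_last_tuple */

  if (d_cur <= d_first && d_cur <= d_last)
    return FROM_CURRENT;
  return (d_first <= d_last) ? FROM_FIRST : FROM_LAST;
}
```

Minimizing the hop count matters because each step may touch a list-file page; seeking row 2 of a million-row result should never walk from the current position at row 500000.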
stateDiagram-v2
[*] --> Initial: db_open_buffer
Initial --> Compiled: db_compile_statement_local\n(pt_compile, mq_translate)
Compiled --> Prepared: do_prepare_statement\n(server XASL cache miss => fill)
Prepared --> Executed: db_execute_and_keep_statement_local\n(do_execute_statement)
Executed --> Prepared: re-execute\n(stmt is kept)
Executed --> Initial: db_drop_statement
Prepared --> Initial: db_drop_statement
Compiled --> Initial: db_drop_statement
Initial --> [*]: db_close_session
Executed --> [*]: db_close_session
Once positioned, value extraction is by column index
(db_query_get_tuple_value) or column name
(db_query_get_tuple_value_by_name). For T_SELECT the function
calls cursor_get_tuple_value which fetches from the list-file
buffer; for T_OBJFETCH it reads valptr_list[index]; for T_CALL
it reads the single val_ptr. In all cases the value is copied via
pr_clone_value into the caller’s DB_VALUE.
The caller owns the copy and must call db_value_clear (or
db_value_free for heap-allocated values) to release any
string/object/set storage; this is one of the contract-style
aspects of the API that bites new users — failing to clear a
string value is a memory leak that valgrind catches but that the
API does not warn about.
db_query_get_tuple_value_by_name is a convenience wrapper that
linearly searches result->query_type for the column name (case-
insensitive) and then defers to db_query_get_tuple_value. It tries
both the column alias (name) and the original name
(original_name) so a SELECT with ... AS x is reachable by both
the alias and the underlying expression’s name.
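The two-pass lookup can be sketched as plain C. The struct and function names here are illustrative (the real query_type list is a linked structure); the sketch keeps only the behavior described: case-insensitive, alias first, then the underlying expression name.

```c
#include <assert.h>
#include <string.h>
#include <strings.h>   /* POSIX strcasecmp */

/* Sketch of by-name column lookup over a result's column list. */
struct col { const char *name; const char *original_name; };

static int find_col (const struct col *cols, int n, const char *want)
{
  for (int i = 0; i < n; i++)
    if (strcasecmp (cols[i].name, want) == 0)
      return i;                          /* alias match wins */
  for (int i = 0; i < n; i++)
    if (cols[i].original_name
        && strcasecmp (cols[i].original_name, want) == 0)
      return i;                          /* underlying-name fallback */
  return -1;                             /* no such column */
}
```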
Result lifecycle: db_query_end
db_query_end is the matched destructor for whatever
db_execute_statement produced. It checks tran_was_latest_query_ended
to decide whether to notify the server, then delegates to
db_query_end_internal which (a) is idempotent against
status == T_CLOSED; (b) for T_SELECT results, calls
qmgr_end_query (result->res.s.query_id) to ship an NRP that lets
the server free the list file (skipped when the transaction already
ended); (c) calls cursor_close to release the client-side cursor
handle; (d) frees the DB_QUERY_RESULT itself via
db_free_query_result.
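The idempotence and the skip-if-transaction-ended rule compose into a small pattern, sketched below with invented names (query_end, server_notified) standing in for db_query_end_internal and the qmgr_end_query notification.

```c
#include <assert.h>

/* Sketch of an idempotent result destructor. */
enum { RES_OPEN, RES_CLOSED };

struct result { int status; int server_notified; };

static int query_end (struct result *r, int tran_already_ended)
{
  if (r->status == RES_CLOSED)
    return 0;                   /* second close is a no-op */
  if (!tran_already_ended)
    r->server_notified = 1;     /* qmgr_end_query analogue: let the
                                   server free the list file */
  r->status = RES_CLOSED;       /* cursor_close + free analogue */
  return 0;
}
```

Making the destructor idempotent is what lets both the application and the transaction-end path call it without double-free bugs.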
Schema operations: db_create_class, db_add_attribute
The schema surface in db_class.c is small and procedural.
db_create_class (name) calls smt_def_class to build an
SM_TEMPLATE, reserves the name with locator_reserve_class_name,
and commits via sm_update_class; the returned MOP is the new
class object. This sidesteps the SQL DDL path: SQL goes
parser → semantic → do_alter/do_create_entity →
schema-template manager, while the direct API skips the SQL parser
and builds the same SM_TEMPLATE directly. Both paths share
sm_update_class so they cannot diverge in schema semantics.
Attribute addition db_add_attribute (obj, name, domain, default)
defers to db_add_attribute_internal which opens a template via
smt_edit_class_mop, calls smt_add_attribute_w_dflt to record the
new column with its default, and commits with sm_update_class.
Domain strings such as "INTEGER" or "VARCHAR(64)" are parsed by
the schema template module, not by the SQL parser, so the direct
schema API skips the full Bison cost when adding columns
programmatically. Variants db_add_shared_attribute and
db_add_class_attribute exist for the OO-flavoured shared and
class-level attributes inherited from CUBRID’s OODB ancestry.
Transactions: db_commit_transaction, db_abort_transaction
The transaction API is the smallest part of the surface.
db_commit_transaction calls tran_commit (false) — the false is
the “retain locks” flag the API does not surface — and then calls
cubmethod::get_callback_handler ()->free_query_handle_all (true)
to forcibly close all prepared-statement handles bound to the SP
method-callback layer. This is the SP equivalent of JDBC closing all
PreparedStatement objects on commit when the connection is
non-holdable. db_abort_transaction is the same shape with
tran_abort instead. Both live in db_admin.c.
Error reporting: db_get_errors, er_*
CUBRID has two error channels and the DBI talks to both. The
session-scoped channel db_get_errors/db_get_next_error iterates
the parser’s error list (line and column per error) and also pushes
the message into the global error stack via er_set, so a caller
that only checks er_errid() still sees the syntax error. The
thread-local global error channel is the primary mechanism for
everything else: every db_* returns NO_ERROR/-1/er_errid()
and the caller is expected to call db_error_string /
db_error_code to extract the most recent error. The asserts
assert(er_errid() != NO_ERROR) sprinkled through db_vdb.c
enforce that returning -1 always sets er_errid().
CCI driver: the broker-side ux_* façade
The CCI wire protocol lives in src/broker/cas_protocol.h and the
server side is in src/broker/cas_function.c and
src/broker/cas_execute.c. The dispatch table server_fn_table[] in
broker/cas.c is a flat array indexed by (func_code - 1). Its
entries cover the JDBC surface end-to-end: transaction control
(fn_end_tran), prepare/execute (fn_prepare, fn_execute,
fn_prepare_and_execute, fn_execute_batch, fn_execute_array),
result navigation (fn_cursor, fn_fetch, fn_cursor_update,
fn_cursor_close, fn_next_result, fn_close_req_handle,
fn_get_generated_keys, fn_make_out_rs), schema introspection
(fn_schema_info, fn_get_db_version, fn_get_class_num_objs,
fn_get_attr_type_str, fn_parameter_info, fn_get_query_info),
object access (fn_oid_get, fn_oid_put, fn_oid,
fn_collection), session state (fn_get_db_parameter,
fn_set_db_parameter, fn_savepoint, fn_get_row_count,
fn_get_last_insert_id, fn_end_session,
fn_set_cas_change_mode, fn_check_cas), XA two-phase commit
(fn_xa_prepare, fn_xa_recover, fn_xa_end_tran), and LOB I/O
(fn_lob_new, fn_lob_write, fn_lob_read). Deprecated GLO slots
(CAS_FC_DEPRECATED1..4) and a CAS_FC_GET_SHARD_INFO placeholder
sit at the unused indexes as fn_deprecated / fn_not_supported.
The CAS process reads a func code off the socket, dispatches to the
matching fn_*, which marshals arguments out of argv[] and into
DBI-shaped C types, then calls one or more ux_* functions. The
ux_* layer is where the real work happens; the fn_* layer is
purely the wire glue.
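The flat-table dispatch can be sketched in a dozen lines. Opcodes and handler names below are invented stand-ins for the CAS_FC_* / fn_* families; only the (func_code - 1) indexing and the placeholder-for-deprecated-slots convention are taken from the text.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of a server_fn_table-style dispatch array. */
typedef int (*cas_fn) (int argc);

static int fn_end_tran_sk (int argc)      { return 100 + argc; }
static int fn_prepare_sk (int argc)       { return 200 + argc; }
static int fn_not_supported_sk (int argc) { (void) argc; return -1; }

static const cas_fn fn_table[] = {
  fn_end_tran_sk,       /* func_code 1 */
  fn_prepare_sk,        /* func_code 2 */
  fn_not_supported_sk,  /* func_code 3: deprecated slot keeps index */
};

static int dispatch (int func_code, int argc)
{
  size_t n = sizeof fn_table / sizeof fn_table[0];
  if (func_code < 1 || (size_t) func_code > n)
    return -2;                       /* unknown opcode */
  return fn_table[func_code - 1] (argc);
}
```

Keeping deprecated opcodes as placeholder entries rather than removing them is what preserves the index arithmetic across protocol versions.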
On the connect path, ux_database_connect maps the broker’s access
mode (shm_appl->access_mode ∈ {READ_ONLY_ACCESS_MODE,
SLAVE_ONLY_ACCESS_MODE, default}) to a client type
(DB_CLIENT_TYPE_BROKER, DB_CLIENT_TYPE_READ_ONLY_BROKER,
DB_CLIENT_TYPE_SLAVE_ONLY_BROKER, plus *_REPLICA_ONLY variants),
copies the broker’s preferred-hosts / connect-order settings into the
DBI globals via db_set_preferred_hosts / db_set_connect_order /
db_set_max_num_delayed_hosts_lookup, and calls db_restart_ex to
bring the database connection up. On success it copies database name
and host into as_info, sets last_connect_time, and calls
ux_get_default_setting to read the broker’s lock-timeout,
isolation level, and system parameters. The trigger toggle
(db_enable_trigger / db_disable_trigger) follows
shm_appl->trigger_action_flag.
Two CCI-isms stand out. First, the broker treats every CCI
connection as a stateful database-client connection but pools them
across many CCI connections from JDBC clients — when a JDBC client
connects, the broker hands it whatever CAS process is idle and that
CAS process is already attached to the database. Second, the CAS
“connection” to the database is per-process and lives across many
JDBC connections; ux_database_connect short-circuits when the
broker process is already attached to the right database with the
right user.
On the prepare path, ux_prepare allocates a T_SRV_HANDLE via
hm_new_srv_handle, opens a DB_SESSION with db_open_buffer, calls
db_compile_statement to drive the full client-side compile pipeline,
then serialises four pieces back through T_NET_BUF: the new
srv_h_id, the result-cache lifetime, the statement type byte, and
the bind-marker count. The column list (column names, attribute names,
spec names, types, sizes, domains) follows via
prepare_column_list_info_set. The DB_SESSION * is stashed inside
srv_handle->session so a later ux_execute can reach it again. The
flags CCI_PREPARE_INCLUDE_OID, CCI_PREPARE_XASL_CACHE_PINNED,
CCI_PREPARE_HOLDABLE, and CCI_PREPARE_UPDATABLE translate one-for-
one into db_include_oid and db_session_set_* calls before the
compile. Combining HOLDABLE with UPDATABLE is rejected up front
with CAS_ER_HOLDABLE_NOT_ALLOWED.
The CCI server-handle T_SRV_HANDLE is allocated by
hm_new_srv_handle (in cas_handle.c) and indexed by an integer
that goes back to the JDBC client; later ux_execute /
ux_fetch calls find the handle through hm_find_srv_handle. This
is the layer that lets one JDBC connection hold many open
PreparedStatement objects simultaneously, even though the
underlying CAS process only has one db_* connection.
The execute path is where the broker reaches deepest into DBI.
ux_execute:
- Calls hm_qresult_end to free any previous result on the same handle.
- Recovers the DB_SESSION * from srv_handle->session if the statement was prepared earlier; otherwise re-parses with db_open_buffer (srv_handle->sql_stmt) and re-compiles.
- Marshals argv[] into a DB_VALUE * array via make_bind_value, then pushes them into the parser via set_host_variables (which wraps db_push_values).
- Honours CCI_EXEC_RETURN_GENERATED_KEYS, CCI_EXEC_QUERY_INFO, srv_handle->is_holdable, and srv_handle->auto_commit_mode by calling the matching db_session_set_* setters.
- Runs db_execute_and_keep_statement (session, stmt_id, &result) — never db_execute_statement, because JDBC may re-execute.
- Switches the result to peek mode with db_query_set_copy_tplvalue (result, 0) so tuple-value pointers alias the list-file buffer rather than being deep-copied.
- If JDBC supplied max_row > 0, calls db_query_seek_tuple (result, max_row, 1) and trims the row count.
- Serialises the result row count, then column info via execute_info_set, then trailing protocol-version-gated fields (column lifetime info for PROTOCOL_V2, shard id for PROTOCOL_V5).
- If do_commit_after_execute returns true (per-statement autocommit on a non-SELECT), sets req_info->need_auto_commit = TRAN_AUTOCOMMIT so the dispatcher commits after the wire reply has been sent.
Notice the db_execute_and_keep_statement call, not
db_execute_statement: the broker keeps the parse tree alive across
re-executions of the same PreparedStatement. The flag
CCI_EXEC_QUERY_INFO is JDBC’s CUBRIDPreparedStatement.setQueryInfo
toggle; when set, the CAS recompiles the statement and writes the
plan to a temporary file so the JDBC client can pull it back. The
max_row cap is enforced client-side (broker-side, from the
server’s view) by seeking the cursor; the server does not know about
JDBC’s Statement.setMaxRows.
On the fetch path, ux_fetch validates the handle, dispatches a
stored-procedure call to fetch_call when CCI_PREPARE_CALL is set,
otherwise indexes a small dispatch table fetch_func[] by
srv_handle->schema_type (-1 for ordinary queries → fetch_result,
CCI_SCH_* constants → schema-info fetchers like fetch_class,
fetch_attribute, etc.). The leaf fetch_result seeks to cursor_pos via
db_query_seek_tuple, then loops fetch_count times calling cur_tuple,
which in turn calls db_query_get_tuple_value per column and serialises
each DB_VALUE into the wire buffer with dbval_to_net_buf.
```mermaid
sequenceDiagram
participant JDBC as JDBC PreparedStatement.executeQuery
participant CCI as CCI driver (TCP)
participant Broker as Broker daemon
participant CAS as CAS worker
participant DBI as db_compile_statement / db_execute_and_keep_statement
participant NetCl as network_cl
participant Server as cub_server
JDBC->>CCI: SQL + params
CCI->>Broker: CAS_FC_PREPARE + sql_stmt
Broker->>CAS: dispatch
CAS->>DBI: db_open_buffer + db_compile_statement
DBI->>NetCl: pt_class_pre_fetch (NRP locks)
NetCl->>Server: NET_SERVER_LC_FETCH_LOCKSET
DBI->>NetCl: do_prepare_statement
NetCl->>Server: NET_SERVER_QM_QUERY_PREPARE
Server-->>NetCl: XASL_ID
DBI-->>CAS: stmt_id
CAS-->>CCI: srv_h_id, num_markers, column types
JDBC->>CCI: setInt(1, 42)
JDBC->>CCI: executeQuery
CCI->>Broker: CAS_FC_EXECUTE + bind values
Broker->>CAS: dispatch
CAS->>DBI: db_push_values + db_execute_and_keep_statement
DBI->>NetCl: do_execute_statement
NetCl->>Server: NET_SERVER_QM_QUERY_EXECUTE
Server-->>NetCl: row count, list_id
DBI-->>CAS: DB_QUERY_RESULT
CAS-->>CCI: row count, column info, first batch
JDBC->>CCI: rs.next()
CCI->>Broker: CAS_FC_FETCH
Broker->>CAS: dispatch
CAS->>DBI: db_query_seek_tuple + db_query_get_tuple_value
DBI-->>CAS: DB_VALUE per column
CAS-->>CCI: serialised tuples
```
The takeaway is that there is no “CCI executor” — the broker’s CCI
layer is a thin façade over db_* and reuses the entire client-side
compile, execute, and fetch logic. The split that matters is at the
network boundary above the broker: a CSQL session (csql -u dba demodb)
calls db_* directly in-process, while a JDBC session goes through three
TCP sockets (JDBC ↔ broker, broker ↔ CAS, CAS ↔ server). The deepest
two of those sit on the same machine, and the innermost one, CAS ↔
server, uses the same NRP wire format documented in
cubrid-network-protocol.md.
Source Walkthrough
The DBI surface is spread across nine files in src/compat/. Key
symbols by responsibility:
Connection lifecycle (db_admin.c) — db_init, db_login,
db_restart, db_restart_ex, db_shutdown,
db_shutdown_without_request_to_server, db_ping_server,
db_disable_modification, db_enable_modification,
db_end_session, db_set_client_type, db_set_preferred_hosts,
db_get_row_count, db_get_last_insert_id.
Transaction control (db_admin.c) — db_commit_transaction,
db_abort_transaction, db_set_isolation, db_set_lock_timeout,
db_set_system_parameters, db_get_system_parameters.
Statement compile and execute (db_vdb.c) — db_open_buffer,
db_open_buffer_local, db_open_file,
db_compile_statement, db_compile_statement_local,
db_rewind_statement, db_statement_count,
db_open_buffer_and_compile_first_statement,
db_compile_and_execute_queries_internal,
db_execute_statement, db_execute_statement_local,
db_execute_and_keep_statement,
db_execute_and_keep_statement_local,
db_drop_statement, db_drop_all_statements,
db_close_session, db_close_session_local,
db_push_values, db_get_hostvars, db_get_lock_classes,
db_get_errors, db_get_next_error, db_get_warnings,
db_session_set_holdable, db_session_set_xasl_cache_pinned,
db_session_set_return_generated_keys,
db_set_statement_auto_commit, db_get_query_type_list,
db_invalidate_mvcc_snapshot_before_statement,
do_process_prepare_statement, do_get_prepared_statement_info,
do_cast_host_variables_to_expected_domain,
do_recompile_and_execute_prepared_statement,
do_process_deallocate_prepare.
Query result and cursor (db_query.c) — db_query_first_tuple,
db_query_next_tuple, db_query_prev_tuple, db_query_last_tuple,
db_query_seek_tuple, db_query_get_tplpos,
db_query_set_tplpos, db_query_free_tplpos,
db_query_get_tuple_value, db_query_get_tuple_value_by_name,
db_query_get_tuple_valuelist, db_query_tuple_count,
db_query_column_count, db_query_end, db_query_end_internal,
db_query_set_copy_tplvalue, db_is_client_cache_reusable,
db_query_prefetch_columns, db_free_query_format,
db_cp_query_type, or_pack_query_format,
or_unpack_query_format.
Schema definition (db_class.c) — db_create_class,
db_drop_class, db_drop_class_ex, db_rename_class,
db_add_attribute, db_add_shared_attribute,
db_add_class_attribute, db_drop_attribute, db_add_method,
db_add_class_method, db_add_super, db_drop_super,
db_change_default, db_constrain_non_null,
db_constrain_unique.
Object API (db_obj.c) — db_create, db_create_by_name,
db_get, db_get_shared, db_get_expression, db_put,
db_get_attribute_descriptor, db_get_method_descriptor,
db_create_trigger, db_get_serial_current_value,
db_get_serial_next_value.
Value primitives (db_macro.c, db_set.c) — db_make_int,
db_make_bigint, db_make_string, db_make_varchar,
db_make_date, db_make_datetime, db_make_timestamp,
db_make_set, db_make_multiset, db_make_sequence,
db_make_object, db_make_oid, db_make_null, db_value_clear,
db_value_clone, db_value_free, db_value_create,
db_value_domain_init.
CCI broker server-side (broker/cas_execute.c) —
ux_database_connect, ux_database_shutdown, ux_set_session_id,
ux_prepare, ux_execute, ux_execute_all, ux_execute_call,
ux_execute_batch, ux_execute_array, ux_fetch, ux_oid_get,
ux_cursor, ux_cursor_update, ux_cursor_close, ux_end_tran,
ux_end_session, ux_get_row_count, ux_get_last_insert_id,
fetch_result, fetch_call, prepare_column_list_info_set,
set_host_variables, make_bind_value, netval_to_dbval,
dbval_to_net_buf, do_commit_after_execute.
CCI broker dispatch (broker/cas.c, broker/cas_function.c) —
server_fn_table, server_func_name, process_request,
fn_end_tran, fn_prepare, fn_execute, fn_get_db_parameter,
fn_close_req_handle, fn_cursor, fn_fetch, fn_schema_info,
fn_oid_get, fn_oid_put, fn_oid, fn_collection,
fn_next_result, fn_execute_batch, fn_execute_array,
fn_cursor_update, fn_xa_prepare, fn_xa_recover,
fn_xa_end_tran, fn_con_close, fn_check_cas,
fn_get_generated_keys, fn_end_session, fn_get_row_count,
fn_get_last_insert_id, fn_prepare_and_execute,
fn_cursor_close.
Network client wrapper (communication/network_cl.c) —
net_client_init, net_client_final, net_client_request,
net_client_request_with_callback,
net_client_request_2recv_copyarea.
Boot (transaction/boot_cl.c, cross-referenced from
cubrid-boot.md) — boot_initialize_client,
boot_restart_client, boot_shutdown_client,
boot_client_initialize_css.
Position hints (as of updated: date)
| Symbol | File | Line |
|---|---|---|
db_open_buffer_local | src/compat/db_vdb.c | 214 |
db_open_buffer | src/compat/db_vdb.c | 246 |
db_compile_statement_local | src/compat/db_vdb.c | 531 |
db_compile_statement | src/compat/db_vdb.c | 851 |
db_get_errors | src/compat/db_vdb.c | 1011 |
db_push_values | src/compat/db_vdb.c | 1612 |
db_execute_and_keep_statement_local | src/compat/db_vdb.c | 1708 |
do_process_prepare_statement | src/compat/db_vdb.c | 2492 |
do_cast_host_variables_to_expected_domain | src/compat/db_vdb.c | 2797 |
do_recompile_and_execute_prepared_statement | src/compat/db_vdb.c | 3064 |
db_execute_and_keep_statement | src/compat/db_vdb.c | 3243 |
db_execute_statement_local | src/compat/db_vdb.c | 3276 |
db_execute_statement | src/compat/db_vdb.c | 3315 |
db_open_buffer_and_compile_first_statement | src/compat/db_vdb.c | 3344 |
db_compile_and_execute_queries_internal | src/compat/db_vdb.c | 3433 |
db_close_session_local | src/compat/db_vdb.c | 3581 |
db_close_session | src/compat/db_vdb.c | 3659 |
db_init | src/compat/db_admin.c | 170 |
db_login | src/compat/db_admin.c | 854 |
db_restart | src/compat/db_admin.c | 918 |
db_restart_ex | src/compat/db_admin.c | 982 |
db_shutdown | src/compat/db_admin.c | 1012 |
db_end_session | src/compat/db_admin.c | 1086 |
db_commit_transaction | src/compat/db_admin.c | 1169 |
db_abort_transaction | src/compat/db_admin.c | 1194 |
db_query_next_tuple | src/compat/db_query.c | 2188 |
db_query_first_tuple | src/compat/db_query.c | 2409 |
db_query_seek_tuple | src/compat/db_query.c | 2555 |
db_query_get_tuple_value | src/compat/db_query.c | 2978 |
db_query_get_tuple_value_by_name | src/compat/db_query.c | 3064 |
db_query_tuple_count | src/compat/db_query.c | 3194 |
db_query_end | src/compat/db_query.c | 3477 |
db_query_end_internal | src/compat/db_query.c | 3588 |
db_create_class | src/compat/db_class.c | 70 |
db_add_attribute_internal | src/compat/db_class.c | 184 |
db_add_attribute | src/compat/db_class.c | 248 |
db_create | src/compat/db_obj.c | 70 |
db_get | src/compat/db_obj.c | 234 |
db_put | src/compat/db_obj.c | 319 |
db_create_trigger | src/compat/db_obj.c | 1302 |
boot_initialize_client | src/transaction/boot_cl.c | 275 |
boot_restart_client | src/transaction/boot_cl.c | 690 |
boot_shutdown_client | src/transaction/boot_cl.c | 1352 |
net_client_init | src/communication/network_cl.c | 3657 |
net_client_request_with_callback | src/communication/network_cl.c | 1153 |
ux_database_connect | src/broker/cas_execute.c | 376 |
ux_database_shutdown | src/broker/cas_execute.c | 589 |
ux_prepare | src/broker/cas_execute.c | 618 |
ux_end_tran | src/broker/cas_execute.c | 874 |
ux_execute | src/broker/cas_execute.c | 992 |
ux_execute_all | src/broker/cas_execute.c | 1307 |
ux_execute_array | src/broker/cas_execute.c | 2086 |
ux_fetch | src/broker/cas_execute.c | 2442 |
ux_cursor | src/broker/cas_execute.c | 2583 |
ux_cursor_close | src/broker/cas_execute.c | 2732 |
fetch_result | src/broker/cas_execute.c | 5006 |
fetch_call | src/broker/cas_execute.c | 9018 |
server_fn_table | src/broker/cas.c | 75 |
fn_prepare | src/broker/cas_function.c | 231 |
fn_execute | src/broker/cas_function.c | 345 |
fn_prepare_and_execute | src/broker/cas_function.c | 661 |
fn_fetch | src/broker/cas_function.c | 959 |
struct db_session | src/compat/db_session.h | 26 |
Cross-check Notes
- The “session” word is overloaded. Three different things in CUBRID call
  themselves “session”: the parser session DB_SESSION documented here
  (scoped to one or more SQL statements); the client session established by
  db_login / db_restart (the process-global database connection); and the
  server session SESSION_STATE in src/session/session.c (the per-client
  server-side container for prepared-statement bindings, SET-variable
  bindings, last-insert-id). A DB_SESSION lives entirely on the client and
  dies on db_close_session; a SESSION_STATE lives entirely on the server and
  is keyed by the integer SESSION_ID the client carries in every NRP
  request. See cubrid-server-session.md.
- db_compile_statement returns 1-indexed statement numbers. The return is
  stmt_ndx + 1 so the caller can distinguish a successful first statement
  (returns 1) from “no more statements” (returns 0). The
  db_execute_statement family decrements the index internally; the line
  stmt_ndx--; near the top of db_execute_and_keep_statement_local is exactly
  this conversion.
- db_execute_statement frees the parse tree, db_execute_and_keep_statement
  doesn’t. This is the only behavioural difference between the two forms,
  but it determines whether the statement can be re-executed. The broker
  uses the “and-keep” form because JDBC PreparedStatement semantics require
  it.
- The CCI client library lives in a separate repo. The server-side CCI
  logic documented here is in src/broker/cas_*.c and ships with the CUBRID
  server source. The client-side CCI library (libcascci.so, what
  JDBC/Python/ODBC link against) lives in the separate cubrid-cci git
  submodule, which was not present in the worktree at the time of this
  analysis. Wire details here are derived from the server-side cas_*.c
  files and cas_protocol.h.
- Schema operations bypass MVCC at the session API. db_create_class and
  db_add_attribute produce a schema modification immediately; there is no
  “in-template” stage exposed at the API level. Multi-step changes from C
  code require manually composing templates with smt_def_class /
  smt_edit_class_mop / smt_add_attribute and committing once via
  sm_update_class.
- The do_prepare_statement server round trip is conditional. In
  db_compile_statement_local, the prepare phase runs only when
  PRM_ID_XASL_CACHE_MAX_ENTRIES > 0 and
  statement->flag.cannot_prepare == 0. Otherwise compilation stops at the
  Compiled stage and the executor takes the older do_statement path that
  builds an XASL stream every time. Note that stage[stmt_ndx] is set to
  StatementPreparedStage even when prepare was skipped — the name means
  “ready to execute”, not “registered with the XASL cache”.
- Cursor random access on T_CALL and T_OBJFETCH is degenerate. Those types
  hold exactly one tuple, and the cursor functions only move crs_pos between
  C_BEFORE / C_ON / C_AFTER. Full random-access semantics only apply to
  T_SELECT, which has a real list-file cursor underneath.
- The broker’s T_SRV_HANDLE and DBI’s DB_SESSION are not always 1:1. A
  handle prepared with is_prepared == TRUE owns its DB_SESSION until
  ux_cursor_close / ux_close_req_handle. A handle prepared with
  is_prepared == FALSE (because semantic check failed at prepare time but
  the broker wants to retry at execute time) opens a fresh DB_SESSION inside
  ux_execute.
- db_query_set_copy_tplvalue (result, 0) enables peek mode. By default the
  cursor copies tuple values out of the list file. The broker calls this
  after ux_execute with the 0 argument to switch to peek mode, avoiding a
  per-column malloc; this is safe because the broker serialises into the
  wire buffer immediately.
- The XASL cache invalidation retry is silent on the success path. When
  do_execute_statement returns ER_QPROC_INVALID_XASLNODE or
  ER_QPROC_XASLNODE_RECOMPILE_REQUESTED, the DBI clears the statement’s
  xasl_id, calls do_prepare_statement again, and retries. The caller only
  sees an error if recovery fails.
- db_invalidate_mvcc_snapshot_before_statement runs at every top-level
  execute call. This is what turns CUBRID into a per-statement READ
  COMMITTED engine. Skipping the wrapper (going straight to _local)
  preserves the old snapshot — the broker is careful to call the wrapper.
Open Questions
- The CCI client library in the cubrid-cci submodule was not present in
  this worktree, so the document covers the server-side CCI face but not
  the actual client-side cci_prepare, cci_execute, cci_fetch API that
  JDBC’s cubrid-jdbc calls into. The next pass should add a section on
  cci_handle.c and cci_query.c from the submodule.
- The relationship between db_session_set_holdable and the server-side
  holdable cursor list (SESSION_STATE.holdable_cursors) is not fully
  traced. Specifically, what frees a holdable cursor when the JDBC client
  forgets it — transaction commit, session timeout, or explicit
  db_query_end?
- db_execute_statement_local frees the parse tree at index stmt_ndx - 1
  after execution; the failure mode when a caller mixes
  db_execute_statement and db_execute_and_keep_statement inside one session
  and then calls db_rewind_statement is unclear.
- The exact NRP opcodes used by do_prepare_statement and
  do_execute_statement are in src/communication/network_interface_cl.c but
  were not enumerated here. The cross-reference to
  cubrid-network-protocol.md is by name only.
- ux_execute_all (multi-statement execute) is structurally similar to
  ux_execute but inserts savepoint logic so that a partial failure rolls
  back only that statement when autocommit is off. The exact savepoint name
  format and the interaction with ux_execute_batch / ux_execute_array were
  not analysed here.
Sources
This document was synthesised by reading CUBRID source directly:

- src/compat/db_admin.c — connection and transaction control.
- src/compat/db_vdb.c — statement compile/execute, host variables,
  prepared-statement support.
- src/compat/db_query.c — result set and cursor.
- src/compat/db_class.c — schema definition.
- src/compat/db_obj.c — object API.
- src/compat/db_session.h, db_query.h — struct definitions.
- src/transaction/boot_cl.c — boot_restart_client.
- src/communication/network_cl.c — net_client_* request family.
- src/broker/cas.c, cas_function.c — CCI dispatch table and fn_* wire glue.
- src/broker/cas_execute.c — the ux_* façade.
Cross-references in this code-analysis tree: cubrid-network-protocol.md
(NRP wire format), cubrid-server-session.md (SESSION_STATE),
cubrid-boot.md (boot_restart_client), cubrid-broker.md
(broker daemon and CAS), cubrid-query-rewrite.md (pt_compile /
mq_translate), cubrid-semantic-check.md (late-bind semantic
phases), cubrid-cursor.md (cursor_* family),
cubrid-xasl-cache.md (do_prepare_statement cache).
Theoretical references: X/Open CLI specification (1995); Database System Concepts (Silberschatz, Korth, Sudarshan, ch. 5 §“Embedded SQL”); Database Internals (Petrov, ch. 5 §“Transactions”); the ODBC Programmer’s Reference for the scrollable-cursor seek model; the Oracle Call Interface Programmer’s Guide as the closest external counterpart in shape and naming convention.