CUBRID Network Protocol — Connection Accept, NRP Dispatch, and Server-Side Request Handlers
Contents:
- Theoretical Background
- Common DBMS Design
- CUBRID’s Approach
- Source Walkthrough
- Cross-check Notes
- Open Questions
- Sources
Theoretical Background
A relational engine speaks the network in two layers, and conflating them is the most common source of bugs in DBMS RPC code. The lower layer is the wire framer: a length-prefixed packet header that delimits messages on a stream socket so the receiver knows how many bytes belong to the current request. The upper layer is the call dispatch: an opcode that selects which server-side function will consume the body and produce the reply. Database Internals (Petrov) and the classic RPC literature (Birrell & Nelson, Implementing Remote Procedure Calls, ACM TOCS 1984) give the model; every production DBMS — PostgreSQL, MySQL, Oracle, CUBRID — implements a variant of the same two layers.
Three independent design choices shape the resulting protocol and frame the rest of this document:
- Custom binary versus general RPC. A DBMS could use gRPC, Thrift, or even REST/JSON for its client/server channel. The reason all major engines reject this in favour of a custom binary protocol is twofold. First, the “values” being shipped — DB_VALUE in CUBRID, Datum in PostgreSQL, Field in MySQL — are tagged unions whose on-disk representation already exists; a generic serialiser would require translating to a portable schema (Protobuf, Thrift IDL) and back. Second, the throughput-critical paths are inner-loop fetches and bulk inserts where every nanosecond of marshalling cost is multiplied by the row count. CUBRID’s or_pack_value writes a DB_VALUE directly into the wire buffer in the same byte layout used by the heap manager, eliminating the intermediate copy.
- Length-prefixed framing versus message-typed framing. With length-prefixed framing every message starts with a fixed-size header that contains, at minimum, the body length. With message-typed framing every message starts with a single-byte tag that selects a parser; the parser then reads a length internally. PostgreSQL’s FE/BE protocol uses the latter (the message-type byte is the first thing on the wire); MySQL classic uses the former (a 4-byte (length, sequence) prefix on every packet); CUBRID uses the former (a NET_HEADER struct with buffer_size as the body length). Length-prefixed framing wins on symmetric reception — the receive loop is one piece of code that does not branch on message type — but loses when message kinds have radically different shapes (a handshake is unrelated to a row event). CUBRID navigates this by encoding all variations within the body.
- Stateless dispatch table versus generated stubs. Some engines use code generation: an IDL describes each call, a compiler emits client stub and server skeleton in C/C++, the linker stitches them into the binary. This gives type safety but adds a build step and couples client and server compilations. CUBRID takes the manual stub path: every server entry point has a hand-written network_interface_sr.cpp::s<name> handler that unpacks arguments, and a matching network_interface_cl.c::<name> client stub that packs them. The two are kept in sync by convention; the linkage is the NET_SERVER_* opcode. This trades static typing for a single choke-point — network.h — where every new RPC is declared, and makes hot-patching individual handlers trivial.
After these three choices are named, the rest of the CUBRID network protocol is a direct consequence of taking the binary, length-prefixed, manual-stub corner of the design space.
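To make the framing layer concrete, here is a minimal toy framer in C. The names (`frame_header`, `frame_encode`, `frame_decode`) are invented for illustration and are not CUBRID's actual NET_HEADER API; the point is only the length-prefix idea: a 4-byte big-endian length delimits each message, so the receiver always knows how many bytes belong to the current request.

```c
/* Minimal sketch of length-prefixed framing (hypothetical helper names,
 * not CUBRID's actual API): the sender writes a fixed-size header whose
 * body_size field is big-endian, then the body; the receiver reads the
 * header first and learns exactly how many body bytes to consume. */
#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Encode one frame into out[]; returns total bytes written. */
static size_t frame_encode (char *out, const char *body, uint32_t body_len)
{
  uint32_t be = htonl (body_len);        /* length prefix, network order */
  memcpy (out, &be, sizeof be);
  memcpy (out + sizeof be, body, body_len);
  return sizeof be + body_len;
}

/* Decode: parse the prefix, then copy exactly that many body bytes. */
static uint32_t frame_decode (const char *in, char *body_out)
{
  uint32_t be;
  memcpy (&be, in, sizeof be);
  uint32_t len = ntohl (be);
  memcpy (body_out, in + sizeof be, len);
  return len;
}
```

The receive loop for such a protocol never branches on message kind to find the frame boundary, which is exactly the "symmetric reception" property claimed above.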
Common DBMS Design
Below the textbook framing layer, every major client/server DBMS ships a similar handful of patterns. They are not in the original RPC papers; they are the engineering vocabulary that lives between the abstract protocol and the source.
A central enum of all RPC opcodes
The opcode space is a single enum with one member per server entry
point. PostgreSQL’s BackendMessageCode (in src/include/libpq/protocol.h),
MySQL’s enum_server_command (in include/my_command.h), and
CUBRID’s enum net_server_request (in src/communication/network.h)
are all the same idea: every new feature that adds a server-side
function adds exactly one opcode here, and the opcode value is part
of the wire-compatibility contract. Extending the enum at the end is
backward-compatible; reordering or removing values breaks every
older client.
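The append-only discipline is easy to see in a miniature version of such an enum (all names here are invented for illustration, not CUBRID's):

```c
/* Miniature wire-opcode enum: the numeric values are part of the
 * compatibility contract, so new opcodes are appended before the END
 * sentinel, never inserted or reordered. */
#include <assert.h>

enum demo_server_request
{
  DEMO_REQUEST_START = 0,   /* sentinel, never dispatched */
  DEMO_PING          = 1,
  DEMO_QUERY_EXECUTE = 2,
  DEMO_COMMIT        = 3,
  /* new opcodes go here, keeping existing values stable */
  DEMO_REQUEST_END          /* sentinel: also sizes the dispatch table */
};
```

Because the END sentinel doubles as the dispatch-table size, appending an opcode automatically grows the table on the next build while leaving every older value intact on the wire.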
A static dispatch table indexed by opcode
Once the opcode is read, the server selects a handler by indexing into
a function-pointer array. PostgreSQL’s PostgresMain switch statement
is the closest analog (the dispatch is hand-written rather than a
table); MySQL’s do_command switches on
COM_* codes; CUBRID’s net_Requests[NET_SERVER_REQUEST_END] is the
canonical table form — static struct net_request net_Requests[]
in network_sr.c is filled at startup by net_server_init() with
one row per opcode, each carrying a function pointer plus an
attribute bitmask (CHECK_DB_MODIFICATION, CHECK_AUTHORIZATION,
IN_TRANSACTION, …). The bitmask makes side-conditions —
“this RPC requires DBA privilege”, “this RPC implies an open
transaction” — declarative rather than scattered inside each handler.
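A stripped-down sketch of the table-plus-bitmask pattern, with invented names standing in for net_Requests and net_req_act:

```c
/* Sketch of an opcode-indexed dispatch table with declarative side
 * conditions, in the style of net_Requests[] (names are illustrative,
 * not CUBRID's). The dispatcher enforces the declared attributes once,
 * centrally, before the handler runs. */
#include <assert.h>

typedef int (*handler_fn) (const char *args);

enum req_act                       /* attribute bitmask, cf. net_req_act */
{
  REQ_CHECK_AUTHORIZATION = 1 << 0,
  REQ_IN_TRANSACTION      = 1 << 1,
};

struct req_entry
{
  int action_attribute;
  handler_fn fn;
};

static int handle_ping (const char *args)  { (void) args; return 0; }
static int handle_write (const char *args) { (void) args; return 1; }

static struct req_entry table[] = {
  [0] = { 0,                                            handle_ping  },
  [1] = { REQ_CHECK_AUTHORIZATION | REQ_IN_TRANSACTION, handle_write },
};

static int dispatch (int opcode, int caller_is_dba, const char *args)
{
  struct req_entry *e = &table[opcode];
  if ((e->action_attribute & REQ_CHECK_AUTHORIZATION) && !caller_is_dba)
    return -1;                     /* rejected before the handler runs */
  return e->fn (args);
}
```

The payoff is that a privilege rule changed in one place applies to every opcode that declares the bit, instead of being re-audited handler by handler.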
Symmetric pack/unpack helpers shared by client and server
The marshalling code is independent of the calling direction.
or_pack_int writes a 4-byte big-endian integer into a buffer and
returns the new pointer; or_unpack_int reads one and returns the
new pointer. The caller threads the pointer through a sequence of
calls, one per field, and never touches an offset directly. The same
header is included by client and server, so a client stub and its
matching server handler form a mirrored pair: the client’s or_pack_X
sequence is exactly the server’s or_unpack_X sequence in the same
order. PostgreSQL’s pq_send* / pq_get* and MySQL’s net_store_* /
net_field_length_ll are the same idiom.
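The idiom is small enough to reproduce whole. The toy helpers below (demo_pack_int / demo_unpack_int, invented names) mirror the or_pack_int / or_unpack_int contract: write or read four big-endian bytes and return the advanced pointer.

```c
/* Pointer-threading pack/unpack sketch: each helper returns the advanced
 * pointer, so stub code is a straight-line sequence of calls with no
 * explicit offset arithmetic (names mimic or_pack_int / or_unpack_int
 * but are illustrative, not CUBRID's implementation). */
#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>
#include <string.h>

static char *demo_pack_int (char *ptr, int v)
{
  uint32_t be = htonl ((uint32_t) v);
  memcpy (ptr, &be, 4);
  return ptr + 4;                  /* caller threads this into the next call */
}

static char *demo_unpack_int (char *ptr, int *v)
{
  uint32_t be;
  memcpy (&be, ptr, 4);
  *v = (int) ntohl (be);
  return ptr + 4;
}
```

The mirrored-pair property follows directly: a client stub's pack sequence and its server handler's unpack sequence are the same list of fields in the same order, each side threading the returned pointer forward.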
Per-call request id (RID) for response demultiplexing
Because a single connection may have multiple in-flight requests
(server-initiated callbacks during query execution, asynchronous
cancel), the response carries the same identifier as the request so
the client can match them. CUBRID’s unsigned short rid lives in the
NET_HEADER; the server writes it on every reply, the client matches
it against pending request_queue / data_queue entries. PostgreSQL
sidesteps this with strict request/response ordering on a single
connection; MySQL uses a 1-byte sequence number. CUBRID’s RID is a
16-bit counter rolled at connection level, so concurrent callbacks
inside one query do not collide.
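A rolling 16-bit counter of this kind can be sketched in a few lines. Whether CUBRID skips 0 on wraparound is not confirmed here; the skip below is a common convention (reserving 0 as "no request"), shown purely for illustration.

```c
/* Sketch of a per-connection 16-bit request-id allocator that wraps.
 * The skip-zero-on-wrap rule is an illustrative convention, not a
 * confirmed detail of css_get_request_id. */
#include <assert.h>

static unsigned short next_rid (unsigned short *counter)
{
  (*counter)++;
  if (*counter == 0)       /* reserve 0: skip it on wraparound */
    (*counter)++;
  return *counter;
}
```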
A worker pool that owns the dispatch loop
The server cannot dispatch on the I/O thread because the handler may
block on locks, page reads, or sub-RPCs (PL invocation). The standard
shape is a worker pool: the I/O thread reads the request, packages
it as a task, hands it to a queue; a worker thread pulls the task
and calls the handler. CUBRID uses two pools — the
cubconn::connection::worker for connection I/O (epoll-based) and
the transaction worker pool from cubthread for handler execution
— with the connection worker pushing handler invocations into the
transaction pool via push_task_into_worker_pool.
Theory ↔ CUBRID mapping
| Theoretical concept | CUBRID name |
|---|---|
| Wire framer header | NET_HEADER struct (9 fields, fixed size) in connection_defs.h |
| Length-prefixed body length | header.buffer_size (htonl/ntohl on the wire) |
| Packet kind tag | header.type ∈ {COMMAND_TYPE, DATA_TYPE, ABORT_TYPE, CLOSE_TYPE, ERROR_TYPE} |
| Per-call request id | header.request_id (16-bit, allocated by css_get_request_id) |
| Server function code (RPC opcode) | header.function_code (16-bit) + enum net_server_request |
| Static dispatch table | static struct net_request net_Requests[NET_SERVER_REQUEST_END] in network_sr.c |
| Action attribute bitmask | enum net_req_act { CHECK_DB_MODIFICATION, CHECK_AUTHORIZATION, SET_DIAGNOSTICS_INFO, IN_TRANSACTION, OUT_TRANSACTION } |
| Pack/unpack primitives | or_pack_int / or_unpack_int / or_pack_oid / or_pack_value / or_unpack_value |
| Per-call client stub | net_client_request* family in network_cl.c |
| Per-call server handler | s<module>_<verb> (e.g. slocator_force, sqmgr_execute_query) in network_interface_sr.cpp |
| Connection acceptor | cubconn::master::connector in master_connector.cpp (Unix-domain switch) |
| Connection worker pool | cubconn::connection::pool + cubconn::connection::worker (epoll-based) |
| Handler executor | transaction pool registered via REGISTER_WORKERPOOL, dispatched by css_internal_request_handler |
| Initial handshake | NET_SERVER_PING_WITH_HANDSHAKE = 999 (out-of-band opcode) |
| Capability bits | NET_CAP_* macros in network.h:304-311 |
| Client/server endianness check | get_endian_type () inline in network.h:337-343 |
CUBRID’s Approach
The network module has five moving parts: CSS framing (the
on-the-wire header and packet types), connection accept (how a
new client gets a worker), the worker pool (how requests are
read off the socket and dispatched), NRP dispatch (how an opcode
becomes a handler call), and the packer/unpacker (how arguments
get on and off the wire). We walk them in that order, then trace a
SELECT query end-to-end.
Overall structure
```mermaid
flowchart LR
  subgraph CLIENT["Client process<br/>(libcubridcs / CAS / csql)"]
    APP["Application<br/>or CAS worker"]
    CL_INTF["network_interface_cl.c<br/>per-call stub<br/>(qmgr_execute_query, ...)"]
    CL_NET["network_cl.c<br/>net_client_request*"]
    CL_CSS["connection_cl.cpp<br/>· connection_support.cpp<br/>(client-side CSS framing)"]
  end
  subgraph SERVER["Server process<br/>(cub_server)"]
    SR_CONN["connection_sr.c<br/>connection lifecycle"]
    SR_WORKER["connection_worker.cpp<br/>epoll workers<br/>(cubconn::connection)"]
    SR_DISP["network_sr.c<br/>net_Requests[opcode]<br/>dispatch table"]
    SR_HANDL["network_interface_sr.cpp<br/>per-call handler<br/>(sqmgr_execute_query, ...)"]
    SR_TRAN["transaction pool<br/>(cubthread workers)"]
  end
  subgraph MASTER["cub_master"]
    MASTER_LSN["TCP listening socket<br/>port = PRM_ID_TCP_PORT_ID"]
  end
  APP --> CL_INTF --> CL_NET --> CL_CSS
  CL_CSS -->|"NET_HEADER + body"| MASTER_LSN
  MASTER_LSN -->|"Unix-domain handoff"| SR_CONN
  SR_CONN --> SR_WORKER
  SR_WORKER --> SR_DISP
  SR_DISP --> SR_TRAN
  SR_TRAN --> SR_HANDL
  SR_HANDL -->|"reply NET_HEADER + body"| CL_CSS
```
CSS framing — the wire header
Every packet on the CUBRID client/server wire begins with a
fixed-size NET_HEADER (9 fields, htonl-encoded big-endian). The
struct is defined once and shared by both sides:
```c
// packet_header — connection_defs.h
typedef struct packet_header NET_HEADER;
struct packet_header
{
  int type;                 // COMMAND_TYPE | DATA_TYPE | ABORT_TYPE | CLOSE_TYPE | ERROR_TYPE
  int version;              // unused in current code; reserved
  int host_id;              // unused; reserved
  int transaction_id;       // server-assigned tran index for this request
  int request_id;           // per-connection RID for response demux
  int db_error;             // last error code piggy-backed
  short function_code;      // NET_SERVER_* opcode (when type == COMMAND_TYPE)
  unsigned short flags;     // NET_HEADER_FLAG_METHOD_MODE | NET_HEADER_FLAG_INVALIDATE_SNAPSHOT
  int buffer_size;          // length of the body that follows
};
```
The five type values are not arbitrary; they encode the kind of
packet so the receiver dispatcher can route without parsing the body:
```c
// css_packet_type — connection_defs.h:185-192
enum css_packet_type
{
  COMMAND_TYPE = 1,   // request from client to server (carries an opcode)
  DATA_TYPE = 2,      // payload data (request args or response data)
  ABORT_TYPE = 3,     // server tells client "your last request was aborted"
  CLOSE_TYPE = 4,     // half-close; this connection is going away
  ERROR_TYPE = 5      // server-side error, body is a packed error area
};
```
A single client request from end to end can produce multiple
packets. The minimum is a COMMAND_TYPE header followed (when
arg_size > 0) by a DATA_TYPE header plus body. The reply path uses
DATA_TYPE for the small fixed reply, optionally followed by more
DATA_TYPE packets for variable-size payloads, and ERROR_TYPE if
something went wrong. Every packet’s header carries the same
request_id so the client can correlate.
css_set_net_header() is the canonical writer:
```c
// css_set_net_header — connection_support.cpp:1326
void
css_set_net_header (NET_HEADER *header_p, int type, short function_code, int request_id,
                    int buffer_size, int transaction_id, int invalidate_snapshot, int db_error)
{
  unsigned short flags = 0;

  header_p->type = htonl (type);
  header_p->function_code = htons (function_code);
  header_p->request_id = htonl (request_id);
  header_p->buffer_size = htonl (buffer_size);
  header_p->transaction_id = htonl (transaction_id);
  header_p->db_error = htonl (db_error);

  if (invalidate_snapshot)
    flags |= NET_HEADER_FLAG_INVALIDATE_SNAPSHOT;
#if defined (CS_MODE)
  if (tran_is_in_libcas ())
    flags |= NET_HEADER_FLAG_METHOD_MODE;
#endif
  header_p->flags = htons (flags);
}
```
A complete request layout:
```mermaid
graph LR
  subgraph REQ["Client request (two packets)"]
    direction TB
    H1["NET_HEADER<br/>type=COMMAND_TYPE<br/>function_code=NET_SERVER_QM_QUERY_EXECUTE<br/>request_id=R<br/>buffer_size=0"]
    H2["NET_HEADER<br/>type=DATA_TYPE<br/>request_id=R<br/>buffer_size=N"]
    BODY["packed args<br/>(or_pack_∗ sequence)<br/>N bytes"]
    H1 --> H2 --> BODY
  end
  subgraph RESP["Server reply (one or more packets)"]
    direction TB
    H3["NET_HEADER<br/>type=DATA_TYPE<br/>request_id=R<br/>buffer_size=M"]
    REPLY["fixed-size reply<br/>(or_pack_int x N)<br/>M bytes"]
    H4["NET_HEADER<br/>type=DATA_TYPE<br/>request_id=R<br/>buffer_size=K"]
    DATA2["bulk data<br/>(packed list-id, page, plan)<br/>K bytes"]
    H3 --> REPLY --> H4 --> DATA2
  end
  REQ --> RESP
```
Connection accept — cub_master to cub_server handoff
CUBRID has an unusual two-process accept architecture. A separate
cub_master process owns the public TCP listening socket; the
actual database server (cub_server) does not bind a public port.
When a client connects, cub_master greets the connection, decides
which database the client wants, and hands the file descriptor to
the corresponding cub_server via a Unix-domain socket.
The server side of this protocol lives in
cubconn::master::connector (master_connector.cpp). At server
boot, net_server_start calls css_init, which constructs a
master::connector and calls connect → prepare_handshake → execute:
```cpp
// connector::run — master_connector.cpp:160
bool
connector::run (int port, std::string &server_name) noexcept
{
  m_master_port = port;
  m_server_name = server_name;

  if (!this->connect (port))                    // open TCP to cub_master
    return false;
  if (!this->prepare_handshake (server_name))   // tell master "I serve <name>"
    return false;
  if (!this->execute ())                        // run the epoll-based fwd loop
    return false;
  return true;
}
```
connect() opens a TCP socket to cub_master on the well-known port
(PRM_ID_TCP_PORT_ID); prepare_handshake() sends a server-side
registration packet that includes the database name and the server’s
PID; execute() enters an epoll loop that handles two streams:
- Master-side reception (handle_master_reception) — cub_master forwards new client connections by sending a Unix-domain socket message that carries the new client’s file descriptor. The server accepts the fd, allocates a fresh CSS_CONN_ENTRY from the pool, and dispatches the connection to the connection::worker pool.
- Worker statistics / shutdown control — secondary control messages from the master flow on the same channel.
The handshake from the master’s reply to a new server registration is
encoded as one of enum css_master_response (connection_defs.h):
```c
enum css_master_response
{
  SERVER_ALREADY_EXISTS = 0,
  SERVER_REQUEST_ACCEPTED = 1,      // legacy Unix-domain handoff
  DRIVER_NOT_FOUND = 2,
  SERVER_REQUEST_ACCEPTED_NEW = 3   // Windows/new-style: server opens its own port
};
```
For Linux/Unix the master always uses SERVER_REQUEST_ACCEPTED plus
Unix-domain fd passing; for Windows the master falls back to
SERVER_REQUEST_ACCEPTED_NEW because Windows lacks Unix-domain
sockets, and instead hands back a TCP port number that the server
owns directly (css_open_server_connection_socket in connection_sr.c).
The legacy server-only path is
css_connect_to_master_server (master_port_id, server_name, name_length)
in connection_sr.c:1066. This is the function that CUBRID’s older
in-process style used; the modern master::connector is the active
path.
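The fd handoff itself relies on the standard SCM_RIGHTS ancillary-data mechanism of Unix-domain sockets. The sketch below uses invented helper names and demonstrates the technique over a socketpair in one process, not cub_master's actual channel; the sendmsg/recvmsg shape, however, is the canonical way one process hands an open descriptor to another.

```c
/* Unix-domain fd passing via SCM_RIGHTS (Linux/Unix). Illustrative
 * helper names; the kernel duplicates the descriptor into the
 * receiving process when the ancillary message is delivered. */
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

static int send_fd (int channel, int fd_to_pass)
{
  char data = 'F';                               /* 1 byte of real payload */
  struct iovec iov = { .iov_base = &data, .iov_len = 1 };
  char ctrl[CMSG_SPACE (sizeof (int))];
  memset (ctrl, 0, sizeof ctrl);
  struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                        .msg_control = ctrl, .msg_controllen = sizeof ctrl };
  struct cmsghdr *cm = CMSG_FIRSTHDR (&msg);
  cm->cmsg_level = SOL_SOCKET;
  cm->cmsg_type = SCM_RIGHTS;                    /* "this message carries fds" */
  cm->cmsg_len = CMSG_LEN (sizeof (int));
  memcpy (CMSG_DATA (cm), &fd_to_pass, sizeof (int));
  return sendmsg (channel, &msg, 0) == 1 ? 0 : -1;
}

static int recv_fd (int channel)
{
  char data;
  struct iovec iov = { .iov_base = &data, .iov_len = 1 };
  char ctrl[CMSG_SPACE (sizeof (int))];
  struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                        .msg_control = ctrl, .msg_controllen = sizeof ctrl };
  if (recvmsg (channel, &msg, 0) != 1)
    return -1;
  struct cmsghdr *cm = CMSG_FIRSTHDR (&msg);
  int fd;
  memcpy (&fd, CMSG_DATA (cm), sizeof (int));    /* receiver's new fd number */
  return fd;
}
```

This is why the Windows path must differ: without SCM_RIGHTS, the master cannot transplant an accepted socket into the server, so it hands back a port for the server to own directly.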
```mermaid
sequenceDiagram
  participant CL as Client (CAS / csql)
  participant MA as cub_master
  participant SR as cub_server (this DB)
  participant WK as connection::worker
  Note over MA: bound on PRM_ID_TCP_PORT_ID
  CL->>MA: TCP connect
  CL->>MA: DATA_REQUEST + db_name (CSS framed)
  MA->>SR: Unix-domain msg (fd + db_name)
  SR->>SR: claim_context() / css_make_conn(fd)
  SR->>WK: dispatch(conn) → epoll register
  WK->>CL: ready (handshake reply via NET_SERVER_PING_WITH_HANDSHAKE = 999)
  CL->>WK: NET_SERVER_BO_REGISTER_CLIENT (real RPC starts)
  WK-->>CL: reply
```
NET_SERVER_PING_WITH_HANDSHAKE — the out-of-band opcode
The first request on every new connection is special: opcode 999
(NET_SERVER_PING_WITH_HANDSHAKE). This is not part of the regular
NET_SERVER_REQUEST_LIST enum range; it is reserved at the constant
999 so its numeric value is preserved across version bumps. The
handler server_ping_with_handshake in network_interface_sr.cpp:563
performs:
- Reads the client’s release string, capability flags, bit-platform (32 vs 64), client type, and host name.
- Checks compatibility via rel_get_net_compatible (client, server).
- Validates capability bits via check_client_capabilities.
- Reserves a connection slot via css_increment_num_conn (client_type).
- Replies with the server’s release string, capability bits, the server’s host name, and a REL_COMPATIBILITY verdict.
Because this opcode is the gate to all subsequent dispatch — every
later request can assume the client and server are version-compatible
— the dispatch in net_server_request short-circuits it before the
table lookup:
```c
// net_server_request — network_sr.c:791
if (request == NET_SERVER_PING_WITH_HANDSHAKE)
  {
    status = server_ping_with_handshake (thread_p, rid, buffer, size);
    goto end;
  }
else if (request == NET_SERVER_SHUTDOWN)
  {
    er_set (ER_WARNING_SEVERITY, ARG_FILE_LINE, ER_NET_SERVER_SHUTDOWN, 0);
    status = CSS_UNPLANNED_SHUTDOWN;
    goto end;
  }

if (request <= NET_SERVER_REQUEST_START || request >= NET_SERVER_REQUEST_END)
  {
    er_set (ER_WARNING_SEVERITY, ARG_FILE_LINE, ER_NET_UNKNOWN_SERVER_REQ, 0);
    return_error_to_client (thread_p, rid);
    goto end;
  }
```
Capability bits encoded in the handshake (network.h:304-311):
```c
#define NET_CAP_BACKWARD_COMPATIBLE   0x80000000
#define NET_CAP_FORWARD_COMPATIBLE    0x40000000
#define NET_CAP_INTERRUPT_ENABLED     0x00800000
#define NET_CAP_UPDATE_DISABLED       0x00008000
#define NET_CAP_REMOTE_DISABLED       0x00000080
#define NET_CAP_HA_REPL_DELAY         0x00000008
#define NET_CAP_HA_REPLICA            0x00000004
#define NET_CAP_HA_IGNORE_REPL_DELAY  0x00000002
```
A replica-only broker that connects to a non-replica server fails the
handshake with ER_NET_HS_HA_REPLICA_ONLY; a read-only client
connecting to a primary triggers an
ER_NET_HS_INCOMPAT_RW_MODE warning.
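A capability check of this kind reduces to masking the client's advertised bits against server-side policy. The sketch below reuses two of the NET_CAP_* values quoted above, but the policy function itself is illustrative, not CUBRID's exact check_client_capabilities logic.

```c
/* Sketch of a handshake capability check: the server inspects the
 * client's advertised bit mask and rejects incompatible pairings.
 * Constants mirror the NET_CAP_* values above; the rule shown
 * (replica-only client requires replica server) is illustrative. */
#include <assert.h>

#define CAP_INTERRUPT_ENABLED 0x00800000
#define CAP_HA_REPLICA        0x00000004

static int check_caps (unsigned int client_caps, int server_is_replica)
{
  /* a replica-only client must be talking to a replica server */
  if ((client_caps & CAP_HA_REPLICA) && !server_is_replica)
    return -1;                     /* cf. ER_NET_HS_HA_REPLICA_ONLY */
  return 0;
}
```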
Worker pool — epoll-driven reception, transaction pool dispatch
After the handshake, the connection is owned by an instance of
cubconn::connection::worker (declared in connection_worker.hpp).
The worker is not the thread that runs the SQL request handler
— that responsibility is split:
- Connection worker (epoll-based, one per N connections) reads CSS-framed packets off the socket, assembles complete request bodies, and enqueues a task to be executed by the transaction worker pool. The number of connection workers ranges between PRM_ID_CSS_MIN_CONNECTION_WORKER and PRM_ID_CSS_MAX_CONNECTION_WORKER.
- Transaction worker pool is registered globally:

```cpp
// server_support.c:548
REGISTER_WORKERPOOL (transaction, []()
{
  return (int) prm_get_integer_value (PRM_ID_TASK_WORKER);
});
```

Each request is wrapped in a task and pushed into this pool; the task runs css_internal_request_handler, which is the bridge from the connection layer to the dispatch table.
The connection worker uses a multi-producer/single-consumer
TBB queue per worker plus an eventfd for cross-thread wakeup
(connection_worker.hpp:236-241). The two queue types separate
hot-path messages (IMMEDIATE) from defer-able control messages
(LAZY) so a flood of new clients does not delay an in-flight
SEND_PACKET.
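The eventfd wakeup idiom is small enough to show in isolation (Linux-specific; helper names invented for illustration): the producer writes a counter increment, and the consumer's read atomically drains the counter and learns how many signals accumulated since the last wakeup.

```c
/* Sketch of the eventfd cross-thread wakeup idiom: producers increment
 * the kernel counter; the (normally epoll'd) consumer reads it, which
 * returns the accumulated count and resets it to zero. Shown
 * single-threaded here for clarity; names are illustrative. */
#include <assert.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

static int efd_signal (int efd)           /* producer side */
{
  uint64_t one = 1;
  return write (efd, &one, sizeof one) == sizeof one ? 0 : -1;
}

static uint64_t efd_drain (int efd)       /* consumer side: #signals since last drain */
{
  uint64_t n = 0;
  if (read (efd, &n, sizeof n) != sizeof n)
    return 0;
  return n;
}
```

Pairing one eventfd with a lock-free queue gives exactly the shape described above: the queue carries the messages, the eventfd only says "something is there", so N enqueues cost at most N cheap counter writes and one consumer wakeup.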
Once the connection worker has a complete request, it eventually
calls into css_internal_request_handler:
```c
// css_internal_request_handler — server_support.c:450
static int
css_internal_request_handler (THREAD_ENTRY & thread_ref, CSS_CONN_ENTRY & conn_ref)
{
  unsigned short rid;
  unsigned int eid;
  int request, rc, size = 0;
  char *buffer = NULL;
  int local_tran_index = thread_ref.tran_index;
  int status = CSS_UNPLANNED_SHUTDOWN;

  rc = css_receive_request (&conn_ref, &rid, &request, &size);
  if (rc == NO_ERRORS)
    {
      thread_ref.tran_index = conn_ref.get_tran_index ();
      pthread_mutex_unlock (&thread_ref.tran_index_lock);

      if (size)
        {
          rc = css_receive_data (&conn_ref, rid, &buffer, &size, -1);
          if (rc != NO_ERRORS)
            return status;
        }

      conn_ref.db_error = 0;
      eid = css_return_eid_from_conn (&conn_ref, rid);
      css_set_thread_info (&thread_ref, conn_ref.client_id, eid, conn_ref.get_tran_index (), request);

      // 3. Call server_request() function
      status = css_Server_request_handler (&thread_ref, eid, request, size, buffer);

      css_set_thread_info (&thread_ref, -1, 0, local_tran_index, -1);
    }
  ...
}
```
The function pointer css_Server_request_handler was registered at
boot via css_initialize_server_interfaces (net_server_request)
(server_support.c:516). This decoupling is deliberate: connection/
code owns the framing and worker management; communication/ code
owns the dispatch. Either could be replaced without touching the
other.
NRP dispatch — the net_Requests[] table
The dispatch table is a flat array of one row per opcode:
```c
// network_sr.c — top of file
static struct net_request net_Requests[NET_SERVER_REQUEST_END];
```
The net_request struct itself is intentionally small
(network_request_def.hpp):
```cpp
typedef void (*net_server_func) (THREAD_ENTRY *thrd, unsigned int rid, char *request, int reqlen);

struct net_request
{
  int action_attribute;                 // bitmask of net_req_act
  net_server_func processing_function;

  net_request () = default;
};
```
Every server entry point is registered exactly once, in
net_server_init(). A few representative rows:
```c
// net_server_init — network_sr.c:74
req_p = &net_Requests[NET_SERVER_PING];
req_p->processing_function = server_ping;

req_p = &net_Requests[NET_SERVER_BO_REGISTER_CLIENT];
req_p->processing_function = sboot_register_client;

req_p = &net_Requests[NET_SERVER_LC_FORCE];
req_p->action_attribute = (CHECK_DB_MODIFICATION | SET_DIAGNOSTICS_INFO | IN_TRANSACTION);
req_p->processing_function = slocator_force;

req_p = &net_Requests[NET_SERVER_QM_QUERY_EXECUTE];
req_p->action_attribute = (SET_DIAGNOSTICS_INFO | IN_TRANSACTION);
req_p->processing_function = sqmgr_execute_query;

req_p = &net_Requests[NET_SERVER_TM_SERVER_COMMIT];
req_p->action_attribute = (CHECK_DB_MODIFICATION | SET_DIAGNOSTICS_INFO | OUT_TRANSACTION);
req_p->processing_function = stran_server_commit;
```
Then net_server_request is the dispatcher proper. After the
out-of-band cases (handshake, shutdown) and bounds check, it consults
the row and applies the side conditions before calling the handler:
```c
// net_server_request — network_sr.c:791
if (net_Requests[request].action_attribute & CHECK_DB_MODIFICATION)
  {
    bool check = true;

    if (request == NET_SERVER_TM_SERVER_COMMIT)
      {
        if (!logtb_has_updated (thread_p))  // commit of a read-only txn doesn't need write check
          check = false;
      }
    if (check)
      {
        CHECK_MODIFICATION_NO_RETURN (thread_p, error_code);
        if (error_code != NO_ERROR)
          {
            return_error_to_client (thread_p, rid);
            css_send_abort_to_client (conn, rid);
            goto end;
          }
      }
  }

if (net_Requests[request].action_attribute & CHECK_AUTHORIZATION)
  {
    if (!logtb_am_i_dba_client (thread_p))
      {
        er_set (ER_ERROR_SEVERITY, ARG_FILE_LINE, ER_AU_DBA_ONLY, 1, "");
        return_error_to_client (thread_p, rid);
        css_send_abort_to_client (conn, rid);
        goto end;
      }
  }

if (net_Requests[request].action_attribute & IN_TRANSACTION)
  conn->in_transaction = true;

// call a request processing function
func = net_Requests[request].processing_function;

thread_p->push_resource_tracks ();
if (conn->invalidate_snapshot != 0)
  logtb_invalidate_snapshot_data (thread_p);

(*func) (thread_p, rid, buffer, size);

thread_p->pop_resource_tracks ();
pgbuf_unfix_all (thread_p);   // defence: don't leak page latches
```
action_attribute thus encodes orthogonal behaviours that would
otherwise have to be re-implemented inside every handler:
| Bit | Meaning |
|---|---|
| CHECK_DB_MODIFICATION | The DB must accept writes (rejects on read-only mode, replica, suspended HA log applier) |
| CHECK_AUTHORIZATION | Client must be DBA or owner; rejects with ER_AU_DBA_ONLY otherwise |
| SET_DIAGNOSTICS_INFO | Wraps the call with perfmon timer (PSTAT_*) and trace-log tap |
| IN_TRANSACTION | Marks the connection as having an open transaction (sets conn->in_transaction) |
| OUT_TRANSACTION | Clears the in-transaction flag at end of call (COMMIT / ABORT) |
Sample handlers — three representative shapes
Shape 1: tiny request, tiny reply. server_ping is the
canonical minimum. One int in, one int out:
```c
// server_ping — network_interface_sr.cpp:532
void
server_ping (THREAD_ENTRY *thread_p, unsigned int rid, char *request, int reqlen)
{
  OR_ALIGNED_BUF (OR_INT_SIZE) a_reply;
  char *reply = OR_ALIGNED_BUF_START (a_reply);
  int client_val, server_val;

  or_unpack_int (request, &client_val);

  server_val = 0;
  or_pack_int (reply, server_val);

  css_send_data_to_client (thread_p->conn_entry, rid, reply, OR_INT_SIZE);
}
```
Shape 2: variable request, mixed-size reply. sqp_get_server_info
returns a packed DB_VALUE payload whose size depends on requested
info bits:
```c
// sqp_get_server_info — network_interface_sr.cpp:7962 (condensed)
void
sqp_get_server_info (THREAD_ENTRY *thread_p, unsigned int rid, char *request, int reqlen)
{
  OR_ALIGNED_BUF (OR_INT_SIZE + OR_INT_SIZE) a_reply;
  char *reply = OR_ALIGNED_BUF_START (a_reply);
  char *ptr, *buffer = NULL;
  int buffer_length, server_info_bits, success = NO_ERROR;
  DB_VALUE dt_dbval, ts_dbval, lt_dbval;

  ptr = or_unpack_int (request, &server_info_bits);

  buffer_length = 0;
  if (server_info_bits & SI_SYS_DATETIME)
    {
      success = db_sys_date_and_epoch_time (&dt_dbval, &ts_dbval);
      buffer_length += OR_VALUE_ALIGNED_SIZE (&dt_dbval);
      buffer_length += OR_VALUE_ALIGNED_SIZE (&ts_dbval);
    }
  if (server_info_bits & SI_LOCAL_TRANSACTION_ID)
    {
      success = xtran_get_local_transaction_id (thread_p, &lt_dbval);
      buffer_length += OR_VALUE_ALIGNED_SIZE (&lt_dbval);
    }

  buffer = (char *) malloc (buffer_length);
  ptr = buffer;
  if (server_info_bits & SI_SYS_DATETIME)
    {
      ptr = or_pack_value (ptr, &dt_dbval);
      ptr = or_pack_value (ptr, &ts_dbval);
    }
  if (server_info_bits & SI_LOCAL_TRANSACTION_ID)
    ptr = or_pack_value (ptr, &lt_dbval);

  ptr = or_pack_int (reply, buffer_length);
  ptr = or_pack_int (ptr, success);

  css_send_reply_and_data_to_client (thread_p->conn_entry, rid, reply,
                                     OR_ALIGNED_BUF_SIZE (a_reply), buffer, buffer_length,
                                     std::move (deleter));
}
```
The two-stage send — first a small fixed reply that announces the incoming bulk size, then the bulk data itself — is the universal pattern for variable-size responses.
Shape 3: bulk-data request, multi-stage reply. slocator_force
ships a copy area of dirty objects from client to server, then sends
back updated descriptors (server may have assigned new OIDs):
```c
// slocator_force — network_interface_sr.cpp:1381 (condensed)
void
slocator_force (THREAD_ENTRY *thread_p, unsigned int rid, char *request, int reqlen)
{
  int num_objs, multi_update_flags, packed_desc_size, content_size, num_ignore_error_list;
  int success, csserror;
  LC_COPYAREA *copy_area = NULL;
  char *packed_desc = NULL, *content_ptr = NULL, *new_content_ptr = NULL;
  char *ptr;
  int ignore_error_list[-ER_LAST_ERROR];

  ptr = or_unpack_int (request, &num_objs);
  ptr = or_unpack_int (ptr, &multi_update_flags);
  ptr = or_unpack_int (ptr, &packed_desc_size);
  ptr = or_unpack_int (ptr, &content_size);
  ptr = or_unpack_int (ptr, &num_ignore_error_list);
  for (int i = 0; i < num_ignore_error_list; i++)
    ptr = or_unpack_int (ptr, &ignore_error_list[i]);

  copy_area = locator_recv_allocate_copyarea (num_objs, &content_ptr, content_size);

  // 1. pull the descriptor block from the client
  csserror = css_receive_data_from_client (thread_p->conn_entry, rid, &packed_desc, &packed_size);
  locator_unpack_copy_area_descriptor (num_objs, copy_area, packed_desc, -1);

  // 2. pull the content block
  if (content_size > 0)
    csserror = css_receive_data_from_client (thread_p->conn_entry, rid, &new_content_ptr, &received_size);

  // 3. run the actual server-side function
  success = xlocator_force (thread_p, copy_area, num_ignore_error_list, ignore_error_list);

  // 4. repack the descriptor (server may have written new OIDs into it)
  locator_pack_copy_area_descriptor (num_objs, copy_area, packed_desc, packed_desc_size);

  // 5. send the small reply + the updated descriptor as two pieces
  ptr = or_pack_int (reply, success);
  ptr = or_pack_int (ptr, packed_desc_size);
  ptr = or_pack_int (ptr, 0);
  css_send_reply_and_2_data_to_client (thread_p->conn_entry, rid, reply,
                                       OR_ALIGNED_BUF_SIZE (a_reply), packed_desc, packed_desc_size,
                                       NULL, 0, std::move (deleter));
}
```
The css_receive_data_from_client calls are inline pulls back over
the same connection — the server’s request body did not contain the
descriptor or content blob, only their sizes; the bulk arrives in
follow-up DATA_TYPE packets keyed by the same RID.
Packer / unpacker — or_pack_* and OR_PACK_*
The marshalling layer is a thin adapter over big-endian byte-by-byte
serialisation. or_pack_int advances the buffer pointer by 4:
```c
// from object_representation.h
extern char *or_pack_int (char *ptr, int number);
extern char *or_pack_int64 (char *ptr, INT64 number);
extern char *or_pack_string (char *ptr, const char *string);
extern char *or_pack_oid (char *ptr, const OID *oid);
extern char *or_pack_value (char *buf, DB_VALUE *value);       // !! the heavyweight one

extern char *or_unpack_int (char *ptr, int *number);
extern char *or_unpack_string (char *ptr, char **string);
extern char *or_unpack_oid (char *ptr, OID *oid);
extern char *or_unpack_value (const char *buf, DB_VALUE *value);
```
The pointer-threading idiom is universal in CUBRID stub code: each
call’s return value is the next call’s input. There is no offset
arithmetic, no memcpy with hand-computed sizes; the packer hides
both. This is the same pattern PostgreSQL uses with its StringInfo
buffer (though PostgreSQL keeps the offset inside the buffer struct
rather than passing a moving pointer).
For DB_VALUE (the universal value type — see dbtype_def.h),
or_pack_value writes:
```text
+-----------------+--------------------+----------------+
| domain header   | nullness flag      | value bytes    |
| (variable size) | (1 byte, in domain | (depends on    |
|                 | header, encoded    | domain type)   |
|                 | via or_packed_     |                |
|                 | domain_size)       |                |
+-----------------+--------------------+----------------+
```
The domain header itself is variable-length: a bit-packed int that
encodes the domain type tag (DB_TYPE_INTEGER, DB_TYPE_VARCHAR, …),
extended-domain flag, collation, precision, scale, etc. This means
the receiver cannot know the value’s byte length until it has parsed
the domain header. The trade-off: tiny on the wire for primitive
types (a DB_TYPE_INTEGER value packs to roughly 5 bytes including
header), but parsing-stateful (the receiver must read the header
first to know how to read the body).
Helper macros OR_INT_SIZE = 4, OR_OID_SIZE = 8, OR_VALUE_ALIGNED_SIZE,
and OR_ALIGNED_BUF (a stack-buffer-with-alignment macro) appear in
every stub to size the per-call argument and reply buffers.
The OR_BUF struct (object_representation.h:1029) is a higher-level
abstraction used inside heap and B-tree code — it encapsulates the
buffer pointer, end-of-buffer, and overflow flag. Network code
generally uses the raw char * pointer threading; OR_BUF is for
storage-side packing where overflow checks matter more.
Client stub — net_client_request_* family
The client side mirrors the server, opcode-for-opcode. A request is:
- allocate a fixed-size argument buffer using OR_ALIGNED_BUF,
- pack the args via or_pack_*,
- allocate a fixed-size reply buffer,
- call the right net_client_request_* variant,
- unpack the reply via or_unpack_*,
- translate the result.
The dispatcher is net_client_request_internal (network_cl.c:495):
```c
// net_client_request_internal — network_cl.c:495 (condensed)
static int
net_client_request_internal (int request, char *argbuf, int argsize,
                             char *replybuf, int replysize,
                             char *databuf, int datasize,
                             char *replydata, int replydatasize)
{
  unsigned int rc;
  int size, error = 0;
  char *reply = NULL;

  if (net_Server_name[0] == '\0')     // not connected
    {
      er_set (ER_ERROR_SEVERITY, ARG_FILE_LINE, ER_NET_SERVER_CRASHED, 0);
      return -1;
    }

  rc = __gv_cvar.css_send_req_to_server (net_Server_host, request,
                                         argbuf, argsize, databuf, datasize,
                                         replybuf, replysize);
  if (rc == 0)
    return set_server_error (__gv_cvar.css_get_errno ());

  if (replydata != NULL)
    __gv_cvar.css_queue_receive_data_buffer (rc, replydata, replydatasize);

  error = __gv_cvar.css_receive_data_from_server (rc, &reply, &size);
  if (error != NO_ERROR)
    return set_server_error (error);
  error = COMPARE_SIZE_AND_BUFFER (&replysize, size, &replybuf, reply);

  if (replydata != NULL)
    {
      error = __gv_cvar.css_receive_data_from_server (rc, &reply, &size);
      if (error == NO_ERROR)
        error = COMPARE_SIZE_AND_BUFFER (&replydatasize, size, &replydata, reply);
    }
  return error;
}
```

The __gv_cvar indirection deserves a note: it is a global vtable of
function pointers (css_send_req_to_server, css_receive_data_from_server,
css_queue_receive_data_buffer, …) so that the same client stubs
work in both CS_MODE (real client/server, calls go to TCP) and
SA_MODE (standalone — client and server linked into one process,
calls short-circuit through an in-process queue). The vtable is
populated at link time by whichever mode-specific connection_cl.cpp
or its standalone equivalent gets compiled in.
For the higher-level call shapes — request with bulk reply,
request with callbacks, request with stream reply — network_cl.c
provides specialised wrappers:
| Function | Use |
|---|---|
net_client_request_no_reply | One-shot fire-and-forget (e.g. interrupt) |
net_client_request | Standard request/reply |
net_client_request_with_callback | Server may send back-channel callbacks during processing (queries) |
net_client_request_recv_copyarea | Reply contains a LC_COPYAREA payload |
net_client_request_method_callback | Server invokes a client-side method (legacy stored-procedure path) |
net_client_request_with_logwr_context | Replication log-writer streaming |
net_client_request_recv_stream | Open-ended streaming reply (e.g. loaddb progress) |
Each handler shape on the server side has a matching wrapper on the client side.
Client stub example — qmgr_execute_query
The symmetry of client and server is clearest by reading the same
RPC on both sides. Server-side is sqmgr_execute_query (above);
client-side:
```c
// qmgr_execute_query — network_interface_cl.c:6916 (condensed)
QFILE_LIST_ID *
qmgr_execute_query (const XASL_ID *xasl_id, QUERY_ID *query_idp, int dbval_cnt,
                    const DB_VALUE *dbvals, QUERY_FLAG flag, ...)
{
  QFILE_LIST_ID *list_id = NULL;
  int req_error;
  char *request, *reply, *senddata = NULL;
  OR_ALIGNED_BUF (OR_XASL_ID_SIZE + OR_INT_SIZE * 5 + ...) a_request;
  OR_ALIGNED_BUF (OR_INT_SIZE * 7 + OR_PTR_ALIGNED_SIZE + OR_CACHE_TIME_SIZE) a_reply;

  request = OR_ALIGNED_BUF_START (a_request);
  reply = OR_ALIGNED_BUF_START (a_reply);

  /* 1. pack DB_VALUE host vars into bulk send buffer */
  for (int i = 0; i < dbval_cnt; i++)
    senddata_size += OR_VALUE_ALIGNED_SIZE (&dbvals[i]);
  senddata = (char *) malloc (senddata_size);
  ptr = senddata;
  for (int i = 0; i < dbval_cnt; i++)
    ptr = or_pack_db_value (ptr, (DB_VALUE *) &dbvals[i]);

  /* 2. pack the small fixed args into the request buffer */
  ptr = request;
  OR_PACK_XASL_ID (ptr, xasl_id);
  ptr = or_pack_int (ptr, dbval_cnt);
  ptr = or_pack_int (ptr, senddata_size);
  ptr = or_pack_int (ptr, flag);
  OR_PACK_CACHE_TIME (ptr, clt_cache_time);
  ptr = or_pack_int (ptr, query_timeout);

  /* 3. send + receive (callback variant: server may issue method callbacks back to us) */
  req_error = net_client_request_with_callback (NET_SERVER_QM_QUERY_EXECUTE,
                                                request, request_len,
                                                reply, OR_ALIGNED_BUF_SIZE (a_reply),
                                                senddata, senddata_size, ...);

  /* 4. unpack the reply */
  ptr = or_unpack_ptr (reply + OR_INT_SIZE * 4, query_idp);
  OR_UNPACK_CACHE_TIME (ptr, &local_srv_cache_time);
  ...
  return list_id;
}
```

The or_pack_* sequence in step 2 is byte-for-byte the same sequence
as or_unpack_* in sqmgr_execute_query (in the same order: XASL_ID,
dbval_cnt, data_size, query_flag, cache_time, query_timeout). Any
divergence breaks the wire.
Error propagation
Server-side errors flow back through a parallel channel:
- The handler calls er_set (ER_ERROR_SEVERITY, ARG_FILE_LINE, ER_*, ...), which records the error in the thread-local error area (er_set lives in error_manager.c).
- The handler then calls return_error_to_client (thread_p, rid), which serialises the error area via er_get_area_error() and sends it as an ERROR_TYPE packet (css_send_error).
- The client-side net_client_request_internal reads the ERROR_TYPE packet and calls set_server_error(). For most enum css_error_code values the client maps to ER_NET_SERVER_CRASHED; for special server-rejection codes (ER_DB_NO_MODIFICATIONS, ER_AU_DBA_ONLY) the original error is preserved.
- er_set_with_oserror in set_server_error() stamps errno into the propagated error so the client can discriminate “server killed my socket” from “server returned a logical error”.
For abort paths (deadlock victim, query interrupted), the server uses
css_send_abort_to_client (conn, rid) to send an ABORT_TYPE packet
without an error payload; the client recognises ABORT_TYPE as
“the request was rejected, see the next error packet for details”.
End-to-end trace — a SELECT query
```mermaid
sequenceDiagram
  participant CL as client
  participant CSTUB as qmgr_execute_query<br/>(network_interface_cl.c)
  participant CNET as net_client_request_with_callback<br/>(network_cl.c)
  participant WIRE as TCP / Unix-domain<br/>NET_HEADER framing
  participant SWORK as connection::worker<br/>(connection_worker.cpp)
  participant SDISP as net_server_request<br/>(network_sr.c)
  participant SHND as sqmgr_execute_query<br/>(network_interface_sr.cpp)
  CL->>CSTUB: qmgr_execute_query(xasl_id, dbvals, ...)
  CSTUB->>CSTUB: or_pack_value(senddata, dbvals)<br/>or_pack_int(...)
  CSTUB->>CNET: net_client_request_with_callback(<br/>  NET_SERVER_QM_QUERY_EXECUTE, req, replybuf, senddata)
  CNET->>WIRE: NET_HEADER{type=COMMAND, op=QM_QUERY_EXECUTE, rid=R}
  CNET->>WIRE: NET_HEADER{type=DATA, rid=R} + req body
  CNET->>WIRE: NET_HEADER{type=DATA, rid=R} + senddata body
  WIRE->>SWORK: epoll_wait → readv
  SWORK->>SDISP: enqueue task → cubthread::transaction worker picks up
  SDISP->>SDISP: net_Requests[NET_SERVER_QM_QUERY_EXECUTE]<br/>action_attribute = SET_DIAGNOSTICS_INFO | IN_TRANSACTION
  SDISP->>SHND: sqmgr_execute_query(thread_p, rid, request, reqlen)
  SHND->>SHND: OR_UNPACK_XASL_ID(...)<br/>or_unpack_int(...)<br/>css_receive_data_from_client → host vars
  SHND->>SHND: xqmgr_execute_query(...) → list_id
  SHND->>WIRE: NET_HEADER{type=DATA, rid=R} + reply (success, size, query_id)
  SHND->>WIRE: NET_HEADER{type=DATA, rid=R} + list_id payload
  SHND->>WIRE: NET_HEADER{type=DATA, rid=R} + page0 payload
  WIRE->>CNET: read packets, match rid, deliver to caller's reply/replydata buffers
  CNET->>CSTUB: return; reply contains query_id, list_id ptr
  CSTUB->>CL: QFILE_LIST_ID *
```
A few subtleties of this flow worth naming:
- The opcode (function_code field of the header) is only meaningful for COMMAND_TYPE packets. For DATA_TYPE packets, the receiver identifies the message by request_id and looks up which buffer it was queued into.
- host vars (the query parameters) ride a separate DATA_TYPE packet from the request body (which carries XASL_ID, dbval_cnt, query_flag). Splitting them lets the small request body share an OR_ALIGNED_BUF of fixed size while the bulk parameters fly in their own buffer.
- The server may emit multiple DATA_TYPE reply packets — one for the small reply, one for the result list_id, one for the first result page. The client's net_client_request_with_callback knows how many to expect from the call's signature.
- Mid-flight, the server can issue a callback request back to the client (e.g. a method invocation, a user-input prompt, a console output). These are encoded as QUERY_SERVER_REQUEST values (connection_defs.h:313): {QUERY_END, METHOD_CALL, ASYNC_OBTAIN_USER_INPUT, GET_NEXT_LOG_PAGES, END_CALLBACK, CONSOLE_OUTPUT}.
Source Walkthrough
Connection lifecycle (server side)
| Symbol | File | Role |
|---|---|---|
CSS_CONN_ENTRY | connection_defs.h | Per-connection state (fd, request_id, status, transaction_id, queues) |
css_initialize_conn | connection_sr.c | Reset a CSS_CONN_ENTRY for reuse from the pool |
css_make_conn | connection_sr.c | Allocate a CSS_CONN_ENTRY and init its lists |
css_init_conn_list | connection_sr.c | Boot-time creation of the connection-entry array |
css_shutdown_conn | connection_sr.c | Tear down on disconnect; finalise all lists; free version string |
css_connect_to_master_server | connection_sr.c | Legacy server-to-master registration (Unix-domain or new-style) |
css_set_proc_register | connection_sr.c | Build the CSS_SERVER_PROC_REGISTER payload sent at registration |
cubconn::master::connector::run | master_connector.cpp | Modern entry: connect → handshake → execute the master forwarding loop |
connector::handle_master_reception | master_connector.cpp | Receive a forwarded fd from cub_master, dispatch to worker pool |
cubconn::connection::pool | connection_pool.{cpp,hpp} | Free-list of context objects + workers; claim_context/retire_context |
cubconn::connection::worker | connection_worker.{cpp,hpp} | Per-worker epoll loop; reads CSS packets, enqueues request tasks |
worker::handle_command_header_packet | connection_worker.cpp | Read NET_HEADER, classify as command/data/error/abort/close |
worker::handle_data_packet | connection_worker.cpp | Match by RID, deliver into queued user buffer |
worker::push_task_into_worker_pool | connection_worker.cpp | Hand the assembled request to the transaction worker pool |
css_internal_request_handler | server_support.c | Bridge: unpack from the connection, call css_Server_request_handler |
css_initialize_server_interfaces | server_support.c | Boot-time install of the request-handler function pointer |
css_init | server_support.c | Server’s network main: build pool, register transaction workers, run |
css_pack_server_name | server_support.c | Encode (server name + db version + bit-platform) into a registration blob |
NRP table and dispatch
Section titled “NRP table and dispatch”| Symbol | File | Role |
|---|---|---|
enum net_server_request | network.h | The opcode enum; one value per server entry |
NET_SERVER_REQUEST_LIST macro | network.h | X-macro form used to expand both enum + name table |
NET_SERVER_PING_WITH_HANDSHAKE = 999 | network.h | Out-of-band opcode; preserved across versions |
NET_CAP_* capability bits | network.h:304-311 | Negotiated at handshake; gate replica/read-only/interrupt features |
struct net_request | network_request_def.hpp | (action_attribute, processing_function) row of the dispatch table |
enum net_req_act | network_request_def.hpp | Bitmask: CHECK_DB_MODIFICATION / CHECK_AUTHORIZATION / etc. |
net_Requests[] (static) | network_sr.c | The dispatch table itself |
net_server_init | network_sr.c:74 | Populate net_Requests[opcode] for every opcode |
net_server_request | network_sr.c:791 | The dispatcher: bounds-check, side-conditions, call handler |
net_server_start | network_sr.c:1058 | Server main(): er_init → cubthread → boot_restart_server → css_init |
net_server_conn_down | network_sr.c:1040 | Callback when a client connection drops; unregisters the client |
net_server_wakeup_workers | network_sr.c:927 | Used during shutdown to interrupt threads holding a tran index |
get_net_request_name | network_sr.c | Reverse-lookup (opcode → string) for log messages |
Sample server-side handlers
Section titled “Sample server-side handlers”| Symbol | File | Role |
|---|---|---|
server_ping | network_interface_sr.cpp:532 | Trivial handler: int in, int out |
server_ping_with_handshake | network_interface_sr.cpp:563 | Initial handshake; checks bit-platform, capabilities, version |
sboot_register_client | network_interface_sr.cpp:3760 | Per-connection registration after handshake |
sqp_get_server_info | network_interface_sr.cpp:7962 | Fetch sysdate / local txn id; multi-DB_VALUE reply |
slocator_fetch | network_interface_sr.cpp:671 | Fetch one object by OID |
slocator_force | network_interface_sr.cpp:1381 | Bulk DML: copy area in, descriptor out |
sqmgr_prepare_query | network_interface_sr.cpp:5107 | Prepare a query: returns XASL_ID |
sqmgr_execute_query | network_interface_sr.cpp:5399 | Execute prepared query: returns query_id, list_id, page0 |
stran_server_commit | network_interface_sr.cpp (via dispatch table) | Commit current transaction; marks OUT_TRANSACTION |
return_error_to_client | network_interface_sr.cpp (helper) | Wrap er_get_area_error and call css_send_error |
Packer / unpacker
Section titled “Packer / unpacker”| Symbol | File | Role |
|---|---|---|
or_pack_int / or_unpack_int | object_representation.h | 4-byte big-endian int |
or_pack_int64 / or_unpack_int64 | object_representation.h | 8-byte int |
or_pack_string / or_unpack_string | object_representation.h | length-prefixed C string |
or_pack_oid / or_unpack_oid | object_representation.h | 8-byte (volid, pageid, slotid) tuple |
or_pack_value / or_unpack_value | object_representation.h | DB_VALUE (domain header + null flag + value bytes) |
or_packed_string_length | object_representation.h | Sizing helper for variable-length packing |
OR_VALUE_ALIGNED_SIZE | object_representation.h | Macro: alignment-padded byte size of a DB_VALUE |
OR_ALIGNED_BUF | object_representation.h | Stack buffer + start pointer with required alignment |
OR_PACK_XASL_ID / OR_UNPACK_XASL_ID | object_representation.h | Composite for XASL_ID (sha1 + cache_flag + temp_file_id) |
OR_PACK_CACHE_TIME / OR_UNPACK_CACHE_TIME | object_representation.h | Composite for CACHE_TIME (sec + usec) |
OR_INT_SIZE / OR_OID_SIZE | object_representation.h | Sizing constants used in every stub |
OR_BUF | object_representation.h:1029 | Higher-level buffer struct used in heap/btree pack code |
Client stubs
Section titled “Client stubs”| Symbol | File | Role |
|---|---|---|
net_client_init | network_cl.c:3657 | Initial connect: set net_Server_host/name, do handshake |
net_client_request_internal | network_cl.c:495 | Core send/receive over the __gv_cvar vtable |
net_client_request | network_cl.c:587 | Standard wrapper |
net_client_request_with_callback | network_cl.c:1153 | Variant that handles server-initiated callbacks during the call |
net_client_request_recv_copyarea | network_cl.c:2317 | Variant for replies containing an LC_COPYAREA |
net_client_request_with_logwr_context | network_cl.c:2072 | Variant for log-writer streaming |
client_capabilities | network_cl.c:235 | Build the local NET_CAP_* bitmask |
check_server_capabilities | network_cl.c:259 | Reconcile client/server capability bits at handshake |
set_server_error | network_cl.c | Map enum css_error_code to ER_NET_* and propagate via er_set |
locator_force | network_interface_cl.c:697 | Client stub paired with slocator_force |
qmgr_execute_query | network_interface_cl.c:6916 | Client stub paired with sqmgr_execute_query |
locator_fetch | network_interface_cl.c:271 | Client stub paired with slocator_fetch |
Position hints (as of 2026-04-30)
| Symbol | File | Approx. line |
|---|---|---|
enum net_server_request | src/communication/network.h | 289 |
NET_SERVER_PING_WITH_HANDSHAKE = 999 | src/communication/network.h | 300 |
NET_CAP_BACKWARD_COMPATIBLE | src/communication/network.h | 304 |
get_endian_type | src/communication/network.h | 337 |
struct packet_header (NET_HEADER) | src/connection/connection_defs.h | 382 |
enum css_packet_type | src/connection/connection_defs.h | 185 |
enum css_command_type | src/connection/connection_defs.h | 67 |
struct css_conn_entry | src/connection/connection_defs.h | 437 |
enum net_req_act | src/communication/network_request_def.hpp | 32 |
struct net_request | src/communication/network_request_def.hpp | 43 |
net_server_init | src/communication/network_sr.c | 74 |
net_server_request | src/communication/network_sr.c | 791 |
net_server_start | src/communication/network_sr.c | 1058 |
net_Requests[] (table) | src/communication/network_sr.c | 68 |
server_ping | src/communication/network_interface_sr.cpp | 532 |
server_ping_with_handshake | src/communication/network_interface_sr.cpp | 563 |
slocator_fetch | src/communication/network_interface_sr.cpp | 671 |
slocator_force | src/communication/network_interface_sr.cpp | 1381 |
sboot_register_client | src/communication/network_interface_sr.cpp | 3760 |
sqmgr_prepare_query | src/communication/network_interface_sr.cpp | 5107 |
sqmgr_execute_query | src/communication/network_interface_sr.cpp | 5399 |
sqp_get_server_info | src/communication/network_interface_sr.cpp | 7962 |
css_initialize_conn | src/connection/connection_sr.c | 255 |
css_init_conn_list | src/connection/connection_sr.c | 420 |
css_make_conn | src/connection/connection_sr.c | 577 |
css_connect_to_master_server | src/connection/connection_sr.c | 1066 |
css_read_header | src/connection/connection_sr.c | 1428 |
css_receive_request | src/connection/connection_sr.c | 1470 |
css_receive_data | src/connection/connection_sr.c | 1487 |
css_internal_request_handler | src/connection/server_support.c | 450 |
css_initialize_server_interfaces | src/connection/server_support.c | 516 |
css_init | src/connection/server_support.c | 554 |
css_send_data_to_client | src/connection/server_support.c | 708 |
css_pack_server_name | src/connection/server_support.c | 1417 |
css_set_net_header | src/connection/connection_support.cpp | 1326 |
css_send_request_with_data_buffer | src/connection/connection_support.cpp | 1367 |
css_send_request | src/connection/connection_support.cpp | 1468 |
css_send_data | src/connection/connection_support.cpp | 1526 |
css_send_two_data | src/connection/connection_support.cpp | 1578 |
css_send_error | src/connection/connection_support.cpp | 1652 |
css_net_send | src/connection/connection_support.cpp | 1057 |
css_net_recv | src/connection/connection_support.cpp | 544 |
css_read_remaining_bytes | src/connection/connection_support.cpp | 501 |
cubconn::master::connector::run | src/connection/master_connector.cpp | 160 |
cubconn::connection::worker (class) | src/connection/connection_worker.hpp | 52 |
cubconn::connection::pool (class) | src/connection/connection_pool.hpp | 39 |
net_client_init | src/communication/network_cl.c | 3657 |
net_client_request_internal | src/communication/network_cl.c | 495 |
net_client_request | src/communication/network_cl.c | 587 |
net_client_request_with_callback | src/communication/network_cl.c | 1153 |
client_capabilities | src/communication/network_cl.c | 235 |
check_server_capabilities | src/communication/network_cl.c | 259 |
locator_force | src/communication/network_interface_cl.c | 697 |
qmgr_execute_query | src/communication/network_interface_cl.c | 6916 |
Cross-check Notes
Relationship to cubrid-pl-javasp.md
The PL family (JavaSP and PL/CSQL) ships its own Unix-domain socket
between cub_server and cub_pl, with a separate wire protocol —
not the CSS framing described here. The PL wire is simpler (one
Header with a session id and a RequestCode, then a packed body)
because the participants are fixed (one cub_server, one cub_pl)
and the message kinds are bounded (SP_CODE_INVOKE, SP_CODE_RESULT,
SP_CODE_ERROR, SP_CODE_INTERNAL_JDBC, …). It does not use
NET_HEADER and does not go through net_Requests[]; the dispatch
on the Java side is a switch in ExecuteThread.run().
There is one place where the two protocols intersect: when a JavaSP
issues a SQL query through the server-side JDBC driver
(CUBRIDServerSideConnection), the request travels back to the
originating server worker thread on the same cub_pl-to-cub_server
PL socket (via METHOD_CALLBACK_* codes), not through the regular
client/server NRP described in this document. The PL doc is
authoritative on that callback path; this doc is authoritative on
the original client request that triggered the SP invocation
(NET_SERVER_PL_CALL, opcode handler spl_call).
Drifts and unclarified areas
- Two physically distinct registration paths to the master. The legacy path is css_connect_to_master_server in connection_sr.c (TCP + Unix-domain handoff for non-Windows, or SERVER_REQUEST_NEW with the server opening its own port for Windows). The modern path is cubconn::master::connector::run in master_connector.cpp (active in the current css_init flow). Both compile in the current source. The legacy function is referenced by the symbol table but css_init does not call it on the active path; it is preserved for the standalone/non-pool builds and possibly older build configurations. A reader chasing “where is the listening socket bound” should focus on master_connector.cpp first.
- net_Requests is fixed-size at compile time. The array is sized to NET_SERVER_REQUEST_END, an enum value derived from the X-macro. Adding an opcode requires recompiling everything client-side and server-side; there is no runtime registration mechanism. This is a deliberate consequence of the manual-stub choice — but it also means a hot-deployed extension cannot introduce a new RPC.
- OUT_TRANSACTION is rarely set. The dispatcher honours the bit — it clears conn->in_transaction after the handler returns — but inspection of net_server_init shows only NET_SERVER_TM_SERVER_COMMIT and NET_SERVER_TM_SERVER_ABORT (and a couple of others) actually set it. Most read-only RPCs do not advertise transaction transitions via the bitmask; instead the connection's transaction state is managed inside the handler.
- Endianness check is one-way. get_endian_type () is defined in network.h but is not called as part of the standard handshake in current source. The protocol implicitly assumes both endpoints use the same endianness (the wire format is htonl/ntohl big-endian but the peer's "platform" model is identified only by the client_bit_platform field, which encodes 32/64, not endianness). All currently supported targets (Linux x86_64, Windows x86_64) are little-endian, so this has not been a practical problem.
- __gv_cvar is the indirection that lets stubs work in both CS_MODE and SA_MODE. When this doc walks "the client stub packs args, sends, reads response", in SA_MODE the same call short-circuits through an in-process queue without touching a socket. Readers tracing net_client_request in a debugger should set a breakpoint inside __gv_cvar.css_send_req_to_server to see which mode is active.
- The CSS framer's version and host_id fields are reserved but unused. They are zeroed by css_set_net_header and ignored by the receiver. They exist for future protocol versioning that never landed; in current code, version skew is detected only at server_ping_with_handshake time via rel_get_net_compatible, not per-packet.
Open Questions
- TLS / encryption. The current source has no TLS termination for the client/server channel. The broker (cas) supports SSL for the broker-to-client hop, but cas to cub_server is plain. Is there a roadmap for end-to-end TLS, and would it sit at the css_net_send layer or higher?
- Compression. Large LC_COPYAREA payloads and query result pages are sent uncompressed. Some peer engines compress at the framer (MySQL's classic protocol, for example, negotiates the CLIENT_COMPRESS capability at handshake). Is there an investigation into page-buffer-aware compression for CUBRID's NRP path?
- Versioning at the opcode level. A new opcode added to enum net_server_request is backward-compatible (an old client simply will not request it) but a changed argument layout in an existing opcode silently breaks older clients. The release-string compatibility check at handshake time is coarse-grained; is there a per-opcode wire-version registry that I missed?
- function_code is short (16-bit). With ~150 opcodes today, the limit is far away, but the field is sized for a future where modules outside src/ register their own RPCs (loaddb, CDC, flashback already added blocks). Is there an opcode-namespace plan, or will the table simply keep growing flat?
- Worker pool tunables. PRM_ID_TASK_WORKER, PRM_ID_CSS_MAX_CONNECTION_WORKER, and PRM_ID_CSS_MIN_CONNECTION_WORKER together gate concurrency. Their interaction with epoll's edge-triggered mode and TBB's queue is not documented in code; production deployments likely tune these empirically. A capacity-planning doc would be useful.
- The css_internal_request_handler global indirection. The handler pointer is installed once at boot via css_initialize_server_interfaces; there is no facility to swap it (e.g. for hot-patching, telemetry hooks, A/B traffic shadowing). Is this an intentional rigidity or an artefact of single-tenant deployment?
Sources
- src/connection/ — CSS framing, connection lifecycle, worker pool
- src/communication/ — NRP table, dispatch, per-call handlers and stubs
- src/connection/AGENTS.md — module overview at the connection level
- src/communication/AGENTS.md — module overview at the protocol level
- references/cubrid/CLAUDE.md — top-level CUBRID engine structure