CUBRID Network Protocol — Connection Accept, NRP Dispatch, and Server-Side Request Handlers

A relational engine speaks the network in two layers, and conflating them is the most common source of bugs in DBMS RPC code. The lower layer is the wire framer: a length-prefixed packet header that delimits messages on a stream socket, so the receiver knows how many bytes belong to the current request. The upper layer is the call dispatch: an opcode that selects which server-side function will consume the body and produce the reply. Database Internals (Petrov) and the classic RPC paper (Birrell & Nelson, “Implementing Remote Procedure Calls”, ACM TOCS 1984) give the model; every production DBMS — PostgreSQL, MySQL, Oracle, CUBRID — implements a variant of the same two layers.

Three independent design choices shape the resulting protocol and frame the rest of this document:

  1. Custom binary versus general RPC. A DBMS could use gRPC, Thrift, or even REST/JSON for its client/server channel. The reason all major engines reject this in favour of a custom binary protocol is twofold. First, the “values” being shipped — DB_VALUE in CUBRID, Datum in PostgreSQL, Field in MySQL — are tagged unions whose on-disk representation already exists; a generic serialiser would require translating to a portable schema (Protobuf, Thrift IDL) and back. Second, the throughput-critical paths are inner-loop fetches and bulk inserts where every nanosecond of marshalling cost is multiplied by the row count. CUBRID’s or_pack_value writes a DB_VALUE directly into the wire buffer in the same byte layout used by the heap manager, eliminating the intermediate copy.

  2. Length-prefixed framing versus message-typed framing. With length-prefixed framing, every message starts with a fixed-size header that contains, at minimum, the body length. With message-typed framing, every message starts with a single-byte tag that selects a parser; the parser then reads a length internally. PostgreSQL’s FE/BE protocol uses the latter (the message-type byte is the first thing on the wire); MySQL classic uses the former (a 4-byte (length, sequence) prefix on every packet); CUBRID also uses the former (a NET_HEADER struct with buffer_size as the body length). Length-prefixed framing wins on symmetric reception — the receive loop is one piece of code that never branches on message type — but loses when message kinds have radically different shapes (a handshake is unrelated to a row event). CUBRID navigates this by encoding all variation inside the body.

  3. Stateless dispatch table versus generated stubs. Some engines use code generation: an IDL describes each call, a compiler emits client stub and server skeleton in C/C++, the linker stitches them into the binary. This gives type safety but adds a build step and couples client and server compilations. CUBRID takes the manual stub path: every server entry point has a hand-written network_interface_sr.cpp::s<name> handler that unpacks arguments, and a matching network_interface_cl.c::<name> client stub that packs them. The two are kept in sync by convention; the linkage is the NET_SERVER_* opcode. This trades static typing for a single choke-point — network.h — where every new RPC is declared, and makes hot-patching individual handlers trivial.

After these three choices are named, the rest of the CUBRID network protocol is a direct consequence of taking the binary, length-prefixed, manual-stub corner of the design space.

Beneath the textbook framing, every major client/server DBMS ships a similar handful of patterns. They are not in the original RPC papers; they are the engineering vocabulary that lives between the abstract protocol and the source.

The opcode space is a single enum with one member per server entry point. PostgreSQL’s protocol message codes (src/include/libpq/protocol.h), MySQL’s enum_server_command (include/my_command.h), and CUBRID’s enum net_server_request (src/communication/network.h) are all the same idea: every new feature that adds a server-side function adds exactly one opcode here, and the opcode value is part of the wire-compatibility contract. Extending the enum at the end is backward-compatible; reordering or removing values breaks every older client.

Once the opcode is read, the server selects a handler by indexing into a function-pointer array. PostgreSQL’s PostgresMain switch statement is the closest analog (the dispatch is hand-written rather than a table); MySQL’s do_command indexes a switch on COM_* codes; CUBRID’s net_Requests[NET_SERVER_REQUEST_END] is the canonical table form — static struct net_request net_Requests[] in network_sr.c is filled at startup by net_server_init() with one row per opcode, each carrying a function pointer plus an attribute bitmask (CHECK_DB_MODIFICATION, CHECK_AUTHORIZATION, IN_TRANSACTION, …). The bitmask makes side-conditions — “this RPC requires DBA privilege”, “this RPC implies an open transaction” — declarative rather than scattered inside each handler.

Symmetric pack/unpack helpers shared by client and server

The marshalling code is independent of the calling direction. or_pack_int writes a 4-byte big-endian integer into a buffer and returns the new pointer; or_unpack_int reads one and returns the new pointer. The caller threads the pointer through a sequence of calls, one per field, and never touches an offset directly. The same header is included by client and server, so a client stub and its matching server handler form a mirrored pair: the client’s or_pack_X sequence is exactly the server’s or_unpack_X sequence in the same order. PostgreSQL’s pq_send* / pq_get* and MySQL’s net_store_* / net_field_length_ll are the same idiom.
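
A minimal sketch of that mirroring, for a hypothetical RPC that ships two ints and an OID (the RPC and the helper names are invented for illustration; the or_pack_* / or_unpack_* calls are the real primitives):

// Sketch, not CUBRID source: a mirrored stub/handler pair for a
// hypothetical RPC taking (num_objects, lock_mode, class_oid).
// Client side, in the stub:
static char *
pack_args (char *ptr, int num_objects, int lock_mode, const OID *class_oid)
{
  ptr = or_pack_int (ptr, num_objects);   // field 1
  ptr = or_pack_int (ptr, lock_mode);     // field 2
  ptr = or_pack_oid (ptr, class_oid);     // field 3
  return ptr;                             // ptr - start == packed size
}

// Server side, in the handler: same calls, same order.
static char *
unpack_args (char *ptr, int *num_objects, int *lock_mode, OID *class_oid)
{
  ptr = or_unpack_int (ptr, num_objects);
  ptr = or_unpack_int (ptr, lock_mode);
  ptr = or_unpack_oid (ptr, class_oid);
  return ptr;
}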

Per-call request id (RID) for response demultiplexing

Because a single connection may have multiple in-flight requests (server-initiated callbacks during query execution, asynchronous cancel), the response carries the same identifier as the request so the client can match them. CUBRID’s unsigned short rid lives in the NET_HEADER; the server writes it on every reply, and the client matches it against pending request_queue / data_queue entries. PostgreSQL sidesteps this with strict request/response ordering on a single connection; MySQL uses a 1-byte sequence number. CUBRID’s RID is a 16-bit per-connection counter, so concurrent callbacks inside one query do not collide.
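
A sketch of the matching logic, assuming a flat in-flight table (illustrative only; CUBRID’s real client keeps linked queues per connection):

#include <arpa/inet.h>
#include <stdbool.h>
#include <stdlib.h>

// Sketch, not CUBRID source: reply demultiplexing by RID.
#define MAX_IN_FLIGHT 16

struct pending_reply
{
  unsigned short rid;           // copied from the request header
  char *buffer;                 // filled in when the reply arrives
  int size;
  bool done;
};

static struct pending_reply pending[MAX_IN_FLIGHT];

static void
deliver_reply (const NET_HEADER *hdr, char *body, int body_size)
{
  unsigned short rid = (unsigned short) ntohl (hdr->request_id);

  for (int i = 0; i < MAX_IN_FLIGHT; i++)
    {
      if (!pending[i].done && pending[i].rid == rid)
        {
          pending[i].buffer = body;     // hand the body to the waiter
          pending[i].size = body_size;
          pending[i].done = true;
          return;
        }
    }
  free (body);                  // no waiter: reply for a cancelled request
}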

The server cannot dispatch on the I/O thread because the handler may block on locks, page reads, or sub-RPCs (PL invocation). The standard shape is a worker pool: the I/O thread reads the request, packages it as a task, hands it to a queue; a worker thread pulls the task and calls the handler. CUBRID uses two pools — the cubconn::connection::worker for connection I/O (epoll-based) and the transaction worker pool from cubthread for handler execution — with the connection worker pushing handler invocations into the transaction pool via push_task_into_worker_pool.
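
A pthread sketch of that handoff (CUBRID’s actual pools are cubthread workers fed by a TBB queue; dispatch_request here is a hypothetical stand-in for net_server_request):

#include <pthread.h>
#include <stdlib.h>

// Sketch, not CUBRID source: I/O thread → worker pool handoff.
extern void dispatch_request (int opcode, char *body, int size);  // hypothetical

struct task
{
  int opcode;
  char *body;
  int body_size;
  struct task *next;
};

static struct task *queue_head, *queue_tail;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_ready = PTHREAD_COND_INITIALIZER;

// Called by the I/O (connection) thread once a request body is complete.
static void
push_task (struct task *t)
{
  pthread_mutex_lock (&queue_lock);
  t->next = NULL;
  if (queue_tail)
    queue_tail->next = t;
  else
    queue_head = t;
  queue_tail = t;
  pthread_cond_signal (&queue_ready);
  pthread_mutex_unlock (&queue_lock);
}

// Each transaction worker loops here; the handler may block freely.
static void *
worker_main (void *arg)
{
  (void) arg;
  for (;;)
    {
      pthread_mutex_lock (&queue_lock);
      while (queue_head == NULL)
        pthread_cond_wait (&queue_ready, &queue_lock);
      struct task *t = queue_head;
      queue_head = t->next;
      if (queue_head == NULL)
        queue_tail = NULL;
      pthread_mutex_unlock (&queue_lock);

      dispatch_request (t->opcode, t->body, t->body_size);
      free (t->body);
      free (t);
    }
  return NULL;
}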

| Theoretical concept | CUBRID name |
| --- | --- |
| Wire framer header | NET_HEADER struct (9 fields, fixed size) in connection_defs.h |
| Length-prefixed body length | header.buffer_size (htonl/ntohl on the wire) |
| Packet kind tag | header.type, one of {COMMAND_TYPE, DATA_TYPE, ABORT_TYPE, CLOSE_TYPE, ERROR_TYPE} |
| Per-call request id | header.request_id (16-bit, allocated by css_get_request_id) |
| Server function code (RPC opcode) | header.function_code (16-bit) + enum net_server_request |
| Static dispatch table | static struct net_request net_Requests[NET_SERVER_REQUEST_END] in network_sr.c |
| Action attribute bitmask | enum net_req_act { CHECK_DB_MODIFICATION, CHECK_AUTHORIZATION, SET_DIAGNOSTICS_INFO, IN_TRANSACTION, OUT_TRANSACTION } |
| Pack/unpack primitives | or_pack_int / or_unpack_int / or_pack_oid / or_pack_value / or_unpack_value |
| Per-call client stub | net_client_request* family in network_cl.c |
| Per-call server handler | s<module>_<verb> (e.g. slocator_force, sqmgr_execute_query) in network_interface_sr.cpp |
| Connection acceptor | cubconn::master::connector in master_connector.cpp (Unix-domain switch) |
| Connection worker pool | cubconn::connection::pool + cubconn::connection::worker (epoll-based) |
| Handler executor | transaction pool registered via REGISTER_WORKERPOOL, dispatched by css_internal_request_handler |
| Initial handshake | NET_SERVER_PING_WITH_HANDSHAKE = 999 (out-of-band opcode) |
| Capability bits | NET_CAP_* macros in network.h:304-311 |
| Client/server endianness check | get_endian_type () inline in network.h:337-343 |

The network module has five moving parts: CSS framing (the on-the-wire header and packet types), connection accept (how a new client gets a worker), the worker pool (how requests are read off the socket and dispatched), NRP dispatch (how an opcode becomes a handler call), and the packer/unpacker (how arguments get on and off the wire). We walk them in that order, then trace a SELECT query end-to-end.

flowchart LR
  subgraph CLIENT["Client process<br/>(libcubridcs / CAS / csql)"]
    APP["Application<br/>or CAS worker"]
    CL_INTF["network_interface_cl.c<br/>per-call stub<br/>(qmgr_execute_query, ...)"]
    CL_NET["network_cl.c<br/>net_client_request*"]
    CL_CSS["connection_cl.cpp<br/>· connection_support.cpp<br/>(client-side CSS framing)"]
  end
  subgraph SERVER["Server process<br/>(cub_server)"]
    SR_CONN["connection_sr.c<br/>connection lifecycle"]
    SR_WORKER["connection_worker.cpp<br/>epoll workers<br/>(cubconn::connection)"]
    SR_DISP["network_sr.c<br/>net_Requests[opcode]<br/>dispatch table"]
    SR_HANDL["network_interface_sr.cpp<br/>per-call handler<br/>(sqmgr_execute_query, ...)"]
    SR_TRAN["transaction pool<br/>(cubthread workers)"]
  end
  subgraph MASTER["cub_master"]
    MASTER_LSN["TCP listening socket<br/>port = PRM_ID_TCP_PORT_ID"]
  end

  APP --> CL_INTF --> CL_NET --> CL_CSS
  CL_CSS -->|"NET_HEADER + body"| MASTER_LSN
  MASTER_LSN -->|"Unix-domain handoff"| SR_CONN
  SR_CONN --> SR_WORKER
  SR_WORKER --> SR_DISP
  SR_DISP --> SR_TRAN
  SR_TRAN --> SR_HANDL
  SR_HANDL -->|"reply NET_HEADER + body"| CL_CSS

Every packet on the CUBRID client/server wire begins with a fixed-size NET_HEADER (9 fields, every multi-byte field converted to network byte order with htonl/htons). The struct is defined once and shared by both sides:

// packet_header — connection_defs.h
typedef struct packet_header NET_HEADER;
struct packet_header
{
  int type;                 // COMMAND_TYPE | DATA_TYPE | ABORT_TYPE | CLOSE_TYPE | ERROR_TYPE
  int version;              // unused in current code; reserved
  int host_id;              // unused; reserved
  int transaction_id;       // server-assigned tran index for this request
  int request_id;           // per-connection RID for response demux
  int db_error;             // last error code piggy-backed
  short function_code;      // NET_SERVER_* opcode (when type == COMMAND_TYPE)
  unsigned short flags;     // NET_HEADER_FLAG_METHOD_MODE | NET_HEADER_FLAG_INVALIDATE_SNAPSHOT
  int buffer_size;          // length of the body that follows
};

The five type values are not arbitrary; they encode the kind of packet so the receiver can route it without parsing the body:

// css_packet_type — connection_defs.h:185-192
enum css_packet_type
{
  COMMAND_TYPE = 1,         // request from client to server (carries an opcode)
  DATA_TYPE = 2,            // payload data (request args or response data)
  ABORT_TYPE = 3,           // server tells client "your last request was aborted"
  CLOSE_TYPE = 4,           // half-close; this connection is going away
  ERROR_TYPE = 5            // server-side error, body is a packed error area
};

A single client request from end to end can produce multiple packets. The minimum is a COMMAND_TYPE header followed (when arg_size > 0) by a DATA_TYPE header plus body. The reply path uses DATA_TYPE for the small fixed reply, optionally followed by more DATA_TYPE packets for variable-size payloads, and ERROR_TYPE if something went wrong. Every packet’s header carries the same request_id so the client can correlate.

css_set_net_header() is the canonical writer:

// css_set_net_header — connection_support.cpp:1326
void
css_set_net_header (NET_HEADER *header_p, int type, short function_code, int request_id, int buffer_size,
                    int transaction_id, int invalidate_snapshot, int db_error)
{
  unsigned short flags = 0;

  header_p->type = htonl (type);
  header_p->function_code = htons (function_code);
  header_p->request_id = htonl (request_id);
  header_p->buffer_size = htonl (buffer_size);
  header_p->transaction_id = htonl (transaction_id);
  header_p->db_error = htonl (db_error);

  if (invalidate_snapshot)
    flags |= NET_HEADER_FLAG_INVALIDATE_SNAPSHOT;
#if defined (CS_MODE)
  if (tran_is_in_libcas ())
    flags |= NET_HEADER_FLAG_METHOD_MODE;
#endif
  header_p->flags = htons (flags);
}
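
The read side is symmetric. A hedged sketch, assuming a blocking read_exact helper (hypothetical; CUBRID’s real readers are css_read_header and css_net_recv), shows why the receive loop never branches on message kind:

#include <arpa/inet.h>
#include <stdlib.h>
#include <unistd.h>

// Sketch, not CUBRID source.  read_exact() is a hypothetical helper
// that loops over read() until len bytes arrive or the peer closes.
static int
read_exact (int fd, char *buf, int len)
{
  while (len > 0)
    {
      ssize_t n = read (fd, buf, len);
      if (n <= 0)
        return -1;              // error or orderly close
      buf += n;
      len -= (int) n;
    }
  return 0;
}

// Assumes the 32-byte header layout shown above ships without padding.
static int
recv_one_packet (int fd, NET_HEADER *hdr, char **body, int *body_size)
{
  if (read_exact (fd, (char *) hdr, sizeof (NET_HEADER)) < 0)
    return -1;

  *body_size = (int) ntohl (hdr->buffer_size);  // undo css_set_net_header
  *body = NULL;
  if (*body_size > 0)
    {
      *body = (char *) malloc (*body_size);
      if (*body == NULL || read_exact (fd, *body, *body_size) < 0)
        return -1;
    }
  // the caller routes on ntohl (hdr->type) and ntohl (hdr->request_id);
  // the framer itself never looks inside the body
  return 0;
}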

A complete request layout:

graph LR
  subgraph REQ["Client request (two packets)"]
    direction TB
    H1["NET_HEADER<br/>type=COMMAND_TYPE<br/>function_code=NET_SERVER_QM_QUERY_EXECUTE<br/>request_id=R<br/>buffer_size=0"]
    H2["NET_HEADER<br/>type=DATA_TYPE<br/>request_id=R<br/>buffer_size=N"]
    BODY["packed args<br/>(or_pack_∗ sequence)<br/>N bytes"]
    H1 --> H2 --> BODY
  end
  subgraph RESP["Server reply (one or more packets)"]
    direction TB
    H3["NET_HEADER<br/>type=DATA_TYPE<br/>request_id=R<br/>buffer_size=M"]
    REPLY["fixed-size reply<br/>(or_pack_int x N)<br/>M bytes"]
    H4["NET_HEADER<br/>type=DATA_TYPE<br/>request_id=R<br/>buffer_size=K"]
    DATA2["bulk data<br/>(packed list-id, page, plan)<br/>K bytes"]
    H3 --> REPLY --> H4 --> DATA2
  end
  REQ --> RESP

Connection accept — cub_master to cub_server handoff

CUBRID has an unusual two-process accept architecture. A separate cub_master process owns the public TCP listening socket; the actual database server (cub_server) does not bind a public port. When a client connects, cub_master greets the connection, decides which database the client wants, and hands the file descriptor to the corresponding cub_server via a Unix-domain socket.

The server side of this protocol lives in cubconn::master::connector (master_connector.cpp). At server boot, net_server_start calls css_init, which constructs a master::connector and calls connect → prepare_handshake → execute:

// connector::run — master_connector.cpp:160
bool
connector::run (int port, std::string &server_name) noexcept
{
  m_master_port = port;
  m_server_name = server_name;

  if (!this->connect (port))                    // open TCP to cub_master
    return false;
  if (!this->prepare_handshake (server_name))   // tell master "I serve <name>"
    return false;
  if (!this->execute ())                        // run the epoll-based fwd loop
    return false;

  return true;
}

connect() opens a TCP socket to cub_master on the well-known port (PRM_ID_TCP_PORT_ID); prepare_handshake() sends a server-side registration packet that includes the database name and the server’s PID; execute() enters an epoll loop that handles two streams:

  1. Master-side reception (handle_master_reception) — cub_master forwards each new client connection by sending a Unix-domain socket message that carries the new client’s file descriptor. The server receives the fd, allocates a fresh CSS_CONN_ENTRY from the pool, and dispatches the connection to the connection::worker pool.

  2. Worker statistics / shutdown control — secondary control messages from the master flow on the same channel.

The master’s reply to a new server registration is one of enum css_master_response (connection_defs.h):

enum css_master_response
{
  SERVER_ALREADY_EXISTS = 0,
  SERVER_REQUEST_ACCEPTED = 1,      // legacy Unix-domain handoff
  DRIVER_NOT_FOUND = 2,
  SERVER_REQUEST_ACCEPTED_NEW = 3   // Windows/new-style: server opens its own port
};

For Linux/Unix the master always uses SERVER_REQUEST_ACCEPTED plus Unix-domain fd passing; for Windows the master falls back to SERVER_REQUEST_ACCEPTED_NEW because Windows lacks Unix-domain sockets, and instead hands back a TCP port number that the server owns directly (css_open_server_connection_socket in connection_sr.c).
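
The fd handoff itself rides the standard SCM_RIGHTS ancillary-data mechanism. A self-contained sketch of the sending side (illustrative, not CUBRID source; the real code lives on the cub_master side):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

// Sketch, not CUBRID source: passing an accepted client fd across the
// cub_master → cub_server Unix-domain channel with SCM_RIGHTS.
static int
send_fd (int channel, int fd_to_pass)
{
  char byte = 0;
  struct iovec iov = { &byte, 1 };      // at least one data byte must ride along
  union
  {
    struct cmsghdr hdr;
    char buf[CMSG_SPACE (sizeof (int))];
  } u;
  struct msghdr msg;

  memset (&msg, 0, sizeof (msg));
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = u.buf;
  msg.msg_controllen = sizeof (u.buf);

  struct cmsghdr *cmsg = CMSG_FIRSTHDR (&msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_RIGHTS;         // "these ancillary bytes are fds"
  cmsg->cmsg_len = CMSG_LEN (sizeof (int));
  memcpy (CMSG_DATA (cmsg), &fd_to_pass, sizeof (int));

  return sendmsg (channel, &msg, 0) < 0 ? -1 : 0;
}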

The legacy server-only path is css_connect_to_master_server (master_port_id, server_name, name_length) in connection_sr.c:1066. This is the function that CUBRID’s older in-process style used; the modern master::connector is the active path.

sequenceDiagram
  participant CL as Client (CAS / csql)
  participant MA as cub_master
  participant SR as cub_server (this DB)
  participant WK as connection::worker

  Note over MA: bound on PRM_ID_TCP_PORT_ID
  CL->>MA: TCP connect
  CL->>MA: DATA_REQUEST + db_name (CSS framed)
  MA->>SR: Unix-domain msg (fd + db_name)
  SR->>SR: claim_context() / css_make_conn(fd)
  SR->>WK: dispatch(conn) → epoll register
  WK->>CL: ready (handshake reply via NET_SERVER_PING_WITH_HANDSHAKE = 999)
  CL->>WK: NET_SERVER_BO_REGISTER_CLIENT (real RPC starts)
  WK-->>CL: reply

NET_SERVER_PING_WITH_HANDSHAKE — the out-of-band opcode

The first request on every new connection is special: opcode 999 (NET_SERVER_PING_WITH_HANDSHAKE). This is not part of the regular NET_SERVER_REQUEST_LIST enum range; it is reserved at the constant 999 so its numeric value is preserved across version bumps. The handler server_ping_with_handshake in network_interface_sr.cpp:563 performs:

  1. Reads client’s release string, capability flags, bit-platform (32 vs 64), client type, and host name.
  2. Checks compatibility via rel_get_net_compatible(client, server).
  3. Validates capability bits via check_client_capabilities.
  4. Reserves a connection slot via css_increment_num_conn(client_type).
  5. Replies with the server’s release string, capability bits, the server’s host name, and a REL_COMPATIBILITY verdict.

Because this opcode is the gate to all subsequent dispatch — every later request can assume the client and server are version-compatible — the dispatch in net_server_request short-circuits it before the table lookup:

// net_server_request — network_sr.c:791
if (request == NET_SERVER_PING_WITH_HANDSHAKE)
  {
    status = server_ping_with_handshake (thread_p, rid, buffer, size);
    goto end;
  }
else if (request == NET_SERVER_SHUTDOWN)
  {
    er_set (ER_WARNING_SEVERITY, ARG_FILE_LINE, ER_NET_SERVER_SHUTDOWN, 0);
    status = CSS_UNPLANNED_SHUTDOWN;
    goto end;
  }

if (request <= NET_SERVER_REQUEST_START || request >= NET_SERVER_REQUEST_END)
  {
    er_set (ER_WARNING_SEVERITY, ARG_FILE_LINE, ER_NET_UNKNOWN_SERVER_REQ, 0);
    return_error_to_client (thread_p, rid);
    goto end;
  }

Capability bits encoded in the handshake (network.h:304-311):

#define NET_CAP_BACKWARD_COMPATIBLE 0x80000000
#define NET_CAP_FORWARD_COMPATIBLE 0x40000000
#define NET_CAP_INTERRUPT_ENABLED 0x00800000
#define NET_CAP_UPDATE_DISABLED 0x00008000
#define NET_CAP_REMOTE_DISABLED 0x00000080
#define NET_CAP_HA_REPL_DELAY 0x00000008
#define NET_CAP_HA_REPLICA 0x00000004
#define NET_CAP_HA_IGNORE_REPL_DELAY 0x00000002

A replica-only broker that connects to a non-replica server fails the handshake with ER_NET_HS_HA_REPLICA_ONLY; a read-only client connecting to a primary triggers an ER_NET_HS_INCOMPAT_RW_MODE warning.
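
A sketch of the shape of those checks (illustrative only; the real logic lives in check_client_capabilities and rel_get_net_compatible):

// Sketch, not CUBRID source: reconciling capability bits at handshake.
static int
reconcile_capabilities (unsigned int client_cap, unsigned int server_cap)
{
  // a replica-only client must land on a replica server: hard failure
  if ((client_cap & NET_CAP_HA_REPLICA) && !(server_cap & NET_CAP_HA_REPLICA))
    return ER_NET_HS_HA_REPLICA_ONLY;

  // a read-only client on a writable server is tolerated with a warning
  if ((client_cap & NET_CAP_UPDATE_DISABLED) && !(server_cap & NET_CAP_UPDATE_DISABLED))
    {
      // warn: ER_NET_HS_INCOMPAT_RW_MODE
    }

  return NO_ERROR;
}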

Worker pool — epoll-driven reception, transaction pool dispatch

After the handshake, the connection is owned by an instance of cubconn::connection::worker (declared in connection_worker.hpp). The worker is not the thread that runs the SQL request handler — that responsibility is split:

  • Connection worker (epoll-based, one per N connections) reads CSS-framed packets off the socket, assembles complete request bodies, and enqueues a task to be executed by the transaction worker pool. The number of connection workers ranges between PRM_ID_CSS_MIN_CONNECTION_WORKER and PRM_ID_CSS_MAX_CONNECTION_WORKER.

  • Transaction worker pool is registered globally:

    // server_support.c:548
    REGISTER_WORKERPOOL (transaction, []() { return (int) prm_get_integer_value (PRM_ID_TASK_WORKER); });

    Each request is wrapped in a task and pushed into this pool; the task runs css_internal_request_handler, which is the bridge from the connection layer to the dispatch table.

The connection worker uses a multi-producer/single-consumer TBB queue per worker plus an eventfd for cross-thread wakeup (connection_worker.hpp:236-241). The two queue types separate hot-path messages (IMMEDIATE) from defer-able control messages (LAZY) so a flood of new clients does not delay an in-flight SEND_PACKET.
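
The eventfd half of that design is worth a sketch, because it is what lets another thread’s enqueue look like ordinary socket readiness to the epoll loop (sketch only; queue draining is elided):

#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

// Sketch, not CUBRID source: the eventfd wakeup idiom.  The eventfd is
// registered in the same epoll set as the client sockets, so a
// cross-thread post is just one more readable fd to the worker loop.

// Producer side: another thread queued a message for this worker.
static void
wake_worker (int event_fd)
{
  uint64_t one = 1;
  (void) write (event_fd, &one, sizeof (one));    // increments the counter
}

// Consumer side: inside the epoll loop, drain the counter, then drain
// the IMMEDIATE and LAZY message queues mentioned above (elided).
static void
on_event_fd_readable (int event_fd)
{
  uint64_t count;
  (void) read (event_fd, &count, sizeof (count)); // resets the counter to 0
}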

Once the connection worker has a complete request, it eventually calls into css_internal_request_handler:

// css_internal_request_handler — server_support.c:450
static int
css_internal_request_handler (THREAD_ENTRY & thread_ref, CSS_CONN_ENTRY & conn_ref)
{
  unsigned short rid;
  unsigned int eid;
  int request, rc, size = 0;
  char *buffer = NULL;
  int local_tran_index = thread_ref.tran_index;
  int status = CSS_UNPLANNED_SHUTDOWN;

  rc = css_receive_request (&conn_ref, &rid, &request, &size);
  if (rc == NO_ERRORS)
    {
      thread_ref.tran_index = conn_ref.get_tran_index ();
      pthread_mutex_unlock (&thread_ref.tran_index_lock);

      if (size)
        {
          rc = css_receive_data (&conn_ref, rid, &buffer, &size, -1);
          if (rc != NO_ERRORS)
            return status;
        }

      conn_ref.db_error = 0;
      eid = css_return_eid_from_conn (&conn_ref, rid);
      css_set_thread_info (&thread_ref, conn_ref.client_id, eid, conn_ref.get_tran_index (), request);

      // 3. Call server_request() function
      status = css_Server_request_handler (&thread_ref, eid, request, size, buffer);

      css_set_thread_info (&thread_ref, -1, 0, local_tran_index, -1);
    }
  ...
}

The function pointer css_Server_request_handler was registered at boot via css_initialize_server_interfaces (net_server_request) (server_support.c:516). This decoupling is deliberate: connection/ code owns the framing and worker management; communication/ code owns the dispatch. Either could be replaced without touching the other.

The dispatch table is a flat array of one row per opcode:

// network_sr.c — top of file
static struct net_request net_Requests[NET_SERVER_REQUEST_END];

The net_request struct itself is intentionally small (network_request_def.hpp):

typedef void (*net_server_func) (THREAD_ENTRY *thrd, unsigned int rid, char *request, int reqlen);

struct net_request
{
  int action_attribute;                 // bitmask of net_req_act
  net_server_func processing_function;
  net_request () = default;
};

Every server entry point is registered exactly once, in net_server_init(). A few representative rows:

// net_server_init — network_sr.c:74
req_p = &net_Requests[NET_SERVER_PING];
req_p->processing_function = server_ping;

req_p = &net_Requests[NET_SERVER_BO_REGISTER_CLIENT];
req_p->processing_function = sboot_register_client;

req_p = &net_Requests[NET_SERVER_LC_FORCE];
req_p->action_attribute = (CHECK_DB_MODIFICATION | SET_DIAGNOSTICS_INFO | IN_TRANSACTION);
req_p->processing_function = slocator_force;

req_p = &net_Requests[NET_SERVER_QM_QUERY_EXECUTE];
req_p->action_attribute = (SET_DIAGNOSTICS_INFO | IN_TRANSACTION);
req_p->processing_function = sqmgr_execute_query;

req_p = &net_Requests[NET_SERVER_TM_SERVER_COMMIT];
req_p->action_attribute = (CHECK_DB_MODIFICATION | SET_DIAGNOSTICS_INFO | OUT_TRANSACTION);
req_p->processing_function = stran_server_commit;

Then net_server_request is the dispatcher proper. After the out-of-band cases (handshake, shutdown) and bounds check, it consults the row and applies the side conditions before calling the handler:

// net_server_request — network_sr.c:791
if (net_Requests[request].action_attribute & CHECK_DB_MODIFICATION)
  {
    bool check = true;
    if (request == NET_SERVER_TM_SERVER_COMMIT)
      {
        if (!logtb_has_updated (thread_p))   // commit of a read-only txn doesn't need write check
          check = false;
      }
    if (check)
      {
        CHECK_MODIFICATION_NO_RETURN (thread_p, error_code);
        if (error_code != NO_ERROR)
          {
            return_error_to_client (thread_p, rid);
            css_send_abort_to_client (conn, rid);
            goto end;
          }
      }
  }

if (net_Requests[request].action_attribute & CHECK_AUTHORIZATION)
  {
    if (!logtb_am_i_dba_client (thread_p))
      {
        er_set (ER_ERROR_SEVERITY, ARG_FILE_LINE, ER_AU_DBA_ONLY, 1, "");
        return_error_to_client (thread_p, rid);
        css_send_abort_to_client (conn, rid);
        goto end;
      }
  }

if (net_Requests[request].action_attribute & IN_TRANSACTION)
  conn->in_transaction = true;

// call a request processing function
func = net_Requests[request].processing_function;

thread_p->push_resource_tracks ();
if (conn->invalidate_snapshot != 0)
  logtb_invalidate_snapshot_data (thread_p);

(*func) (thread_p, rid, buffer, size);

thread_p->pop_resource_tracks ();
pgbuf_unfix_all (thread_p);   // defence: don't leak page latches

action_attribute thus encodes orthogonal behaviours that would otherwise have to be re-implemented inside every handler:

| Bit | Meaning |
| --- | --- |
| CHECK_DB_MODIFICATION | The DB must accept writes (rejects on read-only mode, replica, suspended HA log applier) |
| CHECK_AUTHORIZATION | Client must be DBA or owner; rejects with ER_AU_DBA_ONLY otherwise |
| SET_DIAGNOSTICS_INFO | Wraps the call with perfmon timers (PSTAT_*) and a trace-log tap |
| IN_TRANSACTION | Marks the connection as having an open transaction (sets conn->in_transaction) |
| OUT_TRANSACTION | Clears the in-transaction flag at end of call (COMMIT / ABORT) |

Sample handlers — three representative shapes

Shape 1: tiny request, tiny reply. server_ping is the canonical minimum. One int in, one int out:

// server_ping — network_interface_sr.cpp:532
void
server_ping (THREAD_ENTRY *thread_p, unsigned int rid, char *request, int reqlen)
{
  OR_ALIGNED_BUF (OR_INT_SIZE) a_reply;
  char *reply = OR_ALIGNED_BUF_START (a_reply);
  int client_val, server_val;

  or_unpack_int (request, &client_val);
  server_val = 0;
  or_pack_int (reply, server_val);

  css_send_data_to_client (thread_p->conn_entry, rid, reply, OR_INT_SIZE);
}

Shape 2: variable request, mixed-size reply. sqp_get_server_info returns a packed DB_VALUE payload whose size depends on requested info bits:

// sqp_get_server_info — network_interface_sr.cpp:7962 (condensed)
void
sqp_get_server_info (THREAD_ENTRY *thread_p, unsigned int rid, char *request, int reqlen)
{
  OR_ALIGNED_BUF (OR_INT_SIZE + OR_INT_SIZE) a_reply;
  char *reply = OR_ALIGNED_BUF_START (a_reply);
  char *ptr, *buffer = NULL;
  int buffer_length, server_info_bits, success = NO_ERROR;
  DB_VALUE dt_dbval, ts_dbval, lt_dbval;

  ptr = or_unpack_int (request, &server_info_bits);

  buffer_length = 0;
  if (server_info_bits & SI_SYS_DATETIME)
    {
      success = db_sys_date_and_epoch_time (&dt_dbval, &ts_dbval);
      buffer_length += OR_VALUE_ALIGNED_SIZE (&dt_dbval);
      buffer_length += OR_VALUE_ALIGNED_SIZE (&ts_dbval);
    }
  if (server_info_bits & SI_LOCAL_TRANSACTION_ID)
    {
      success = xtran_get_local_transaction_id (thread_p, &lt_dbval);
      buffer_length += OR_VALUE_ALIGNED_SIZE (&lt_dbval);
    }

  buffer = (char *) malloc (buffer_length);
  ptr = buffer;
  if (server_info_bits & SI_SYS_DATETIME)
    {
      ptr = or_pack_value (ptr, &dt_dbval);
      ptr = or_pack_value (ptr, &ts_dbval);
    }
  if (server_info_bits & SI_LOCAL_TRANSACTION_ID)
    ptr = or_pack_value (ptr, &lt_dbval);

  ptr = or_pack_int (reply, buffer_length);
  ptr = or_pack_int (ptr, success);

  css_send_reply_and_data_to_client (thread_p->conn_entry, rid, reply, OR_ALIGNED_BUF_SIZE (a_reply),
                                     buffer, buffer_length, std::move (deleter));
}

The two-stage send — first a small fixed reply that announces the incoming bulk size, then the bulk data itself — is the universal pattern for variable-size responses.
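
On the client, the same pattern reads back in two stages. A fragment-level sketch (not CUBRID source; rc is the request handle obtained when the request was sent):

// Sketch: the client-side mirror of the two-stage reply.
char *ptr, *bulk = NULL;
int bulk_size, success, received = 0;

ptr = or_unpack_int (reply, &bulk_size);        // stage 1: announced size
ptr = or_unpack_int (ptr, &success);

if (success == NO_ERROR && bulk_size > 0)
  {
    // stage 2: the bulk rides a follow-up DATA_TYPE packet keyed by the
    // same RID; the client may pre-queue a buffer of the announced size
    // via css_queue_receive_data_buffer before pulling it
    css_receive_data_from_server (rc, &bulk, &received);
  }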

Shape 3: bulk-data request, multi-stage reply. slocator_force ships a copy area of dirty objects from client to server, then sends back updated descriptors (server may have assigned new OIDs):

// slocator_force — network_interface_sr.cpp:1381 (condensed)
void
slocator_force (THREAD_ENTRY *thread_p, unsigned int rid, char *request, int reqlen)
{
  int num_objs, multi_update_flags, packed_desc_size, content_size, num_ignore_error_list;
  int success, csserror;
  LC_COPYAREA *copy_area = NULL;
  char *packed_desc = NULL, *content_ptr = NULL, *new_content_ptr = NULL;
  char *ptr;
  int ignore_error_list[-ER_LAST_ERROR];

  ptr = or_unpack_int (request, &num_objs);
  ptr = or_unpack_int (ptr, &multi_update_flags);
  ptr = or_unpack_int (ptr, &packed_desc_size);
  ptr = or_unpack_int (ptr, &content_size);
  ptr = or_unpack_int (ptr, &num_ignore_error_list);
  for (int i = 0; i < num_ignore_error_list; i++)
    ptr = or_unpack_int (ptr, &ignore_error_list[i]);

  copy_area = locator_recv_allocate_copyarea (num_objs, &content_ptr, content_size);

  // 1. pull the descriptor block from the client
  csserror = css_receive_data_from_client (thread_p->conn_entry, rid, &packed_desc, &packed_size);
  locator_unpack_copy_area_descriptor (num_objs, copy_area, packed_desc, -1);

  // 2. pull the content block
  if (content_size > 0)
    csserror = css_receive_data_from_client (thread_p->conn_entry, rid, &new_content_ptr, &received_size);

  // 3. run the actual server-side function
  success = xlocator_force (thread_p, copy_area, num_ignore_error_list, ignore_error_list);

  // 4. repack the descriptor (server may have written new OIDs into it)
  locator_pack_copy_area_descriptor (num_objs, copy_area, packed_desc, packed_desc_size);

  // 5. send the small reply + the updated descriptor as two pieces
  ptr = or_pack_int (reply, success);
  ptr = or_pack_int (ptr, packed_desc_size);
  ptr = or_pack_int (ptr, 0);
  css_send_reply_and_2_data_to_client (thread_p->conn_entry, rid, reply, OR_ALIGNED_BUF_SIZE (a_reply),
                                       packed_desc, packed_desc_size, NULL, 0, std::move (deleter));
}

The css_receive_data_from_client calls are inline pulls back over the same connection — the server’s request body did not contain the descriptor or content blob, only their sizes; the bulk arrives in follow-up DATA_TYPE packets keyed by the same RID.

Packer / unpacker — or_pack_* and OR_PACK_*

The marshalling layer is a thin adapter over big-endian byte-by-byte serialisation. or_pack_int advances the buffer pointer by 4:

// from object_representation.h
extern char *or_pack_int (char *ptr, int number);
extern char *or_pack_int64 (char *ptr, INT64 number);
extern char *or_pack_string (char *ptr, const char *string);
extern char *or_pack_oid (char *ptr, const OID *oid);
extern char *or_pack_value (char *buf, DB_VALUE *value); // !! the heavyweight one
extern char *or_unpack_int (char *ptr, int *number);
extern char *or_unpack_string (char *ptr, char **string);
extern char *or_unpack_oid (char *ptr, OID *oid);
extern char *or_unpack_value (const char *buf, DB_VALUE *value);
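
The int pair is small enough to sketch in full: a big-endian store and a pointer advance (a stand-in, not the CUBRID source, assuming the caller satisfies whatever alignment the real functions require):

// Sketch: what the primitive packer pair boils down to.
char *
or_pack_int_sketch (char *ptr, int number)
{
  unsigned int v = (unsigned int) number;
  ptr[0] = (char) (v >> 24);    // big-endian: most significant byte first
  ptr[1] = (char) (v >> 16);
  ptr[2] = (char) (v >> 8);
  ptr[3] = (char) v;
  return ptr + 4;               // OR_INT_SIZE
}

char *
or_unpack_int_sketch (char *ptr, int *number)
{
  const unsigned char *p = (const unsigned char *) ptr;
  *number = (int) (((unsigned int) p[0] << 24) | ((unsigned int) p[1] << 16)
                   | ((unsigned int) p[2] << 8) | (unsigned int) p[3]);
  return ptr + 4;
}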

The pointer-threading idiom is universal in CUBRID stub code: each call’s return value is the next call’s input. There is no offset arithmetic, no memcpy with hand-computed sizes; the packer hides both. This is the same pattern PostgreSQL uses with its StringInfo buffer (though PostgreSQL keeps the offset inside the buffer struct rather than passing a moving pointer).

For DB_VALUE (the universal value type — see dbtype_def.h), or_pack_value writes:

+------------------+---------------------+------------------+
| domain header    | nullness flag       | value bytes      |
| (variable size)  | (1 byte, in domain  | (depends on      |
|                  |  header, encoded    |  domain type)    |
|                  |  via or_packed_     |                  |
|                  |  domain_size)       |                  |
+------------------+---------------------+------------------+

The domain header itself is variable-length: a bit-packed int that encodes the domain type tag (DB_TYPE_INTEGER, DB_TYPE_VARCHAR, …), extended-domain flag, collation, precision, scale, etc. This means the receiver cannot know the value’s byte length until it has parsed the domain header. The trade-off: tiny on the wire for primitive types (a DB_TYPE_INTEGER value packs to roughly 5 bytes including header), but parsing-stateful (the receiver must read the header first to know how to read the body).

Helper macros OR_INT_SIZE = 4, OR_OID_SIZE = 8, OR_VALUE_ALIGNED_SIZE, and OR_ALIGNED_BUF (a stack-buffer-with-alignment macro) appear in every stub to size the per-call argument and reply buffers.

The OR_BUF struct (object_representation.h:1029) is a higher-level abstraction used inside heap and B-tree code — it encapsulates the buffer pointer, end-of-buffer, and overflow flag. Network code generally uses the raw char * pointer threading; OR_BUF is for storage-side packing where overflow checks matter more.

Client stub — net_client_request_* family

The client side mirrors the server, opcode-for-opcode. A request is:

  1. allocate a fixed-size argument buffer using OR_ALIGNED_BUF,
  2. pack the args via or_pack_*,
  3. allocate a fixed-size reply buffer,
  4. call the right net_client_request_* variant,
  5. unpack the reply via or_unpack_*,
  6. translate the result.

The dispatcher is net_client_request_internal (network_cl.c:495):

// net_client_request_internal — network_cl.c:495 (condensed)
static int
net_client_request_internal (int request, char *argbuf, int argsize, char *replybuf, int replysize,
                             char *databuf, int datasize, char *replydata, int replydatasize)
{
  unsigned int rc;
  int size, error = 0;
  char *reply = NULL;

  if (net_Server_name[0] == '\0')       // not connected
    {
      er_set (ER_ERROR_SEVERITY, ARG_FILE_LINE, ER_NET_SERVER_CRASHED, 0);
      return -1;
    }

  rc = __gv_cvar.css_send_req_to_server (net_Server_host, request, argbuf, argsize,
                                         databuf, datasize, replybuf, replysize);
  if (rc == 0)
    return set_server_error (__gv_cvar.css_get_errno ());

  if (replydata != NULL)
    __gv_cvar.css_queue_receive_data_buffer (rc, replydata, replydatasize);

  error = __gv_cvar.css_receive_data_from_server (rc, &reply, &size);
  if (error != NO_ERROR)
    return set_server_error (error);
  error = COMPARE_SIZE_AND_BUFFER (&replysize, size, &replybuf, reply);

  if (replydata != NULL)
    {
      error = __gv_cvar.css_receive_data_from_server (rc, &reply, &size);
      if (error == NO_ERROR)
        error = COMPARE_SIZE_AND_BUFFER (&replydatasize, size, &replydata, reply);
    }

  return error;
}

The __gv_cvar indirection deserves a note: it is a global vtable of function pointers (css_send_req_to_server, css_receive_data_from_server, css_queue_receive_data_buffer, …) so that the same client stubs work in both CS_MODE (real client/server, calls go to TCP) and SA_MODE (standalone — client and server linked into one process, calls short-circuit through an in-process queue). The vtable is populated at link time by whichever mode-specific connection_cl.cpp or its standalone equivalent gets compiled in.
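
The shape of that vtable, reconstructed from the call sites above (the signatures are inferred, not copied from the source):

// Sketch, not CUBRID source: the mode-switch vtable shape.
struct client_vtable
{
  unsigned int (*css_send_req_to_server) (const char *host, int request,
                                          char *argbuf, int argsize,
                                          char *databuf, int datasize,
                                          char *replybuf, int replysize);
  int (*css_receive_data_from_server) (unsigned int rc, char **buffer, int *size);
  int (*css_queue_receive_data_buffer) (unsigned int rc, char *buffer, int size);
  int (*css_get_errno) (void);
};

// In CS_MODE the pointers target real socket I/O; in SA_MODE they
// short-circuit into the linked-in server.  Populated at init time by
// the mode-specific connection code.
extern struct client_vtable __gv_cvar_sketch;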

For the higher-level call shapes — request with bulk reply, request with callbacks, request with stream reply — network_cl.c provides specialised wrappers:

| Function | Use |
| --- | --- |
| net_client_request_no_reply | One-shot fire-and-forget (e.g. interrupt) |
| net_client_request | Standard request/reply |
| net_client_request_with_callback | Server may send back-channel callbacks during processing (queries) |
| net_client_request_recv_copyarea | Reply contains an LC_COPYAREA payload |
| net_client_request_method_callback | Server invokes a client-side method (legacy stored-procedure path) |
| net_client_request_with_logwr_context | Replication log-writer streaming |
| net_client_request_recv_stream | Open-ended streaming reply (e.g. loaddb progress) |

Each handler shape on the server side has a matching wrapper on the client side.

Client stub example — qmgr_execute_query

The symmetry of client and server is clearest by reading the same RPC on both sides. Server-side is sqmgr_execute_query (above); client-side:

// qmgr_execute_query — network_interface_cl.c:6916 (condensed)
QFILE_LIST_ID *
qmgr_execute_query (const XASL_ID *xasl_id, QUERY_ID *query_idp, int dbval_cnt,
                    const DB_VALUE *dbvals, QUERY_FLAG flag, ...)
{
  QFILE_LIST_ID *list_id = NULL;
  int req_error;
  char *request, *reply, *senddata = NULL;
  OR_ALIGNED_BUF (OR_XASL_ID_SIZE + OR_INT_SIZE * 5 + ...) a_request;
  OR_ALIGNED_BUF (OR_INT_SIZE * 7 + OR_PTR_ALIGNED_SIZE + OR_CACHE_TIME_SIZE) a_reply;

  request = OR_ALIGNED_BUF_START (a_request);
  reply = OR_ALIGNED_BUF_START (a_reply);

  /* 1. pack DB_VALUE host vars into bulk send buffer */
  for (int i = 0; i < dbval_cnt; i++)
    senddata_size += OR_VALUE_ALIGNED_SIZE (&dbvals[i]);
  senddata = (char *) malloc (senddata_size);
  ptr = senddata;
  for (int i = 0; i < dbval_cnt; i++)
    ptr = or_pack_db_value (ptr, (DB_VALUE *) &dbvals[i]);

  /* 2. pack the small fixed args into the request buffer */
  ptr = request;
  OR_PACK_XASL_ID (ptr, xasl_id);
  ptr = or_pack_int (ptr, dbval_cnt);
  ptr = or_pack_int (ptr, senddata_size);
  ptr = or_pack_int (ptr, flag);
  OR_PACK_CACHE_TIME (ptr, clt_cache_time);
  ptr = or_pack_int (ptr, query_timeout);

  /* 3. send + receive (callback variant: server may issue method callbacks back to us) */
  req_error =
    net_client_request_with_callback (NET_SERVER_QM_QUERY_EXECUTE, request, request_len, reply,
                                      OR_ALIGNED_BUF_SIZE (a_reply), senddata, senddata_size, ...);

  /* 4. unpack the reply */
  ptr = or_unpack_ptr (reply + OR_INT_SIZE * 4, query_idp);
  OR_UNPACK_CACHE_TIME (ptr, &local_srv_cache_time);
  ...
  return list_id;
}

The or_pack_* sequence in step 2 is byte-for-byte the same sequence as or_unpack_* in sqmgr_execute_query (in the same order: XASL_ID, dbval_cnt, data_size, query_flag, cache_time, query_timeout). Any divergence breaks the wire.

Server-side errors flow back through a parallel channel:

  1. The handler calls er_set (ER_ERROR_SEVERITY, ARG_FILE_LINE, ER_*, ...) which records the error in the thread-local error area (er_set lives in error_manager.c).
  2. The handler then calls return_error_to_client (thread_p, rid) which serialises the error area via er_get_area_error() and sends it as an ERROR_TYPE packet (css_send_error).
  3. The client-side net_client_request_internal reads the ERROR_TYPE packet and calls set_server_error(). For most enum css_error_code values the client maps to ER_NET_SERVER_CRASHED; for special server-rejection codes (ER_DB_NO_MODIFICATIONS, ER_AU_DBA_ONLY) the original error is preserved.
  4. er_set_with_oserror in set_server_error() stamps errno into the propagated error so the client can discriminate “server killed my socket” from “server returned a logical error”.

For abort paths (deadlock victim, query interrupted), the server uses css_send_abort_to_client (conn, rid) to send an ABORT_TYPE packet without an error payload; the client recognises ABORT_TYPE as “the request was rejected, see the next error packet for details”.

sequenceDiagram
  participant CL as client
  participant CSTUB as qmgr_execute_query<br/>(network_interface_cl.c)
  participant CNET as net_client_request_with_callback<br/>(network_cl.c)
  participant WIRE as TCP / Unix-domain<br/>NET_HEADER framing
  participant SWORK as connection::worker<br/>(connection_worker.cpp)
  participant SDISP as net_server_request<br/>(network_sr.c)
  participant SHND as sqmgr_execute_query<br/>(network_interface_sr.cpp)

  CL->>CSTUB: qmgr_execute_query(xasl_id, dbvals, ...)
  CSTUB->>CSTUB: or_pack_value(senddata, dbvals)<br/>or_pack_int(...)
  CSTUB->>CNET: net_client_request_with_callback(<br/>  NET_SERVER_QM_QUERY_EXECUTE, req, replybuf, senddata)
  CNET->>WIRE: NET_HEADER{type=COMMAND, op=QM_QUERY_EXECUTE, rid=R}
  CNET->>WIRE: NET_HEADER{type=DATA, rid=R} + req body
  CNET->>WIRE: NET_HEADER{type=DATA, rid=R} + senddata body
  WIRE->>SWORK: epoll_wait → readv
  SWORK->>SDISP: enqueue task → cubthread::transaction worker picks up
  SDISP->>SDISP: net_Requests[NET_SERVER_QM_QUERY_EXECUTE]<br/>action_attribute = SET_DIAGNOSTICS_INFO | IN_TRANSACTION
  SDISP->>SHND: sqmgr_execute_query(thread_p, rid, request, reqlen)
  SHND->>SHND: OR_UNPACK_XASL_ID(...)<br/>or_unpack_int(...)<br/>css_receive_data_from_client → host vars
  SHND->>SHND: xqmgr_execute_query(...) → list_id
  SHND->>WIRE: NET_HEADER{type=DATA, rid=R} + reply (success, size, query_id)
  SHND->>WIRE: NET_HEADER{type=DATA, rid=R} + list_id payload
  SHND->>WIRE: NET_HEADER{type=DATA, rid=R} + page0 payload
  WIRE->>CNET: read packets, match rid, deliver to caller's reply/replydata buffers
  CNET->>CSTUB: return; reply contains query_id, list_id ptr
  CSTUB->>CL: QFILE_LIST_ID *

A few subtleties of this flow worth naming:

  • The opcode (function_code field of the header) is only meaningful for COMMAND_TYPE packets. For DATA_TYPE packets, the receiver identifies the message by request_id and looks up which buffer it was queued into.
  • host vars (the query parameters) ride a separate DATA_TYPE packet from the request body (which carries XASL_ID, dbval_cnt, query_flag). Splitting them lets the small request body share an OR_ALIGNED_BUF of fixed size while the bulk parameters fly in their own buffer.
  • The server may emit multiple DATA_TYPE reply packets — one for the small reply, one for the result list_id, one for the first result page. The client’s net_client_request_with_callback knows how many to expect from the call’s signature.
  • Mid-flight, the server can issue a callback request back to the client (e.g. a method invocation, a user-input prompt, a console output). These are encoded as QUERY_SERVER_REQUEST values (connection_defs.h:313): {QUERY_END, METHOD_CALL, ASYNC_OBTAIN_USER_INPUT, GET_NEXT_LOG_PAGES, END_CALLBACK, CONSOLE_OUTPUT}.

The symbol maps below index each layer, closing with an approximate file/line index.

Connection layer:

| Symbol | File | Role |
| --- | --- | --- |
| CSS_CONN_ENTRY | connection_defs.h | Per-connection state (fd, request_id, status, transaction_id, queues) |
| css_initialize_conn | connection_sr.c | Reset a CSS_CONN_ENTRY for reuse from the pool |
| css_make_conn | connection_sr.c | Allocate a CSS_CONN_ENTRY and init its lists |
| css_init_conn_list | connection_sr.c | Boot-time creation of the connection-entry array |
| css_shutdown_conn | connection_sr.c | Tear down on disconnect; finalise all lists; free version string |
| css_connect_to_master_server | connection_sr.c | Legacy server-to-master registration (Unix-domain or new-style) |
| css_set_proc_register | connection_sr.c | Build the CSS_SERVER_PROC_REGISTER payload sent at registration |
| cubconn::master::connector::run | master_connector.cpp | Modern entry: connect → handshake → execute the master forwarding loop |
| connector::handle_master_reception | master_connector.cpp | Receive a forwarded fd from cub_master, dispatch to worker pool |
| cubconn::connection::pool | connection_pool.{cpp,hpp} | Free-list of context objects + workers; claim_context/retire_context |
| cubconn::connection::worker | connection_worker.{cpp,hpp} | Per-worker epoll loop; reads CSS packets, enqueues request tasks |
| worker::handle_command_header_packet | connection_worker.cpp | Read NET_HEADER, classify as command/data/error/abort/close |
| worker::handle_data_packet | connection_worker.cpp | Match by RID, deliver into queued user buffer |
| worker::push_task_into_worker_pool | connection_worker.cpp | Hand the assembled request to the transaction worker pool |
| css_internal_request_handler | server_support.c | Bridge: unpack from the connection, call css_Server_request_handler |
| css_initialize_server_interfaces | server_support.c | Boot-time install of the request-handler function pointer |
| css_init | server_support.c | Server’s network main: build pool, register transaction workers, run |
| css_pack_server_name | server_support.c | Encode (server name + db version + bit-platform) into a registration blob |

Dispatch layer:

| Symbol | File | Role |
| --- | --- | --- |
| enum net_server_request | network.h | The opcode enum; one value per server entry |
| NET_SERVER_REQUEST_LIST macro | network.h | X-macro form used to expand both enum + name table |
| NET_SERVER_PING_WITH_HANDSHAKE = 999 | network.h | Out-of-band opcode; preserved across versions |
| NET_CAP_* capability bits | network.h:304-311 | Negotiated at handshake; gate replica/read-only/interrupt features |
| struct net_request | network_request_def.hpp | (action_attribute, processing_function) row of the dispatch table |
| enum net_req_act | network_request_def.hpp | Bitmask: CHECK_DB_MODIFICATION / CHECK_AUTHORIZATION / etc. |
| net_Requests[] (static) | network_sr.c | The dispatch table itself |
| net_server_init | network_sr.c:74 | Populate net_Requests[opcode] for every opcode |
| net_server_request | network_sr.c:791 | The dispatcher: bounds-check, side-conditions, call handler |
| net_server_start | network_sr.c:1058 | Server main(): er_init → cubthread → boot_restart_server → css_init |
| net_server_conn_down | network_sr.c:1040 | Callback when a client connection drops; unregisters the client |
| net_server_wakeup_workers | network_sr.c:927 | Used during shutdown to interrupt threads holding a tran index |
| get_net_request_name | network_sr.c | Reverse-lookup (opcode → string) for log messages |

Server-side handlers:

| Symbol | File | Role |
| --- | --- | --- |
| server_ping | network_interface_sr.cpp:532 | Trivial handler: int in, int out |
| server_ping_with_handshake | network_interface_sr.cpp:563 | Initial handshake; checks bit-platform, capabilities, version |
| sboot_register_client | network_interface_sr.cpp:3760 | Per-connection registration after handshake |
| sqp_get_server_info | network_interface_sr.cpp:7962 | Fetch sysdate / local txn id; multi-DB_VALUE reply |
| slocator_fetch | network_interface_sr.cpp:671 | Fetch one object by OID |
| slocator_force | network_interface_sr.cpp:1381 | Bulk DML: copy area in, descriptor out |
| sqmgr_prepare_query | network_interface_sr.cpp:5107 | Prepare a query: returns XASL_ID |
| sqmgr_execute_query | network_interface_sr.cpp:5399 | Execute prepared query: returns query_id, list_id, page0 |
| stran_server_commit | network_interface_sr.cpp (via dispatch table) | Commit current transaction; marks OUT_TRANSACTION |
| return_error_to_client | network_interface_sr.cpp (helper) | Wrap er_get_area_error and call css_send_error |

Packer / unpacker:

| Symbol | File | Role |
| --- | --- | --- |
| or_pack_int / or_unpack_int | object_representation.h | 4-byte big-endian int |
| or_pack_int64 / or_unpack_int64 | object_representation.h | 8-byte int |
| or_pack_string / or_unpack_string | object_representation.h | Length-prefixed C string |
| or_pack_oid / or_unpack_oid | object_representation.h | 8-byte (volid, pageid, slotid) tuple |
| or_pack_value / or_unpack_value | object_representation.h | DB_VALUE (domain header + null flag + value bytes) |
| or_packed_string_length | object_representation.h | Sizing helper for variable-length packing |
| OR_VALUE_ALIGNED_SIZE | object_representation.h | Macro: alignment-padded byte size of a DB_VALUE |
| OR_ALIGNED_BUF | object_representation.h | Stack buffer + start pointer with required alignment |
| OR_PACK_XASL_ID / OR_UNPACK_XASL_ID | object_representation.h | Composite for XASL_ID (sha1 + cache_flag + temp_file_id) |
| OR_PACK_CACHE_TIME / OR_UNPACK_CACHE_TIME | object_representation.h | Composite for CACHE_TIME (sec + usec) |
| OR_INT_SIZE / OR_OID_SIZE | object_representation.h | Sizing constants used in every stub |
| OR_BUF | object_representation.h:1029 | Higher-level buffer struct used in heap/btree pack code |

Client side:

| Symbol | File | Role |
| --- | --- | --- |
| net_client_init | network_cl.c:3657 | Initial connect: set net_Server_host/name, do handshake |
| net_client_request_internal | network_cl.c:495 | Core send/receive over the __gv_cvar vtable |
| net_client_request | network_cl.c:587 | Standard wrapper |
| net_client_request_with_callback | network_cl.c:1153 | Variant that handles server-initiated callbacks during the call |
| net_client_request_recv_copyarea | network_cl.c:2317 | Variant for replies containing an LC_COPYAREA |
| net_client_request_with_logwr_context | network_cl.c:2072 | Variant for log-writer streaming |
| client_capabilities | network_cl.c:235 | Build the local NET_CAP_* bitmask |
| check_server_capabilities | network_cl.c:259 | Reconcile client/server capability bits at handshake |
| set_server_error | network_cl.c | Map enum css_error_code to ER_NET_* and propagate via er_set |
| locator_force | network_interface_cl.c:697 | Client stub paired with slocator_force |
| qmgr_execute_query | network_interface_cl.c:6916 | Client stub paired with sqmgr_execute_query |
| locator_fetch | network_interface_cl.c:271 | Client stub paired with slocator_fetch |

File / line index:

| Symbol | File | Approx. line |
| --- | --- | --- |
| enum net_server_request | src/communication/network.h | 289 |
| NET_SERVER_PING_WITH_HANDSHAKE = 999 | src/communication/network.h | 300 |
| NET_CAP_BACKWARD_COMPATIBLE | src/communication/network.h | 304 |
| get_endian_type | src/communication/network.h | 337 |
| struct packet_header (NET_HEADER) | src/connection/connection_defs.h | 382 |
| enum css_packet_type | src/connection/connection_defs.h | 185 |
| enum css_command_type | src/connection/connection_defs.h | 67 |
| struct css_conn_entry | src/connection/connection_defs.h | 437 |
| enum net_req_act | src/communication/network_request_def.hpp | 32 |
| struct net_request | src/communication/network_request_def.hpp | 43 |
| net_server_init | src/communication/network_sr.c | 74 |
| net_server_request | src/communication/network_sr.c | 791 |
| net_server_start | src/communication/network_sr.c | 1058 |
| net_Requests[] (table) | src/communication/network_sr.c | 68 |
| server_ping | src/communication/network_interface_sr.cpp | 532 |
| server_ping_with_handshake | src/communication/network_interface_sr.cpp | 563 |
| slocator_fetch | src/communication/network_interface_sr.cpp | 671 |
| slocator_force | src/communication/network_interface_sr.cpp | 1381 |
| sboot_register_client | src/communication/network_interface_sr.cpp | 3760 |
| sqmgr_prepare_query | src/communication/network_interface_sr.cpp | 5107 |
| sqmgr_execute_query | src/communication/network_interface_sr.cpp | 5399 |
| sqp_get_server_info | src/communication/network_interface_sr.cpp | 7962 |
| css_initialize_conn | src/connection/connection_sr.c | 255 |
| css_init_conn_list | src/connection/connection_sr.c | 420 |
| css_make_conn | src/connection/connection_sr.c | 577 |
| css_connect_to_master_server | src/connection/connection_sr.c | 1066 |
| css_read_header | src/connection/connection_sr.c | 1428 |
| css_receive_request | src/connection/connection_sr.c | 1470 |
| css_receive_data | src/connection/connection_sr.c | 1487 |
| css_internal_request_handler | src/connection/server_support.c | 450 |
| css_initialize_server_interfaces | src/connection/server_support.c | 516 |
| css_init | src/connection/server_support.c | 554 |
| css_send_data_to_client | src/connection/server_support.c | 708 |
| css_pack_server_name | src/connection/server_support.c | 1417 |
| css_set_net_header | src/connection/connection_support.cpp | 1326 |
| css_send_request_with_data_buffer | src/connection/connection_support.cpp | 1367 |
| css_send_request | src/connection/connection_support.cpp | 1468 |
| css_send_data | src/connection/connection_support.cpp | 1526 |
| css_send_two_data | src/connection/connection_support.cpp | 1578 |
| css_send_error | src/connection/connection_support.cpp | 1652 |
| css_net_send | src/connection/connection_support.cpp | 1057 |
| css_net_recv | src/connection/connection_support.cpp | 544 |
| css_read_remaining_bytes | src/connection/connection_support.cpp | 501 |
| cubconn::master::connector::run | src/connection/master_connector.cpp | 160 |
| cubconn::connection::worker (class) | src/connection/connection_worker.hpp | 52 |
| cubconn::connection::pool (class) | src/connection/connection_pool.hpp | 39 |
| net_client_init | src/communication/network_cl.c | 3657 |
| net_client_request_internal | src/communication/network_cl.c | 495 |
| net_client_request | src/communication/network_cl.c | 587 |
| net_client_request_with_callback | src/communication/network_cl.c | 1153 |
| client_capabilities | src/communication/network_cl.c | 235 |
| check_server_capabilities | src/communication/network_cl.c | 259 |
| locator_force | src/communication/network_interface_cl.c | 697 |
| qmgr_execute_query | src/communication/network_interface_cl.c | 6916 |

The PL family (JavaSP and PL/CSQL) ships its own Unix-domain socket between cub_server and cub_pl, with a separate wire protocol — not the CSS framing described here. The PL wire is simpler (one Header with a session id and a RequestCode, then a packed body) because the participants are fixed (one cub_server, one cub_pl) and the message kinds are bounded (SP_CODE_INVOKE, SP_CODE_RESULT, SP_CODE_ERROR, SP_CODE_INTERNAL_JDBC, …). It does not use NET_HEADER and does not go through net_Requests[]; the dispatch on the Java side is a switch in ExecuteThread.run().

There is one place where the two protocols intersect: when a JavaSP issues a SQL query through the server-side JDBC driver (CUBRIDServerSideConnection), the request travels back to the originating server worker thread on the same cub_pl-to-cub_server PL socket (via METHOD_CALLBACK_* codes), not through the regular client/server NRP described in this document. The PL doc is authoritative on that callback path; this doc is authoritative on the original client request that triggered the SP invocation (NET_SERVER_PL_CALL, opcode handler spl_call).

  • Two physically distinct registration paths to the master. The legacy path is css_connect_to_master_server in connection_sr.c (TCP + Unix-domain handoff on non-Windows, or SERVER_REQUEST_ACCEPTED_NEW with the server opening its own port on Windows). The modern path is cubconn::master::connector::run in master_connector.cpp (active in the current css_init flow). Both compile in the current source. The legacy function is still referenced, but css_init does not call it on the active path; it is preserved for standalone/non-pool builds and possibly older build configurations. A reader chasing “where is the listening socket bound” should start with master_connector.cpp.

  • net_Requests is fixed-size at compile time. The array is sized to NET_SERVER_REQUEST_END, an enum value derived from the X-macro. Adding an opcode requires recompiling everything client-side and server-side; there is no runtime registration mechanism. This is a deliberate consequence of the manual-stub choice — but it also means a hot-deployed extension cannot introduce a new RPC.

  • OUT_TRANSACTION is sparsely used. The dispatcher honours the bit (it clears conn->in_transaction after the handler returns), but inspection of net_server_init shows that only NET_SERVER_TM_SERVER_COMMIT and NET_SERVER_TM_SERVER_ABORT (and a couple of others) actually set it. Most RPCs do not advertise transaction transitions via the bitmask; the connection’s transaction state is managed inside the handler instead.

  • Endianness is assumed, not negotiated. get_endian_type () is defined in network.h but is not called as part of the standard handshake in current source. The protocol implicitly assumes both endpoints use the same byte order (header fields are htonl/ntohl big-endian, but the peer’s platform is identified only by the client_bit_platform field, which encodes 32 vs 64 bit, not endianness). All currently supported targets (Linux x86_64, Windows x86_64) are little-endian, so this has not been a practical problem. A sketch of what such a probe looks like follows this list.

  • __gv_cvar is the indirection that lets stubs work in both CS_MODE and SA_MODE. When this doc walks “the client stub packs args, sends, reads response”, in SA_MODE the same call short-circuits through an in-process queue without touching a socket. Readers tracing net_client_request in a debugger should set a breakpoint inside __gv_cvar.css_send_req_to_server to see which mode is active.

  • The CSS framer’s version and host_id fields are reserved but unused. They are zeroed by css_set_net_header and ignored by the receiver. They exist for future protocol versioning that never landed; in current code, version skew is detected only at server_ping_with_handshake time via rel_get_net_compatible, not per-packet.
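
As promised above, a sketch of a runtime endianness probe of the kind get_endian_type could perform (illustrative only; the real inline is at network.h:337-343):

// Sketch, not CUBRID source.
enum endian_type { ENDIAN_LITTLE, ENDIAN_BIG };

static enum endian_type
probe_endian (void)
{
  unsigned int probe = 1;
  // on a little-endian target the low-order byte sits first in memory
  return (*(unsigned char *) &probe == 1) ? ENDIAN_LITTLE : ENDIAN_BIG;
}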

  1. TLS / encryption. The current source has no TLS termination for the client/server channel. The broker (cas) supports SSL for the broker-to-client hop, but cas to cub_server is plain. Is there a roadmap for end-to-end TLS, and would it sit at the css_net_send layer or higher?

  2. Compression. Large LC_COPYAREA payloads and query result pages are sent uncompressed. Some peer engines compress at the framer (MySQL’s compressed protocol, negotiated with the CLIENT_COMPRESS capability flag). Is there an investigation into page-buffer-aware compression for CUBRID’s NRP path?

  3. Versioning at the opcode level. A new opcode added to enum net_server_request is backward-compatible (an old client simply will not request it) but a changed argument layout in an existing opcode silently breaks older clients. The release-string compatibility check at handshake time is coarse-grained; is there a per-opcode wire-version registry that I missed?

  4. function_code is short (16-bit). With ~150 opcodes today, the limit is far away, but the field is sized for a future where modules outside src/ register their own RPCs (loaddb, CDC, flashback already added blocks). Is there an opcode-namespace plan, or will the table simply keep growing flat?

  5. Worker pool tunables. PRM_ID_TASK_WORKER, PRM_ID_CSS_MAX_CONNECTION_WORKER, and PRM_ID_CSS_MIN_CONNECTION_WORKER together gate concurrency. Their interaction with epoll’s edge-triggered mode and TBB’s queue is not documented in code; production deployments likely tune these empirically. A capacity-planning doc would be useful.

  6. The css_internal_request_handler global indirection. The handler pointer is installed once at boot via css_initialize_server_interfaces; there is no facility to swap it (e.g. for hot-patching, telemetry hooks, A/B traffic shadowing). Is this an intentional rigidity or an artefact of single-tenant deployment?

  • src/connection/ — CSS framing, connection lifecycle, worker pool
  • src/communication/ — NRP table, dispatch, per-call handlers and stubs
  • src/connection/AGENTS.md — module overview at the connection level
  • src/communication/AGENTS.md — module overview at the protocol level
  • references/cubrid/CLAUDE.md — top-level CUBRID engine structure