
CUBRID Reading Path — How a Stored Procedure Call Executes End-to-End (JavaSP / PL/CSQL with Embedded SQL Callback)

This document follows one concrete invocation — CALL my_sp('arg'), where my_sp is a JavaSP whose body issues an embedded SELECT * FROM t WHERE x > 10 via the server-side JDBC driver — from the moment a JDBC client calls CallableStatement.execute until the last result arrives back at the application. The trip starts as a TCP byte stream into a CUBRID broker daemon, is handed to a CAS worker via Unix-domain SCM_RIGHTS file-descriptor passing, lands inside a cub_server request handler through the standard CSS framing, walks the compile pipeline for the CALL statement, enters the PL family (cubrid-pl-javasp.md, cubrid-pl-plcsql.md) which ships the invocation to the cub_pl JVM as an SP_CODE_INVOKE envelope, dispatches inside the JVM to TargetMethod.invoke() for reflective Java dispatch, and runs the SP body. When the body issues its embedded SELECT, the PL server bridge (cubrid-pl-server-bridge.md) wraps each callback as SP_CODE_INTERNAL_JDBC carrying a METHOD_CALLBACK_* opcode, ships it back to cub_server over the same socket that delivered the original invocation, and the server’s cubpl::executor::response_callback_command routes it to callback_prepare / callback_execute / callback_fetch which re-enter the standard compile-and-execute pipeline recursively. The result rows flow back to the JVM, the SP body completes, the SP’s return value flows back as SP_CODE_RESULT, and the rest of the trip mirrors cubrid-rpath-select.md’s return path.

The example is JavaSP rather than PL/CSQL because the reflective-dispatch step is concrete and visible; the PL/CSQL variant follows the same trip with PlcsqlCompilerMain.compileInner / generated-class dispatch substituted for the reflective call. The rpath covers both, with the PL/CSQL-only branches called out where they diverge.

The embedded-SELECT example is small on purpose — it exercises the single most consequential mechanism on the SP path (the server↔JVM callback channel) without bringing in joins, ordering, or aggregates. Branches that are not on this path (recursive SP calls, JavaSP versus PL/CSQL compile-time semantic check, holdable result-sets, OID materialisation callbacks) are catalogued in the “What we did NOT cover” section at the end with one-line pointers.

Step 1 — Client to broker (CallableStatement)


The trip starts in a JDBC client process. The application calls CallableStatement.execute("CALL my_sp(?)"); the JDBC driver wraps the call into a CCI request and pushes it down a TCP socket to cub_broker. From the broker’s point of view a CALL statement is indistinguishable from any other SQL — the same dispatch_thr_f hands an idle CAS the file descriptor via SCM_RIGHTS, and from that point the JDBC driver and the CAS speak directly. See cubrid-broker.md §“Process topology” for the broker’s receiver/dispatch/CAS worker shape and the SCM_RIGHTS fd-handoff specifically. The broker is not on the hot path for the rest of this trip.

The CAS receives the SQL text on the JDBC-facing socket and runs it through ux_prepare / ux_execute, which call the embedded db_* API (db_open_buffer → db_compile_statement_local → db_execute_statement_local). For our CALL, prepare and execute fire back-to-back inside ux_execute because the SQL is literal text. cubrid-dbi-cci.md is the source of truth for this adaptation. The CAS is a CUBRID client in network terms — it holds a server-facing CSS socket alongside its JDBC-facing socket, and ships the compile/execute work to cub_server through the standard NET_SERVER_* framing. cubrid-network-protocol.md documents the framing.

Step 2 — Server-side request entry and session binding


Connection acceptance and session binding are identical to the plain-SELECT path — see cubrid-rpath-select.md step 2 for the detail. Briefly: cub_master forwards the new connection to cub_server over a Unix-domain socket; an epoll-based cubconn::connection::worker reads CSS-framed packets and pushes the decoded request through net_Requests[]. For our CALL the relevant opcode is NET_SERVER_QM_QUERY_PREPARE (combined prepare+execute when invoked via Statement.execute). The SESSION_STATE lookup, transaction binding (TDES), and THREAD_ENTRY setup all happen here per cubrid-server-session.md and cubrid-transaction.md.

The transaction binding matters more on the SP path than on the plain SELECT path because SP invocations participate in the caller’s transaction by default (the JavaSP runs inside the same TDES the caller opened) and can issue COMMIT / ROLLBACK themselves through the bridge if the SP’s transaction_control flag is set. PL/CSQL always has transaction_control = true; JavaSP inherits the SP definition’s flag. See cubrid-pl-javasp.md §“Wire protocol” for the invoke_java::transaction_control field.

Step 3 — Compile pipeline for the CALL statement

The query manager hands the raw SQL text to the compile front-end. The trip through the lexer / Bison parser / semantic check / rewrite / optimizer / XASL generator / XASL cache is the same as for any SQL statement — cubrid-parser.md, cubrid-semantic-check.md, cubrid-query-rewrite.md, cubrid-query-optimizer.md, cubrid-xasl-generator.md, cubrid-xasl-cache.md are precise on each phase.

For a CALL statement two things differ from a SELECT:

  • Parse-tree shape. The root is a PT_METHOD_CALL (or PT_FUNCTION with the SP-call subtype) instead of a PT_SELECT. Name resolution looks up my_sp in the _db_stored_procedure catalog (see cubrid-pl-javasp.md §“Catalog rows”) rather than in the table catalog; a missing SP fails semantic check before the optimizer runs.
  • XASL shape. The XASL root is a DO_PROC (rather than a BUILDLIST_PROC). DO_PROC carries a single procedure call with packed argument expressions; the executor’s per-block iterator simply runs the call once and collects the return value. There is no row-stream output for the call itself, although the SP body’s embedded queries may produce per-callback result-sets (see step 7 below).

The XASL cache key collapses by SP signature, so subsequent CALL my_sp(?) invocations skip parse / semantic / optimize / XASL and start at execution. The first call pays the full compile.
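The first-call-compiles, later-calls-hit behaviour can be modelled with a few lines. This is a toy sketch, not CUBRID's actual cache API — the class, map, and counter are all illustrative stand-ins for the real XASL cache keyed by the CALL's signature:

```java
import java.util.HashMap;
import java.util.Map;

public class XaslCacheSketch {
    static int compileCount = 0;
    static final Map<String, String> cache = new HashMap<>();

    static String prepare(String sql) {
        return cache.computeIfAbsent(sql, s -> {
            compileCount++;                // full parse/optimize/XASL pass
            return "XASL-plan-for:" + s;   // stand-in for the generated plan
        });
    }

    public static void main(String[] args) {
        prepare("CALL my_sp(?)");          // first call: pays the compile
        prepare("CALL my_sp(?)");          // second call: cache hit
        System.out.println(compileCount);  // only one compile happened
    }
}
```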

Step 4 — Executor enters DO_PROC and resolves the SP


qexec_execute_mainblock_internal’s switch (xasl->type) arrives at DO_PROC and dispatches into the per-call resolver. The resolver walks the catalog row that semantic check populated to extract: language tag (SP_LANG_JAVA or SP_LANG_PLCSQL), target-class / target-method strings (JavaSP only), argument descriptor list (mode + DB type per arg), and the transaction_control flag. The output is a cubpl::pl_signature plus an argument-value vector ready to ship to cub_pl.
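The resolver's output can be pictured as a record with exactly those fields. This is a hypothetical Java mirror of the cubpl::pl_signature shape the text describes — field names and the record layout are illustrative, not the real C++ struct:

```java
import java.util.List;

public class PlSignatureSketch {
    enum Lang { SP_LANG_JAVA, SP_LANG_PLCSQL }

    record ArgDesc(String mode, String dbType) {}
    record PlSignature(Lang lang, String targetClass, String targetMethod,
                       List<ArgDesc> args, boolean transactionControl) {}

    public static void main(String[] args) {
        PlSignature sig = new PlSignature(
            Lang.SP_LANG_JAVA,
            "MySp",                                // target-class (JavaSP only)
            "my_sp",                               // target-method
            List.of(new ArgDesc("IN", "STRING")),  // one IN argument
            false);                                // JavaSP inherits the definition's flag
        System.out.println(sig.lang() + " " + sig.targetMethod()
                           + " argc=" + sig.args().size());
    }
}
```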

The executor then enters the PL family proper. cubrid-pl-javasp.md §“C++ session and executor” describes the per-call cubpl::executor object that drives the round-trip:

  1. executor::fetch_args_peek() — populates the argument vector from the DO_PROC XASL value descriptor.
  2. executor::request_invoke_command() — packs an invoke_java payload with the signature and arguments, claims a connection from the global PL_CONNECTION_POOL (one of N pre-established UDS or TCP sockets to cub_pl), and writes SP_CODE_INVOKE followed by the payload.
  3. executor::response_invoke_command(value) — enters a loop reading responses from the same connection. Each frame is one of SP_CODE_RESULT (final return value), SP_CODE_ERROR, or SP_CODE_INTERNAL_JDBC (a callback request — handled in step 7).

The pool, the connection, and the pool→connection_view RAII are the same as cubrid-rpath-select.md’s plain socket abstraction — just with cub_pl on the other end instead of a JDBC client.
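The shape of the response loop in item 3 can be sketched as a small state machine: keep pulling frames from the connection, service callbacks in place, and exit only on the final result or an error. The opcode values below are illustrative, not CUBRID's actual wire constants:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ResponseLoopSketch {
    static final int SP_CODE_RESULT = 1;
    static final int SP_CODE_ERROR = 2;
    static final int SP_CODE_INTERNAL_JDBC = 3;

    public static void main(String[] args) {
        // Fake connection: two callback frames (prepare, execute), then the result.
        Queue<Integer> frames = new ArrayDeque<>();
        frames.add(SP_CODE_INTERNAL_JDBC);
        frames.add(SP_CODE_INTERNAL_JDBC);
        frames.add(SP_CODE_RESULT);

        int callbacks = 0;
        while (true) {
            int code = frames.remove();
            if (code == SP_CODE_INTERNAL_JDBC) {
                callbacks++;                 // route to a callback_* handler
            } else if (code == SP_CODE_ERROR) {
                throw new IllegalStateException("SP raised an error");
            } else if (code == SP_CODE_RESULT) {
                break;                       // unpack the return value; done
            }
        }
        System.out.println("callbacks=" + callbacks);
    }
}
```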

Step 5 — JVM dispatch (ListenerThread → ExecuteThread)


On the JVM end of the connection, cub_pl’s ListenerThread (pl_engine/pl_server/.../ListenerThread.java) is a single accept loop. For each accepted UDS or TCP socket it spawns one ExecuteThread and parks. The ExecuteThread reads one frame at a time from its socket, dispatches by RequestCode:

  • RequestCode.UTIL_PING / UTIL_BOOTSTRAP → liveness / sysparm setup (handled at boot, not on the call path).
  • RequestCode.INVOKE_SP → the SP invocation entry — processStoredProcedure().
  • RequestCode.COMPILE → PL/CSQL compile (only on CREATE PROCEDURE time; not on the call path either).
  • RequestCode.DESTROY → session teardown.

For our CALL, the frame’s opcode is INVOKE_SP. processStoredProcedure() does:

  1. PrepareArgs.readArgs() — unpacks the argument vector from the wire frame into Java Value objects, with type coercion per the invoke_java’s argument descriptor.
  2. makeStoredProcedure() returns a StoredProcedure instance that knows whether to dispatch via JavaSP or PL/CSQL based on the language tag.
  3. StoredProcedure.invoke() — the actual user-code dispatch.

For JavaSP, invoke() constructs (or retrieves from cache) a TargetMethod whose getMethod() resolves the user method by reflection (Class.getMethod(methodName, argTypes)) using the classloader hierarchy cubrid-pl-javasp.md §“JavaSP-specific: reflective dispatch and classloaders” describes — ContextClassLoader walks $CUBRID_DATABASES/<db>/java/ looking for the JAR, with SessionClassLoader as a per-session isolation layer. Once the Method object is in hand, Method.invoke(target, args) runs the user code.
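The core of that reflective dispatch is standard java.lang.reflect machinery. A minimal stand-in for TargetMethod-style lookup-and-invoke — the classloader walk is elided here, and the class and method names are illustrative:

```java
import java.lang.reflect.Method;

public class ReflectiveDispatchSketch {
    // Pretend this is the user's SP implementation method.
    public static int my_sp(String arg) {
        return arg.length();
    }

    public static void main(String[] args) throws Exception {
        Class<?> cls = ReflectiveDispatchSketch.class;
        Method m = cls.getMethod("my_sp", String.class); // Class.getMethod lookup
        Object ret = m.invoke(null, "hello");            // static method → null receiver
        System.out.println(ret);
    }
}
```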

For PL/CSQL, invoke() instead routes to the in-process compiled class. The JVM holds a per-procedure Class object that PlcsqlCompilerMain produced at CREATE PROCEDURE time (in-process javax.tools.JavaCompiler); cubrid-pl-plcsql.md §“Compilation pipeline at CREATE PROCEDURE time” describes how the compiled JAR is stored as Base64 in _db_stored_procedure_code.ocode and re-loaded by the JVM on demand. Dispatch from there is reflective in the same way.

Either way, the user code runs on the ExecuteThread’s thread inside the JVM. The thread is owned by cub_pl and pinned to one SP invocation for its duration; recursive SP calls (a Java SP that calls another SP) acquire a new execution_stack entry and are handled by a separate ExecuteThread, but they re-use the same session — see cubrid-pl-javasp.md §“Server-side JDBC back-channel”.

Step 6 — SP body runs, issues an embedded SELECT


The user’s Java code does:

public static int my_sp(String arg) throws SQLException {
    Connection conn = DriverManager.getConnection("jdbc:default:connection:");
    PreparedStatement ps = conn.prepareStatement("SELECT * FROM t WHERE x > ?");
    ps.setInt(1, 10);
    ResultSet rs = ps.executeQuery();
    int count = 0;
    while (rs.next()) count++;
    rs.close();
    ps.close();
    return count;
}

The jdbc:default:connection: URL is the well-known shortcut for the server-side JDBC driver (CUBRIDServerSideDriver), which is registered with DriverManager at JVM boot. getConnection on that URL returns a CUBRIDServerSideConnection that does not open a new TCP socket — it reuses the same socket the ExecuteThread is already reading from. This is what cubrid-pl-server-bridge.md calls Path B: the modern PL bridge that ferries cub_pl → cub_server callbacks inside SP_CODE_INTERNAL_JDBC envelopes on the existing channel.

prepareStatement(...) constructs a CUBRIDServerSidePreparedStatement but does not yet talk to the server — it caches the SQL locally. The actual round-trip starts at executeQuery(). That call:

  1. Packs a frame with outer code SP_CODE_INTERNAL_JDBC and inner opcode METHOD_CALLBACK_QUERY_PREPARE, plus the SQL text and the host-variable buffer.
  2. Writes the frame to the socket.
  3. Blocks on a read for the response.
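The two-level envelope in item 1 can be sketched as straightforward length-prefixed packing. The opcode numbers and field layout below are illustrative only — the real packed structures live in method_struct_query.{cpp,hpp}:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class CallbackFrameSketch {
    static final int SP_CODE_INTERNAL_JDBC = 0x08;          // illustrative value
    static final int METHOD_CALLBACK_QUERY_PREPARE = 0x02;  // illustrative value

    static byte[] pack(String sql) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(SP_CODE_INTERNAL_JDBC);         // outer envelope code
        out.writeInt(METHOD_CALLBACK_QUERY_PREPARE); // inner callback opcode
        out.writeUTF(sql);                           // SQL text payload
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] frame = pack("SELECT * FROM t WHERE x > ?");
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(frame));
        System.out.println(in.readInt() == SP_CODE_INTERNAL_JDBC);
        System.out.println(in.readInt() == METHOD_CALLBACK_QUERY_PREPARE);
        System.out.println(in.readUTF());
    }
}
```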

While the JVM thread is blocked, the cub_server worker thread (which has been blocked in executor::response_invoke_command reading from the same connection) wakes up.

Step 7 — Server-side callback dispatch

cubrid-pl-server-bridge.md §“Path B — cub_pl → server” is the detail doc for what happens next. The server worker thread reads the SP_CODE_INTERNAL_JDBC envelope, then calls executor::response_callback_command() which unpacks the inner opcode and dispatches:

// executor::response_callback_command — pl_executor.cpp
switch (code) {
  case METHOD_CALLBACK_QUERY_PREPARE:
    error_code = callback_prepare (thread_ref, unpacker); break;
  case METHOD_CALLBACK_QUERY_EXECUTE:
    error_code = callback_execute (thread_ref, unpacker); break;
  case METHOD_CALLBACK_FETCH:
    error_code = callback_fetch (thread_ref, unpacker); break;
  case METHOD_CALLBACK_OID_GET:
    error_code = callback_oid_get (thread_ref, unpacker); break;
  case METHOD_CALLBACK_END_TRANSACTION:
    error_code = callback_end_transaction (thread_ref, unpacker); break;
  /* ... twelve handlers total ... */
}

For our executeQuery, the JVM actually sends two frames in sequence — METHOD_CALLBACK_QUERY_PREPARE (compile the SQL) then METHOD_CALLBACK_QUERY_EXECUTE (run it) — because the server-side JDBC driver follows the standard JDBC lifecycle. callback_prepare runs the embedded SELECT through the same compile pipeline as step 3 above — cubrid-parser.md, cubrid-semantic-check.md, cubrid-query-rewrite.md, cubrid-query-optimizer.md, cubrid-xasl-generator.md, cubrid-xasl-cache.md all fire recursively. The XASL cache typically hits on the second invocation of the same SP (and within a single invocation if the SP issues the same SQL repeatedly). The prepare returns a server-side prepared-statement handle ID.

callback_execute then runs the prepared statement and returns a query ID along with a first batch of result rows. Cursor maintenance for subsequent rows uses callback_fetch (one frame per batch). The handler implementations in pl_executor.cpp call into the same query-manager machinery the plain SELECT path uses — xqmgr_execute_query ends up firing here too. So the callback path effectively re-enters cubrid-rpath-select.md’s steps 4 through 11 from inside the SP.

The recursion is gated. cubrid-pl-server-bridge.md §“Recursion guard” is precise: tran_get_libcas_depth() (Path A) and the PL session’s m_stack_map size (Path B) are both checked against METHOD_MAX_RECURSION_DEPTH = 15. A SP that invokes another SP that invokes another… beyond 15 is rejected with ER_SP_TOO_MANY_NESTED_CALL. Our embedded SELECT doesn’t recurse into another SP, so the depth stays at 1.
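The depth gate can be sketched in a few lines: each nested SP invocation pushes a frame, and crossing the limit raises the nesting error. The constant matches METHOD_MAX_RECURSION_DEPTH = 15 from the text; everything else here is illustrative:

```java
public class RecursionGuardSketch {
    static final int METHOD_MAX_RECURSION_DEPTH = 15;
    static int depth = 0;   // stand-in for the m_stack_map size check

    static void invokeSp(int nesting) {
        if (depth >= METHOD_MAX_RECURSION_DEPTH)
            throw new IllegalStateException("ER_SP_TOO_MANY_NESTED_CALL");
        depth++;
        try {
            if (nesting > 1) invokeSp(nesting - 1); // SP calling another SP
        } finally {
            depth--;                                 // frame popped on return or error
        }
    }

    public static void main(String[] args) {
        invokeSp(15);                   // exactly at the limit: fine
        System.out.println("ok at 15");
        try {
            invokeSp(16);               // one past the limit: rejected
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```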

The packed wire structures for prepare / execute / fetch live in src/method/method_struct_query.{cpp,hpp}, method_struct_value.{cpp,hpp}, and the OID family in method_struct_oid_info.{cpp,hpp} — the same packed structures the older Path A (server→CAS, legacy C-method scan) uses. See cubrid-pl-server-bridge.md §“Packed wire structures” for the shared family.

Step 8 — Result rows flow back to the JVM


callback_execute’s handler packs a METHOD_RESPONSE_SUCCESS payload containing the query ID, column metadata, and the first batch of rows, then calls m_stack->send_data_to_java(blk) to ship the response over the same connection. The JVM thread, which has been blocked since step 6, wakes up.

The JVM unpacks the response into a CUBRIDServerSideResultSet, backs it with the per-batch row buffer, and returns it to the user code as the ResultSet from executeQuery(). From the user’s perspective the JDBC API behaves identically to a normal external-client connection — rs.next() advances row-by-row, fetching additional batches via METHOD_CALLBACK_FETCH callbacks as needed. Each next() that exhausts the buffered batch issues another callback round-trip; small SPs may complete in a single exchange, larger ones in many.
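The batch-at-a-time cursoring can be modelled directly: next() serves rows from the buffered batch and only triggers another round-trip when the batch runs dry. The fetcher below stands in for METHOD_CALLBACK_FETCH; all names and batch sizes are illustrative:

```java
import java.util.Iterator;
import java.util.List;

public class BatchedCursorSketch {
    static int roundTrips = 0;

    // Fake server: 5 rows total, delivered 2 per fetch callback.
    static final List<List<Integer>> batches =
        List.of(List.of(1, 2), List.of(3, 4), List.of(5));
    static final Iterator<List<Integer>> fetcher = batches.iterator();

    static Iterator<Integer> batch = null;   // currently buffered batch

    static Integer next() {
        if (batch == null || !batch.hasNext()) {
            if (!fetcher.hasNext()) return null; // cursor exhausted
            roundTrips++;                        // one callback per batch
            batch = fetcher.next().iterator();
        }
        return batch.next();
    }

    public static void main(String[] args) {
        int count = 0;
        while (next() != null) count++;          // mirrors the SP body's loop
        System.out.println(count + " rows in " + roundTrips + " round-trips");
    }
}
```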

For our example the SP body simply counts rows and accumulates the count locally. After the loop drains, rs.close() issues METHOD_CALLBACK_CURSOR_CLOSE, ps.close() triggers per-handle cleanup, and the SP body’s return count; becomes the SP’s return value.

Step 9 — SP_CODE_RESULT and the server-side handoff


The JVM’s processStoredProcedure() finishes by packing the return value into a wire frame with outer code SP_CODE_RESULT plus the packed Value and writes it to the socket. The cub_server worker thread, which has been in executor::response_invoke_command’s read loop the whole time, wakes up one final time, sees SP_CODE_RESULT, unpacks the return value into the executor’s output DB_VALUE, and exits the loop.

request_invoke_command then returns the connection to the pool and the executor::execute() call returns. Back in qexec_execute_mainblock_internal, the DO_PROC block sets the SP’s return value as the XASL output and the executor moves on to the next block (none in our case — CALL is the entire statement). The XASL is closed, the list-file (carrying the single return-value tuple) is finalised, and the trip back to the client begins.

The trip back to the JDBC application is the trip out, in reverse, identical to cubrid-rpath-select.md step 11 with one twist: the result-set descriptor for a CALL is a single-column, single-row schema (just the SP’s return value), so the network traffic is small. The CAS forwards the row to the JDBC driver over the JDBC-facing socket; the driver delivers it via CallableStatement.getInt(1) (or equivalent) to the application.

The broker is again not on the hot path — SCM_RIGHTS from step 1 means the JDBC client and the CAS are speaking directly.

sequenceDiagram
  participant JDBC as JDBC client
  participant CAS as cub_cas
  participant SRV as cub_server
  participant JVM as cub_pl JVM (ExecuteThread)
  participant USER as user SP body

  JDBC->>CAS: TCP via SCM_RIGHTS handoff (CAS owns fd)
  CAS->>SRV: NET_SERVER_QM_QUERY_PREPARE ("CALL my_sp(?)")
  SRV->>SRV: parse / semantic-check / rewrite / optimize / XASL
  SRV->>SRV: DO_PROC dispatch; resolve SP catalog row
  SRV->>JVM: SP_CODE_INVOKE (invoke_java payload)
  JVM->>JVM: ListenerThread.accept; new ExecuteThread
  JVM->>JVM: TargetMethod.invoke (reflective)
  JVM->>USER: Method.invoke(...)
  USER->>JVM: jdbc:default:connection: prepareStatement
  USER->>JVM: ps.executeQuery()
  JVM-->>SRV: SP_CODE_INTERNAL_JDBC + METHOD_CALLBACK_QUERY_PREPARE
  SRV->>SRV: callback_prepare → compile pipeline (recursive)
  SRV-->>JVM: METHOD_RESPONSE_SUCCESS (handle id)
  JVM-->>SRV: SP_CODE_INTERNAL_JDBC + METHOD_CALLBACK_QUERY_EXECUTE
  SRV->>SRV: callback_execute → executor (recursive into rpath-select)
  SRV-->>JVM: METHOD_RESPONSE_SUCCESS (rows batch)
  JVM->>USER: ResultSet.next() loop
  USER->>JVM: rs.close(); return count
  JVM->>SRV: SP_CODE_RESULT (return value)
  SRV->>SRV: DO_PROC writes return value to XASL output
  SRV->>CAS: result row (single-column, single-row)
  CAS->>JDBC: CallableStatement.getInt(1)
What we did NOT cover

  • PL/CSQL-only branches. The compile-time GET_SQL_SEMANTICS / GET_GLOBAL_SEMANTICS callbacks PL/CSQL fires during CREATE PROCEDURE to validate embedded SQL. Those ride on Path A (server→CAS) rather than Path B (cub_pl→server). See cubrid-pl-server-bridge.md §“Compile-time bridge for PL/CSQL embedded SQL” and cubrid-pl-plcsql.md §“Asking the C side for global semantics”.
  • Method scan as a SCAN_TYPE. The legacy server-side SCAN_TYPE_METHOD (Path A’s primary trigger — invoking C builtins on per-row arguments during a query) is documented separately. See cubrid-pl-server-bridge.md §“Method scan operator” and cubrid-scan-manager.md §“S_METHOD”.
  • Recursive SP calls. A SP that calls another SP gets a new execution_stack entry and a new ExecuteThread, sharing the session. Recursion bound: 15 frames. See cubrid-pl-javasp.md §“Server-side JDBC back-channel”.
  • Holdable cursors across COMMIT. Server-side cursors that the SP body opens but does not close before SP return migrate to the session’s holdable list if marked holdable. See cubrid-cursor.md §“Holdability”.
  • OID materialisation callbacks. A SP body that reads individual objects by OID (rather than via SQL) issues METHOD_CALLBACK_OID_GET / _CMD / COLLECTION callbacks. Same channel, different opcodes. See cubrid-pl-server-bridge.md §“Path B” and cubrid-class-object.md §“Workspace OID materialisation”.
  • JVM crash recovery. A cub_pl crash mid-call is detected by the server_monitor daemon and the JVM is restarted (if auto_restart_server = on). The in-flight call returns ER_SP_EXECUTE_ERROR and the caller’s transaction can roll back. See cubrid-master-process.md §“server_monitor — the C++ supervisor” and cubrid-pl-javasp.md §“Startup FSM”.
  • JavaSP classloader hierarchy and security manager. Loading the Class<?> for my_sp’s implementation class walks ContextClassLoader → ServerClassLoader → bootstrap, with SpSecurityManager checking checkExit (blocks System.exit) and checkLink (blocks native lib loads from user contexts). See cubrid-pl-javasp.md §“JavaSP-specific: reflective dispatch and classloaders”.
  • Authentication and grant on SP invocation. The SP-invoke catalog row carries owner and is_definer_rights flags; call-site auth checking happens during semantic check before the executor enters DO_PROC. See cubrid-authentication.md for the auth model and cubrid-pl-javasp.md §“Catalog rows” for the SP-side rows.
  • Transaction control by the SP. A SP whose transaction_control = true may call commit() / rollback() from inside the body, which fires METHOD_CALLBACK_END_TRANSACTION. This commits or aborts the caller’s transaction, with all the visibility implications that entails. See cubrid-pl-server-bridge.md and cubrid-transaction.md.

This rpath synthesises the following detail docs (in the order they appear in the trip):

  • cubrid-broker.md — broker/CAS process model and SCM_RIGHTS handoff (steps 1, 10).
  • cubrid-dbi-cci.md — CAS-side db_* adaptation (step 1).
  • cubrid-network-protocol.md — CSS framing and NET_SERVER_* opcode taxonomy (steps 1, 10).
  • cubrid-server-session.md, cubrid-transaction.md — session lookup and TDES binding (step 2).
  • cubrid-parser.md, cubrid-semantic-check.md, cubrid-query-rewrite.md, cubrid-query-optimizer.md, cubrid-xasl-generator.md, cubrid-xasl-cache.md — compile pipeline (steps 3, 7).
  • cubrid-query-executor.md — DO_PROC dispatch (step 4).
  • cubrid-pl-javasp.md — process topology, wire protocol, C++ session/executor, JVM dispatch, classloader hierarchy (steps 4, 5, 6).
  • cubrid-pl-plcsql.md — PL/CSQL compile artifacts (step 5, PL/CSQL branch).
  • cubrid-pl-server-bridge.md — the PL↔server callback channel, opcode taxonomy, recursion guard, packed wire structures (steps 6, 7, 8).
  • cubrid-scan-manager.md — recursive scan dispatch under callback_execute (step 7).
  • cubrid-list-file.md — return-value tuple materialisation (step 9).
  • cubrid-mvcc.md — visibility check during recursive embedded-query execution (step 7).
  • cubrid-rpath-select.md — the recursive plain-SELECT pipeline that callback_execute re-enters (step 7).

Adjacent rpath docs (the rest of the family this completes):

  • cubrid-rpath-select.md — plain SELECT end-to-end.
  • cubrid-rpath-write.md — INSERT + COMMIT end-to-end.
  • cubrid-rpath-recovery.md — recovery walk end-to-end.