
CUBRID PL Server Bridge — The Mid-Execution Callback Channel That Both PL Runtimes Ride On

This is the third sibling in the PL family. The other two are cubrid-pl-javasp.md (JavaSP) and cubrid-pl-plcsql.md (PL/CSQL). Each of those covers one runtime end-to-end — boot, catalog, dispatch, language-specific execution. This document covers the shared bridge they both ride on when they call back into the database mid-execution.

The reason it’s a separate doc and not folded into either runtime: the bridge is older than cub_pl. The same src/method/ code services two distinct callback channels in the running system, and the modern PL runtimes layer on top without owning the mechanism.

| Layer | What lives there |
| --- | --- |
| Wire taxonomy (METHOD_REQUEST_*, METHOD_CALLBACK_*) | src/sp/sp_constants.hpp — shared by both paths |
| Packed value/query/OID structures (method_struct_*) | src/method/method_struct_*.{cpp,hpp} — shared by both paths |
| Path A: cub_server → CAS, legacy C-method scan | src/method/method_callback.{cpp,hpp}, src/method/query_method.cpp, src/method/method_scan.{cpp,hpp} |
| Path B: cub_pl → cub_server, modern PL invocation | src/sp/pl_executor.cpp (response_callback_command dispatcher); ferried inside SP_CODE_INTERNAL_JDBC envelopes (see cubrid-pl-javasp.md §“Wire protocol”) |
| Server-side method scan that drives Path A | src/method/method_scan.{cpp,hpp} (cubscan::method::scanner); group-of-methods abstraction in src/sp/method_invoke_group.{cpp,hpp} |
| Compile-time bridge for PL/CSQL embedded SQL | Path A METHOD_CALLBACK_GET_SQL_SEMANTICS / _GET_GLOBAL_SEMANTICS; PL/CSQL compile path issues these to validate embedded queries (see cubrid-pl-plcsql.md §“Asking the C side for global semantics”) |

cubrid-pl-javasp.md already names the per-call sequence at a high level (SP_CODE_INVOKE → METHOD_CALLBACK_QUERY_PREPARE → METHOD_CALLBACK_QUERY_EXECUTE → SP_CODE_RESULT); this document explains the bridge mechanism itself — what every opcode does, how the dispatch tables are wired, what the CAS-side query handler does, and where Path A and Path B converge.

A mid-execution callback channel is the IPC mechanism a database engine uses to let server-side code (a stored procedure body, a user-defined method, a custom aggregate) call back into the engine itself for SQL queries, OID materialisation, schema introspection, and metadata. The concept appears whenever a database hosts user code in a runtime that is not the engine’s own thread of execution:

  1. In-process language runtimes. When the user code runs inside the server process (PostgreSQL functions in PL/pgSQL, Oracle PL/SQL, SQL Server T-SQL), the “callback” is a direct C-level call. There’s no IPC; the server just exposes its query executor as a function. The cost is failure isolation: a bug in user code can corrupt server state.

  2. External language runtimes. When the user code runs in a separate process — DB2 fenced-mode SP, Oracle EJB, CUBRID JavaSP + PL/CSQL — the engine forks (or pre-spawns) a sidecar process and ships invocations over an IPC channel. The cost moves the other way: failure isolation is restored, but every callback for SQL or metadata is now a synchronous round-trip on the same channel.

  3. Method scans. A pattern from the OODB era: the SQL query language is extended with the ability to invoke methods on class instances, and the executor needs to dispatch each row’s method call to a runtime that lives somewhere else (usually a forked client process holding the OOSQL session). This is the pre-stored-procedure ancestor of the SP callback channel and is typically retained in modern engines for backward compatibility even after the SP machinery takes over the new workload.

The two most consequential design decisions for any callback channel are (a) the opcode taxonomy — what mid-execution operations the caller is allowed to request — and (b) the recursion model — whether a callback is allowed to invoke another procedure that issues its own callbacks, and how deep that nest can go.

CUBRID’s bridge solves both decisions once and applies the same solution to two physically distinct paths (Path A: server→CAS, Path B: cub_pl→server). The taxonomy lives in sp_constants.hpp; the recursion limit is METHOD_MAX_RECURSION_DEPTH = 15 for both paths.

| Engine | Bridge mechanism | Recursion guard | Callback opcodes |
| --- | --- | --- | --- |
| PostgreSQL PL/pgSQL | In-process via SPI (Server Programming Interface). Function calls become C-level SPI_execute()/SPI_cursor_* calls into the same backend. | Stack depth checked against max_stack_depth GUC (default 2 MB). | SPI_execute, SPI_prepare, SPI_cursor_open, SPI_getvalue, etc. — not opcodes, direct functions. |
| Oracle PL/SQL | In-process; PL/SQL VM executes opcodes that include SQL and FETCH instructions calling back into the SQL kernel. | _PLSQL_OPTIMIZE_LEVEL and stack overflow detection. | Internal opcodes; not exposed. |
| DB2 fenced-mode SP | External: db2fmp (fenced-mode process) hosts the SP; back-channel uses a Unix socket and a small message protocol with prepare, open, fetch, close, execute, param opcodes. | MAX_NESTED_CALLS registry (default 16). | Similar to CUBRID’s METHOD_CALLBACK_* set. |
| Oracle EJB / Java | External: KPRB driver (Kernel-Programmatic-Resident-in-Backend); JDBC requests issued from inside the JVM are short-circuited to the server’s SQL engine. | Java thread stack limits + a recursion counter. | Standard JDBC, intercepted at driver level. |
| CUBRID PL family (this doc) | External: cub_pl JVM, IPC over UDS or TCP. Path A (cub_server → CAS) is older C-method-scan plumbing reused for the same opcode set. | METHOD_MAX_RECURSION_DEPTH = 15 applied via tran_get_libcas_depth(). | METHOD_CALLBACK_* (~18 opcodes); METHOD_REQUEST_* envelope on Path A; SP_CODE_INTERNAL_JDBC envelope on Path B. |

The CUBRID design clusters with DB2 fenced-mode and Oracle EJB on the external-process axis. The distinguishing trait is that two generations of callback path coexist in the codebase: the older server→CAS channel (Path A), originally written for C user methods on class instances, and the newer cub_pl→server channel (Path B), built when JavaSP and PL/CSQL moved out of the server process. Both paths share the same opcode set and packed wire structures because Path B was designed to reuse the existing handlers wherever possible.

```mermaid
flowchart LR
  subgraph PathA["Path A — server → CAS (legacy C-method scan)"]
    direction LR
    SRV1["cub_server<br/>(query executor)"]
    SCAN["cubscan::method::scanner<br/>(SCAN_TYPE_METHOD)"]
    INV["cubmethod::method_invoke_group<br/>(per-call group)"]
    CAS["cub_cas<br/>(CAS process)"]
    DISP1["cubmethod::callback_handler<br/>::callback_dispatch"]
    SRV1 --> SCAN --> INV
    INV -- "METHOD_REQUEST_INVOKE<br/>METHOD_REQUEST_CALLBACK<br/>METHOD_REQUEST_ARG_PREPARE<br/>METHOD_REQUEST_END" --> CAS
    CAS --> DISP1
    DISP1 -- "METHOD_CALLBACK_QUERY_PREPARE<br/>METHOD_CALLBACK_OID_GET<br/>METHOD_CALLBACK_GET_SQL_SEMANTICS<br/>... 18 opcodes" --> SRV1
  end

  subgraph PathB["Path B — cub_pl → server (modern PL bridge)"]
    direction LR
    SRV2["cub_server<br/>(query executor)"]
    PLEXEC["cubpl::executor<br/>(per-invocation)"]
    PL["cub_pl<br/>(JVM)"]
    DISP2["cubpl::executor<br/>::response_callback_command"]
    SRV2 --> PLEXEC
    PLEXEC -- "SP_CODE_INVOKE<br/>(invoke_java payload)" --> PL
    PL -- "SP_CODE_INTERNAL_JDBC<br/>(carries METHOD_CALLBACK_*)" --> PLEXEC
    PLEXEC --> DISP2
    DISP2 -- "callback_prepare / _execute<br/>_fetch / _oid_get / _collection<br/>... 12 handlers" --> SRV2
  end
```

The two paths are not connected to each other. They share opcode constants, packed structures, and the conceptual loop pattern, but they run between different process pairs:

  • Path A lives between cub_server (engine) and cub_cas (the CAS worker process — see cubrid-broker.md). It is invoked when the query executor encounters SCAN_TYPE_METHOD — a method-call expression that needs to dispatch to a C builtin or a class method whose body is held by the CAS-side session. Driven by cubscan::method::scanner (src/method/method_scan.cpp) wrapping cubmethod::method_invoke_group (src/sp/method_invoke_group.hpp).
  • Path B lives between cub_pl (the JVM hosting JavaSP and PL/CSQL — see cubrid-pl-javasp.md §“Process topology”) and cub_server. It is invoked when a stored procedure body issues a SQL query, OID fetch, or schema lookup. Driven by cubpl::executor (src/sp/pl_executor.cpp) on the server side, with the JVM-side driver classes (CUBRIDServerSideDriver, …PreparedStatement) handling the JVM end of the wire.

Despite the physical asymmetry, the dispatch pattern is identical: one side sends a request envelope (METHOD_REQUEST_* on Path A, SP_CODE_INTERNAL_JDBC on Path B); the receiver decodes a METHOD_CALLBACK_* opcode from the envelope and dispatches to a handler function; the handler executes server- or CAS-side machinery and queues a response back over the same channel.

```cpp
// sp_constants.hpp — request envelopes (Path A only)
enum METHOD_REQUEST {
  METHOD_REQUEST_ARG_PREPARE = 0x40,
  METHOD_REQUEST_INVOKE = 0x01,
  METHOD_REQUEST_ERROR = 0x04,
  METHOD_REQUEST_CALLBACK = 0x08,
  METHOD_REQUEST_END = 0x20,
  METHOD_REQUEST_COMPILE = 0x80,
  METHOD_REQUEST_SQL_SEMANTICS = 0xA0,
  METHOD_REQUEST_GLOBAL_SEMANTICS = 0xA1
};

// sp_constants.hpp — callback opcodes (shared, both paths)
enum METHOD_CALLBACK_RESPONSE {
  METHOD_CALLBACK_END_TRANSACTION = 1,
  METHOD_CALLBACK_QUERY_PREPARE = 2,
  METHOD_CALLBACK_QUERY_EXECUTE = 3,
  METHOD_CALLBACK_GET_DB_PARAMETER = 4,
  METHOD_CALLBACK_CURSOR = 7,
  METHOD_CALLBACK_FETCH = 8,
  METHOD_CALLBACK_GET_SCHEMA_INFO = 9,
  METHOD_CALLBACK_OID_GET = 10,
  METHOD_CALLBACK_OID_PUT = 11,
  METHOD_CALLBACK_OID_CMD = 17,
  METHOD_CALLBACK_COLLECTION = 18,
  METHOD_CALLBACK_NEXT_RESULT = 19,
  METHOD_CALLBACK_EXECUTE_BATCH = 20,
  METHOD_CALLBACK_EXECUTE_ARRAY = 21,
  METHOD_CALLBACK_CURSOR_UPDATE = 22,
  METHOD_CALLBACK_MAKE_OUT_RS = 33,
  METHOD_CALLBACK_GET_GENERATED_KEYS = 34,
  METHOD_CALLBACK_LOB_NEW = 35,
  METHOD_CALLBACK_LOB_WRITE = 36,
  METHOD_CALLBACK_LOB_READ = 37,
  METHOD_CALLBACK_CURSOR_CLOSE = 42,
  METHOD_CALLBACK_SET_PL_SESSION_PARAM = 50,
  // COMPILE
  METHOD_CALLBACK_GET_SQL_SEMANTICS = 100,
  METHOD_CALLBACK_GET_GLOBAL_SEMANTICS = 101,
  // AUTH
  METHOD_CALLBACK_CHANGE_RIGHTS = 200,
  // CLASS ACCESS
  METHOD_CALLBACK_GET_CODE_ATTR = 201
};
```

The two enumerations are at different layers:

  • METHOD_REQUEST_* is the outer envelope the server uses on Path A to tell CAS what kind of step it’s asking for: prepare arguments, invoke a builtin, deliver a callback request body, signal end-of-call, or compile/semantics.
  • METHOD_CALLBACK_* is the inner opcode describing the actual service requested — query, OID, schema, etc. Both paths use this set. On Path A, METHOD_REQUEST_CALLBACK carries one of these as its first packed int; on Path B, SP_CODE_INTERNAL_JDBC plays the same envelope role.

The numeric ranges are deliberate: low IDs (1–22) are query/cursor operations inherited from JDBC semantics, 33–34 are out-result and generated-keys, 35–37 are the LOB operations, 42 is cursor-close, 50 is the session-param mutator, the 100s are PL/CSQL-only compile-time helpers, and the 200s are auth and class-access. Empty slots (5, 6, 12–16, 23–32) reflect opcodes that existed in earlier protocol revisions and were retired without renumbering, keeping the constants stable across releases.

Path A — server → CAS (the legacy method-scan channel)

The CAS side has a single entry function, method_dispatch (src/method/query_method.cpp), which receives a packed request, peels off a cubmethod::header containing (uint64_t id, int command), and dispatches on command to one of four handlers:

```cpp
// method_dispatch_internal — query_method.cpp
switch (header.command) {
  case METHOD_REQUEST_ARG_PREPARE:   // stash DB_VALUE arguments under the group_id
    error = method_prepare_arguments (unpacker);
    break;
  case METHOD_REQUEST_INVOKE:        // invoke a C builtin with the stashed args
    AU_SAVE_AND_ENABLE (save_auth);
    error = method_invoke_builtin (unpacker, value);
    AU_RESTORE (save_auth);
    break;
  case METHOD_REQUEST_CALLBACK:      // service a callback into the CAS session
    AU_SAVE_AND_ENABLE (save_auth);
    error = cubmethod::get_callback_handler()->callback_dispatch (unpacker);
    AU_RESTORE (save_auth);
    break;
  case METHOD_REQUEST_END: {         // free the named query handlers
    std::vector<int> handlers;
    unpacker.unpack_all (handlers);
    for (size_t i = 0; i < handlers.size (); i++) {
      cubmethod::get_callback_handler()->free_query_handle (handlers[i], false);
    }
    break;
  }
}
```

The four envelopes form one method-scan call’s lifecycle:

  1. ARG_PREPARE. Server packs the per-row argument vector keyed by the method group’s id. CAS stores runtime_args[id] = args (an std::unordered_map<UINT64, std::vector<DB_VALUE>>).
  2. INVOKE. Server requests the actual call with a pl_signature describing the method. CAS looks up runtime_args[group_id], calls obj_send_array for instance methods, and queues the result with xs_send_queue (METHOD_SUCCESS, result).
  3. CALLBACK (zero or more times). The C method body executed any db_query_* SQL or OID call; that call goes back to the server, the server packages its request, and returns it as a METHOD_REQUEST_CALLBACK. CAS dispatches by inner METHOD_CALLBACK_* opcode via callback_handler::callback_dispatch.
  4. END. Server tells CAS the method group is done and lists the query handler IDs to free. CAS calls free_query_handle() on each.

The CAS-side handler holds per-session state across calls:

```cpp
// callback_handler — method_callback.hpp
class EXPORT_IMPORT callback_handler {
  // ...
  std::multimap <std::string, int> m_sql_handler_map;   // SQL -> handler id (statement cache)
  std::unordered_map <uint64_t, int> m_qid_handler_map; // query_id -> handler (out resultset)
  std::vector<query_handler *> m_query_handlers;        // bounded slot table
  oid_handler *m_oid_handler;                           // OID materialisation cache
  std::queue <cubmem::extensible_block> m_data_queue;   // packed responses pending xs_queue_send
  std::list <cubmethod::query_handler *> m_deferred_query_free_handler;
  error_context m_error_ctx;
};
```

m_query_handlers is a fixed-size array (sized by the max_query_handler constructor argument); each slot owns a query_handler that wraps a CUBRID-side DB_SESSION plus its DB_QUERY_RESULT, mirroring a JDBC PreparedStatement lifetime. m_sql_handler_map is the prepared-statement cache — when a METHOD_CALLBACK_QUERY_PREPARE arrives for an SQL string already prepared in this session by the same user (and not currently occupied), the existing handler is reused instead of allocating a new slot:

```cpp
// callback_handler::prepare — method_callback.cpp
query_handler *handler = get_query_handler_by_sql (sql, [&] (query_handler *h) {
  return h->get_is_occupied () == false
         && (h->get_tran_id () == NULL_TRANID || h->get_tran_id () == tid)
         && h->get_user_name ().compare (au_get_current_user_name ()) == 0;
});
if (handler == nullptr) {
  // not in cache: allocate a new slot and prepare
  handler = new_query_handler ();
  if (handler != nullptr) {
    int error = handler->prepare (sql, flag);
    // ...
  }
}
```

The eligibility predicate enforces three invariants: the cached handler must be free, must belong to the current transaction (or have no transaction binding), and must be owned by the current user.

The dispatch table inside callback_dispatch itself is a flat switch on the inner opcode:

```cpp
// callback_dispatch — method_callback.cpp
switch (code) {
  case METHOD_CALLBACK_END_TRANSACTION:      error = end_transaction (unpacker); break;
  case METHOD_CALLBACK_QUERY_PREPARE:        error = prepare (unpacker); break;
  case METHOD_CALLBACK_QUERY_EXECUTE:        error = execute (unpacker); break;
  case METHOD_CALLBACK_OID_GET:              error = oid_get (unpacker); break;
  case METHOD_CALLBACK_OID_PUT:              error = oid_put (unpacker); break;
  case METHOD_CALLBACK_OID_CMD:              error = oid_cmd (unpacker); break;
  case METHOD_CALLBACK_COLLECTION:           error = collection_cmd (unpacker); break;
  case METHOD_CALLBACK_MAKE_OUT_RS:          error = make_out_resultset (unpacker); break;
  case METHOD_CALLBACK_GET_GENERATED_KEYS:   error = generated_keys (unpacker); break;
  case METHOD_CALLBACK_GET_SCHEMA_INFO:      assert (false); break; // disabled
  case METHOD_CALLBACK_GET_SQL_SEMANTICS:    error = get_sql_semantics (unpacker); break;
  case METHOD_CALLBACK_GET_GLOBAL_SEMANTICS: error = get_global_semantics (unpacker); break;
  case METHOD_CALLBACK_CHANGE_RIGHTS:        error = change_rights (unpacker); break;
  default: assert (false); error = ER_FAILED;
}
#if defined (CS_MODE)
xs_queue_send (); // flush queued responses to server
#endif
```

The trailing xs_queue_send() (in CS_MODE) flushes the response queue back to the server in one transport_xs_* packet rather than per-handler — handlers xs_pack_and_queue their replies and the dispatcher flushes once at the end of the request.

METHOD_CALLBACK_GET_SCHEMA_INFO is hard-disabled with assert(false) on the CAS side; schema info still lives at method_schema_info.{cpp,hpp} (used directly by the PL/CSQL compile helpers get_sql_semantics / get_global_semantics), but as a free-standing service it’s been retired from the callback channel.

Path B — cub_pl → server (the modern PL bridge)

Path B is what cubrid-pl-javasp.md §“Server-side JDBC back-channel” introduces at the high level. The dispatch site itself lives in src/sp/pl_executor.cpp:

```cpp
// executor::response_callback_command — pl_executor.cpp
int code;
unpacker.unpack_int (code);
switch (code) {
  case METHOD_CALLBACK_GET_DB_PARAMETER:
    error_code = callback_get_db_parameter (thread_ref, unpacker); break;
  case METHOD_CALLBACK_QUERY_PREPARE:
    error_code = callback_prepare (thread_ref, unpacker); break;
  case METHOD_CALLBACK_QUERY_EXECUTE:
    error_code = callback_execute (thread_ref, unpacker); break;
  case METHOD_CALLBACK_FETCH:
    error_code = callback_fetch (thread_ref, unpacker); break;
  case METHOD_CALLBACK_OID_GET:
    error_code = callback_oid_get (thread_ref, unpacker); break;
  case METHOD_CALLBACK_OID_PUT:
    error_code = callback_oid_put (thread_ref, unpacker); break;
  case METHOD_CALLBACK_OID_CMD:
    error_code = callback_oid_cmd (thread_ref, unpacker); break;
  case METHOD_CALLBACK_COLLECTION:
    error_code = callback_collection_cmd (thread_ref, unpacker); break;
  case METHOD_CALLBACK_MAKE_OUT_RS:
    error_code = callback_make_outresult (thread_ref, unpacker); break;
  case METHOD_CALLBACK_GET_GENERATED_KEYS:
    error_code = callback_get_generated_keys (thread_ref, unpacker); break;
  case METHOD_CALLBACK_END_TRANSACTION:
    error_code = callback_end_transaction (thread_ref, unpacker); break;
  case METHOD_CALLBACK_GET_CODE_ATTR:
    error_code = callback_get_code_attr (thread_ref, unpacker); break;
  case METHOD_CALLBACK_SET_PL_SESSION_PARAM:
    error_code = callback_set_pl_session_param (thread_ref, unpacker); break;
  default: assert (false); error_code = ER_FAILED;
}
```

The opcode set largely overlaps Path A’s table, but it is not a mirror image. The CAS-side compile helpers (GET_SQL_SEMANTICS, GET_GLOBAL_SEMANTICS, CHANGE_RIGHTS) are absent because cub_pl never issues them: the PL/CSQL compiler runs server-side via Path A, and auth changes flow through DDL rather than mid-execution callbacks. Conversely, Path B dispatches opcodes Path A lacks: GET_DB_PARAMETER (returns isolation level, lock-wait timeout, and client IDs), GET_CODE_ATTR (returns the catalog row for a stored procedure body, used by the JVM to load code when a procedure references another by name), and SET_PL_SESSION_PARAM (mutates per-session JVM-side flags like DBMS_OUTPUT.ENABLE). FETCH is also a separate opcode here because Path B delivers cursor results in batches rather than reusing the cached prepared-statement model.

The handlers themselves call into the same server-side machinery any external client query would use (db_compile_statement, db_execute_statement, xqmgr_*, locator_get_class, etc.), then queue a packed response with m_stack->send_data_to_java(blk). Each handler frames its reply as either a pack_data_block(METHOD_RESPONSE_SUCCESS, ...) or a pack_data_block(METHOD_RESPONSE_ERROR, err, msg) payload, mirroring Path A’s xs_pack_and_queue pattern.

callback_get_db_parameter is the simplest example and shows the shape:

```cpp
// executor::callback_get_db_parameter — pl_executor.cpp
db_parameter_info *parameter_info = pl_session->get_db_parameter_info ();
if (parameter_info == nullptr) {
  int tran_index = LOG_FIND_THREAD_TRAN_INDEX (m_stack->get_thread_entry ());
  parameter_info = new db_parameter_info ();
  parameter_info->tran_isolation = logtb_find_isolation (tran_index);
  parameter_info->wait_msec = logtb_find_wait_msecs (tran_index);
  logtb_get_client_ids (tran_index, &parameter_info->client_ids);
  pl_session->set_db_parameter_info (parameter_info);
}
cubmem::block blk = std::move (pack_data_block (METHOD_RESPONSE_SUCCESS, *parameter_info));
if (blk.is_valid ()) {
  m_stack->send_data_to_java (blk);
  blk.freemem ();
}
```

The result is memoised on the pl_session so subsequent GET_DB_PARAMETER callbacks within the same SP execution don’t repeat the lookup. Most other callback handlers do not memoise — they delegate straight to the server’s query/OID machinery and pack the response per call.

Compile-time bridge for PL/CSQL embedded SQL

PL/CSQL is parsed and compiled inside the JVM (PlcParser.g4 → PlcsqlSemantics → JavaCodeWriter → in-process javac; see cubrid-pl-plcsql.md §“Compilation pipeline at CREATE PROCEDURE time”). The compiler needs to validate every embedded SQL statement against the live catalog — column names, data types, function overloads — but doesn’t have its own SQL parser or schema cache.

The bridge solves this with two compile-time-only callback opcodes that flow through Path A, independent of any running query:

  • METHOD_CALLBACK_GET_SQL_SEMANTICS (100). Sent for one embedded statement. The CAS-side handler get_sql_semantics in method_callback.cpp parses the SQL, runs semantic checking, and packs back a structured description (column types, table references, parameter placeholders) without executing.
  • METHOD_CALLBACK_GET_GLOBAL_SEMANTICS (101). Sent for a global symbol lookup (a function name, a procedure name, a type). The CAS-side handler get_global_semantics resolves it against the catalog and packs back the resolved signature.

The PL/CSQL compiler issues these as a sequence during semantic analysis. Each round-trip is a full METHOD_REQUEST_CALLBACK envelope with the inner opcode set to one of the above. cubrid-pl-plcsql.md §“Asking the C side for global semantics” describes the JVM-side caller; this document covers the CAS-side responder.

The fact that compile-time semantic checks ride the same callback channel as runtime query execution is by design: it lets PL/CSQL share exactly one piece of CAS infrastructure (the prepared-statement cache + catalog access path) for both compile and execute, instead of maintaining a parallel compile-time RPC.

Path A’s trigger on the server side is cubscan::method::scanner (src/method/method_scan.cpp), the access method registered for SCAN_TYPE_METHOD in the scan-manager dispatch table (see cubrid-scan-manager.md for the broader access-method catalogue).

The scanner’s job per row is: pull the next set of method-call arguments out of an upstream list-file, hand them to a cubmethod::method_invoke_group that wraps the call signature, and collect the per-method return values into a qproc_db_value_list that the executor can plumb upward as a row.

```cpp
// scanner::next_scan — method_scan.cpp
SCAN_CODE scan_code = S_SUCCESS;
next_value_array (vl);            // prepare slot list for results
scan_code = get_single_tuple ();  // pull next row from upstream list-file
std::vector<std::reference_wrapper<DB_VALUE>>
    arg_wrapper (m_arg_vector, m_arg_vector + m_arg_count);
if (scan_code == S_SUCCESS
    && (error = m_method_group->execute (arg_wrapper)) != NO_ERROR) {
  scan_code = S_ERROR;
}
if (scan_code == S_SUCCESS) {
  int num_methods = m_method_group->get_num_methods ();
  for (int i = 0; i < num_methods; i++) {
    DB_VALUE *dbval_p = (DB_VALUE *) db_private_alloc (m_thread_p, sizeof (DB_VALUE));
    db_make_null (dbval_p);
    DB_VALUE &result = m_method_group->get_return_value (i);
    db_value_clone (&result, dbval_p);
    m_dbval_list[i].val = dbval_p;
    db_value_clear (&result);
  }
  m_method_group->reset (false);
}
```

m_method_group->execute(args) is the call site that issues the Path A METHOD_REQUEST_INVOKE envelope to CAS. The same method_invoke_group is also used by the obsolete server-side constant-folder (xmethod_invoke_fold_constants, #if 0-d out at the bottom of query_method.cpp); the active call site is the scanner.

Notice the per-row cost: each next_scan() call produces one synchronous round-trip through the CAS protocol per method in the group. Method scans are therefore documented to be expensive (see the “Gotchas” section of src/method/AGENTS.md) and the executor avoids introducing them on hot paths.

Recursion guard and tran_begin/end_libcas_function

Both paths gate against unbounded callback nesting with a hard recursion limit:

```cpp
// query_method.cpp — Path A entry (CAS side)
tran_begin_libcas_function ();
int depth = tran_get_libcas_depth ();
if (depth > METHOD_MAX_RECURSION_DEPTH) { // 15 from sp_constants.hpp
  er_set (ER_ERROR_SEVERITY, ARG_FILE_LINE,
          ER_SP_TOO_MANY_NESTED_CALL, 0);
  error = ER_SP_TOO_MANY_NESTED_CALL;
}
// ... handle dispatch ...
tran_end_libcas_function ();
```

The tran_begin_libcas_function / tran_end_libcas_function pair (declared in transaction_cl.h) increments and decrements a libcas-depth counter scoped to the transaction. The counter captures “how deep am I inside a chain of callbacks within this transaction”; when it exceeds 15, the call is refused with ER_SP_TOO_MANY_NESTED_CALL.

The pairing matters because a callback can issue an SQL statement that, in turn, invokes another method scan — that nested call must see depth == current + 1, not start from zero. Bracketing every dispatch with begin/end keeps the counter accurate even when the stack is split across processes.

The same ER_SP_TOO_MANY_NESTED_CALL is raised on Path B from cubpl::executor::request_invoke_command if the SP nest is already at the limit. The two paths share the constant rather than the counter — Path A uses the libcas depth, Path B uses the PL session’s stack-map size.

Path A enables authorisation with AU_SAVE_AND_ENABLE for the duration of INVOKE and CALLBACK envelopes and restores the previous state with AU_RESTORE afterwards (see the method_dispatch_internal switch). This is how methods always run with authorisation checks on (so they can only see what their calling user could), regardless of whether the caller’s surrounding context had auth temporarily disabled (e.g., during a system-internal query).

callback_handler::change_rights (METHOD_CALLBACK_CHANGE_RIGHTS, opcode 200) is the explicit opcode for switching method invocation rights between owner (METHOD_AUTH_OWNER) and invoker (METHOD_AUTH_INVOKER) modes — the same distinction Oracle’s AUTHID DEFINER / AUTHID CURRENT_USER makes. CUBRID’s PL/CSQL syntax for this maps to change_rights callbacks issued at compile time.

Both paths share a family of cubpacking::packable_object subclasses in src/method/method_struct_*.{cpp,hpp} that serialise the domain-specific payloads:

| Struct | Source | Contents |
| --- | --- | --- |
| cubmethod::header | method_struct_invoke.hpp | (uint64_t id, int command) — outer envelope on Path A |
| cubmethod::prepare_args | method_struct_invoke.hpp | group_id, tran_id, METHOD_TYPE, argument vector — payload of METHOD_REQUEST_ARG_PREPARE |
| cubmethod::query_handler_info (etc.) | method_struct_query.{cpp,hpp} | Prepare/execute request and response payloads |
| cubmethod::oid_get_info (etc.) | method_struct_oid_info.{cpp,hpp} | OID get/put/cmd payloads |
| cubmethod::schema_info_* | method_struct_schema_info.{cpp,hpp} | Column and table descriptor structures |
| cubmethod::dbvalue_packing | method_struct_value.{cpp,hpp} | DB_VALUE serialisation tailored for the bridge (handles VOBJ → object, OID → object fixups) |

Each struct implements pack, unpack, and get_packed_size against the standard cubpacking::packer/unpacker. The fact that both paths use this same family of structs is what makes the cross-path opcode sharing tractable — the wire format is portable across IPC channels.

When an invocation chain terminates (the original method scan finishes all rows; the SP invocation returns), the METHOD_REQUEST_END envelope on Path A or the equivalent end-of-call on Path B causes the CAS-side callback_handler (Path A) or the server-side cubpl::executor (Path B) to free per-call state:

  • All query handlers used during the call are returned to the slot table (free_query_handle); the SQL→handler cache entry stays so a subsequent call that issues the same SQL can reuse the prepared statement.
  • The OID handler clears its per-call materialisation cache.
  • m_data_queue is drained.
  • On Path A, free_deferred_query_handler runs queued frees that couldn’t happen mid-call (e.g., results that were still being held by a result-set descriptor).

free_query_handle_all(true) is called at the very end of the CAS session (not just per-SP-call) and forcibly releases everything, including the cached SQL handler map — bypassing the reuse logic that normally protects cached entries.

CAS side (src/method/) — Path A handlers

| Symbol | File | Role |
| --- | --- | --- |
| method_dispatch | query_method.cpp | CAS entry from server; brackets with tran_begin/end_libcas_function, checks recursion depth, calls method_dispatch_internal |
| method_dispatch_internal | query_method.cpp | Switch on METHOD_REQUEST_*; dispatches to argument prep, builtin invoke, callback dispatch, or end |
| method_invoke_builtin | query_method.cpp | Reads runtime_args[group_id], invokes C builtin via obj_send_array, queues result with xs_send_queue (METHOD_SUCCESS, result) |
| method_prepare_arguments | query_method.cpp | Stores per-row DB_VALUE arguments under the group ID for the next INVOKE |
| method_set_runtime_arguments / method_erase_runtime_arguments | query_method.cpp | Args map mutators with VOBJ → object fixups (method_fixup_vobjs) |
| method_fixup_vobjs / method_fixup_set_vobjs / method_has_set_vobjs | query_method.cpp | Convert OID/VOBJ values into materialised objects before passing to user method body |
| method_error | query_method.cpp | Sends METHOD_ERROR back to the server when CAS detects an error before the dispatch starts |
| cubmethod::callback_handler | method_callback.{cpp,hpp} | Per-CAS-session state: query handlers, OID handler, SQL→handler cache, data queue, error context |
| callback_handler::callback_dispatch | method_callback.cpp | Switch on METHOD_CALLBACK_*; flushes queued responses with xs_queue_send at the end |
| callback_handler::prepare / execute / end_transaction / make_out_resultset / generated_keys | method_callback.cpp | Query-related handlers |
| callback_handler::oid_get / oid_put / oid_cmd / collection_cmd | method_callback.cpp | OID handlers |
| callback_handler::get_sql_semantics / get_global_semantics | method_callback.cpp | Compile-time semantic-check handlers used by PL/CSQL compiler |
| callback_handler::change_rights | method_callback.cpp | Auth opcode for owner/invoker mode |
| callback_handler::new_query_handler / free_query_handle / free_query_handle_all / get_query_handler_by_* | method_callback.cpp | Slot table and SQL cache management |
| cubmethod::oid_handler | method_oid_handler.{cpp,hpp} | OID materialisation cache used by the OID handlers |
| cubmethod::query_handler | method_query_handler.{cpp,hpp} | Wraps DB_SESSION + DB_QUERY_RESULT for one prepared-statement slot |
| cubmethod::header, prepare_args | method_struct_invoke.{cpp,hpp} | Outer envelope structures |
| cubmethod::dbvalue_packing | method_struct_value.{cpp,hpp} | DB_VALUE pack/unpack with VOBJ fixups |
| cubmethod::schema_info_* | method_struct_schema_info.{cpp,hpp} and method_schema_info.{cpp,hpp} | Column and table descriptor types and helpers |

Server side (src/sp/pl_executor.cpp) — Path B handlers

| Symbol | Role |
| --- | --- |
| cubpl::executor::request_invoke_command | Packs an invoke_java payload, sends SP_CODE_INVOKE over the connection claimed from the pool |
| cubpl::executor::response_invoke_command | Loop reading responses; routes results, errors, and SP_CODE_INTERNAL_JDBC envelopes |
| cubpl::executor::response_callback_command | Switch on METHOD_CALLBACK_*; dispatches to the per-opcode handler |
| callback_get_db_parameter | Returns transaction isolation, lock-wait, client IDs; memoised on pl_session |
| callback_prepare | Server-side query prepare; returns prepared-statement handle ID |
| callback_execute | Executes the prepared statement and returns a query ID |
| callback_fetch | Cursor batch fetch by query ID |
| callback_oid_get / callback_oid_put / callback_oid_cmd | OID materialisation, mutation, and class/instance commands |
| callback_collection_cmd | Set/multiset/sequence operations |
| callback_make_outresult | Promotes a query result into an out parameter for the SP return |
| callback_get_generated_keys | Returns auto-generated keys for the last INSERT |
| callback_end_transaction | Server-side commit/abort triggered by JVM-side JDBC |
| callback_get_code_attr | Returns a stored procedure’s catalog row attributes — used by the JVM to load referenced SP code |
| callback_set_pl_session_param | Mutates per-session JVM-side parameters (e.g., DBMS_OUTPUT.ENABLE) |
| Symbol | File | Role |
|---|---|---|
| `cubscan::method::scanner` | `src/method/method_scan.{cpp,hpp}` | `SCAN_TYPE_METHOD` access method; per-row method invocation |
| `scanner::open / close / next_scan / init / clear` | `src/method/method_scan.cpp` | Standard `SCAN_ID` lifecycle plus method-group binding |
| `scanner::get_single_tuple / next_value_array` | `src/method/method_scan.cpp` | Pull arguments from the upstream list-file, prepare result slots |
| `cubmethod::method_invoke_group` | `src/sp/method_invoke_group.{cpp,hpp}` | Wraps a `pl_signature_array`; per-call object that issues `METHOD_REQUEST_INVOKE` |
| `method_invoke_group::execute / prepare / begin / end / reset` | `src/sp/method_invoke_group.cpp` | The actual call site that ships method requests to CAS |
| Symbol | Path |
|---|---|
| `method_dispatch` | `src/method/query_method.cpp:113` |
| `method_dispatch_internal` | `src/method/query_method.cpp:201` |
| `method_invoke_builtin` | `src/method/query_method.cpp:253` |
| `method_invoke_builtin_internal` | `src/method/query_method.cpp:341` |
| `method_prepare_arguments` | `src/method/query_method.cpp:286` |
| `method_fixup_vobjs` | `src/method/query_method.cpp:541` |
| `cubmethod::callback_handler` (class) | `src/method/method_callback.hpp:58` |
| `callback_handler::callback_dispatch` | `src/method/method_callback.cpp:68` |
| `callback_handler::end_transaction` | `src/method/method_callback.cpp:138` |
| `callback_handler::prepare` | `src/method/method_callback.cpp:174` |
| `cubmethod::header` (struct) | `src/method/method_struct_invoke.hpp:45` |
| `cubmethod::prepare_args` (struct) | `src/method/method_struct_invoke.hpp:62` |
| `METHOD_REQUEST` (enum) | `src/sp/sp_constants.hpp:184` |
| `METHOD_CALLBACK_RESPONSE` (enum) | `src/sp/sp_constants.hpp:203` |
| `METHOD_MAX_RECURSION_DEPTH` (`#define` 15) | `src/sp/sp_constants.hpp:160` |
| `METHOD_TYPE` (enum) | `src/sp/sp_constants.hpp:169` |
| `cubpl::executor::response_callback_command` | `src/sp/pl_executor.cpp:511` |
| `executor::callback_get_db_parameter` | `src/sp/pl_executor.cpp:606` |
| `executor::callback_prepare` | `src/sp/pl_executor.cpp:651` |
| `cubscan::method::scanner::next_scan` | `src/method/method_scan.cpp:173` |
| `cubmethod::method_invoke_group` (class) | `src/sp/method_invoke_group.hpp:66` |

Symbol names are the canonical anchor; line numbers are hints that are only valid as of this document's `updated:` date.

  • Two callback paths, one taxonomy. The opcode table is shared (sp_constants.hpp), but the dispatch implementations are separate (method_callback.cpp for Path A, pl_executor.cpp for Path B). Treating “the callback channel” as a single thing in conversation is a trap: a change to METHOD_CALLBACK_QUERY_PREPARE semantics must be made in both dispatchers, and the wire-compat assumptions with the JVM-side driver must be re-checked.
  • METHOD_CALLBACK_GET_SCHEMA_INFO is hard-disabled on the CAS side (assert (false) at the dispatch site) but the supporting code (method_schema_info.{cpp,hpp}, method_struct_schema_info.*) is still compiled because the same types are used by the compile-time helpers. This is intentional but easy to misread as dead code.
  • xmethod_invoke_fold_constants at the bottom of query_method.cpp is `#if 0`’d out and not currently invoked. It was a server-side constant folder for method calls; the active call site is cubscan::method::scanner::next_scan.
  • Recursion guard differs by path. Path A uses tran_get_libcas_depth() against METHOD_MAX_RECURSION_DEPTH = 15; Path B uses the PL session’s m_stack_map size against the same constant. Both raise ER_SP_TOO_MANY_NESTED_CALL. The constants are shared but the counters are independent — a single user transaction can’t exceed 15 callbacks on either path, but it could theoretically reach 30 callbacks total if it alternated paths, which is not currently observed in any test.
  • Auth toggling is asymmetric. Path A wraps INVOKE and CALLBACK dispatches with AU_SAVE_AND_ENABLE / AU_RESTORE unconditionally. Path B’s callback_* handlers don’t toggle themselves; they inherit the server worker thread’s auth state, which is set up earlier in the SP invocation by pl_executor.
  • Statement-cache reuse predicate is strict. A query_handler is reused only if (occupied == false) && (tran_id matches or is NULL_TRANID) && (current user matches). This means: a parallel recursive callback that prepares the same SQL will not share a handler with the outer callback (occupied check); a different user in the same CAS session won’t share (the user check); cross-tran reuse only works for handlers that haven’t been bound to a tran yet. The strictness is what prevents result-set entanglement between concurrent callback chains.
  • Path A retirement timeline. With JavaSP and PL/CSQL both on Path B and the obsolete server-side constant folder `#if 0`’d out, the only remaining production driver of Path A appears to be C builtin methods invoked via SCAN_TYPE_METHOD. If those are also retired, the entire CAS-side callback_handler becomes dead code. Worth a separate audit.
  • Cross-path counter unification. Whether the libcas-depth and PL-session-stack counters should be unified into one THREAD_ENTRY-scoped depth counter. The current split allows a pathological mixed-path nesting that exceeds 15 in total even though no single path does. Not currently observed but not prevented either.
  • JVM-side dispatch table. The Java-side translation of CUBRIDServerSidePreparedStatement and friends into METHOD_CALLBACK_* opcodes lives in pl_engine/pl_server/ and isn’t covered here. A short follow-up section (or extension to cubrid-pl-javasp.md) could document the JVM-side opcode generation.
  • Compile-time semantic-check error reporting. get_sql_semantics and get_global_semantics pack errors back the same way runtime callbacks do, but the PL/CSQL compiler treats them as parser-level errors (with line/column from the original PL/CSQL source). The end-to-end error-path mapping isn’t fully documented; see cubrid-pl-plcsql.md Open Questions.
  • src/method/method_callback.{cpp,hpp} — CAS-side callback handler
  • src/method/query_method.cpp — CAS-side dispatch entry (method_dispatch, method_dispatch_internal)
  • src/method/method_scan.{cpp,hpp} — server-side SCAN_TYPE_METHOD scanner that drives Path A
  • src/method/method_struct_invoke.{cpp,hpp} — outer envelope structs (header, prepare_args)
  • src/method/method_struct_value.{cpp,hpp} — DB_VALUE packing with VOBJ/OID fixups
  • src/method/method_struct_query.{cpp,hpp}, method_struct_oid_info.{cpp,hpp}, method_struct_schema_info.{cpp,hpp} — payload types
  • src/method/method_query_handler.{cpp,hpp}, method_oid_handler.{cpp,hpp}, method_schema_info.{cpp,hpp} — CAS-side per-resource handlers
  • src/sp/sp_constants.hpp — request and callback opcode enums, recursion limit, method type
  • src/sp/pl_executor.cpp — Path B dispatcher (response_callback_command) and twelve handlers
  • src/sp/method_invoke_group.{cpp,hpp} — server-side group abstraction shipping METHOD_REQUEST_INVOKE
  • src/method/AGENTS.md — src/method/ agent guide; the gotcha about per-row method invocation cost
  • Sibling docs: cubrid-pl-javasp.md (JavaSP runtime, including the SP_CODE_INTERNAL_JDBC envelope on Path B and the JVM-side CUBRIDServerSideDriver), cubrid-pl-plcsql.md (PL/CSQL compilation pipeline issuing GET_SQL_SEMANTICS / GET_GLOBAL_SEMANTICS requests), cubrid-overview-pl-language.md (subcategory router), cubrid-scan-manager.md (S_METHOD scan type in the access-method catalogue)