CUBRID Query Processing — Section Overview

The query-processing subcategory is the largest in the CUBRID code-analysis tree. It covers everything that turns a SQL string into a stream of rows: the front-end that parses and validates the text, the middle-end that picks a plan and serialises it, the back-end that runs the plan against the storage stack, and the runtime helpers that the back-end calls into per row, per group, and per join. Nineteen detail docs sit under this heading, each focusing on one stage or one algorithmic family. This document is their router. It names the stages, draws the pipeline once, and points at the detail docs without repeating their content. If you are new to CUBRID’s QP code, read this first; if you already know one stage and want the next, the Reading order section threads them in dependency order. The companion reading-path document cubrid-rpath-select.md walks a single SELECT * FROM t WHERE x > 10 through the same pipeline as a worked example — this overview explains the topology, that doc explains the trip.

The section’s responsibility, stated plainly, is text → tuples. The boundaries are the broker (which delivers the SQL to the server, covered under cubrid-broker.md in the BC&I subcategory) and the storage stack (which the back-end calls down into, covered in the storage-engine subcategory: heap, B+Tree, page buffer, MVCC). Within those boundaries the section owns the parser, the semantic checker, the query rewriter, the cost-based optimizer, the XASL plan intermediate representation, the plan cache, the Volcano-style executor, the scan manager dispatch, the list-file materialisation substrate, the row-level predicate evaluator, the scalar function library, the external sorter, the GROUP BY / window post-processor, the hash join family, the runtime memoization caches, the partition pruning subsystem, the cursor handle, the SERIAL/auto-increment machinery, and the parallel-query worker pool. Each of those has a dedicated cubrid-<topic>.md; the rest of this document tells you where each one sits in the pipeline, in what order to read them, and which cross-cutting threads tie them together.

The pipeline has three sub-pipelines and a set of runtime helpers that the back-end calls into. Splitting the docs along these three sub-pipelines is what makes the section navigable. Every detail doc falls into exactly one bucket; runtime helpers are the operators and caches that the back-end depends on but that span more than one stage.

Front-end is the compile pipeline that turns SQL text into a semantically validated, normalised intermediate tree. Three docs:

  • cubrid-parser.md — Flex lexer + GLR Bison grammar producing a PT_NODE parse tree out of a per-PARSER_CONTEXT block allocator.
  • cubrid-semantic-check.md — pt_check_with_info chains name resolution, where-clause aggregate check, host-variable replacement, and semantic_check_local/pt_semantic_type for type checking and constant folding, finishing with pt_cnf to push the predicate to conjunctive normal form.
  • cubrid-query-rewrite.md — mq_rewrite and the rewriter family (query_rewrite_select.c, query_rewrite_subquery.c, query_rewrite_set.c, query_rewrite_term.c, query_rewrite_auto_parameterize.c) running predicate pushdown, view inlining, subquery flattening, outer-join reduction, redundant-join elimination, auto-parameterisation, and the LIMIT-clause lowering case study.
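
As a concrete picture of the last semantic-check step, here is a toy CNF normaliser: it distributes OR over AND until the tree is a conjunction of disjunctions, which is the shape pt_cnf produces. The tuple encoding and function name below are invented for the sketch — CUBRID's pass operates on PT_NODE trees, not on Python tuples.

```python
# Illustrative only: a toy predicate normaliser showing the kind of rewrite
# pt_cnf performs. The ("and"/"or", left, right) tuple encoding is hypothetical.

def to_cnf(expr):
    """Distribute OR over AND until the tree is a conjunction of disjunctions."""
    if isinstance(expr, str):          # leaf predicate, e.g. "x > 10"
        return expr
    op, lhs, rhs = expr
    lhs, rhs = to_cnf(lhs), to_cnf(rhs)
    if op == "and":
        return ("and", lhs, rhs)
    # op == "or": distribute over an AND child, then renormalise the pieces
    if isinstance(lhs, tuple) and lhs[0] == "and":
        return ("and", to_cnf(("or", lhs[1], rhs)), to_cnf(("or", lhs[2], rhs)))
    if isinstance(rhs, tuple) and rhs[0] == "and":
        return ("and", to_cnf(("or", lhs, rhs[1])), to_cnf(("or", lhs, rhs[2])))
    return ("or", lhs, rhs)

# a OR (b AND c)  ->  (a OR b) AND (a OR c)
print(to_cnf(("or", "a", ("and", "b", "c"))))
```

Pushing predicates to CNF is what lets later stages treat the WHERE clause as an independent list of conjuncts, each of which can be pushed down or attached to a scan on its own.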

Middle-end is the plan stage. It costs alternatives, picks one, lowers the choice into the executor’s IR, and decides whether to remember it for next time. Three docs:

  • cubrid-query-optimizer.md — qo_optimize_query builds a QO_ENV query graph, runs partial-then-total dynamic-programming join enumeration over a 2^N join_info vector with a System R-style fixed-cpu/io + variable-cpu/io cost model, and emits a QO_PLAN tree.
  • cubrid-xasl-generator.md — xasl_generation.c walks the plan with recursive gen_outer/gen_inner and produces an XASL_NODE tree whose aptr/dptr/scan_ptr slots hide subqueries and joins inside one node, plus REGU_VARIABLE / ACCESS_SPEC / OUTPTR_LIST sub-IRs; serialised through the xts_* offset-table machinery into a wire-shippable byte buffer.
  • cubrid-xasl-cache.md — server-wide latch-free hashmap keyed on SHA-1 of the rewritten SQL (“hash text”) with cache_flag refcount, recompile-threshold (RT) cardinality drift detection, and xcache_remove_by_oid per-class invalidation hooked off DDL.
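
The 2^N join-enumeration shape is easier to see in miniature. The sketch below runs a subset dynamic program over bitmasks — the structure of the optimizer's join_info vector — but its cost model (input costs plus a cross-product output estimate) is invented for the sketch and is far cruder than CUBRID's fixed/variable cpu+io model.

```python
# Illustrative only: subset DP over a 2^N bitmask vector, the shape of the
# optimizer's join enumeration. best_join_cost and its cost model are toys.

def best_join_cost(card):
    """card[i] = cardinality of relation i; returns the cheapest total cost."""
    n = len(card)
    full = (1 << n) - 1
    # toy output-cardinality estimate per subset: cross-product upper bound
    out = {}
    for mask in range(1, full + 1):
        prod = 1
        for i in range(n):
            if mask >> i & 1:
                prod *= card[i]
        out[mask] = prod
    best = {1 << i: card[i] for i in range(n)}   # base case: single-table scan
    for mask in range(1, full + 1):              # ascending => submasks done first
        if mask in best:
            continue                             # single relation, already seeded
        sub = (mask - 1) & mask                  # enumerate all 2-way splits
        while sub:
            rest = mask ^ sub
            cost = best[sub] + best[rest] + out[mask]
            if mask not in best or cost < best[mask]:
                best[mask] = cost
            sub = (sub - 1) & mask
    return best[full]

# Joining the two small relations first is cheapest, as a System R DP would find.
print(best_join_cost([10, 100, 1000]))
```

The point of the DP is that the best plan for each subset is computed once and reused by every superset, turning an N! ordering search into a 2^N table fill.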

Back-end is the runtime that turns the cached or freshly compiled XASL tree into rows. Three docs:

  • cubrid-query-executor.md — qexec_execute_mainblock_internal dispatches on xasl->type, drives a uniform open/next/close loop over SCAN_ID operators, and pushes results into per-XASL list files; nested-loop join and merge join live here.
  • cubrid-scan-manager.md — one polymorphic SCAN_ID handle, a switch-driven open/start/next/end/close protocol, and per-SCAN_TYPE dispatch into heap, B+Tree, list-file, set, value, JSON-table, dblink, show, parallel-heap, and method scans.
  • cubrid-list-file.md — every materialised tuple stream (sub-query result, sort output, hash-build side, group-by accumulator, final query result) realised as one QFILE_LIST_ID linked-page abstraction backed by a per-query QMGR_TEMP_FILE membuf-then-FILE_TEMP substrate.
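
The open/next/close protocol the executor drives can be sketched in a few lines. The classes below are hypothetical stand-ins for SCAN_ID operators, not CUBRID's qexec_*/scan_* entry points; they show only the Volcano pull discipline itself.

```python
# Illustrative only: the Volcano open/next/close protocol. Class and method
# names are invented for the sketch.

class ListScan:
    """Leaf operator: yields rows from an in-memory 'list file'."""
    def __init__(self, rows):
        self.rows, self.pos = rows, 0
    def open(self):  self.pos = 0
    def next(self):                      # returns a row, or None at end-of-scan
        if self.pos >= len(self.rows):
            return None
        row = self.rows[self.pos]; self.pos += 1
        return row
    def close(self): pass

class FilterOp:
    """Interior operator: pulls from its child, keeps rows passing the predicate."""
    def __init__(self, child, pred):
        self.child, self.pred = child, pred
    def open(self):  self.child.open()
    def next(self):
        while (row := self.child.next()) is not None:
            if self.pred(row):
                return row
        return None
    def close(self): self.child.close()

def run(op):
    """The executor's driving loop: open once, pull until None, close."""
    op.open()
    out = []
    while (row := op.next()) is not None:
        out.append(row)
    op.close()
    return out

# SELECT * FROM t WHERE x > 10, with t = [5, 12, 7, 42]
print(run(FilterOp(ListScan([5, 12, 7, 42]), lambda x: x > 10)))   # [12, 42]
```

Every operator speaks the same three-call protocol, which is what lets the executor nest scans, joins, and filters arbitrarily without knowing what sits below it.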

Runtime helpers are the operators, caches, and primitives that the back-end stages call into. They cut across the three sub-pipelines — the executor uses the predicate evaluator and the scalar function library on every row, the scan manager and post-processor share the external sorter, and the optimizer cost model sees the hash join — so they are documented as their own family rather than being slotted under one stage.

  • Join algorithms. cubrid-hash-join.md covers Build/Probe in query_hash_join.c reusing the HASH_LIST_SCAN primitive from query_hash_scan.c with three table layouts (in-memory mht_hls, hybrid memory-index-plus-file-tuples, extendible FHS hash file) and grace-style equi-hash partitioning on spill. Nested-loop and merge join are documented under cubrid-query-executor.md because they sit directly on the Volcano scaffolding rather than introducing a new hash substrate.
  • Sort and post-processing. cubrid-external-sort.md documents the two-phase replacement-selection-style run generator (sort_inphase_sort) and balanced k-way merge (sort_exphase_merge) over FILE_TEMP runs exposed through sort_listfile. cubrid-post-processing.md documents qexec_groupby and qexec_execute_analytic — sort-vs-hash GROUP BY strategy choice, hash-spill fallback to external sort when the table outgrows max_agg_hash_size, and window/analytic frame execution.
  • Predicates and operators. cubrid-query-evaluator.md covers the per-row keep/skip verdict — eval_pred walks PRED_EXPR trees of T_PRED boolean nodes and T_EVAL_TERM leaves under three-valued logic, and fetch_peek_dbval dispatches by REGU_VARIABLE::type. cubrid-scalar-functions.md is the operator-primitive layer underneath — arithmetic.c, numeric_opfunc.c, string_opfunc.c, query_opfunc.c, crypt_opfunc.c, and the string_regex_* family.
  • Caches. cubrid-runtime-memoization.md covers the three runtime caches that share one playbook (DB_VALUE-array hash key, fail-on-full budget, hit-ratio guard) at three lifecycle scopes — per-XASL sq_cache for uncorrelated scalar-subquery results, server-wide per-BTID fpcache for deserialised function-index predicates, per-XASL memoize::storage for nested-loop-join inner-side tuples. The XASL plan cache is documented separately under the middle-end.
  • Specialised features. cubrid-partition.md (range/hash/list partitions, PRUNING_CONTEXT pruning at optimize time, per-partition routing at execute time), cubrid-cursor.md (client-side CURSOR_ID over a server-side QFILE_LIST_ID, holdable cursors surviving COMMIT), cubrid-serial.md (sequence/auto-increment state in _db_serial, exclusive object locking, client-side caching), cubrid-parallel-query.md (one global parallel-query worker pool plus three operator-specific orchestrators for heap scan, hash join, and query execute).
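
The shared cache playbook named above (value-tuple key, fail-on-full budget, hit-ratio guard) fits in one small class. Everything below — class name, sizes, thresholds — is invented for the sketch; only the three behaviours come from the description of sq_cache / fpcache / memoize.

```python
# Illustrative only: the three-part cache playbook. MemoCache and its defaults
# are hypothetical, not CUBRID code.

class MemoCache:
    def __init__(self, max_entries=4, min_hit_ratio=0.5, probe_window=8):
        self.store = {}
        self.max_entries = max_entries
        self.hits = self.probes = 0
        self.min_hit_ratio, self.probe_window = min_hit_ratio, probe_window
        self.enabled = True

    def get(self, key):
        if not self.enabled:
            return None
        self.probes += 1
        hit = self.store.get(key)
        if hit is not None:
            self.hits += 1
        elif self.probes >= self.probe_window and \
                self.hits / self.probes < self.min_hit_ratio:
            self.enabled = False       # hit-ratio guard: stop paying for misses
        return hit

    def put(self, key, value):
        if self.enabled and len(self.store) < self.max_entries:
            self.store[key] = value    # fail-on-full: silently drop, never evict

# e.g. memoizing an uncorrelated scalar subquery keyed by its parameter values
cache = MemoCache()
cache.put((42,), "subquery result")
print(cache.get((42,)))
```

Fail-on-full (rather than LRU eviction) keeps the caches cheap: a query whose key space outgrows the budget simply stops benefiting, and the hit-ratio guard turns the cache off entirely when probing costs more than it saves.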

The Mermaid below maps every detail doc to the stage it implements. The vertical axis is the data flow (text at top, tuples at bottom); the runtime helpers are drawn off to the side because they are called into rather than passed through.

```mermaid
flowchart TB
    SQL["SQL text<br/>(from broker / CAS)"]

    subgraph FE["Front-end (compile)"]
        direction TB
        Parser["cubrid-parser.md<br/>Flex + GLR Bison &rarr; PT_NODE"]
        SC["cubrid-semantic-check.md<br/>name res, type-check, CNF"]
        Rewrite["cubrid-query-rewrite.md<br/>predicate pushdown, view inline,<br/>subquery flatten, LIMIT lowering"]
        Parser --> SC --> Rewrite
    end

    subgraph ME["Middle-end (plan)"]
        direction TB
        Opt["cubrid-query-optimizer.md<br/>QO_ENV graph, DP join enum,<br/>System-R cost model &rarr; QO_PLAN"]
        XGen["cubrid-xasl-generator.md<br/>QO_PLAN &rarr; XASL_NODE tree<br/>· xts_* serialise"]
        XCache["cubrid-xasl-cache.md<br/>SHA-1 plan cache, RT recompile,<br/>per-class OID invalidation"]
        Opt --> XGen --> XCache
    end

    subgraph BE["Back-end (execute)"]
        direction TB
        Exec["cubrid-query-executor.md<br/>qexec_execute_mainblock_internal<br/>Volcano open/next/close"]
        Scan["cubrid-scan-manager.md<br/>SCAN_ID dispatch &rarr;<br/>heap / btree / list / set / json / dblink / px"]
        LF["cubrid-list-file.md<br/>QFILE_LIST_ID linked pages<br/>(materialise & re-scan)"]
        Exec --> Scan
        Exec <--> LF
    end

    subgraph RT["Runtime helpers"]
        direction TB
        Eval["cubrid-query-evaluator.md<br/>PRED_EXPR walk + fetch_peek_dbval"]
        Scalar["cubrid-scalar-functions.md<br/>arithmetic / string / numeric /<br/>JSON / regex / crypt"]
        Sort["cubrid-external-sort.md<br/>run-gen + k-way merge"]
        Post["cubrid-post-processing.md<br/>GROUP BY (sort vs hash) /<br/>analytic / window"]
        HJ["cubrid-hash-join.md<br/>Build/Probe + HASH_LIST_SCAN<br/>· grace-style spill"]
        Mem["cubrid-runtime-memoization.md<br/>sq_cache / fpcache / memoize"]
    end

    subgraph SPEC["Specialised features"]
        direction TB
        Part["cubrid-partition.md<br/>range/hash/list, PRUNING_CONTEXT"]
        Cur["cubrid-cursor.md<br/>CURSOR_ID over QFILE_LIST_ID,<br/>holdable / scrollable"]
        Ser["cubrid-serial.md<br/>_db_serial, AUTO_INCREMENT,<br/>cached values"]
        Par["cubrid-parallel-query.md<br/>parallel-query worker pool,<br/>parallel-heap / hash-join / execute"]
    end

    Tuples["Result tuples<br/>(to cursor / broker / client)"]

    SQL --> FE --> ME --> BE --> Tuples

    Exec -.calls.-> Eval
    Eval -.calls.-> Scalar
    Exec -.uses.-> Sort
    Exec -.uses.-> Post
    Exec -.uses.-> HJ
    Exec -.uses.-> Mem
    Scan -.uses.-> Mem

    ME -.influences.-> Part
    BE -.dispatches.-> Part
    BE --> Cur
    Exec -.uses.-> Ser
    BE -.parallelises via.-> Par

    classDef fe fill:#eef,stroke:#557
    classDef me fill:#efe,stroke:#575
    classDef be fill:#fee,stroke:#755
    classDef rt fill:#fef,stroke:#757
    classDef spec fill:#ffe,stroke:#775
    class Parser,SC,Rewrite fe
    class Opt,XGen,XCache me
    class Exec,Scan,LF be
    class Eval,Scalar,Sort,Post,HJ,Mem rt
    class Part,Cur,Ser,Par spec
```

The diagram understates one thing: the XASL cache is consulted at the top of the back-end pipeline, not the bottom of the middle-end. On a prepared-statement re-execute the executor reads the cached plan directly and the front-end plus the rest of the middle-end are skipped. The diagram puts the cache last in the middle-end because it is the output boundary of compile work; on the next execute the cache short-circuits everything above it. cubrid-xasl-cache.md is explicit about this hot path.
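
That short-circuit is the whole value of the cache, and it fits in a few lines. The sketch below keys a cache on a SHA-1 of the SQL text and skips compilation on the second execute; compile_to_plan() is a hypothetical stand-in for the entire parse → rewrite → optimize → XASL pipeline (the real cache hashes the rewritten "hash text", not the raw string).

```python
# Illustrative only: plan-cache short-circuit on re-execute. compile_to_plan
# is a hypothetical stand-in for the compile pipeline.
import hashlib

plan_cache = {}
compiles = 0

def compile_to_plan(sql):
    global compiles
    compiles += 1                       # count how often the expensive path runs
    return f"XASL({sql})"               # pretend serialised byte buffer

def execute(sql):
    key = hashlib.sha1(sql.encode()).hexdigest()
    plan = plan_cache.get(key)
    if plan is None:                    # cold: run the whole compile pipeline
        plan = plan_cache[key] = compile_to_plan(sql)
    return plan                         # hot: front-end and middle-end skipped

execute("SELECT * FROM t WHERE x > 10")
execute("SELECT * FROM t WHERE x > 10")
print(compiles)                         # 1 — the second execute hit the cache
```
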

The dependency graph between detail docs is mostly linear within each sub-pipeline and mostly fan-out from the back-end to the runtime helpers. The following order is the recommended read for someone new to the section who wants the spine before the branches.

Front-end, in order. Each stage consumes the output of the one before it, and its data structures exist only after the previous stage has run.

  1. cubrid-parser.md — start here. The PT_NODE tree it produces is the working representation that the next two passes mutate, and knowing its shape (polymorphic-tagged-union, three function-pointer arrays, per-PARSER_CONTEXT block allocator) is a prerequisite for everything else in the front-end.
  2. cubrid-semantic-check.md — read this once you can read a PT_NODE tree. The four passes (name resolution, where-clause aggregate check, host-var replacement, semantic_check_local) each annotate the tree in place; understanding which fields are filled in by which pass is the only way to read later code that reads them.
  3. cubrid-query-rewrite.md — read this last in the front-end. The mq_rewrite family of transformations operates on a fully resolved and type-checked tree, so it presupposes the work of the previous two docs. The LIMIT-clause case study is the most concrete worked example of how a single source-language feature spreads across semantic check, rewrite, plan generation, and runtime.

Middle-end, in order. The optimizer’s QO_PLAN is the input to the XASL generator’s gen_outer/gen_inner walk; the XASL cache keys off the rewritten-SQL hash, not the optimizer output, so strictly speaking the cache is parallel to the generator, but the cache is easier to understand once you know what it caches.

  1. cubrid-query-optimizer.md — the QO_ENV query graph, the System R-style cost model, and the dynamic-programming join enumeration. Read this once you have the rewritten PT_NODE tree from doc 3.
  2. cubrid-xasl-generator.md — gen_outer/gen_inner walking the QO_PLAN to produce XASL_NODE, with aptr/dptr/scan_ptr slot semantics and REGU_VARIABLE / ACCESS_SPEC / OUTPTR_LIST sub-IRs. Read this after the optimizer because it is the downstream consumer.
  3. cubrid-xasl-cache.md — the SHA-1-keyed latch-free hashmap with recompile-threshold drift detection and per-class OID invalidation. Read this last in the middle-end because it presupposes the serialised XASL byte buffer the generator emits.

Back-end, in order. The executor is the spine; the scan manager sits one layer below it; list-file is the materialisation substrate both depend on. Read top-down.

  1. cubrid-query-executor.md — qexec_execute_mainblock_internal, the proc-type dispatch, the Volcano open/next/close loop, and nested-loop and merge join.
  2. cubrid-scan-manager.md — the polymorphic SCAN_ID handle and the per-SCAN_TYPE dispatch. Read this immediately after the executor; the executor calls into it on every iteration.
  3. cubrid-list-file.md — the QFILE_LIST_ID linked-page abstraction. Read this third in the back-end because both the executor (writes to it) and the scan manager (reads from it via list scan) depend on it; understanding the substrate makes the first two docs’ references to “the list file” concrete.

The row-level glue. Once the back-end is in your head, read this:

  1. cubrid-query-evaluator.md — the per-row keep/skip verdict. eval_pred walks PRED_EXPR and fetch_peek_dbval resolves REGU_VARIABLEs. Every back-end operator calls into this on every candidate row.
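
The one subtlety in the per-row verdict is three-valued logic: a comparison against NULL is neither true nor false. The sketch below models UNKNOWN as Python None and walks a tiny predicate tree with Kleene AND/OR; the tree encoding and function names are invented, not the PRED_EXPR/T_EVAL_TERM structures themselves.

```python
# Illustrative only: three-valued (Kleene) logic over a toy predicate tree,
# the discipline eval_pred applies. UNKNOWN is modelled as None.

def and3(a, b):
    if a is False or b is False: return False
    if a is None or b is None:   return None
    return True

def or3(a, b):
    if a is True or b is True:   return True
    if a is None or b is None:   return None
    return False

def eval_pred(node, row):
    op = node[0]
    if op == "and":
        return and3(eval_pred(node[1], row), eval_pred(node[2], row))
    if op == "or":
        return or3(eval_pred(node[1], row), eval_pred(node[2], row))
    if op == "gt":                       # leaf comparison term
        val = row[node[1]]               # cf. fetch_peek_dbval resolving an operand
        return None if val is None else val > node[2]

pred = ("and", ("gt", "x", 10), ("gt", "y", 0))
print(eval_pred(pred, {"x": 12, "y": None}))   # None — UNKNOWN, row filtered out
```

A WHERE clause keeps only rows whose verdict is strictly True, so both False and UNKNOWN drop the row — which is why and3/or3 must propagate None rather than coerce it.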

Deeper into the operators. The next three docs cover the heavyweight algorithms you can ignore on a first pass but cannot ignore once you have a query that uses them.

  1. cubrid-hash-join.md — Build/Probe, three table layouts, grace-style spill. Pair it with the executor doc when you need to follow HASHJOIN_PROC.
  2. cubrid-post-processing.md — qexec_groupby and qexec_execute_analytic, sort vs hash GROUP BY, hash-spill fallback to external sort.
  3. cubrid-external-sort.md — sort_listfile and the two-phase run-generation + k-way merge. Used by post-processing, by ORDER BY, by DISTINCT, by B+Tree bulk load, by parallel index build.
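
The two-phase shape of the external sort is worth seeing once in miniature: cut the input into bounded-memory sorted runs, then merge all runs. This sketch uses plain lists as "runs" and heapq.merge for the merge phase; the real sort_listfile generates runs with replacement selection and merges pages of FILE_TEMP files.

```python
# Illustrative only: two-phase external sort. run_capacity stands in for the
# sort buffer size; real runs live in temp files, not lists.
import heapq

def external_sort(stream, run_capacity=3):
    runs, buf = [], []
    for item in stream:                  # phase 1: cut input into sorted runs
        buf.append(item)
        if len(buf) == run_capacity:
            runs.append(sorted(buf))
            buf = []
    if buf:
        runs.append(sorted(buf))
    return list(heapq.merge(*runs))      # phase 2: k-way merge of all runs

print(external_sort([9, 1, 7, 3, 8, 2, 6]))   # [1, 2, 3, 6, 7, 8, 9]
```

Replacement selection (which the sketch omits) tends to produce runs about twice the buffer size, so fewer runs reach the merge phase; the merge itself only ever holds one element per run in memory.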

Specialised features. Read on demand. None of these are on the spine of the most common query, but each is required for one major feature class.

  1. cubrid-partition.md — partition pruning at optimize time and per-partition routing at execute time.
  2. cubrid-parallel-query.md — the parallel-query worker pool and the three operator-specific orchestrators (heap scan, hash join, query execute).
  3. cubrid-cursor.md — client-side CURSOR_ID over a server-side QFILE_LIST_ID, holdable cursors surviving COMMIT.
  4. cubrid-serial.md — sequences and auto-increment via _db_serial rows.
  5. cubrid-runtime-memoization.md — the three small caches (sq_cache, fpcache, memoize::storage) that share one playbook.
  6. cubrid-scalar-functions.md — the operator-primitive library. Read this when you need to know what happens once a T_EVAL_TERM leaf calls into qdata_* or db_string_*.

The reading-path doc cubrid-rpath-select.md threads docs 1–10 (and a few others outside this section) by walking one literal SELECT * FROM t WHERE x > 10. Read it after the front-end and back-end basics (docs 1, 2, 4, 7) when you want to see the whole spine in one continuous narrative.

Three threads cut across more than one detail doc and are worth naming explicitly so they do not surprise the reader.

Statistics flow. The optimizer’s cost model is statistics-driven, but the catalog tables that store statistics, and the xstats_update_statistics server-side walk that populates them, live under the DDL & Schema subcategory in cubrid-statistics.md. The consumer of statistics is here. cubrid-query-optimizer.md calls out the consumption interface — qo_get_attr_info, qo_iscan_cost, qo_sscan_cost, qo_equal_selectivity, qo_range_selectivity — and cubrid-xasl-cache.md describes the recompile-threshold (RT) check that watches cardinality drift and triggers a soft recompile when statistics have moved enough since the cached plan was generated. Read cubrid-statistics.md once for the producer side; the consumer side is documented in this section.
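To make "statistics-driven" concrete, here are the textbook formulas behind interfaces like qo_equal_selectivity and qo_range_selectivity — one-liners under a uniform-distribution assumption. CUBRID's real estimators read catalog statistics and handle histograms and edge-case defaults; these functions are invented for the sketch.

```python
# Illustrative only: textbook selectivity estimates, not CUBRID's estimators.

def equal_selectivity(ndv):
    """col = const: assume rows spread uniformly over ndv distinct values."""
    return 1.0 / ndv

def range_selectivity(lo, hi, col_min, col_max):
    """col BETWEEN lo AND hi under a uniform-distribution assumption."""
    return max(0.0, min(hi, col_max) - max(lo, col_min)) / (col_max - col_min)

# 10 000-row table, x has 50 distinct values spread over [0, 1000]
rows = 10_000
print(rows * equal_selectivity(50))                  # ~200 rows for x = const
print(rows * range_selectivity(900, 1000, 0, 1000))  # ~1000 rows for x > 900
```

This is also what the recompile-threshold check guards: when the cardinalities these formulas feed on drift far enough from the values seen at compile time, the cached plan's cost estimates are stale and a soft recompile is triggered.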

List-file as the universal materialisation pipe. Every operator in CUBRID that has to materialise — sort output, hash build, sub-query result, group-by accumulator, final result for cursor read-back — uses the same QFILE_LIST_ID substrate. The detail doc cubrid-list-file.md is the source of truth, but the substrate shows up everywhere: cubrid-query-executor.md writes the per-XASL list file the cursor reads back; cubrid-external-sort.md writes sorted runs into list files; cubrid-post-processing.md accumulates GROUP BY rows into list files; cubrid-hash-join.md spills the build side to list-file-backed FILE_TEMP runs; cubrid-cursor.md reads list files back out one network page at a time. Knowing the list-file abstraction once unlocks the materialisation paragraph in every other doc.
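
The "write once, re-scan many times" idea behind the list file can be sketched as a linked chain of fixed-capacity pages that one operator fills and others scan back. The class and page size below are invented; the real QFILE_LIST_ID chains disk pages through a membuf-then-FILE_TEMP substrate.

```python
# Illustrative only: a toy paged tuple list in the spirit of QFILE_LIST_ID.
PAGE_CAPACITY = 2          # tuples per page; a stand-in for the DB page size

class ListFile:
    def __init__(self):
        self.pages = [[]]              # chain of pages; last one open for append
    def append(self, tup):
        if len(self.pages[-1]) == PAGE_CAPACITY:
            self.pages.append([])      # "allocate" and link a new page
        self.pages[-1].append(tup)
    def scan(self):                    # a list scan walks the page chain in order
        for page in self.pages:
            yield from page

lf = ListFile()
for t in [(1, "a"), (2, "b"), (3, "c")]:
    lf.append(t)
print(len(lf.pages), list(lf.scan()))  # 2 pages; tuples come back in order
```

Because every producer writes this one shape, every consumer — cursor read-back, sort merge input, hash-build spill — needs only the single page-chain scan to read any materialised stream.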

Predicate flow from PT_NODE through PRED_EXPR to eval_pred. A predicate has three lifetime stages, each with its own data structure and its own owning doc. The parser produces a PT_NODE tree (e.g. a PT_EXPR of opcode > over a PT_NAME and a PT_VALUE); semantic check resolves and type-checks it; CNF normalisation rewrites the boolean structure; query rewrite pushes it toward the cheapest evaluation site (sargable into an index range, into a join condition, or left as a residual filter). The XASL generator (doc 5) lowers it into a PRED_EXPR tree of T_PRED boolean nodes and T_EVAL_TERM leaves over REGU_VARIABLE operands. The executor (doc 7) and the scan manager (doc 8) attach the right PRED_EXPR to the right scan (sargable predicates go into B+Tree key range terms, residual predicates stay on where_pred); the evaluator (doc 10) walks the PRED_EXPR and calls fetch_peek_dbval to resolve operands, with three-valued logic on the boolean operators. So the same predicate is touched by parser → semantic-check → rewrite → optimizer → XASL generator → executor → scan manager → evaluator → scalar functions, nine docs in this section. The rewrite doc, the XASL generator doc, and the evaluator doc each carry the most concrete code snippets for their slice of the lifetime.

| Doc | Stage | One-line job |
|---|---|---|
| cubrid-parser.md | Front-end | Flex lexer + GLR Bison grammar produces a PT_NODE parse tree out of a per-PARSER_CONTEXT block allocator. |
| cubrid-semantic-check.md | Front-end | pt_check_with_info chains name resolution, where-clause aggregate check, host-var replacement, semantic_check_local (pt_semantic_type for type checking and constant folding), and pt_cnf for CNF normalisation. |
| cubrid-query-rewrite.md | Front-end | mq_rewrite runs predicate pushdown, view inlining, subquery flattening, outer-join reduction, redundant-join elimination, auto-parameterisation, and the LIMIT-clause lowering that produces INST_NUM / ORDERBY_NUM / GROUPBY_NUM predicates. |
| cubrid-query-optimizer.md | Middle-end | qo_optimize_query builds a QO_ENV query graph, runs partial-then-total dynamic-programming join enumeration with a System R-style cost model, and emits a QO_PLAN tree. |
| cubrid-xasl-generator.md | Middle-end | xasl_generation.c’s recursive gen_outer/gen_inner walks the plan and produces an XASL_NODE tree (with aptr/dptr/scan_ptr slots and REGU_VARIABLE / ACCESS_SPEC / OUTPTR_LIST sub-IRs); xts_* serialises it. |
| cubrid-xasl-cache.md | Middle-end | Latch-free server-wide hashmap keyed on SHA-1 of the rewritten SQL, with cache_flag refcount, recompile-threshold drift detection, and xcache_remove_by_oid per-class invalidation hooked off DDL. |
| cubrid-query-executor.md | Back-end | qexec_execute_mainblock_internal dispatches on xasl->type and drives a Volcano-style open/next/close loop with nested-loop and merge join sitting directly on top. |
| cubrid-scan-manager.md | Back-end | One polymorphic SCAN_ID handle, switch-driven open/start/next/end/close, and per-SCAN_TYPE dispatch into heap, B+Tree, list-file, set, value, JSON-table, dblink, show, parallel-heap, and method scans. |
| cubrid-list-file.md | Back-end | Every materialised tuple stream (sub-query result, sort output, hash build, group-by accumulator, final result) realised as one QFILE_LIST_ID over a per-query QMGR_TEMP_FILE membuf-then-FILE_TEMP. |
| cubrid-query-evaluator.md | Runtime helper | eval_pred walks PRED_EXPR boolean trees under three-valued logic; fetch_peek_dbval dispatches by REGU_VARIABLE::type into the right per-row resolver. |
| cubrid-scalar-functions.md | Runtime helper | The operator-primitive layer: arithmetic.c, numeric_opfunc.c, string_opfunc.c, query_opfunc.c, crypt_opfunc.c, and the string_regex_* family that the evaluator dispatches into. |
| cubrid-external-sort.md | Runtime helper | Two-phase replacement-selection-style run generator (sort_inphase_sort) plus balanced k-way merge (sort_exphase_merge) over FILE_TEMP runs, exposed through sort_listfile. |
| cubrid-post-processing.md | Runtime helper | qexec_groupby and qexec_execute_analytic — sort-vs-hash GROUP BY runtime choice, hash-spill fallback to external sort, window/analytic frame execution. |
| cubrid-hash-join.md | Runtime helper | Build/Probe in query_hash_join.c reusing the HASH_LIST_SCAN primitive with three table layouts and grace-style equi-hash partitioning on spill. |
| cubrid-runtime-memoization.md | Runtime helper | Three caches sharing one playbook (DB_VALUE-array hash key, fail-on-full budget, hit-ratio guard) at three lifecycle scopes — sq_cache, fpcache, memoize::storage. |
| cubrid-partition.md | Specialised | Range/hash/list partitioning, SM_PARTITION rule encoding on the master class, PRUNING_CONTEXT partition elimination at optimize time, per-partition routing at execute time. |
| cubrid-cursor.md | Specialised | Client-side CURSOR_ID over server-side QFILE_LIST_ID, page-at-a-time fetch via qfile_get_list_file_page, holdable cursors surviving COMMIT through the session-scoped list. |
| cubrid-serial.md | Specialised | Sequence/auto-increment via rows in _db_serial, advanced under exclusive object lock with optional client-side caching; AUTO_INCREMENT columns through synthesised <class>_ai_<attr> serials. |
| cubrid-parallel-query.md | Specialised | One global parallel-query worker pool, compute_parallel_degree() policy, three operator-specific orchestrators (parallel_heap_scan::manager, parallel_query::hash_join, parallel_query_execute::query_executor) on top of cubthread::worker_pool. |

Three other subcategories reach into query processing or are reached into by it. Knowing where the boundary sits saves you from looking for a doc in the wrong section.

DDL & Schema. The catalog is the primary input to the front-end (name resolution consults catalog metadata to bind every PT_NAME) and to the middle-end (the optimizer reads statistics from the catalog). cubrid-catalog-manager.md covers the catalog itself; cubrid-statistics.md covers the producer side of the statistics pipeline (the xstats_update_statistics walk that populates the catalog’s stats records). The DDL execution path (cubrid-ddl-execution.md) also fires xcache_remove_by_oid to invalidate dependent XASL cache entries — that is the inbound edge from DDL into the middle-end’s plan cache.

Storage Engine. The back-end’s scan manager dispatches into the storage stack on every next(). cubrid-heap-manager.md is the target of S_HEAP_SCAN; cubrid-btree.md is the target of S_INDEX_SCAN; cubrid-page-buffer-manager.md sits underneath both; cubrid-mvcc.md provides the visibility check that the scan manager runs on every candidate row before handing it to the evaluator. cubrid-list-file.md itself sits above the storage layer (it allocates pages through FILE_TEMP from cubrid-disk-manager.md), but its consumers are all in this section.

Procedural Language. PL queries enter the executor through the same xqmgr_execute_query entry as SQL. cubrid-pl-javasp.md and cubrid-pl-plcsql.md cover the PL side that builds and submits those queries, but once a PL stored procedure issues SQL, control crosses the network from cub_pl back into cub_server and lands in cubrid-query-executor.md exactly like a JDBC client query would. The PL family is documented under its own subcategory because the language and runtime are independent of QP, even though the queries it issues land here.