# CUBRID Query Processing — Section Overview
Contents:
- What this section covers
- The pipeline at a glance
- Reading order
- Cross-cutting concerns
- Detail-doc summaries
- Adjacent sections
## What this section covers

The query-processing subcategory is the largest in the CUBRID
code-analysis tree. It covers everything that turns a SQL string into a
stream of rows: the front-end that parses and validates the text, the
middle-end that picks a plan and serialises it, the back-end that runs
the plan against the storage stack, and the runtime helpers that the
back-end calls into per row, per group, and per join. Nineteen detail
docs sit under this heading, each focusing on one stage or one
algorithmic family. This document is their router. It names the stages,
draws the pipeline once, and points at the detail docs without
repeating their content. If you are new to CUBRID’s QP code, read this
first; if you already know one stage and want the next, the
Reading order section threads them in dependency
order. The companion reading-path document cubrid-rpath-select.md
walks a single SELECT * FROM t WHERE x > 10 through the same
pipeline as a worked example — this overview explains the topology,
that doc explains the trip.
The section’s responsibility, stated plainly, is text → tuples.
The boundaries are the broker (which delivers the SQL to the server,
covered under cubrid-broker.md in the BC&I subcategory) and the
storage stack (which the back-end calls down into, covered in the
storage-engine subcategory: heap, B+Tree, page buffer, MVCC). Within
those boundaries the section owns the parser, the semantic checker,
the query rewriter, the cost-based optimizer, the XASL plan
intermediate representation, the plan cache, the Volcano-style
executor, the scan manager dispatch, the list-file materialisation
substrate, the row-level predicate evaluator, the scalar function
library, the external sorter, the GROUP BY / window post-processor,
the hash join family, the runtime memoization caches, the partition
pruning subsystem, the cursor handle, the SERIAL/auto-increment
machinery, and the parallel-query worker pool. Each of those has a
dedicated cubrid-<topic>.md; the rest of this document tells you
where each one sits in the pipeline, in what order to read them, and
which cross-cutting threads tie them together.
## The pipeline at a glance

The pipeline has three sub-pipelines and a set of runtime helpers that the back-end calls into. Splitting the docs along these three sub-pipelines is what makes the section navigable. Every detail doc falls into exactly one bucket; runtime helpers are the operators and caches that the back-end depends on but that span more than one stage.
Front-end is the compile pipeline that turns SQL text into a semantically validated, normalised intermediate tree. Three docs:
- `cubrid-parser.md` — Flex lexer + GLR Bison grammar producing a `PT_NODE` parse tree out of a per-`PARSER_CONTEXT` block allocator.
- `cubrid-semantic-check.md` — `pt_check_with_info` chains name resolution, the where-clause aggregate check, host-variable replacement, and `semantic_check_local`/`pt_semantic_type` for type checking and constant folding, finishing with `pt_cnf` to push the predicate to conjunctive normal form.
- `cubrid-query-rewrite.md` — `mq_rewrite` and the rewriter family (`query_rewrite_select.c`, `query_rewrite_subquery.c`, `query_rewrite_set.c`, `query_rewrite_term.c`, `query_rewrite_auto_parameterize.c`) running predicate pushdown, view inlining, subquery flattening, outer-join reduction, redundant-join elimination, auto-parameterisation, and the LIMIT-clause lowering case study.
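Condensed to its data flow, the front-end is three passes over one mutable tree. A minimal sketch, assuming simplified one-call-per-pass signatures (the real `parser_parse_string`, `pt_check_with_info`, and `mq_rewrite` carry more context and deal in statement lists, not single nodes):

```c
#include <stddef.h>

typedef struct pt_node PT_NODE;                  /* parse-tree node (opaque here) */
typedef struct parser_context PARSER_CONTEXT;    /* owns the block allocator */

/* Assumed signatures, simplified for illustration -- not the real prototypes. */
PT_NODE *parser_parse_string (PARSER_CONTEXT *pc, const char *sql);
PT_NODE *pt_check_with_info (PARSER_CONTEXT *pc, PT_NODE *stmt);
PT_NODE *mq_rewrite (PARSER_CONTEXT *pc, PT_NODE *stmt);

/* text -> validated, normalised tree: each pass consumes the previous
 * pass's output and annotates or restructures the same PT_NODE tree. */
static PT_NODE *
front_end_compile (PARSER_CONTEXT *pc, const char *sql)
{
  PT_NODE *stmt = parser_parse_string (pc, sql);  /* Flex + GLR Bison */
  if (stmt == NULL)
    return NULL;                                  /* syntax error */
  stmt = pt_check_with_info (pc, stmt);           /* names, types, CNF */
  if (stmt == NULL)
    return NULL;                                  /* semantic error */
  return mq_rewrite (pc, stmt);                   /* pushdown, inlining, flattening */
}
```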
Middle-end is the plan stage. It costs alternatives, picks one, lowers the choice into the executor’s IR, and decides whether to remember it for next time. Three docs:
- `cubrid-query-optimizer.md` — `qo_optimize_query` builds a `QO_ENV` query graph, runs partial-then-total dynamic-programming join enumeration over a `2^N` `join_info` vector with a System R-style fixed-cpu/io + variable-cpu/io cost model, and emits a `QO_PLAN` tree.
- `cubrid-xasl-generator.md` — `xasl_generation.c` walks the plan with recursive `gen_outer`/`gen_inner` and produces an `XASL_NODE` tree whose `aptr`/`dptr`/`scan_ptr` slots hide subqueries and joins inside one node, plus `REGU_VARIABLE`/`ACCESS_SPEC`/`OUTPTR_LIST` sub-IRs; serialised through the `xts_*` offset-table machinery into a wire-shippable byte buffer.
- `cubrid-xasl-cache.md` — server-wide latch-free hashmap keyed on SHA-1 of the rewritten SQL (“hash text”) with `cache_flag` refcount, recompile-threshold (RT) cardinality-drift detection, and `xcache_remove_by_oid` per-class invalidation hooked off DDL.
Back-end is the runtime that turns the cached or freshly compiled XASL tree into rows. Three docs:
- `cubrid-query-executor.md` — `qexec_execute_mainblock_internal` dispatches on `xasl->type`, drives a uniform open/next/close loop over `SCAN_ID` operators, and pushes results into per-XASL list files; nested-loop join and merge join live here.
- `cubrid-scan-manager.md` — one polymorphic `SCAN_ID` handle, a switch-driven open/start/next/end/close protocol, and per-`SCAN_TYPE` dispatch into heap, B+Tree, list-file, set, value, JSON-table, dblink, show, parallel-heap, and method scans.
- `cubrid-list-file.md` — every materialised tuple stream (sub-query result, sort output, hash-build side, group-by accumulator, final query result) realised as one `QFILE_LIST_ID` linked-page abstraction backed by a per-query `QMGR_TEMP_FILE` membuf-then-`FILE_TEMP` substrate.
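The contract between these three layers is the classic Volcano loop. A minimal sketch, assuming simplified `scan_open`/`scan_next`/`scan_close` wrappers; the real `SCAN_ID` protocol also has start and end phases, and the `SCAN_CODE` values below mirror only a subset of the real enum:

```c
typedef enum { S_SUCCESS, S_END, S_ERROR } SCAN_CODE;  /* subset, for illustration */
typedef struct scan_id_struct SCAN_ID;                 /* polymorphic scan handle */

/* Assumed wrappers over the per-SCAN_TYPE dispatch in the scan manager. */
SCAN_CODE scan_open (SCAN_ID *s);
SCAN_CODE scan_next (SCAN_ID *s);   /* advance to the next qualifying row */
void scan_close (SCAN_ID *s);

/* One XASL block's inner loop: open, pull rows until S_END, close.
 * emit_row stands in for "append to the per-XASL list file". */
static int
exec_scan_block (SCAN_ID *scan, void (*emit_row) (SCAN_ID *))
{
  if (scan_open (scan) == S_ERROR)
    return -1;
  SCAN_CODE sc;
  while ((sc = scan_next (scan)) == S_SUCCESS)
    emit_row (scan);
  scan_close (scan);
  return (sc == S_END) ? 0 : -1;
}
```

Rows only move when the consumer asks for them; materialisation into list files happens at operator boundaries, not inside this loop.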
Runtime helpers are the operators, caches, and primitives that the back-end stages call into. They cut across the three sub-pipelines — the executor uses the predicate evaluator and the scalar function library on every row, the scan manager and post-processor share the external sorter, and the optimizer cost model sees the hash join — so they are documented as their own family rather than being slotted under one stage.
- Join algorithms. `cubrid-hash-join.md` covers Build/Probe in `query_hash_join.c` reusing the `HASH_LIST_SCAN` primitive from `query_hash_scan.c` with three table layouts (in-memory `mht_hls`, hybrid memory-index-plus-file-tuples, extendible `FHS` hash file) and grace-style equi-hash partitioning on spill. Nested-loop and merge join are documented under `cubrid-query-executor.md` because they sit directly on the Volcano scaffolding rather than introducing a new hash substrate.
- Sort and post-processing. `cubrid-external-sort.md` documents the two-phase replacement-selection-style run generator (`sort_inphase_sort`) and balanced k-way merge (`sort_exphase_merge`) over `FILE_TEMP` runs, exposed through `sort_listfile`. `cubrid-post-processing.md` documents `qexec_groupby` and `qexec_execute_analytic` — the sort-vs-hash GROUP BY strategy choice, the hash-spill fallback to external sort when the table outgrows `max_agg_hash_size`, and window/analytic frame execution.
- Predicates and operators. `cubrid-query-evaluator.md` covers the per-row keep/skip verdict — `eval_pred` walks `PRED_EXPR` trees of `T_PRED` boolean nodes and `T_EVAL_TERM` leaves under three-valued logic, and `fetch_peek_dbval` dispatches by `REGU_VARIABLE::type`. `cubrid-scalar-functions.md` is the operator-primitive layer underneath — `arithmetic.c`, `numeric_opfunc.c`, `string_opfunc.c`, `query_opfunc.c`, `crypt_opfunc.c`, and the `string_regex_*` family.
- Caches. `cubrid-runtime-memoization.md` covers the three runtime caches that share one playbook (`DB_VALUE`-array hash key, fail-on-full budget, hit-ratio guard; sketched after this list) at three lifecycle scopes — per-XASL `sq_cache` for uncorrelated scalar-subquery results, server-wide per-BTID `fpcache` for deserialised function-index predicates, and per-XASL `memoize::storage` for nested-loop-join inner-side tuples. The XASL plan cache is documented separately under the middle-end.
- Specialised features. `cubrid-partition.md` (range/hash/list partitions, `PRUNING_CONTEXT` pruning at optimize time, per-partition routing at execute time), `cubrid-cursor.md` (client-side `CURSOR_ID` over a server-side `QFILE_LIST_ID`, holdable cursors surviving COMMIT), `cubrid-serial.md` (sequence/auto-increment state in `_db_serial`, exclusive object locking, client-side caching), and `cubrid-parallel-query.md` (one global parallel-query worker pool plus three operator-specific orchestrators for heap scan, hash join, and query execute).
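The playbook named in the Caches bullet is concrete enough to sketch. Every name and number below (the struct, the 100-lookup sample, the 10% threshold) is invented for illustration; the real budgets and guards are documented in `cubrid-runtime-memoization.md`:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct db_value DB_VALUE;   /* cache keys are DB_VALUE arrays */

typedef struct
{
  size_t entries, capacity;   /* fail-on-full: stop inserting at capacity */
  size_t lookups, hits;       /* inputs to the hit-ratio guard */
  bool disabled;              /* guard tripped: stop paying lookup cost */
} MEMO_CACHE;                 /* hypothetical; stands in for all three caches */

/* Assumed: hash the DB_VALUE array that identifies the cached computation
 * (e.g. the correlated inputs of a scalar subquery). */
unsigned int memo_hash_key (const DB_VALUE *key, int n_vals);

static bool
memo_should_insert (const MEMO_CACHE *c)
{
  return !c->disabled && c->entries < c->capacity;  /* fail-on-full budget */
}

static void
memo_after_lookup (MEMO_CACHE *c, bool hit)
{
  c->lookups++;
  if (hit)
    c->hits++;
  /* Hit-ratio guard: a cache that rarely hits costs more than it saves,
   * so after a sample it turns itself off.  Thresholds are invented. */
  if (c->lookups >= 100 && c->hits * 10 < c->lookups)
    c->disabled = true;
}
```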
The Mermaid diagram below maps every detail doc to the stage it implements. The vertical axis is the data flow (text at top, tuples at bottom); the runtime helpers are drawn off to the side because they are called into rather than passed through.
```mermaid
flowchart TB
SQL["SQL text<br/>(from broker / CAS)"]
subgraph FE["Front-end (compile)"]
direction TB
Parser["cubrid-parser.md<br/>Flex + GLR Bison → PT_NODE"]
SC["cubrid-semantic-check.md<br/>name res, type-check, CNF"]
Rewrite["cubrid-query-rewrite.md<br/>predicate pushdown, view inline,<br/>subquery flatten, LIMIT lowering"]
Parser --> SC --> Rewrite
end
subgraph ME["Middle-end (plan)"]
direction TB
Opt["cubrid-query-optimizer.md<br/>QO_ENV graph, DP join enum,<br/>System-R cost model → QO_PLAN"]
XGen["cubrid-xasl-generator.md<br/>QO_PLAN → XASL_NODE tree<br/>· xts_* serialise"]
XCache["cubrid-xasl-cache.md<br/>SHA-1 plan cache, RT recompile,<br/>per-class OID invalidation"]
Opt --> XGen --> XCache
end
subgraph BE["Back-end (execute)"]
direction TB
Exec["cubrid-query-executor.md<br/>qexec_execute_mainblock_internal<br/>Volcano open/next/close"]
Scan["cubrid-scan-manager.md<br/>SCAN_ID dispatch →<br/>heap / btree / list / set / json / dblink / px"]
LF["cubrid-list-file.md<br/>QFILE_LIST_ID linked pages<br/>(materialise & re-scan)"]
Exec --> Scan
Exec <--> LF
end
subgraph RT["Runtime helpers"]
direction TB
Eval["cubrid-query-evaluator.md<br/>PRED_EXPR walk + fetch_peek_dbval"]
Scalar["cubrid-scalar-functions.md<br/>arithmetic / string / numeric /<br/>JSON / regex / crypt"]
Sort["cubrid-external-sort.md<br/>run-gen + k-way merge"]
Post["cubrid-post-processing.md<br/>GROUP BY (sort vs hash) /<br/>analytic / window"]
HJ["cubrid-hash-join.md<br/>Build/Probe + HASH_LIST_SCAN<br/>· grace-style spill"]
Mem["cubrid-runtime-memoization.md<br/>sq_cache / fpcache / memoize"]
end
subgraph SPEC["Specialised features"]
direction TB
Part["cubrid-partition.md<br/>range/hash/list, PRUNING_CONTEXT"]
Cur["cubrid-cursor.md<br/>CURSOR_ID over QFILE_LIST_ID,<br/>holdable / scrollable"]
Ser["cubrid-serial.md<br/>_db_serial, AUTO_INCREMENT,<br/>cached values"]
Par["cubrid-parallel-query.md<br/>parallel-query worker pool,<br/>parallel-heap / hash-join / execute"]
end
Tuples["Result tuples<br/>(to cursor / broker / client)"]
SQL --> FE --> ME --> BE --> Tuples
Exec -.calls.-> Eval
Eval -.calls.-> Scalar
Exec -.uses.-> Sort
Exec -.uses.-> Post
Exec -.uses.-> HJ
Exec -.uses.-> Mem
Scan -.uses.-> Mem
ME -.influences.-> Part
BE -.dispatches.-> Part
BE --> Cur
Exec -.uses.-> Ser
BE -.parallelises via.-> Par
classDef fe fill:#eef,stroke:#557
classDef me fill:#efe,stroke:#575
classDef be fill:#fee,stroke:#755
classDef rt fill:#fef,stroke:#757
classDef spec fill:#ffe,stroke:#775
class Parser,SC,Rewrite fe
class Opt,XGen,XCache me
class Exec,Scan,LF be
class Eval,Scalar,Sort,Post,HJ,Mem rt
class Part,Cur,Ser,Par spec
```
The diagram understates one thing: the XASL cache is consulted at the
top of the back-end pipeline, not the bottom of the middle-end. On a
prepared-statement re-execute the executor reads the cached plan
directly and the front-end plus the rest of the middle-end are
skipped. The diagram puts the cache last in the middle-end because
it is the output boundary of compile work; on the next execute the
cache short-circuits everything above it. cubrid-xasl-cache.md is
explicit about this hot path.
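A sketch of that short circuit, with the key derivation, flag handling, and drift test collapsed into hypothetical helpers (the real logic lives in the `xcache_*` family):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct xasl_cache_ent XASL_CACHE_ENTRY;   /* cached, serialised plan */

/* Assumed helpers, for illustration only. */
XASL_CACHE_ENTRY *xcache_lookup (const char *sha1_of_rewritten_sql);
bool rt_needs_recompile (const XASL_CACHE_ENTRY *e);   /* cardinality drift */

static XASL_CACHE_ENTRY *
get_plan (const char *sha1, XASL_CACHE_ENTRY *(*compile_full_pipeline) (void))
{
  XASL_CACHE_ENTRY *e = xcache_lookup (sha1);
  if (e != NULL && !rt_needs_recompile (e))
    return e;   /* hot path: parser, checker, rewriter, optimizer all skipped */
  return compile_full_pipeline ();   /* cold path or drift: recompile, re-insert */
}
```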
## Reading order

The dependency graph between detail docs is mostly linear within each sub-pipeline and mostly fan-out from the back-end to the runtime helpers. The following order is the recommended read for someone new to the section who wants the spine before the branches.
Front-end, in order. Each stage consumes the output of the one before it and the data structures only exist after the previous stage runs.
1. `cubrid-parser.md` — start here. The `PT_NODE` tree it produces is the working representation that the next two passes mutate, and knowing its shape (polymorphic tagged union, three function-pointer arrays, per-`PARSER_CONTEXT` block allocator) is a prerequisite for everything else in the front-end.
2. `cubrid-semantic-check.md` — read this once you can read a `PT_NODE` tree. The four passes (name resolution, where-clause aggregate check, host-variable replacement, `semantic_check_local`) each annotate the tree in place; knowing which fields each pass fills in is the only way to read later code that consumes them.
3. `cubrid-query-rewrite.md` — read this last in the front-end. The `mq_rewrite` family of transformations operates on a fully resolved and type-checked tree, so it presupposes the work of the previous two docs. The LIMIT-clause case study is the most concrete worked example of how a single source-language feature spreads across semantic check, rewrite, plan generation, and runtime.
Middle-end, in order. The optimizer’s QO_PLAN is the input to
the XASL generator’s gen_outer/gen_inner walk; the XASL cache
keys off the rewritten-SQL hash, not the optimizer output, so
strictly speaking the cache is parallel to the generator, but the
cache is easier to understand once you know what it caches.
4. `cubrid-query-optimizer.md` — the `QO_ENV` query graph, the System R-style cost model, and the dynamic-programming join enumeration. Read this once you have the rewritten `PT_NODE` tree from doc 3.
5. `cubrid-xasl-generator.md` — `gen_outer`/`gen_inner` walking the `QO_PLAN` to produce `XASL_NODE`, with `aptr`/`dptr`/`scan_ptr` slot semantics and `REGU_VARIABLE`/`ACCESS_SPEC`/`OUTPTR_LIST` sub-IRs. Read this after the optimizer because it is the downstream consumer.
6. `cubrid-xasl-cache.md` — the SHA-1-keyed latch-free hashmap with recompile-threshold drift detection and per-class OID invalidation. Read this last in the middle-end because it presupposes the serialised XASL byte buffer the generator emits.
Back-end, in order. The executor is the spine; the scan manager sits one layer below it; list-file is the materialisation substrate both depend on. Read top-down.
7. `cubrid-query-executor.md` — `qexec_execute_mainblock_internal`, the proc-type dispatch, the Volcano open/next/close loop, and nested-loop and merge join.
8. `cubrid-scan-manager.md` — the polymorphic `SCAN_ID` handle and the per-`SCAN_TYPE` dispatch. Read this immediately after the executor; the executor calls into it on every iteration.
9. `cubrid-list-file.md` — the `QFILE_LIST_ID` linked-page abstraction. Read this third in the back-end because both the executor (which writes to it) and the scan manager (which reads from it via list scan) depend on it; understanding the substrate makes the first two docs’ references to “the list file” concrete.
The row-level glue. Once the back-end is in your head, read this:
10. `cubrid-query-evaluator.md` — the per-row keep/skip verdict. `eval_pred` walks `PRED_EXPR` and `fetch_peek_dbval` resolves `REGU_VARIABLE`s. Every back-end operator calls into this on every candidate row.
Deeper into the operators. The next three docs cover the heavyweight algorithms you can ignore on a first pass but cannot ignore once you have a query that uses them.
11. `cubrid-hash-join.md` — Build/Probe, three table layouts, grace-style spill. Pair it with the executor doc when you need to follow `HASHJOIN_PROC`.
12. `cubrid-post-processing.md` — `qexec_groupby` and `qexec_execute_analytic`, sort-vs-hash GROUP BY, and the hash-spill fallback to external sort.
13. `cubrid-external-sort.md` — `sort_listfile` and the two-phase run generation + k-way merge. Used by post-processing, ORDER BY, DISTINCT, B+Tree bulk load, and parallel index build (the two-phase shape is sketched after this list).
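Before reading doc 13, it helps to fix the two-phase shape in your head. A skeleton under invented names; the real pair is `sort_inphase_sort` and `sort_exphase_merge` behind `sort_listfile`:

```c
enum { MERGE_FANIN = 8, MAX_RUNS = 1024 };   /* illustrative K and bound */

typedef struct temp_run TEMP_RUN;   /* one sorted FILE_TEMP-backed run */

/* Assumed helpers: phase 1 cuts the input into sorted runs that fit in the
 * sort buffer; merge_runs merges up to K runs into one longer run. */
int gen_sorted_runs (TEMP_RUN *runs[], int max_runs);
TEMP_RUN *merge_runs (TEMP_RUN *in[], int n);

static TEMP_RUN *
external_sort (void)
{
  TEMP_RUN *runs[MAX_RUNS];
  int n = gen_sorted_runs (runs, MAX_RUNS);   /* phase 1: run generation */
  while (n > 1)                               /* phase 2: balanced k-way merge */
    {
      int out = 0;
      for (int i = 0; i < n; i += MERGE_FANIN)
        {
          int k = (n - i < MERGE_FANIN) ? n - i : MERGE_FANIN;
          runs[out++] = merge_runs (&runs[i], k);   /* one merge pass */
        }
      n = out;   /* each pass divides the run count by up to K */
    }
  return (n == 1) ? runs[0] : NULL;
}
```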
Specialised features. Read on demand. None of these are on the spine of the most common query, but each is required for one major feature class.
14. `cubrid-partition.md` — partition pruning at optimize time and per-partition routing at execute time.
15. `cubrid-parallel-query.md` — the parallel-query worker pool and the three operator-specific orchestrators (heap scan, hash join, query execute).
16. `cubrid-cursor.md` — client-side `CURSOR_ID` over a server-side `QFILE_LIST_ID`, holdable cursors surviving COMMIT.
17. `cubrid-serial.md` — sequences and auto-increment via `_db_serial` rows.
18. `cubrid-runtime-memoization.md` — the three small caches (`sq_cache`, `fpcache`, `memoize::storage`) that share one playbook.
19. `cubrid-scalar-functions.md` — the operator-primitive library. Read this when you need to know what happens once a `T_EVAL_TERM` leaf calls into `qdata_*` or `db_string_*`.
The reading-path doc `cubrid-rpath-select.md` threads docs 1–10 (and a few others outside this section) by walking one literal `SELECT * FROM t WHERE x > 10`. Read it after the front-end and back-end basics (docs 1, 2, 4, 7) when you want to see the whole spine in one continuous narrative.
## Cross-cutting concerns

Three threads cut across more than one detail doc and are worth naming explicitly so they do not surprise the reader.
Statistics flow. The optimizer’s cost model is statistics-driven,
but the catalog tables that store statistics, and the
xstats_update_statistics server-side walk that populates them, live
under the DDL & Schema subcategory in cubrid-statistics.md. The
consumer of statistics is here. cubrid-query-optimizer.md calls
out the consumption interface — qo_get_attr_info, qo_iscan_cost,
qo_sscan_cost, qo_equal_selectivity, qo_range_selectivity — and
cubrid-xasl-cache.md describes the recompile-threshold (RT) check
that watches cardinality drift and triggers a soft recompile when
statistics have moved enough since the cached plan was generated.
Read cubrid-statistics.md once for the producer side; the consumer
side is documented in this section.
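The consumer-side drift test is easy to sketch in the abstract. The formula and the 10% threshold below are illustrative only, not CUBRID's actual RT computation:

```c
#include <stdbool.h>

/* Compare the cardinality (e.g. heap page count) the plan was compiled
 * against with the current statistics; flag a soft recompile when the
 * relative drift crosses a threshold, in either direction. */
static bool
rt_drifted (long long pages_at_compile, long long pages_now)
{
  long long base = pages_at_compile > 0 ? pages_at_compile : 1;
  long long diff = pages_now > pages_at_compile
    ? pages_now - pages_at_compile
    : pages_at_compile - pages_now;
  return diff * 10 >= base;   /* >= 10% drift -- invented threshold */
}
```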
List-file as the universal materialisation pipe. Every operator
in CUBRID that has to materialise — sort output, hash build,
sub-query result, group-by accumulator, final result for cursor
read-back — uses the same QFILE_LIST_ID substrate. The detail doc
cubrid-list-file.md is the source of truth, but the substrate
shows up everywhere: cubrid-query-executor.md writes the per-XASL
list file the cursor reads back; cubrid-external-sort.md writes
sorted runs into list files; cubrid-post-processing.md accumulates
GROUP BY rows into list files; cubrid-hash-join.md spills the build
side to list-file-backed FILE_TEMP runs; cubrid-cursor.md reads
list files back out one network page at a time. Knowing the list-file
abstraction once unlocks the materialisation paragraph in every other
doc.
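The round trip every one of those consumers performs has the same shape. A sketch with illustrative wrappers standing in for the real `qfile_*` API:

```c
typedef struct qfile_list_id QFILE_LIST_ID;               /* linked-page stream */
typedef struct { const void *data; int length; } TUPLE;   /* simplified tuple */

/* Assumed wrappers: open a list file, append tuples, scan them back. */
QFILE_LIST_ID *lf_open (void);
void lf_append (QFILE_LIST_ID *lf, const TUPLE *t);   /* writer side */
int lf_scan_next (QFILE_LIST_ID *lf, TUPLE *out);     /* reader side; 0 at end */
void lf_close (QFILE_LIST_ID *lf);

/* e.g. a sort or group-by operator: materialise everything, then re-scan. */
static void
materialise_and_rescan (const TUPLE *in, int n, void (*consume) (const TUPLE *))
{
  QFILE_LIST_ID *lf = lf_open ();   /* membuf first, FILE_TEMP on overflow */
  for (int i = 0; i < n; i++)
    lf_append (lf, &in[i]);
  TUPLE t;
  while (lf_scan_next (lf, &t))     /* read back via a list scan */
    consume (&t);
  lf_close (lf);
}
```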
Predicate flow from PT_NODE through PRED_EXPR to eval_pred. A
predicate has three lifetime stages, each with its own data
structure and its own owning doc. The parser produces a PT_NODE
tree (e.g. a PT_EXPR of opcode > over a PT_NAME and a
PT_VALUE); semantic check resolves and type-checks it; CNF
normalisation rewrites the boolean structure; query rewrite pushes it
toward the cheapest evaluation site (sargable into an index range,
into a join condition, or left as a residual filter). The XASL
generator (doc 5) lowers it into a PRED_EXPR tree of T_PRED
boolean nodes and T_EVAL_TERM leaves over REGU_VARIABLE
operands. The executor (doc 7) and the scan manager (doc 8) attach
the right PRED_EXPR to the right scan (sargable predicates go
into B+Tree key range terms, residual predicates stay on
where_pred); the evaluator (doc 10) walks the PRED_EXPR and
calls fetch_peek_dbval to resolve operands, with three-valued
logic on the boolean operators. So the same predicate is touched by
parser → semantic-check → rewrite → optimizer → XASL generator →
executor → scan manager → evaluator → scalar functions, nine docs
in this section. The rewrite doc, the XASL generator doc, and the
evaluator doc each carry the most concrete code snippets for their
slice of the lifetime.
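The last stage of that lifetime, the walk under three-valued logic, is compact enough to sketch. The node layout below is simplified (the real `PRED_EXPR` carries more term kinds, and `eval_pred` short-circuits where it can), but the NULL-handling rules are the standard SQL ones:

```c
/* SQL three-valued logic: UNKNOWN models a comparison against NULL. */
typedef enum { V_FALSE = 0, V_TRUE = 1, V_UNKNOWN = 2 } TVL;

typedef struct pred PRED;
struct pred
{
  enum { PRED_AND, PRED_OR, PRED_TERM } type;
  PRED *lhs, *rhs;            /* children of an AND / OR node */
  TVL (*eval_term) (PRED *);  /* leaf: fetch operands, compare */
};

static TVL
eval (PRED *p)
{
  if (p->type == PRED_TERM)
    return p->eval_term (p);  /* fetch_peek_dbval + comparison in the real code */

  TVL l = eval (p->lhs);
  TVL r = eval (p->rhs);
  if (p->type == PRED_AND)    /* AND: FALSE dominates, UNKNOWN is sticky */
    {
      if (l == V_FALSE || r == V_FALSE)
        return V_FALSE;
      return (l == V_UNKNOWN || r == V_UNKNOWN) ? V_UNKNOWN : V_TRUE;
    }
  /* OR: TRUE dominates, UNKNOWN is sticky */
  if (l == V_TRUE || r == V_TRUE)
    return V_TRUE;
  return (l == V_UNKNOWN || r == V_UNKNOWN) ? V_UNKNOWN : V_FALSE;
}
```

A WHERE clause keeps a row only on V_TRUE; V_UNKNOWN filters it out just like V_FALSE.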
## Detail-doc summaries

| Doc | Stage | One-line job |
|---|---|---|
| `cubrid-parser.md` | Front-end | Flex lexer + GLR Bison grammar produces a `PT_NODE` parse tree out of a per-`PARSER_CONTEXT` block allocator. |
| `cubrid-semantic-check.md` | Front-end | `pt_check_with_info` chains name resolution, the where-clause aggregate check, host-variable replacement, `semantic_check_local` (`pt_semantic_type` for type checking and constant folding), and `pt_cnf` for CNF normalisation. |
| `cubrid-query-rewrite.md` | Front-end | `mq_rewrite` runs predicate pushdown, view inlining, subquery flattening, outer-join reduction, redundant-join elimination, auto-parameterisation, and the LIMIT-clause lowering that produces `INST_NUM` / `ORDERBY_NUM` / `GROUPBY_NUM` predicates. |
| `cubrid-query-optimizer.md` | Middle-end | `qo_optimize_query` builds a `QO_ENV` query graph, runs partial-then-total dynamic-programming join enumeration with a System R-style cost model, and emits a `QO_PLAN` tree. |
| `cubrid-xasl-generator.md` | Middle-end | Recursive `gen_outer`/`gen_inner` in `xasl_generation.c` walk the plan and produce an `XASL_NODE` tree (with `aptr`/`dptr`/`scan_ptr` slots and `REGU_VARIABLE` / `ACCESS_SPEC` / `OUTPTR_LIST` sub-IRs); `xts_*` serialises it. |
| `cubrid-xasl-cache.md` | Middle-end | Latch-free server-wide hashmap keyed on SHA-1 of the rewritten SQL, with `cache_flag` refcount, recompile-threshold drift detection, and `xcache_remove_by_oid` per-class invalidation hooked off DDL. |
| `cubrid-query-executor.md` | Back-end | `qexec_execute_mainblock_internal` dispatches on `xasl->type` and drives a Volcano-style open/next/close loop, with nested-loop and merge join sitting directly on top. |
| `cubrid-scan-manager.md` | Back-end | One polymorphic `SCAN_ID` handle, switch-driven open/start/next/end/close, and per-`SCAN_TYPE` dispatch into heap, B+Tree, list-file, set, value, JSON-table, dblink, show, parallel-heap, and method scans. |
| `cubrid-list-file.md` | Back-end | Every materialised tuple stream (sub-query result, sort output, hash build, group-by accumulator, final result) realised as one `QFILE_LIST_ID` over a per-query `QMGR_TEMP_FILE` membuf-then-`FILE_TEMP`. |
| `cubrid-query-evaluator.md` | Runtime helper | `eval_pred` walks `PRED_EXPR` boolean trees under three-valued logic; `fetch_peek_dbval` dispatches by `REGU_VARIABLE::type` into the right per-row resolver. |
| `cubrid-scalar-functions.md` | Runtime helper | The operator-primitive layer: `arithmetic.c`, `numeric_opfunc.c`, `string_opfunc.c`, `query_opfunc.c`, `crypt_opfunc.c`, and the `string_regex_*` family that the evaluator dispatches into. |
| `cubrid-external-sort.md` | Runtime helper | Two-phase replacement-selection-style run generator (`sort_inphase_sort`) plus balanced k-way merge (`sort_exphase_merge`) over `FILE_TEMP` runs, exposed through `sort_listfile`. |
| `cubrid-post-processing.md` | Runtime helper | `qexec_groupby` and `qexec_execute_analytic` — sort-vs-hash GROUP BY runtime choice, hash-spill fallback to external sort, window/analytic frame execution. |
| `cubrid-hash-join.md` | Runtime helper | Build/Probe in `query_hash_join.c` reusing the `HASH_LIST_SCAN` primitive with three table layouts and grace-style equi-hash partitioning on spill. |
| `cubrid-runtime-memoization.md` | Runtime helper | Three caches sharing one playbook (`DB_VALUE`-array hash key, fail-on-full budget, hit-ratio guard) at three lifecycle scopes — `sq_cache`, `fpcache`, `memoize::storage`. |
| `cubrid-partition.md` | Specialised | Range/hash/list partitioning, `SM_PARTITION` rule encoding on the master class, `PRUNING_CONTEXT` partition elimination at optimize time, per-partition routing at execute time. |
| `cubrid-cursor.md` | Specialised | Client-side `CURSOR_ID` over server-side `QFILE_LIST_ID`, page-at-a-time fetch via `qfile_get_list_file_page`, holdable cursors surviving COMMIT through the session-scoped list. |
| `cubrid-serial.md` | Specialised | Sequence/auto-increment via rows in `_db_serial`, advanced under exclusive object lock with optional client-side caching; AUTO_INCREMENT columns through synthesised `<class>_ai_<attr>` serials. |
| `cubrid-parallel-query.md` | Specialised | One global parallel-query worker pool, `compute_parallel_degree()` policy, three operator-specific orchestrators (`parallel_heap_scan::manager`, `parallel_query::hash_join`, `parallel_query_execute::query_executor`) on top of `cubthread::worker_pool`. |
## Adjacent sections

Three other subcategories reach into query processing or are reached into by it. Knowing where the boundary sits saves you from looking for a doc in the wrong section.
DDL & Schema. The catalog is the primary input to the front-end
(name resolution consults catalog metadata to bind every PT_NAME)
and to the middle-end (the optimizer reads statistics from the
catalog). cubrid-catalog-manager.md covers the catalog itself;
cubrid-statistics.md covers the producer side of the statistics
pipeline (the xstats_update_statistics walk that populates the
catalog’s stats records). The DDL execution path
(cubrid-ddl-execution.md) also fires xcache_remove_by_oid to
invalidate dependent XASL cache entries — that is the inbound edge
from DDL into the middle-end’s plan cache.
Storage Engine. The back-end’s scan manager dispatches into the
storage stack on every next(). cubrid-heap-manager.md is the
target of S_HEAP_SCAN; cubrid-btree.md is the target of
S_INDEX_SCAN; cubrid-page-buffer-manager.md sits underneath both;
cubrid-mvcc.md provides the visibility check that the scan manager
runs on every candidate row before handing it to the evaluator.
cubrid-list-file.md itself sits above the storage layer (it
allocates pages through FILE_TEMP from cubrid-disk-manager.md),
but its consumers are all in this section.
Procedural Language. PL queries enter the executor through the
same xqmgr_execute_query entry as SQL. cubrid-pl-javasp.md and
cubrid-pl-plcsql.md cover the PL side that builds and submits
those queries, but once a PL stored procedure issues SQL, control
crosses the network from cub_pl back into cub_server and lands
in cubrid-query-executor.md exactly like a JDBC client query
would. The PL family is documented under its own subcategory because
the language and runtime are independent of QP, even though the
queries it issues land here.