Compare commits: `1.11.lates`…`enable-pos` (56 commits; authors and dates not captured in this capture, only SHA1s)

cf4384da38, 71a6e53102, c4dc80dcd2, 8097a34726, b66dff7278, 22d21edb4b, bef7928e22, c573131d91, f10d84d05e, 79a4c8969e, 9a80308fcf, 7a13d08376, 9e9f5b8e57, 9cd6a23eba, e46c37cf07, df23f398a6, 97df9278c0, 748d352b6b, bbd8fa02f1, 61009f6ba7, ee7ecdc29f, d74b58a137, 12b04e7d2f, 5d56a052a7, 62a8ea05a6, 1219bd49aa, 791d1ebdcd, 148b9b41a5, d096a6776e, 8ff86d35ea, 087f8167ec, bcb07ceb7b, c559848044, 3de0160b00, 2c7f49a71e, 518c360a29, 8cf51fddba, 8e128eee8e, 94b69b1578, 0216e32c7f, bbd078089e, 575bac3172, bca2211246, 0015e35a1b, 09bce7af63, cb7c4a7dce, 5555a3dd25, 0e30db4e82, b2ff6ab5a7, 48218be274, 500208c009, 0162b71e94, 811e4ee955, b79ec3c33b, e32718e666, f6e0793d00
@@ -1,151 +0,0 @@
## dbt-core 1.11.0 - December 19, 2025

### Features

- Add file_format to catalog integration config ([#11695](https://github.com/dbt-labs/dbt-core/issues/11695))
- Deprecate `--models`, `--model`, and `-m` flags ([#11561](https://github.com/dbt-labs/dbt-core/issues/11561))
- Update jsonschemas with builtin data test properties and exposure configs in dbt_project.yml for more accurate deprecations ([#11335](https://github.com/dbt-labs/dbt-core/issues/11335))
- Support loaded_at_query and loaded_at_field on source and table configs ([#11659](https://github.com/dbt-labs/dbt-core/issues/11659))
- Begin validating configs from model sql files ([#11727](https://github.com/dbt-labs/dbt-core/issues/11727))
- Deprecate `overrides` property for sources ([#11566](https://github.com/dbt-labs/dbt-core/issues/11566))
- Create constrained namespace for dbt engine env vars ([#11340](https://github.com/dbt-labs/dbt-core/issues/11340))
- Gate jsonschema validations by adapter ([#11680](https://github.com/dbt-labs/dbt-core/issues/11680))
- Deprecate top-level argument properties in generic tests ([#11847](https://github.com/dbt-labs/dbt-core/issues/11847))
- Deprecate {{ modules.itertools }} usage ([#11725](https://github.com/dbt-labs/dbt-core/issues/11725))
- Default require_generic_test_arguments_property flag to True - the 'arguments' property will be parsed as keyword arguments to data tests, if provided ([#11911](https://github.com/dbt-labs/dbt-core/issues/11911))
- Support nested key traversal in dbt ls json output ([#11919](https://github.com/dbt-labs/dbt-core/issues/11919))
- No-op when project-level `quoting.snowflake_ignore_case` is set ([#11882](https://github.com/dbt-labs/dbt-core/issues/11882))
- Support UDFs by allowing user definition of `function` nodes ([#11923](https://github.com/dbt-labs/dbt-core/issues/11923))
- Support listing functions via `list` command ([#11967](https://github.com/dbt-labs/dbt-core/issues/11967))
- Support selecting function nodes via name, file path, and resource type ([#11962](https://github.com/dbt-labs/dbt-core/issues/11962), [#11958](https://github.com/dbt-labs/dbt-core/issues/11958), [#11961](https://github.com/dbt-labs/dbt-core/issues/11961))
- Parse catalogs.yml during parse, seed, and test commands ([#12002](https://github.com/dbt-labs/dbt-core/issues/12002))
- Handle creation of function nodes during DAG execution ([#11965](https://github.com/dbt-labs/dbt-core/issues/11965))
- Support configuring model.config.freshness.build_after.updates_on without period or count ([#12019](https://github.com/dbt-labs/dbt-core/issues/12019))
- Add `function` macro to jinja context ([#11972](https://github.com/dbt-labs/dbt-core/issues/11972))
- Add run_started_at to manifest.json metadata ([#12047](https://github.com/dbt-labs/dbt-core/issues/12047))
- Validate {{ config }} in SQL for models that don't statically parse ([#12046](https://github.com/dbt-labs/dbt-core/issues/12046))
- Add `type` property to `function` nodes ([#12042](https://github.com/dbt-labs/dbt-core/issues/12042), [#12037](https://github.com/dbt-labs/dbt-core/issues/12037))
- Support function nodes for unit tested models ([#12024](https://github.com/dbt-labs/dbt-core/issues/12024))
- Support partial parsing for function nodes ([#12072](https://github.com/dbt-labs/dbt-core/issues/12072))
- Allow for the specification of function volatility ([#12030](https://github.com/dbt-labs/dbt-core/issues/12030))
- Add python UDF parsing support ([#12043](https://github.com/dbt-labs/dbt-core/issues/12043))
- Allow for defining function arguments with default values ([#12044](https://github.com/dbt-labs/dbt-core/issues/12044))
- Raise jsonschema-based deprecation warnings by default ([#12240](https://github.com/dbt-labs/dbt-core/issues/12240))
- :bug: :snowman: Disable unit tests whose model is disabled ([#10540](https://github.com/dbt-labs/dbt-core/issues/10540))
- Implement config.meta_get and config.meta_require ([#12012](https://github.com/dbt-labs/dbt-core/issues/12012))

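The `config.meta_get` / `config.meta_require` feature ([#12012]) pairs with a later fix for an omitted `return` in `meta_require` ([#12288]). A minimal Python sketch of the semantics those names suggest, a lookup with a fallback versus a strict lookup over a dict-like `config.meta`; the class and method bodies here are hypothetical illustrations, not dbt-core's implementation:

```python
# Hypothetical sketch of meta_get/meta_require semantics.
# NOT dbt-core's code: class name, signatures, and error type are assumptions.

class RuntimeConfigSketch:
    def __init__(self, meta):
        # config.meta is modeled as a plain dict of user-supplied metadata
        self.meta = dict(meta)

    def meta_get(self, key, default=None):
        # Lenient lookup: returns the default instead of raising
        return self.meta.get(key, default)

    def meta_require(self, key):
        # Strict lookup: raises when the key is absent.
        if key not in self.meta:
            raise KeyError(f"required meta key {key!r} is not set")
        # The #12288 fix concerned an omitted return statement like this one
        return self.meta[key]


cfg = RuntimeConfigSketch({"owner": "analytics"})
cfg.meta_get("owner")                 # value is present
cfg.meta_get("slack", "#general")     # falls back to the default
cfg.meta_require("owner")             # present, so no error
```

The lenient/strict split mirrors the common `dict.get` vs. `dict[...]` distinction, which is the plausible design motivation for offering both.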
### Fixes

- Don't warn for metricflow_time_spine with non-day grain ([#11690](https://github.com/dbt-labs/dbt-core/issues/11690))
- Fix source freshness set via config to handle explicit nulls ([#11685](https://github.com/dbt-labs/dbt-core/issues/11685))
- Ensure build_after is present in model freshness in parsing, otherwise skip freshness definition ([#11709](https://github.com/dbt-labs/dbt-core/issues/11709))
- Ensure source node `.freshness` is equal to node's `.config.freshness` ([#11717](https://github.com/dbt-labs/dbt-core/issues/11717))
- Ignore invalid model freshness configs in inline model configs ([#11728](https://github.com/dbt-labs/dbt-core/issues/11728))
- Fix store_failures hierarchical config parsing ([#10165](https://github.com/dbt-labs/dbt-core/issues/10165))
- Remove model freshness property support in favor of config level support ([#11713](https://github.com/dbt-labs/dbt-core/issues/11713))
- Bump dbt-common to 1.25.0 to access WarnErrorOptionsV2 ([#11755](https://github.com/dbt-labs/dbt-core/issues/11755))
- Ensure consistent casing in column names while processing user unit tests ([#11770](https://github.com/dbt-labs/dbt-core/issues/11770))
- Update jsonschema definitions with nested config defs, cloud info, and dropping source overrides ([#N/A](https://github.com/dbt-labs/dbt-core/issues/N/A))
- Make `GenericJSONSchemaValidationDeprecation` a "preview" deprecation ([#11814](https://github.com/dbt-labs/dbt-core/issues/11814))
- Correct JSONSchema Semantic Layer node issues ([#11818](https://github.com/dbt-labs/dbt-core/issues/11818))
- Improve SL JSONSchema definitions ([#N/A](https://github.com/dbt-labs/dbt-core/issues/N/A))
- Raise MissingPlusPrefixDeprecation instead of GenericJSONSchemaValidationDeprecation when a config is missing the plus prefix in dbt_project.yml ([#11826](https://github.com/dbt-labs/dbt-core/issues/11826))
- Propagate config.meta and config.tags to top-level on source nodes ([#11839](https://github.com/dbt-labs/dbt-core/issues/11839))
- Safe handling of malformed config.tags on sources/tables ([#11855](https://github.com/dbt-labs/dbt-core/issues/11855))
- Quote the event_time field when the configuration says so ([#11858](https://github.com/dbt-labs/dbt-core/issues/11858))
- Raise PropertyMovedToConfigDeprecation instead of CustomTopLevelKeyDeprecation when additional attribute is a valid node config ([#11879](https://github.com/dbt-labs/dbt-core/issues/11879))
- Avoid redundant node patch removal during partial parsing ([#11886](https://github.com/dbt-labs/dbt-core/issues/11886))
- Comply with strict `str` type when `block.contents` is `None` ([#11492](https://github.com/dbt-labs/dbt-core/issues/11492))
- Remove duplicative PropertyMovedToConfigDeprecation for source freshness ([#11880](https://github.com/dbt-labs/dbt-core/issues/11880))
- Add path to MissingArgumentsPropertyInGenericTestDeprecation message ([#11940](https://github.com/dbt-labs/dbt-core/issues/11940))
- Unhide sample mode CLI flag ([#11959](https://github.com/dbt-labs/dbt-core/issues/11959))
- Implement checked_agg_time_dimension_for_simple_metric to satisfy dbt-semantic-interfaces>0.9.0 ([#11998](https://github.com/dbt-labs/dbt-core/issues/11998))
- Propagate column meta/tags from config to tests ([#11984](https://github.com/dbt-labs/dbt-core/issues/11984))
- Skip initial render of loaded_at_query when specified as source or table config ([#11973](https://github.com/dbt-labs/dbt-core/issues/11973))
- Guarantee instantiation of result and thread_exception prior to access to avoid thread hangs ([#12013](https://github.com/dbt-labs/dbt-core/issues/12013))
- Fix a bug in the logic for legacy time spine deprecation warnings ([#11690](https://github.com/dbt-labs/dbt-core/issues/11690))
- Fix errors in partial parsing when working with versioned models ([#11869](https://github.com/dbt-labs/dbt-core/issues/11869))
- Fix property names of function nodes (arguments.data_type, returns, returns.data_type) ([#12064](https://github.com/dbt-labs/dbt-core/issues/12064))
- Exclude `functions` from being filtered by `empty` and `event_time` ([#12066](https://github.com/dbt-labs/dbt-core/issues/12066))
- Fix case of successful function status in logs ([#12075](https://github.com/dbt-labs/dbt-core/issues/12075))
- Fix `ref` support in function nodes ([#12076](https://github.com/dbt-labs/dbt-core/issues/12076))
- Move function node `type` into its config ([#12101](https://github.com/dbt-labs/dbt-core/issues/12101))
- Support setting function node configs from dbt_project.yml ([#12096](https://github.com/dbt-labs/dbt-core/issues/12096))
- Fix parse error when build_after.count set to 0 ([#12136](https://github.com/dbt-labs/dbt-core/issues/12136))
- Stop compiling python UDFs like python models ([#12153](https://github.com/dbt-labs/dbt-core/issues/12153))
- For metric names, fix bug allowing hyphens (not allowed in metricflow already), make validation throw ValidationErrors (not ParsingErrors), and add tests ([#n/a](https://github.com/dbt-labs/dbt-core/issues/n/a))
- Allow dbt deps to run when vars lack defaults in dbt_project.yml ([#8913](https://github.com/dbt-labs/dbt-core/issues/8913))
- Include macros in unit test parsing ([#10157](https://github.com/dbt-labs/dbt-core/issues/10157))
- Restore DuplicateResourceNameError for intra-project node name duplication, behind behavior flag `require_unique_project_resource_names` ([#12152](https://github.com/dbt-labs/dbt-core/issues/12152))
- Allow the usage of `function` with `--exclude-resource-type` flag ([#12143](https://github.com/dbt-labs/dbt-core/issues/12143))
- Fix bug where schemas of functions weren't guaranteed to exist ([#12142](https://github.com/dbt-labs/dbt-core/issues/12142))
- :bug: :snowman: Correctly reference foreign key references when --defer and --state provided ([#11885](https://github.com/dbt-labs/dbt-core/issues/11885))
- Fix generation of deprecations summary ([#12146](https://github.com/dbt-labs/dbt-core/issues/12146))
- :bug: :snowman: Add exception when using --state and referring to a removed test ([#10630](https://github.com/dbt-labs/dbt-core/issues/10630))
- :bug: :snowman: Stop emitting `NoNodesForSelectionCriteria` three times during `build` command ([#11627](https://github.com/dbt-labs/dbt-core/issues/11627))
- :bug: :snowman: Fix long Python stack traces appearing when package dependencies have incompatible version requirements ([#12049](https://github.com/dbt-labs/dbt-core/issues/12049))
- :bug: :snowman: Fixed issue where changing data type size/precision/scale (e.g., varchar(3) to varchar(10)) incorrectly triggered a breaking change error fo ([#11186](https://github.com/dbt-labs/dbt-core/issues/11186))
- :bug: :snowman: Support unit testing models that depend on sources with the same name ([#11975](https://github.com/dbt-labs/dbt-core/issues/11975), [#10433](https://github.com/dbt-labs/dbt-core/issues/10433))
- Fix bug in partial parsing when updating a model with a schema file that is referenced by a singular test ([#12223](https://github.com/dbt-labs/dbt-core/issues/12223))
- :bug: :snowman: Avoid retrying successful run-operation commands ([#11850](https://github.com/dbt-labs/dbt-core/issues/11850))
- :bug: :snowman: Fix `dbt deps --add-package` crash when packages.yml contains `warn-unpinned: false` ([#9104](https://github.com/dbt-labs/dbt-core/issues/9104))
- :bug: :snowman: Improve `dbt deps --add-package` duplicate detection with better cross-source matching and word boundaries ([#12239](https://github.com/dbt-labs/dbt-core/issues/12239))
- :bug: :snowman: Fix false positive deprecation warning of pre/post-hook SQL configs ([#12244](https://github.com/dbt-labs/dbt-core/issues/12244))
- :bug: :snowman: Fix ref resolution within package when duplicate nodes exist, behind require_ref_searches_node_package_before_root behavior change flag ([#11351](https://github.com/dbt-labs/dbt-core/issues/11351))
- Improve error message clarity when detecting nodes with space in name ([#11835](https://github.com/dbt-labs/dbt-core/issues/11835))
- :bug: :snowman: Propagate exceptions for NodeFinished callbacks in dbtRunner ([#11612](https://github.com/dbt-labs/dbt-core/issues/11612))
- Add omitted return statement to RuntimeConfigObject.meta_require method ([#12288](https://github.com/dbt-labs/dbt-core/issues/12288))
- Do not raise deprecation warning when encountering dataset or project configs for bigquery ([#12285](https://github.com/dbt-labs/dbt-core/issues/12285))

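The restored `DuplicateResourceNameError` ([#12152]) is gated behind the `require_unique_project_resource_names` behavior flag. A stdlib-only sketch of that flag-gating pattern, with hypothetical names and assumed semantics rather than dbt-core's actual code:

```python
# Sketch of a duplicate-name check gated by a behavior flag.
# Function name, flag default, and error message are assumptions for
# illustration; this is not dbt-core's implementation.
from collections import Counter


class DuplicateResourceNameError(Exception):
    pass


def check_unique_names(resource_names, require_unique_project_resource_names=False):
    # Collect every name that occurs more than once, preserving order
    dupes = [name for name, count in Counter(resource_names).items() if count > 1]
    if dupes and require_unique_project_resource_names:
        # With the flag on, duplicates are a hard error
        raise DuplicateResourceNameError(f"duplicate resource names: {dupes}")
    # With the flag off (legacy behavior), duplicates are merely reported
    return dupes


check_unique_names(["orders", "customers", "orders"])  # returns ["orders"]
```

Gating the stricter check behind a flag lets existing projects keep parsing while new projects opt into the error, which is the usual rationale for dbt's behavior-change flags.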
### Under the Hood

- Prevent overcounting PropertyMovedToConfigDeprecation for source freshness ([#11660](https://github.com/dbt-labs/dbt-core/issues/11660))
- Call adapter.add_catalog_integration during parse_manifest ([#11889](https://github.com/dbt-labs/dbt-core/issues/11889))
- Fix docker OS dependency install issue ([#11934](https://github.com/dbt-labs/dbt-core/issues/11934))
- Ensure dbt-core modules aren't importing versioned artifact resources directly ([#11951](https://github.com/dbt-labs/dbt-core/issues/11951))
- Update jsonschemas used for schema-based deprecations ([#11987](https://github.com/dbt-labs/dbt-core/issues/11987))
- Introduce dbt_version-based manifest json upgrade framework to avoid state:modified false positives on minor evolutions ([#12006](https://github.com/dbt-labs/dbt-core/issues/12006))
- Reorganize jsonschemas directory structure ([#12121](https://github.com/dbt-labs/dbt-core/issues/12121))
- Add dbt/jsonschemas to MANIFEST.in ([#12126](https://github.com/dbt-labs/dbt-core/issues/12126))
- Move from setup.py to pyproject.toml ([#5696](https://github.com/dbt-labs/dbt-core/issues/5696))
- Fix issue where config isn't propagated to metric from measure when set as create_metric: True ([#None](https://github.com/dbt-labs/dbt-core/issues/None))
- Support DBT_ENGINE prefix for record-mode env vars ([#12149](https://github.com/dbt-labs/dbt-core/issues/12149))
- Update jsonschemas for schema.yml and dbt_project.yml deprecations ([#12180](https://github.com/dbt-labs/dbt-core/issues/12180))
- Replace setuptools and tox with hatch for build, test, and environment management ([#12151](https://github.com/dbt-labs/dbt-core/issues/12151))
- Bump lower bound for dbt-common to 1.37.2 ([#12284](https://github.com/dbt-labs/dbt-core/issues/12284))

### Dependencies

- Bump minimum jsonschema version to `4.19.1` ([#11740](https://github.com/dbt-labs/dbt-core/issues/11740))
- Allow for either pydantic v1 or v2 ([#11634](https://github.com/dbt-labs/dbt-core/issues/11634))
- Bump dbt-common minimum to 1.25.1 ([#11789](https://github.com/dbt-labs/dbt-core/issues/11789))
- Upgrade to dbt-semantic-interfaces==0.9.0 for more robust saved query support ([#11809](https://github.com/dbt-labs/dbt-core/issues/11809))
- Upgrade protobuf to 6.0 ([#11916](https://github.com/dbt-labs/dbt-core/issues/11916))
- Bump dbt-adapters minimum to 1.16.5 ([#11932](https://github.com/dbt-labs/dbt-core/issues/11932))
- Bump actions/setup-python from 5 to 6 ([#11993](https://github.com/dbt-labs/dbt-core/issues/11993))
- Loosen dbt-semantic-interfaces lower pin to >=0.9.0 ([#12005](https://github.com/dbt-labs/dbt-core/issues/12005))
- Drop support for python 3.9 ([#12118](https://github.com/dbt-labs/dbt-core/issues/12118))

### Contributors

- [@QMalcolm](https://github.com/QMalcolm) ([#12030](https://github.com/dbt-labs/dbt-core/issues/12030))
- [@3loka](https://github.com/3loka) ([#8913](https://github.com/dbt-labs/dbt-core/issues/8913))
- [@MichelleArk](https://github.com/MichelleArk) ([#11818](https://github.com/dbt-labs/dbt-core/issues/11818))
- [@QMalcolm](https://github.com/QMalcolm) ([#11727](https://github.com/dbt-labs/dbt-core/issues/11727), [#11340](https://github.com/dbt-labs/dbt-core/issues/11340), [#11680](https://github.com/dbt-labs/dbt-core/issues/11680), [#11923](https://github.com/dbt-labs/dbt-core/issues/11923), [#11967](https://github.com/dbt-labs/dbt-core/issues/11967), [#11962](https://github.com/dbt-labs/dbt-core/issues/11962), [#11958](https://github.com/dbt-labs/dbt-core/issues/11958), [#11961](https://github.com/dbt-labs/dbt-core/issues/11961), [#11965](https://github.com/dbt-labs/dbt-core/issues/11965), [#11972](https://github.com/dbt-labs/dbt-core/issues/11972), [#12042](https://github.com/dbt-labs/dbt-core/issues/12042), [#12037](https://github.com/dbt-labs/dbt-core/issues/12037), [#12024](https://github.com/dbt-labs/dbt-core/issues/12024), [#12072](https://github.com/dbt-labs/dbt-core/issues/12072), [#12043](https://github.com/dbt-labs/dbt-core/issues/12043), [#12044](https://github.com/dbt-labs/dbt-core/issues/12044), [#11685](https://github.com/dbt-labs/dbt-core/issues/11685), [#11709](https://github.com/dbt-labs/dbt-core/issues/11709), [#11717](https://github.com/dbt-labs/dbt-core/issues/11717), [#11713](https://github.com/dbt-labs/dbt-core/issues/11713), [#N/A](https://github.com/dbt-labs/dbt-core/issues/N/A), [#11814](https://github.com/dbt-labs/dbt-core/issues/11814), [#N/A](https://github.com/dbt-labs/dbt-core/issues/N/A), [#11959](https://github.com/dbt-labs/dbt-core/issues/11959), [#12064](https://github.com/dbt-labs/dbt-core/issues/12064), [#12066](https://github.com/dbt-labs/dbt-core/issues/12066), [#12075](https://github.com/dbt-labs/dbt-core/issues/12075), [#12076](https://github.com/dbt-labs/dbt-core/issues/12076), [#12101](https://github.com/dbt-labs/dbt-core/issues/12101), [#12096](https://github.com/dbt-labs/dbt-core/issues/12096), [#12153](https://github.com/dbt-labs/dbt-core/issues/12153), [#12143](https://github.com/dbt-labs/dbt-core/issues/12143), 
[#12142](https://github.com/dbt-labs/dbt-core/issues/12142), [#11627](https://github.com/dbt-labs/dbt-core/issues/11627), [#11934](https://github.com/dbt-labs/dbt-core/issues/11934), [#11951](https://github.com/dbt-labs/dbt-core/issues/11951), [#11740](https://github.com/dbt-labs/dbt-core/issues/11740), [#11634](https://github.com/dbt-labs/dbt-core/issues/11634), [#11789](https://github.com/dbt-labs/dbt-core/issues/11789), [#11932](https://github.com/dbt-labs/dbt-core/issues/11932), [#12118](https://github.com/dbt-labs/dbt-core/issues/12118))
- [@WilliamDee](https://github.com/WilliamDee) ([#None](https://github.com/dbt-labs/dbt-core/issues/None))
- [@aksestok](https://github.com/aksestok) ([#11882](https://github.com/dbt-labs/dbt-core/issues/11882))
- [@aranke](https://github.com/aranke) ([#11660](https://github.com/dbt-labs/dbt-core/issues/11660))
- [@asiunov](https://github.com/asiunov) ([#12146](https://github.com/dbt-labs/dbt-core/issues/12146))
- [@colin-rogers-dbt](https://github.com/colin-rogers-dbt) ([#11695](https://github.com/dbt-labs/dbt-core/issues/11695), [#11728](https://github.com/dbt-labs/dbt-core/issues/11728), [#11770](https://github.com/dbt-labs/dbt-core/issues/11770), [#11889](https://github.com/dbt-labs/dbt-core/issues/11889), [#11916](https://github.com/dbt-labs/dbt-core/issues/11916))
- [@courtneyholcomb](https://github.com/courtneyholcomb) ([#11690](https://github.com/dbt-labs/dbt-core/issues/11690), [#11809](https://github.com/dbt-labs/dbt-core/issues/11809))
- [@emmyoop](https://github.com/emmyoop) ([#10630](https://github.com/dbt-labs/dbt-core/issues/10630), [#12049](https://github.com/dbt-labs/dbt-core/issues/12049), [#11186](https://github.com/dbt-labs/dbt-core/issues/11186), [#9104](https://github.com/dbt-labs/dbt-core/issues/9104), [#12239](https://github.com/dbt-labs/dbt-core/issues/12239), [#5696](https://github.com/dbt-labs/dbt-core/issues/5696), [#12151](https://github.com/dbt-labs/dbt-core/issues/12151))
- [@gshank](https://github.com/gshank) ([#12012](https://github.com/dbt-labs/dbt-core/issues/12012), [#11869](https://github.com/dbt-labs/dbt-core/issues/11869))
- [@mattogburke](https://github.com/mattogburke) ([#12223](https://github.com/dbt-labs/dbt-core/issues/12223))
- [@michellark](https://github.com/michellark) ([#11885](https://github.com/dbt-labs/dbt-core/issues/11885), [#11987](https://github.com/dbt-labs/dbt-core/issues/11987))
- [@michelleark](https://github.com/michelleark) ([#11561](https://github.com/dbt-labs/dbt-core/issues/11561), [#11335](https://github.com/dbt-labs/dbt-core/issues/11335), [#11659](https://github.com/dbt-labs/dbt-core/issues/11659), [#11847](https://github.com/dbt-labs/dbt-core/issues/11847), [#11725](https://github.com/dbt-labs/dbt-core/issues/11725), [#11911](https://github.com/dbt-labs/dbt-core/issues/11911), [#12002](https://github.com/dbt-labs/dbt-core/issues/12002), [#12019](https://github.com/dbt-labs/dbt-core/issues/12019), [#12047](https://github.com/dbt-labs/dbt-core/issues/12047), [#12046](https://github.com/dbt-labs/dbt-core/issues/12046), [#12240](https://github.com/dbt-labs/dbt-core/issues/12240), [#10540](https://github.com/dbt-labs/dbt-core/issues/10540), [#10165](https://github.com/dbt-labs/dbt-core/issues/10165), [#11755](https://github.com/dbt-labs/dbt-core/issues/11755), [#11826](https://github.com/dbt-labs/dbt-core/issues/11826), [#11839](https://github.com/dbt-labs/dbt-core/issues/11839), [#11855](https://github.com/dbt-labs/dbt-core/issues/11855), [#11879](https://github.com/dbt-labs/dbt-core/issues/11879), [#11880](https://github.com/dbt-labs/dbt-core/issues/11880), [#11940](https://github.com/dbt-labs/dbt-core/issues/11940), [#11998](https://github.com/dbt-labs/dbt-core/issues/11998), [#11984](https://github.com/dbt-labs/dbt-core/issues/11984), [#11973](https://github.com/dbt-labs/dbt-core/issues/11973), [#12013](https://github.com/dbt-labs/dbt-core/issues/12013), [#12136](https://github.com/dbt-labs/dbt-core/issues/12136), [#10157](https://github.com/dbt-labs/dbt-core/issues/10157), [#12152](https://github.com/dbt-labs/dbt-core/issues/12152), 
[#11975](https://github.com/dbt-labs/dbt-core/issues/11975), [#10433](https://github.com/dbt-labs/dbt-core/issues/10433), [#11850](https://github.com/dbt-labs/dbt-core/issues/11850), [#12244](https://github.com/dbt-labs/dbt-core/issues/12244), [#11351](https://github.com/dbt-labs/dbt-core/issues/11351), [#11835](https://github.com/dbt-labs/dbt-core/issues/11835), [#11612](https://github.com/dbt-labs/dbt-core/issues/11612), [#12285](https://github.com/dbt-labs/dbt-core/issues/12285), [#12006](https://github.com/dbt-labs/dbt-core/issues/12006), [#12121](https://github.com/dbt-labs/dbt-core/issues/12121), [#12126](https://github.com/dbt-labs/dbt-core/issues/12126), [#12149](https://github.com/dbt-labs/dbt-core/issues/12149), [#12180](https://github.com/dbt-labs/dbt-core/issues/12180), [#12284](https://github.com/dbt-labs/dbt-core/issues/12284), [#12005](https://github.com/dbt-labs/dbt-core/issues/12005))
- [@mjsqu](https://github.com/mjsqu) ([#12288](https://github.com/dbt-labs/dbt-core/issues/12288))
- [@nathanskone](https://github.com/nathanskone) ([#10157](https://github.com/dbt-labs/dbt-core/issues/10157))
- [@pablomc87](https://github.com/pablomc87) ([#11858](https://github.com/dbt-labs/dbt-core/issues/11858))
- [@peterallenwebb](https://github.com/peterallenwebb) ([#11566](https://github.com/dbt-labs/dbt-core/issues/11566))
- [@theyostalservice](https://github.com/theyostalservice) ([#n/a](https://github.com/dbt-labs/dbt-core/issues/n/a))
- [@trouze](https://github.com/trouze) ([#11919](https://github.com/dbt-labs/dbt-core/issues/11919))
- [@wircho](https://github.com/wircho) ([#11886](https://github.com/dbt-labs/dbt-core/issues/11886), [#11492](https://github.com/dbt-labs/dbt-core/issues/11492))

@@ -1,8 +0,0 @@
## dbt-core 1.11.1 - December 19, 2025

### Dependencies

- Bump minimum click to 8.2.0 ([#12305](https://github.com/dbt-labs/dbt-core/issues/12305))

### Contributors

- [@QMalcolm](https://github.com/QMalcolm) ([#12305](https://github.com/dbt-labs/dbt-core/issues/12305))

.changes/unreleased/Dependencies-20251118-155354.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Dependencies
body: Use EventCatcher from dbt-common instead of maintaining a local copy
time: 2025-11-18T15:53:54.284561+05:30
custom:
  Author: 3loka
  Issue: "12124"

.changes/unreleased/Features-20251006-140352.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Features
body: Support partial parsing for function nodes
time: 2025-10-06T14:03:52.258104-05:00
custom:
  Author: QMalcolm
  Issue: "12072"

.changes/unreleased/Features-20251117-141053.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Features
body: Allow for defining function arguments with default values
time: 2025-11-17T14:10:53.860178-06:00
custom:
  Author: QMalcolm
  Issue: "12044"

.changes/unreleased/Features-20251201-165209.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Features
body: Raise jsonschema-based deprecation warnings by default
time: 2025-12-01T16:52:09.354436-05:00
custom:
  Author: michelleark
  Issue: 12240

.changes/unreleased/Features-20251203-122926.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Features
body: ':bug: :snowman: Disable unit tests whose model is disabled'
time: 2025-12-03T12:29:26.209248-05:00
custom:
  Author: michelleark
  Issue: "10540"

.changes/unreleased/Features-20251210-202001.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Features
body: Implement config.meta_get and config.meta_require
time: 2025-12-10T20:20:01.354288-05:00
custom:
  Author: gshank
  Issue: "12012"

.changes/unreleased/Fixes-20251117-140649.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: Include macros in unit test parsing
time: 2025-11-17T14:06:49.518566-05:00
custom:
  Author: michelleark nathanskone
  Issue: "10157"

.changes/unreleased/Fixes-20251117-185025.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: Allow dbt deps to run when vars lack defaults in dbt_project.yml
time: 2025-11-17T18:50:25.759091+05:30
custom:
  Author: 3loka
  Issue: "8913"

.changes/unreleased/Fixes-20251118-171106.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: Restore DuplicateResourceNameError for intra-project node name duplication, behind behavior flag `require_unique_project_resource_names`
time: 2025-11-18T17:11:06.454784-05:00
custom:
  Author: michelleark
  Issue: "12152"

.changes/unreleased/Fixes-20251119-195034.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: Allow the usage of `function` with `--exclude-resource-type` flag
time: 2025-11-19T19:50:34.703236-06:00
custom:
  Author: QMalcolm
  Issue: "12143"

.changes/unreleased/Fixes-20251124-155629.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: Fix bug where schemas of functions weren't guaranteed to exist
time: 2025-11-24T15:56:29.467004-06:00
custom:
  Author: QMalcolm
  Issue: "12142"

.changes/unreleased/Fixes-20251124-155756.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: Fix generation of deprecations summary
time: 2025-11-24T15:57:56.544123-08:00
custom:
  Author: asiunov
  Issue: "12146"

.changes/unreleased/Fixes-20251124-170855.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: ':bug: :snowman: Correctly reference foreign key references when --defer and --state provided'
time: 2025-11-24T17:08:55.387946-05:00
custom:
  Author: michellark
  Issue: "11885"

.changes/unreleased/Fixes-20251125-120246.yaml (new file, +7)
@@ -0,0 +1,7 @@
kind: Fixes
body: ':bug: :snowman: Add exception when using --state and referring to a removed test'
time: 2025-11-25T12:02:46.635026-05:00
custom:
  Author: emmyoop
  Issue: "10630"

.changes/unreleased/Fixes-20251125-122020.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: ':bug: :snowman: Stop emitting `NoNodesForSelectionCriteria` three times during `build` command'
time: 2025-11-25T12:20:20.132379-06:00
custom:
  Author: QMalcolm
  Issue: "11627"

.changes/unreleased/Fixes-20251127-141308.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: ":bug: :snowman: Fix long Python stack traces appearing when package dependencies have incompatible version requirements"
time: 2025-11-27T14:13:08.082542-05:00
custom:
  Author: emmyoop
  Issue: "12049"

.changes/unreleased/Fixes-20251127-145929.yaml (new file, +7)
@@ -0,0 +1,7 @@
kind: Fixes
body: ':bug: :snowman: Fixed issue where changing data type size/precision/scale (e.g., varchar(3) to varchar(10)) incorrectly triggered a breaking change error fo'
time: 2025-11-27T14:59:29.256274-05:00
custom:
  Author: emmyoop
  Issue: "11186"

.changes/unreleased/Fixes-20251127-170124.yaml (new file, +6)
@@ -0,0 +1,6 @@
kind: Fixes
body: ':bug: :snowman: Support unit testing models that depend on sources with the same name'
time: 2025-11-27T17:01:24.193516-05:00
custom:
  Author: michelleark
  Issue: 11975 10433


.changes/unreleased/Fixes-20251128-102129.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
kind: Fixes
body: Fix bug in partial parsing when updating a model with a schema file that is referenced by a singular test
time: 2025-11-28T10:21:29.911147Z
custom:
  Author: mattogburke
  Issue: "12223"

.changes/unreleased/Fixes-20251128-122838.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
kind: Fixes
body: ':bug: :snowman: Avoid retrying successful run-operation commands'
time: 2025-11-28T12:28:38.546261-05:00
custom:
  Author: michelleark
  Issue: "11850"

.changes/unreleased/Fixes-20251128-161937.yaml (new file, 7 lines)
@@ -0,0 +1,7 @@
kind: Fixes
body: ':bug: :snowman: Fix `dbt deps --add-package` crash when packages.yml contains `warn-unpinned:
  false`'
time: 2025-11-28T16:19:37.608722-05:00
custom:
  Author: emmyoop
  Issue: "9104"

.changes/unreleased/Fixes-20251128-163144.yaml (new file, 7 lines)
@@ -0,0 +1,7 @@
kind: Fixes
body: ':bug: :snowman: Improve `dbt deps --add-package` duplicate detection with better
  cross-source matching and word boundaries'
time: 2025-11-28T16:31:44.344099-05:00
custom:
  Author: emmyoop
  Issue: "12239"

.changes/unreleased/Fixes-20251202-133705.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
kind: Fixes
body: ':bug: :snowman: Fix false positive deprecation warning of pre/post-hook SQL configs'
time: 2025-12-02T13:37:05.012112-05:00
custom:
  Author: michelleark
  Issue: "12244"

.changes/unreleased/Fixes-20251209-175031.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
kind: Fixes
body: Ensure recent deprecation warnings include event name in message
time: 2025-12-09T17:50:31.334618-06:00
custom:
  Author: QMalcolm
  Issue: "12264"

.changes/unreleased/Fixes-20251210-143935.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
kind: Fixes
body: Improve error message clarity when detecting nodes with space in name
time: 2025-12-10T14:39:35.107841-08:00
custom:
  Author: michelleark
  Issue: "11835"

(deleted changelog entry)
@@ -1,6 +0,0 @@
kind: Fixes
body: Pin sqlparse <0.5.5 to avoid max tokens issue
time: 2025-12-19T18:44:05.216329-05:00
custom:
  Author: michelleark
  Issue: "12303"

.changes/unreleased/Under the Hood-20251119-110110.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
kind: Under the Hood
body: Update jsonschemas for schema.yml and dbt_project.yml deprecations
time: 2025-11-19T11:01:10.616676-05:00
custom:
  Author: michelleark
  Issue: "12180"

.changes/unreleased/Under the Hood-20251121-140515.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
kind: Under the Hood
body: Replace setuptools and tox with hatch for build, test, and environment management.
time: 2025-11-21T14:05:15.838252-05:00
custom:
  Author: emmyoop
  Issue: "12151"

.changes/unreleased/Under the Hood-20251209-131857.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
kind: Under the Hood
body: Add add_catalog_integration call even if we have a pre-existing manifest
time: 2025-12-09T13:18:57.043254-08:00
custom:
  Author: colin-rogers-dbt
  Issue: "12262"
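
The changelog entries above all follow the same changie schema. A minimal template (field names taken from the entries above; the values here are placeholders, not from this diff):

```yaml
# Hypothetical template for a .changes/unreleased/*.yaml entry
kind: Fixes                              # or Features, Dependencies, Under the Hood
body: One-line description of the change
time: 2025-01-01T00:00:00.000000-05:00   # timestamp when the entry was generated
custom:
  Author: github_handle                  # GitHub username(s) of the contributor
  Issue: "1234"                          # issue number(s) the change addresses
```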

.github/dbt-postgres-testing.yml (new file, 169 lines)
@@ -0,0 +1,169 @@
# **what?**
# Runs all tests in dbt-postgres with this branch of dbt-core to ensure nothing is broken

# **why?**
# Ensure dbt-core changes do not break dbt-postgres, as a basic proxy for other adapters

# **when?**
# This will run when trying to merge a PR into main.
# It can also be manually triggered.

# This workflow can be skipped by adding the "Skip Postgres Testing" label to the PR. This is
# useful when making a change in both `dbt-postgres` and `dbt-core` where the changes are dependent
# and cause the other repository to break.

name: "dbt-postgres Tests"
run-name: >-
  ${{ (github.event_name == 'workflow_dispatch' || github.event_name == 'workflow_call')
  && format('dbt-postgres@{0} with dbt-core@{1}', inputs.dbt-postgres-ref, inputs.dbt-core-ref)
  || 'dbt-postgres@main with dbt-core branch' }}

on:
  push:
    branches:
      - "main"
      - "*.latest"
      - "releases/*"
  pull_request:
  merge_group:
    types: [checks_requested]
  workflow_dispatch:
    inputs:
      dbt-postgres-ref:
        description: "The branch of dbt-postgres to test against"
        default: "main"
      dbt-core-ref:
        description: "The branch of dbt-core to test against"
        default: "main"
  workflow_call:
    inputs:
      dbt-postgres-ref:
        description: "The branch of dbt-postgres to test against"
        type: string
        required: true
        default: "main"
      dbt-core-ref:
        description: "The branch of dbt-core to test against"
        type: string
        required: true
        default: "main"

permissions: read-all

# will cancel previous workflows triggered by the same event
# and for the same ref for PRs/merges or same SHA otherwise
# and for the same inputs on workflow_dispatch or workflow_call
concurrency:
  group: ${{ github.workflow }}-${{ github.event_name }}-${{ contains(fromJson('["pull_request", "merge_group"]'), github.event_name) && github.event.pull_request.head.ref || github.sha }}-${{ contains(fromJson('["workflow_call", "workflow_dispatch"]'), github.event_name) && github.event.inputs.dbt-postgres-ref && github.event.inputs.dbt-core-ref || github.sha }}
  cancel-in-progress: true

defaults:
  run:
    shell: bash

jobs:
  job-prep:
    # This allows us to run the workflow on pull_requests as well so we can always run unit tests
    # and only run integration tests on merge for time purposes
    name: Setup Repo Refs
    runs-on: ubuntu-latest
    outputs:
      dbt-postgres-ref: ${{ steps.core-ref.outputs.ref }}
      dbt-core-ref: ${{ steps.common-ref.outputs.ref }}

    steps:
      - name: "Input Refs"
        id: job-inputs
        run: |
          echo "inputs.dbt-postgres-ref=${{ inputs.dbt-postgres-ref }}"
          echo "inputs.dbt-core-ref=${{ inputs.dbt-core-ref }}"

      - name: "Determine dbt-postgres ref"
        id: core-ref
        run: |
          if [[ -z "${{ inputs.dbt-postgres-ref }}" ]]; then
            REF="main"
          else
            REF=${{ inputs.dbt-postgres-ref }}
          fi
          echo "ref=$REF" >> $GITHUB_OUTPUT

      - name: "Determine dbt-core ref"
        id: common-ref
        run: |
          if [[ -z "${{ inputs.dbt-core-ref }}" ]]; then
            # these will be commits instead of branches
            if [[ "${{ github.event_name }}" == "merge_group" ]]; then
              REF=${{ github.event.merge_group.head_sha }}
            else
              REF=${{ github.event.pull_request.base.sha }}
            fi
          else
            REF=${{ inputs.dbt-core-ref }}
          fi
          echo "ref=$REF" >> $GITHUB_OUTPUT

      - name: "Final Refs"
        run: |
          echo "dbt-postgres-ref=${{ steps.core-ref.outputs.ref }}"
          echo "dbt-core-ref=${{ steps.common-ref.outputs.ref }}"

  integration-tests-postgres:
    name: "dbt-postgres integration tests"
    needs: [job-prep]
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: "./dbt-postgres"
    environment:
      name: "dbt-postgres"
    env:
      POSTGRES_TEST_HOST: ${{ vars.POSTGRES_TEST_HOST }}
      POSTGRES_TEST_PORT: ${{ vars.POSTGRES_TEST_PORT }}
      POSTGRES_TEST_USER: ${{ vars.POSTGRES_TEST_USER }}
      POSTGRES_TEST_PASS: ${{ secrets.POSTGRES_TEST_PASS }}
      POSTGRES_TEST_DATABASE: ${{ vars.POSTGRES_TEST_DATABASE }}
      POSTGRES_TEST_THREADS: ${{ vars.POSTGRES_TEST_THREADS }}
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_PASSWORD: postgres
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - ${{ vars.POSTGRES_TEST_PORT }}:5432
    steps:
      - name: "Check out dbt-adapters@${{ needs.job-prep.outputs.dbt-postgres-ref }}"
        uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # actions/checkout@v4
        with:
          repository: dbt-labs/dbt-adapters
          ref: ${{ needs.job-prep.outputs.dbt-postgres-ref }}

      - name: "Set up Python"
        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # actions/setup-python@v5
        with:
          python-version: ${{ inputs.python-version }}

      - name: "Set environment variables"
        run: |
          echo "HATCH_PYTHON=${{ inputs.python-version }}" >> $GITHUB_ENV
          echo "PIP_ONLY_BINARY=psycopg2-binary" >> $GITHUB_ENV

      - name: "Setup test database"
        run: psql -f ./scripts/setup_test_database.sql
        env:
          PGHOST: ${{ vars.POSTGRES_TEST_HOST }}
          PGPORT: ${{ vars.POSTGRES_TEST_PORT }}
          PGUSER: postgres
          PGPASSWORD: postgres
          PGDATABASE: postgres

      - name: "Install hatch"
        uses: pypa/hatch@257e27e51a6a5616ed08a39a408a21c35c9931bc # pypa/hatch@install

      - name: "Run integration tests"
        run: hatch run ${{ inputs.hatch-env }}:integration-tests
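
The `echo "ref=$REF" >> $GITHUB_OUTPUT` lines in the workflow above are how a step publishes outputs: GitHub Actions reads `key=value` pairs from the file named by `$GITHUB_OUTPUT`. A minimal local simulation (an assumption-level sketch using a temp file; real runners manage this file themselves):

```shell
# Simulate the GITHUB_OUTPUT mechanism locally (assumption: plain
# single-line key=value values, no multiline-output delimiters).
GITHUB_OUTPUT=$(mktemp)

REF="main"
echo "ref=$REF" >> "$GITHUB_OUTPUT"

# A later step would read this back as ${{ steps.<id>.outputs.ref }}
grep '^ref=' "$GITHUB_OUTPUT"
```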

.github/workflows/community-label.yml (13 lines changed)
@@ -7,7 +7,6 @@
 # **when?**
 # When a PR is opened, not in draft or moved from draft to ready for review
-

 name: Label community PRs

 on:
@@ -29,9 +28,15 @@ jobs:
     # If this PR is opened and not draft, determine if it needs to be labeled
     # if the PR is converted out of draft, determine if it needs to be labeled
     if: |
-      (!contains(github.event.pull_request.labels.*.name, 'community') &&
-      (github.event.action == 'opened' && github.event.pull_request.draft == false ) ||
-      github.event.action == 'ready_for_review' )
+      (
+        !contains(github.event.pull_request.labels.*.name, 'community')
+        && (
+          (github.event.action == 'opened' && github.event.pull_request.draft == false)
+          || github.event.action == 'ready_for_review'
+        )
+        && github.event.pull_request.user.type != 'Bot'
+        && github.event.pull_request.user.login != 'dependabot[bot]'
+      )
     uses: dbt-labs/actions/.github/workflows/label-community.yml@main
     with:
       github_team: 'core-group'

.github/workflows/main.yml (14 lines changed)
@@ -62,7 +62,7 @@ jobs:
           python -m pip --version
           python -m pip install hatch
           cd core
-          hatch -v run setup
+          hatch run setup

       - name: Verify dbt installation
         run: |
@@ -106,7 +106,7 @@ jobs:
         with:
           timeout_minutes: 10
           max_attempts: 3
-          command: cd core && hatch -v run ci:unit-tests
+          command: cd core && hatch run ci:unit-tests

       - name: Get current date
         if: always()
@@ -230,7 +230,7 @@ jobs:
           timeout_minutes: 30
           max_attempts: 3
           shell: bash
-          command: cd core && hatch -v run ci:integration-tests -- --ddtrace --splits ${{ env.PYTHON_INTEGRATION_TEST_WORKERS }} --group ${{ matrix.split-group }}
+          command: cd core && hatch run ci:integration-tests -- --ddtrace --splits ${{ env.PYTHON_INTEGRATION_TEST_WORKERS }} --group ${{ matrix.split-group }}

       - name: Get current date
         if: always()
@@ -311,7 +311,7 @@ jobs:
           timeout_minutes: 30
           max_attempts: 3
           shell: bash
-          command: cd core && hatch -v run ci:integration-tests -- --ddtrace --splits ${{ env.PYTHON_INTEGRATION_TEST_WORKERS }} --group ${{ matrix.split-group }}
+          command: cd core && hatch run ci:integration-tests -- --ddtrace --splits ${{ env.PYTHON_INTEGRATION_TEST_WORKERS }} --group ${{ matrix.split-group }}

       - name: Get current date
         if: always()
@@ -326,7 +326,7 @@ jobs:
           name: logs_${{ matrix.python-version }}_${{ matrix.os }}_${{ matrix.split-group }}_${{ steps.date.outputs.date }}
           path: ./logs

-      - name: Upload Integration Test Coverage
+      - name: Upload Integration Test Coverage to Codecov
        if: ${{ matrix.python-version == '3.11' }}
        uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # codecov/codecov-action@v5
        with:
@@ -377,7 +377,7 @@ jobs:
       - name: Show distributions
         run: ls -lh dist/

-      - name: Check distribution descriptions
+      - name: Check and verify distributions
        run: |
          cd core
-          hatch -v run build:check-all
+          hatch run build:check-all

.github/workflows/model_performance.yml (deleted file, 265 lines)
@@ -1,265 +0,0 @@
# **what?**
# This workflow models the performance characteristics of a point in time in dbt.
# It runs specific dbt commands on committed projects multiple times to create and
# commit information about the distribution to the current branch. For more information
# see the readme in the performance module at /performance/README.md.
#
# **why?**
# When developing new features, we can take quick performance samples and compare
# them against the committed baseline measurements produced by this workflow to detect
# some performance regressions at development time before they reach users.
#
# **when?**
# This is only run once directly after each release (for non-prereleases). If for some
# reason the results of a run are not satisfactory, it can also be triggered manually.

name: Model Performance Characteristics

on:
  # runs after non-prereleases are published.
  release:
    types: [released]
  # run manually from the actions tab
  workflow_dispatch:
    inputs:
      release_id:
        description: 'dbt version to model (must be non-prerelease in PyPI)'
        type: string
        required: true

env:
  RUNNER_CACHE_PATH: performance/runner/target/release/runner

# both jobs need to write
permissions:
  contents: write
  pull-requests: write

jobs:
  set-variables:
    name: Setting Variables
    runs-on: ${{ vars.UBUNTU_LATEST }}
    outputs:
      cache_key: ${{ steps.variables.outputs.cache_key }}
      release_id: ${{ steps.semver.outputs.base-version }}
      release_branch: ${{ steps.variables.outputs.release_branch }}
    steps:
      # explicitly checkout the performance runner from main regardless of which
      # version we are modeling.
      - name: Checkout
        uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # actions/checkout@v4
        with:
          ref: main

      - name: Parse version into parts
        id: semver
        uses: dbt-labs/actions/parse-semver@v1
        with:
          version: ${{ github.event.inputs.release_id || github.event.release.tag_name }}

      # collect all the variables that need to be used in subsequent jobs
      - name: Set variables
        id: variables
        run: |
          # create a cache key that will be used in the next job. without this the
          # next job would have to checkout from main and hash the files itself.
          echo "cache_key=${{ runner.os }}-${{ hashFiles('performance/runner/Cargo.toml')}}-${{ hashFiles('performance/runner/src/*') }}" >> $GITHUB_OUTPUT

          branch_name="${{steps.semver.outputs.major}}.${{steps.semver.outputs.minor}}.latest"
          echo "release_branch=$branch_name" >> $GITHUB_OUTPUT
          echo "release branch is inferred to be ${branch_name}"

  latest-runner:
    name: Build or Fetch Runner
    runs-on: ${{ vars.UBUNTU_LATEST }}
    needs: [set-variables]
    env:
      RUSTFLAGS: "-D warnings"
    steps:
      - name: '[DEBUG] print variables'
        run: |
          echo "all variables defined in set-variables"
          echo "cache_key: ${{ needs.set-variables.outputs.cache_key }}"
          echo "release_id: ${{ needs.set-variables.outputs.release_id }}"
          echo "release_branch: ${{ needs.set-variables.outputs.release_branch }}"

      # explicitly checkout the performance runner from main regardless of which
      # version we are modeling.
      - name: Checkout
        uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # actions/checkout@v4
        with:
          ref: main

      # attempts to access a previously cached runner
      - uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # actions/cache@v4
        id: cache
        with:
          path: ${{ env.RUNNER_CACHE_PATH }}
          key: ${{ needs.set-variables.outputs.cache_key }}

      - name: Fetch Rust Toolchain
        if: steps.cache.outputs.cache-hit != 'true'
        uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af # actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true

      - name: Add fmt
        if: steps.cache.outputs.cache-hit != 'true'
        run: rustup component add rustfmt

      - name: Cargo fmt
        if: steps.cache.outputs.cache-hit != 'true'
        uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # actions-rs/cargo@v1
        with:
          command: fmt
          args: --manifest-path performance/runner/Cargo.toml --all -- --check

      - name: Test
        if: steps.cache.outputs.cache-hit != 'true'
        uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # actions-rs/cargo@v1
        with:
          command: test
          args: --manifest-path performance/runner/Cargo.toml

      - name: Build (optimized)
        if: steps.cache.outputs.cache-hit != 'true'
        uses: actions-rs/cargo@844f36862e911db73fe0815f00a4a2602c279505 # actions-rs/cargo@v1
        with:
          command: build
          args: --release --manifest-path performance/runner/Cargo.toml
      # the cache action automatically caches this binary at the end of the job

  model:
    # depends on `latest-runner` as a separate job so that failures in this job do not prevent
    # a successfully tested and built binary from being cached.
    needs: [set-variables, latest-runner]
    name: Model a release
    runs-on: ${{ vars.UBUNTU_LATEST }}
    steps:
      - name: '[DEBUG] print variables'
        run: |
          echo "all variables defined in set-variables"
          echo "cache_key: ${{ needs.set-variables.outputs.cache_key }}"
          echo "release_id: ${{ needs.set-variables.outputs.release_id }}"
          echo "release_branch: ${{ needs.set-variables.outputs.release_branch }}"

      - name: Setup Python
        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # actions/setup-python@v6
        with:
          python-version: "3.10"

      - name: Install dbt
        run: pip install dbt-postgres==${{ needs.set-variables.outputs.release_id }}

      - name: Install Hyperfine
        run: wget https://github.com/sharkdp/hyperfine/releases/download/v1.11.0/hyperfine_1.11.0_amd64.deb && sudo dpkg -i hyperfine_1.11.0_amd64.deb

      # explicitly checkout main to get the latest project definitions
      - name: Checkout
        uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # actions/checkout@v4
        with:
          ref: main

      # this was built in the previous job so it will be there.
      - name: Fetch Runner
        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # actions/cache@v4
        id: cache
        with:
          path: ${{ env.RUNNER_CACHE_PATH }}
          key: ${{ needs.set-variables.outputs.cache_key }}

      - name: Move Runner
        run: mv performance/runner/target/release/runner performance/app

      - name: Change Runner Permissions
        run: chmod +x ./performance/app

      - name: '[DEBUG] ls baseline directory before run'
        run: ls -R performance/baselines/

      # `${{ github.workspace }}` is used to pass the absolute path
      - name: Create directories
        run: |
          mkdir ${{ github.workspace }}/performance/tmp/
          mkdir -p performance/baselines/${{ needs.set-variables.outputs.release_id }}/

      # Run modeling with taking 20 samples
      - name: Run Measurement
        run: |
          performance/app model -v ${{ needs.set-variables.outputs.release_id }} -b ${{ github.workspace }}/performance/baselines/ -p ${{ github.workspace }}/performance/projects/ -t ${{ github.workspace }}/performance/tmp/ -n 20

      - name: '[DEBUG] ls baseline directory after run'
        run: ls -R performance/baselines/

      - uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # actions/upload-artifact@v4
        with:
          name: baseline
          path: performance/baselines/${{ needs.set-variables.outputs.release_id }}/

  create-pr:
    name: Open PR for ${{ matrix.base-branch }}
    # depends on `model` as a separate job so that the baseline can be committed to more than one branch
    # i.e. release branch and main
    needs: [set-variables, latest-runner, model]
    runs-on: ${{ vars.UBUNTU_LATEST }}

    strategy:
      matrix:
        include:
          - base-branch: refs/heads/main
            target-branch: performance-bot/main_${{ needs.set-variables.outputs.release_id }}_${{GITHUB.RUN_ID}}
          - base-branch: refs/heads/${{ needs.set-variables.outputs.release_branch }}
            target-branch: performance-bot/release_${{ needs.set-variables.outputs.release_id }}_${{GITHUB.RUN_ID}}

    steps:
      - name: '[DEBUG] print variables'
        run: |
          echo "all variables defined in set-variables"
          echo "cache_key: ${{ needs.set-variables.outputs.cache_key }}"
          echo "release_id: ${{ needs.set-variables.outputs.release_id }}"
          echo "release_branch: ${{ needs.set-variables.outputs.release_branch }}"

      - name: Checkout
        uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # actions/checkout@v4
        with:
          ref: ${{ matrix.base-branch }}

      - name: Create PR branch
        run: |
          git checkout -b ${{ matrix.target-branch }}
          git push origin ${{ matrix.target-branch }}
          git branch --set-upstream-to=origin/${{ matrix.target-branch }} ${{ matrix.target-branch }}

      - uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # actions/download-artifact@v4
        with:
          name: baseline
          path: performance/baselines/${{ needs.set-variables.outputs.release_id }}

      - name: '[DEBUG] ls baselines after artifact download'
        run: ls -R performance/baselines/

      - name: Commit baseline
        uses: EndBug/add-and-commit@a94899bca583c204427a224a7af87c02f9b325d5 # EndBug/add-and-commit@v9
        with:
          add: 'performance/baselines/*'
          author_name: 'Github Build Bot'
          author_email: 'buildbot@fishtownanalytics.com'
          message: 'adding performance baseline for ${{ needs.set-variables.outputs.release_id }}'
          push: 'origin origin/${{ matrix.target-branch }}'

      - name: Create Pull Request
        uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # peter-evans/create-pull-request@v7
        with:
          author: 'Github Build Bot <buildbot@fishtownanalytics.com>'
          base: ${{ matrix.base-branch }}
          branch: '${{ matrix.target-branch }}'
          title: 'Adding performance modeling for ${{needs.set-variables.outputs.release_id}} to ${{ matrix.base-branch }}'
          body: 'Committing perf results for tracking for the ${{needs.set-variables.outputs.release_id}}'
          labels: |
            Skip Changelog
            Performance

.github/workflows/release.yml (49 lines changed)
@@ -72,12 +72,15 @@ defaults:
   run:
     shell: bash

+env:
+  MIN_HATCH_VERSION: "1.11.0"
+
 jobs:
   job-setup:
     name: Log Inputs
     runs-on: ${{ vars.UBUNTU_LATEST }}
     outputs:
       starting_sha: ${{ steps.set_sha.outputs.starting_sha }}
+      use_hatch: ${{ steps.use_hatch.outputs.use_hatch }}
     steps:
       - name: "[DEBUG] Print Variables"
         run: |
@@ -88,19 +91,29 @@ jobs:
           echo Nightly release: ${{ inputs.nightly_release }}
           echo Only Docker: ${{ inputs.only_docker }}

       - name: "Checkout target branch"
         uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # actions/checkout@v4
         with:
           ref: ${{ inputs.target_branch }}

       # release-prep.yml really shouldn't take in the sha but since core + all adapters
       # depend on it now this workaround lets us not input it manually with risk of error.
       # The changes always get merged into the head so we can't use a specific commit for
       # releases anyways.
       - name: "Capture sha"
         id: set_sha
         run: |
           echo "starting_sha=$(git rev-parse HEAD)" >> $GITHUB_OUTPUT

+      # In version env.MIN_HATCH_VERSION we started to use hatch for build tooling. Before that we used setuptools.
+      # This needs to check if we're using hatch or setuptools based on the version being released. We should
+      # check if the version is greater than or equal to env.MIN_HATCH_VERSION. If it is, we use hatch, otherwise we use setuptools.
+      - name: "Check if using hatch"
+        id: use_hatch
+        run: |
+          # Extract major.minor from versions like 1.11.0a1 -> 1.11
+          INPUT_MAJ_MIN=$(echo "${{ inputs.version_number }}" | sed -E 's/^([0-9]+\.[0-9]+).*/\1/')
+          HATCH_MAJ_MIN=$(echo "${{ env.MIN_HATCH_VERSION }}" | sed -E 's/^([0-9]+\.[0-9]+).*/\1/')
+
+          if [ $(echo "$INPUT_MAJ_MIN >= $HATCH_MAJ_MIN" | bc) -eq 1 ]; then
+            echo "use_hatch=true" >> $GITHUB_OUTPUT
+          else
+            echo "use_hatch=false" >> $GITHUB_OUTPUT
+          fi
+
+      - name: "Notify if using hatch"
+        run: |
+          if [ ${{ steps.use_hatch.outputs.use_hatch }} = "true" ]; then
+            echo "::notice title="Using Hatch": $title::Using Hatch for release"
+          else
+            echo "::notice title="Using Setuptools": $title::Using Setuptools for release"
+          fi

   bump-version-generate-changelog:
     name: Bump package version, Generate changelog
@@ -110,12 +123,13 @@ jobs:
     uses: dbt-labs/dbt-release/.github/workflows/release-prep.yml@main

     with:
       sha: ${{ needs.job-setup.outputs.starting_sha }}
       version_number: ${{ inputs.version_number }}
+      hatch_directory: "core"
       target_branch: ${{ inputs.target_branch }}
       env_setup_script_path: "scripts/env-setup.sh"
       test_run: ${{ inputs.test_run }}
       nightly_release: ${{ inputs.nightly_release }}
+      use_hatch: ${{ needs.job-setup.outputs.use_hatch == 'true' }} # workflow outputs are strings...

     secrets: inherit

@@ -143,16 +157,13 @@ jobs:
     with:
       sha: ${{ needs.bump-version-generate-changelog.outputs.final_sha }}
       version_number: ${{ inputs.version_number }}
+      hatch_directory: "core"
       changelog_path: ${{ needs.bump-version-generate-changelog.outputs.changelog_path }}
-      build_script_path: "scripts/build-dist.sh"
       s3_bucket_name: "core-team-artifacts"
       package_test_command: "dbt --version"
       test_run: ${{ inputs.test_run }}
       nightly_release: ${{ inputs.nightly_release }}

     secrets:
       AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
       AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+      use_hatch: ${{ needs.job-setup.outputs.use_hatch == 'true' }} # workflow outputs are strings...

 github-release:
   name: GitHub Release
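
A note on the `bc` comparison in the "Check if using hatch" step above: `bc` compares `major.minor` as plain decimals, so `1.9 >= 1.11` evaluates true even though 1.9 is the older release line. A hedged sketch of a component-wise alternative using GNU `sort -V` (the `version_ge` helper is hypothetical, not part of the workflow):

```shell
# version_ge A B -> prints "true" if version A >= version B,
# comparing component-wise rather than as decimals.
version_ge() {
  if [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; then
    echo "true"
  else
    echo "false"
  fi
}

version_ge 1.12.0 1.11.0   # true
version_ge 1.9.0 1.11.0    # false (a decimal compare via bc would say true)
```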

@@ -123,7 +123,7 @@ jobs:
   with:
     timeout_minutes: 30
     max_attempts: 3
-    command: cd core && hatch -v run ci:integration-tests -- -nauto
+    command: cd core && hatch run ci:integration-tests -- -nauto
   env:
     PYTEST_ADDOPTS: ${{ format('--splits {0} --group {1}', env.PYTHON_INTEGRATION_TEST_WORKERS, matrix.split-group) }}

@@ -84,7 +84,7 @@ repos:
       types: [python]
   - id: no_versioned_artifact_resource_imports
     name: no_versioned_artifact_resource_imports
-    entry: python custom-hooks/no_versioned_artifact_resource_imports.py
+    entry: python scripts/pre-commit-hooks/no_versioned_artifact_resource_imports.py
     language: system
     files: ^core/dbt/
     types: [python]

@@ -17,10 +17,6 @@ The main subdirectories of core/dbt:
 - [`parser`](core/dbt/parser/README.md): Read project files, validate, construct python objects
 - [`task`](core/dbt/task/README.md): Set forth the actions that dbt can perform when invoked

-Legacy tests are found in the 'test' directory:
-- [`unit tests`](core/dbt/test/unit/README.md): Unit tests
-- [`integration tests`](core/dbt/test/integration/README.md): Integration tests
-
 ### Invoking dbt

 The "tasks" map to top-level dbt commands. So `dbt run` => task.run.RunTask, etc. Some are more like abstract base classes (GraphRunnableTask, for example) but all the concrete types outside of task should map to tasks. Currently one executes at a time. The tasks kick off their “Runners” and those do execute in parallel. The parallelism is managed via a thread pool, in GraphRunnableTask.

@@ -45,10 +41,9 @@ The Postgres adapter code is the most central, and many of its implementations a

 ## Testing dbt

-The [`test/`](test/) subdirectory includes unit and integration tests that run as continuous integration checks against open pull requests. Unit tests check mock inputs and outputs of specific python functions. Integration tests perform end-to-end dbt invocations against real adapters (Postgres, Redshift, Snowflake, BigQuery) and assert that the results match expectations. See [the contributing guide](CONTRIBUTING.md) for a step-by-step walkthrough of setting up a local development and testing environment.
+The [`tests/`](tests/) subdirectory includes unit and functional tests that run as continuous integration checks against open pull requests. Unit tests check mock inputs and outputs of specific python functions. Functional tests perform end-to-end dbt invocations against real adapters (Postgres) and assert that the results match expectations. See [the contributing guide](CONTRIBUTING.md) for a step-by-step walkthrough of setting up a local development and testing environment.

 ## Everything else

 - [docker](docker/): All dbt versions are published as Docker images on DockerHub. This subfolder contains the `Dockerfile` (constant) and `requirements.txt` (one for each version).
 - [etc](etc/): Images for README
 - [scripts](scripts/): Helper scripts for testing, releasing, and producing JSON schemas. These are not included in distributions of dbt, nor are they rigorously tested—they're just handy tools for the dbt maintainers :)
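
The task/runner split described in the "Invoking dbt" notes above (one task at a time, its runners fanned out over a thread pool) can be sketched as follows; the class and function names here are illustrative stand-ins, not dbt's actual code:

```python
from concurrent.futures import ThreadPoolExecutor


def run_node(node: str) -> str:
    """Stand-in for a runner compiling and executing a single node."""
    return f"{node}: success"


class GraphRunnableTaskSketch:
    """Illustrative only: dbt's GraphRunnableTask manages parallelism similarly."""

    def __init__(self, nodes, threads=4):
        self.nodes = nodes
        self.threads = threads

    def execute(self):
        # The task itself runs one at a time; its runners execute in parallel
        # on a thread pool, as the architecture notes describe.
        with ThreadPoolExecutor(max_workers=self.threads) as pool:
            return list(pool.map(run_node, self.nodes))


results = GraphRunnableTaskSketch(["model_a", "model_b"]).execute()
print(results)
```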
164
CHANGELOG.md
@@ -5,173 +5,11 @@
- "Breaking changes" listed under a version may require action from end users or external maintainers when upgrading to that version.
- Do not edit this file directly. This file is auto-generated using [changie](https://github.com/miniscruff/changie). For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry)

## dbt-core 1.11.1 - December 19, 2025

### Dependencies

- Bump minimum click to 8.2.0 ([#12305](https://github.com/dbt-labs/dbt-core/issues/12305))

### Contributors

- [@QMalcolm](https://github.com/QMalcolm) ([#12305](https://github.com/dbt-labs/dbt-core/issues/12305))

## dbt-core 1.11.0 - December 19, 2025

### Features

- Add file_format to catalog integration config ([#11695](https://github.com/dbt-labs/dbt-core/issues/11695))
- Deprecate `--models`, `--model`, and `-m` flags ([#11561](https://github.com/dbt-labs/dbt-core/issues/11561))
- Update jsonschemas with builtin data test properties and exposure configs in dbt_project.yml for more accurate deprecations ([#11335](https://github.com/dbt-labs/dbt-core/issues/11335))
- Support loaded_at_query and loaded_at_field on source and table configs ([#11659](https://github.com/dbt-labs/dbt-core/issues/11659))
- Begin validating configs from model sql files ([#11727](https://github.com/dbt-labs/dbt-core/issues/11727))
- Deprecate `overrides` property for sources ([#11566](https://github.com/dbt-labs/dbt-core/issues/11566))
- Create constrained namespace for dbt engine env vars ([#11340](https://github.com/dbt-labs/dbt-core/issues/11340))
- Gate jsonschema validations by adapter ([#11680](https://github.com/dbt-labs/dbt-core/issues/11680))
- Deprecate top-level argument properties in generic tests ([#11847](https://github.com/dbt-labs/dbt-core/issues/11847))
- Deprecate {{ modules.itertools }} usage ([#11725](https://github.com/dbt-labs/dbt-core/issues/11725))
- Default require_generic_test_arguments_property flag to True - The 'arguments' property will be parsed as keyword arguments to data tests, if provided ([#11911](https://github.com/dbt-labs/dbt-core/issues/11911))
- Support nested key traversal in dbt ls json output ([#11919](https://github.com/dbt-labs/dbt-core/issues/11919))
- No-op when project-level `quoting.snowflake_ignore_case` is set. ([#11882](https://github.com/dbt-labs/dbt-core/issues/11882))
- Support UDFs by allowing user definition of `function` nodes ([#11923](https://github.com/dbt-labs/dbt-core/issues/11923))
- Support listing functions via `list` command ([#11967](https://github.com/dbt-labs/dbt-core/issues/11967))
- Support selecting function nodes via: name, file path, and resource type ([#11962](https://github.com/dbt-labs/dbt-core/issues/11962), [#11958](https://github.com/dbt-labs/dbt-core/issues/11958), [#11961](https://github.com/dbt-labs/dbt-core/issues/11961))
- Parse catalogs.yml during parse, seed, and test commands ([#12002](https://github.com/dbt-labs/dbt-core/issues/12002))
- Handle creation of function nodes during DAG execution ([#11965](https://github.com/dbt-labs/dbt-core/issues/11965))
- Support configuring model.config.freshness.build_after.updates_on without period or count ([#12019](https://github.com/dbt-labs/dbt-core/issues/12019))
- Add `function` macro to jinja context ([#11972](https://github.com/dbt-labs/dbt-core/issues/11972))
- Add run_started_at to manifest.json metadata ([#12047](https://github.com/dbt-labs/dbt-core/issues/12047))
- Validate {{ config }} in SQL for models that don't statically parse ([#12046](https://github.com/dbt-labs/dbt-core/issues/12046))
- Add `type` property to `function` nodes ([#12042](https://github.com/dbt-labs/dbt-core/issues/12042), [#12037](https://github.com/dbt-labs/dbt-core/issues/12037))
- Support function nodes for unit tested models ([#12024](https://github.com/dbt-labs/dbt-core/issues/12024))
- Support partial parsing for function nodes ([#12072](https://github.com/dbt-labs/dbt-core/issues/12072))
- Allow for the specification of function volatility ([#QMalcolm](https://github.com/dbt-labs/dbt-core/issues/QMalcolm))
- Add python UDF parsing support ([#12043](https://github.com/dbt-labs/dbt-core/issues/12043))
- Allow for defining function arguments with default values ([#12044](https://github.com/dbt-labs/dbt-core/issues/12044))
- Raise jsonschema-based deprecation warnings by default ([#12240](https://github.com/dbt-labs/dbt-core/issues/12240))
- :bug: :snowman: Disable unit tests whose model is disabled ([#10540](https://github.com/dbt-labs/dbt-core/issues/10540))
- Implement config.meta_get and config.meta_require ([#12012](https://github.com/dbt-labs/dbt-core/issues/12012))
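The `config.meta_get` / `config.meta_require` feature above lends itself to a short illustration. This is a minimal, hypothetical Python sketch of the likely semantics — the class name, storage, and error message are assumptions, not dbt-core's actual `RuntimeConfigObject` implementation:

```python
from typing import Any


# Hypothetical sketch of the semantics suggested by the
# config.meta_get / config.meta_require feature; the class name,
# storage, and error message here are assumptions, not dbt-core's code.
class ConfigSketch:
    def __init__(self, meta: dict):
        self._meta = meta

    def meta_get(self, name: str, default: Any = None) -> Any:
        # Optional lookup: a missing key falls back to the default.
        return self._meta.get(name, default)

    def meta_require(self, name: str) -> Any:
        # Required lookup: a missing key raises instead of returning None.
        if name not in self._meta:
            raise KeyError(f"Required meta key '{name}' is not set")
        return self._meta[name]
```

The split mirrors a common config-API design choice: an optional getter for soft defaults and a required getter that fails fast when a key is mandatory.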
### Fixes

- Don't warn for metricflow_time_spine with non-day grain ([#11690](https://github.com/dbt-labs/dbt-core/issues/11690))
- Fix source freshness set via config to handle explicit nulls ([#11685](https://github.com/dbt-labs/dbt-core/issues/11685))
- Ensure build_after is present in model freshness in parsing, otherwise skip freshness definition ([#11709](https://github.com/dbt-labs/dbt-core/issues/11709))
- Ensure source node `.freshness` is equal to node's `.config.freshness` ([#11717](https://github.com/dbt-labs/dbt-core/issues/11717))
- Ignore invalid model freshness configs in inline model configs ([#11728](https://github.com/dbt-labs/dbt-core/issues/11728))
- Fix store_failures hierarchical config parsing ([#10165](https://github.com/dbt-labs/dbt-core/issues/10165))
- Remove model freshness property support in favor of config level support ([#11713](https://github.com/dbt-labs/dbt-core/issues/11713))
- Bump dbt-common to 1.25.0 to access WarnErrorOptionsV2 ([#11755](https://github.com/dbt-labs/dbt-core/issues/11755))
- Ensure consistent casing in column names while processing user unit tests ([#11770](https://github.com/dbt-labs/dbt-core/issues/11770))
- Update jsonschema definitions with nested config defs, cloud info, and dropping source overrides ([#N/A](https://github.com/dbt-labs/dbt-core/issues/N/A))
- Make `GenericJSONSchemaValidationDeprecation` a "preview" deprecation ([#11814](https://github.com/dbt-labs/dbt-core/issues/11814))
- Correct JSONSchema Semantic Layer node issues ([#11818](https://github.com/dbt-labs/dbt-core/issues/11818))
- Improve SL JSONSchema definitions ([#N/A](https://github.com/dbt-labs/dbt-core/issues/N/A))
- Raise MissingPlusPrefixDeprecation instead of GenericJSONSchemaValidationDeprecation when a config is missing the plus prefix in dbt_project.yml ([#11826](https://github.com/dbt-labs/dbt-core/issues/11826))
- Propagate config.meta and config.tags to top-level on source nodes ([#11839](https://github.com/dbt-labs/dbt-core/issues/11839))
- Safe handling of malformed config.tags on sources/tables ([#11855](https://github.com/dbt-labs/dbt-core/issues/11855))
- Quote the event_time field when the configuration says so ([#11858](https://github.com/dbt-labs/dbt-core/issues/11858))
- Raise PropertyMovedToConfigDeprecation instead of CustomTopLevelKeyDeprecation when an additional attribute is a valid node config ([#11879](https://github.com/dbt-labs/dbt-core/issues/11879))
- Avoid redundant node patch removal during partial parsing ([#11886](https://github.com/dbt-labs/dbt-core/issues/11886))
- Comply with strict `str` type when `block.contents` is `None` ([#11492](https://github.com/dbt-labs/dbt-core/issues/11492))
- Remove duplicative PropertyMovedToConfigDeprecation for source freshness ([#11880](https://github.com/dbt-labs/dbt-core/issues/11880))
- Add path to MissingArgumentsPropertyInGenericTestDeprecation message ([#11940](https://github.com/dbt-labs/dbt-core/issues/11940))
- Unhide sample mode CLI flag ([#11959](https://github.com/dbt-labs/dbt-core/issues/11959))
- Implement checked_agg_time_dimension_for_simple_metric to satisfy dbt-semantic-interfaces>0.9.0 ([#11998](https://github.com/dbt-labs/dbt-core/issues/11998))
- Propagate column meta/tags from config to tests ([#11984](https://github.com/dbt-labs/dbt-core/issues/11984))
- Skip initial render of loaded_at_query when specified as source or table config ([#11973](https://github.com/dbt-labs/dbt-core/issues/11973))
- Guarantee instantiation of result and thread_exception prior to access to avoid thread hangs ([#12013](https://github.com/dbt-labs/dbt-core/issues/12013))
- Fix a bug in the logic for legacy time spine deprecation warnings. ([#11690](https://github.com/dbt-labs/dbt-core/issues/11690))
- Fix errors in partial parsing when working with versioned models ([#11869](https://github.com/dbt-labs/dbt-core/issues/11869))
- Fix property names of function nodes (arguments.data_type, returns, returns.data_type) ([#12064](https://github.com/dbt-labs/dbt-core/issues/12064))
- Exclude `functions` from being filtered by `empty` and `event_time` ([#12066](https://github.com/dbt-labs/dbt-core/issues/12066))
- Fix case of successful function status in logs ([#12075](https://github.com/dbt-labs/dbt-core/issues/12075))
- Fix `ref` support in function nodes ([#12076](https://github.com/dbt-labs/dbt-core/issues/12076))
- Move function node `type` into its config ([#12101](https://github.com/dbt-labs/dbt-core/issues/12101))
- Support setting function node configs from dbt_project.yml ([#12096](https://github.com/dbt-labs/dbt-core/issues/12096))
- Fix parse error when build_after.count is set to 0 ([#12136](https://github.com/dbt-labs/dbt-core/issues/12136))
- Stop compiling python UDFs like python models ([#12153](https://github.com/dbt-labs/dbt-core/issues/12153))
- For metric names, fix a bug allowing hyphens (already not allowed in metricflow), make validation throw ValidationErrors (not ParsingErrors), and add tests. ([#n/a](https://github.com/dbt-labs/dbt-core/issues/n/a))
- Allow dbt deps to run when vars lack defaults in dbt_project.yml ([#8913](https://github.com/dbt-labs/dbt-core/issues/8913))
- Include macros in unit test parsing ([#10157](https://github.com/dbt-labs/dbt-core/issues/10157))
- Restore DuplicateResourceNameError for intra-project node name duplication, behind behavior flag `require_unique_project_resource_names` ([#12152](https://github.com/dbt-labs/dbt-core/issues/12152))
- Allow the usage of `function` with the `--exclude-resource-type` flag ([#12143](https://github.com/dbt-labs/dbt-core/issues/12143))
- Fix bug where schemas of functions weren't guaranteed to exist ([#12142](https://github.com/dbt-labs/dbt-core/issues/12142))
- :bug: :snowman: Correctly resolve foreign key references when --defer and --state are provided ([#11885](https://github.com/dbt-labs/dbt-core/issues/11885))
- Fix generation of deprecations summary ([#12146](https://github.com/dbt-labs/dbt-core/issues/12146))
- :bug: :snowman: Add exception when using --state and referring to a removed test ([#10630](https://github.com/dbt-labs/dbt-core/issues/10630))
- :bug: :snowman: Stop emitting `NoNodesForSelectionCriteria` three times during `build` command ([#11627](https://github.com/dbt-labs/dbt-core/issues/11627))
- :bug: :snowman: Fix long Python stack traces appearing when package dependencies have incompatible version requirements ([#12049](https://github.com/dbt-labs/dbt-core/issues/12049))
- :bug: :snowman: Fix issue where changing data type size/precision/scale (e.g., varchar(3) to varchar(10)) incorrectly triggered a breaking change error ([#11186](https://github.com/dbt-labs/dbt-core/issues/11186))
- :bug: :snowman: Support unit testing models that depend on sources with the same name ([#11975](https://github.com/dbt-labs/dbt-core/issues/11975), [#10433](https://github.com/dbt-labs/dbt-core/issues/10433))
- Fix bug in partial parsing when updating a model with a schema file that is referenced by a singular test ([#12223](https://github.com/dbt-labs/dbt-core/issues/12223))
- :bug: :snowman: Avoid retrying successful run-operation commands ([#11850](https://github.com/dbt-labs/dbt-core/issues/11850))
- :bug: :snowman: Fix `dbt deps --add-package` crash when packages.yml contains `warn-unpinned: false` ([#9104](https://github.com/dbt-labs/dbt-core/issues/9104))
- :bug: :snowman: Improve `dbt deps --add-package` duplicate detection with better cross-source matching and word boundaries ([#12239](https://github.com/dbt-labs/dbt-core/issues/12239))
- :bug: :snowman: Fix false positive deprecation warning of pre/post-hook SQL configs ([#12244](https://github.com/dbt-labs/dbt-core/issues/12244))
- :bug: :snowman: Fix ref resolution within package when duplicate nodes exist, behind require_ref_searches_node_package_before_root behavior change flag ([#11351](https://github.com/dbt-labs/dbt-core/issues/11351))
- Improve error message clarity when detecting nodes with a space in the name ([#11835](https://github.com/dbt-labs/dbt-core/issues/11835))
- :bug: :snowman: Propagate exceptions for NodeFinished callbacks in dbtRunner ([#11612](https://github.com/dbt-labs/dbt-core/issues/11612))
- Add omitted return statement to RuntimeConfigObject.meta_require method ([#12288](https://github.com/dbt-labs/dbt-core/issues/12288))
- Do not raise deprecation warning when encountering dataset or project configs for bigquery ([#12285](https://github.com/dbt-labs/dbt-core/issues/12285))
### Under the Hood

- Prevent overcounting PropertyMovedToConfigDeprecation for source freshness ([#11660](https://github.com/dbt-labs/dbt-core/issues/11660))
- Call adapter.add_catalog_integration during parse_manifest ([#11889](https://github.com/dbt-labs/dbt-core/issues/11889))
- Fix Docker OS dependency install issue ([#11934](https://github.com/dbt-labs/dbt-core/issues/11934))
- Ensure dbt-core modules aren't importing versioned artifact resources directly ([#11951](https://github.com/dbt-labs/dbt-core/issues/11951))
- Update jsonschemas used for schema-based deprecations ([#11987](https://github.com/dbt-labs/dbt-core/issues/11987))
- Introduce dbt_version-based manifest json upgrade framework to avoid state:modified false positives on minor evolutions ([#12006](https://github.com/dbt-labs/dbt-core/issues/12006))
- Reorganize jsonschemas directory structure ([#12121](https://github.com/dbt-labs/dbt-core/issues/12121))
- Add dbt/jsonschemas to manifest.in ([#12126](https://github.com/dbt-labs/dbt-core/issues/12126))
- Move from setup.py to pyproject.toml ([#5696](https://github.com/dbt-labs/dbt-core/issues/5696))
- Fix issue where config isn't propagated from the measure to the metric when create_metric: True is set ([#None](https://github.com/dbt-labs/dbt-core/issues/None))
- Support DBT_ENGINE prefix for record-mode env vars ([#12149](https://github.com/dbt-labs/dbt-core/issues/12149))
- Update jsonschemas for schema.yml and dbt_project.yml deprecations ([#12180](https://github.com/dbt-labs/dbt-core/issues/12180))
- Replace setuptools and tox with hatch for build, test, and environment management. ([#12151](https://github.com/dbt-labs/dbt-core/issues/12151))
- Bump lower bound for dbt-common to 1.37.2 ([#12284](https://github.com/dbt-labs/dbt-core/issues/12284))
### Dependencies

- Bump minimum jsonschema version to `4.19.1` ([#11740](https://github.com/dbt-labs/dbt-core/issues/11740))
- Allow for either pydantic v1 or v2 ([#11634](https://github.com/dbt-labs/dbt-core/issues/11634))
- Bump dbt-common minimum to 1.25.1 ([#11789](https://github.com/dbt-labs/dbt-core/issues/11789))
- Upgrade to dbt-semantic-interfaces==0.9.0 for more robust saved query support. ([#11809](https://github.com/dbt-labs/dbt-core/issues/11809))
- Upgrade protobuf to 6.0 ([#11916](https://github.com/dbt-labs/dbt-core/issues/11916))
- Bump dbt-adapters minimum to 1.16.5 ([#11932](https://github.com/dbt-labs/dbt-core/issues/11932))
- Bump actions/setup-python from 5 to 6 ([#11993](https://github.com/dbt-labs/dbt-core/issues/11993))
- Loosen dbt-semantic-interfaces lower pin to >=0.9.0 ([#12005](https://github.com/dbt-labs/dbt-core/issues/12005))
- Drop support for python 3.9 ([#12118](https://github.com/dbt-labs/dbt-core/issues/12118))
### Contributors
- [@12030](https://github.com/12030) ([#QMalcolm](https://github.com/dbt-labs/dbt-core/issues/QMalcolm))
- [@3loka](https://github.com/3loka) ([#8913](https://github.com/dbt-labs/dbt-core/issues/8913))
- [@MichelleArk](https://github.com/MichelleArk) ([#11818](https://github.com/dbt-labs/dbt-core/issues/11818))
- [@QMalcolm](https://github.com/QMalcolm) ([#11727](https://github.com/dbt-labs/dbt-core/issues/11727), [#11340](https://github.com/dbt-labs/dbt-core/issues/11340), [#11680](https://github.com/dbt-labs/dbt-core/issues/11680), [#11923](https://github.com/dbt-labs/dbt-core/issues/11923), [#11967](https://github.com/dbt-labs/dbt-core/issues/11967), [#11962](https://github.com/dbt-labs/dbt-core/issues/11962), [#11958](https://github.com/dbt-labs/dbt-core/issues/11958), [#11961](https://github.com/dbt-labs/dbt-core/issues/11961), [#11965](https://github.com/dbt-labs/dbt-core/issues/11965), [#11972](https://github.com/dbt-labs/dbt-core/issues/11972), [#12042](https://github.com/dbt-labs/dbt-core/issues/12042), [#12037](https://github.com/dbt-labs/dbt-core/issues/12037), [#12024](https://github.com/dbt-labs/dbt-core/issues/12024), [#12072](https://github.com/dbt-labs/dbt-core/issues/12072), [#12043](https://github.com/dbt-labs/dbt-core/issues/12043), [#12044](https://github.com/dbt-labs/dbt-core/issues/12044), [#11685](https://github.com/dbt-labs/dbt-core/issues/11685), [#11709](https://github.com/dbt-labs/dbt-core/issues/11709), [#11717](https://github.com/dbt-labs/dbt-core/issues/11717), [#11713](https://github.com/dbt-labs/dbt-core/issues/11713), [#N/A](https://github.com/dbt-labs/dbt-core/issues/N/A), [#11814](https://github.com/dbt-labs/dbt-core/issues/11814), [#N/A](https://github.com/dbt-labs/dbt-core/issues/N/A), [#11959](https://github.com/dbt-labs/dbt-core/issues/11959), [#12064](https://github.com/dbt-labs/dbt-core/issues/12064), [#12066](https://github.com/dbt-labs/dbt-core/issues/12066), [#12075](https://github.com/dbt-labs/dbt-core/issues/12075), [#12076](https://github.com/dbt-labs/dbt-core/issues/12076), [#12101](https://github.com/dbt-labs/dbt-core/issues/12101), [#12096](https://github.com/dbt-labs/dbt-core/issues/12096), [#12153](https://github.com/dbt-labs/dbt-core/issues/12153), [#12143](https://github.com/dbt-labs/dbt-core/issues/12143), 
[#12142](https://github.com/dbt-labs/dbt-core/issues/12142), [#11627](https://github.com/dbt-labs/dbt-core/issues/11627), [#11934](https://github.com/dbt-labs/dbt-core/issues/11934), [#11951](https://github.com/dbt-labs/dbt-core/issues/11951), [#11740](https://github.com/dbt-labs/dbt-core/issues/11740), [#11634](https://github.com/dbt-labs/dbt-core/issues/11634), [#11789](https://github.com/dbt-labs/dbt-core/issues/11789), [#11932](https://github.com/dbt-labs/dbt-core/issues/11932), [#12118](https://github.com/dbt-labs/dbt-core/issues/12118))
- [@WilliamDee](https://github.com/WilliamDee) ([#None](https://github.com/dbt-labs/dbt-core/issues/None))
- [@aksestok](https://github.com/aksestok) ([#11882](https://github.com/dbt-labs/dbt-core/issues/11882))
- [@aranke](https://github.com/aranke) ([#11660](https://github.com/dbt-labs/dbt-core/issues/11660))
- [@asiunov](https://github.com/asiunov) ([#12146](https://github.com/dbt-labs/dbt-core/issues/12146))
- [@colin-rogers-dbt](https://github.com/colin-rogers-dbt) ([#11695](https://github.com/dbt-labs/dbt-core/issues/11695), [#11728](https://github.com/dbt-labs/dbt-core/issues/11728), [#11770](https://github.com/dbt-labs/dbt-core/issues/11770), [#11889](https://github.com/dbt-labs/dbt-core/issues/11889), [#11916](https://github.com/dbt-labs/dbt-core/issues/11916))
- [@courtneyholcomb](https://github.com/courtneyholcomb) ([#11690](https://github.com/dbt-labs/dbt-core/issues/11690), [#11690](https://github.com/dbt-labs/dbt-core/issues/11690), [#11809](https://github.com/dbt-labs/dbt-core/issues/11809))
- [@emmyoop](https://github.com/emmyoop) ([#10630](https://github.com/dbt-labs/dbt-core/issues/10630), [#12049](https://github.com/dbt-labs/dbt-core/issues/12049), [#11186](https://github.com/dbt-labs/dbt-core/issues/11186), [#9104](https://github.com/dbt-labs/dbt-core/issues/9104), [#12239](https://github.com/dbt-labs/dbt-core/issues/12239), [#5696](https://github.com/dbt-labs/dbt-core/issues/5696), [#12151](https://github.com/dbt-labs/dbt-core/issues/12151))
- [@gshank](https://github.com/gshank) ([#12012](https://github.com/dbt-labs/dbt-core/issues/12012), [#11869](https://github.com/dbt-labs/dbt-core/issues/11869))
- [@mattogburke](https://github.com/mattogburke) ([#12223](https://github.com/dbt-labs/dbt-core/issues/12223))
- [@michellark](https://github.com/michellark) ([#11885](https://github.com/dbt-labs/dbt-core/issues/11885), [#11987](https://github.com/dbt-labs/dbt-core/issues/11987))
- [@michelleark](https://github.com/michelleark) ([#deprecate](https://github.com/dbt-labs/dbt-core/issues/deprecate), [#--models,--model,](https://github.com/dbt-labs/dbt-core/issues/--models,--model,), [#and](https://github.com/dbt-labs/dbt-core/issues/and), [#-m](https://github.com/dbt-labs/dbt-core/issues/-m), [#flags](https://github.com/dbt-labs/dbt-core/issues/flags), [#11335](https://github.com/dbt-labs/dbt-core/issues/11335), [#11659](https://github.com/dbt-labs/dbt-core/issues/11659), [#11847](https://github.com/dbt-labs/dbt-core/issues/11847), [#11725](https://github.com/dbt-labs/dbt-core/issues/11725), [#11911](https://github.com/dbt-labs/dbt-core/issues/11911), [#12002](https://github.com/dbt-labs/dbt-core/issues/12002), [#12019](https://github.com/dbt-labs/dbt-core/issues/12019), [#12047](https://github.com/dbt-labs/dbt-core/issues/12047), [#12046](https://github.com/dbt-labs/dbt-core/issues/12046), [#12240](https://github.com/dbt-labs/dbt-core/issues/12240), [#10540](https://github.com/dbt-labs/dbt-core/issues/10540), [#10165](https://github.com/dbt-labs/dbt-core/issues/10165), [#11755](https://github.com/dbt-labs/dbt-core/issues/11755), [#11826](https://github.com/dbt-labs/dbt-core/issues/11826), [#11839](https://github.com/dbt-labs/dbt-core/issues/11839), [#11855](https://github.com/dbt-labs/dbt-core/issues/11855), [#11879](https://github.com/dbt-labs/dbt-core/issues/11879), [#11880](https://github.com/dbt-labs/dbt-core/issues/11880), [#11940](https://github.com/dbt-labs/dbt-core/issues/11940), [#11998](https://github.com/dbt-labs/dbt-core/issues/11998), [#11984](https://github.com/dbt-labs/dbt-core/issues/11984), [#11973](https://github.com/dbt-labs/dbt-core/issues/11973), [#12013](https://github.com/dbt-labs/dbt-core/issues/12013), [#12136](https://github.com/dbt-labs/dbt-core/issues/12136), [#10157](https://github.com/dbt-labs/dbt-core/issues/10157), [#12152](https://github.com/dbt-labs/dbt-core/issues/12152), 
[#11975](https://github.com/dbt-labs/dbt-core/issues/11975), [#10433](https://github.com/dbt-labs/dbt-core/issues/10433), [#11850](https://github.com/dbt-labs/dbt-core/issues/11850), [#12244](https://github.com/dbt-labs/dbt-core/issues/12244), [#11351](https://github.com/dbt-labs/dbt-core/issues/11351), [#11835](https://github.com/dbt-labs/dbt-core/issues/11835), [#11612](https://github.com/dbt-labs/dbt-core/issues/11612), [#12285](https://github.com/dbt-labs/dbt-core/issues/12285), [#12006](https://github.com/dbt-labs/dbt-core/issues/12006), [#12121](https://github.com/dbt-labs/dbt-core/issues/12121), [#12126](https://github.com/dbt-labs/dbt-core/issues/12126), [#12149](https://github.com/dbt-labs/dbt-core/issues/12149), [#12180](https://github.com/dbt-labs/dbt-core/issues/12180), [#12284](https://github.com/dbt-labs/dbt-core/issues/12284), [#12005](https://github.com/dbt-labs/dbt-core/issues/12005))
- [@mjsqu](https://github.com/mjsqu) ([#12288](https://github.com/dbt-labs/dbt-core/issues/12288))
- [@nathanskone](https://github.com/nathanskone) ([#10157](https://github.com/dbt-labs/dbt-core/issues/10157))
- [@pablomc87](https://github.com/pablomc87) ([#11858](https://github.com/dbt-labs/dbt-core/issues/11858))
- [@peterallenwebb](https://github.com/peterallenwebb) ([#11566](https://github.com/dbt-labs/dbt-core/issues/11566))
- [@theyostalservice](https://github.com/theyostalservice) ([#n/a](https://github.com/dbt-labs/dbt-core/issues/n/a))
- [@trouze](https://github.com/trouze) ([#11919](https://github.com/dbt-labs/dbt-core/issues/11919))
- [@wircho](https://github.com/wircho) ([#11886](https://github.com/dbt-labs/dbt-core/issues/11886), [#11492](https://github.com/dbt-labs/dbt-core/issues/11492))
## Previous Releases

For information on prior major and minor releases, see their changelogs:

* [1.11](https://github.com/dbt-labs/dbt-core/blob/1.11.latest/CHANGELOG.md)
* [1.10](https://github.com/dbt-labs/dbt-core/blob/1.10.latest/CHANGELOG.md)
* [1.9](https://github.com/dbt-labs/dbt-core/blob/1.9.latest/CHANGELOG.md)
* [1.8](https://github.com/dbt-labs/dbt-core/blob/1.8.latest/CHANGELOG.md)
@@ -1,5 +1,5 @@
<p align="center">
  <img src="https://raw.githubusercontent.com/dbt-labs/dbt-core/fa1ea14ddfb1d5ae319d5141844910dd53ab2834/etc/dbt-core.svg" alt="dbt logo" width="750"/>
  <img src="https://raw.githubusercontent.com/dbt-labs/dbt-core/fa1ea14ddfb1d5ae319d5141844910dd53ab2834/docs/images/dbt-core.svg" alt="dbt logo" width="750"/>
</p>
<p align="center">
  <a href="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml">
@@ -9,7 +9,7 @@

**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.



## Understanding dbt
@@ -17,7 +17,7 @@ Analysts using dbt can transform their data by simply writing select statements,

These select statements, or "models", form a dbt project. Models frequently build on top of one another – dbt makes it easy to [manage relationships](https://docs.getdbt.com/docs/ref) between models, and [visualize these relationships](https://docs.getdbt.com/docs/documentation), as well as assure the quality of your transformations through [testing](https://docs.getdbt.com/docs/testing).



## Getting started
@@ -1 +1 @@
version = "1.11.1"
version = "1.12.0a1"
@@ -446,3 +446,5 @@ def setup_manifest(ctx: Context, write: bool = True, write_perf_info: bool = Fal
    adapter.set_macro_resolver(ctx.obj["manifest"])
    query_header_context = generate_query_header_context(adapter.config, ctx.obj["manifest"])  # type: ignore[attr-defined]
    adapter.connections.set_query_header(query_header_context)
    for integration in active_integrations:
        adapter.add_catalog_integration(integration)
@@ -608,8 +608,6 @@ class RuntimeConfigObject(Config):
        if validator is not None:
            self._validate(validator, to_return)

        return to_return

    def get(self, name, default=None, validator=None):
        to_return = self._lookup(name, default)
@@ -558,10 +558,7 @@ def _packages_to_search(
    elif current_project == node_package:
        return [current_project, None]
    else:
        if get_flags().require_ref_searches_node_package_before_root:
            return [node_package, current_project, None]
        else:
            return [current_project, node_package, None]
        return [current_project, node_package, None]
def _sort_values(dct):
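The `_packages_to_search` hunk above can be sketched as a standalone function to show the search-order change guarded by the `require_ref_searches_node_package_before_root` behavior flag. This is a hypothetical simplification: the flag lookup is replaced by a plain boolean parameter instead of `get_flags()`:

```python
from typing import List, Optional


# Hypothetical sketch of the ref search-order change shown in the hunk above.
# The behavior flag is modeled as a plain parameter, not dbt's get_flags().
def packages_to_search(
    current_project: str,
    node_package: str,
    flag_enabled: bool,
) -> List[Optional[str]]:
    if current_project == node_package:
        # Same package: search it, then fall back to an unscoped lookup (None).
        return [current_project, None]
    if flag_enabled:
        # New behavior: the node's own package is searched before the root project.
        return [node_package, current_project, None]
    # Legacy behavior: the root project is searched first on duplicate names.
    return [current_project, node_package, None]
```

The design point is that when two packages define nodes with the same name, the order of this list decides which one a `ref` resolves to; the flag flips that order in favor of the referencing node's own package.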
@@ -367,7 +367,6 @@ class ProjectFlags(ExtensibleDbtClassMixin):
    require_all_warnings_handled_by_warn_error: bool = False
    require_generic_test_arguments_property: bool = True
    require_unique_project_resource_names: bool = False
    require_ref_searches_node_package_before_root: bool = False

    @property
    def project_only_flags(self) -> Dict[str, Any]:
@@ -385,7 +384,6 @@ class ProjectFlags(ExtensibleDbtClassMixin):
            "require_all_warnings_handled_by_warn_error": self.require_all_warnings_handled_by_warn_error,
            "require_generic_test_arguments_property": self.require_generic_test_arguments_property,
            "require_unique_project_resource_names": self.require_unique_project_resource_names,
            "require_ref_searches_node_package_before_root": self.require_ref_searches_node_package_before_root,
        }
@@ -16,15 +16,14 @@ from dbt_common.events.format import (
    pluralize,
    timestamp_to_datetime_string,
)
from dbt_common.ui import (
    deprecation_tag,
    error_tag,
    green,
    line_wrap_message,
    red,
    warning_tag,
    yellow,
)
from dbt_common.ui import deprecation_tag as deprecation_tag_less_strict
from dbt_common.ui import error_tag, green, line_wrap_message, red, warning_tag, yellow


# This makes it so that mypy will complain if a deprecation tag is used without an event name
def _deprecation_tag(description: str, event_name: str) -> str:
    return deprecation_tag_less_strict(description, event_name)
# Event codes have prefixes which follow this table
|
||||
#
|
||||
@@ -260,7 +259,7 @@ class DeprecatedModel(WarnLevel):
|
||||
)
|
||||
|
||||
if require_event_names_in_deprecations():
|
||||
return line_wrap_message(deprecation_tag(msg, self.__class__.__name__))
|
||||
return line_wrap_message(_deprecation_tag(msg, self.__class__.__name__))
|
||||
else:
|
||||
return warning_tag(msg)
|
||||
|
||||
@@ -276,9 +275,9 @@ class PackageRedirectDeprecation(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return line_wrap_message(deprecation_tag(description))
+            return line_wrap_message(deprecation_tag_less_strict(description))


 class PackageInstallPathDeprecation(WarnLevel):
@@ -293,9 +292,9 @@ class PackageInstallPathDeprecation(WarnLevel):
         """

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return line_wrap_message(deprecation_tag(description))
+            return line_wrap_message(deprecation_tag_less_strict(description))


 class ConfigSourcePathDeprecation(WarnLevel):
@@ -309,9 +308,9 @@ class ConfigSourcePathDeprecation(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return line_wrap_message(deprecation_tag(description))
+            return line_wrap_message(deprecation_tag_less_strict(description))


 class ConfigDataPathDeprecation(WarnLevel):
@@ -325,9 +324,9 @@ class ConfigDataPathDeprecation(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return line_wrap_message(deprecation_tag(description))
+            return line_wrap_message(deprecation_tag_less_strict(description))


 class MetricAttributesRenamed(WarnLevel):
@@ -345,9 +344,9 @@ class MetricAttributesRenamed(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return deprecation_tag(description)
+            return deprecation_tag_less_strict(description)


 class ExposureNameDeprecation(WarnLevel):
@@ -364,9 +363,9 @@ class ExposureNameDeprecation(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return line_wrap_message(deprecation_tag(description))
+            return line_wrap_message(deprecation_tag_less_strict(description))


 class InternalDeprecation(WarnLevel):
@@ -383,7 +382,7 @@ class InternalDeprecation(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return deprecation_tag(msg, self.__class__.__name__)
+            return _deprecation_tag(msg, self.__class__.__name__)
         else:
             return warning_tag(msg)

@@ -401,9 +400,9 @@ class EnvironmentVariableRenamed(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return line_wrap_message(deprecation_tag(description))
+            return line_wrap_message(deprecation_tag_less_strict(description))


 class ConfigLogPathDeprecation(WarnLevel):
@@ -422,9 +421,9 @@ class ConfigLogPathDeprecation(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return line_wrap_message(deprecation_tag(description))
+            return line_wrap_message(deprecation_tag_less_strict(description))


 class ConfigTargetPathDeprecation(WarnLevel):
@@ -443,9 +442,9 @@ class ConfigTargetPathDeprecation(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return line_wrap_message(deprecation_tag(description))
+            return line_wrap_message(deprecation_tag_less_strict(description))


 # Note: this deprecation has been removed, but we are leaving
@@ -462,9 +461,9 @@ class TestsConfigDeprecation(WarnLevel):
         )

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
-            return line_wrap_message(deprecation_tag(description))
+            return line_wrap_message(deprecation_tag_less_strict(description))


 class ProjectFlagsMovedDeprecation(WarnLevel):
@@ -478,9 +477,9 @@ class ProjectFlagsMovedDeprecation(WarnLevel):
         )
         # Can't use line_wrap_message here because flags.printer_width isn't available yet
         if require_event_names_in_deprecations():
-            return deprecation_tag(description, self.__class__.__name__)
+            return _deprecation_tag(description, self.__class__.__name__)
         else:
-            return deprecation_tag(description)
+            return deprecation_tag_less_strict(description)


 class SpacesInResourceNameDeprecation(DynamicLevel):
@@ -496,7 +495,7 @@ class SpacesInResourceNameDeprecation(DynamicLevel):
         description = warning_tag(description)

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
             return line_wrap_message(description)

@@ -514,7 +513,7 @@ class ResourceNamesWithSpacesDeprecation(WarnLevel):
         description += " For more information: https://docs.getdbt.com/reference/global-configs/legacy-behaviors"

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
             return line_wrap_message(warning_tag(description))

@@ -527,7 +526,7 @@ class PackageMaterializationOverrideDeprecation(WarnLevel):
         description = f"Installed package '{self.package_name}' is overriding the built-in materialization '{self.materialization_name}'. Overrides of built-in materializations from installed packages will be deprecated in future versions of dbt. For more information: https://docs.getdbt.com/reference/global-configs/legacy-behaviors"

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
             return line_wrap_message(warning_tag(description))

@@ -540,7 +539,7 @@ class SourceFreshnessProjectHooksNotRun(WarnLevel):
         description = "In a future version of dbt, the `source freshness` command will start running `on-run-start` and `on-run-end` hooks by default. For more information: https://docs.getdbt.com/reference/global-configs/legacy-behaviors"

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
             return line_wrap_message(warning_tag(description))

@@ -553,7 +552,7 @@ class MFTimespineWithoutYamlConfigurationDeprecation(WarnLevel):
         description = "Time spines without YAML configuration are in the process of deprecation. Please add YAML configuration for your 'metricflow_time_spine' model. See documentation on MetricFlow time spines: https://docs.getdbt.com/docs/build/metricflow-time-spine and behavior change documentation: https://docs.getdbt.com/reference/global-configs/behavior-changes."

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
             return line_wrap_message(warning_tag(description))

@@ -566,7 +565,7 @@ class MFCumulativeTypeParamsDeprecation(WarnLevel):
         description = "Cumulative fields `type_params.window` and `type_params.grain_to_date` have been moved and will soon be deprecated. Please nest those values under `type_params.cumulative_type_params.window` and `type_params.cumulative_type_params.grain_to_date`. See documentation on behavior changes: https://docs.getdbt.com/reference/global-configs/behavior-changes."

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
             return line_wrap_message(warning_tag(description))

@@ -579,7 +578,7 @@ class MicrobatchMacroOutsideOfBatchesDeprecation(WarnLevel):
         description = "The use of a custom microbatch macro outside of batched execution is deprecated. To use it with batched execution, set `flags.require_batched_execution_for_custom_microbatch_strategy` to `True` in `dbt_project.yml`. In the future this will be the default behavior."

         if require_event_names_in_deprecations():
-            return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+            return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))
         else:
             return line_wrap_message(warning_tag(description))

@@ -599,7 +598,7 @@ class GenericJSONSchemaValidationDeprecation(WarnLevel):
         else:
             description = f"{self.violation} in file `{self.file}` at path `{self.key_path}` is possibly a deprecation. {possible_causes}"

-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class UnexpectedJinjaBlockDeprecation(WarnLevel):
@@ -608,7 +607,7 @@ class UnexpectedJinjaBlockDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"{self.msg} in file `{self.file}`"
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class DuplicateYAMLKeysDeprecation(WarnLevel):
@@ -617,7 +616,7 @@ class DuplicateYAMLKeysDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"{self.duplicate_description} in file `{self.file}`"
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class CustomTopLevelKeyDeprecation(WarnLevel):
@@ -626,7 +625,7 @@ class CustomTopLevelKeyDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"{self.msg} in file `{self.file}`"
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class CustomKeyInConfigDeprecation(WarnLevel):
@@ -639,7 +638,7 @@ class CustomKeyInConfigDeprecation(WarnLevel):
             path_specification = f" at path `{self.key_path}`"

         description = f"Custom key `{self.key}` found in `config`{path_specification} in file `{self.file}`. Custom config keys should move into the `config.meta`."
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class CustomKeyInObjectDeprecation(WarnLevel):
@@ -648,7 +647,7 @@ class CustomKeyInObjectDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"Custom key `{self.key}` found at `{self.key_path}` in file `{self.file}`. This may mean the key is a typo, or is simply not a key supported by the object."
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class DeprecationsSummary(WarnLevel):
@@ -665,7 +664,7 @@ class DeprecationsSummary(WarnLevel):
         if self.show_all_hint:
             description += "\n\nTo see all deprecation instances instead of just the first occurrence of each, run command again with the `--show-all-deprecations` flag. You may also need to run with `--no-partial-parse` as some deprecations are only encountered during parsing."

-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class CustomOutputPathInSourceFreshnessDeprecation(WarnLevel):
@@ -674,7 +673,7 @@ class CustomOutputPathInSourceFreshnessDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"Custom output path usage `--output {self.path}` usage detected in `dbt source freshness` command."
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class PropertyMovedToConfigDeprecation(WarnLevel):
@@ -683,7 +682,7 @@ class PropertyMovedToConfigDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"Found `{self.key}` as a top-level property of `{self.key_path}` in file `{self.file}`. The `{self.key}` top-level property should be moved into the `config` of `{self.key_path}`."
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class WEOIncludeExcludeDeprecation(WarnLevel):
@@ -703,7 +702,7 @@ class WEOIncludeExcludeDeprecation(WarnLevel):
         if self.found_exclude:
             description += " Please use `warn` instead of `exclude`."

-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class ModelParamUsageDeprecation(WarnLevel):
@@ -712,7 +711,7 @@ class ModelParamUsageDeprecation(WarnLevel):

     def message(self) -> str:
         description = "Usage of `--models`, `--model`, and `-m` is deprecated in favor of `--select` or `-s`."
-        return line_wrap_message(deprecation_tag(description))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class ModulesItertoolsUsageDeprecation(WarnLevel):
@@ -723,7 +722,7 @@ class ModulesItertoolsUsageDeprecation(WarnLevel):
         description = (
             "Usage of itertools modules is deprecated. Please use the built-in functions instead."
         )
-        return line_wrap_message(deprecation_tag(description))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class SourceOverrideDeprecation(WarnLevel):
@@ -732,7 +731,7 @@ class SourceOverrideDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"The source property `overrides` is deprecated but was found on source `{self.source_name}` in file `{self.file}`. Instead, `enabled` should be used to disable the unwanted source."
-        return line_wrap_message(deprecation_tag(description))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class EnvironmentVariableNamespaceDeprecation(WarnLevel):
@@ -741,7 +740,7 @@ class EnvironmentVariableNamespaceDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"Found custom environment variable `{self.env_var}` in the environment. The prefix `{self.reserved_prefix}` is reserved for dbt engine environment variables. Custom environment variables with the prefix `{self.reserved_prefix}` may cause collisions and runtime errors."
-        return line_wrap_message(deprecation_tag(description))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class MissingPlusPrefixDeprecation(WarnLevel):
@@ -750,7 +749,7 @@ class MissingPlusPrefixDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"Missing '+' prefix on `{self.key}` found at `{self.key_path}` in file `{self.file}`. Hierarchical config values without a '+' prefix are deprecated in dbt_project.yml."
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class ArgumentsPropertyInGenericTestDeprecation(WarnLevel):
@@ -759,7 +758,7 @@ class ArgumentsPropertyInGenericTestDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"Found `arguments` property in test definition of {self.test_name} without usage of `require_generic_test_arguments_property` behavior change flag. The `arguments` property is deprecated for custom usage and will be used to nest keyword arguments in future versions of dbt."
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class MissingArgumentsPropertyInGenericTestDeprecation(WarnLevel):
@@ -768,7 +767,7 @@ class MissingArgumentsPropertyInGenericTestDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"Found top-level arguments to test {self.test_name}. Arguments to generic tests should be nested under the `arguments` property."
-        return line_wrap_message(deprecation_tag(description, self.__class__.__name__))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 class DuplicateNameDistinctNodeTypesDeprecation(WarnLevel):
@@ -777,7 +776,7 @@ class DuplicateNameDistinctNodeTypesDeprecation(WarnLevel):

     def message(self) -> str:
         description = f"Found resources with the same name '{self.resource_name}' in package '{self.package_name}': '{self.unique_id1}' and '{self.unique_id2}'. Please update one of the resources to have a unique name."
-        return line_wrap_message(deprecation_tag(description))
+        return line_wrap_message(_deprecation_tag(description, self.__class__.__name__))


 # =======================================================
@@ -1271,19 +1270,6 @@ class InvalidMacroAnnotation(WarnLevel):
         return self.msg


-class PackageNodeDependsOnRootProjectNode(WarnLevel):
-    def code(self) -> str:
-        return "I077"
-
-    def message(self) -> str:
-        msg = (
-            f"The node '{self.node_name}' in package '{self.package_name}' depends on the root project node '{self.root_project_unique_id}'. "
-            "This may lead to unexpected cycles downstream. Please set the 'require_ref_prefers_node_package_to_root' behavior change flag to True to avoid this issue. "
-            "For more information, see the documentation at https://docs.getdbt.com/reference/global-configs/behavior-changes#require_ref_prefers_node_package_to_root"
-        )
-        return warning_tag(msg)


 # =======================================================
 # M - Deps generation
 # =======================================================
@@ -37,10 +37,6 @@ _HIERARCHICAL_CONFIG_KEYS = {
     "unit_tests",
 }

-_ADAPTER_TO_CONFIG_ALIASES = {
-    "bigquery": ["dataset", "project"],
-}
-

 def load_json_from_package(jsonschema_type: str, filename: str) -> Dict[str, Any]:
     """Loads a JSON file from within a package."""
@@ -110,16 +106,6 @@ def _validate_with_schema(
     return validator.iter_errors(json)


-def _get_allowed_config_key_aliases() -> List[str]:
-    config_aliases = []
-    invocation_context = get_invocation_context()
-    for adapter in invocation_context.adapter_types:
-        if adapter in _ADAPTER_TO_CONFIG_ALIASES:
-            config_aliases.extend(_ADAPTER_TO_CONFIG_ALIASES[adapter])
-
-    return config_aliases
-
-
 def _get_allowed_config_fields_from_error_path(
     yml_schema: Dict[str, Any], error_path: List[Union[str, int]]
 ) -> Optional[List[str]]:
@@ -149,7 +135,6 @@ def _get_allowed_config_fields_from_error_path(
     ][0]["$ref"].split("/")[-1]

     allowed_config_fields = list(set(yml_schema["definitions"][config_field_name]["properties"]))
-    allowed_config_fields.extend(_get_allowed_config_key_aliases())

     return allowed_config_fields
@@ -184,6 +169,7 @@ def jsonschema_validate(schema: Dict[str, Any], json: Dict[str, Any], file_path:
                 continue

+            if key == "overrides" and key_path.startswith("sources"):
                 deprecations.warn(
                     "source-override-deprecation",
                     source_name=key_path.split(".")[-1],
@@ -219,9 +205,6 @@ def jsonschema_validate(schema: Dict[str, Any], json: Dict[str, Any], file_path:
             keys = _additional_properties_violation_keys(sub_error)
             key_path = error_path_to_string(error)
             for key in keys:
-                if key in _get_allowed_config_key_aliases():
-                    continue
-
                 deprecations.warn(
                     "custom-key-in-config-deprecation",
                     key=key,
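The hunks above remove the per-adapter alias allowlist from the custom-config-key check. The shape of that check can be sketched in isolation; the field names, alias table contents, and sample data below are illustrative stand-ins, not dbt's real schema:

```python
# Flag keys in a `config` block that are not in the allowed property set,
# skipping aliases contributed by active adapters (the mechanism the diff removes).
ALLOWED_CONFIG_FIELDS = {"materialized", "enabled", "tags", "meta"}
ADAPTER_ALIASES = {"bigquery": ["dataset", "project"]}

def custom_config_keys(config: dict, active_adapters: list) -> list:
    allowed = set(ALLOWED_CONFIG_FIELDS)
    for adapter in active_adapters:
        allowed.update(ADAPTER_ALIASES.get(adapter, []))
    # Any remaining key would trigger a custom-key-in-config deprecation warning.
    return sorted(k for k in config if k not in allowed)

print(custom_config_keys({"materialized": "table", "dataset": "x", "foo": 1}, ["bigquery"]))
# → ['foo']
```

With the allowlist removed, `dataset` would also be reported, which is why the diff drops the alias table and its lookup together.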
@@ -77,7 +77,6 @@ from dbt.events.types import (
     InvalidDisabledTargetInTestNode,
     MicrobatchModelNoEventTimeInputs,
     NodeNotFoundOrDisabled,
-    PackageNodeDependsOnRootProjectNode,
     ParsedFileLoadFailed,
     ParsePerfInfoPath,
     PartialParsingError,
@@ -1637,33 +1636,6 @@ def invalid_target_fail_unless_test(
     )


-def warn_if_package_node_depends_on_root_project_node(
-    node: ManifestNode,
-    target_model: ManifestNode,
-    ref_package_name: Optional[str],
-    current_project: str,
-) -> None:
-    """
-    Args:
-        node: The node that specifies the ref
-        target_model: The node that is being ref'd to
-        ref_package_name: The package name specified in the ref
-        current_project: The root project
-    """
-    if (
-        node.package_name != current_project
-        and target_model.package_name == current_project
-        and ref_package_name != current_project
-    ):
-        warn_or_error(
-            PackageNodeDependsOnRootProjectNode(
-                node_name=node.name,
-                package_name=node.package_name,
-                root_project_unique_id=target_model.unique_id,
-            )
-        )
-
-
 def _build_model_names_to_versions(manifest: Manifest) -> Dict[str, Dict]:
     model_names_to_versions: Dict[str, Dict] = {}
     for node in manifest.nodes.values():
@@ -1921,11 +1893,6 @@ def _process_refs(
                 scope=target_model.package_name,
             )

-        if not get_flags().require_ref_searches_node_package_before_root:
-            warn_if_package_node_depends_on_root_project_node(
-                node, target_model, ref.package, current_project
-            )
-
         target_model_id = target_model.unique_id
         node.depends_on.add_node(target_model_id)
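The removed helper warns when a node that lives in an installed package refs a model that resolved back to the root project, which can create unexpected cycles. Its predicate can be restated on its own, with plain strings standing in for the manifest nodes (names here are illustrative):

```python
from typing import Optional

def package_node_depends_on_root(
    node_package: str,
    target_package: str,
    ref_package: str | None if False else Optional[str],  # Optional[str], spelled compatibly
    root_project: str,
) -> bool:
    # True when a non-root node's ref resolved to a root-project node
    # without the ref explicitly naming the root project.
    return (
        node_package != root_project
        and target_package == root_project
        and ref_package != root_project
    )

print(package_node_depends_on_root("some_pkg", "my_project", None, "my_project"))  # → True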
@@ -249,17 +249,34 @@ class GraphRunnableTask(ConfiguredTask):
                     thread_exception = e
                 finally:
                     if result is not None:
-                        try:
-                            fire_event(
-                                NodeFinished(
-                                    node_info=runner.node.node_info,
-                                    run_result=result.to_msg_dict(),
-                                )
-                            )
-                        except Exception as e:
-                            result = self._handle_thread_exception(runner, e)
+                        fire_event(
+                            NodeFinished(
+                                node_info=runner.node.node_info,
+                                run_result=result.to_msg_dict(),
+                            )
+                        )
                     else:
-                        result = self._handle_thread_exception(runner, thread_exception)
+                        msg = f"Exception on worker thread. {thread_exception}"
+
+                        fire_event(
+                            GenericExceptionOnRun(
+                                unique_id=runner.node.unique_id,
+                                exc=str(thread_exception),
+                                node_info=runner.node.node_info,
+                            )
+                        )
+
+                        result = RunResult(
+                            status=RunStatus.Error,  # type: ignore
+                            timing=[],
+                            thread_id="",
+                            execution_time=0.0,
+                            adapter_response={},
+                            message=msg,
+                            failures=None,
+                            batch_results=None,
+                            node=runner.node,
+                        )

                     # `_event_status` dict is only used for logging. Make sure
                     # it gets deleted when we're done with it
@@ -348,32 +365,6 @@ class GraphRunnableTask(ConfiguredTask):
             args = [runner]
             self._submit(pool, args, callback)

-    def _handle_thread_exception(
-        self,
-        runner: BaseRunner,
-        thread_exception: Optional[Union[KeyboardInterrupt, SystemExit, Exception]],
-    ) -> RunResult:
-        msg = f"Exception on worker thread. {thread_exception}"
-        fire_event(
-            GenericExceptionOnRun(
-                unique_id=runner.node.unique_id,
-                exc=str(thread_exception),
-                node_info=runner.node.node_info,
-            )
-        )
-
-        return RunResult(
-            status=RunStatus.Error,  # type: ignore
-            timing=[],
-            thread_id="",
-            execution_time=0.0,
-            adapter_response={},
-            message=msg,
-            failures=None,
-            batch_results=None,
-            node=runner.node,
-        )
-
     def _handle_result(self, result: RunResult) -> None:
         """Mark the result as completed, insert the `CompileResultNode` into
         the manifest, and mark any descendants (potentially with a 'cause' if
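The two hunks above move between two equivalent shapes for the same idea: when a worker thread raises, convert the exception into an error result in one place. A toy version of that shape, using a plain `ThreadPool` and dict results instead of dbt's `RunResult` (all names here are illustrative):

```python
from multiprocessing.pool import ThreadPool

def handle_thread_exception(task_name: str, exc: BaseException) -> dict:
    # Single conversion point from a worker-thread exception to an error record.
    return {"task": task_name, "status": "error",
            "message": f"Exception on worker thread. {exc}"}

def run_task(name: str) -> dict:
    if name == "bad":
        raise ValueError("boom")
    return {"task": name, "status": "success", "message": ""}

def execute(name: str) -> dict:
    try:
        return run_task(name)
    except Exception as e:
        return handle_thread_exception(name, e)

with ThreadPool(2) as pool:
    results = pool.map(execute, ["good", "bad"])
print([r["status"] for r in results])  # → ['success', 'error']
```

Whether the conversion lives in a helper (the removed `_handle_thread_exception`) or is inlined in the `finally` block is the design choice this diff makes; the behavior is the same.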
@@ -37,7 +37,7 @@ dependencies = [
     # ----
     # dbt-core uses these packages in standard ways. Pin to the major version, and check compatibility
     # with major versions in each new minor version of dbt-core.
-    "click>=8.2.0,<9.0",
+    "click>=8.0.2,<9.0",
     "jsonschema>=4.19.1,<5.0",
     "networkx>=2.3,<4.0",
     "protobuf>=6.0,<7.0",
@@ -47,14 +47,14 @@ dependencies = [
     # These packages are major-version-0. Keep upper bounds on upcoming minor versions (which could have breaking changes)
     # and check compatibility / bump in each new minor version of dbt-core.
     "pathspec>=0.9,<0.13",
-    "sqlparse>=0.5.0,<0.5.5",
+    "sqlparse>=0.5.0,<0.6.0",
     # ----
     # These are major-version-0 packages also maintained by dbt-labs.
     # Accept patches but avoid automatically updating past a set minor version range.
     "dbt-extractor>=0.5.0,<=0.6",
     "dbt-semantic-interfaces>=0.9.0,<0.10",
     # Minor versions for these are expected to be backwards-compatible
-    "dbt-common>=1.37.2,<2.0",
+    "dbt-common>=1.37.0,<2.0",
     "dbt-adapters>=1.15.5,<2.0",
     "dbt-protos>=1.0.405,<2.0",
     "pydantic<3",
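The hunk above loosens several pins (e.g. click from `>=8.2.0` to `>=8.0.2`). A quick way to sanity-check a version against such a range without extra dependencies is a tuple comparison; real resolvers use the `packaging` library instead, so treat this as an illustrative sketch only, and note it handles plain numeric versions, not pre-releases:

```python
def version_tuple(v: str) -> tuple:
    # "8.0.2" -> (8, 0, 2); numeric components only.
    return tuple(int(part) for part in v.split("."))

def in_range(v: str, lower: str, upper: str) -> bool:
    # Checks lower <= v < upper, mirroring pins like ">=8.0.2,<9.0".
    return version_tuple(lower) <= version_tuple(v) < version_tuple(upper)

print(in_range("8.1.3", "8.0.2", "9.0"))  # the loosened click pin accepts 8.1.x
print(in_range("8.1.3", "8.2.0", "9.0"))  # the old pin would have rejected it
```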
@@ -1,5 +0,0 @@
-# The create_adapter_plugins script is being replaced by a new interactive cookiecutter scaffold
-# that can be found https://github.com/dbt-labs/dbt-database-adapter-scaffold
-print(
-    "This script has been deprecated, to create a new adapter please visit https://github.com/dbt-labs/dbt-database-adapter-scaffold"
-)
@@ -1,11 +0,0 @@
-## ADRs
-
-For any architectural/engineering decisions we make, we will create an ADR (Architectural Design Record) to keep track of what decision we made and why. This allows us to refer back to decisions in the future and see if the reasons we made a choice still hold true. It also allows others to more easily understand the code. ADRs will follow this process:
-
-- They will live in the repo, under a directory `docs/arch`
-- They will be written in markdown
-- They will follow the naming convention `adr-NNN-<decision-title>.md`
-    - `NNN` is just a counter starting at `001` and allows us to easily keep the records in chronological order.
-- The common sections that each ADR should have are:
-    - Title, Context, Decision, Status, Consequences
-- Use this article as a reference: [Documenting Architecture Decisions](https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions)
@@ -1,35 +0,0 @@
-# Performance Regression Framework
-
-## Context
-We want the ability to benchmark our performance over time as new changes land.
-
-### Options
-- Static Window: Compare the develop branch to the fastest version and ensure it doesn't exceed a static window (i.e. time parse on develop and time parse on 0.20.latest and make sure it's not more than 5% slower)
-    - Pro: quick to run
-    - Pro: simple to implement
-    - Con: rerunning a failing test could get it to pass in a large number of changes.
-    - Con: several small regressions could press us up against the threshold, requiring us to do unexpected additional performance work or lower the threshold to get a release out.
-- Variance-aware Testing: Run both the develop branch and our fastest version *many times* to collect a set of timing data. We can fail on a static window based on medians, confidence interval midpoints, and even variance magnitude.
-    - Pro: would catch more small performance regressions
-    - Con: would take much longer to run
-    - Con: need to be very careful about making sure caching doesn't wreck the curve (or if it does, that it wrecks the curve equally for all tests)
-- Stateful Tracking: For example, the Rust compiler team does some [bananas performance tracking](https://perf.rust-lang.org/). This option could be done in tandem with the above options; however, it would require results to be stored somewhere.
-    - Pro: we can graph our performance history and look really cool.
-    - Pro: variance-aware testing would run in half the time since you can just reference old runs for comparison
-    - Con: state in tests sucks
-    - Con: longer to build
-- Performance Profiling: Running a sampling-based profiler through a series of standardized test runs (tests designed to hit as many/all of the code paths in the codebase) to determine if any particular function/class/other code has regressed in performance.
-    - Pro: easy to find the cause of the performance regression
-    - Pro: should be able to run on a fairly small project size without losing much test resolution (a 5% change in a function should be evident with even a single case that runs that code path)
-    - Con: complex to build
-    - Con: compute intensive
-    - Con: requires stored results to compare against
-
-## Decision
-We decided to start with variance-aware testing, with the ability to add stateful tracking later, by leveraging `hyperfine`, which does all the variance work for us and outputs clear JSON artifacts. Since we're running performance testing on a schedule, it doesn't matter that the suite may take hours to run as we add more tests. The artifacts are all stored in the GitHub Actions runs today, but could easily be sent somewhere from the action to track over time.
-
-## Status
-Completed
-
-## Consequences
-We now have the ability to more rigorously detect performance regressions, but we do not have a solid way to identify where a regression is coming from. Adding performance profiling capabilities will help with this, but for now running nightly should help us narrow it down to specific commits. As we add more performance tests, the testing matrix may take hours to run, which consumes resources on GitHub Actions. Because performance testing is asynchronous, failures are easier to miss or ignore, and because it is non-deterministic it adds a non-trivial amount of complexity to our development process.
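The variance-aware check the ADR describes boils down to: take hyperfine's JSON export and fail when the candidate's median exceeds the baseline median by more than a threshold. A sketch of that comparison — the `results`/`median` field names match hyperfine's `--export-json` format, but the 5% threshold and the timing data below are illustrative:

```python
import json

def regression(baseline_median: float, candidate_median: float, threshold: float = 0.05) -> bool:
    # True when the candidate is more than `threshold` (here 5%) slower than baseline.
    return candidate_median > baseline_median * (1 + threshold)

# Example shaped like hyperfine's export: {"results": [{"command": ..., "median": ...}, ...]}
report = json.loads(
    '{"results": [{"command": "baseline", "median": 10.0},'
    ' {"command": "candidate", "median": 10.8}]}'
)
base, cand = (r["median"] for r in report["results"])
print(regression(base, cand))  # → True
```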
@@ -1,34 +0,0 @@
|
||||
# Structured Logging Arch
|
||||
|
||||
## Context
|
||||
Consumers of dbt have been relying on log parsing well before this change. However, our logs were never optimized for programatic consumption, nor were logs treated like a formal interface between dbt and users. dbt's logging strategy was changed explicitly to address these two realities.
### Options

#### How to structure the data

- Using a library like structlog to represent log data with structural types like dictionaries. This would allow us to easily add data to a log event's context at each call site and have structlog do all the string formatting and IO work.
- Creating our own nominal type layer that describes each event in source. This allows event fields to be enforced statically via mypy across all call sites.

#### How to output the data

- Using structlog to output log lines regardless of whether we used it to represent the data. The defaults for structlog are good, and it handles JSON vs. text and formatting for us.
- Using the std lib logger to log our messages more manually. Easy to use, but does far less for us.

## Decision

#### How to structure the data

We decided to go with a custom nominal type layer even though this was going to be more work. This type layer centralizes our assumptions about what data each log event contains, and allows us to use mypy to enforce these centralized assumptions across the codebase. This is all for the purpose of treating logs like a formal interface between dbt and users. Here are two concrete, practical examples of how this pattern is used:

1. On the abstract superclass of all events, there are abstract methods and fields that each concrete class must implement, such as `level_tag()` and `code`. If you make a new concrete event type without those, mypy will fail and tell you that you need them, preventing lost log lines and JSON log events without a computer-friendly code.

2. On each concrete event, the fields we need to construct the message are explicitly in the source of the class. At every call site, if you construct an event without all the necessary data, mypy will fail and tell you which fields you are missing.

Using mypy to enforce these assumptions is a step better than testing because we do not need to write tests to run through every branch of dbt that execution could take. Because it is checked statically on every file, mypy will give us these guarantees as long as it is configured to run everywhere.
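A simplified sketch of what such a nominal type layer could look like (the class names, event code, and fields here are hypothetical, not dbt's actual event definitions):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Event(ABC):
    """Abstract superclass: every concrete event must implement these."""

    @abstractmethod
    def level_tag(self) -> str: ...

    @property
    @abstractmethod
    def code(self) -> str: ...

    @abstractmethod
    def message(self) -> str: ...


@dataclass
class ModelStarted(Event):
    # mypy (and the dataclass __init__) flag any call site missing this field
    model_name: str

    def level_tag(self) -> str:
        return "info"

    @property
    def code(self) -> str:
        return "Z001"

    def message(self) -> str:
        return f"Started running model {self.model_name}"


event = ModelStarted(model_name="my_model")
```

Omitting `model_name` at a call site, or omitting any of the abstract members on a new event class, is a static type error rather than a runtime surprise.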

#### How to output the data

We decided to use the std lib logger because it was far more difficult than we expected to get structlog to work properly. Documentation was lacking, and reading the source code wasn't a quick way to learn. The std lib logger was used mostly out of necessity, and because many of the pleasantries you get from using a log library we had already chosen to implement explicitly with functions in our nominal typing layer. Swapping out the std lib logger in the future should be an easy task should we choose to do it.

## Status

Completed

## Consequences

Adding a new log event is more cumbersome than it was previously: instead of writing the message at the log call site, you must create a new concrete class in the event types. This is more opaque for new contributors. The JSON serialization approach we are using via `asdict` is fragile and unoptimized and should be replaced.

All user-facing log messages now live in one file, which makes the job of conforming them much simpler. Because they are all nominally typed separately, this opens up the possibility of generating log documentation from the type hints, as well as outputting our logs in multiple human languages if we want to translate our messages.

@@ -1,68 +0,0 @@

# Python Model Arch

## Context

We are thinking of supporting `python` ([roadmap](https://github.com/dbt-labs/dbt-core/blob/main/docs/roadmap/2022-05-dbt-a-core-story.md#scene-3-python-language-dbt-models), [discussion](https://github.com/dbt-labs/dbt-core/discussions/5261)) as a language other than SQL in dbt-core. This would allow users to express transformation logic that is tricky to do in SQL and have more libraries available to them.

### Options

#### Where to run the code

- Running it locally, where we run dbt-core.
- Running it in the cloud providers' environment.

#### What are the guardrails dbt would enforce for the python model

- None; users can write whatever code they like.
- Focusing on data transformation logic, where each python model should have a model function that returns a database object for dbt to materialize.

#### Where should the implementation live

Two places we need to consider are `dbt-core` and each individual adapter codebase. What are the pieces needed? How do we decide what goes where?

#### Are we going to allow writing macros in python

- Not allowing it.
- Allowing certain Jinja templating.
- Allowing everything.

## Decisions

#### Where to run the code

In the same spirit as "dbt is not your query engine," we don't want dbt to be your python runtime. Instead, we want dbt to focus on being the place to express transformation logic. So python models will follow the existing pattern of SQL models: parse and compile user-written logic and submit it to your computation engine.

#### What are the guardrails dbt would enforce for the python model

We want dbt to focus on transformation logic, so we opt for setting up some tools and guardrails for python models to focus on doing data transformation.

1. A `dbt` object would have functions, including `dbt.ref` and `dbt.source`, to reference other models and sources in the dbt project; the return value of each function will be a dataframe of the referenced resource.
1. Code in the python model node should include a model function that takes the `dbt` object as an argument, does the data transformation logic inside, and returns a dataframe at the end. We think folks should load their data into dataframes using the provided `dbt.ref` and `dbt.source` rather than raw data references. We also think logic to write a dataframe to database objects should live in materialization logic instead of transformation code.
1. That `dbt` object should also have an attribute called `dbt.config` to allow users to define configurations of the current python model, like materialization logic, specific versions of python libraries, etc. This `dbt.config` object should also provide a clear access function for variables defined in project YAML. This way users can access arbitrary configuration at runtime.
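A minimal sketch of a model function following the guardrails above. The `FakeDbt` class here is a hypothetical stand-in for illustration (it returns plain lists instead of dataframes); the real `dbt` object is provided at runtime:

```python
class FakeDbt:
    """Hypothetical stand-in for the `dbt` object described above."""

    def config(self, **kwargs):
        # In a real run this would register model configuration.
        self._config = kwargs

    def ref(self, name):
        # In a real run this would load the referenced model as a dataframe.
        return [{"id": 1, "status": "ok"}, {"id": 2, "status": "error"}]


def model(dbt):
    dbt.config(materialized="table")        # model configuration
    rows = dbt.ref("upstream_model")        # load a referenced resource
    # transformation logic only; writing results back is materialization's job
    return [r for r in rows if r["status"] == "ok"]


result = model(FakeDbt())
```

The function loads data only via `dbt.ref`, transforms it, and returns the result; it never writes to the warehouse itself.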

#### Where should the implementation live

Logic in core should be universal and carry the opinions we have for the feature. This includes, but is not limited to:

1. Parsing of the python file in dbt-core to get the `ref`, `source`, and `config` information. This information is used to place the python model in the correct place in the project DAG and to generate the correct python code sent to the compute engine.
1. `language` as a new top-level node property.
1. Python template code that is not cloud provider-specific; this includes the implementation of `dbt.ref` and `dbt.source`. We would use an AST parser to parse out all of the `dbt.ref` and `dbt.source` calls during parse time, and generate what database resources those point to during compile time. This should allow users to copy-paste the "compiled" code and run it themselves against the data warehouse, just like with SQL models. An example of a definition for `dbt.ref` could look like this:
```python
def ref(*args):
    refs = {"my_sql_model": "DBT_TEST.DBT_SOMESCHEMA.my_sql_model"}
    key = ".".join(args)
    return load_df_function(refs[key])
```

1. Functional tests for the python model; these tests are expected to be inherited in the adapter code to make sure the intended functionality is met.
1. Generalizing the names of properties (`sql`, `raw_sql`, `compiled_sql`) for a future where it's not all SQL.
1. Implementation of the restrictions we have for python models.

Compute-engine-specific logic should live in adapters, including but not limited to:

- `load_df_function`, for how to load a dataframe for a given database resource,
- `materialize`, for how to save a dataframe to a table or other materialization formats,
- some kind of `submit_python` function for submitting python code to the compute engine,
- additions or modifications to the `materialization` macro to materialize the python model.

#### Are we going to allow writing macros in python

We don't know yet. We use macros in SQL models because they allow us to achieve what SQL can't do on its own. But with python being a general-purpose programming language, we don't see a strong need for macros in python yet. So we plan to strictly disable that in user-written code in the beginning, and potentially allow more as we hear from the community.

## Status

Implementing

## Consequences

Users would be able to write python transformation models in dbt and run them as part of their data transformation workflow.

@@ -1,53 +0,0 @@

# Use of betterproto package for generating Python message classes

## Context

We are providing proto definitions for our structured logging messages, and as part of that we need to also have Python classes for use in our Python codebase.

### Options, August 30, 2022

#### Google protobuf package

You can use the google protobuf package to generate Python "classes" using the protobuf compiler, `protoc`, with the `--python_out` option.

* It's not readable. There are no identifiable classes in the output.
* A "class" is generated using a metaclass when it is used.
* You can't subclass the generated classes, which don't act much like Python objects.
* Since you can't put defaults or methods of any kind in these classes, and you can't subclass them, they aren't very usable in Python.
* Generated classes are not easily importable.
* Serialization is via external utilities.
* Mypy and flake8 totally fail, so you have to exclude the generated files in the pre-commit config.

#### betterproto package

* It generates readable "dataclass" classes.
* You can subclass the generated classes. (Though you still can't add additional attributes. But if we really needed to, we might be able to modify the source code to do so.)
* It integrates much more easily with our codebase.
* Serialization (`to_dict` and `to_json`) is built in.
* Mypy and flake8 work on the generated files.
* Additional benefits are listed in the [betterproto](https://github.com/danielgtaylor/python-betterproto) README.

## Revisited, March 21, 2023

We are switching away from using betterproto for the following reasons:

* betterproto only supports Optional fields in a beta release
* betterproto has had only beta releases for a few years
* betterproto doesn't support Struct, which we really need
* betterproto started changing our message names to be more "pythonic"

Steps taken to mitigate the drawbacks of Google protobuf from above:

* We are using a wrapping class around the logging events to enable a constructor that looks more like a Python constructor, as long as only keyword arguments are used.
* The generated file is skipped in the pre-commit config.
* We can live with the awkward interfaces. It's just code.
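The wrapping-class idea above can be sketched as follows. Note this is purely illustrative: the `RunModelMsg` dataclass is a stand-in for a protoc-generated message class, and the names are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class RunModelMsg:
    """Stand-in for a generated protobuf message (illustrative only)."""
    model_id: str = ""
    run_status: str = ""


class RunModel:
    """Wrapper giving the generated message a Python-style constructor,
    accepting keyword arguments only, per the mitigation above."""

    def __init__(self, **kwargs):
        self.msg = RunModelMsg(**kwargs)


event = RunModel(model_id="model.proj.a", run_status="success")
```

Call sites construct `RunModel(...)` with keywords and never touch the generated class directly, keeping the protobuf awkwardness isolated.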

Advantages of Google protobuf:

* A message can be constructed from a dictionary of all message values. With betterproto you had to pre-construct nested message objects, which kind of forced you to sprinkle generated message objects through the codebase.
* The Struct support works really well.
* Type errors are caught much earlier and more consistently. betterproto would accept fields of the wrong types, which was sometimes caught on serialization to a dictionary, sometimes not until serialization to a binary string, and sometimes not at all.

Disadvantages of Google protobuf:

* You can't just set nested message objects; you have to use `CopyFrom`. Just code, again.
* If you try to stringify parts of the message (like in the constructed event message) it outputs in a bizarre "user friendly" format. Really bad for Struct, in particular.
* Python messages aren't really Python. You can't expect them to *act* like normal Python objects. So they are best kept isolated to the logging code only.
* As part of being not-really-Python, you can't use added classes to act like flags (Cache, NoFile, etc.), since you can only use the bare generated messages to construct other messages.

@@ -1,3 +0,0 @@

The events outlined here exist to support "very very old versions of dbt-core, which expected to look directly at the HEAD branch of this github repo to find validation schemas".

Eventually these should go away (see https://github.com/dbt-labs/dbt-core/issues/7228).

@@ -1,10 +0,0 @@

{
  "type": "object",
  "title": "invocation_env",
  "description": "DBT invocation environment type",
  "properties": {
    "environment": {
      "type": "string"
    }
  }
}

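The tracking schemas in this directory are flat and simple. A minimal, stdlib-only sketch of checking a payload against one of them (illustrative only; a real implementation would use a full JSON Schema validator):

```python
import json

# The invocation_env schema from above, reduced to what the check needs.
SCHEMA = json.loads("""
{
  "type": "object",
  "title": "invocation_env",
  "properties": {"environment": {"type": "string"}}
}
""")


def conforms(payload: dict, schema: dict) -> bool:
    """Check declared property types only; not full JSON Schema semantics."""
    type_map = {"string": str, "number": (int, float), "object": dict}
    for name, spec in schema["properties"].items():
        if name in payload and not isinstance(payload[name], type_map[spec["type"]]):
            return False
    return True


print(conforms({"environment": "ci"}, SCHEMA))  # True
print(conforms({"environment": 42}, SCHEMA))    # False
```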
@@ -1,43 +0,0 @@

{
  "type": "object",
  "title": "invocation",
  "description": "Schema for a dbt invocation",
  "properties": {
    "project_id": { "type": "string" },
    "user_id": { "type": "string" },
    "invocation_id": { "type": "string" },
    "command": { "type": "string" },
    "command_options": { "type": "string" },
    "progress": { "type": "string", "enum": ["start", "end"] },
    "version": { "type": "string" },
    "remote_ip": { "type": "string" },
    "run_type": { "type": "string", "enum": ["dry", "test", "regular"] },
    "result_type": { "type": "string", "enum": ["ok", "error"] },
    "result": { "type": "string" }
  }
}

@@ -1,16 +0,0 @@

{
  "type": "object",
  "title": "platform",
  "description": "Schema for a dbt user's platform",
  "properties": {
    "platform": { "type": "string" },
    "python": { "type": "string" },
    "python_version": { "type": "string" }
  }
}

@@ -1,35 +0,0 @@

{
  "type": "object",
  "title": "run_model",
  "description": "Schema for the execution of a model",
  "properties": {
    "index": { "type": "number" },
    "total": { "type": "number" },
    "execution_time": { "type": "number", "multiple_of": 0.01 },
    "run_status": { "type": "string" },
    "run_skipped": { "type": "string" },
    "run_error": { "type": "string" },
    "model_materialization": { "type": "string" },
    "model_id": { "type": "string" },
    "hashed_contents": { "type": "string" }
  }
}

@@ -1,118 +0,0 @@

# Performance Regression Testing

## Attention!

PLEASE READ THIS README IN THE MAIN BRANCH

The performance runner is always pulled from main regardless of the version being modeled or sampled. If you are not on the main branch, this information may be stale.

## Description

This test suite samples the performance characteristics of individual commits against performance models for prior releases. Performance is measured in project-command pairs, which are assumed to conform to a normal distribution. The sampling and comparison are efficient enough to run against PRs.

This collection of projects and commands should expand over time, reflecting user feedback about poorly performing projects, to protect against poor performance in those scenarios in future versions.

Here are all the components of the testing module:

- dbt project setups that are known performance bottlenecks, which you can find in `/performance/projects/`, and a runner written in Rust that runs specific dbt commands on each of the projects.
- Performance characteristics, called "baselines", from released dbt versions in `/performance/baselines/`. Each branch will only have the baselines for its ancestors, because when we compare samples, we compare against the latest baseline available in the branch.
- A GitHub action for modeling the performance distribution of a new release: `/.github/workflows/model_performance.yml`.
- A GitHub action for sampling the performance of dbt at your commit and comparing it against a previous release: `/.github/workflows/sample_performance.yml`.

At this time, the biggest risk in the design of this project is how to account for the natural variation of GitHub Action runs. Typically, performance work is done on dedicated hardware to eliminate this factor. However, there are ways to integrate the variation into observation tools if it can be measured.

## Adding Test Scenarios

A clear process for maintainers and community members to add new performance testing targets will exist after the next stage of the test suite is complete. For details, see #4768.

## Investigating Regressions

If your commit has failed one of the performance regression tests, it does not necessarily mean your commit has a performance regression. It means the observed runtime value was so much slower than the expected value that it was unlikely to be random noise. If it is not due to random noise, this commit contains the code that is causing the performance regression. However, it may not be the commit that introduced that code: the offending code may have been introduced in an earlier commit that passed due to natural variation in sampling. When investigating a performance regression, start with the failing commit and work your way backwards.

Here's an example of how this could happen:

```
Commit
A <- last release
B
C <- perf regression
D
E
F <- the first failing commit
```

- Commit A is measured to have an expected value for one performance metric of 30 seconds, with a standard deviation of 0.5 seconds.
- Commit B doesn't introduce a performance regression and passes the performance regression tests.
- Commit C introduces a performance regression such that the new expected value of the metric is 32 seconds, with the standard deviation still at 0.5 seconds. We don't know this, because estimating the whole performance distribution on every commit is far too much work. Commit C passes the performance regression test because we happened to sample a value of 31 seconds, which is within our threshold for the original model. That sample is also only 2 standard deviations away from the actual performance model of commit C, so while it won't be a common situation, it is expected to happen sometimes.
- Commit D samples a value of 31.4 seconds and passes.
- Commit E samples a value of 31.2 seconds and passes.
- Commit F samples a value of 32.9 seconds and fails.

Because these performance regression tests are non-deterministic, it will frequently be possible to rerun the test on a failing commit and get it to pass. The more often we do this, the farther down the commit history we punt detection.

If your PR is against `main`, your commits will be compared against the latest baseline measurement found in `performance/baselines`. If a commit needs to be backported, that PR will be against the `.latest` branch and will also compare against the latest baseline measurement found in `performance/baselines` on that branch. These two versions may be the same or they may be different. For example, if the latest version of dbt is v1.99.0, the performance sample of your PR against main will compare against the baseline for v1.99.0. When those commits are backported to `1.98.latest`, they will be compared against the baseline for v1.98.6 (or whatever the latest is at that time). Even if the compared baseline is the same, a different sample is taken for each PR, so while it should be rare, it is possible for a performance regression to be detected in only one of the two PRs, due to variation in sampling.

## The Statistics

Particle physicists need to be confident in declaring new discoveries, snack manufacturers need to be sure each individual item is within the regulated margin of error for nutrition facts, and weight-rated climbing gear needs to be produced so that you can trust your life to every unit that comes off the line. All of these use cases rely on the same kind of math: sigma-based p-values. This section peels apart that math with the help of a physicist and walks through how we apply this approach to performance regression testing in this test suite.

You are likely familiar with forming a hypothesis of the form "A and B are correlated", which is known as _the research hypothesis_. It follows that the hypothesis "A and B are not correlated" is also relevant; it is known as _the null hypothesis_. When looking at data, we commonly use a _p-value_ to determine the significance of the data. Formally, a _p-value_ is the probability of obtaining data at least as extreme as the ones observed, if the null hypothesis is true. To refine this definition, the experimental particle physicist [Dr. Tommaso Dorigo](https://userswww.pd.infn.it/~dorigo/#about) has an excellent [glossary](https://www.science20.com/quantum_diaries_survivor/fundamental_glossary_higgs_broadcast-85365) of these terms that helps clarify: "'Extreme' is quite tricky instead: it depends on what is your 'alternate hypothesis' of reference, and what kind of departure it would produce on the studied statistic derived from the data. So 'extreme' will mean 'departing from the typical values expected for the null hypothesis, toward the values expected from the alternate hypothesis.'" In the context of performance regression testing, our research hypothesis is that "after commit A, the codebase includes a performance regression", which means we expect the runtime of our measured processes to be _slower_, not faster, than the expected value.

Given this definition of p-value, we need to explicitly call out the common tendency to apply _probability inversion_ to our observations. To quote [Dr. Tommaso Dorigo](https://www.science20.com/quantum_diaries_survivor/fundamental_glossary_higgs_broadcast-85365) again, "If your ability on the long jump puts you in the 99.99% percentile, that does not mean that you are a kangaroo, and neither can one infer that the probability that you belong to the human race is 0.01%." Using our previously defined terms, the p-value is _not_ the probability that the null hypothesis _is true_.

This brings us to calculating sigma values. Sigma refers to the standard deviation of a statistical model, which is used as a measurement of how far away an observed value is from the expected value. When we say that we have a "3 sigma result", we are saying that if the null hypothesis is true, this is a particularly unlikely observation, not that the null hypothesis is false. Exactly how unlikely depends on what the expected values from our research hypothesis are. In the context of performance regression testing, if the null hypothesis is false, we are expecting the results to be _slower_ than the expected value, not _slower or faster_. Looking at the normal distribution below, we can see that we only care about one _half_ of the distribution: the half where the values are slower than the expected value. This means that when we're calculating the p-value, we are not including both sides of the normal distribution.



Because of this, the following table describes the significance of each sigma level for our _one-sided_ hypothesis:

| σ   | p-value        | scientific significance |
| --- | -------------- | ----------------------- |
| 1 σ | 1 in 6         |                         |
| 2 σ | 1 in 44        |                         |
| 3 σ | 1 in 741       | evidence                |
| 4 σ | 1 in 31,574    |                         |
| 5 σ | 1 in 3,486,914 | discovery               |

When detecting performance regressions that trigger alerts, block PRs, or delay releases, we want to be conservative enough that detections are infrequently triggered by noise, but not so conservative as to miss most actual regressions. This test suite uses a 3 sigma standard, so only about 1 in every 700 runs is expected to fail the performance regression test suite due to expected variance in our measurements.

In practice, the number of performance regression failures due to random noise will be higher, because we are not incorporating the variance of the tools we use to measure, namely GitHub Actions.

### Concrete Example: Performance Regression Detection

The following example data was collected by running the code in this repository in GitHub Actions.

In dbt v1.0.3, we have the following mean and standard deviation when parsing a dbt project with 2000 models:

μ (mean): 41.22<br/>
σ (stddev): 0.2525<br/>

The 2-sided 3 sigma range can be calculated from these two values via:

x < μ - 3 σ or x > μ + 3 σ<br/>
x < 41.22 - 3 * 0.2525 or x > 41.22 + 3 * 0.2525<br/>
x < 40.46 or x > 41.98<br/>

It follows that the 1-sided 3 sigma range for performance regressions is just:<br/>
x > 41.98
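This arithmetic can be sketched in a few lines (the function name is illustrative; the values are taken from the example above):

```python
def regression_threshold(mean: float, stddev: float, sigmas: int = 3) -> float:
    """One-sided upper bound: samples slower than this are flagged."""
    return mean + sigmas * stddev


# Example values from the v1.0.3 baseline above
threshold = regression_threshold(41.22, 0.2525)
print(round(threshold, 2))  # 41.98

# A sampled 42s parse time exceeds the threshold, so it is flagged
print(42.0 > threshold)     # True
```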

If, when we sample a single `dbt parse` of the same project with a commit slated to go into dbt v1.0.4, we observe a 42s parse time, then this observation is so unlikely in the absence of a code-induced performance regression that we should investigate whether there is a performance regression in any of the commits between this failure and the commit where the initial distribution was measured.

Observations with 3 sigma significance that are _not_ performance regressions could be due to observing unlikely values (roughly 1 in every 750 observations), or to variation in the instruments we use to take these measurements, such as GitHub Actions. At this time we do not measure the variation in our instruments to account for it in our calculations, which means failures due to random noise are more likely than they would be if we did.

### Concrete Example: Performance Modeling

Once a new dbt version is released (excluding pre-releases), the performance characteristics of that released version need to be measured. In this repository this measurement is referred to as a baseline.

After dbt v1.0.99 is released, a github action running from `main`, using the latest version of that action, takes the following steps:
- Checks out main for the latest performance runner
- pip installs dbt v1.0.99
- Builds the runner if it's not already in the GitHub Actions cache
- Invokes the performance runner's model subcommand with `./runner model`
- The model subcommand calls hyperfine to run all of the project-command pairs a large number of times (maybe 20 or so) and saves the hyperfine outputs to files in `performance/baselines/1.0.99/`, one file per command-project pair
- The action opens two PRs with these files: one against `main` and one against `1.0.latest`, so that future PRs against these branches will detect regressions against the performance characteristics of dbt v1.0.99 instead of v1.0.98
- The release driver for dbt v1.0.99 reviews and merges these PRs, which is the sole deliverable of the performance modeling work

## Future work

- Pin commands to projects by reading commands from a file defined in the project.
- Add a postgres warehouse to run `dbt compile` and `dbt run` commands.
- Add more projects to test different configurations that have been known performance bottlenecks.
- Account for GitHub Action variation: either measure it, or eliminate it. To measure it, we could set up another action that periodically samples the same version of dbt and use a 7-day rolling variation. To eliminate it, we could run the action using something like [act](https://github.com/nektos/act) on dedicated hardware.
- Build in a git-bisect run to automatically identify the commits that caused a performance regression by modeling each commit's expected value for the failing metric. Running this automatically, or even providing a script to do this locally, would be useful.

performance/baselines/.gitignore
@@ -1 +0,0 @@

# placeholder for baselines directory

@@ -1 +0,0 @@
{"version":"1.2.0","metric":{"name":"parse","project_name":"01_2000_simple_models"},"ts":"2023-05-09T13:49:21.773314639Z","measurement":{"command":"dbt parse --no-version-check --profiles-dir ../../project_config/","mean":44.19299478025,"stddev":0.2429047068802047,"median":44.17483035975,"user":43.4559033,"system":0.5913923200000001,"min":43.81193651175,"max":44.61466355675,"times":[44.597056272749995,43.96855886975,43.90405755675,44.14156308475,44.49939515775,44.11553658675,44.30173547275,43.932534850749995,43.843978513749995,44.08611205475,43.99133546975,44.39880287075,44.20809763475,44.10553540675,43.81193651175,44.24880915975,44.408731260749995,44.61466355675,44.31538149475,44.36607381875]}}
@@ -1 +0,0 @@
{"version":"1.3.0","metric":{"name":"parse","project_name":"01_2000_simple_models"},"ts":"2023-05-05T21:26:14.178981105Z","measurement":{"command":"dbt parse --no-version-check --profiles-dir ../../project_config/","mean":57.34703829679,"stddev":1.264070714183875,"median":57.16122855003999,"user":56.124171495,"system":0.6879409899999999,"min":56.03876437454,"max":62.15960342254,"times":[56.45744564454,56.27775436354,56.50617413654,57.34027474654,57.38757627154,57.17093026654,56.29133183054,56.89527107354,57.48466258854,56.87484084654,57.14306217354,57.13537045454,58.00688797954,57.15152683354,57.65667721054,56.03876437454,57.68217591654,58.03524921154,62.15960342254,57.24518659054]}}
@@ -1 +0,0 @@
{"version":"1.3.4","metric":{"name":"parse","project_name":"01_2000_simple_models"},"ts":"2023-05-05T21:21:13.216166358Z","measurement":{"command":"dbt parse --no-version-check --profiles-dir ../../project_config/","mean":43.251824134715,"stddev":0.2626902769638351,"median":43.195683199465,"user":42.82592822,"system":0.444670655,"min":42.988474644965,"max":44.268850566965,"times":[43.117288670965,43.276664016965,44.268850566965,43.175714899965,43.069990564965,43.353031152965,43.064902203965,43.104385867965,43.228237677965,43.151709868965,43.410496816965,43.139105498965,43.112643799965,43.19391977696501,43.303759563965,43.312242193965,43.197446621965,43.297804568965,42.988474644965,43.269813715965]}}
@@ -1 +0,0 @@
{"version":"1.4.0","metric":{"name":"parse","project_name":"01_2000_simple_models"},"ts":"2023-05-05T16:07:45.035878166Z","measurement":{"command":"dbt parse --no-version-check --profiles-dir ../../project_config/","mean":53.44691701517499,"stddev":1.9217109918352029,"median":54.31170254667501,"user":52.633288745,"system":0.636774385,"min":49.603911921675,"max":55.743179437675,"times":[55.021354517675,54.25164864567501,54.975722432675,52.635067164675,53.571658032675,51.382873180675,50.043912339675,49.603911921675,51.132099650675,54.615839302675,52.565473620675,51.152761771675,52.459746128675,55.743179437675,54.936982552675005,54.37175644767501,54.852100134675,55.048404930675,55.185989433675005,55.387858656675]}}
@@ -1 +0,0 @@
{"version":"1.4.1","metric":{"name":"parse","project_name":"01_2000_simple_models"},"ts":"2023-05-05T21:23:11.574110714Z","measurement":{"command":"dbt parse --no-version-check --profiles-dir ../../project_config/","mean":51.81799889823499,"stddev":0.49021827459557155,"median":51.877185231885,"user":50.937133405,"system":0.66050657,"min":50.713426685384995,"max":52.451290474385,"times":[51.868264556385,51.967490942385,52.321507218385,51.886105907385,52.451290474385,52.283930937385,51.818989812385,51.978303421385,51.213362656385,50.713426685384995,52.258454610385,51.758877730384995,51.082508232384995,51.128473688385,51.631421367384995,52.194084467385,52.240100726384995,51.64952270338499,51.49970049638499,52.414161330385]}}
@@ -1 +0,0 @@
{"version":"1.4.6","metric":{"name":"parse","project_name":"01_2000_simple_models"},"ts":"2023-05-05T21:31:07.688350571Z","measurement":{"command":"dbt parse --no-version-check --profiles-dir ../../project_config/","mean":71.63662348534498,"stddev":1.0486666901040516,"median":71.48043032754501,"user":70.594864785,"system":0.7236668199999998,"min":70.179068043545,"max":73.74777047454499,"times":[70.885587350545,71.733729563545,71.902222862545,70.362755346545,70.179068043545,70.902001253545,72.798824228545,73.209881293545,70.520832511545,71.143232155545,71.623572279545,71.337288375545,71.763221403545,70.426712498545,70.82376365454499,72.50315140754499,71.161477365545,72.747252973545,73.74777047454499,72.96012466454499]}}
(deleted image: 42 KiB)
@@ -1 +0,0 @@
id: 5d0c160e-f817-4b77-bce3-ffb2e37f0c9b
@@ -1,12 +0,0 @@
default:
  target: dev
  outputs:
    dev:
      type: postgres
      host: localhost
      user: dummy
      password: dummy_password
      port: 5432
      dbname: dummy
      schema: dummy
      threads: 4
@@ -1,13 +0,0 @@
name: 'my_new_package'
version: 1.0.0
config-version: 2
profile: 'default'
model-paths: ["models"]

target-path: "target"
clean-targets:
  - "target"
  - "dbt_modules"

models:
  materialized: view
@@ -1 +0,0 @@
select 1 as id
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_0
version: 2
@@ -1,3 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_1
version: 2
@@ -1,3 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_2
version: 2
@@ -1,3 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_3
version: 2
@@ -1,3 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_4
version: 2
@@ -1,5 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
union all
select * from {{ ref('node_2') }}
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_5
version: 2
@@ -1,5 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
union all
select * from {{ ref('node_3') }}
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_6
version: 2
@@ -1,7 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
union all
select * from {{ ref('node_3') }}
union all
select * from {{ ref('node_6') }}
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_7
version: 2
@@ -1,7 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
union all
select * from {{ ref('node_3') }}
union all
select * from {{ ref('node_6') }}
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_8
version: 2
@@ -1,9 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
union all
select * from {{ ref('node_3') }}
union all
select * from {{ ref('node_6') }}
union all
select * from {{ ref('node_7') }}
@@ -1,11 +0,0 @@
models:
- columns:
  - name: id
    tests:
    - unique
    - not_null
    - relationships:
        field: id
        to: ref('node_0')
  name: node_9
version: 2
@@ -1,9 +0,0 @@
select 1 as id
union all
select * from {{ ref('node_0') }}
union all
select * from {{ ref('node_3') }}
union all
select * from {{ ref('node_6') }}
union all
select * from {{ ref('node_8') }}