* Add event name to `message` of recently added deprecations
* Make it harder to not supply the event name to deprecation messages
* Add changie doc
* Fixup import naming
* initial hatch implementation
* cleanup docs
* replacing makefile
* cleanup hatch commands to match adapters
reorganize more to match adapters setup
script comment
don't pip install
fix test commands
* changelog
improve changelog
* CI fix
* fix for env
* use a standard version file
* remove odd license logic
* fix bumpversion
* remove sha input
* more cleanup
* fix legacy build path
* define version for pyproject.toml
* use hatch hook for license
* remove tox
* ensure tests are split
* remove temp file for testing
* explicitly match old version in pyproject.toml
* fix up testing
* get rid of bumpversion
* put dev_dependencies.txt in hatch
* setup.py is now dead
* set python version for local dev
* local dev fixes
* temp script to compare wheels
* parity with existing wheel builds
* Revert "temp script to compare wheels"
This reverts commit c31417a092.
* fix docker test file
* Allow dbt deps to run when vars lack defaults in dbt_project.yml
* Added Changelog entry
* fixed integration tests
* fixed mypy error
* Fix: Use strict var validation by default, lenient only for dbt deps to show helpful errors
* Fixed Integration tests
* fixed nit review comments
* addressed review comments and cleaned up tests
* Add test checking that `NoNodesForSelectionCriteria` is only fired once per invocation
* Stop emitting `NoNodesForSelectionCriteria` three times during `build` command
* update changelog
---------
Co-authored-by: Michelle Ark <MichelleArk@users.noreply.github.com>
* Explicitly support functions during partial parsing
* Emit a `Note` event when partial parsing is skipped due to there being no changes
* Begin testing partial parsing support of function nodes
* Add changie doc
* Move test_pp_functions to use `EventCatcher` from dbt-common
* Remove from `functions` instead of `nodes` during partial parsing function deletion
* Fix the partial parsing scheduling of function sql and yaml files
Previously we were treating the partial parsing scheduling of function
files as if they were only defined by YAML files. However functions consist
of a "raw code file" (typically a .sql file) and a YAML file. We needed
to update the deletion handling + scheduling of functions during partial
parsing to act more similar to "mssat" files in order to achieve this.
This work was primarily done agentically, but then simplified by me
afterwards.
* Test that changing the alias of a function doesn't require reparsing of the downstream nodes that reference it
* Add test to check that functions with non-default schemas get their schemas created
* Ensure schemas of function nodes are created when in DAG during `build` command
* Add changie doc for function schema bug fix
* Add tests to check parsing of function argument default values
* Begin allowing the specification of `default_value` on function arguments
* Validate that non-default function arguments don't come _after_ default function arguments
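The ordering rule mirrors Python's own: once an argument has a default, every argument after it must too. A minimal sketch of such a check (class and field names are illustrative, not the actual dbt-core contract):
```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class FunctionArgument:
    # Hypothetical shape of a parsed function argument.
    name: str
    data_type: str
    default_value: Optional[str] = None


def validate_argument_order(arguments: List[FunctionArgument]) -> None:
    # Reject signatures where a non-default argument follows a default one.
    seen_default = False
    for arg in arguments:
        if arg.default_value is not None:
            seen_default = True
        elif seen_default:
            raise ValueError(
                f"Argument '{arg.name}' without a default value cannot "
                "follow arguments that have default values."
            )
```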
* Add changie doc
* Clean up changelog on main
* Bumping version to 1.12.0a1
* Code quality cleanup
* Update CHANGELOG.md
---------
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* Propagate measure.config to metric.config when specified during create_metric:True
* changelog
* Update the metric.expr to be populated correctly according to DSI rules
* convert setup.py to pyproject.toml
* move dev requirements into pyproject.toml
* with setup.py gone we can install from root
* lint
clearly state intention to remove
* convert precommit to use dev deps
* consolidate version to pyproject.toml
* editable req
get rid of editable-req
* docs updates
* tweak configs for builds
* fix script
* changelog
* fixes to build
* revert unnecessary changes
more simplification
revert linting
more simplification
fix
don’t need it
* Update `setup.py` to drop support for python 3.9
* Update github issue templates to not use python 3.9 as an example
* Update github workflows to no longer depend on or test python 3.9
* Drop python 3.9 from the test dockerfile
* Update `CONTRIBUTING.md` to correctly list what python versions we test
* Update comment about some code specifically needed for a python 3.9.7 issue
* Update pre-commit python version comment
* Add changie doc
* Update imports from click as upgrading to python 3.10 changed some click items
* Add test to check that python UDFs can be parsed
* Add `entry_point` and `runtime_version` to function node config
These two configs are required for python UDFs in some warehouses and
may also be required for other UDF languages moving forward. The specific
adapters implementation will enforce the requirement. By default both
configs will be `None` unless set.
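A rough illustration of the shape this config might take (a hypothetical fragment, not the actual dbt-core dataclass):
```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FunctionNodeConfig:
    # Both fields default to None; enforcement happens in the specific
    # adapter implementations, not in core.
    entry_point: Optional[str] = None      # e.g. the python callable to invoke
    runtime_version: Optional[str] = None  # e.g. "3.11" for a python UDF runtime
```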
* Begin searching for `.py` files in `functions` directory
* Switch to using `SimpleParser` for functions
Previously we were using `SimpleSQLParser` and we were _only_ parsing
SQL files. However, we're now also parsing python files for functions.
As such it makes sense to switch to the `SimpleParser`. Functionally there
is no change because we re-added the `parse_file` override that `SimpleSQLParser`
had (there was nothing SQL-specific about it). Hence this is mostly a
symbolic change.
* Add changie doc
* Add test which checks that function nodes can be configured from dbt_project.yml
* Support setting function node configs from dbt_project.yml
* add changie doc
* Fix unit tests to expect `functions` as part of project
* Update function node tests to look for `type` on function config
* Update `function` node to have `type` on config
* Update parsing of `function` nodes to expect `type` on the config
* Add changie doc
* Add test to check that a function's volatility is configurable
* Define the `FunctionVolatility` enum type
* Add `volatility` as a configuration on function nodes
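A minimal sketch of what such an enum could look like (member names are assumptions based on common warehouse UDF volatility levels):
```python
from enum import Enum


class FunctionVolatility(str, Enum):
    VOLATILE = "volatile"      # may return different results on every call
    STABLE = "stable"          # stable within a single statement
    IMMUTABLE = "immutable"    # a pure function of its inputs
```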
* Add changie doc
* Ensure jsonschema validation tests aren't skipping validation because postgres isn't technically supported
* Blanket accept `functions` as top level yaml key as temp fix
For the moment we can't sync over the full jsonschema from fusion;
this is a stop gap simply so that we don't raise deprecation
warnings if people start specifying functions.
* Move model column `meta` and `tags` into the column's config in happy path fixture
* Test that functions work properly when unit testing models
* Ensure that functions properly get propagated to the `manifest` and `depends_on` of the `unit_test` node
* Update comment about `RuntimeUnitTestFunctionResolver`
* Add changie doc
* Add test to ensure that using a function with `--empty` works
* Ensure relations for functions are created with a `type` set to `function`
Previously on creation of function relations we weren't passing a `type`
value. This was problematic because in dbt-adapters we call `is_function`
(which uses the relation `type`) to determine whether a relation can be
filtered when filtering options (like `empty` or `event_time`) are present.
Because `type` wasn't set for function relations, `is_function` would
return `False` and thus, in the presence of a filter, we would attempt to
filter it. This would raise an error because functions can't be filtered.
Setting the type on the relation solves the issue.
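A self-contained sketch of the failure mode (the `Relation` class below is a stand-in for the adapter's relation type, not the real API):
```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Relation:
    identifier: str
    type: Optional[str] = None

    def is_function(self) -> bool:
        return self.type == "function"


def can_filter(relation: Relation) -> bool:
    # Filtering options like --empty or event_time must skip functions.
    return not relation.is_function()


untyped = Relation("my_udf")                 # pre-fix: type omitted
typed = Relation("my_udf", type="function")  # post-fix: type set explicitly

assert can_filter(untyped)    # wrongly considered filterable -> raised errors
assert not can_filter(typed)  # correctly excluded from filtering
```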
* add changie doc
* Add `FunctionType` enum
* Add `type` property to `Function` resource
* Add `type` property to `ParsedFunctionPatch` and `UnparsedFunctionUpdate`
* Begin populating a function's `type` during patch parsing
* Regenerate v12 manifest to include function `type` property
* Add changie doc
* Begin testing that function node `type` property is settable and accessible
* Move comment about triggering the PathEncoder back to its proper place
* Allow for the defining of basic SQL UDFs (#11957)
* Add initial definition of the `Function` resource
* Add FunctionNode definition to graph contracts
* Add test which checks whether basic UDFs can be parsed
This test fails right now, which is intentional. This is test driven
development. Now I do work to make the test pass :)
* Add basic function sql parser for UDFs, and plumb it through parsing code paths
* Begin populating `functions` in the ref lookup
* Begin patching `function` nodes with their yaml definitions
Of note, presently `arguments` and `return_type` aren't populating properly.
It's likely that we'll have to do additional work to the FunctionPatchParser
to get this _fully_ working.
* Increase responsibility of FunctionPatchParser to handle entire `parse_patch` of function nodes
* Fix testing suite to accommodate addition of new `function` node
* Add changie doc for new `function` node type
* Minor refactoring of `NodePatchParser.parse_patch` to reduce code duplication in `FunctionPatchParser`
* Ability to list and select function nodes (#11968)
* Begin listing `function` nodes in `list` command
* Add ability to run `list` specifying the `function` resource type
* Function nodes support selection via: name, file path, and resource type
* Add changie doc
* Core handles lifecycle of function nodes (#12008)
* Add basic test to check that UDFs get created in data warehouse
* Add functions to the runner map of the operation
* Add basic stub of `FunctionRunner` modeled after `SeedRunner`
* Begin using `FunctionRunner` for running `function` nodes
* Add stubbing of things to implement on `FunctionRunner`
* Initial implementation of execution of function nodes
This is largely a copy of the execution of model nodes (in run.py) but
with some abstractions into helper methods to make the body of the
`execute` function easier to follow. Of note, right now this appears to
be getting the incorrect macro from the adapter. This is likely because
for some reason the node's materialization config is being set to `view`
by default.
* Ensure parsed function nodes get the correct materialization type
* Begin generating context for `function` materialization macro
* Stub out adapter response in node result as it was causing some failures
* Correct the adapter response in the run result for functions
* Begin logging `LogFunctionResult` event for completed function nodes
* Add changie doc
* Temp update dev reqs to point at branch of dbt-adapters
* Add test `LogFunctionResult` event to serialization test
* Add `function` nodes to the `WritableManifest`
* Fix tests
* Remove no longer relevant `TODO`s from `function.py`
* Add a new macro `function()` to the jinja context for using functions (#12031)
* Update function tests to look for `functions` under `manifest.functions`
* Begin storing function nodes in `Manifest.functions` instead of `Manifest.nodes`
* Ensure function nodes are still included in nodes to run during `build`
* Add ability to lookup functions on the manifest
* Update patch parsing of function YAML files now that functions live on `Manifest.functions`
* Mark function nodes as no longer refable
* Ensure function nodes are still selectable
* Add `function` macro!
* Ensure functions nodes are correctly linked in the DAG
* Update jinja context tests to expect `function` macro to exist
* Fix unit tests in test suite to expect function nodes
* Add changie doc
* regen v12.json jsonschema
* Fix test `TestVerifyArtifacts::test_run_and_generate`
* Fix test `TestVerifyArtifactsReferences::test_references`
* Fix test `TestVerifyArtifactsVersions::test_versions`
* Regen manifest artifact for `TestPreviousVersionState::test_compare_state_current`
* Update `_iterate_selected_nodes` to support function nodes
* Ensure we process node functions so that they get added to the `depends_on`
* Take functions into account for state modified
* Regen data for `TestModifiedStateSchemaEvolution::test_modified_state_schema_evolution` test
* Default `functions` property on `WritableManifest` to a dict
I'm not sure if this is actually how we want to do this. However, without
doing this the `WritableManifest` will break on loading of older manifests
that don't have `functions`. The alternative to this would be to bump
the schema version (v12 -> v13) and create an upgrade in `upgrade_manifest.py`.
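A hypothetical fragment showing the idea (not the real `WritableManifest` definition):
```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class WritableManifest:
    # Defaulting `functions` keeps manifests serialized before the field
    # existed loadable without bumping the schema version to v13.
    functions: Dict[str, Any] = field(default_factory=dict)
```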
* Update UDF tests to use a more general purpose function
* Add tests ensuring UDFs can be used in models and `--inline` queries
* Correct `ParseFunctionResolver` so that the name isn't added twice to the function args spec
* Drop `functions` from `Exposure` and `Metric` definitions
* Regen v12 manifest schema
* Remove unnecessary string interpolation
* Point dev reqs back to dbt-adapters@main
* Empty commit
* Increase shared memory size for postgres docker container
I recently started getting errors that look like
```
E dbt_common.exceptions.base.DbtDatabaseError: Database Error
E could not resize shared memory segment "/PostgreSQL.3814850474" to 2097152 bytes: No space left on device
```
At first I thought this was a lack of memory, disk space, or ulimit file descriptors. However
increasing all of those things did not solve the problem. I eventually found, by exec-ing into
the container and running `df -h /dev/shm && ls -lh /dev/shm` that the container only had 64MB
of memory available to it. This change increases the memory available to the container to 1GB,
which resolved the issue.
* Use `docker compose` instead of `docker-compose`
The latter was Docker Compose v1, which no longer works. Use `docker compose` instead.
* Only run homebrew postgres in `setup_db.sh` if `SKIP_HOMEBREW` is not passed
Our github actions use homebrew, but our local dev uses docker. When we
were doing local development and running `make setup-db` suddenly there would
be _two_ postgres instances running. One via homebrew, and another in docker.
This was breaking the setup. Now when running `make setup-db` we skip the
homebrew relevant portions of `setup_db.sh`.
* Set more PG environment variables in `setup_db.sh`
* fix: Properly quote event_time column names in sample mode filters
When using the --sample flag with models that have camel case or
spaced column names as their event_time field, the generated SQL
would fail because column names weren't properly quoted.
This fix introduces a robust quoting system that:
- Checks column-level quote configuration first (highest precedence)
- Falls back to source-level quoting settings
- Uses the existing Column class for proper quote handling
- Centralizes the logic in a dedicated method to eliminate duplication
- Ensures sample mode works with PostgreSQL and other databases that
require quoted identifiers for column names with spaces or special characters
Fixes issue where --sample flag fails with camel case or spaced
event_time column names.
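A minimal sketch of the precedence described above (function and parameter names are illustrative, not the actual method):
```python
from typing import Optional


def resolve_event_time_field_name(
    column_name: str,
    column_quote: Optional[bool],
    source_quoting: bool,
    quote_char: str = '"',
) -> str:
    # Column-level quote config wins; otherwise fall back to the
    # source-level setting; otherwise leave the identifier bare.
    should_quote = column_quote if column_quote is not None else source_quoting
    return f"{quote_char}{column_name}{quote_char}" if should_quote else column_name


# e.g. a camelCase event_time column on a source with quoting enabled:
assert resolve_event_time_field_name("eventTime", None, True) == '"eventTime"'
```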
* returning the same path that was used earlier for the event_time field
* adding changelog
* verify cla agreement
* test: Add comprehensive tests for _resolve_event_time_field_name method
This commit adds extensive test coverage for the _resolve_event_time_field_name
method to address the PR review feedback requesting tests.
Changes:
- Add 28 parametrized test cases covering all quoting scenarios
- Test column-level vs source-level quote precedence
- Test edge cases: missing columns, empty columns dict, no quoting attributes
- Test camel case, snake case, and spaced column names
- Test both quoted and unquoted column name scenarios
- Improve method robustness with better error handling
The tests ensure the method correctly handles:
- Column-level quote settings taking precedence over source-level
- Proper fallback to source-level quoting when column-level is not set
- Edge cases where columns don't exist or have no quoting attributes
- Various column name formats (simple, camelCase, snake_case, spaced)
Fixes: Addresses PR review feedback requesting comprehensive test coverage
* style: Apply code formatting from pre-commit hooks
- Apply black formatting to providers.py and test_providers.py
- Fix trailing whitespace issues
- Add proper type guards for event_time attribute access
- Ensure all tests continue to pass after formatting changes
* Create custom hook for checking for improper imports of artifact resources
* Fix return value of `has_bad_artifact_resource_imports.py::main`
* Regex match versioned resource imports and give import in pre-commit error
* (Tidy First): Fix imports of artifact resources to not import direct versioned resources
* Add changie doc
* feat: support nested key traversal in dbt list output
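A rough sketch of what nested key traversal might look like (the dotted-key syntax is an assumption):
```python
from typing import Any, Mapping


def traverse(record: Mapping[str, Any], dotted_key: str) -> Any:
    # Resolve a key like "config.materialized" against a node dict.
    value: Any = record
    for part in dotted_key.split("."):
        value = value[part]
    return value


node = {"name": "my_model", "config": {"materialized": "view"}}
assert traverse(node, "config.materialized") == "view"
```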
* Update version for libpq-dev in Dockerfile
The previous version we had for libpq-dev stopped being listed. As such
we need to install a version that is still listed; hence we now install
version 13.22-0+deb11u1.
* Fix `FromAsCasing` warning in Docker file
Our docker file was raising the warning
`FromAsCasing: 'as' and 'FROM' keywords' casing do not match (line 27)`
because we were using `FROM` and `as`, and docker wants those words
to have the same casing. As such, the `as` instances have become `AS`.
* Add changie doc
* Pull in latest jsonschemas, primarily for improved SL definitions
* Improve metric definitions in happy path test fixture to be more expansive
* Add changie doc
* Fix test_list to know about new happy path fixture metrics
* Make `GenericJSONSchemaValidationDeprecation` a "preview" deprecation
Making the deprecation a preview will:
1. Remove it from the summary
2. Emit it as a Note event instead of the actual deprecation event
a. This does mean it'll still be in the logs (but as info level instead of warning)
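A sketch of the preview behavior, with stdlib logging standing in for dbt's event system:
```python
import logging

logger = logging.getLogger("dbt.deprecations")


def fire_deprecation(name: str, message: str, is_preview: bool) -> None:
    if is_preview:
        # Preview deprecations surface as info-level notes and are
        # excluded from the deprecation summary.
        logger.info("Note: %s", message)
    else:
        # Everything else fires as a normal warning-level deprecation.
        logger.warning("[%s] %s", name, message)
```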
* Update message of `GenericJSONSchemaValidationDeprecation` to state it's only possibly a deprecation
* Add changie doc
* fix GenericJSONSchemaValidationDeprecation related tests
* Add more details to `GenericJSONSchemaValidationDeprecation` message
* Fix tests related to GenericJSONSchemaValidationDeprecation
* Bump dbt-protos dep min to get new env var namespace deprecation event
* Define new EnvironmentVariableNamespaceDeprecation event in core
* Add new deprecation class for EnvironmentVariableNamespaceDeprecation
* Bump dbt-common dep min to get new env var prefix definition
* Add new `env_vars` module with function for validating dbt engine env vars
* Add changie doc
* Begin keeping a list of env vars associated with cli params
* Begin validating that only allowed engine environment variables exist
* Add some extra engine env vars found throughout the project to the known list
* Begin cross propagating dbt engine env vars with old names
If the old env var name is present, and the new one is not, set the
new one to have the value of the old one. Else, if the new one is set,
set/override old name to have the value of the new one.
There are some drawbacks to this approach. Namely, click only validates
environment variable types for the environment variables it is aware of.
Thus by using the new environment variable naming scheme for existing
environment variables (not newly added ones), we actually lose type guarantees.
This might require a rework.
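The cross-propagation rule sketched in Python (the env var names and the helper are illustrative):
```python
import os


def cross_propagate(old_name: str, new_name: str) -> None:
    # The new name wins when set; otherwise a set old name is
    # copied over to the new name.
    if os.getenv(new_name) is not None:
        os.environ[old_name] = os.environ[new_name]
    elif os.getenv(old_name) is not None:
        os.environ[new_name] = os.environ[old_name]


# e.g. a hypothetical renaming under the new namespace:
cross_propagate("DBT_PROFILES_DIR", "DBT_ENGINE_PROFILES_DIR")
```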
* Add test for validate_engine_env_vars method
* Add unit test ensuring new engine env vars get added correctly
* Add integration test for environment variable namespace deprecation
* Move logic for propagating engine env vars to pre-flight env var setting
Previously we were attempting to set it on the flags context, but that is
_not_ the environment variable context. Instead what appears to happen is
that the environment variable context is loaded, click takes this into
consideration, and then the flags are set from click's understanding of
passed cli params + env vars.
* Get the env vars from the invocation context in `validate_engine_env_vars`
* Move `_create_engine_env_var` to `__init__` of `EngineEnvVar` data class
* Fix error type in __init__ of EngineEnvVar dataclass
* Correct grammar of EnvironmentVariableNamespaceDeprecation message
* Upgrade to DSI 0.9.0
Note this new version has some breaking changes (changes to class names). This won't impact semantic manifest parsing. The changes in the new version will be used to support order_by and limit on saved queries.
* Changelog
* Update test saved query
* Improve deprecation message for SourceOverrideDeprecation
* Move SourceOverrideDeprecation to jsonschema validation code path
* Update test for checking SourceOverrideDeprecation
* Update dbt_project.yml jsonschema spec to handle nested config defs
Additionally adds some more cloud configs
* Update schema files jsonschema definition to not have `overrides` for sources
Additionally add some cloud definitions
* Add changie doc
* Update happy_path fixture to include nested config specifations in dbt_project.yml
* First draft of SourceOverrideDeprecation warning.
* Refinements and test
* Back out unneeded change
* Fix unit test.
* add changie doc
* Bump minimum dbt-protos to 1.0.335
---------
Co-authored-by: Quigley Malcolm <quigley.malcolm@dbtlabs.com>
* Stop dynamically setting ubuntu version for `main.yml` and structured logging actions
These actions are important to run on community PRs. However these workflows
use `on: pull_request` instead of `on: pull_request_target`. That is intentional,
as `on: pull_request` doesn't give access to variables or secrets, and we need
to keep it that way for security purposes. But these actions were trying to access
a variable, which they don't have access to. This was a nicety for us, because
sometimes we'd delay moving to github's `ubuntu-latest`. However, the security
concern is more important, and thus we lose the variable for these workflows.
* Change `runs_on` of `artifact-reviews.yml`
* Stop dynamically setting mac and windows versions in main.yml
* Revert "bump dbt-common (#11640)"
This reverts commit c6b7655b65.
* update freshness model config handling
* lower case all columns when processing unit test results
* add changelog
* swap .columns for .column_names
* use rename instead of select api for normalizing agate table column casing
* Add helper to validate model configs via jsonschema
* Store jsonschemas as module vars instead of reloading every time
Every time we were calling a jsonschema validation, we were _reloading_
from file the underlying jsonschema. As a one off, this isn't too costly.
However, for large projects it starts to add up. By only loading each json
schema once we can save a lot of time. Calling one of the functions which
loads a jsonschema 10,000 times was costing ~3.7215 seconds. By switching
to this module var paradigm we reduced that to ~0.3743 seconds.
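The caching described above, sketched with `functools.lru_cache` (equivalent in effect to a module-level variable):
```python
import json
from functools import lru_cache


@lru_cache(maxsize=None)
def load_schema(path: str) -> dict:
    # Each jsonschema file is read and parsed once per process instead
    # of on every validation call; callers must treat the result as
    # read-only since the cached dict is shared.
    with open(path, "r") as f:
        return json.load(f)
```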
* Begin validating configs from model `.sql` files
It was a bit of a hunt to figure out where to do this. We couldn't do
the validating in `calculate_node_config` because that function is called
4 times per node (which is an issue in itself, but out of scope for this
work). We also couldn't do the validation where `_config_call_dict` is set
because it turns out there are multiple avenues for setting
`_config_call_dict`, which is a fun rabbit hole.
* Ensure .sql configs are validated only once
It turns out that `update_parsed_node_config` can potentially be
called twice per model. It'll be called from either `ModelParser.render_update`
or `ModelParser.populate`, and it can additionally be called from
`PatchParser.patch_node_config` if there is a .yml definition for the
model. We only want to validate the config once, and we aren't guaranteed
to have a `PatchParser` if there is no patch for the model. Thus, we've
updated `ModelParser.populate` and `ModelParser.render_update` to
request the config validation (which by default doesn't run unless requested).
* Split out the model config specific validation from general jsonschema validation
We're validating model configs from sql files via a subschema of the main
resources jsonschema, with different case logic for detecting the different
types of deprecation warnings present. Thus `validate_model_config` cannot
call `jsonschema_validate`. We could have had both logic paths exist in
`jsonschema_validate`, but it would have added another layer of if/elses
and bloated the function substantially.
* Handle additional properties of sub config objects
* Give better key path information for .sql config jsonschema issues
* Add tests for validate_model_config
* Add changie doc
* Fix jsonschemas unittests to avoid catching irrelevant issues
* Revert "bump dbt-common (#11640)"
This reverts commit c6b7655b65.
* update freshness model config handling
* lower case all columns when processing unit test results
* add changelog
* swap .columns for .column_names
* Loosen pydantic maximum to <3 (allowing for pydantic 2)
* Add an internal pydantic shim for getting pydantic BaseSettings regardless of pydantic v1 vs v2
* Add changie doc
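A minimal sketch of such a shim (dbt-core's actual module layout may differ):
```python
try:
    # pydantic v1 exposes BaseSettings at the top level
    from pydantic import BaseSettings  # type: ignore
except ImportError:
    # pydantic v2 removed it from the top level, but its bundled v1
    # compatibility namespace still provides the same class
    from pydantic.v1 import BaseSettings  # type: ignore
```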
In 1.10.0 we began utilizing `jsonschema._keywords`. However, the submodule
`_keywords` wasn't added until jsonschema `4.19.1` which came out September
20th, 2023. Our jsonschema requirement was being set transitively via
dbt-common as `>=4.0,<5`. This meant people doing a _non_ fresh install of
dbt-core `1.10.0` could end up with a broken system if their existing
jsonschema dependency was anywhere in the range `>=4.0,<4.19.1`. By bumping the
minimum jsonschema version we ensure that anyone installing dbt-core 1.10.1 will
automatically get their jsonschema updated (assuming they don't have an exclusionary
pin).
* Begin testing that model freshness can't be set as a top level model property
* Remove ability to specify freshness as top level property of models
* Add some comments to calculate_node_config for better readability
* Drop `freshness` as a top level property of models, and let `patch_node_config` handle merging config freshness
Model freshness hasn't been released in a minor release yet, nor been documented. Thus
it is safe to remove the top level property of freshness on models. Freshness will instead
be set, and gotten, from the model config. Additionally our way of calculating the
config model freshness only got the top level `+freshness` from dbt_project.yml (ignoring
any path specific definitions). By instead using the built in `calculate_node_config` (which
is eventually called by `patch_node_config`), we get all path specific freshness config handling
and it also handles the precedence of `dbt_project.yml` specification, schema file specification,
and sql file specification.
* add changie doc
* Ensure source node `.freshness` is equal to node's `.config.freshness`
* Default source config freshness to empty spec if no freshness spec is given
* Update contract tests for source nodes
* Ensure `build_after` is present in model freshness in parsing, otherwise skip freshness definition
* add freshness model config test
* add changelog
---------
Co-authored-by: Colin <colin.rogers@dbtlabs.com>
* Handle explicit setting of null for source freshness config
* Abstract out the creation of the target config
This is useful because it makes that portion of code more re-usable/portable
and makes the work we are about to do easier.
* Fix bug in `merge_source_freshness` where empty freshness was preferred over `None`
The issue was that during merging of freshnesses, an "empty freshness", one
where all values are `None`, was being preferred over `None`. This was
problematic because an "empty freshness" indicates that a freshness was not
specified at that level, while `None` means that the freshness was _explicitly_
set to `None`. As such we should prefer the thing that was specifically set.
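A sketch of the corrected preference, with plain dicts standing in for the FreshnessThreshold objects:
```python
from typing import Optional


def is_empty_freshness(spec: Optional[dict]) -> bool:
    # An "empty" spec has every value set to None: nothing was
    # specified at that level.
    return spec is not None and all(v is None for v in spec.values())


def merge_source_freshness(base: Optional[dict], update: Optional[dict]) -> Optional[dict]:
    if update is None:
        return None  # explicitly disabled at the more specific level
    if is_empty_freshness(update):
        return base  # nothing specified here; keep the broader setting
    return update  # a real spec always wins
```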
* Properly get dbt_project defined freshness and don't merge with schema defined freshness
Previously we were only getting the "top level" freshness from the
dbt_project.yaml. This was ignoring freshness settings for the direct,
source, and table set in the dbt_project.yaml. Additionally, we were
merging the dbt_project.yaml freshness into the schema freshness. Long
term this merging would be desirable, however before we do that we need
to ensure freshness at different levels within the dbt_project.yml gets
properly merged (currently the different levels clobber each other). Fixing
that is a larger issue though. So for the time being, the schema definition
of freshness will clobber any dbt_project.yml definition of freshness.
* Add changie doc
* Fix whitespace to make code quality happy
* Set the parsed source freshness to an empty FreshnessThreshold if None
This maintains backwards compatibility
* Revert "bump dbt-common (#11640)"
This reverts commit c6b7655b65.
* add file_format as a top level config in CatalogWriteIntegrationConfig
* add changelog
* Clean up changelog on main
* Bumping version to 1.11.0a1
* Code quality cleanup
* add old changelogs
---------
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
- Add invocations_started_at field to artifact metadata ([#11272](https://github.com/dbt-labs/dbt-core/issues/11272))
### Features
- Add new hard_deletes="new_record" mode for snapshots. ([#10235](https://github.com/dbt-labs/dbt-core/issues/10235))
- Add `batch` context object to model jinja context ([#11025](https://github.com/dbt-labs/dbt-core/issues/11025))
- Ensure pre/post hooks only run on first/last batch respectively for microbatch model batches ([#11094](https://github.com/dbt-labs/dbt-core/issues/11094), [#11104](https://github.com/dbt-labs/dbt-core/issues/11104))
- Support "tags" in Saved Queries ([#11155](https://github.com/dbt-labs/dbt-core/issues/11155))
- Calculate source freshness via a SQL query ([#8797](https://github.com/dbt-labs/dbt-core/issues/8797))
- Add freshness definition on model for adaptive job ([#11123](https://github.com/dbt-labs/dbt-core/issues/11123))
- Meta config for dimensions, measures, and entities ([#None](https://github.com/dbt-labs/dbt-core/issues/None))
- Add doc_blocks to manifest for nodes and columns ([#11000](https://github.com/dbt-labs/dbt-core/issues/11000), [#11001](https://github.com/dbt-labs/dbt-core/issues/11001))
- Combine `--sample` and `--sample-window` CLI params ([#11299](https://github.com/dbt-labs/dbt-core/issues/11299))
- Allow for sampling of ref'd seeds ([#11300](https://github.com/dbt-labs/dbt-core/issues/11300))
- Enable sample mode for 'build' command ([#11298](https://github.com/dbt-labs/dbt-core/issues/11298))
- Allow sampling nodes snapshots depend on and of snapshots as a dependency ([#11301](https://github.com/dbt-labs/dbt-core/issues/11301))
### Fixes
- dbt retry does not respect --threads ([#10584](https://github.com/dbt-labs/dbt-core/issues/10584))
- update adapter version messages ([#10230](https://github.com/dbt-labs/dbt-core/issues/10230))
- Catch DbtRuntimeError for hooks ([#11012](https://github.com/dbt-labs/dbt-core/issues/11012))
- Access DEBUG flag more consistently with the rest of the codebase in ManifestLoader ([#11068](https://github.com/dbt-labs/dbt-core/issues/11068))
- Improve the performance characteristics of add_test_edges() ([#10950](https://github.com/dbt-labs/dbt-core/issues/10950))
- Implement partial parsing for singular data test configs in yaml files ([#10801](https://github.com/dbt-labs/dbt-core/issues/10801))
- Fix debug log messages for microbatch batch execution information ([#11111](https://github.com/dbt-labs/dbt-core/issues/11111))
- Fix running of extra "last" batch when there is only one batch ([#11112](https://github.com/dbt-labs/dbt-core/issues/11112))
- Fix interpretation of `PartialSuccess` to result in non-zero exit code ([#11114](https://github.com/dbt-labs/dbt-core/issues/11114))
- Warn about invalid usages of `concurrent_batches` config ([#11122](https://github.com/dbt-labs/dbt-core/issues/11122))
- Error writing generic test at run time ([#11110](https://github.com/dbt-labs/dbt-core/issues/11110))
- Run check_modified_contract for state:modified ([#11034](https://github.com/dbt-labs/dbt-core/issues/11034))
- Fix unrendered_config for tests from dbt_project.yml ([#11146](https://github.com/dbt-labs/dbt-core/issues/11146))
- Make partial parsing reparse referencing nodes of newly versioned models. ([#8872](https://github.com/dbt-labs/dbt-core/issues/8872))
- Ensure warning about microbatch lacking filter inputs is always fired ([#11159](https://github.com/dbt-labs/dbt-core/issues/11159))
- Fix microbatch dbt list --output json ([#10556](https://github.com/dbt-labs/dbt-core/issues/10556), [#11098](https://github.com/dbt-labs/dbt-core/issues/11098))
- Fix for custom fields in generic test config for not_null and unique tests ([#11208](https://github.com/dbt-labs/dbt-core/issues/11208))
- Loosen validation on freshness to accommodate previously wrong but harmless config. ([#11123](https://github.com/dbt-labs/dbt-core/issues/11123))
- Handle `--limit -1` properly in `ShowTaskDirect` so that it propagates None instead of a negative int ([#None](https://github.com/dbt-labs/dbt-core/issues/None))
- _get_doc_blocks is crashing parsing if .format is called ([#11310](https://github.com/dbt-labs/dbt-core/issues/11310))
- Fix microbatch execution to not block main thread nor hang ([#11243](https://github.com/dbt-labs/dbt-core/issues/11243), [#11306](https://github.com/dbt-labs/dbt-core/issues/11306))
- Fixes parsing errors when using the new YAML format for snapshots ([#11164](https://github.com/dbt-labs/dbt-core/issues/11164))
### Under the Hood
- Create a no-op exposure runner ([#](https://github.com/dbt-labs/dbt-core/issues/), [#](https://github.com/dbt-labs/dbt-core/issues/))
- Improve selection performance by optimizing the select_children() and select_parents() functions. ([#11099](https://github.com/dbt-labs/dbt-core/issues/11099))
- Change exception type from DbtInternalException to UndefinedMacroError when macro not found in 'run operation' command ([#11192](https://github.com/dbt-labs/dbt-core/issues/11192))
- Add opt-in validation of macro argument names and types ([#11274](https://github.com/dbt-labs/dbt-core/issues/11274))
- Add support for Python 3.13! ([#11401](https://github.com/dbt-labs/dbt-core/issues/11401))
- Support artifact upload to dbt Cloud ([#11418](https://github.com/dbt-labs/dbt-core/issues/11418))
### Fixes
- Update ConfigFolderDirectory dir to use str. ([#9768](https://github.com/dbt-labs/dbt-core/issues/9768), [#11305](https://github.com/dbt-labs/dbt-core/issues/11305))
- Fix microbatch models counting as success when only having one batch (and that batch failing) ([#11390](https://github.com/dbt-labs/dbt-core/issues/11390))
### Under the Hood
- Add node_checksum to node_info on structured logs ([#11372](https://github.com/dbt-labs/dbt-core/issues/11372))
- Flip behavior flag `source-freshness-run-project-hooks` to true ([#11609](https://github.com/dbt-labs/dbt-core/issues/11609))
### Features
- Show summaries for deprecations and add ability to toggle seeing all deprecation violation instances ([#11429](https://github.com/dbt-labs/dbt-core/issues/11429))
- Add behavior flag for handling all warnings via warn_error logic ([#11116](https://github.com/dbt-labs/dbt-core/issues/11116))
- Basic jsonschema validation of `dbt_project.yml` ([#11503](https://github.com/dbt-labs/dbt-core/issues/11503))
- Begin checking YAML files for duplicate keys ([#11296](https://github.com/dbt-labs/dbt-core/issues/11296))
- Add deprecation warnings for unexpected blocks in jinja. ([#11393](https://github.com/dbt-labs/dbt-core/issues/11393))
- Begin validating the jsonschema of resource YAML files ([#11504](https://github.com/dbt-labs/dbt-core/issues/11504))
- Add deprecation warning for custom top level keys in YAML files. ([#11338](https://github.com/dbt-labs/dbt-core/issues/11338))
- Begin emitting deprecation warnings for custom keys in config blocks ([#11337](https://github.com/dbt-labs/dbt-core/issues/11337))
- Begin emitting deprecation events for custom properties found in objects ([#11336](https://github.com/dbt-labs/dbt-core/issues/11336))
- Create a singular deprecations summary event ([#11536](https://github.com/dbt-labs/dbt-core/issues/11536))
- Deprecate --output/-o usage in source freshness ([#11559](https://github.com/dbt-labs/dbt-core/issues/11559))
### Fixes
- datetime.datetime.utcnow() is deprecated as of Python 3.12 ([#9791](https://github.com/dbt-labs/dbt-core/issues/9791))
- Allow copying asset when dbt docs command is run outside the dbt project ([#9308](https://github.com/dbt-labs/dbt-core/issues/9308))
- Add pre-commit installation to Docker container for testing compatibility ([#11498](https://github.com/dbt-labs/dbt-core/issues/11498))
- Fix duplicate macro error message with multiple macros and multiple patches ([#4233](https://github.com/dbt-labs/dbt-core/issues/4233))
- Fix seed path for partial parsing if project directory name changes ([#11550](https://github.com/dbt-labs/dbt-core/issues/11550))
- Add `pre-commit` installation to Docker container for testing compatibility ([#11498](https://github.com/dbt-labs/dbt-core/issues/11498))
- Ensure the right key is associated with the `CustomKeyInConfigDeprecation` deprecation ([#11576](https://github.com/dbt-labs/dbt-core/issues/11576))
- Add tags and meta config to exposures ([#11428](https://github.com/dbt-labs/dbt-core/issues/11428))
### Under the Hood
- Add package 'name' to lock file ([#11487](https://github.com/dbt-labs/dbt-core/issues/11487))
- Allow for deprecation previews ([#11597](https://github.com/dbt-labs/dbt-core/issues/11597))