* Add breakpoint.
* Move breakpoint.
* Add fix
* Add changelog.
* Avoid sorting for the string case.
* Add unit test.
* Fix test.
* add good unit tests for coverage of sort method.
* add sql format coverage.
* Modify behavior to log a warning and proceed.
* code review comments.
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Fix exclusive_primary_alt_value_setting to set warn_error_options correctly
* Add test
* Changie
* Fix unit test
* Replace conversion method
* Refactor normalize_warn_error_options
* Add changelog.
* Avoid sorting for the string case.
* add good unit tests for coverage of sort method.
* add sql format coverage.
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Correct `isort` configuration to include dbt-semantic-interfaces as internal
We thought we were already doing this. However, we accidentally missed the last
`s` of `dbt-semantic-interfaces`, so imports from dbt-semantic-interfaces were not
being identified as an internal package by isort. This fixes that.
* Run isort using updated configs to mark `dbt-semantic-interfaces` as included
* Fix `test_can_silence` tests in `test_warn_error_options.py` to ensure silencing
We're fixing an issue wherein `silence` specifications in the `dbt_project.yaml`
weren't being respected. This was odd since we had tests specifically for this.
It turns out the tests were broken. Essentially the warning was instead being raised
as an error due to `include: 'all'`. Then because it was being raised as an error,
the event wasn't going through the logger. We were only asserting in these tests that
the silenced event wasn't going through the logger (which it wasn't) so everything
"appeared" to be working. Unfortunately everything wasn't actually working. This is now
highlighted because `test_warn_error_options::TestWarnErrorOptionsFromProject:test_can_silence`
is now failing with this commit.
* Fix setting `warn_error_options` via `dbt_project.yaml` flags.
Back when I did the work for #10058 (specifically c52d6531) I thought that
the `warn_error_options` would automatically be converted from the yaml
to the `WarnErrorOptions` object as we were building the `ProjectFlags` object,
which holds `warn_error_options`, via `ProjectFlags.from_dict`. And I thought
this was validated by the `test_can_silence` test added in c52d6531. However,
there were two problems:
1. The definition of `warn_error_options` on `ProjectFlags` is a dict, not a
`WarnErrorOptions` object
2. The `test_can_silence` test was broken, and not testing what I thought
The quick fix (this commit) is to ensure `silence` is passed to `WarnErrorOptions`
instantiation in `dbt.cli.flags.convert_config`. The better fix would be to make
the `warn_error_options` of `ProjectFlags` a `WarnErrorOptions` object instead of
a dict. However, to do this we first need to update dbt-common's `WarnErrorOptions`
definition to default `include` to an empty list. Doing so would allow us to get rid
of `convert_config` entirely.
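The shape of the quick fix can be sketched as follows. This is a stand-in, not dbt's actual code: the `WarnErrorOptions` class here is a minimal stand-in for dbt-common's, and the function body is assumed from the description above.

```python
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class WarnErrorOptions:  # stand-in for dbt_common's WarnErrorOptions
    include: List[str] = field(default_factory=list)
    exclude: List[str] = field(default_factory=list)
    silence: List[str] = field(default_factory=list)


def convert_config(config_name: str, value: Any) -> Any:
    """Convert the raw warn_error_options dict into a WarnErrorOptions object,
    making sure `silence` is passed through instead of being dropped."""
    if config_name.lower() == "warn_error_options" and isinstance(value, dict):
        return WarnErrorOptions(
            include=value.get("include", []),
            exclude=value.get("exclude", []),
            silence=value.get("silence", []),  # previously never forwarded
        )
    return value
```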
* Add unit test for `ModelRunner.print_result_line`
* Add (and skip) unit test for `ModelRunner.execute`
An attempt at testing `ModelRunner.execute`. We should probably also be
asserting that the model has been executed. However, before we could get there,
we're running into runtime errors during `ModelRunner.execute`. Currently the
struggle is ensuring the adapter exists in the global factory when `execute`
goes looking for it. The error we're getting looks like the following:
```
def test_execute(self, table_model: ModelNode, manifest: Manifest, model_runner: ModelRunner) -> None:
> model_runner.execute(model=table_model, manifest=manifest)
tests/unit/task/test_run.py:121:
----
core/dbt/task/run.py:259: in execute
context = generate_runtime_model_context(model, self.config, manifest)
core/dbt/context/providers.py:1636: in generate_runtime_model_context
ctx = ModelContext(model, config, manifest, RuntimeProvider(), None)
core/dbt/context/providers.py:834: in __init__
self.adapter = get_adapter(self.config)
venv/lib/python3.10/site-packages/dbt/adapters/factory.py:207: in get_adapter
return FACTORY.lookup_adapter(config.credentials.type)
----
self = <dbt.adapters.factory.AdapterContainer object at 0x106e73280>, adapter_name = 'postgres'
def lookup_adapter(self, adapter_name: str) -> Adapter:
> return self.adapters[adapter_name]
E KeyError: 'postgres'
venv/lib/python3.10/site-packages/dbt/adapters/factory.py:132: KeyError
```
* Add `postgres_adapter` fixture for use in `TestModelRunner`
Previously we were running into an issue where, during `ModelRunner.execute`,
the mock adapter we were using wouldn't be found in the global adapter
factory. We've gotten past this error by supplying a "real" adapter, a
`PostgresAdapter` instance. However, we're now running into a new error
in which the materialization macro can't be found. This error looks like
```
model_runner = <dbt.task.run.ModelRunner object at 0x106746650>
def test_execute(
self, table_model: ModelNode, manifest: Manifest, model_runner: ModelRunner
) -> None:
> model_runner.execute(model=table_model, manifest=manifest)
tests/unit/task/test_run.py:129:
----
self = <dbt.task.run.ModelRunner object at 0x106746650>
model = ModelNode(database='dbt', schema='dbt_schema', name='table_model', resource_type=<NodeType.Model: 'model'>, package_na...ected'>, constraints=[], version=None, latest_version=None, deprecation_date=None, defer_relation=None, primary_key=[])
manifest = Manifest(nodes={'seed.pkg.seed': SeedNode(database='dbt', schema='dbt_schema', name='seed', resource_type=<NodeType.Se...s(show=True, node_color=None), patch_path=None, arguments=[], created_at=1718229810.21914, supported_languages=None)}})
def execute(self, model, manifest):
context = generate_runtime_model_context(model, self.config, manifest)
materialization_macro = manifest.find_materialization_macro_by_name(
self.config.project_name, model.get_materialization(), self.adapter.type()
)
if materialization_macro is None:
> raise MissingMaterializationError(
materialization=model.get_materialization(), adapter_type=self.adapter.type()
)
E dbt.adapters.exceptions.compilation.MissingMaterializationError: Compilation Error
E No materialization 'table' was found for adapter postgres! (searched types 'default' and 'postgres')
core/dbt/task/run.py:266: MissingMaterializationError
```
* Add spoofed macro fixture `materialization_table_default` for `test_execute` test
Previously the `TestModelRunner:test_execute` test was running into a runtime error
due to the `materialization_table_default` macro not existing in the project. This
commit adds that macro to the project (though it should ideally get loaded via interactions
between the manifest and adapter). Manually adding it resolved our previous issue, but created
a new one. The macro appears to not be properly loaded into the manifest, and thus isn't
discoverable later on when getting the macros for the jinja context. This leads to an error
that looks like the following:
```
model_runner = <dbt.task.run.ModelRunner object at 0x1080a4f70>
def test_execute(
self, table_model: ModelNode, manifest: Manifest, model_runner: ModelRunner
) -> None:
> model_runner.execute(model=table_model, manifest=manifest)
tests/unit/task/test_run.py:129:
----
core/dbt/task/run.py:287: in execute
result = MacroGenerator(
core/dbt/clients/jinja.py:82: in __call__
return self.call_macro(*args, **kwargs)
venv/lib/python3.10/site-packages/dbt_common/clients/jinja.py:294: in call_macro
macro = self.get_macro()
---
self = <dbt.clients.jinja.MacroGenerator object at 0x1080f3130>
def get_macro(self):
name = self.get_name()
template = self.get_template()
# make the module. previously we set both vars and local, but that's
# redundant: They both end up in the same place
# make_module is in jinja2.environment. It returns a TemplateModule
module = template.make_module(vars=self.context, shared=False)
> macro = module.__dict__[get_dbt_macro_name(name)]
E KeyError: 'dbt_macro__materialization_table_default'
venv/lib/python3.10/site-packages/dbt_common/clients/jinja.py:277: KeyError
```
It's becoming apparent that we need to find a better way to either mock or legitimately
load the default and adapter macros. At this point I think I've exhausted the time box
I should be using to figure out whether testing the `ModelRunner` class is currently
possible, with the result being that more work has yet to be done.
* Begin adding the `LogModelResult` event catcher to event manager class fixture
* init push for issue 10198
* add changelog
* add unit tests based on michelle example
* add data_tests, and post_hook unit tests
* pull creating macro_func out of try call
* revert last commit
* pull macro_func definition back out of try
* update code formatting
* Add basic semantic layer fixture nodes to unit test `manifest` fixture
We're doing this in preparation for a unit test which will be testing
these nodes (as well as others), and thus we want them in the manifest.
* Add `WhereFilterIntersection` to `QueryParams` of `saved_query` fixture
In the previous commit, 58990aa450, we added
the `saved_query` fixture to the `manifest` fixture. This broke the test
`tests/unit/parser/test_manifest.py::TestPartialParse::test_partial_parse_by_version`.
It broke because `Manifest.deepcopy` basically dictifies things. When we were
dictifying the `QueryParams` of the `saved_query` before, the `where` key was getting
dropped because it was `None`. We'd then run into a runtime error on instantiation of the
`QueryParams` because although `where` is declared as _optional_, we don't set a default
value for it. And thus, kaboom :(
We should probably provide a default value for `where`. However that is out of scope
for this branch of work.
* Fix `test_select_fqn` to account for newly included semantic model
In 58990aa450 we added a semantic model
to the `manifest` fixture. This broke the test
`tests/unit/graph/test_selector_methods.py::test_select_fqn` because in
the test it selects nodes based on the string `*.*.*_model`. The newly
added semantic model matches this, and thus needed to be added to the
expected results.
* Add unit tests for `_warn_for_unused_resource_config_paths` method
Note: At this point the test, when run for a `unit_test` config
that should be considered used, fails. This is because the config is not being
properly identified as used.
* Include `unit_tests` in `Manifest.get_resource_fqns`
Because `unit_tests` weren't being included in calls to `Manifest.get_resource_fqns`,
it always appeared to `_warn_for_unused_resource_config_paths` that there were no
unit tests in the manifest. Because of this `_warn_for_unused_resource_config_paths` thought
that _any_ `unit_test` config in `dbt_project.yaml` was unused.
* Rename `manifest` fixture in `test_selector` to `mock_manifest`
We have a globally available `manifest` fixture in our unit tests. In the
coming commits we're going to add tests to the file which use the globally
available `manifest` fixture. Prior to this commit, the locally defined
`manifest` fixture was taking precedence. To get around this, the easiest
solution was to rename the locally defined fixture.
I had tried to isolate the locally defined fixture by moving it, and the relevant
tests to a test class like `TestNodeSelector`. However because of _how_ the relevant
tests were parameterized, this proved difficult. Basically for readability we define
a variable which holds a list of all the parameterization variables. By moving to a
test class, the definition of the variables would have had to be defined directly in
the parameterization macro call. Although possible, it made the readability slightly
worse. It might be worth doing anyway in the long run, but instead I used the less
heavy-handed alternative already described.
* Improve type hinting in `tests/unit/utils/manifest.py`
* Ensure `args` get set from global flags for `runtime_config` fixture in unit tests
The `Compiler.compile` method accesses `self.config.args.which`. The `config`
is the `RuntimeConfig` the `Compiler` was instantiated with. Our `runtime_config`
fixture was being instantiated with an empty dict for the `args` property. Thus
the `which` property of the args wasn't being made available, and if `compile` was run
a runtime error would occur. To solve this, we've begun instantiating the args from
the global flags via `get_flags()`. This works because we ensure the `set_test_flags`
fixture is run first which calls `set_from_args`.
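The shape of this fix can be sketched with stand-ins for dbt's `set_from_args`/`get_flags` globals (names and behavior here are assumed for illustration, not dbt's actual implementations):

```python
from argparse import Namespace

_FLAGS = Namespace()  # stand-in for dbt's module-level global flags


def set_from_args(args: Namespace) -> None:
    """Stand-in for dbt's set_from_args: store args in the global flags."""
    global _FLAGS
    _FLAGS = args


def get_flags() -> Namespace:
    """Stand-in for dbt's get_flags: read the global flags back."""
    return _FLAGS


class RuntimeConfig:  # stand-in holding only what the sketch needs
    def __init__(self, args: Namespace) -> None:
        self.args = args


# Before: RuntimeConfig(args={}) meant config.args.which blew up in compile.
# After: the fixture builds args from the global flags, which a prior
# fixture has already populated via set_from_args.
set_from_args(Namespace(which="run"))
runtime_config = RuntimeConfig(args=get_flags())
```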
* Create a `make_manifest` utility function for use in unit tests and fixture creation
* Refactor `Compiler` and `NodeSelector` tests in `test_selector.py` to use pytesting methodology
* Remove parsing tests that exist in `test_selector.py`
We had some tests in `test_selector.py::GraphTest` that didn't add
anything on top of what was already being tested elsewhere in the file
except the parsing of models. However, the tests in `test_parser.py::ModelParserTest`
cover everything being tested here (and then some). Thus these tests
in `test_selector.py::GraphTest` are unnecessary and can be deleted.
* Move `test__partial_parse` from `test_selector.py` to `test_manifest.py`
There was a test `test__partial_parse` in `test_selector.py` which tested
the functionality of `is_partial_parsable` of the `ManifestLoader`. This
doesn't really make sense to exist in `test_selector.py` where we are
testing selectors. We test the `ManifestLoader` class in `test_manifest.py`
which seemed like a more appropriate place for the test. Additionally we
renamed the test to `test_is_partial_parsable_by_version` to more accurately
describe what is being tested.
* Make `test_selector`'s manifest fixture name even more specific
* Add type hint to `expected_nodes` in `test_selector.py` tests
In the test `tests/unit/graph/test_selector.py::TestCompiler::test_two_models_simple_ref`
we have a variable `expected_nodes` that we are setting via a list comprehension.
At a glance it isn't immediately obvious what `expected_nodes` actually is. It's a
list, but a list of what? One suggestion was to explicitly write out the list of strings.
However, I worry about the brittleness of doing so. That might be the way we head long
term, but as a compromise for now, I've added a type hint to the variable definition.
* Fire skipped events at debug level
Closes https://github.com/dbt-labs/dbt-core/issues/8774
* add changelog entry
* Update to work with 1.9.*.
* Add tests for --fail-fast not showing skip messages unless --debug.
* Update test that works by itself, but assumes too much to work in integration tests.
---------
Co-authored-by: Scott Gigante <scottgigante@users.noreply.github.com>
* init push arbitrary configs for generic tests pr
* iterative work
* initial test design attempts
* test reformatting
* test rework, have basic structure for 3 of 4 passing, need to figure out how to best represent same key error, failing correctly though
* swap up test formats for new config dict and mixed variety, running dbt parse and inspecting the manifest
* modify tests to get passing, then modify the TestBuilder class work from earlier to be more dry
* add changelog
* modify code to match suggested changes around separate methods and test id fix
* add column_name reference to init for deeper nested _render_values can use the input
* add type annotations
* feedback based on mike review
* Create `runtime_config` fixture and necessary upstream fixtures
* Check for better scoped `ProjectContractError` in test_runtime tests
Previously in `test_unsupported_version_extra_config` and
`test_archive_not_allowed` we were checking for `DbtProjectError`. This
worked because `ProjectContractError` is a subclass of `DbtProjectError`.
However, if we check for `DbtProjectError` in these tests, then some tangential
failure which raises a `DbtProjectError`-type error would go undetected. As
we plan on converting these tests to pytest in the coming commits, we want to
ensure that the tests are succeeding for the right reason.
* Convert `test_str` of `TestRuntimeConfig` to a pytest test using fixtures
* Convert `test_from_parts` of `TestRuntimeConfig` to a pytest test using fixtures
While converting `test_from_parts` I noticed the comment
> TODO(jeb): Adapters must assert that quoting is populated?
This led me to believe that `quoting` shouldn't be "fully" realized
in our project fixture unless we're saying that it's gone through
adapter instantiation. Thus I updated the `quoting` on our project
fixture to be an empty dict. This change affected `test__str__` in
`test_project.py` which we thus needed to update accordingly.
* Convert runtime version specifier tests to pytest tests and move to test_project
We've done two things in this commit, which arguably _should_ have been done in
two commits. First, we moved the version specifier tests from `test_runtime.py::TestRuntimeConfig`
to `test_project.py::TestGetRequiredVersion`, because what is really being
tested is the `_get_required_version` method. Doing it via the `RuntimeConfig.from_parts` method
made actually testing it a lot harder as it requires setting up more of the world and
running with a _full_ project config dict.
The second thing we did was convert it from the old unittest implementation to a pytest
implementation. This saves us from having to create most of the world as we were doing
previously in these tests.
Of note, I did not move the test `test_unsupported_version_range_bad_config`. This test
is a bit different from the rest of the version specifier tests. It was introduced in
[1eb5857811](1eb5857811)
of [#2726](https://github.com/dbt-labs/dbt-core/pull/2726) to resolve [#2638](https://github.com/dbt-labs/dbt-core/issues/2638).
The focus of #2726 was to ensure the version specifier checks were run _before_ the validation
of the `dbt_project.yml`. Thus what this test is actually testing for is order of
operations at parse time. As such, this is really more a _functional_ test than a
unit test. In the next commit we'll get this test moved (and renamed)
* Create a better test for checking that version checks come before project schema validation
* Convert `test_get_metadata` to pytest test
* Refactor `test_archive_not_allowed` to functional test
We do already have tests that ensure "extra" keys aren't allowed in
the dbt_project.yaml. This test is different because it's checking that
a specific key, `archive`, isn't allowed. We do this because at one point
in time `archive` _was_ an allowed key. Specifically, we stopped allowing
`archive` in dbt-core 0.15.0 via commit [f26948dd](f26948dde2).
Given that it's been 5 years and a major version, we could probably remove
this test, but let's keep it around unless we start supporting `archive` again.
* Convert `warn_for_unused_resource_config_paths` tests to use pytest
* Add fixtures for setting and resetting flags for unit tests
* Remove unnecessary `set_from_args` in non `unittest.TestCase` based unit tests
In the previous commit we added a pytest fixture which sets and tears down
the global flags arg via `set_from_args` for every pytest based unit test.
Previously we had added a `set_from_args` in tests or test files to reset
the global flags if they were modified by a previous test. This is no
longer necessary because of the work done in the previous commit.
Note: We did not modify any tests that use the `unittest.TestCase` class
because they don't use pytest fixtures. Thus those tests need to continue
operating as they currently do until we shift them to pytest unit tests.
* Utilize the new `args_for_flags` fixture for setting of flags in `test_contracts_graph_parsed.py`
* Convert `test_compilation.py` from `TestCase` tests to pytest tests
We did this so that in the next commit we can drop the unnecessary `set_from_args`.
That drop will be its own commit because converting these tests is a
restructuring, and doing it separately makes things easier to follow.
That is to say, all changes in this commit were just to convert the tests to
pytest; no other changes were made.
* Drop unnecessary `set_from_args` in `test_compilation.py`
* Add return types to all methods in `test_compilation.py`
* Reduce imports from `compilation` in `test_compilation.py`
* Update `test_logging.py` now that we don't need to worry about global flags
* Conditionally import `Generator` type for python 3.8
In python 3.9 `Generator` was moved to `collections.abc` and deprecated
in `typing`. We still support 3.8 and thus need to be conditionally
importing `Generator`. We should remove this in the future when we drop
support for 3.8.
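The conditional import described above amounts to a version gate like the following (a sketch; dbt's actual code may differ slightly):

```python
import sys

# Generator moved to collections.abc in Python 3.9 and was deprecated in
# typing, but typing.Generator is all that's available on Python 3.8.
if sys.version_info >= (3, 9):
    from collections.abc import Generator
else:  # Python 3.8
    from typing import Generator


def countdown(n: int) -> Generator[int, None, None]:
    """Trivial generator just to exercise the imported type."""
    while n > 0:
        yield n
        n -= 1
```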
* Add more accurate RSS high water mark measurement for Linux
* Add changelog entry.
* Checks to avoid exception based flow control, per review.
* Fix review nit.
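On Linux, a more accurate RSS high-water mark than `ru_maxrss` can be read from the `VmHWM` field of `/proc/self/status`. The sketch below illustrates the idea (including an up-front file check rather than exception-based flow control, per the review note); it is not necessarily dbt's implementation.

```python
import os
import platform
import resource


def peak_rss_bytes() -> int:
    """Best-effort peak resident set size in bytes. Sketch only."""
    status_path = "/proc/self/status"
    # Check for the proc file up front instead of relying on exceptions.
    if platform.system() == "Linux" and os.path.exists(status_path):
        with open(status_path) as f:
            for line in f:
                if line.startswith("VmHWM:"):
                    return int(line.split()[1]) * 1024  # reported in kB
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is bytes on macOS but kilobytes on Linux
    return rss if platform.system() == "Darwin" else rss * 1024
```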
* Add unit test to assert `setup_config_logger` clears the event manager state
* Move `setup_event_logger` tests from `test_functions.py` to `test_logging.py`
* Move `EventCatcher` to `tests.utils` for use in unit and functional tests
* Update fixture mocking global event manager to instead clear it
Previously we had started _mocking_ the global event manager. We did this
because we thought that meant anything we did to the global event manager,
including modifying it via things like `setup_event_logger`, would be
automatically cleaned up at the end of any test using the fixture because
the mock would go away. However, this understanding of fixtures and mocking
was incorrect, and the global event manager wouldn't be properly isolated/reset.
Thus we changed the fixture to instead clean up the global event manager before
any test that uses it and by using `autouse=True` in the fixture definition
we made it so that every unit test uses the fixture.
Note this will no longer be viable if we ever multi-thread our unit testing, as
the event manager isn't actually isolated, and thus two tests could both modify
the event manager at the same time.
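The clear-instead-of-mock pattern can be sketched with a stand-in global manager (the class, global, and fixture names here are assumed for illustration, not dbt's actual ones):

```python
import pytest


class EventManager:  # stand-in for dbt's global event manager
    def __init__(self) -> None:
        self.callbacks = []


EVENT_MANAGER = EventManager()  # module-level global, as in dbt


def reset_event_manager() -> None:
    """Clear the global manager's state rather than mocking it away."""
    EVENT_MANAGER.callbacks.clear()


@pytest.fixture(autouse=True)
def clean_event_manager():
    # autouse=True means every test in scope gets a freshly cleared manager,
    # so callbacks registered by one test can't leak into the next.
    reset_event_manager()
    yield
```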
* Add test for different `write_perf_info` values to `get_full_manifest`
* Add test for different `reset` values to `get_full_manifest`
* Abstract required mocks for `get_full_manifest` tests to reduce duplication
There are a set of required mocks that `get_full_manifest` unit tests need.
Instead of doing these mocks in each test, we've abstracted these mocks into
a reusable function. I did try to do this as a fixture, but for some reason
the mocks didn't actually propagate when I did that.
* Add test for different `PARTIAL_PARSE_FILE_DIFF` values to `get_full_manifest`
* Refactor mock fixtures in `test_manifest.py` to make them more widely available
* Convert `set_required_mocks` of `TestGetFullManifest` into a fixture
This wasn't working before, but it does now. Not sure why.
This was done by running `pre-commit run --all`. That this was needed
is a temporary glitch in how our `Tests and Code Checks` github action
works on PRs. Basically we added `isort` to the pre-commit hooks recently, and
this does additional linting/formatting on our imports.
People reasonably have branches which were started prior to `isort` being
part of the pre-commit hooks on main. Thus, unless those branches get caught
up to main, the github action on associated PRs won't run `isort` because
it doesn't exist on those branches. Once everyone gets their local `main`
branch updated (I suspect this might take a few days) this problem will go
away.
* Add `isort` as a dev-req and pre-commit hook
The tool `isort` reorders imports to be in alphabetical order. I've
added it because our imports in most files are in random order. The lack
of order meant that:
- sometimes the same module would be imported from twice
- figuring out whether a module was already being imported from took
longer
In the next commit I'll actually run isort to order everything. The best
part is that when developing, we don't have to put them in correct order.
Though you can if you want. However, `isort` will take care of re-ordering
things at commit time :)
* Improve isort functionality by setting initial `.isort.cfg`
Specifically we set two config values: `extend_skip_glob` and `known_first_party`.
The `extend_skip_glob` extends the default skipped paths. The defaults can be seen
here https://pycqa.github.io/isort/docs/configuration/options.html#skip. We are skipping
third party stubs because these are more so provided (I believe). We are skipping
`.github` and `scripts` as they feel out of scope and things we can be less strict with.
The `known_first_party` setting makes it so that these imports get grouped separately from
all other imports, which is useful visually of "this comes from us" vs "this comes from
someone/somewhere else".
* Add profile `black` to isort config
I was seeing some odd behavior where running pre-commit, adding the modified
files, and then running pre-commit again would result in more modifications
to some of the same files. This felt odd; you shouldn't have to run pre-commit
multiple times for it to eventually converge on a final "solution". I believe
the problem was that we are using the tool `black` to format things but weren't
registering the `black` profile with `isort`. This led to conflicting formatting
rules, and the two tools had to negotiate a few times before both were satisfied.
Registering the profile `black` with `isort` resolved this problem.
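Put together, the resulting `.isort.cfg` might look something like the following (illustrative values; the exact globs and package names in dbt-core may differ):

```ini
; Sketch of the isort config described above; values are illustrative.
[settings]
profile = black
extend_skip_glob = .github/*,scripts/*,third-party-stubs/*
known_first_party = dbt,dbt_semantic_interfaces
```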
* Reorder, merge-duplicate, and format module imports using `isort`
This was done by running `pre-commit run --all`. I ran it separately from
the commit process itself because I wanted to run it against all files
instead of only changed files.
Of note, this not only reordered and formatted our imports; we also
had 60 distinct duplicate module import paths across 50 files, which this
took care of. When I say "distinct duplicate module import paths" I mean
when `from x.y.z import` was imported more than once in a single file.
* add support for explicit nulls for loaded_at_field
* add test
* changelog
* add parsing for tests
* centralize logic a bit
* account for sources being None
* fix bug
* remove new field from SourceDefinition
* add validation for empty string, more tests
* Move deferral from task to manifest loading + RefResolver
* dbt clone must specify --defer
* Fix deferral for unit test type detection
* Add changelog
* Move merge_from_artifact from end of parsing back to task before_run to reduce scope of refactor
* PR review. DeferRelation conforms to RelationConfig protocol
* Add test case for #10017
* Update manifest v12 in test_previous_version_state
---------
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
* Change agate upper bound to v1.10
* Add changelog.
* update lower pin
* for testing
* put back dev requirement
* move the lower pin back to 1.7
---------
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* Refactor test class `EventCatcher` into utils to make accessible to other tests
* Raise minimum version of dbt_common to 1.0.2
We're going to start depending on `silence` existing as an attr of
`WarnErrorOptions`. The `silence` attr was only added in dbt_common
1.0.2, thus that is our new minimum.
* Add ability to silence warnings from `warn_error_options` CLI arguments
* Add `flush` to `EventCatcher` test util, and use in `test_warn_error_options`
* Add tests to `TestWarnErrorOptionsFromCLI` for `include` and `exclude`
* Test support for setting `silence` of `warn_error_options` in `dbt_project` flags
Support for `silence` was _automatically_ added when we upgraded to dbt_common 1.0.2.
This is because we build the project flags in a `.from_dict` style, which is cool. In
this case it was _automatically_ handled in `read_project_flags` in `project.py`. More
specifically here bcbde3ac42/core/dbt/config/project.py (L839)
* Add tests to `TestWarnErrorOptionsFromProject` for `include` and `exclude`
Typically we can't have multiple tests in the same `test class` if they're
utilizing/modifying file system fixtures. That is because the file system
fixtures are scoped to test classes, so they don't reset between tests within
the same test class. This problem _was_ affecting these tests, as they modify the
`dbt_project.yml` file which is set by a class based fixture. To get around this,
because I find it annoying to create multiple test classes when the tests really
should be grouped, I created a "function" scoped fixture to reset the `dbt_project.yml`.
* Update `warn_error_options` in CLI args to support `error` and `warn` options
Setting `error` is the same as setting `include`, but only one can be specified.
Setting `warn` is the same as setting `exclude`, but only one can be specified.
* Update `warn_error_options` in Project flags to support `error` and `warn` options
As part of this I refactored `exclusive_primary_alt_value_setting` into an upstream
location `/config/utils`. That is because importing it in `/config/project.py` from
`cli/option_types.py` caused some circular dependency issues.
* Use `DbtExclusivePropertyUseError` in `exclusive_primary_alt_value_setting` instead of `DbtConfigError`
Using `DbtConfigError` seemed reasonable. However, in order to make sure the error
got raised in `read_project_flags`, we had to mark `DbtConfigError`s to be
re-raised. This had the unintended consequence of re-raising a smattering of errors
which were previously being swallowed. I'd argue that if those are errors we're
swallowing, the functions that raise them should perhaps be modified to
conditionally not raise them, but that's not the world we live in and is out of
scope for this branch of work. Thus instead we've created an error specific to
`WarnErrorOptions` issues, which we now use and catch for re-raising.
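The behavior these commits describe can be sketched as follows. The error class here is a stand-in, and the function signature is assumed from the description, not copied from dbt:

```python
from typing import Dict, Optional


class DbtExclusivePropertyUseError(Exception):
    """Stand-in for dbt's WarnErrorOptions-specific config error."""


def exclusive_primary_alt_value_setting(
    dictionary: Optional[Dict], primary: str, alt: str
) -> None:
    """If only the alt key (e.g. `error`) is set, move its value to the
    primary key (e.g. `include`); raise if both are specified."""
    if dictionary is None:
        return
    primary_value = dictionary.get(primary)
    alt_value = dictionary.get(alt)
    if primary_value is not None and alt_value is not None:
        raise DbtExclusivePropertyUseError(
            f"Only one of `{primary}` or `{alt}` can be specified."
        )
    if alt_value is not None:
        dictionary[primary] = dictionary.pop(alt)
```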
* Add unit tests for `exclusive_primary_alt_value_setting` method
I debated about parametrizing these tests, and it can be done. However,
I found that the resulting code ended up being about the same number of
lines and slightly less readable (in my opinion). Given the simplicity of
these tests, I think not parametrizing them is okay.
Letting the dbt version be dynamic in the project fixture previously was
causing some tests to break whenever the version of dbt actually got updated,
which isn't great. It'd be super annoying to have to always update tests
affected by this. To get around this we've gone and hard coded the dbt version
in the profile. The alternative was to interpolate the version during comparison
during the relevant tests, which felt less appealing.
* Move `tests/unit/test_yaml_renderer.py` to `tests/unit/parser/test_schema_renderer.py`
* Move `tests/unit/test_unit_test_parser.py` to `tests/unit/parser/test_unit_tests.py`
* Convert `tests/unit/test_tracking.py` to use pytest fixtures
* Delete `tests/unit/test_sql_result.py` as it was moved to `dbt-adapters`
* Move `tests/unit/test_semantic_models.py` to `tests/unit/graph/test_nodes.py`
* Group tests of `SemanticModel` in `test_nodes.py` into a `TestSemanticModel` class
* Move `tests/unit/test_selector_errors.py` to `tests/unit/config/test_selectors.py`
* Add `Project` fixture for unit tests and test `Project` class methods
* Move `Project.__eq__` unit tests to new pytest class testing
* Move `Project.hashed_name` unit test to pytest testing class
* Rename some testing class in `test_project.py` to align with testing split
* Refactor `project` fixture to make accessible to other unit tests
* simplify dockerfile, eliminate references to adapter repos as they will be handled in those repos
* keep dbt-postgres target for historical releases of dbt-postgres
* update third party image to pip install conditionally
* Add event type for deprecation of spaces in model names
* Begin emitting deprecation warning for spaces in model names
* Only warn on first model name with spaces unless `--debug` is specified
For projects with a lot of models that have spaces in their names, the
warning about this deprecation would be incredibly annoying. Now we instead
only log the first model name issue and then a count of how many models
have the issue, unless `--debug` is specified.
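The first-offender-plus-count logic can be sketched like this (function and message wording are assumptions for illustration, not dbt's actual event code):

```python
from typing import List


def warn_for_spaces_in_model_names(model_names: List[str], debug: bool) -> List[str]:
    """Warn on the first model name containing spaces; warn on the rest only
    when --debug is set; always finish with a total count."""
    offenders = [name for name in model_names if " " in name]
    messages = []
    for i, name in enumerate(offenders):
        if i == 0 or debug:
            messages.append(f"Model name '{name}' contains spaces (deprecated)")
    if offenders:
        messages.append(f"{len(offenders)} model name(s) contain spaces")
    return messages
```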
* Refactor `EventCatcher` so that the event to catch is setable
We want to be able to catch more than just `SpacesInModelNameDeprecation`
events, and in the next commit we will alter our tests to do so. Thus
instead of writing a new catcher for each event type, a slight modification
to the existing `EventCatcher` makes this much easier.
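The refactor described above might look something like this sketch, where the event type to catch becomes a constructor parameter (class and field names are illustrative stand-ins, not dbt's actual test utilities):

```python
from dataclasses import dataclass, field
from typing import Any, List, Type


@dataclass
class EventCatcher:
    # The event class to catch is now a parameter rather than being hard
    # coded, so one catcher class serves every event type under test.
    event_to_catch: Type
    caught_events: List[Any] = field(default_factory=list)

    def catch(self, event: Any) -> None:
        if isinstance(event, self.event_to_catch):
            self.caught_events.append(event)


class SpacesInModelNameDeprecation:  # stand-in for the real dbt event class
    pass


class SomeOtherEvent:
    pass


catcher = EventCatcher(event_to_catch=SpacesInModelNameDeprecation)
catcher.catch(SpacesInModelNameDeprecation())
catcher.catch(SomeOtherEvent())  # ignored: not the targeted event type
```

Registering `catcher.catch` as an event-manager callback then collects only the targeted events.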
* Add project flag to control whether spaces are allowed in model names
* Log errors and raise exception when `allow_spaces_in_model_names` is `False`
* Use custom event for output invalid name counts instead of `Note` events
Using `Note` events was causing test flakiness when run in a multi-worker
environment using `pytest -nauto`. This is because the event
manager is currently a global. So in a situation where test `A` starts
and test `tests_debug_when_spaces_in_name` starts shortly thereafter,
the event manager for both tests will have the callbacks set in
`tests_debug_when_spaces_in_name`. Then if something in test `A` fired
a `Note` event, this would affect the count of `Note` events that
`tests_debug_when_spaces_in_name` sees, causing assertion failures. By
creating a custom event, `TotalModelNamesWithSpacesDeprecation`, we limit
the possible flakiness to only tests that fire the custom event. Thus
we didn't _eliminate_ all possibility of flakiness, but realistically
only the tests in `test_check_for_spaces_in_model_names.py` can now
interfere with each other. Which still isn't great, but to fully
resolve the problem we need to work on how the event manager is
handled (preferably not globally).
* Always log total invalid model names if at least one
Previously we only logged out the count of how many invalid model names
there were if there were two or more invalid names (and not in debug mode).
However this message is important if there is even one invalid model
name and regardless of whether you are running debug mode. That is because
automated tools might be looking for the event type to track if anything
is wrong.
A related change in this commit is that we now only output the debug hint
if it wasn't run with debug mode. The idea being that if they are already
running it in debug mode, the hint could come across as somewhat
patronizing.
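A minimal sketch of the final warning behavior described above (the function and message text are hypothetical, not dbt's actual implementation):

```python
from typing import List


def spaces_in_name_events(model_names: List[str], debug: bool = False) -> List[str]:
    """Emit one event per bad name in debug mode, otherwise only the first;
    always emit a total count when at least one name is invalid, and only
    include the --debug hint when not already in debug mode."""
    events = []
    invalid = [name for name in model_names if " " in name]
    for name in (invalid if debug else invalid[:1]):
        events.append(f"Model name '{name}' contains spaces")
    if invalid:
        msg = f"{len(invalid)} model name(s) contain spaces"
        if not debug:
            msg += " (run with --debug to see them all)"
        events.append(msg)
    return events
```

With two bad names, a normal run produces one per-name warning plus a count with the hint; a debug run lists every name and omits the hint.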
* Reduce duplicate `if` logic in `check_for_spaces_in_model_names`
* Improve readability of logs related to problematic model names
We want people running dbt to be able to at a glance see warnings/errors
with running their project. In this case we are focused specifically on
errors/warnings in regards to model names containing spaces. Previously
we were only ever emitting the `warning_tag` in the message even if the
event itself was being emitted at an `ERROR` level. We now properly have
`[ERROR]` or `[WARNING]` in the message depending on the level. Unfortunately
we couldn't just look at what level the event was being fired at, because that
information doesn't exist on the event itself.
Additionally, we're using events based off of `DynamicEvents`, which is
unfortunately hard coded to `DEBUG`. Changing this would involve having a
`level` property on the definition in `core_types.proto` and
then having `DynamicEvent`s look to `self.level` in the `level_tag`
method. Then we could change how firing events works based on an
event's `level_tag` return value. This all sounds like a bit of tech
debt suited for a PR, possibly multiple, and thus is not being done here.
* Alter `TotalModelNamesWithSpacesDeprecation` message to handle singular and plural
* Remove duplicate import in `test_graph.py` introduced from merging in main
---------
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* Expect that the `args` variable is un-modified by `dbt.invoke(args)`
* Make `args` variable un-modified by `dbt.invoke(args)`
* Changelog entry
* Expect that the `args` variable is un-modified by `make_dbt_context`
* Make the `args` variable un-modified by `make_dbt_context`
* Make a copy of `args` passed to `make_dbt_context`
* Revert "Make a copy of `args` passed to `make_dbt_context`"
This reverts commit 79227b4d34.
* Ensure BaseRunner handles nodes without `build_path`
Some nodes, like SourceDefinition nodes, don't have a `build_path` property.
This is problematic because we take in nodes with no type checking, and
assume they have properties sometimes, like `build_path`. This was just
the case in BaseRunner's `_handle_generic_exception` and
`_handle_internal_exception` methods. Thus, to stop dbt from crashing when
trying to handle an exception related to a node without a `build_path`,
we added a private method to the BaseRunner class for safely trying
to get `build_path`.
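The safe accessor can be as simple as a `getattr` with a default; the class and method names below are illustrative stand-ins for the real BaseRunner change:

```python
from typing import Any, Optional


class BaseRunner:
    def __init__(self, node: Any) -> None:
        self.node = node

    def _node_build_path(self) -> Optional[str]:
        # SourceDefinition-like nodes have no build_path attribute, so fall
        # back to None instead of raising AttributeError mid-error-handling.
        return getattr(self.node, "build_path", None)


class ModelNode:  # has a build_path, like most executable nodes
    build_path = "target/run/my_project/models/my_model.sql"


class SourceDefinition:  # no build_path attribute at all
    pass
```

With this, exception handlers can format their messages whether or not the node carries a `build_path`.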
* Use keyword arguments when instantiating `Note` events in freshness.py
Previously we were passing arguments to the `Note` event instantiations
in freshness.py as positional arguments. Rather than emitting the desired
`Note` event, this would produce the message
```
[Note] Don't use positional arguments when constructing logging events
```
which was our fault, not the users'. Additionally, we were passing the
level for the event in the `Note` instantiation when we needed to be
passing it to the `fire_event` method.
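The shape of the fix can be mimicked with a keyword-only constructor; `Note`, `fire_event`, and the level handling below are simplified stand-ins for dbt's event APIs, not their real signatures:

```python
class Note:
    def __init__(self, *, msg: str) -> None:
        # Keyword-only, mirroring dbt's guard against positional event args.
        self.msg = msg


def fire_event(event, level=None) -> str:
    # Stand-in: the level belongs here, not in the event constructor.
    return f"[{level or 'info'}] {type(event).__name__}: {event.msg}"


# Post-fix pattern: msg passed by keyword, level passed to fire_event.
line = fire_event(Note(msg="checking freshness"), level="warn")
```

Constructing `Note("checking freshness")` positionally now fails with a `TypeError` instead of silently emitting the wrong event.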
* Raise error when `loaded_at_field` is `None` and metadata check isn't possible
Previously if a source freshness check didn't have a `loaded_at_field` and
metadata source freshness wasn't supported by the adapter, then we'd log
a warning message and let the source freshness check continue. This was problematic
because the source freshness check couldn't actually continue and the process
would raise an error in the form
```
type object argument after ** must be a mapping, not NoneType
```
because the `freshness` variable was never getting set. This error wasn't particularly
helpful for any person running into it. So instead of letting that error
happen we now deliberately raise an error with helpful information.
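In sketch form, the fix is a fail-fast guard before freshness collection (the function name, return value, and message text here are illustrative, not dbt's exact error):

```python
def run_freshness_check(loaded_at_field, adapter_supports_metadata: bool) -> dict:
    """Raise a helpful error up front instead of crashing later with
    'type object argument after ** must be a mapping, not NoneType'."""
    if loaded_at_field is None and not adapter_supports_metadata:
        raise RuntimeError(
            "Source freshness requires a loaded_at_field or an adapter "
            "that supports metadata-based freshness checks."
        )
    # Hypothetical stand-in for the `freshness` value that was never set.
    return {"loaded_at_field": loaded_at_field or "metadata"}
```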
* Add test which ensures bad source freshness checks raise appropriate error
This test directly tests that when a source freshness check doesn't have a
`loaded_at_field` and the adapter in use doesn't support metadata checks,
then the appropriate error message gets raised. That is, it directly tests
the change made in a162d53a8. This test indirectly tests the changes in both
7ec2f82a9 and 7b0ff3198 as the appropriate error can only be raised because
we've fixed other upstream issues via those commits.
* Add changelog entry for source freshness edgecase fixes
* Add @p.profile and @p.target to the list of "global" CLI flags
* Add env vars (DBT_PROFILE, DBT_TARGET) to the params
* Add unit test
* Simplify unit test
* changie
* Update .changes/unreleased/Features-20231115-092005.yaml
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
* Fix incorrect envvar names
* Realign environment variable names
* Remove from specific subcommands
* Add test_global_flags_not_on_subcommands
* Remove one unnecessary test case
* Remove other unnecessary test case
---------
Co-authored-by: Doug Beatty <doug.beatty@dbtlabs.com>
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Our `protobuf` dep was in the section of `setup.py` which we delineate
as expecting all future versions of it to be compatible. However, this
is no longer actually the case, and in e4fe839e45
we restricted it to major version 4.
* [#9570] Fix fixtures in fixtures/subfolders throwing parsing error
* Fast-forward imports to match upstream
* Re-introduce doc strings on traceback info handling
* [#9570] Changelog update for fix of fixtures in fixtures/subfolders throwing parsing error
* [#9570] Improve testability and coverage for partial parsing
* Transform skip_parsing (private variable of ManifestLoader.load()) into instance-attribute of ManifestLoader(), with default value False
(to enable splitting of ManifestLoader.load())
* Split ManifestLoader.load(), to extract operation of PartialParsing into new method called ManifestLoader.safe_update_project_parser_files_partially()
(to simplify both cognitive complexity in the codebase and mocking in unit tests)
* Add "ignore" type-comments in new ManifestLoader.safe_update_project_parser_files_partially()
(to silence mypy warnings regarding instance-attributes which can be initialized as None or as something else, e.g. self.saved_manifest)[1]
[1] Although I wanted to avoid "ignore" type-comments, it seems like addressing these mypy warnings in a stricter sense requires technical alignment and broader code changes.
For example, might need to initialize self.saved_manifest as Manifest, instead of Optional[Manifest], so that PartialParsing gets inputs with type it currently expects.
... perhaps too far beyond the scope of this fix?
* Check for equality with existing input_measures when adding input_measures
* Changie
* Add type annotation
* Move add_input_measure to metric from type_params
* Add tests to check that saved queries show in `dbt list`
* Update `list` task to support saved queries
This is built off of @jtcohen6's work in d6e7cda on jerco/fix-9532.
I didn't directly cherry pick because there was more work to do as
well as merge conflicts. That is to say @jtcohen6 should be credited
with some of the work.
* Update error message when iterating over nodes during list command errors
This was originally suggested by @jtcohen6 in d6e7cda of jerco/fix-9532.
This commit just makes sure the change gets included because I didn't
cherry-pick that commit into this work.
* Add test around deleting a YAML file containing semantic models and metrics
It was raised in https://github.com/dbt-labs/dbt-core/issues/8860 that an
error is being raised during partial parsing when files containing
metrics/semantic models are deleted. In further testing it looks like this
error specifically happens when a file containing both semantic models and
metrics is deleted. If the deleted file contains just semantic models or
metrics there seems to be no issue. The next commit should contain the fix.
* Skip deleted schema files when scheduling files during partial parsing
Waaaay back (in 7563b99) deleted schema files started being separated out
from deleted non-schema files. However, ever since then, when it came to scheduling
files for reparsing, we've only done so for deleted non-schema files. We even
missed this when we refactored the scheduling code in b37e5b5. This change
updates `_schedule_for_parsing` which is used by `schedule_nodes_for_parsing`
to begin skipping deleted schema files in addition to deleted non schema files.
* Update `add_to_pp_files` to ignore `deleted_schema_files`
As noted in the previous commit, we started separating out deleted
schema files from deleted non-schema files a looong time ago. However,
this whole time we've been adding `deleted_schema_files` to the list
of files to be parsed. This change corrects for that.
* Add changie doc for partial parsing KeyError fix
Protobuf v5 has breaking changes. Here we are limiting the protobuf
dependency to one major version, 4, so that we don't have to patch
over handling 2 different major versions of protobuf.
* Clearer no-op logging in stubbed SavedQueryRunner
* Add changelog entry
* Fix unit test
* More logging touchups
* Fix failing test
* Rename flag + refactor per #9629
* Fix failing test
* regenerate core_proto_types with libprotoc 25.3
---------
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
A recent update to the version ranges for our internally
maintained support packages quite reasonably expanded the
allowed versions for dbt-semantic-interfaces to all minor versions
after 0.5.0, under the assumption that subsequent releases will
generally be backwards-compatible.
Unfortunately, dbt-semantic-interfaces is not yet in that state.
So we update the version range accordingly, and include some
comments around version range expectations for dependencies
listed in this section of dbt-core's package configuration.
CVE-2024-22195 identified an issue in Jinja2 versions <= 3.1.2. As such
we've gone and changed our dependency requirement specification to be
3.1.3 or greater (but less than 4).
Note: Previously we were using the `~=` version specifier. However, due
to some issues with `~=`, we've moved to using `>=` in combination
with `<`. This gives us the same range that `~=` gave us, but avoids
a pip resolution issue when multiple packages in an environment use `~=`
for the same dependency.
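As a rough illustration of the resulting range, here is a hand-rolled check mirroring `>=3.1.3,<4` (tuple comparison on parsed parts; this is not how pip actually resolves specifiers):

```python
def satisfies_jinja2_pin(version: str) -> bool:
    # Mirrors ">=3.1.3,<4": at least the patched release, below major 4.
    parts = tuple(int(p) for p in version.split("."))
    return parts >= (3, 1, 3) and parts < (4,)
```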
* Remove extraneous `/` in `schema-check.yml`
We have a hypothesis that the extra `/` in `schema-check` is causing
issues we're seeing currently in the artifact check failing. It may
not be the final solution, but we should fix it anyway.
* Move `artifact_minor_upgrade` label check to job level of `Check Artifact Changes`
Previously the checking for `artifact_minor_upgrade` was happening in each job
step of `Check Artifact Changes`. By moving it up to the job level instead of
in the job steps we make it so the check for the label only happens once and
it simplifies the job steps.
* Update `Check Artifact Changes` to use `dorny/paths-filter`
Previously we were using `git diff` to check if any files had changed
in `core/dbt/artifacts`. However, our `git diff` usage was including any
changes that happened on `main` which the PR branch did not have. This
commit switches the check from using `git diff` to `dorny/paths-filter`,
which is what we use for checking for changelog existence as well. The
`dorny/paths-filter` includes logic for excluding changes that are on main
but not the PR branch (which is what we want to happen).
* Move `ColumnInfo` to dbt/artifacts
* Move `Quoting` resource to dbt/artifacts
* Move `TimePeriod` to `types.py` in dbt/artifacts
* Move `Time` class to `components`
We need to move the data parts of the `Time` definition to dbt/artifacts.
That is not what we're doing in this commit. In this commit we're simply
moving the functional `Time` definition upstream of `unparsed` and `nodes`.
This does two things
- Mirrors the import path that the resource `time` definition will have in dbt/artifacts
- Reduces the chance of circular import problems between `unparsed` and `nodes`
* Move data part of `Time` definition to dbt/artifacts
* Move `FreshnessThreshold` class to components module
We need to move the data parts of the `FreshnessThreshold` definition to dbt/artifacts.
That is not what we're doing in this commit. In this commit we're simply
moving the functional `FreshnessThreshold` definition upstream of `unparsed` and `nodes`.
This does two things
- Mirrors the import path that the resource `FreshnessThreshold` definition will have in dbt/artifacts
- Reduces the chance of circular import problems between `unparsed` and `nodes`
* Move data part of `FreshnessThreshold` to dbt/artifacts
Note: We had to override some of the attrs of the `FreshnessThreshold`
resource because the resource version only has access to the resource
version of `Time`. The overrides in the functional definition of
`FreshnessThreshold` make it so the attrs use the functional version
of `Time`.
* Move `ExternalTable` and `ExternalPartition` to `source_definition` module in dbt/artifacts
* Move `SourceConfig` to `source_definition` module in dbt/artifacts
* Move `HasRelationMetadata` to core `components` module
This is a precursor to splitting `HasRelationMetadata` into its
data and functional parts.
* Move data portion of `HasRelationMetadata` to dbt/artifacts
* Move `SourceDefinitionMandatory` to dbt/artifacts
* Move the data parts of `SourceDefinition` to dbt/artifacts
Something interesting here is that we had to override the `freshness`
property. We had to do this because if we didn't we wouldn't get the
functional parts of `FreshnessThreshold`, we'd only get the data parts.
Also of note, the `SourceDefinition` has a lot of `@property` methods that
on other classes would be actual attribute properties of the node. There is
an argument to be made that these should be moved as well, but that's perhaps
a separate discussion.
Finally, we have not (yet) moved `NodeInfoMixin`. It is an open discussion
whether we do or not. It seems primarily functional, as a means to update the
source freshness information. As the artifacts primarily deal with the shape
of the data, not how it should be set, it seems for now that `NodeInfoMixin`
should stay in core / not move to artifacts. This thinking may change though.
* Refactor `from_resource` to no longer use generics
In the next commit we're gonna add a `to_resource` method. As we don't
want to have to pass a resource into `to_resource`, the class itself
needs to expose what resource class should be built. Thus a type annotation
is no longer enough. To solve this we've added a class method to BaseNode
which returns the associated resource class. The method on BaseNode will
raise a NotImplementedError unless the inheriting class has overridden
the `resource_class` method to return a resource class.
You may be thinking "Why not a class property"? And that is absolutely a
valid question. We used to be able to chain `@classmethod` with
`@property` to create a class property. However, this was deprecated in
python 3.11 and removed in 3.13 (details on why this happened can be found
[here](https://github.com/python/cpython/issues/89519)). There is an
[alternate way to setup a class property](https://github.com/python/cpython/issues/89519#issuecomment-1397534245),
however this seems a bit convoluted if a class method easily gets the job
done. The drawback is that we must do `.resource_class()` instead of
`.resource_class` and on classes implementing `BaseNode` we have to
override it with a method instead of a property specification.
Additionally, making it a class _instance_ property won't work because
we don't want to require an _instance_ of the class to get the
`resource_class` as we might not have an instance at our disposal.
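The pattern described above, sketched minimally (the resource class names are placeholders, not dbt's full hierarchy):

```python
from typing import Type


class BaseResource:
    pass


class SourceDefinitionResource(BaseResource):
    pass


class BaseNode:
    @classmethod
    def resource_class(cls) -> Type[BaseResource]:
        # A chained @classmethod/@property would read more nicely, but that
        # combination was deprecated in Python 3.11, so a plain classmethod
        # is used and inheriting classes must override it.
        raise NotImplementedError(f"{cls.__name__} must define resource_class")


class SourceDefinition(BaseNode):
    @classmethod
    def resource_class(cls) -> Type[BaseResource]:
        return SourceDefinitionResource
```

Callers pay only the cost of writing `.resource_class()` with parentheses instead of a bare attribute access.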
* Add `to_resource` method to `BaseNode`
Nodes have extra attributes. We don't want these extra attributes to
get serialized. Thus we're converting back to resources prior to
serialization. There could be a CPU hit here as we're now dictifying
and undictifying right before serialization. We can do some complicated
and non-straightforward things to get around this. However, we want
to see how big of a performance hit we actually have before going that
route.
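A sketch of the dictify-and-filter round trip, with plain dataclasses standing in for dbt's serialization mixins (field and class names are illustrative):

```python
import dataclasses


@dataclasses.dataclass
class Resource:
    name: str
    description: str = ""


@dataclasses.dataclass
class Node(Resource):
    compiled_code: str = ""  # runtime-only extra; must not be serialized

    def to_resource(self) -> Resource:
        # Dictify the node, keep only the resource's fields, undictify
        # as the resource so extra node attributes are dropped.
        allowed = {f.name for f in dataclasses.fields(Resource)}
        data = {k: v for k, v in dataclasses.asdict(self).items() if k in allowed}
        return Resource(**data)
```

Serializing `node.to_resource()` instead of the node itself is what keeps node-only state out of the artifact.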
* Drop `__post_serialize__` from `SourceDefinition` node class
The method `__post_serialize__` on the `SourceDefinition` was used for
ensuring the property `_event_status` didn't make it to the serialized
version of the node. Now that the resource definition of `SourceDefinition`
handles serialization/deserialization, we can drop `__post_serialize__`
as it is no longer needed.
* Merge functional parts of `components` into their resource counterparts
We discussed this on the PR. It seems like a minimal lift, and minimal to
support. Doing so also has the benefit of reducing a bunch of the overriding
we were previously doing.
* Fixup: Rename variable `name` to `node_id` in `_map_nodes_to_map_resources`
Naming is hard. That is all.
* Fixup: Ensure conversion of groups to resources for `WritableManifest`
This PR provides additional command line options (Cloud CLI) for dbt development, as well as clarifying 'dbt Core' under the 'Get started' header.
cc @greg-mckeon
* update docker file to remove &subdirectory=plugins/postgres from git path
* remove extra proto file generation scripts which are no longer necessary in this repo
* Begin using `Mergeable` supplied by `dbt-common`
We're currently in the process of moving the "data resource" portion on nodes
to `dbt/artifacts`. Some of those artifacts depend on `Mergeable` which has
been defined on core. In order to move the data resources to `dbt/artifacts`,
we thus need to move `Mergeable` upstream of core. We moved `Mergeable` to
[dbt-common](https://github.com/dbt-labs/dbt-common) in
https://github.com/dbt-labs/dbt-common/pull/59, and released this change in
[dbt-common 0.1.3](https://pypi.org/project/dbt-common/0.1.3/). As such, in
order to unblock some of the `dbt/artifacts` migration work, we first need to
update references to `Mergeable` in core to use the `dbt-common` definition.
NOTE: We include changing over to `Replaceable` from `dbt-common` in this
commit. This is because there wasn't a clean way to do it. If I moved the imports
of `Replaceable` only in the files where we updated `Mergeable`, then we would
have left `Replaceable` in an in-between state. If we had moved all instances
of `Replaceable`, it'd be out of scope for the change. As such, it makes more
sense to do that as a separate changeset.
* Remove definition of `Mergeable` from dbt/contracts/util
Although we've removed the definition of `Mergeable` we've ensured the
import paths are still available. We do this because this is under
`contracts`, and the sudden disappearance from the import path might
cause issues for community members using dbt-core as a library.
Ideally we'd define a `Mergeable` class here that inherits the
`dbt-common` definition and raise a deprecation warning on instantiation.
However, we don't have an established strategy to do so.
* Use new context invocation class.
* Adjust new constructor param on InvocationContext, make tests robust
* Add changelog entry.
* Clarify parameter name
* Move `ExposureType` to dbt/artifacts
* Move `MaturityType` to dbt/artifacts
* Move `ExposureConfig` to dbt/artifacts
* Move data parts of `Exposure` node class to dbt/artifacts
* Update leftover incorrect imports of `Owner` resource
There were a few places in the code base that were importing `Owner`
from `unparsed` or `nodes`. The places importing from `unparsed` were
working because `unparsed` itself was correctly importing from
`artifacts.resources`. However in places where it was being imported
from `nodes`, an exception was being raised because in the previous
commit we removed the import of `Owner` in `nodes` because it was
no longer needed.
* Move `SemanticModel` sub dataclasses to dbt/artifacts
* Move `NodeRelation` to dbt/artifacts
* Move `SemanticModelConfig` to dbt/artifacts
* Move data portion of `SemanticModel` to dbt/artifacts
* Add contextual comments to `semantic_model.py` about DSI protocols
* Fixup mypy complaint
* Migrate v12 manifest to use artifact definitions of `SavedQuery`, `Metric`, and `SemanticModel`
* Convert `SemanticModel` and `Metric` resources to full nodes in selector search
In the `search` method in `selector_methods.py`, we were getting object
representations from the incoming writable manifest by unique id. What we
get from the writable manifest though is increasingly the `resource`
(data artifact) part of the node, not the full node. This was problematic
because a number of the selector processes _compare_ the old node to the
new node, but the `resource` representation doesn't have the comparator
methods.
In this commit we dict-ify the resource and then get the full node by
undictifying that. We should probably have a better built-in process on
the full node objects to do this, but this will do for now.
* Add `from_resource` implementation on `BaseNode` to ease resource to node conversion
We want to easily be able to create nodes from their resource counterparts.
It's actually imperative that we can do so. The previous commit
had a manual way to do so where needed. However, we don't want to have
to put `from_dict(.to_dict())` everywhere. So here we added a `from_resource`
class method to `BaseNode`. Everything that inherits from `BaseNode` thus
automatically gets this functionality.
HOWEVER, the implementation currently has a problem. Specifically, the
type for `resource_instance` is `BaseResource`. Which means if one is
calling say `Metric.from_resource()`, one could hand it a `SemanticModelResource`
and mypy won't complain. In this case, a semi-cryptic error might get
raised at runtime. Whether or not an error gets raised depends entirely
on whether or not the dictified resource instance manages to satisfy all
the required attributes of the desired node class. THIS IS VERY BAD.
We should be able to solve this issue in an upcoming (hopefully next)
commit, wherein we genericize `BaseNode` such that when inheriting it
you declare it with a resource type. Technically a runtime error will
still be possible, however any mixups should be caught by mypy on
pre-commit hooks as well as PRs.
* Make `BaseNode` a generic that is defined with a `ResourceType`
Turning `BaseNode` into an ABC generic allows us to say that the inheriting
class can define what resource type from artifacts it should be used with.
This gives us added type safety to what resource type can be passed into
`from_resource` when called via `SemanticModel.from_resource(...)`,
`Metric.from_resource(...)`, etc.
NOTE: This only gives us type safety from mypy. If we begin ignoring
mypy errors during development, we can still get into a situation for
runtime errors (it's just harder to do so now).
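The generic version can be sketched with `typing.Generic`; the resource classes here are trivial placeholders and the dict round trip stands in for dbt's real conversion:

```python
from dataclasses import asdict, dataclass
from typing import Generic, TypeVar

ResourceT = TypeVar("ResourceT")


@dataclass
class MetricResource:
    name: str


@dataclass
class SemanticModelResource:
    name: str


class BaseNode(Generic[ResourceT]):
    @classmethod
    def from_resource(cls, resource: ResourceT):
        # mypy now rejects Metric.from_resource(SemanticModelResource(...)),
        # though a runtime mixup remains technically possible.
        return cls(**asdict(resource))


@dataclass
class Metric(BaseNode[MetricResource]):
    name: str
```

Declaring `BaseNode[MetricResource]` at inheritance time is what binds the acceptable argument type for `from_resource`.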
* simplify and modularize tagging logic
* change package field to dropdown, log inputs to publish, skip actual publish for testing
* add dry run option
* update to v3 of docker actions to migrate from node16 (deprecated) to node20
* Move `MetricInputMeasure` to dbt/artifacts
* Move `MetricTimeWindow` to dbt/artifacts
* Move `MetricInput` to dbt/artifacts
* Move `ConstantPropertyInput` and `ConversionTypeParams` to dbt/artifacts
* Move `MetricTypeParams` to dbt/artifacts
* Remove obsolete `MetricReference` class from core
The `MetricReference` defined in `nodes.py` is from pre core 1.6 metrics,
i.e. the legacy semantic layer prior to integrating with MetricFlow. I
double checked and found that this `MetricReference` is found _nowhere_
in core. It is dead, with no plan of coming back. Thus deleting it seems
logical.
* Move `MetricConfig` to dbt/artifacts
* Move data portion of `Metric` node to dbt/artifacts
* Move `depends_on_nodes` and `search_name` back to core `Metric` implementation
I got a little too indiscriminate in what got moved in the `Metric`
definition split in the previous commit. Specifically `depends_on_nodes`
and `search_name` shouldn't have been moved to `dbt/artifacts` as they
are specific core internals, not artifacts to be depended on.
* Add context comment to `metric.py` artifact file about upstream protocols.
* Move the common semantic layer node components to v1 artifact resources
* Move `FileSlice` and `SourceFileMetadata` to `semantic_layer_components` in artifacts
* Split `GraphNode` into a functional class in core and data class in artifacts
* Refactor the `same_context` checks of `Exports` into `SavedQuery`
This is important because we want to move the `Export` class to artifacts.
However, because it had functional parts we would have had to split it in half,
with the data definition existing in artifacts and the functional specification
defined in core. At first glance that's not problematic. However, the
`SavedQuery` definition in artifacts would only be able to point at the
data definition of `Export`, and then the functional `SavedQuery` spec in
core would have to override that with the functional `Export` definition
that exists in core. This would make the inheritance rather wonky and
confusing. This refactor simplifies things greatly because now we can move
the entirety of `Export` to artifacts, and the core `SavedQuery` won't
have to override anything.
* Move child components of `SavedQuery` to artifacts
Specifically the components in `contracts/graph/saved_queries.py` which
are `Export`, `ExportConfig`, and `QueryParams` got moved to
`artifacts/resources/v1/saved_query.py`. The moving of `Export` was
made possible by the refactor in the previous commit.
* Move `SavedQueryMandatory` to dbt/artifacts
* Move `SavedQueryConfig` to dbt/artifacts
* Move `DependsOn` class to artifacts
If we had followed the general paradigm we've set, we would have split
`DependsOn` into a data half and a functional half, with the data half
going in artifacts. However, doing so overly complicates the work that
we're doing. Additionally looking forward, we hope to simplify the
`DependsOn` (as well as `MacroDependsOn`) to use `sets` instead of
`lists`, thus allowing us to get rid of the functional part. We haven't
done that refactor here because there is a reasonable amount of risk
associated with such a change, such that doing so should be its own
segment of work.
* Move `NodeVersion` and `RefArgs` to dbt/artifacts
I debated about making this two commits. However I only realized we
needed to also move `NodeVersion` when I was most of the way through
moving `RefArgs`, and instead of stashing, I just decided to do both.
They're kind of inseparable anyways because it only makes sense to
move `NodeVersion` if you move `RefArgs`, but you can't move `RefArgs`
unless you also move `NodeVersion`. The two in one commit are still
small enough that I'm okay with this.
* Move data portion of `SavedQuery` class to dbt/artifacts
* Update implementation-ticket.yml
Changed "Notion docs" to "documentations"
* Added changelog
* modified the contributing and readme files.
* fixed end of files as test failed on previous commit.
* fixed the test errors.
* Changes as per reviewer's request have been made.
* some changes idk
* Update .changes/unreleased/Under the Hood-20240109-091856.yaml
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* Update .github/ISSUE_TEMPLATE/implementation-ticket.yml
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* Update .github/ISSUE_TEMPLATE/implementation-ticket.yml
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
---------
Co-authored-by: Tania <tonayya@users.noreply.github.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* simplify release inputs
* fix vars
* add missing quote:
* drop all env vars since the workflows dont like them
* Update .github/workflows/release.yml
* Add unit test that shows unit tests work with external nodes
* Abbreviate names in external nodes test to stay under 64 character postgres max
Was getting test failures due to the lengthy model names being created
by the unit test task in the functional test
* Fix unit test parsing to ensure external nodes continue to keep their package name
* Add seed to test of external node unit test, and indirectly have the external node point to it
Previously I was getting an error about the columns for the external model
not being fetchable from the database via the macro `get_columns_in_relation`.
By creating a seed for the tests, which creates a table in postgres, we can then
tell the external model that its database, schema, and identifier (the relation)
are that table from the seed without making the seed an actual dependency of the
external model in the DAG.
* Ensure all models in unit test shadow manifest have a non `None` path
External nodes generally don't have paths, but in unit tests we write out
all models to sql files (as this allows us to test them). Thus external
nodes need to have their paths set.
* Add `run` step to function test of unit test with external nodes
This is necessary because when executing a unit tests, the columns
associated with a model in the database are retrieved. For this to
be possible, the model must exist in the database, thus we must
run the associated models at least once first.
* Create a full external package for function test of a unit test with an external node
Previously we were only pseudo creating an external package for testing
how unit tests work with external nodes. This was problematic because the
package didn't actually exist and thus wasn't seen as accessible when running
through dag dependencies. By actually creating the external package, we
ensure that all the built in normal processes happen.
* Add test for more ephemeral external models
* Flip logic in `packages_for_node` to remove error case
By flipping the logic from `not in` to `in` we can drop the exception
and instead default to the model runtime config when the package isn't
found. We're still trying to grok if there will be any fallout from this.
The tests all pass, but that doesn't guarantee nothing bad will happen.
* Add changie doc for added support of external nodes in unit tests
* Initial implementation of unit testing (from pr #2911)
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
* 8295 unit testing artifacts (#8477)
* unit test config: tags & meta (#8565)
* Add additional functional test for unit testing selection, artifacts, etc (#8639)
* Enable inline csv format in unit testing (#8743)
* Support unit testing incremental models (#8891)
* update unit test key: unit -> unit-tests (#8988)
* convert to use unit test name at top level key (#8966)
* csv file fixtures (#9044)
* Unit test support for `state:modified` and `--defer` (#9032)
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
* Allow use of sources as unit testing inputs (#9059)
* Use daff for diff formatting in unit testing (#8984)
* Fix #8652: Use seed file from disk for unit testing if rows not specified in YAML config (#9064)
Co-authored-by: Michelle Ark <MichelleArk@users.noreply.github.com>
Fix #8652: Use seed value if rows not specified
* Move unit testing to test and build commands (#9108)
* Enable unit testing in non-root packages (#9184)
* convert test to data_test (#9201)
* Make fixtures files full-fledged members of manifest and enable partial parsing (#9225)
* In build command run unit tests before models (#9273)
---------
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
Co-authored-by: Michelle Ark <MichelleArk@users.noreply.github.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Kshitij Aranke <kshitij.aranke@dbtlabs.com>
* remove dbt.contracts.connection imports from adapter module
* Move events to common (#8676)
* Move events to common
* More Type Annotations (#8536)
* Extend use of type annotations in the events module.
* Add return type of None to more __init__ definitions.
* Still more type annotations adding -> None to __init__
* Tweak per review
* Allow adapters to include python package logging in dbt logs (#8643)
* add set_package_log_level functionality
* set package handler
* set package handler
* add logging about setting up logging
* test event log handler
* add event log handler
* add event log level
* rename package and add unit tests
* revert logfile config change
* cleanup and add code comments
* add changie
* swap function for dict
* add additional unit tests
* fix unit test
* update README and protos
* fix formatting
* update precommit
---------
Co-authored-by: Peter Webb <peter.webb@dbtlabs.com>
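The package log handler described above can be sketched with the stdlib `logging` module. This is a minimal illustration, not dbt's actual implementation; `EventLogHandler`, `set_package_log_level`, and the collector API are illustrative names.

```python
import logging

class EventLogHandler(logging.Handler):
    """Forward a third-party package's log records into another logging
    pipeline (an illustrative stand-in for dbt's event logger)."""

    def __init__(self, collector, level=logging.INFO):
        super().__init__(level=level)
        self.collector = collector  # anything with an append() method

    def emit(self, record: logging.LogRecord) -> None:
        self.collector.append(f"{record.name}: {record.getMessage()}")

def set_package_log_level(package_name: str, collector, level=logging.DEBUG):
    """Attach the forwarding handler to the named package's logger."""
    logger = logging.getLogger(package_name)
    logger.setLevel(level)
    logger.addHandler(EventLogHandler(collector))
    return logger

# Usage: capture log lines emitted by a hypothetical adapter library
captured = []
set_package_log_level("some_adapter_lib", captured)
logging.getLogger("some_adapter_lib").info("connected")
```

Because handlers attach per logger name, only the named package's records are forwarded; everything else flows through the root logger as before.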
* fix import
* move types_pb2.py from events to common/events
* move agate_helper into common
* Add utils module (#8910)
* moving types_pb2.py to common/events
* split out utils into core/common/adapters
* add changie
* remove usage of dbt.config.PartialProject from dbt/adapters (#8909)
* remove usage of dbt.config.PartialProject from dbt/adapters
* add changie
---------
Co-authored-by: Colin <colin.rogers@dbtlabs.com>
* move agate_helper unit tests under tests/unit/common
* move agate_helper into common (#8911)
* move agate_helper into common
* add changie
---------
Co-authored-by: Colin <colin.rogers@dbtlabs.com>
* remove dbt.flags.MP_CONTEXT usage in dbt/adapters (#8931)
* remove dbt.flags.LOG_CACHE_EVENTS usage in dbt/adapters (#8933)
* Refactor Base Exceptions (#8989)
* moving types_pb2.py to common/events
* Refactor Base Exceptions
* update make_log_dir_if_missing to handle str
* move remaining adapters exception imports to common/adapters
---------
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
* Remove usage of dbt.deprecations in dbt/adapters, enable core & adapter-specific (#9051)
* Decouple adapter constraints from core (#9054)
* Move constraints to dbt.common
* Move constraints to contracts folder, per review
* Add a changelog entry.
* move include/global_project to adapters (#8930)
* remove adapter.get_compiler (#9134)
* Move adapter logger to adapters (#9165)
* moving types_pb2.py to common/events
* Move AdapterLogger to adapter folder
* add changie
* delete accidentally merged types_pb2.py
* Move the semver package to common and alter references. (#9166)
* Move the semver package to common and alter references.
* Alter leftover references to dbt.semver, this time using from syntax.
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Refactor EventManager setup and interaction (#9180)
* moving types_pb2.py to common/events
* move event manager setup back to core, remove ref to global EVENT_MANAGER and clean up event manager functions
* move invocation_id from events to first class common concept
* move lowercase utils to common
* move lowercase utils to common
* ref CAPTURE_STREAM through method
* add changie
* first pass: adapter migration script (#9160)
* Decouple macro generator from adapters (#9149)
* Remove usage of dbt.contracts.relation in dbt/adapters (#9207)
* Remove ResultNode usage from connections (#9211)
* Add RelationConfig Protocol for use in Relation.create_from (#9210)
* move relation contract to dbt.adapters
* changelog entry
* first pass: clean up relation.create_from
* type ignores
* type ignore
* changelog entry
* update RelationConfig variable names
* Merge main into feature/decouple-adapters-from-core (#9240)
* moving types_pb2.py to common/events
* Restore warning on unpinned git packages (#9157)
* Support --empty flag for schema-only dry runs (#8971)
* Fix ensuring we produce valid jsonschema artifacts for manifest, catalog, sources, and run-results (#9155)
* Drop `all_refs=True` from jsonschema-ization build process
Passing `all_refs=True` makes it so that everything is a ref, even
the top-level schema. In jsonschema land, this essentially makes the
produced artifact not a full schema, but a fragment to be included
in a schema. Thus when `$id` is passed in, jsonschema tools blow up,
because `$id` is for identifying a schema, which we explicitly weren't
creating. The alternative was to drop the inclusion of `$id`. However, we're
intending to create a schema, and having an `$id` is recommended best
practice. Additionally, since we were intending to create a schema,
not a fragment, it seemed best to create the full schema.
* Explicitly produce jsonschemas using DRAFT_2020_12 dialect
Previously we were implicitly using the `DRAFT_2020_12` dialect through
mashumaro. It felt wise to begin explicitly specifying this. First, it
is closest in available mashumaro provided dialects to what we produced
pre 1.7. Secondly, if mashumaro changes its default for whatever reason
(say a new dialect is added, and mashumaro moves to that), we don't want
to automatically inherit that.
* Bump manifest version to v12
Core 1.7 released with manifest v11, and we don't want to be overriding
that with 1.8. It'd be weird for 1.7 and 1.8 to both have v11 manifests,
but for them to be different, right?
* Begin including schema dialect specification in produced jsonschema
In jsonschema's documentation they state
> It's not always easy to tell which draft a JSON Schema is using.
> You can use the $schema keyword to declare which version of the JSON Schema specification the schema is written to.
> It's generally good practice to include it, though it is not required.
and
> For brevity, the $schema keyword isn't included in most of the examples in this book, but it should always be used in the real world.
Basically, to know how to parse a schema, it's important to include what
schema dialect is being used for the schema specification. The change in
this commit ensures we include that information.
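Concretely, including the dialect and id amounts to stamping `$schema` and `$id` onto the generated schema object. A pared-down sketch (the real manifest schema is generated via mashumaro; the `$id` URL shown follows the schemas.getdbt.com pattern but is illustrative here):

```python
import json

# A tiny stand-in for a generated artifact schema, with the dialect
# declaration ($schema) and schema identifier ($id) included up front.
schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "$id": "https://schemas.getdbt.com/dbt/manifest/v12.json",
    "type": "object",
    "properties": {"metadata": {"type": "object"}},
}
serialized = json.dumps(schema)
```

With `$schema` present, any consuming tool knows to validate the document against the 2020-12 dialect rather than guessing.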
* Create manifest v12 jsonschema specification
* Add change documentation for jsonschema schema production fix
* Bump run-results version to v6
* Generate new v6 run-results jsonschema
* Regenerate catalog v1 and sources v3 with fixed jsonschema production
* Update tests to handle bumped versions of manifest and run-results
---------
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Michelle Ark <MichelleArk@users.noreply.github.com>
Co-authored-by: Quigley Malcolm <QMalcolm@users.noreply.github.com>
* Move BaseConfig to Common (#9224)
* moving types_pb2.py to common/events
* move BaseConfig and assorted dependencies to common
* move ShowBehavior and OnConfigurationChange to common
* add changie
* Remove manifest from catalog and connection method signatures (#9242)
* Add MacroResolverProtocol, remove lazy loading of manifest in adapter.execute_macro (#9243)
* remove manifest from adapter.execute_macro, replace with MacroResolver + remove lazy loading
* rename to MacroResolverProtocol
* pass MacroResolverProtocol in adapter.calculate_freshness_from_metadata
* changelog entry
* fix adapter.calculate_freshness call
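The shape of this decoupling can be sketched with `typing.Protocol`: the adapter declares the minimal macro-lookup surface it needs, so any resolver (not just the full Manifest) can be passed in. Method and class names below are illustrative, not dbt's exact signatures.

```python
from typing import Callable, Dict, Optional, Protocol

class MacroResolverProtocol(Protocol):
    """Minimal lookup surface an adapter needs to execute macros."""
    def find_macro_by_name(
        self, name: str, root_project_name: str, package: Optional[str]
    ) -> Optional[Callable]: ...

class DummyResolver:
    """A trivial resolver satisfying the protocol, for tests or sketches."""
    def __init__(self, macros: Dict[str, Callable]):
        self._macros = macros

    def find_macro_by_name(self, name, root_project_name, package=None):
        return self._macros.get(name)

def execute_macro(name: str, resolver: MacroResolverProtocol):
    # The adapter only ever talks to the protocol, never the Manifest itself.
    macro = resolver.find_macro_by_name(name, "root", None)
    if macro is None:
        raise KeyError(f"macro {name} not found")
    return macro()

result = execute_macro("current_timestamp", DummyResolver({"current_timestamp": lambda: "now()"}))
```

Because `Protocol` uses structural typing, the Manifest keeps satisfying the interface without importing anything from the adapter package.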
* pass context to MacroQueryStringSetter (#9248)
* moving types_pb2.py to common/events
* remove manifest from adapter.execute_macro, replace with MacroResolver + remove lazy loading
* rename to MacroResolverProtocol
* pass MacroResolverProtocol in adapter.calculate_freshness_from_metadata
* changelog entry
* fix adapter.calculate_freshness call
* pass context to MacroQueryStringSetter
* changelog entry
---------
Co-authored-by: Colin <colin.rogers@dbtlabs.com>
* add macro_context_generator on adapter (#9251)
* moving types_pb2.py to common/events
* remove manifest from adapter.execute_macro, replace with MacroResolver + remove lazy loading
* rename to MacroResolverProtocol
* pass MacroResolverProtocol in adapter.calculate_freshness_from_metadata
* changelog entry
* fix adapter.calculate_freshness call
* add macro_context_generator on adapter
* fix adapter test setup
* changelog entry
* Update parser to support conversion metrics (#9173)
* added ConversionTypeParams classes
* updated parser for ConversionTypeParams
* added step to populate input_measure for conversion metrics
* version bump on DSI
* comment back manifest generating line
* updated v12 schemas
* added tests
* added changelog
* Add typing for macro_context_generator, fix query_header_context
---------
Co-authored-by: Colin <colin.rogers@dbtlabs.com>
Co-authored-by: William Deng <33618746+WilliamDee@users.noreply.github.com>
* Pass mp_context to adapter factory (#9275)
* moving types_pb2.py to common/events
* require core to pass mp_context to adapter factory
* add changie
* fix SpawnContext annotation
* Fix include for decoupling (#9286)
* moving types_pb2.py to common/events
* fix include path in MANIFEST.in
* Fix include for decoupling (#9288)
* moving types_pb2.py to common/events
* fix include path in MANIFEST.in
* add index.html to in MANIFEST.in
* move system client to common (#9294)
* moving types_pb2.py to common/events
* move system.py to common
* add changie update README
* remove dbt.utils from semver.py
* remove aliasing connection_exception_retry
* Update materialized views to use RelationConfigs and remove refs to dbt.utils (#9291)
* moving types_pb2.py to common/events
* add AdapterRuntimeConfig protocol and clean up dbt-postgres core imports
* add changie
* remove AdapterRuntimeConfig
* update changelog
* Add config field to RelationConfig (#9300)
* moving types_pb2.py to common/events
* add config field to RelationConfig
* merge main into feature/decouple-adapters-from-core (#9305)
* moving types_pb2.py to common/events
* Update parser to support conversion metrics (#9173)
* added ConversionTypeParams classes
* updated parser for ConversionTypeParams
* added step to populate input_measure for conversion metrics
* version bump on DSI
* comment back manifest generating line
* updated v12 schemas
* added tests
* added changelog
* Remove `--dry-run` flag from `dbt deps` (#9169)
* Rm --dry-run flag for dbt deps
* Add changelog entry
* Update test
* PR feedback
* adding clean_up methods to basic and unique_id tests (#9195)
* init attempt of adding clean_up methods to basic and unique_id tests
* swapping cleanup method drop of test_schema to unique_schema to test breakage on docs_generate test
* moving the clean_up method down into class BaseDocsGenerate
* remove drop relation for unique_schema
* manually define alternate_schema for clean_up as not being seen as part of project_config
* add changelog
* remove unneeded changelog
* uncomment line that generates new manifest and delete manifest our changes created
* make sure the manifest test is deleted and readd older version of manifest.json to appease test
* manually revert file to previous commit
* Revert "manually revert file to previous commit"
This reverts commit a755419e8b.
---------
Co-authored-by: William Deng <33618746+WilliamDee@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Matthew McKnight <91097623+McKnight-42@users.noreply.github.com>
* resolve merge conflict on unparsed.py (#9309)
* moving types_pb2.py to common/events
* Update parser to support conversion metrics (#9173)
* added ConversionTypeParams classes
* updated parser for ConversionTypeParams
* added step to populate input_measure for conversion metrics
* version bump on DSI
* comment back manifest generating line
* updated v12 schemas
* added tests
* added changelog
* Remove `--dry-run` flag from `dbt deps` (#9169)
* Rm --dry-run flag for dbt deps
* Add changelog entry
* Update test
* PR feedback
* adding clean_up methods to basic and unique_id tests (#9195)
* init attempt of adding clean_up methods to basic and unique_id tests
* swapping cleanup method drop of test_schema to unique_schema to test breakage on docs_generate test
* moving the clean_up method down into class BaseDocsGenerate
* remove drop relation for unique_schema
* manually define alternate_schema for clean_up as not being seen as part of project_config
* add changelog
* remove unneeded changelog
* uncomment line that generates new manifest and delete manifest our changes created
* make sure the manifest test is deleted and readd older version of manifest.json to appease test
* manually revert file to previous commit
* Revert "manually revert file to previous commit"
This reverts commit a755419e8b.
---------
Co-authored-by: William Deng <33618746+WilliamDee@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Matthew McKnight <91097623+McKnight-42@users.noreply.github.com>
* Resolve unparsed.py conflict (#9311)
* Update parser to support conversion metrics (#9173)
* added ConversionTypeParams classes
* updated parser for ConversionTypeParams
* added step to populate input_measure for conversion metrics
* version bump on DSI
* comment back manifest generating line
* updated v12 schemas
* added tests
* added changelog
* Remove `--dry-run` flag from `dbt deps` (#9169)
* Rm --dry-run flag for dbt deps
* Add changelog entry
* Update test
* PR feedback
* adding clean_up methods to basic and unique_id tests (#9195)
* init attempt of adding clean_up methods to basic and unique_id tests
* swapping cleanup method drop of test_schema to unique_schema to test breakage on docs_generate test
* moving the clean_up method down into class BaseDocsGenerate
* remove drop relation for unique_schema
* manually define alternate_schema for clean_up as not being seen as part of project_config
* add changelog
* remove unneeded changelog
* uncomment line that generates new manifest and delete manifest our changes created
* make sure the manifest test is deleted and readd older version of manifest.json to appease test
* manually revert file to previous commit
* Revert "manually revert file to previous commit"
This reverts commit a755419e8b.
---------
Co-authored-by: William Deng <33618746+WilliamDee@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Matthew McKnight <91097623+McKnight-42@users.noreply.github.com>
---------
Co-authored-by: colin-rogers-dbt <111200756+colin-rogers-dbt@users.noreply.github.com>
Co-authored-by: Peter Webb <peter.webb@dbtlabs.com>
Co-authored-by: Colin <colin.rogers@dbtlabs.com>
Co-authored-by: Mila Page <67295367+VersusFacit@users.noreply.github.com>
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Quigley Malcolm <QMalcolm@users.noreply.github.com>
Co-authored-by: William Deng <33618746+WilliamDee@users.noreply.github.com>
Co-authored-by: Matthew McKnight <91097623+McKnight-42@users.noreply.github.com>
Co-authored-by: Chenyu Li <chenyu.li@dbtlabs.com>
* init attempt of adding clean_up methods to basic and unique_id tests
* swapping cleanup method drop of test_schema to unique_schema to test breakage on docs_generate test
* moving the clean_up method down into class BaseDocsGenerate
* remove drop relation for unique_schema
* manually define alternate_schema for clean_up as not being seen as part of project_config
* add changelog
* remove unneeded changelog
* uncomment line that generates new manifest and delete manifest our changes created
* make sure the manifest test is deleted and readd older version of manifest.json to appease test
* manually revert file to previous commit
* Revert "manually revert file to previous commit"
This reverts commit a755419e8b.
* Drop `all_refs=True` from jsonschema-ization build process
Passing `all_refs=True` makes it so that everything is a ref, even
the top-level schema. In jsonschema land, this essentially makes the
produced artifact not a full schema, but a fragment to be included
in a schema. Thus when `$id` is passed in, jsonschema tools blow up,
because `$id` is for identifying a schema, which we explicitly weren't
creating. The alternative was to drop the inclusion of `$id`. However, we're
intending to create a schema, and having an `$id` is recommended best
practice. Additionally, since we were intending to create a schema,
not a fragment, it seemed best to create the full schema.
* Explicitly produce jsonschemas using DRAFT_2020_12 dialect
Previously we were implicitly using the `DRAFT_2020_12` dialect through
mashumaro. It felt wise to begin explicitly specifying this. First, it
is closest in available mashumaro provided dialects to what we produced
pre 1.7. Secondly, if mashumaro changes its default for whatever reason
(say a new dialect is added, and mashumaro moves to that), we don't want
to automatically inherit that.
* Bump manifest version to v12
Core 1.7 released with manifest v11, and we don't want to be overriding
that with 1.8. It'd be weird for 1.7 and 1.8 to both have v11 manifests,
but for them to be different, right?
* Begin including schema dialect specification in produced jsonschema
In jsonschema's documentation they state
> It's not always easy to tell which draft a JSON Schema is using.
> You can use the $schema keyword to declare which version of the JSON Schema specification the schema is written to.
> It's generally good practice to include it, though it is not required.
and
> For brevity, the $schema keyword isn't included in most of the examples in this book, but it should always be used in the real world.
Basically, to know how to parse a schema, it's important to include what
schema dialect is being used for the schema specification. The change in
this commit ensures we include that information.
* Create manifest v12 jsonschema specification
* Add change documentation for jsonschema schema production fix
* Bump run-results version to v6
* Generate new v6 run-results jsonschema
* Regenerate catalog v1 and sources v3 with fixed jsonschema production
* Update tests to handle bumped versions of manifest and run-results
In [dbt-labs/dbt-core#7984](https://github.com/dbt-labs/dbt-core/pull/7984)
we began setting a metrics `type_params.input_measures` during metric
processing post-parsing. However, in that PR we didn't clean up the comment
in the parser about setting `input_measures`. This is that after-the-fact
cleanup.
* Add test asserting GraphRunnableTasks attempt to cancel connections on SystemExit
* Add test asserting GraphRunnableTasks attempt to cancel connections on KeyboardInterrupt
* Add test asserting GraphRunnableNode doesn't try to cancel connections on generic Exception
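The behavior these three tests pin down can be sketched as a small control-flow pattern: interruption signals (`SystemExit`, `KeyboardInterrupt`) trigger connection cancellation, while ordinary exceptions do not. This is an illustrative sketch, not the GraphRunnableTask code itself.

```python
class FakeAdapter:
    """Stand-in adapter that records whether connections were cancelled."""
    def __init__(self):
        self.cancelled = False

    def cancel_open_connections(self):
        self.cancelled = True

def run_with_cancellation(adapter, work):
    """Cancel open connections when the run is interrupted, but let
    ordinary exceptions propagate without cancelling."""
    try:
        work()
    except (SystemExit, KeyboardInterrupt):
        adapter.cancel_open_connections()
        raise

def interrupt():
    raise KeyboardInterrupt()

def fail_generic():
    raise ValueError("ordinary failure")

interrupted = FakeAdapter()
try:
    run_with_cancellation(interrupted, interrupt)
except KeyboardInterrupt:
    pass

errored = FakeAdapter()
try:
    run_with_cancellation(errored, fail_generic)
except ValueError:
    pass
```

After running both paths, only the interrupted adapter has `cancelled` set.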
* tarball lockfile fix
* Add changie doc for tarball deps issue
* Add integration test for ensuring tarball package specification works
This test was written _after_ the fix was committed. However, I ran this
test against main without the fix and it failed. After running the test
with the tarball fix, it passed.
* Remove unnecessary `tarball` conditional logic in `PackageConfig.validate`
We had a conditional to skip validation for a package if the package
included the `tarball` key. However, this conditional always returned
false as it was nested inside a conditional that the package had the
default `package` key, which means it's not a tarball package, but a
package package (maybe we need better differentiation here). If we need
additional validation for tarballs down the road, we should do that one
level up. At this time we have no additional validations to add.
* Fix typos in changie doc for tarball deps issue
* Improve tarball package test naming and add related unhappy path test
* Remove unnecessary `setUp` fixture from tarball package tests
We initially included this fixture due to copy and pasting another
test. However, this `setUp` fixture isn't actually necessary for the
tarball dependency tests.
---------
Co-authored-by: Chenyu Li <chenyu.li@dbtlabs.com>
* Add test asserting `SavedQuery` configs can be set from `dbt_project.yml`
* Allow extraneous properties in Export configs
This brings the Export config object more in line with how other config
objects are specified in the unparsed definition. It allows for specifying
extra configs, although they won't get propagated to the final config.
* Add `ExportConfig` options to `SavedQueryConfig` options
This allows for specifying `ExportConfig` options at the `SavedQueryConfig` level.
This also therefore allows these options to be specified in the dbt_project.yml
config. The plan in the follow up commit is to merge the `SavedQueryConfig` options
into all configs of `Exports` belonging to the saved query.
There are a couple caveats to call out:
1. We've used `schema` instead of `schema_name` on the `SavedQueryConfig` despite
it being called `schema_name` on the `ExportConfig`. This is because we need `schema_name`
to be the name of the property on the `ExportConfig`, but `schema` is the user-facing
specification.
2. We didn't add the `ExportConfig` `alias` property to the `SavedQueryConfig`. This
is because `alias` will always be specific to a single export, and thus it doesn't
make sense to allow defining it on the `SavedQueryConfig` to then apply to all
`Exports` belonging to the `SavedQuery`.
* Begin inheriting configs from saved query config, and transitively from project config
Export configs will now inherit from saved query configs, with a preference
for export config specifications. That is to say, an export config will inherit
a config attr from the saved query config only if a value hasn't been supplied
on the export config directly. Additionally, because the saved query config has
a similar relationship with the project config, export configs can inherit
from the project config (again with a preference for export config specifications).
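The precedence chain described above (export over saved query over project) can be sketched with plain dicts; the real implementation works on typed config objects, and the function name here is illustrative.

```python
def build_export_config(export_cfg: dict, saved_query_cfg: dict, project_cfg: dict) -> dict:
    """Merge config layers with precedence: export > saved query > project.
    A value explicitly set on a lower-precedence layer only survives if no
    higher-precedence layer supplies one."""
    combined: dict = {}
    # apply lowest precedence first; later updates win
    for layer in (project_cfg, saved_query_cfg, export_cfg):
        combined.update({k: v for k, v in layer.items() if v is not None})
    return combined

cfg = build_export_config(
    export_cfg={"schema_name": None, "export_as": "table"},
    saved_query_cfg={"schema_name": "marts"},
    project_cfg={"schema_name": "analytics", "export_as": "view"},
)
```

Here the export's unset `schema_name` falls back to the saved query's `marts`, while its explicit `export_as` overrides the project-level `view`.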
* Correct conditional in export config building for map schema to schema_name
I somehow wrote a really weird, but also valid, conditional statement. Previously
the conditional was
```
if combined.get("schema") is not combined.get("schema_name") is None:
```
which Python parses as a chained comparison: `(schema is not schema_name) and
(schema_name is None)`. That happens to behave almost exactly like the intended
check, but it is needlessly confusing to read. It has now been rewritten to
state the intent directly: if `schema` isn't `None` and `schema_name` is `None`,
then set `schema_name` to the value of `schema`.
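The chained comparison is worth pinning down: Python parses `a is not b is None` as `(a is not b) and (b is None)`, not as nested identity checks. A quick sketch comparing it against the explicit form over the interesting cases:

```python
def weird(combined: dict) -> bool:
    # chained form: (schema is not schema_name) and (schema_name is None)
    return combined.get("schema") is not combined.get("schema_name") is None

def clear(combined: dict) -> bool:
    # explicit form stating the actual intent
    return combined.get("schema") is not None and combined.get("schema_name") is None

cases = [
    {},                                       # neither set
    {"schema": "marts"},                      # schema only -> should map
    {"schema": "marts", "schema_name": "x"},  # both set -> leave alone
    {"schema_name": "x"},                     # schema_name only
]
results = [(weird(c), clear(c)) for c in cases]
```

The two forms agree on every case above, which is why the explicit rewrite is a readability fix rather than a behavior change.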
* Update parameter names in `_get_export_config` to be more verbose
* Support non-half-width alphanumeric characters for generic tests
* Unify conditional statements
* add CHANGELOG entries
* added test for Japanese
* Move the fix further upstream
* Remove the changes in core/dbt/task/runnable.py
* Fix accidental removal of `_` substitution character
---------
Co-authored-by: Doug Beatty <doug.beatty@dbtlabs.com>
* Handle unknown `type_code` for model contracts
* Changelog entry
* Fix changelog entry
* Functional test for a `type_code` that is not recognized by psycopg2
* Functional tests for data type mismatches
* add test
* fix test
* first pass with constraint error
* add back column checks for temp tables
* changelog
* Update .changes/unreleased/Fixes-20231024-145504.yaml
* changie doc for DSI 0.3.0 upgrade
* Gracefully handle v10 metric filters
* Fix iteration over metrics in `upgrade_v10_metric_filters`
* Update previous manifest version test fixtures to have more expressive metrics
* Regenerate the test v10 manifest artifact using the more expressive metrics from 904cc1ef
To do this I cherry-picked 904cc1ef onto my local 1.6.latest branch,
had the test regenerate the test v10 manifest artifact, and then
overwrote the test v10 manifest artifact on this branch (cherry-picking it
across the branches didn't work, had to copy-paste :grimace:)
* Regenerate test v11 manifest artifact using the fixture changes in 904cc1ef
* Update `upgrade_v10_metric_filters` to handled disabled metrics
Regenerating the v10 and v11 test manifest artifacts uncovered an
issue wherein we weren't handling disabled metrics that need to
get upgraded. This commit fixes that. Additionally, the
`upgrade_v10_metric_filters` function was getting a bit unwieldy, so I
broke it up by extracting sub-functions.
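The disabled-metrics gap can be sketched as follows: the upgrade has to walk both the active `metrics` mapping and the separate `disabled` mapping. This is a hedged illustration working on plain dicts; the key names (`where_sql_template`, `where_filters`) and manifest shape are assumptions, not the exact dbt-core schema.

```python
def upgrade_v10_metric_filters(manifest: dict) -> None:
    """Upgrade v10-style metric filters in place, covering both enabled
    and disabled metrics (illustrative key names)."""

    def _upgrade(metric: dict) -> None:
        filt = metric.get("filter")
        if isinstance(filt, dict) and "where_sql_template" in filt:
            # v10 stored a single filter dict; newer manifests expect a
            # list of filters under an intersection object
            metric["filter"] = {"where_filters": [filt]}

    for metric in manifest.get("metrics", {}).values():
        _upgrade(metric)
    # disabled nodes live in a separate mapping of unique_id -> variants
    for variants in manifest.get("disabled", {}).values():
        for node in variants:
            if node.get("resource_type") == "metric":
                _upgrade(node)

manifest = {
    "metrics": {"m1": {"filter": {"where_sql_template": "x > 1"}}},
    "disabled": {"m2": [{"resource_type": "metric",
                         "filter": {"where_sql_template": "y = 2"}}]},
}
upgrade_v10_metric_filters(manifest)
```

Walking `disabled` as well is exactly what the regenerated fixtures surfaced as missing.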
* Fix `test_backwards_compatible_versions` test
When we regenerated the v10 test manifest artifact, it started including
the `metricflow_time_spine` model, which it didn't previously. This caused
`test_backwards_compatible_versions` to start failing because it was
no longer identified as having modified state for v10. The test has
been altered accordingly.
* Bump to dbt-semantic-interfaces 0.3.0b1
* Update import path of `WhereFilterParser` from `dbt-semantic-interfaces`
In 0.3.x of `dbt-semantic-interfaces` the location of the WhereFilterParser
moved to be grouped in with a bunch of new adjacent code. As such,
we needed to correct our import path of it.
* Create basic `SavedQuery` node type based on `SavedQuery` protocol from DSI
* Add ability to add SavedQueries to the manifest
* Define unparsed SavedQuery node
* Begin parsing saved_query objects to manifest
* Skip jinja rendering of `SavedQuery.where` property
* Begin propagating `SavedQueries` on the manifest to the semantic manifest
* Add tests for basic saved query parsing
* Add custom pluralization handling of SavedQuery node type
* Add a config subclass to SavedQuery node
* Move the SavedQuery node to nodes.py
Unfortunately things are a bit too intertwined currently for SavedQuery
to be in its own file. We need to add the SavedQuery node to the
GraphMemberNode, but with SavedQuery in its own file, importing it
would have caused a circular dependency. We'll need to come back
separately and split things up as a cleanup portion of this work.
* Add basic plumbing of saved query configs to projects
* Add basic lookup utility for saved queries, SavedQueryLookup
* Handle disabled SavedQuery nodes in parsing and lookups
* Add SavedQuery nodes to grouping process
Our grouping logic seems to be in a weird spot. It seems like we're
moving toward setting the `group` for a node in the node's `config`; however,
all of the logic around grouping is still focused on the top-level `group`
property on a node. To get group stuff plumbed I've thus added `group`
as a top-level property of the `SavedQuery` node, and populated it from
the config's group value.
* Plumb through saved query in a lot more places
I don't like making scattershot commits like this. However, a lot
of this commit was written at ~4am, so here we are. Things were broken, and I
wanted them to be unbroken. I mostly searched for `semantic_models` and added
the equivalent necessary `saved_queries` handling. Some of it is in support of
writing out the manifest, some helps with node selection; it's a
lot of miscellaneous stuff that I don't fully understand.
* Add `depends_on` to `SavedQuery` nodes and populate from `metrics` property
* Add partial parsing support to SavedQuery nodes
* Add `docs` support for SavedQuery descriptions
* Support selector methods for SavedQuery nodes
* Add `refs` property to SavedQuery node
We don't actually append anything to `refs` for SavedQuery nodes currently.
I'm not sure if anything needs to be appended to them. Regardless, we
access the `refs` property throughout the codebase while iterating over
nodes. It seems wise to support this attribute so as not to accidentally
blow something up by it not existing.
* Support `saved_queries` when upgrading from manifests <= v10 (and regenerate v11)
* Add changie doc for saved query node support
* Pin to dbt-semantic-interfaces 0.3.0b1 for saved query work
We're gonna release DSI 0.3.0, and if this PR automatically pulls that
in, things will break. But the things that need fixing should be handled
separately from this PR. After releasing DSI 0.3.0 I'm going to create
a branch off/ontop of this one, and open a stacked PR with the associated
changes.
* Bump supported DSI version to 0.3.x
* Switch metric filters and saved query where to use new WhereFilterIntersection
* Update schema yaml readers to create WhereFilterInterfaces
* Expand metric filters and saved query where property to handle both str and list of strs
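Accepting both a single string and a list of strings for `where` boils down to a small normalization step in the schema reader. A minimal sketch; the function name is illustrative:

```python
from typing import List, Optional, Union

def normalize_where(where: Optional[Union[str, List[str]]]) -> List[str]:
    """Accept `where` as a single SQL filter string or a list of them,
    and always hand the parser a list of filter expressions."""
    if where is None:
        return []
    if isinstance(where, str):
        return [where]
    return list(where)
```

With this in place, `where: "x > 1"` and `where: ["x > 1", "y = 2"]` in the YAML both produce a uniform list for building the `WhereFilterIntersection`.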
* Update tests which were broken by where filter changes
* Regenerate v11 manifest
* Fixup: Update `SavedQueryLookup.perform_lookup` to operate on saved queries
I missed this when I was copy and pasting 🤦
* Add support for getting freshness from DBMS metadata
* Add changelog entry
* Add simple test case
* Change parsing error to warning and add new event type for warning
* Code review simplification of capability dict.
* Revisions to the capability mechanism per review
* Move utility function.
* Reduce try/except scope
* Clean up imports.
* Simplify typing per review
* Unit test fix
* add `store_failures_as` parameter to TestConfig, catch strategy parameter in test materialization
* create test results as views
* updated test expected values for new config option
* break up tests into reusable tests and adapter specific configuration, update test to check for relation type and confirm views update
* move test configuration into base test class
* allow `store_failures_as` to drive whether failures are stored
* update expected test config dicts to include the new default value for store_failures_as
* Add `store_failures_as` config for generic tests
* cover --store-failures on CLI gap
* add generic tests test case for store_failures_as
* update object names for generic test case tests for store_failures_as
* remove unique generic test, it was not testing `store_failures_as`
* pull generic run and assertion into base test class to turn tests into quasi-parameterized tests
* add ephemeral option for store_failures_as, as a way to easily turn off store_failures at the model level
* add compilation error for invalid setting of store_failures_as
---------
Co-authored-by: Doug Beatty <doug.beatty@dbtlabs.com>
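The `store_failures_as` precedence described in the commits above can be sketched as a small resolver: `store_failures_as` drives whether and how failures are stored, `ephemeral` acts as a model-level off switch, and invalid values error out. This is an illustrative sketch, not dbt's actual resolution code; the function name and return convention are assumptions.

```python
from typing import Optional

VALID_STORE_FAILURES_AS = ("table", "view", "ephemeral")

def resolve_store_failures(store_failures: bool, store_failures_as: Optional[str]) -> Optional[str]:
    """Return the relation type to materialize failures as, or None if
    failures should not be stored."""
    if store_failures_as is None:
        # fall back to the boolean flag; stored failures default to a table
        return "table" if store_failures else None
    if store_failures_as not in VALID_STORE_FAILURES_AS:
        raise ValueError(
            f"{store_failures_as!r} is not a valid store_failures_as value; "
            f"expected one of {VALID_STORE_FAILURES_AS}"
        )
    # 'ephemeral' turns storage off even when store_failures is set
    return None if store_failures_as == "ephemeral" else store_failures_as
```

Usage: `resolve_store_failures(False, "view")` stores failures as a view even though the boolean flag is off, matching the "drives whether failures are stored" behavior above.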
* Explanation of Parsing vs. Compilation vs. Runtime
* Update core/dbt/parser/parsing-vs-compilation-vs-runtime.md
* Update core/dbt/parser/parsing-vs-compilation-vs-runtime.md
* Update core/dbt/parser/parsing-vs-compilation-vs-runtime.md
* Update core/dbt/parser/parsing-vs-compilation-vs-runtime.md
* Update core/dbt/parser/parsing-vs-compilation-vs-runtime.md
* Update core/dbt/parser/parsing-vs-compilation-vs-runtime.md
* Apply suggestions from code review
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Fix a couple markdown rendering issues
* Move to the "explain it like im 64" folder
When ELI5 just isn't detailed enough.
* Disambiguate Python references
Disambiguate Python references and delineate SQL models ("Jinja-SQL") from Python models ("dbt-py")
---------
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Add semantic model test to `test_contracts_graph_parsed.py`
The tests in `test_contracts_graph_parsed.py` are meant to ensure
that we can go from objects to dictionaries and back without any
changes. We've had a desire to simplify these tests. Most tests in
this file have three to four fixtures; this test only has one. What
a test of this format ensures is that parsing a SemanticModel from
a dictionary doesn't add/drop any keys from the dictionary and that
when going back to the dictionary no keys are dropped. This style of
test will still break whenever the semantic model (or sub objects)
change. However now when that happens, only one fixture will have to
be updated (whereas previously we had to update 3-4 fixtures).
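The round-trip property these tests enforce can be sketched in plain Python; the dataclass below is a hypothetical stand-in for a parsed node such as `SemanticModel`, and `assert_symmetric` is illustrative, not dbt's actual helper.

```python
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical stand-in for a parsed node such as SemanticModel.
@dataclass
class Node:
    name: str
    tags: List[str] = field(default_factory=list)

def assert_symmetric(instance, dct):
    # Parsing from a dict and dumping back should neither add nor
    # drop any keys: object -> dict -> object is lossless.
    rebuilt = Node(**dct)
    assert rebuilt == instance
    assert asdict(rebuilt) == dct

node = Node(name="orders", tags=["finance"])
assert_symmetric(node, {"name": "orders", "tags": ["finance"]})
```

Because only the single fixture dict is asserted against, a schema change breaks exactly one fixture instead of three or four.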
* Begin using hypothesis package for symmetry testing
Hypothesis is a Python package for property-based testing. The `@given`
decorator parameterizes a test, generating its arguments according to
`strategies`. The main strategy we use is `builds`: it takes a callable,
passes any sub-strategies for named arguments, and tries to infer the
remaining arguments if the callable is typed. I found that even though the
test was run many, many times, some of the `SemanticModel` properties
were never varied. For instance `dimensions`, `entities`, and `measures`
were always empty lists. Because of this I defined sub-strategies for
some attributes of `SemanticModel`s.
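A minimal sketch of the pattern described above, with a hypothetical `SemanticModelStub` standing in for the real node class:

```python
from dataclasses import dataclass, field
from typing import List

from hypothesis import given
from hypothesis import strategies as st

# Hypothetical stand-in for SemanticModel; it only illustrates the
# problem described above, not the real node class.
@dataclass
class SemanticModelStub:
    name: str
    dimensions: List[str] = field(default_factory=list)

# Without an explicit sub-strategy, a defaulted field like `dimensions`
# can end up as an empty list in every generated example.
@given(st.builds(SemanticModelStub,
                 name=st.text(min_size=1),
                 dimensions=st.lists(st.text(), min_size=1)))
def test_dimensions_vary(model):
    assert model.dimensions  # non-empty thanks to the explicit sub-strategy

test_dimensions_vary()
```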
* Update unittest readme to have details on test_contracts_graph_parsed methodology
* Include option to generate static index.html
* Added changie
* Using dbt's system load / write file methods for better cross-platform
support
* Updated docs tests with dbt.client.systems calls for file reading
* Writing out static_index.html as a binary file to prevent line-ending
conversions on Windows (similar behaviour to index.html)
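The line-ending issue above comes down to text mode versus binary mode; a small sketch (file name illustrative):

```python
import os
import tempfile

# Writing bytes ("wb") bypasses newline translation, so the file is
# byte-for-byte identical on Windows and POSIX. Text mode ("w") would
# let Windows expand "\n" to "\r\n" on write.
html = "<html>\n<body>static index</body>\n</html>\n"

path = os.path.join(tempfile.mkdtemp(), "static_index.html")
with open(path, "wb") as f:
    f.write(html.encode("utf-8"))

with open(path, "rb") as f:
    assert f.read() == html.encode("utf-8")  # identical on any platform
```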
* Add performance metrics to the CommandCompleted event.
* Add changelog entry.
* Add flag for controlling the log level of ResourceReport.
* Update changelog entry to reflect changes
* Remove outdated attributes
* Work around missing resource module on windows
* Fix corner case where flags are not set
* Add new get_catalog_relations macro, allowing dbt to specify which relations in a schema the adapter should return data about
* Implement postgres adapter support for relation filtering on catalog queries
* Code review changes adding feature flag for catalog-by-relation-list support
* Use profile specified in --profile with dbt init (#7450)
* Use profile specified in --profile with dbt init
* Update .changes/unreleased/Fixes-20230424-161642.yaml
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
* Refactor run() method into functions, replace exit() calls with exceptions
* Update help text for profile option
---------
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
* add TestLargeEphemeralCompilation (#8376)
* Fix a couple of issues in the postgres implementation of get_catalog_relations
* Add relation count limit at which to fall back to batch retrieval
* Better feature detection mechanism for adapters.
* Code review changes to get_catalog_relations and adapter feature checking
* Add changelog entry
---------
Co-authored-by: ezraerb <ezraerb@alum.mit.edu>
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Co-authored-by: Michelle Ark <MichelleArk@users.noreply.github.com>
* Add `date_spine` macro (and macros it depends on) from dbt-utils to core
The macros added are
- date_spine
- get_intervals_between
- generate_series
- get_powers_of_two
We're adding these to core because they are becoming more prevalent
with the increased usage of the semantic layer. Currently, using the
semantic layer almost requires using dbt-utils, which is undesirable
given the SL is supported directly in core. The primary focus of this
was to just add `date_spine`. However, because `date_spine` depends on
other macros, these other macros were also moved.
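What `date_spine` materializes can be illustrated in plain Python; this is a sketch of the concept (one row per date in a half-open range), not the Jinja macro itself:

```python
from datetime import date, timedelta

def date_spine(start: date, end: date):
    """Yield every date in [start, end), mirroring the single-column
    relation the date_spine macro produces."""
    current = start
    while current < end:
        yield current
        current += timedelta(days=1)

spine = list(date_spine(date(2023, 1, 1), date(2023, 1, 4)))
assert spine == [date(2023, 1, 1), date(2023, 1, 2), date(2023, 1, 3)]
```

The end date being exclusive matches the half-open convention commonly used for time spines, which keeps adjacent ranges non-overlapping.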
* Add adapter tests for `get_powers_of_two` macro
* Add adapter tests for `generate_series` macro
* Add adapter tests for `get_intervals_between` macro
* Add adapter tests for `date_spine` macro
* Improve test fixture for `date_spine` macro to work with multiple adapters
* Cast to types to date in fixture_date_spine when targeting redshift
* Improve test fixture for `get_intervals_between` macro to work with multiple adapters
* changie doc for adding date_spine macro
* Include `join_to_timespine` and `fill_nulls_with` in metric fixture
* Support `join_to_timespine` and `fill_nulls_with` properties on measure inputs to metrics
* Assert new `fill_nulls_with` and `join_to_timespine` properties don't break associated DSI protocol
* Add doc for metric null coalescing improvements
* Fix unit test for unparsed metric objects
The `assert_symmetric` function asserts that dictionaries are mostly
equivalent. I say mostly equivalent because it drops keys that are
`None`. The issue is that `join_to_timespine` gets defaulted
to `False`, so we have to specify it in the `get_ok_dict` so that
they match.
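The None-dropping behavior that forces `join_to_timespine` into the expected dict can be sketched as follows (helper name and keys illustrative):

```python
def drop_nones(dct):
    # assert_symmetric-style helpers strip keys whose value is None
    # before comparing; keys defaulted to False are NOT stripped.
    return {k: v for k, v in dct.items() if v is not None}

serialized = {"name": "revenue",
              "fill_nulls_with": None,        # dropped before comparison
              "join_to_timespine": False}     # survives: False is not None

expected = {"name": "revenue", "join_to_timespine": False}
assert drop_nones(serialized) == expected
```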
* allow multioption to be quoted
* changelog
* fix test
* remove list format
* fix tests
* fix list object
* review arg change
* fix quotes
* Update .changes/unreleased/Features-20230918-150855.yaml
* add types
* convert list to set in test
* make mypy happy
* more mypy happiness
* more mypy happiness
* last mypy change
* add node to test
* Extend use of type annotations in the events module.
* Add return type of None to more __init__ definitions.
* Still more type annotations adding -> None to __init__
* Tweak per review
* move config changes into alter.sql in alignment with other adapters
* move shared relations macros to relations root
* move single models files to models root
* add table to replace
* move create file into relation directory
* implement replace for postgres
* move column specific macros into column directory
* add unit test for can_be_replaced
* update renameable_relations and replaceable_relations to frozensets to set defaults
* fixed tests for new defaults
* Add docstrings to `contracts/graph/metrics.py` functions to document what they do
Used [dbt-labs/dbt-core#5607](https://github.com/dbt-labs/dbt-core/pull/5607)
for context on what the functions should do.
* Add typing to `reverse_dag_parsing` and update function to work on 1.6+ metrics
* Add typing to `parent_metrics` and `parent_metrics_names`
* Add typing to `base_metric_dependency` and `derived_metric_dependency` and update functions to work on 1.6+ metrics
* Simplify implementations of `base_metric_dependency` and `derived_metric_dependency`
* Add typing to `ResolvedMetricReference` initialization
* Add typing to `derived_metric_dependency_graph`
* Simplify conditional controls in `ResolvedMetricReference` functions
The functions in `ResolvedMetricReference` use `manifest.metric.get(...)`
which will only return either a `Metric` or `None`, never a different
node type. Thus we don't need to check that the returned metric is
a metric.
* Don't recurse over `depends_on` for non-derived metrics in `reverse_dag_parsing`
The function `reverse_dag_parsing` only cares about derived metrics,
that is metrics that depend on other metrics. Metrics only depend on
other metrics if they are one of the `DERIVED_METRICS` types. Thus
doing a recursive call to `reverse_dag_parsing` for non `DERIVED_METRICS`
types is unnecessary. Previously we were iterating over a metric's
`depends_on` property regardless of whether the metric was a `DERIVED_METRICS`
type. Now we only do this work if the metric is of a `DERIVED_METRICS`
type.
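The pruning described above can be sketched with a toy metric graph; the function and dict shapes are illustrative, not dbt's actual implementation:

```python
# Only derived metrics can depend on other metrics, so recursion is
# skipped for every other metric type.
DERIVED_METRICS = {"derived", "ratio"}

def reverse_dag_parse(metrics, metric_name, acc):
    metric = metrics[metric_name]
    if metric["type"] in DERIVED_METRICS:       # skip the work otherwise
        for parent in metric["depends_on"]:
            acc.append(parent)
            reverse_dag_parse(metrics, parent, acc)
    return acc

metrics = {
    "m3": {"type": "derived", "depends_on": ["m1", "m2"]},
    "m2": {"type": "ratio", "depends_on": ["m1"]},
    "m1": {"type": "simple", "depends_on": []},  # leaf: never recursed into
}
assert reverse_dag_parse(metrics, "m3", []) == ["m1", "m2", "m1"]
```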
* Simplify `parent_metrics_names` by having it call `parent_metrics`
* Unskip `TestMetricHelperFunctions.test_derived_metric` and update fixture setup
* Add changie doc for metric helper function updates
* Get manifest in `test_derived_metric` from the parse dbt_run invocation
* Remove `Relation` as an initialization attribute for `ResolvedMetricReference`
* Add return typing to class `__` functions of `ResolvedMetricReference`
* Move from `manifest.metrics.get` to `manifest.expect` in metric helpers
Previously with `manifest.metrics.get` we were just skipping when `None`
was returned. Getting `None` back was expected in that `parent_unique_id`s
that didn't belong to metrics should return `None` when calling
`manifest.metrics.get`, and these are fine to skip. However, there's
an edge case where a `parent_unique_id` is supposed to be a metric but
isn't found, thus returning `None`. How likely this edge case is to
get hit, I'm not sure, but it's possible. Using `manifest.metrics.get`
we can't actually tell whether we're in the edge case or not. By moving
to `manifest.expect` we get the error handling built in, and the only
trade-off is that we need to change our conditional to skip returned
nodes that aren't metrics.
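The difference between the two lookups can be sketched like this; the `expect` helper and error type here are illustrative, not dbt's exact API:

```python
# `.get` silently yields None for a missing id; an `expect`-style lookup
# raises instead, surfacing the missing-metric edge case.
class MissingNodeError(Exception):
    pass

def expect(nodes, unique_id):
    node = nodes.get(unique_id)
    if node is None:
        raise MissingNodeError(f"{unique_id} not found in manifest")
    return node

metrics = {"metric.proj.revenue": {"name": "revenue"}}

assert expect(metrics, "metric.proj.revenue")["name"] == "revenue"

caught = False
try:
    expect(metrics, "metric.proj.gone")   # .get would have hidden this
except MissingNodeError:
    caught = True
assert caught
```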
* update `Number` class to handle integer values (#8306)
* add show test for json data
* oh changie my changie
* revert unnecessary change to fixture
* keep decimal class for precision methods, but return __int__ value
* jerco updates
* update integer type
* update other tests
* Update .changes/unreleased/Fixes-20230803-093502.yaml
---------
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* account for integer vs number on table merges
* add tests for combining number with integer.
* add unit test when nulls are added
* can't use None as an Integer
* fix null tests
---------
Co-authored-by: dave-connors-3 <73915542+dave-connors-3@users.noreply.github.com>
Co-authored-by: Dave Connors <dave.connors@fishtownanalytics.com>
* first draft of adding in table - materialized view swap
* table/view/materialized view can all replace each other
* update renameable relations to a config
* migrate relations macros from `macros/adapters/relations` to `macros/relations` so that generics are close to the relation specific macros that they reference; also aligns with adapter macro files structure, to look more familiar
* move drop macro to drop macro file
* align the behavior of get_drop_sql and drop_relation, adopt existing default from drop_relation
* add explicit ddl for drop statements instead of inheriting the default from dbt-core
* update replace macro dependent macros to align with naming standards
* update type for mashumaro, update related test
* Improve typing of `ContextMember` functions
* Improve typing of `Var` functions
* Improve typing of `ContextMeta.__new__`
* Improve typing `BaseContext` and functions
In addition to adding parameter and return typing to `BaseContext`
functions, we also declared `_context_members_` and `_context_attrs_`
as properties of `BaseContext`. This was necessary because they're
being accessed in the class's functions. However, because they were
being indirectly instantiated by the metaclass `ContextMeta`, the
properties weren't actually known to exist. By declaring the properties
on `BaseContext`, we let mypy know they exist.
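The pattern looks roughly like this; the class bodies are a simplified sketch of the idea, not dbt's actual context machinery:

```python
from typing import Any, Callable, ClassVar, Dict

# Attributes injected by a metaclass are invisible to mypy unless they
# are also declared on the class body.
class ContextMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        cls._context_members_ = {}   # created at class-build time
        cls._context_attrs_ = {}
        return cls

class BaseContext(metaclass=ContextMeta):
    # Pure declarations (no assignment): these tell mypy the attributes
    # exist; the metaclass supplies the actual values at runtime.
    _context_members_: ClassVar[Dict[str, Callable[..., Any]]]
    _context_attrs_: ClassVar[Dict[str, Any]]

    @classmethod
    def register(cls, name: str, fn: Callable[..., Any]) -> None:
        cls._context_members_[name] = fn

BaseContext.register("var", lambda: None)
assert "var" in BaseContext._context_members_
```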
* Remove bare invocations of `@contextmember` and `@contextproperty`, and add typing to them
Previously `contextmember` and `contextproperty` were 2-in-1 decorators.
This meant they could be invoked either as `@contextmember` or
`@contextmember('some_string')`. This was fine until we wanted to add return
typing to the functions. In the instance where the bare decorator was used
(i.e. no `(...)` were present) an object was expected to be returned. However,
in the instance where parameters were passed on the invocation, a callable
was expected to be returned. Putting a union of both in the return type
made the invocations complain about each other's return type. To get around this
we've dropped the bare invocation as acceptable. The parentheses are now always
required, but passing a string in them is optional.
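The resulting decorator-factory shape can be sketched as follows (a simplified stand-in for the real `contextmember`, with an illustrative `_context_name_` attribute):

```python
from typing import Any, Callable, Optional

# Parentheses are always required; the name argument is optional.
def contextmember(
    name: Optional[str] = None,
) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
    def inner(fn: Callable[..., Any]) -> Callable[..., Any]:
        # Record under the explicit name, or fall back to the function name.
        fn._context_name_ = name or fn.__name__  # type: ignore[attr-defined]
        return fn
    return inner

@contextmember()            # bare @contextmember is no longer accepted
def log(msg: str) -> None: ...

@contextmember("env_var")   # explicit name still supported
def get_env_var(key: str) -> str: ...

assert log._context_name_ == "log"
assert get_env_var._context_name_ == "env_var"
```

With only the factory form allowed, the return type is always `Callable[[F], F]`-shaped, so mypy no longer has to reconcile two incompatible return types.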
* WIP
* WIP
* get group and enabled added
* changelog
* cleanup
* getting measure lookup working
* missed file
* get project level working
* fix last test
* add groups to config tests
* more group tests
* fix path
* clean up manifest.py
* update error message
* fix test assert
* remove extra check
* resolve conflicts in manifest
* update manifest
* resolve conflict
* add alias
* Add compiled node properties to run_results.json
* Include compiled-node attributes in run_results.json
* Fix typo
* Bump schema version of run_results
* Fix test assertions
* Update expected run_results to reflect new attributes
* Code review changes
* Fix mypy warnings for ManifestLoader.load() (#8443)
* revert python version for docker images (#8445)
* revert python version for docker images
* add comment to not update python version, update changelog
* Bumping version to 1.7.0b1 and generate changelog
* [CT-3013] Fix parsing of `window_groupings` (#8454)
* Update semantic model parsing tests to check measure non_additive_dimension spec
* Make `window_groupings` default to empty list if not specified on `non_additive_dimension`
* Add changie doc for `window_groupings` parsing fix
* Improve docker image README (#8212)
* Improve docker image README
- Fix unnecessary/missing newline escapes
- Remove double whitespace between parameters
- 2-space indent for extra lines in image build commands
* Add changelog entry for #8212
* ADAP-814: Refactor prep for MV updates (#8459)
* apply reformatting changes only for #8449
* add logging back to get_create_materialized_view_as_sql
* changie
* swap trigger (#8463)
* update the implementation template (#8466)
* update the implementation template
* add colon
* Split tests into classes (#8474)
* add flaky decorator
* split up tests into classes
* revert update agate for int (#8478)
* updated typing and methods to meet mypy standards (#8485)
* Convert error to conditional warning for unversioned contracted model, fix msg format (#8451)
* first pass, tests need updates
* update proto defn
* fixing tests
* more test fixes
* finish fixing test file
* reformat the message
* formatting messages
* changelog
* add event to unit test
* feedback on message structure
* WIP
* fix up event to take in all fields
* fix test
* Fix ambiguous reference error for duplicate model names across packages with tests (#8488)
* Safely remove external nodes from manifest (#8495)
* [CT-2840] Improved semantic layer protocol satisfaction tests (#8456)
* Test `SemanticModel` satisfies protocol when none of its `Optionals` are specified
* Add tests ensuring SourceFileMetadata and FileSlice satisfy DSI protocols
* Add test asserting Defaults obj satisfies protocol
* Add test asserting SemanticModel with optionals specified satisfies protocol
* Split dimension protocol satisfaction tests into with and without optionals
* Simplify DSI Protocol import strategy in protocol satisfaction tests
* Add test asserting DimensionValidityParams satisfies protocol
* Add test asserting DimensionTypeParams satisfies protocol
* Split entity protocol satisfaction tests into with and without optionals
* Split measure protocol satisfaction tests and add measure aggregation params satisfaction test
* Split metric protocol satisfaction test into optionals specified and unspecified
Additionally, create where_filter pytest fixture
* Improve protocol satisfaction tests for MetricTypeParams and sub protocols
Specifically we added/improved protocol satisfaction tests for
- MetricTypeParams
- MetricInput
- MetricInputMeasure
- MetricTimeWindow
* Convert to using mashumaro jsonschema with acceptable performance (#8437)
* Regenerate run_results schema after merging in changes from main.
---------
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
Co-authored-by: Matthew McKnight <91097623+McKnight-42@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: Quigley Malcolm <QMalcolm@users.noreply.github.com>
Co-authored-by: dave-connors-3 <73915542+dave-connors-3@users.noreply.github.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
Co-authored-by: Jaime Martínez Rincón <jaime@jamezrin.name>
Co-authored-by: Mike Alfare <13974384+mikealfare@users.noreply.github.com>
Co-authored-by: Michelle Ark <MichelleArk@users.noreply.github.com>
* first pass
* WIP
* update issue body
* fix triggering label
* fix docs
* add better run name
* reduce complexity
* update description
* fix PR title
* point at workflow on main
* fix wording
* add label
* Update semantic model parsing test to check `create_metric = true` functionality
* Add `create_metric` boolean property to unparsed measure objects
* Begin creating metrics from measures with `create_metric = True`
* Add test ensuring partial parsing handles metrics generated from measures
* Ensure partial parsing appropriately deletes metrics generated from semantic models
* Add changie doc for addition
* Separate generated metrics from parsed metrics for partial parsing
I was doing a demo earlier today of this branch (minus this commit)
and noticed something odd. When I changed a semantic model, metrics
that technically should have been unaffected would get dropped. Basically
if I made a change to a semantic model which had metrics in the same
file, and then ran parse, those metrics defined in the same file
would get dropped. Then with no other changes, if I ran parse again
they would come back. What was happening was that parsed metrics
and generated metrics were getting tracked the same way on the file
objects for partial parsing. In 0787a7c7b6
we began dropping all metrics tracked in a file object when changes
to semantic models were detected. Since parsed metrics and generated
metrics were being tracked together on the file object, the parsed
metrics were getting dropped as well. In this commit we begin separating
out the tracking of generated metrics and parsed metrics on the
file object, and now only drop the generated metrics when semantic
models have a detected change.
* Assert in test that semantic model partial parsing doesn't clobber regular metrics
* Replaced the FirstRunResultError and AfterFirstRunResultError events with RunResultError.
* Attempts at reasonable unit tests.
* Restore event manager after unit test.
* Support configurable delimiter for seed files, default to comma (#3990)
* Update Features-20230317-144957.yaml
* Moved "delimiter" to seed config instead of node config
* Update core/dbt/clients/agate_helper.py
Co-authored-by: Cor <jczuurmond@protonmail.com>
* Update test_contracts_graph_parsed.py
* fixed integration tests
* Added functional tests for seed files with a unique delimiter
* Added docstrings
* Added a test for an empty string configured delimiter value
* whitespace
* ran black
* updated changie entry
* Update Features-20230317-144957.yaml
---------
Co-authored-by: Cor <jczuurmond@protonmail.com>
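The seed `delimiter` config maps to the delimiter used when the CSV is parsed (agate in dbt; the stdlib `csv` module stands in here as a sketch):

```python
import csv
import io

# A pipe-delimited seed file, as enabled by the configurable delimiter.
seed_file = "id|name\n1|alice\n2|bob\n"

rows = list(csv.DictReader(io.StringIO(seed_file), delimiter="|"))
assert rows == [{"id": "1", "name": "alice"},
                {"id": "2", "name": "bob"}]
```

With the default comma delimiter, the same file would parse as a single column, which is why the delimiter has to travel with the seed config.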
* add param to control maxBytes for single dbt.log file
* nits
* nits
* Update core/dbt/cli/params.py
Co-authored-by: Peter Webb <peter.webb@dbtlabs.com>
---------
Co-authored-by: Peter Webb <peter.webb@dbtlabs.com>
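A "max bytes per dbt.log file" parameter maps naturally onto the stdlib rotating handler; this sketch uses illustrative sizes and a throwaway logger name:

```python
import logging
import logging.handlers
import os
import tempfile

# RotatingFileHandler rolls the file over once writing a record would
# push it past maxBytes, keeping up to backupCount old files around.
log_path = os.path.join(tempfile.mkdtemp(), "dbt.log")
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1024, backupCount=3
)
logger = logging.getLogger("dbt-log-sketch")
logger.propagate = False  # keep this sketch's output out of the root logger
logger.addHandler(handler)

logger.warning("x" * 2048)  # bigger than maxBytes: forces a rollover
logger.warning("this record starts a fresh dbt.log")
handler.close()

assert os.path.exists(log_path)
```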
* Add test ensuring `warn_error_options` is dictified in `invocation_args_dict` of contexts
* Add dictification specific to `warn_error_options` in `args_to_dict`
* Changie doc for serialization changes of warn_error_options
* Add test asserting that a macro with the word `materialization` doesn't cause issues
* Let macro names include the word `materialization`
Previously we were checking whether a macro included a materialization
based on whether the macro name included the word `materialization`. However,
a macro whose name includes the word `materialization` isn't guaranteed to
actually contain a materialization, and a macro without
`materialization` in its name isn't guaranteed not to contain one.
This change detects macros with materializations based on the
detected block type of the macro.
* Add changie doc materialization in macro detection
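A simplified sketch of the block-type check (dbt uses its own Jinja block extractor; a regex stands in here purely for illustration):

```python
import re

# Classify by the actual block type, not by the macro's name.
MATERIALIZATION_RE = re.compile(r"{%-?\s*materialization\s+(\w+)")

def has_materialization(macro_sql: str) -> bool:
    return MATERIALIZATION_RE.search(macro_sql) is not None

# Name contains "materialization", but the block type is a plain macro:
misleading = "{% macro my_materialization_helper() %}select 1{% endmacro %}"
# Name says nothing, but the block type is a real materialization:
real = "{% materialization custom_table, default %}...{% endmaterialization %}"

assert not has_materialization(misleading)
assert has_materialization(real)
```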
* Add test for checking that `_connection_exception_retry` handles `EOFError`s
* Update `_connection_exception_retry` to handle `EOFError` exceptions
* Add changie docs for `_connection_exception_retry` handling `EOFError` exceptions
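The retry behavior can be sketched like this; the helper name mirrors the one in the commit messages, but the body is an illustrative simplification, not dbt's implementation:

```python
import time

# A retry helper that now also treats EOFError (e.g. a truncated
# download) as retryable, alongside connection errors.
def connection_exception_retry(fn, max_attempts, attempt=0):
    try:
        return fn()
    except (ConnectionError, EOFError):
        if attempt + 1 >= max_attempts:
            raise
        time.sleep(0)  # real code would back off; zero keeps the sketch fast
        return connection_exception_retry(fn, max_attempts, attempt + 1)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise EOFError("truncated download")
    return "ok"

assert connection_exception_retry(flaky, max_attempts=5) == "ok"
assert calls["n"] == 3  # two EOFError retries, then success
```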
* applied new integration tests to existing framework
* applied new integration tests to existing framework
* generalized tests for reusability in adapters; fixed drop index issue
* generalized tests for reusability in adapters; fixed drop index issue
* removed unnecessary overrides in tests
* adjusted import to allow for usage in adapters
* adjusted import to allow for usage in adapters
* removed fixture artifact
* generalized the materialized view fixture which will need to be specific to the adapter
* unskipped tests in the test runner package
* corrected test condition
* corrected test condition
* added missing initial build for the relation type swap tests
* add env vars for datadog ci visibility
* modify pytest command for tracing
* fix posargs
* move env vars to job that needs them
* add test repeater to DD
* swap flags
* Bump version support for `dbt-semantic-interfaces` to `~=0.1.0rc1`
* Add tests for asserting WhereFilter satisfies protocol
* Add `call_parameter_sets` to `WhereFilter` class to satisfy protocol
* Changie doc for moving to DSI 0.1.0rc1
* [CT-2822] Fix `NonAdditiveDimension` Implementation (#8089)
* Add test to ensure `NonAdditiveDimension` implementation satisfies protocol
* Fix typo in `NonAdditiveDimension`: `window_grouples` -> `window_groupings`
* Add changie doc for typo fix in NonAdditiveDimension
* Add metrics from metric type params to a metric's depends_on
* Add Lookup utility for finding `SemanticModel`s by measure names
* Add the `SemanticModel` of a `Metric`'s measure property to the `Metric`'s `depends_on`
* Add `SemanticModelConfig` to `SemanticModel`
Some tests were failing due to `Metric`s referencing `SemanticModel`s.
Specifically there was a check to see if a referenced node was disabled,
and because `SemanticModel`s didn't have a `config` holding the `enabled`
boolean attr, core would blow up.
* Checkpoint on test fixing
* Correct metricflow_time_spine_sql in test fixtures
* Add check for `SemanticModel` nodes in `Linker.link_node`
Now that `Metrics` depend on `SemanticModels` and `SemanticModels`
have their own dependencies on `Models` they need to be checked for
in the `Linker.link_node`. I forget the details but things blow up
without it. Basically it adds the SemanticModels to the dependency
graph.
* Fix artifacts/test_previous_version_state.py tests
* fix access/test_access.py tests
* Fix function metric tests
* Fix functional partial_parsing tests
* Add time dimension to semantic model in exposures fixture
* Bump DSI version to a minimum of 0.1.0dev10
DSI 0.1.0dev10 fixes an incoherence issue in DSI around `agg_time_dimension`
setting. This incoherence was that `measure.agg_time_dimension` was being
required, even though it was no longer supposed to be a required attribute
(it's specifically typed as optional in the protocol). This was causing
a handful of tests to fail because the `semantic_model.defaults.agg_time_dimension`
value wasn't being respected. Pulling in the fix from DSI 0.1.0dev10 fixes
the issue.
Interestingly after bumping the DSI version, the integration tests were
still failing. If I ran the tests individually they passed though. To get
`make integration` to run properly I ended up having to clear my `.tox`
cache, as it seems some outdated state was being persisted.
* Add test specifically for checking the `depends_on` of `Metric` nodes
* Re-enable test asserting calling metric nodes in models
* Migrate `checked_agg_time_dimension` to `checked_agg_time_dimension_for_measure`
DSI 0.1.0dev10 moved `checked_agg_time_dimension` from the `Measure`
protocol to the `SemanticModel` protocol as `checked_agg_time_dimension_for_measure`.
This finishes a change whereby, for a given measure, either `Measure.agg_time_dimension`
or the measure's parent `SemanticModel.defaults.agg_time_dimension` needs to be
set, instead of always requiring the measure's `Measure.agg_time_dimension`.
* Add changie doc for populating metric
---------
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
The original implementation of validate_sql was called dry_run,
but in the rename the test classes and much of their associated
documentation still retained the old naming.
This is mainly cosmetic, but since these test classes will be
imported into adapter repositories we should fix this now before
the wrong name proliferates.
* Add dry_run method to base adapter with implementation for SQLAdapters
Resolves #7839
In the CLI integration, MetricFlow will issue dry run queries as
part of its warehouse-level validation of the semantic manifest,
including all semantic model and metric definitions.
In most cases, issuing an `explain` query is adequate, however,
BigQuery does not support the `explain` keyword and so we cannot
simply pre-pend `explain` to our input queries and expect the
correct behavior across all contexts.
This commit adds a dry_run() method to the BaseAdapter which mirrors
the execute() method in that it simply delegates to the ConnectionManager.
It also adds a working implementation to the SQLConnectionManager and
includes a few test cases for adapter maintainers to try out on their own.
The current implementation should work out of the box with most
of our adapters. BigQuery will require us to implement the dry_run
method on the BigQueryConnectionManager, and community-maintained
adapters can opt in by enabling the test and ensuring their own
implementations work as expected.
Note - we decided to make these concrete methods that throw runtime
exceptions for direct descendants of BaseAdapter in order to avoid
forcing community adapter maintainers to implement a method that does
not currently have any use cases in dbt proper.
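The delegation pattern described above (concrete methods that raise for direct `BaseAdapter` descendants, with a working SQL implementation) can be sketched as follows; class names echo the commit messages, but the bodies are illustrative:

```python
class DbtRuntimeError(Exception):
    pass

class BaseAdapter:
    # Concrete rather than abstract, so community adapters aren't forced
    # to implement a method they may not need yet.
    def validate_sql(self, sql: str) -> str:
        raise DbtRuntimeError("validate_sql is not implemented for this adapter")

class SQLAdapter(BaseAdapter):
    # Most engines accept an `explain` prefix; BigQuery is the notable
    # exception and needs its own strategy.
    DRY_RUN_PREFIX = "explain"

    def validate_sql(self, sql: str) -> str:
        # A real adapter would execute this via its ConnectionManager;
        # the sketch just builds the dry-run query.
        return f"{self.DRY_RUN_PREFIX} {sql}"

assert SQLAdapter().validate_sql("select 1") == "explain select 1"

raised = False
try:
    BaseAdapter().validate_sql("select 1")
except DbtRuntimeError:
    raised = True
assert raised
```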
* Switch dry_run implementation to be macro-based
The common pattern for engine-specific SQL statement construction
in dbt is to provide a default macro which can then be overridden
on a per-adapter basis by either adapter maintainers or end users.
The advantage of this is users can take advantage of alternative
SQL syntax for performance or other reasons, or even to enable
local usage if an engine relies on a non-standard expression and
the adapter maintainer has not updated the package.
Although there are some risks here, they are minimal, and the benefit
of added expressiveness and consistency with other similar constructs
is clear, so we adopt this approach here.
* Improve error message for InvalidConnectionError in test_invalid_dry_run.
* Rename dry_run to validate_sql
The validate_sql name has less chance of colliding with dbt's
command nomenclature, both now and in some future where we have
dry-run operations.
* Rename macro and test files to validate_sql
* Fix changelog entry
* add permissions
* replace db setup
* try with bash instead of just pytest flags
* fix test command
* remove spaces
* remove force-flaky flag
* add starting values
* add mac and windows postgres install
* define use bash
* fix typo
* update output report
* tweak last if condition
* clarify failures/successful runs
* print running success and failure tally
* just output pytest instead of capturing it
* set shell to not exit immediately on exit code
* add formatting around results for easier scanning
* more output formatting
* add matrix to unlock parallel runners
* increase to ten batches
* update debug
* add comment
* clean up comments
* Remove `create_metric` as a public facing `SemanticModel.Measure` property
We want to add `create_metric`. The `create_metric` property will be
incredibly useful. However, at this time it is not hooked up, and we don't
have time to hook it up before the code freeze for 1.6.0rc of core. As
it doesn't do anything, we shouldn't allow people to specify it, because
it won't do what one would expect. We plan on making the implementation
of `create_metric` a priority for 1.7 of core.
* Changie doc for the removal of create_metric property
* add negative test case
* changie
* missed a comma
* Update changelog entry
* Add a negative number (rather than subtract a positive number)
---------
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Co-authored-by: Doug Beatty <doug.beatty@dbtlabs.com>
* Fix accidental propagation of log messages to root logger.
* Add changelog entry
* Fixed an issue which blocked debug logging to stdout with --log-level debug, unless --debug was also used.
* Use dbt-semantic-interface validations on semantic models and metrics defined in Core.
* Remove empty test, since semantic models don't generate any validation warnings.
* Add changelog entry.
* Temporarily remove requirement that there must be semantic models defined in order to define metrics
* add interface changes section to the PR template
* update entire template
* split up choices for tests and interfaces
* minor formatting change
* add line breaks
* actually put in line breaks
* revert split choices in checklist
* add line breaks to top
* move docs link
* typo
* ct-2551: adds old and unmodified state selection methods
* ct-2551: update check_unmodified_content to simplify
* add unit and integration tests for unmodified and old
* add changelog entry
* ct-2551: reformatting of contingent adapter assignment list
* UnifiedToUTC
* Check proximity of dbt_valid_to and deleted time
* update the message to print if the assertion fails
* add CHANGELOG entries
* test only if naive
* Added comments about naive and aware
* Generalize comparison of datetimes that are "close enough"
---------
Co-authored-by: Doug Beatty <doug.beatty@dbtlabs.com>
* Fix tests fixtures which were using measures for metric numerator/denominators
In our previous upgrade to DSI dev7, numerators and denominators for
metrics switched from being `MetricInputMeasure`s to `MetricInput`s.
That is, metric numerators and denominators should reference other metrics,
not semantic model measures. However, at that time we weren't actually
doing anything with numerators and denominators in core, so no issue
got raised. The changes we are about to make, though, are going to surface
these issues.
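The distinction above can be sketched with two simplified dataclasses. These are illustrative stand-ins, not the full DSI protocol definitions; the field lists are assumptions beyond the names in the commit message.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetricInputMeasure:
    """References a measure on a semantic model."""
    name: str
    filter: Optional[str] = None

@dataclass
class MetricInput:
    """References another metric -- what ratio numerators/denominators now must be."""
    name: str
    filter: Optional[str] = None
    alias: Optional[str] = None

# A ratio metric's numerator/denominator now point at metrics, not measures:
numerator = MetricInput(name="average_tenure")
denominator = MetricInput(name="number_of_people")
```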
* Add tests for ensuring a metric's `input_measures` gets properly populated
* Begin populating `metric.type_params.input_measures`
This isn't my favorite bit of code, mostly because there are checks for
existence which really should be handled before this point; however, a
good point for that to happen doesn't currently exist. For instance,
in an ideal world, by the time we get to `_process_metric_node`, if a
metric is of type `RATIO` then the numerator and denominator should be
guaranteed to exist.
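The population step can be sketched as a recursive walk over metric dependencies: a simple metric contributes its own measure, while ratio/derived metrics contribute the measures of the metrics they reference. This is a toy sketch on plain dicts; the real `_process_metric_node` operates on parsed `Metric` nodes, and the dict shape here is an assumption.

```python
def collect_input_measures(metric, metrics_by_name, seen=None):
    """Recursively gather the measures a metric ultimately depends on."""
    if seen is None:
        seen = set()
    if metric["name"] in seen:  # guard against metric cycles
        return []
    seen.add(metric["name"])

    measures = []
    if metric["type"] == "simple":
        measures.append(metric["measure"])
    else:
        # ratio/derived metrics name other metrics as inputs
        for ref in metric.get("input_metrics", []):
            measures.extend(
                collect_input_measures(metrics_by_name[ref], metrics_by_name, seen)
            )
    return measures

metrics = {
    "number_of_people": {"name": "number_of_people", "type": "simple", "measure": "people"},
    "average_tenure": {"name": "average_tenure", "type": "simple", "measure": "tenure"},
    "tenure_per_person": {
        "name": "tenure_per_person",
        "type": "ratio",
        "input_metrics": ["average_tenure", "number_of_people"],
    },
}
input_measures = collect_input_measures(metrics["tenure_per_person"], metrics)
```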
* Update test checking that disabled metrics aren't added to the manifest metrics
We updated from the metric `number_of_people` to `average_tenure_minus_people` for
this test because disabling `number_of_people` raised other exceptions at parse
time due to a metric referencing a disabled metric. The metric `average_tenure_minus_people`
is a leaf metric, and so for this test, it is a better candidate.
* Update `test_disabled_metric_ref_model` to have more disabled metrics
There are metrics which depend on the metric `number_of_people`. If
`number_of_people` is disabled without the metrics that depend on it
being disabled, then a different (expected) exception would be raised
than the one this test is testing for. Thus we've disabled those
downstream metrics.
* Add test which checks that metrics depending on disabled metrics raise an exception
* Add changie doc for populating metric input measures
* Add merge incremental strategy
* Expect merge to be a valid strategy for Postgres
---------
Co-authored-by: Anders Swanson <anders.swanson@dbtlabs.com>
Co-authored-by: Doug Beatty <doug.beatty@dbtlabs.com>
* CT-2711: Add remove_tests() call to delete_schema_source() so that call sites are more uniform with other node deletion call sites. This will enable further code factorization.
* CT-2711: Factor repeated code section (mostly) out of PartialParsing.handle_schema_file_changes()
* CT-2711: Factor a repeated code section out of schedule_nodes_for_parsing()
* Update semantic model parsing test to check measure agg params
* Make `use_discrete_percentile` and `use_approximate_percentile` non optional and default false
This was a mistake in our implementation of the MeasureAggregationParams.
We had defined them as optional and defaulting to `None`. However, as the
protocol states, they cannot be `None`; they must be a boolean value.
Thus we now ensure that they are.
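A minimal sketch of the fix: the two percentile flags become plain `bool` fields defaulting to `False` rather than `Optional[bool]` defaulting to `None`. The surrounding class shape and the `percentile` field are assumptions; only the two flag names come from the commit message.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MeasureAggregationParameters:
    percentile: Optional[float] = None
    # Previously Optional[bool] = None; the protocol requires real booleans.
    use_discrete_percentile: bool = False
    use_approximate_percentile: bool = False

params = MeasureAggregationParameters(percentile=0.95)
```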
* Add changie doc for measure percentile fixes
* Update semantic model parsing test to check different measure expr types
* Allow semantic model measure exprs to be defined with ints and bools in yaml
Sometimes the expr for a measure can be defined in yaml with a bool or an int.
However, we were only allowing for strings. There was a workaround for this,
which was wrapping your bool or int in double quotes in the yaml, but
this can be fairly annoying for the end user.
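The coercion can be sketched as a small normalizer: YAML parses `expr: 1` as an int and `expr: true` as a bool, so non-strings are stringified before validation against the string-typed measure protocol. The helper name is hypothetical, and how dbt actually stringifies booleans is an assumption here.

```python
def normalize_measure_expr(expr):
    """Coerce int/bool measure exprs from yaml into strings."""
    if isinstance(expr, bool):
        # Check bool first: bool is a subclass of int in Python.
        # Lowercasing to match SQL literals is an assumption.
        return str(expr).lower()
    if isinstance(expr, int):
        return str(expr)
    return expr  # already a string
```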
* Changie doc for fixing measure expr yaml specification
* CT-2651: Add Semantic Models to the manifest and various pieces of graph linking code
* CT-2651: Finish integrating semantic models into the partial parsing system
* CT-2651: More semantic model details for partial parsing
* CT-2651: Remove merged references to project_dependencies
* CT-2651: Revise changelog entry
* CT-2651: Disable unit test until partial parsing of semantic models is complete.
* CT-2651: Temporarily disable an apparently-flaky test.
* Add some comments to methods constructing Project/RuntimeConfig
* Save flag that packages dict came from dependencies.yml
* Test for not rendering packages_dict
* Changie
* Ensure packages_yml_dict and dependencies_yml_dict are dictionaries
* Ensure "packages" passed to render_packages is a dict
* Bump DSI dependency version to 0.1.0dev7
* Cleaner DSI type enum importing
Previously we had to use individual import paths for each type enum
that dbt-semantic-interfaces provided. However, dbt-semantic-interfaces
has been updated to allow for importing all the type enums from a
single path.
* Cleaner DSI protocol importing
Previously we had to use individual import paths for each protocol
that dbt-semantic-interfaces provided. However, dbt-semantic-interfaces
has been updated to allow for importing all the protocols from a
single path.
* Add semantic protocol satisfaction test for metric type params
* Replace `metric.type_params.measures` with `metric.type_params.input_measures`
In DSI 0.1.0dev7 `measures` on metric type params became `input_measures`.
Additionally `input_measures` should not be user specified but something
we compile at parse time, thus we've removed it from `UnparsedMetricTypeParams`.
Finally, actually populating `input_measures` is somewhat complicated due
to the existence of derived metrics, thus that work is being pushed
off to CT-2707.
* Update metric numerator/denominator to be `MetricInput`s
In DSI 0.1.0dev7 `metric.type_params.numerator` and `metric.type_params.denominator`
switched from being `MetricInputMeasure`s to `MetricInput`s. This
commit reflects that change. Additionally, some helper functions on
metric type params were removed related to the numerator and denominator.
Thus we've removed them respectively in this commit.
* Add protocol satisfaction tests for `MetricInput` and `MetricInputMeasure`
* Add `post_aggregation_reference` to `MetricInput` and fix typo in `MetricInputMeasure`
DSI 0.1.0dev7 added `post_aggregation_reference` to the `MetricInput` protocol,
thus we've added it to our implementation in core. Additionally, we had a typo
in a method name in our implementation of `MetricInputMeasure`, ironically
a method similar to the one we've added for `MetricInput`.
* Changie doc for upgraded to DSI 0.1.0dev7
* Fix parsing of metric numerator and denominator in schema_yaml_readers
Previously numerator and denominator of a metric were `MetricInputMeasure`s,
now they're `MetricInput`s. Changing the typing isn't enough though.
We have parsing functions in `schema_yaml_readers` which were specifically
parsing the numerator and denominator as if they were `MetricInputMeasure`s.
Thus we had to update the schema_yaml_readers to parse them as `MetricInput`s.
During this we had some logic in a parsing function `_get_metric_inputs` which
could be abstracted to newly added functions.
* Upgrade to dbt-semantic-interfaces v0.1.0dev5
This is a fairly simple upgrade; literally it's just pointing at the
new versions. The v3 schemas are directly compatible with v5 because
there were no protocol-level changes from v3 to v5. All the changes were
updates to tools MetricFlow uses from DSI, not tools that we ourselves
are using in core (yet).
* Add changie doc for DSI version bump
* Update metric filters in testing fixtures
I incorrectly wrote the tests such that they didn't include curly
braces, `{{..}}`, around things like `dimension(..)` for filters.
This updates the tests fixtures to have proper filter specifications
* Skip jinja rendering of `filter` key of metrics
Note that `filter` can show up in multiple places: as a root key
on a metric (`metric.filter`), on a metric input (`metric.type_params.metrics[x].filter`),
denominator (`metric.type_params.denominator.filter`), numerator
(`metric.type_params.numerator.filter`), and a metric input measure
(`metric.type_params.measure.filter` and `metric.type_params.measures[x].filter`).
In this commit we skip all of them :)
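The skip can be sketched as a recursive walk over the parsed yaml that renders every string except values sitting under a `filter` key, so their `{{ dimension(...) }}` curlies survive until parse time. This is an illustrative stand-in; the real logic lives in dbt's yaml renderers, and the function names here are hypothetical.

```python
def render_metric_value(value, render, key_path=()):
    """Recursively render a parsed metric dict, skipping any `filter` key."""
    if key_path and key_path[-1] == "filter":
        return value  # leave jinja in filters untouched
    if isinstance(value, dict):
        return {k: render_metric_value(v, render, key_path + (k,)) for k, v in value.items()}
    if isinstance(value, list):
        return [render_metric_value(v, render, key_path) for v in value]
    return render(value) if isinstance(value, str) else value

# Stand-in renderer: uppercases strings so rendered vs. skipped is visible.
metric = {
    "name": "people_per_tenure",
    "filter": "{{ dimension('country') }} = 'US'",
    "type_params": {
        "numerator": {"name": "number_of_people", "filter": "{{ dimension('active') }}"},
    },
}
rendered = render_metric_value(metric, str.upper)
```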
* Add changie doc for skipping jinja parsing for metric filters
* Update yaml renderer test for metrics
* Add AdapterRegistered event log message
* Add AdapterRegistered to unit test
* make versioning and logging consistent
* make versioning and logging consistent
* add to_version_string
* remove extra equals
* format fire_event
* Add tests to ensure our semantic layer nodes satisfy the DSI protocols
These tests create runtime checkable versions of the protocols defined in
DSI. Thus we can instantiate instances of our semantic layer nodes and
use `isinstance` to check that they satisfy the protocol. These `runtime_checkable`
versions of the protocols should only exist in testing and should never
be used in the actual package code.
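The testing approach can be illustrated in miniature: decorating a copy of a protocol with `typing.runtime_checkable` lets `isinstance` verify that a concrete node satisfies it. The protocol and node below are toy stand-ins, not the real DSI definitions.

```python
from dataclasses import dataclass
from typing import Protocol, runtime_checkable

@runtime_checkable
class RuntimeCheckableDimension(Protocol):
    """Test-only runtime-checkable copy of a (hypothetical) protocol."""
    @property
    def name(self) -> str: ...

    @property
    def type(self) -> str: ...

@dataclass
class Dimension:
    name: str
    type: str

# isinstance() checks that the node exposes every protocol member.
satisfies = isinstance(Dimension(name="ds", type="time"), RuntimeCheckableDimension)
```

Note that `runtime_checkable` only checks member *presence*, not signatures or types, which is one reason such copies belong in tests rather than package code.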
* Update the `Dimension` object of `SemanticModel` node to match DSI protocol
* Make `UnparsedDimension` more strict and update schema readers accordingly
* Update the `Entity` object of `SemanticModel` node to match DSI protocol
* Make `UnparsedEntity` more strict and update schema readers accordingly
* Update the `Measure` object of `SemanticModel` node to match DSI protocol
* Make `UnparsedMeasure` more strict and update schema readers accordingly
* Update the `SemanticModel` node to match DSI protocol
A lot of the additions are helper functions which we don't actually
use in core. This is a known issue; we're in the process of removing
a fair number of them from the DSI protocol spec. In the meantime,
however, we unfortunately need to implement them to satisfy the protocol.
* Make `UnparsedSemanticModel` more strict and update schema readers accordingly
* Changie entry for updating SemanticModel node
* Use contextvar to store and get project_root for path selector method
* Changie
* Modify test to check Path selector with project-dir
* Don't set cv_project_root in base task if no config
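The pattern above can be sketched with `contextvars`: the task stores the project root in a `ContextVar` at startup (only when a config exists), and the path selector reads it back without needing access to the config object. The function names and config shape here are illustrative; only `cv_project_root` comes from the commit messages.

```python
from contextvars import ContextVar
from typing import Optional

cv_project_root: ContextVar[Optional[str]] = ContextVar("project_root", default=None)

def start_task(config):
    # Don't set the contextvar if the task has no config.
    if config is not None:
        cv_project_root.set(config["project_root"])

def path_selector_root():
    root = cv_project_root.get()
    return root if root is not None else "."

start_task({"project_root": "/work/my_dbt_project"})
selected = path_selector_root()
```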
* Refactor MetricNode definition to satisfy DSI Metric protocol
* Fix tests involving metrics to have updated properties
* Update UnparsedMetricNode to match new metric yaml spec
* Update MetricParser for new unparsed and parsed MetricNodes
* Remove `rename_metric_attr`
We're intentionally breaking the spec. There will be a separate tool provided
for migrating from dbt-metrics to dbt x metricflow. This bit of code was renaming
things like `type` to `calculation_method`. This is problematic because `type` is
on the new spec, while `calculation_method` is not. Additionally, since we're
intentionally breaking the spec, this function, `rename_metric_attr`, shouldn't be
used for any property renaming.
* Fix tests for Metrics (1.6) changes
* Regenerated v10 manifest schema and associated functional test artifact state
* Remove no longer needed tests
* Skip / comment out tests for metrics functionality that we'll be implementing later
* Begin outputting semantic manifest artifact on every run
* Drop metrics during upgrade_manifest_json if manifest is v9 or before
* Update properties of `minimal_parsed_metric_dict` to match new metric spec
* Add changie entry for metric node breaking changes
* Add semantic model nodes to semantic manifest
* Add dbt-semantic-interfaces as a dependency
With the integration with MetricFlow we're taking a dependency on
`dbt-semantic-interfaces` which acts as the source of truth for
protocols which MetricFlow and dbt-core need to agree on. Additionally
we're hard pinning to 0.1.0.dev3 for now. We plan on having a less
restrictive specification when dbt-core 1.6 hits GA.
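The hard pin described above would look roughly like the following in a `setup.py` fragment; the exact requirement line in core is an assumption apart from the `0.1.0.dev3` version stated above.

```python
# Hard pin for now; to be relaxed to a range before dbt-core 1.6 hits GA.
install_requires = [
    "dbt-semantic-interfaces==0.1.0.dev3",
]
```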
* Add implementations of DSI Metadata protocol to nodes.py
* CT-2521: Initial work on adding new SemanticModel node
* CT-2521: Second rough draft of SemanticModels
* CT-2521: Update schema v10
* CT-2521: Update unit tests for new SemanticModel collection in manifest
* CT-2521: Add changelog entry
* CT-2521: Final touches on initial implementation of SemanticModel parsing
* Change name of Metadata class to reduce potential for confusion
* Remove "Replaceable" inheritance, per review
* CT-2521: Rename internal variables from semantic_models to semantic_nodes
* CT-2521: Update manifest schema to reflect change
---------
Co-authored-by: Quigley Malcolm <quigley.malcolm@dbtlabs.com>
* changie
* ADAP-387: Stub materialized view as a materialization (#7211)
* init attempt at mv and basic forms of helper macros by mixing view and experimental mv sources
* init attempt at mv and basic forms of helper macros by mixing view and experimental mv sources
* remove unneeded return statement, rename directory
* remove unneeded ()
* responding to some pr feedback
* adjusting order of events for mv base work
* move up pre-existing drop of backup
* change relation type to view to be consistent
* add base test case
* fix jinja exception message expression, basic test passing
* response to feedback, removal of refresh in favor of combined create_as, etc.
* swapping to api layer and strategies for default implementation (basing off postgres, redshift)
* remove strategy to limit need for now
* remove unneeded story level changelog entry
* add strategies to conditional in place of old macros
* macro name fix
* rename refresh macro in api level
* align names between postgres and default to same convention
* align names between postgres and default to same convention
* change a create call to full refresh
* pull adapter rename into strategy, add backup_relation as optional arg
* minor typo fix, add intermediate relation to refresh strategy and initial attempt at further conditional logic
* updating to feature main
---------
Co-authored-by: Matthew McKnight <matthew.mcknight@dbtlabs.com>
* ADAP-387: reverting db_api implementation (#7322)
* changie
* init attempt at mv and basic forms of helper macros by mixing view and experimental mv sources
* remove unneeded return statement, rename directory
* remove unneeded ()
* responding to some pr feedback
* adjusting order of events for mv base work
* move up pre-existing drop of backup
* change relation type to view to be consistent
* add base test case
* fix jinja exception message expression, basic test passing
* response to feedback, removal of refresh in favor of combined create_as, etc.
* swapping to api layer and strategies for default implementation (basing off postgres, redshift)
* remove strategy to limit need for now
* remove unneeded story level changelog entry
* add strategies to conditional in place of old macros
* macro name fix
* rename refresh macro in api level
* align names between postgres and default to same convention
* change a create call to full refresh
* pull adapter rename into strategy, add backup_relation as optional arg
* minor typo fix, add intermediate relation to refresh strategy and initial attempt at further conditional logic
* updating to feature main
* removing db_api and strategies directories in favor of matching current materialization setups
* macro name change
* revert to current approach for materializations
* added tests
* added `is_materialized_view` to `BaseRelation`
* updated materialized view stored value to snake case
* typo
* moved materialized view tests into adapter test framework
* add enum to relation for comparison in jinja
---------
Co-authored-by: Mike Alfare <mike.alfare@dbtlabs.com>
* ADAP-391: Add configuration change option (#7272)
* changie
* init attempt at mv and basic forms of helper macros by mixing view and experimental mv sources
* move up pre-existing drop of backup
* change relation type to view to be consistent
* add base test case
* fix jinja exception message expression, basic test passing
* align names between postgres and default to same convention
* init set of Enum for config
* work on initial Enum class for on_configuration_change, basing it off ConstraintTypes, which is also a str-based Enum in core
* add on_configuration_change to unit test expected values
* make suggested name change to Enum class
* add on_configuration_change to some integration tests
* add on_configuration_change to expected_manifest to pass functional tests
* added `is_materialized_view` to `BaseRelation`
* updated materialized view stored value to snake case
* moved materialized view tests into adapter test framework
* add alter materialized view macro
* change class name, and config setup
* play with field setup for on_configuration_change
* add method for default selection in enum class
* renamed get_refresh_data_in_materialized_view_sql to align with experimental package
* changed expected values to default string
* added in `on_configuration_change` setting
* change ignore to skip
* updated default option for on_configuration_change on NodeConfig
* removed explicit calls to enum values
* add test setup for testing fail config option
* updated `config_updates` to `configuration_changes` to align with `on_configuration_change` name
* setup configuration change framework
* skipped tests that are expected to fail without adapter implementation
* cleaned up log checks
---------
Co-authored-by: Mike Alfare <mike.alfare@dbtlabs.com>
* ADAP-388: Stub materialized view as a materialization - postgres (#7244)
* move the body of the default macros into the postgres implementation, throw errors if the default is used, indicating that materialized views have not been implemented for that adapter
---------
Co-authored-by: Matthew McKnight <matthew.mcknight@dbtlabs.com>
* ADAP-402: Add configuration change option - postgres (#7334)
* changie
* init attempt at mv and basic forms of helper macros by mixing view and experimental mv sources
* remove unneeded return statement, rename directory
* remove unneeded ()
* responding to some pr feedback
* adjusting order of events for mv base work
* move up pre-existing drop of backup
* change relation type to view to be consistent
* add base test case
* fix jinja exception message expression, basic test passing
* added materialized view stubs and test
* response to feedback, removal of refresh in favor of combined create_as, etc.
* updated postgres to use the new macros structure
* swapping to api layer and strategies for default implementation (basing off postgres, redshift)
* remove strategy to limit need for now
* remove unneeded story level changelog entry
* add strategies to conditional in place of old macros
* macro name fix
* rename refresh macro in api level
* align names between postgres and default to same convention
* change a create call to full refresh
* pull adapter rename into strategy, add backup_relation as optional arg
* minor typo fix, add intermediate relation to refresh strategy and initial attempt at further conditional logic
* init copy of pr 387 to begin 391 implementation
* init set of Enum for config
* work on initial Enum class for on_configuration_change, basing it off ConstraintTypes, which is also a str-based Enum in core
* remove postgres-specific materialization in favor of core default materialization
* update db_api to use native types (e.g. str) and avoid direct calls to relation or config, which would alter the run order for all db_api dependencies
* add clarifying comment as to why we have a single test that's expected to fail at the dbt-core layer
* add on_configuration_change to unit test expected values
* make suggested name change to Enum class
* add on_configuration_change to some integration tests
* add on_configuration_change to expected_manifest to pass functional tests
* removing db_api and strategies directories in favor of matching current materialization setups
* macro name change
* revert to current approach for materializations
* revert to current approach for materializations
* added tests
* move materialized view logic into the `/materializations` directory in line with `dbt-core`
* moved default macros in `dbt-core` into `dbt-postgres`
* added `is_materialized_view` to `BaseRelation`
* updated materialized view stored value to snake case
* moved materialized view tests into adapter test framework
* updated materialized view tests to use adapter test framework
* add alter materialized view macro
* add alter materialized view macro
* change class name, and config setup
* change class name, and config setup
* play with field setup for on_configuration_change
* add method for default selection in enum class
* renamed get_refresh_data_in_materialized_view_sql to align with experimental package
* changed expected values to default string
* added in `on_configuration_change` setting
* change ignore to skip
* added in `on_configuration_change` setting
* updated default option for on_configuration_change on NodeConfig
* updated default option for on_configuration_change on NodeConfig
* fixed list being passed as string bug
* removed explicit calls to enum values
* removed unneeded test class
* fixed on_configuration_change to be picked up appropriately
* add test setup for testing fail config option
* remove breakpoint, uncomment tests
* update skip scenario to use empty strings
* update skip scenario to avoid using sql at all, remove extra whitespace in some templates
* push up initial addition of indexes for mv macro
* push slight change up
* reverting alt macro and moving the do create_index call to be more in line with other materializations
* Merge branch 'feature/materialized-views/ADAP-2' into feature/materialized-views/ADAP-402
# Conflicts:
# core/dbt/contracts/graph/model_config.py
# core/dbt/include/global_project/macros/materializations/models/materialized_view/alter_materialized_view.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/create_materialized_view_as.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/get_materialized_view_configuration_changes.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/materialized_view.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/refresh_materialized_view.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/replace_materialized_view.sql
# plugins/postgres/dbt/include/postgres/macros/materializations/materialized_view.sql
# tests/adapter/dbt/tests/adapter/materialized_views/base.py
# tests/functional/materializations/test_materialized_view.py
* merge feature branch into story branch
* merge feature branch into story branch
* added indexes into the workflow
* fix error in jinja that caused print error
* working on test messaging and skipping tests that might not fit quite into current system
* add drop and show macros for indexes
* add drop and show macros for indexes
* add logic to determine the indexes to create or drop
* pulled index updates through the workflow properly
* convert configuration changes to fixtures, implement index changes into tests
* created Model dataclass for readability, added column to swap index columns for testing
* fixed typo
---------
Co-authored-by: Matthew McKnight <matthew.mcknight@dbtlabs.com>
* ADAP-395: Implement native materialized view DDL (#7336)
* changie
* changie
* init attempt at mv and basic forms of helper macros by mixing view and experimental mv sources
* init attempt at mv and basic forms of helper macros by mixing view and experimental mv sources
* remove unneeded return statement, rename directory
* remove unneeded ()
* responding to some pr feedback
* adjusting order of events for mv base work
* move up pre-existing drop of backup
* change relation type to view to be consistent
* add base test case
* fix jinja exception message expression, basic test passing
* added materialized view stubs and test
* response to feedback, removal of refresh in favor of combined create_as, etc.
* updated postgres to use the new macros structure
* swapping to api layer and strategies for default implementation (basing off postgres, redshift)
* remove strategy to limit need for now
* remove unneeded story level changelog entry
* add strategies to conditional in place of old macros
* macro name fix
* rename refresh macro in api level
* align names between postgres and default to same convention
* align names between postgres and default to same convention
* change a create call to full refresh
* pull adapter rename into strategy, add backup_relation as optional arg
* minor typo fix, add intermediate relation to refresh strategy and initial attempt at further conditional logic
* init copy of pr 387 to begin 391 implementation
* updating to feature main
* updating to feature main
* init set of Enum for config
* work on initial Enum class for on_configuration_change, basing it off ConstraintTypes, which is also a str-based Enum in core
* remove postgres-specific materialization in favor of core default materialization
* update db_api to use native types (e.g. str) and avoid direct calls to relation or config, which would alter the run order for all db_api dependencies
* add clarifying comment as to why we have a single test that's expected to fail at the dbt-core layer
* add on_configuration_change to unit test expected values
* make suggested name change to Enum class
* add on_configuration_change to some integration tests
* add on_configuration_change to expected_manifest to pass functional tests
* removing db_api and strategies directories in favor of matching current materialization setups
* macro name change
* revert to current approach for materializations
* revert to current approach for materializations
* added tests
* move materialized view logic into the `/materializations` directory in line with `dbt-core`
* moved default macros in `dbt-core` into `dbt-postgres`
* added `is_materialized_view` to `BaseRelation`
* updated materialized view stored value to snake case
* typo
* moved materialized view tests into adapter test framework
* updated materialized view tests to use adapter test framework
* add alter materialized view macro
* add alter materialized view macro
* added basic sql to default macros, added postgres-specific sql for alter scenario, stubbed a test case for index update
* change class name, and config setup
* change class name, and config setup
* play with field setup for on_configuration_change
* add method for default selection in enum class
* renamed get_refresh_data_in_materialized_view_sql to align with experimental package
* changed expected values to default string
* added in `on_configuration_change` setting
* change ignore to skip
* added in `on_configuration_change` setting
* updated default option for on_configuration_change on NodeConfig
* updated default option for on_configuration_change on NodeConfig
* fixed list being passed as string bug
* fixed list being passed as string bug
* removed explicit calls to enum values
* removed explicit calls to enum values
* removed unneeded test class
* fixed on_configuration_change to be picked up appropriately
* add test setup for testing fail config option
* remove breakpoint, uncomment tests
* update skip scenario to use empty strings
* update skip scenario to avoid using sql at all, remove extra whitespace in some templates
* push up initial addition of indexes for mv macro
* push slight change up
* reverting alt macro and moving the do create_index call to be more in line with other materializations
* Merge branch 'feature/materialized-views/ADAP-2' into feature/materialized-views/ADAP-402
# Conflicts:
# core/dbt/contracts/graph/model_config.py
# core/dbt/include/global_project/macros/materializations/models/materialized_view/alter_materialized_view.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/create_materialized_view_as.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/get_materialized_view_configuration_changes.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/materialized_view.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/refresh_materialized_view.sql
# core/dbt/include/global_project/macros/materializations/models/materialized_view/replace_materialized_view.sql
# plugins/postgres/dbt/include/postgres/macros/materializations/materialized_view.sql
# tests/adapter/dbt/tests/adapter/materialized_views/base.py
# tests/functional/materializations/test_materialized_view.py
* merge feature branch into story branch
* merge feature branch into story branch
* added indexes into the workflow
* fix error in jinja that caused print error
* working on test messaging and skipping tests that might not fit quite into current system
* Merge branch 'feature/materialized-views/ADAP-2' into feature/materialized-views/ADAP-395
# Conflicts:
# core/dbt/include/global_project/macros/materializations/models/materialized_view/get_materialized_view_configuration_changes.sql
# plugins/postgres/dbt/include/postgres/macros/adapters.sql
# plugins/postgres/dbt/include/postgres/macros/materializations/materialized_view.sql
# tests/adapter/dbt/tests/adapter/materialized_views/test_on_configuration_change.py
# tests/functional/materializations/test_materialized_view.py
* moved postgres implementation into plugin directory
* update index methods to align with the configuration update macro
* added native ddl to postgres macros
* removed extra docstring
* updated references to View, now references MaterializedView
* decomposed materialization into macros
* refactor index create statement parser, add exceptions for unexpected formats
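The parser refactor can be sketched as a regex over Postgres index DDL (the form `pg_indexes.indexdef` returns), raising on unexpected shapes as the commit describes. This is an illustrative sketch covering only the common form; the real parser and its error types may differ.

```python
import re

_INDEX_DDL = re.compile(
    r"CREATE (?P<unique>UNIQUE )?INDEX (?P<name>\S+) ON (?P<table>\S+) "
    r"USING (?P<method>\w+) \((?P<columns>[^)]+)\)"
)

def parse_index_ddl(ddl: str) -> dict:
    """Break a CREATE INDEX statement into its parts; raise on odd formats."""
    match = _INDEX_DDL.search(ddl)
    if match is None:
        raise ValueError(f"Unexpected index DDL format: {ddl}")
    return {
        "name": match.group("name"),
        "method": match.group("method"),
        "unique": match.group("unique") is not None,
        "columns": [c.strip() for c in match.group("columns").split(",")],
    }

parsed = parse_index_ddl(
    "CREATE INDEX index_abc ON public.my_mv USING btree (id, updated_at)"
)
```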
* swapped conditional to check for positive state
* removed skipped test now that materialized view is being used
* return the results and logs of the run so that additional checks can be applied at the adapter level, add check for refresh to a test
* add check for indexes in particular for apply on configuration scenario
* removed extra argument
* add materialized views to get_relations / list_relations
* typos in index change logic
* moved full refresh check inside the build sql step
---------
Co-authored-by: Matthew McKnight <matthew.mcknight@dbtlabs.com>
* removing returns from tests to stop logs from printing
* moved test cases into postgres tests, left non-test functionality in base as new methods or fixtures
* fixed overwrite issue, simplified assertion method
* updated import order to standard
* fixed test import paths
* updated naming convention for proper test collection with the test runner
* still trying to make the test runner happy
* rewrite index updates to use a better source in Postgres
* break out a large test suite as a separate run
* update `skip` and `fail` scenarios with more descriptive results
* typo
* removed call to skip status
* reverting `exceptions_jinja.py`
* added FailFastError back, the right way
* removed PostgresIndex in favor of the already existing PostgresIndexConfig, pulled it into its own file to avoid circular imports
* removed assumed models in method calls, removed odd insert records and replaced with get row count
* fixed index issue, removed some indirection in testing
* made test more readable
* remove the "apply" from the tests and put it on the base as the default
* generalized assertion for reuse with dbt-snowflake, fixed bug in record count utility
* fixed type to be more generic to accommodate adapters with their own relation types
* fixed all the broken index stuff
* updated on_configuration_change to use existing patterns
* reflected update in tests and materialization logic
* reverted the change to create a config object from the option object, using just the option object now
* modelled database objects to support monitoring all configuration changes
* updated "skip" to "continue", throw an error on non-implemented macro defaults
* reverted centralized framework, retained a few reusable base classes
* updated names to be more consistent
* readability updates
* added readme specifying that `relation_configs` only supports materialized views for now
---------
Co-authored-by: Matthew McKnight <matthew.mcknight@dbtlabs.com>
Co-authored-by: Matthew McKnight <91097623+McKnight-42@users.noreply.github.com>
* --connection-flag
* Standardize the plugin functions used by DebugTask
* Cleanup redundant code and help logic along.
* Add more output tests to add logic coverage and formatting.
* Code review
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Fix names within functional test
* Changelog entry
* Test for implementation of null-safe equals comparison
* Remove duplicated where filter
* Fix null-safe equals comparison
* Fix tests for `concat` and `hash` by using empty strings () instead of `null`
* Remove macro namespace interpolation
* Include null checks in utils test base
* Add tests for the schema test
* Add tests for this macro
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Honor `--skip-profile-setup` parameter when inside an existing project
* Use project name as the profile name
* Use separate file connections for reading and writing
* Raise a custom exception when no adapters are installed
* Test skipping interactive profile setup when inside a dbt project
* Replace `assert_not_called()` since it does not work
* Verbose CLI argument for skipping profile setup
* Use separate file connections for reading and writing
* Check empty list in a Pythonic manner
* CT-2461: Work toward model deprecation
* CT-2461: Remove unneeded conversions
* CT-2461: Fix up unit tests for new fields, correct a couple oversights
* CT-2461: Remaining implementation and tests for model/ref deprecation warnings
* CT-2461: Changelog entry for deprecation warnings
* CT-2461: Refine datetime handling and tests
* CT-2461: Fix up unit test data
* CT-2461: Fix some more unit test data.
* CT-2461: Fix merge issues
* CT-2461: Code review items.
* CT-2461: Improve version -> str conversion
* Allow missing `profiles.yml` for `dbt deps` and `dbt init`
* Some commands allow the `--profiles-dir` to not exist
* Remove fix to verify that CI tests work
* Allow missing `profiles.yml` for `dbt deps` and `dbt init`
* CI is not finding any installed adapters
* Remove functional test for `dbt init`
* Adding perf testing GHA
* Fixing trigger syntax
* Fixing PR creation issue
* Updating testing var
* Remove unneeded branch names
* Fixing branch naming convention
* Standardizing branch name to var
* Consolidating PR jobs
* Updating inputs and making more readable
* Splitting steps up
* Making some updates here to simplify and update
* Remove tab
* Cleaned up testing TODOs before committing
* Fixing spacing
* Fixing spacing issue
* Create publication.py, various Publication classes, Dependency class
* Load dependencies.yml and the corresponding publication file
* Add "public_nodes" and populate ref_lookup
* resolve_ref working
* Add public nodes to parent and child maps
* Bump manifest version and fix tests, use ModelDependsOn
* Split out PublicationArtifact and PublicationConfig, store public_models
separately
* Store dependencies in publication artifact
* change detection of PublicModel for >= python3.10
* Handle removing references for re-processing if publication has changed
* Handle only changed publication artifacts
* Add some logging events
* Remove duplicate nodes from manifest
* refactor relation_from_relation_name
* Remove duplicate writing of manifest.json
* Add public_nodes to flat_graph
* Move some file name constants to core/dbt/constants.py
* Remove "environment" from ProjectDependency. Add
database/schema/identifier to PublicModel. Update TargetNotFound
exception.
* Include external publication dependencies in publication artifact dependencies
* Remove create_from_relation_name, call create_from_node instead
* Change PublicationArtifactChanged message to debug level
* Make write_publication_artifact a function in parser/manifest.py
* Create fixture to create minimal alternate project (just models)
* develop multi project test case
* Latest version should use un-suffixed alias
* Latest version can be in un-suffixed file
* FYI when unpinned ref to model with prerelease version
* [WIP] Nicer error if versioned ref to unversioned model
* Revert "Latest version should use un-suffixed alias"
This reverts commit 3616c52c1eed7588b9e210e1c957dfda598be550.
* Revert "[WIP] Nicer error if versioned ref to unversioned model"
This reverts commit c9ae4af1cfbd6b7bfc5dcbb445556233eb4bd2c0.
* Define real event for UnpinnedRefNewVersionAvailable
* Update pp test for implicit unsuffixed defined_in
* Add changelog entry
* Fix unit test
* marky feedback
* Add test case for UnpinnedRefNewVersionAvailable event
* Adding a new column is not a breaking contract change
* Add changelog entry
* More structured exception
* same_contract: False if non-breaking changes
* PR feedback: rm build_contract_checksum, more comments
* CT-2317: Reset invocation id in preflight for each dbt command.
* CT-2317: Add unit test for invocation_id behavior.
* CT-2317: Add changelog entry.
* CT-2317: Modify freshness test to ignore invocation_id
* CT-2317: Assign invocation_id before tracking initialization.
* CT-2317: Fix unit test failures and a bunch of other stuff
* CT-2317: Remove checks which make outdated assumptions about invocation_id being stable between runs
* CT-2317: Review tweak, more unit test fixes.
* Removed options for `dbt parse`
* Fix misspellings
* Capitalize JSON when appropriate
* Update help text for --write-json/--no-write-json
* Update help text for --config-dir
* Update help text for --resource-types
* Removed decorators for removed dbt parse options
* Remove `--write-manifest` flag from `parse`
* Remove `--parse-only` flag from `compile`
* Update help text for `dbt list --output`
* Standardize on one line per argument
* Factor 3 from 12 Factor CLI Apps
* Update help text for `dbt --version`
* Standardize capitalization of resource types for `dbt build`
* `debug --config-dir` is a boolean flag
* Update help text for `--version-check`
* Specify `-q` as a conventional alias for `--quiet`
* Update help text for `debug --config-dir`
* Update help text for `debug`
* Treat more dense text blobs as binary for `git grep`
* Update help text for `--version-check`
* Update help text for `--defer`
* Update help text for `--indirect-selection`
* Co-locate log colorization with other log settings
* Update help text for `--log-format*`, `--log-level*`, and `--use-colors*`
* Temporarily re-add option for CI tests
* Remove `--parse-only` flag from `show`
* Remove `--write-manifest` flag from `parse` (again)
* Snapshot strategies: newline for subquery
* add changie output
* add test for snapshot ending in comment
* remove import to be flake8 compliant
* add seed import
* add newlines for flake8 compliance
* typo fix
* Fixing up a test, adding a comment or two
* removed un-needed test fixtures
* removed even more un-needed fixtures, collapsed test to single class
* removed errant breakpoint()
* Fix a little typo
---------
Co-authored-by: Ian Knox <ian.knox@dbtlabs.com>
Co-authored-by: Mila Page <67295367+VersusFacit@users.noreply.github.com>
* CT-1922: Rough in functionality for parsing model level constraints
* CT-1922: (Almost) complete support for model level constraints
* CT-1922: Fix typo affecting correct model constraint parsing.
* CT-1922: Rework base class for model tests for greater simplicity
* CT-1922: Rough in functionality for parsing model level constraints
* CT-1922: Revise unit tests for new model-level constraints property
* CT-1922: (Almost) complete support for model level constraints
* first pass
* implement in core
* add proto
* WIP
* resolve errors in columns_spec_ddl
* changelog
* update comment
* move logic over to python
* rename and use enum
* update default constraint_support dict
* generate new proto definition after conflicts
* reorganize code and break warnings into each constraint
* fix postgres constraint support
* remove breakpoint
* convert constraint support to constant
* update postgres
* add to export
* regen proto types file
* standardize names
* put back mypy error
* more naming + add back comma
* add constraint support to model level constraints
* update event message and method signature
* rename method
* CT-1922: Rough in functionality for parsing model level constraints
* CT-1922: Revise unit tests for new model-level constraints property
* CT-1922: (Almost) complete support for model level constraints
* CT-1922: Fix typo affecting correct model constraint parsing.
* CT-1922: Improve whitespace handling
* CT-1922: Render raw constraints to constraint list directly
* make method return consistent
* regenerate proto defn
* update event test
* add some code cleanup
---------
Co-authored-by: Peter Allen Webb <peter.webb@dbtlabs.com>
* CT-1922: Rough in functionality for parsing model level constraints
* CT-1922: Revise unit tests for new model-level constraints property
* CT-1922: (Almost) complete support for model level constraints
* CT-1922: Fix typo affecting correct model constraint parsing.
* CT-1922: Minor code review refinements
* CT-1922: Improve whitespace handling
* CT-1922: Render raw constraints to constraint list directly
* CT-1922: Rework base class for model tests for greater simplicity
* CT-1922: Remove debugging properties. Oops.
* CT-1922: Fix type annotation
* improved first line of error
* added basic printing of yaml and sql cols as columns
* added changie log
* used listed dictionary as input to match columns
* swapped order of col headers for printing
* used listed dictionary as input to match columns
* removed merge conflict text from file
* Touch-ups
* Update log introspection in functional tests
* Update format_column macro. Case insensitive test
* PR feedback: just data_type, not formatted
---------
Co-authored-by: Kyle Kent <kyle.kent321@gmail.com>
* remove trial nodes before building subdag
* add changie
* Update graph.py
remove comment
* further optimize by sorting node search by degree
* change degree to product of in and out degree
* Add tests for logging jinja2.Undefined objects
[CT-2259](https://github.com/dbt-labs/dbt-core/issues/7108) identifies
an issue wherein dbt-core 1.0-1.3 raises errors if an attempt is made
to log a jinja2.Undefined object. This generally happened in the form
of `{{ log(undefined_variable, info=True) }}`. This commit, which adds
the test, exists for two reasons:
1. Ensure we don't have a regression in this going forward
2. Exist as a commit to be used for backport fixes for dbt-core 1.0-1.3
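The fix being guarded here can be sketched outside of dbt. This is a hedged illustration, not dbt's actual implementation: the `log` helper below is a stand-in that coerces a `jinja2.Undefined` with `str()` (yielding `""`) rather than letting it raise downstream.

```python
from jinja2 import Environment, Undefined

# Rendering `{{ log(undefined_variable, info=True) }}` hands a
# jinja2.Undefined object to the log helper; coercing with str()
# yields "" instead of raising.
env = Environment(undefined=Undefined)
template = env.from_string("{{ log(undefined_variable, info=True) }}")

captured = []

def log(msg, info=False):
    # stand-in for dbt's `log` jinja helper
    captured.append(str(msg))
    return ""

rendered = template.render(log=log)  # renders without raising
print(repr(rendered), captured)
```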
* Add tests for checking `DBT_ENV_SECRET_`s don't break logging
[CT-1783](https://github.com/dbt-labs/dbt-core/issues/6568) describes
a bug in dbt-core 1.0-1.3 wherein, once a `DBT_ENV_SECRET_`-prefixed
variable is set, all
`{{ log("logging stuff", info=True) }}` invocations break. This commit
adds a test for this for two reasons:
1. Ensure we don't regress to this behavior going forward
2. Act as a base commit for making the backport fixes to dbt-core 1.0-1.3
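The mechanism involved can be sketched roughly as follows. The `DBT_ENV_SECRET_` prefix is real dbt behavior, but this helper, its name, and the masking string are ours for illustration only:

```python
def scrub_secrets(msg: str, env: dict) -> str:
    # Any env var whose name starts with DBT_ENV_SECRET_ has its value
    # masked in the message before it is logged (hypothetical helper).
    for key, value in env.items():
        if key.startswith("DBT_ENV_SECRET_") and value:
            msg = msg.replace(value, "*****")
    return msg

line = scrub_secrets(
    "connecting with token=hunter2",
    {"DBT_ENV_SECRET_TOKEN": "hunter2", "HOME": "/home/dbt"},
)
print(line)  # → connecting with token=*****
```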
* Add tests ensuring failed event serialization is handled correctly
[CT-2264](https://github.com/dbt-labs/dbt-core/issues/7113) states
that failed serialization should result in an exception handling path
which will fire another event instead of raising an exception. This is
hard to test perfectly because the exception handling path for
serialization depends on whether pytest is present. If pytest isn't
present, a new event documenting the failed serialization is fired.
If pytest is present, the failed serialization is raised as an exception.
Thus this added test ensures that the expected exception is raised and
assumes that the correct event will be fired normally.
* Log warning when event serialization fails in `msg_to_dict`
This commit updates the `msg_to_dict` exception handling path to
fire a warning level event instead of raising an exception.
Truthfully, we're not sure if this exception handling path is even
possible to hit. That's because we recently switched from betterproto
to google's protobuf. However, this exception path is the subject of
[CT-2264](https://github.com/dbt-labs/dbt-core/issues/7113). Though we
don't think it's actually possible to hit it anymore, we still want
to handle the case if it is.
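The hardened path can be sketched as below. This is a hedged approximation: the real code converts a protobuf message and fires a warning-level event on failure, which we stand in for with a JSON round-trip and a placeholder payload; `FakeMsg` and the note text are ours.

```python
import json

def msg_to_dict(msg) -> dict:
    try:
        # stand-in for the real protobuf message-to-dict conversion
        return json.loads(json.dumps(vars(msg)))
    except Exception as exc:
        # in dbt this would fire a warn-level event instead of raising
        return {"note": f"unable to serialize event: {exc}"}

class FakeMsg:
    def __init__(self, **data):
        self.__dict__.update(data)

ok = msg_to_dict(FakeMsg(name="CommandCompleted"))
bad = msg_to_dict(FakeMsg(payload=object()))  # not JSON-serializable
print(ok, bad["note"][:25])
```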
* Update serialization failure note to be a warn level event in `BaseEvent`
[CT-2264](https://github.com/dbt-labs/dbt-core/issues/7113) wants
logging messages about event serialization failure to be `WARNING`
level events. This does that.
* Add changie info for changes
* Add test to check exception handling of `msg_to_dict`
* One argument per line
* Tests for multiple `--select` or `--exclude`
* Allow `--select` and `--exclude` multiple times
* Changelog entry
* MultiOption options must be specified with type=tuple or type=ChoiceTuple
* Testing for `--output-keys` and `--resource-type`
* Validate that any new param with `MultiOption` should also have `type=tuple` (or `ChoiceTuple`) and `multiple=True`
* first pass
* adding tests
* changelog
* split up tests due to order importance
* update test
* add back comment
* rename base test classes
* move sql
* fix test name
* move sql
* test changes to match main
* organize and cleanup fixtures
* more cleanup of tests
* add utility function to EventManager for explicitly adding callbacks
Technically these aren't necessary in their current state. We could instead
have people do `<InstantiatedEventManager>.callbacks.extend(...)` directly.
However, it's not hard to imagine a world wherein extra things need to take
place when a callback is added. Thus abstracting to a utility method
now means that as the implementation of how callbacks are actually added
changes, the invocation to do so can stay the same.
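The abstraction described above can be sketched minimally. This is not dbt's real `EventManager`, just an illustration of hiding registration behind a method so future bookkeeping needs no call-site changes:

```python
from typing import Callable, List

class EventManager:
    def __init__(self):
        self.callbacks: List[Callable] = []

    def add_callback(self, callback: Callable) -> None:
        # any future registration-time bookkeeping can live here
        # without changing call sites
        self.callbacks.append(callback)

    def fire_event(self, event) -> None:
        for callback in self.callbacks:
            callback(event)

manager = EventManager()
seen = []
manager.add_callback(seen.append)
manager.fire_event("CommandCompleted")
print(seen)  # → ['CommandCompleted']
```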
* update `setup_event_logger` to optionally take in callbacks and add them to the EventManager
* update preflight decorator to check for and pass along callbacks for event logger setup
* Add `callbacks` to `dbtRunner`
On instantiation of `dbtRunner` one can now provide `callbacks`. These
callbacks are for the `EventLogger`. When `invoke` is called on a `dbtRunner`,
the `callbacks` are added to the cli context object. In the preflight
decorator these callbacks are extracted from the cli context and then
passed to the `setup_event_logger`, finally `setup_event_logger` ensures
the callbacks are added to the global `EVENT_MANAGER`.
* add test to check dbtRunner callbacks get properly set
I believe this test technically qualifies as more of an integration
test, but no other tests like it currently exist (that I could find
via a cursory search). The `tests/unit/test_dbt_runner.py` seemed like
the most intuitive spot. However, if somewhere else makes sense, I'd be
happy to move it.
* add changie documentation for CT-1928
* Convert simple copy.
* Adjust class names for import.
* adjust test namespacing
* Resolve test error.
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* CT-2198: clean up some type names and uses
* CT-2198: Unify constraints and constraints_check properties on columns
* Make mypy version consistently 0.981 (#7134)
* CT 1808 diff based partial parsing (#6873)
* model contracts on models materialized as views (#7120)
* first pass
* rename tests
* fix failing test
* changelog
* fix functional test
* Update core/dbt/parser/base.py
* Update core/dbt/parser/schemas.py
* Create method for env var deprecation (#7086)
* update to allow adapters to change model name resolution in py models (#7115)
* update to allow adapters to change model name resolution in py models
* add changie
* fix newline adds
* move quoting into macro
* use single quotes
* add env DBT_PROJECT_DIR support #6078 (#6659)
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Add new index.html and changelog yaml files from dbt-docs (#7141)
* Make version configs optional (#7060)
* [CT-1584] New top level commands: interactive compile (#7008)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
* CT-2198: Add changelog entry
* CT-2198: Fix tests which broke after merge
* CT-2198: Add explicit validation of constraint types w/ unit test
* CT-2198: Move access property, per code review
* CT-2198: Remove a redundant macro
* CT-2198: Rework constraints to be adapter-generated in Python code
* CT-2198: Clarify function name per review
---------
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
Co-authored-by: Stu Kilgore <stu.kilgore@dbtlabs.com>
Co-authored-by: colin-rogers-dbt <111200756+colin-rogers-dbt@users.noreply.github.com>
Co-authored-by: Leo Schick <67712864+leo-schick@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: FishtownBuildBot <77737458+FishtownBuildBot@users.noreply.github.com>
Co-authored-by: dave-connors-3 <73915542+dave-connors-3@users.noreply.github.com>
Co-authored-by: Kshitij Aranke <kshitij.aranke@dbtlabs.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
* add protobuf message/class for new CommandCompleted event
For [CT-2049](https://github.com/dbt-labs/dbt-core/issues/6878) we
concluded that we wanted a new event type, [CommandCompleted](https://github.com/dbt-labs/dbt-core/issues/6878#issuecomment-1419718606)
with [four (4) values](https://github.com/dbt-labs/dbt-core/issues/6878#issuecomment-1426118283):
which command was run, whether the command succeeded, the timestamp
that the command finished, and how long the command took. This commit
adds the new event proto definition, the auto-generated proto_types, and
the instantiable event type.
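The four values can be sketched as a plain dataclass. Field names here are our assumptions for illustration; the real definition is generated from the protobuf schema, not written by hand:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CommandCompleted:
    command: str            # which command was run, e.g. "dbt run"
    success: bool           # whether the command succeeded
    completed_at: datetime  # timestamp the command finished
    elapsed: float          # how long the command took, in seconds

event = CommandCompleted(
    command="dbt run",
    success=True,
    completed_at=datetime.now(timezone.utc),
    elapsed=12.3,
)
print(event.command, event.success, event.elapsed)
```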
* begin emitting CommandCompleted event in the preflight decorator
The [preflight decorator](4186f99b74/core/dbt/cli/requires.py (L19))
runs at the start of every CLI invocation, making it a perfect candidate
for emitting the CommandCompleted event. This is noted in the [discussion
on CT-2049](https://github.com/dbt-labs/dbt-core/issues/6878#issuecomment-1428643539).
* add CommandCompleted event to event unit tests
* Add: changelog entry
* fire CommandCompleted event regardless of upstream exceptions
Previously, if `--fail-fast` was specified and an issue was encountered,
or an unhandled issue became an exception, the CommandCompleted event
would not get fired, because at this point in the stack we'd be in
exception handling mode. If an exception does reach this point,
we want to still fire the event and also continue to propagate the
exception; hence the bare `raise`, which re-raises the caught exception.
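The pattern described can be sketched as follows. Names here are ours, not dbt's: the point is firing the event on both paths while a bare `raise` re-propagates any caught exception unchanged.

```python
import time

fired = []

def fire_event(event):
    fired.append(event)

def with_command_completed(body, command="run"):
    start = time.time()
    try:
        result = body()
    except BaseException:
        # fire the event even on failure, then re-raise the original
        # exception unchanged via a bare raise
        fire_event(("CommandCompleted", command, False, time.time() - start))
        raise
    fire_event(("CommandCompleted", command, True, time.time() - start))
    return result

print(with_command_completed(lambda: "ok"))
try:
    with_command_completed(lambda: 1 / 0)
except ZeroDivisionError:
    print("exception still propagated; events fired:", len(fired))
```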
* Update CommandCompleted event to be a `Debug` level event
We don't actually "always" need this event to be logged. Thus we've
updated it to `Debug` level. [Discussion Context](https://github.com/dbt-labs/dbt-core/pull/7180#discussion_r1139281963)
* Init roadmap
* Rework the top paragraph
* Clean-up the whole thing
* Typos and stuff
* Add a missing word
* Fix typo
* Update "when" note
* Next draft
* Propose rename
* Resolve TODOs, still needs a reread
* Being cute
* Another read through
* Fix sentence fragment
---------
Co-authored-by: Florian Eiden <florian.eiden@dbtlabs.com>
* first pass
* WIP
* add notes/stubs on more pieces
* more work
* more cleanup
* cleanup
* add more cleanup and generalization
* update to use reusable workflow
* add TODO
* Add back initialization events
* Fix log_cache_events. Default stdout logger knows less than it used to
* Add back exception handling events
* Revert "Add back exception handling events"
This reverts commit 26f22d91b660f51f0df6a59f9e5cae16b0ee6fe5.
* Add changelog entry
* Fix test by stringifying dict values
* Add generated CLI API docs
---------
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
* part 1 of env var for core team
* add logic to use env vars to generate changelog
* modify version bump to add members via env var
* pull in main and tweak
* add token
* changes for testing
* split step
* remove leading slash
* add version check
* more debugging
* try curl
* try more things
* try more things
* change auth
* put back token
* update permissions
* add back fishtown pat
* use new pat
* fix typo
* swap token
* comment out list teams
* change url
* debug path
* add continue
* change core case
* more tweaks
* send output to file
* add file view
* make array
* tweak
* remove []
* add quotes
* add tojson
* add quotes to set
* tweak
* fix id
* tweaks
* more
* more
* remove new lines
* more tweaks
* update to generate changelog
* remove debugging bits
* use central version-bump
* use correct author list
* testing with changelog team automation
* add new token to input
* move secret
* remove testing aspects from workflow
* clean up team logic
* explicitly send secret
* move bumpversion comment
* move comments
* point workflow back to main
* point to branch for testing
* point back to main
* inherit secrets
* first pass at automating latest branches
* checkout repo first
* fetch all history
* reorg
* debugging
* update test id
* swap lines
* incorporate new branch action
* tweak vars
* Formatting
* Changelog entry
* Rename to BaseSimpleSeedColumnOverride
* Better error handling
* Update test to include the BOM test
* Cleanup and formatting
* Unused import remove
* nit line
* Pr comments
* update regex to match all iterations
* convert to num to match all adapters
* add comments, remove extra .
* clarify with more comments
* Update .bumpversion.cfg
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
---------
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
* Add clearer directions for custom test suite vars in Makefile.
* Fix up PR for review
* Fix erroneous whitespace.
* Fix a spelling error.
* Add documentation to discourage makefile edits but provide override tooling.
* Fix quotation marks. Very strange behavior
* Compact code and verify quotations happy inside bash and python.
* Fold comments into Makefile.
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Convert test and make it a bit more pytest-onic
* Ax old integration test.
* Run black on test conversion
* I didn't like how pytest was running the fixture, so I wrapped it into a closure.
* Merge converted test into persist docs.
* Move persist docs tests to the adapter zone. Prep for adapter tests.
* Fix up test names
* Fix name to be less confusing.
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Test converted and reformatted for pytest.
* Ax old versions of 052 test
* Nix the 'os' import and black format
* Change names of models to be more PEP like
* cleanup code
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Convert test and make it a bit more pytest-onic
* Ax old integration test.
* Run black on test conversion
* I didn't like how pytest was running the fixture, so I wrapped it into a closure.
* Merge converted test into persist docs.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* init commit for column_types test conversion
* init start of test_column_types.py
* pass test macros into both tests
* remove alt tests, remove old tests, push up working conversion
* rename base class, move to adapter zone so adapters can use
* typo fix
* Code cleanup and adding stderr to capture dbt
* Debug with --log-format json now prints structured logs.
* Add changelog.
* Move logs into miscellaneous and add values to test.
* nix whitespace and fix log levels
* List will now do structured logging when log format set to json.
* Add a quick None check.
* Add a get guard to class check.
* Better null checking
* The boolean doesn't reflect the original logic but a try-catch does.
* Address some code review comments and get us working again.
* Simplify logic now that we have a namespace object for self.config.args.
* Simplify logic for json log format checking.
* Simplify code for allowing our GraphTest cases to pass while also hiding compile stats from dbt ls/list.
* Simplify structured logging types.
* Fix up boolean logic and simplify via De Morgan.
* Nix unneeded fixture.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* CT-1786: Port docs tests to pytest
* Add generated CLI API docs
* CT-1786: Comply with the new style requirements
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
* add defer_to_manifest in before_run to fix faulty deferred docs generate
* add a changelog
* add declaration of defer_to_manifest to FreshnessTask and GraphRunnableTask
* fix: add defer_to_manifest method to ListTask
* Re-factor list of YAML keys for hooks to late-render
* Add `pre_` and `post_hook` to list of late-rendered hooks
* Check for non-empty set intersection
Co-authored-by: Kshitij Aranke <kshitij.aranke@dbtlabs.com>
* Test functional synonymy of `*_hook` with `*-hook`
Test that `pre_hook`/`post_hook` are functionally synonymous with `pre-hook`/`post-hook` for model project config
* Undo bugfix to validate the new test fails
* Revert "Undo bugfix to validate the new test fails"
This reverts commit e83a2be2eb.
Co-authored-by: Kshitij Aranke <kshitij.aranke@dbtlabs.com>
* add meta attribute to nodeinfo for events
* also add meta to dataclass
* add to unit test to ensure meta is added
* adding functional test to check that meta is passed to nodeinfo during logging
* changelog
* remove unused import
* add tests with non-string keys
* renaming test dict keys
* add non-string value
* resolve failing test
* test additional non-string values
* fix flake8
* Stringify meta dict in node_info
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
* convert the test and fix an error due to a dead code seed
* Get rid of old test
* Remove unfortunately added files. Don't use that *
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Update types.proto
* pre-commit passes
* Cleanup tests and tweak EventLevels
* Put node_info back on SQLCommit. Add "level" to fire_event function.
* use event.message() in warn_or_error
* Fix logging test
* Changie
* Fix a couple of unit tests
* import Protocol from typing_extensions for 3.7
* ✨ adding pre-commit install to make dev
* 🎨 updating format of Makefile and CONTRIBUTING.md
* 📝 adding changelog via changie new
* ✨ adding dev_req to Makefile + docs
* 🎨 remove dev_req from docs, dry makefile
* Align names of `.PHONY` targets with their associated rules
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Co-authored-by: Doug Beatty <doug.beatty@dbtlabs.com>
* starting to move jinja exceptions
* convert some exceptions
* add back old functions for backward compatibility
* organize
* more conversions
* more conversions
* add changelog
* split out CacheInconsistency
* more conversions
* convert even more
* convert parsingexceptions
* fix tests
* more conversions
* more conversions
* finish converting exception functions
* convert more tests
* standardize to msg
* remove some TODOs
* fix test param and check the rest
* add comment, move exceptions
* add types
* fix type errors
* fix type for adapter_response
* remove 0.13 version from message
* pass predicates to merge strategy
* postgres delete and insert
* merge with predicates
* update to use arbitrary list of predicates, not dictionaries, merge and delete
* changie
* add functional test to adapter zone
* comma in test config
* add test for incremental predicates delete and insert postgres
* update test structure for inheritance
* handle predicates config for backwards compatibility
* test for predicates keyword
* Add generated CLI API docs
Co-authored-by: Colin <colin.rogers@dbtlabs.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
* Remove unneeded SQL compilation attributes from SeedNode
* Fix various places that referenced removed attributes
* Cleanup a few Unions
* More formatting in nodes.py
* Mypy passing. Untested.
* Unit tests working
* use "doc" in documentation unique_ids
* update some doc_ids
* Fix some artifact tests. Still need previous version.
* Update manifest/v8.json
* Move relation_names to parsing
* Fix a couple of tests
* Update some artifacts. snapshot_seed has wrong schema.
* Changie
* Tweak NodeType.Documentation
* Put store_failures property in the right place
* Fix setting relation_name
* update changie to require issue or pr, and allow multiple
* remove extraneous data from changelog files.
* allow for multiple PR/issues to be entered
* update contributing guide
* remove issue number from bot changelogs
* update format of PR
* fix dependency changelogs
* remove extra line
* remove extra lines, tweak contributor wording
* Update CONTRIBUTING.md
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
* Get running with Python 3.11
* More tests passing, mypy still unhappy
* Upgrade to 3.11, and bump mashumaro
* patch importlib.import_module last
* lambda: Policy() default_factory on include and quote policy
* Add changelog entry
* Put a lambda on it
* Fix text formatting for log file
* Handle variant type return from e.log_level()
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Josh Taylor <joshuataylorx@gmail.com>
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
* feat: add a list of default values to the ctx manager
* tests: dbt.config.get default values
* feat: validate the num of args in config.get
* feat: jinja template for dbt.config.get default values
* docs: changie yaml
* fix: typo on error message
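The behavior these commits describe can be sketched minimally. This is a hedged stand-in, not dbt's implementation: the real logic lives in dbt's model context, and the `ModelConfig` class name is ours.

```python
class ModelConfig:
    def __init__(self, values):
        self._values = values

    def get(self, *args):
        # validate the number of arguments, as described above
        if not 1 <= len(args) <= 2:
            raise TypeError("config.get takes a key and an optional default")
        key = args[0]
        if key in self._values:
            return self._values[key]
        # return the caller-supplied default when the key was never set
        return args[1] if len(args) == 2 else None

config = ModelConfig({"materialized": "table"})
print(config.get("materialized"))      # → table
print(config.get("unique_key", "id"))  # → id
print(config.get("partition_by"))      # → None
```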
Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
* v0 - new dbt deps type: tarball url
in support of
https://github.com/dbt-labs/dbt-core/issues/4205
* flake8 fixes
* adding max size tarball condition
* clean up imports
* typing
* adding sha1 and subdirectory options; improve logging feedback
sha1: allow user to specify sha1 in packages.yaml, will only install if package matches
subdirectory: allow user to specify a subdirectory of the package in the tarfile, if the package has a non-standard structure (like with the git subdirectory option)
* simple tests added
* flake fixes
* changes to support tests; adding exceptions; fire_event logging
* new logging events
* tarball exceptions added
* build out tests
* removing in memory tarball test
* update type codes to M - Misc
* adding new events to test_events
* fix spacing for flake
* add retry download code - as used in registry calls
* clean
* remove saving tar in memory inside tarfile object
will hit url multiple times instead
* remove duplicative code after refactor
* black updates
* black formatting
* black formatting
* refactor - no more in-memory tarfile - all as file operations now
- remove tarfile passing, always use tempfile instead
- reorganize system.* functions, removing duplicative code
- more notes on current flow and structure - esp need for pattern of 1) unpack 2) scan for package dir 3) copy to destination.
- cleaning
* cleaning and sync to new tarball code
* cleaning and sync to new tarball code
* requested changes from PR
https://github.com/dbt-labs/dbt-core/pull/4689#discussion_r812970847
* reversions from revision 2
removing sha1 check to simplify/mirror hub install pattern
* simplify/mirror hub install pattern
to simplify/mirror hub install pattern
- removing sha1 check
- supply name/version to act as our 'metadata' source
* simplify/mirror hub install pattern
simplify with goal of mirroring hub install pattern
- supporting subfolders like git packages, and sha1 checks are removed
- existing code from RegistryPinnedPackage (install() and download_and_untar()) performs the operations
- RegistryPinnedPackage's install() and download_and_untar() are not currently set up as functions that can be used across classes; these should be moved to dbt.deps.base or to a dbt.deps.common file. Need dbt Labs feedback on how to proceed (or leave as is)
* remove revisions, no longer doing package check
* slim down to basic tests
more complex features have been removed (sha1, subfolder) so testing is much simpler!
* fix naming to match hub's behavior
remove version from package folder name
* refactor install and download to upstream PinnedPackage class
I'm on the fence about whether this is the right approach, but it seems the most sensible after some thought
* Create Features-20221107-105018.yaml
* fix flake, black, mypy errors
* additional flake/black fixes
* Update .changes/unreleased/Features-20221107-105018.yaml
fix username on changelog
Co-authored-by: Emily Rockman <ebuschang@gmail.com>
* change to fstring
Co-authored-by: Emily Rockman <ebuschang@gmail.com>
* cleaning - remove comment
* remove comment/question for dbt team
* in support of issuecomment 1334055944
https://github.com/dbt-labs/dbt-core/pull/4689#issuecomment-1334055944
* in support of issuecomment 1334118433
https://github.com/dbt-labs/dbt-core/pull/4689#issuecomment-1334118433
* black fixes; remove debug bits
* remove `.format` & add 'tarball' as version
'tarball' as version so that the temp files format nicely:
[tempfile_location]/dbt_utils_2..tar.gz # old
vs
[tempfile_location]/dbt_utils_1.tarball.tar.gz # current
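The naming fix above can be sketched as follows. The helper name and the `{name}_{n}.{version}.tar.gz` shape are inferred from the example file names, not taken from dbt-core's source; an empty version previously left the double dot shown in the "old" line.

```python
def tarball_temp_name(package_name: str, download_count: int) -> str:
    """Hypothetical sketch: use the literal string "tarball" as the
    version so the temp file name formats cleanly (no empty segment)."""
    version = "tarball"
    return f"{package_name}_{download_count}.{version}.tar.gz"


print(tarball_temp_name("dbt_utils", 1))  # dbt_utils_1.tarball.tar.gz
```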
* port os.path refs in `PinnedPackage._install` to pathlib
* lowercase as per PR feedback
* update tests after removing version arg
goes along with 8787ba41af
Co-authored-by: Emily Rockman <ebuschang@gmail.com>
* removed Compiled versions of nodes
* Remove compiled fields from dictionary if not compiled
* check compiled is False instead of attribute existence in env_var
processing
* Update artifacts test (CompiledSnapshotNode did not have SnapshotConfig)
* Changie
* more complicated 'compiling' check in env_var
* Update test_exit_codes.py
* CT-1405: Refactor event logging code
* CT-1405: Add changelog entry
* CT-1405: Add code to protect against using closed streams from past tests.
* CT-1405: Restore unit test which was only failing locally
* CT-1405: Document a hack with issue # to resolve it in the future
* CT-1405: Make black happy
* CT-1405: Get rid of confusing factory function and duplicated function
* CT-1405: Remove unused event from types.proto and auto-gen'd file
* Fix the partial parse path
Partial parse should use the project root, or it does not resolve to the correct path.
E.g. `target-path: ../some/dir/target`, if not run from the root, creates an erroneous folder.
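The fix amounts to anchoring the target path at the project root rather than the current working directory. A minimal sketch, assuming the helper name is hypothetical (dbt stores the partial parse state in `partial_parse.msgpack` under the target path):

```python
import os


def partial_parse_file(project_root: str, target_path: str) -> str:
    """Hypothetical sketch: resolve target-path against the project root,
    not the cwd, so relative values like '../some/dir/target' resolve
    consistently no matter where dbt is invoked from."""
    return os.path.join(project_root, target_path, "partial_parse.msgpack")
```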
* Run pre-commit
* Changie
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
* reformatting of test after some spike investigation
* reformat code to pull tests back into base class definition, move a test to more appropriate spot
* Convert incremental schema tests.
* Drop the old test.
* Bad git add. My disappointment is immeasurable and my day has been ruined.
* Adjustments for flake8.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Convert old test.
Add documentation. Adapt and reenable previously skipped test.
* Convert test and adapt and comment for current standards.
* Remove old versions of tests.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Convert test 067. One bug outstanding.
* Test now working! Schema needed renaming to avoid problems with the 63-character max
* Remove old test.
* Add some docs and rewrite.
* Add exception for when audit tables' schema runs over the db limit.
* Code cleanup.
* Revert exception.
* Round out comments.
* Rename what shouldn't be a base class.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* BaseContext: expose md5 function in context
* BaseContext: add return value type
* Add changie entry
* rename "md5" to "local_md5"
* fix test_context.py
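The `local_md5` context member exposed above can be sketched as a thin wrapper over the standard library; this is an assumption about its shape, not dbt-core's exact code (the "local" prefix signals it hashes a local string value rather than anything in the warehouse):

```python
import hashlib


def local_md5(value: str) -> str:
    """Hypothetical sketch of the context method: md5 hex digest of a
    local string value, usable in Jinja as {{ local_md5("some_string") }}."""
    return hashlib.md5(value.encode("utf-8")).hexdigest()
```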
* init pr for dbt_debug test conversion
* removal of old test
* minor test format change
* add new Base class and Test classes
* reformatting test, new method for capsys and error message to check; TODO: fix badproject
* reformatting tests, ready for review
* checking yaml file, and small reformat
* modifying since update wasn't working in ci/cd
* Combine various print result log events with different levels
* Changie
* more merge cleanup
* Specify DynamicLevel for event classes that must specify level
* Initial structured logging changes
* remove "this" from core/dbt/events/functions.py
* CT-1047: Fix execution_time definitions to use float
* CT-1047: Revert unintended checking of changes to functions.py
* WIP
* first pass to resolve circular deps
* more circular dep resolution
* remove a bunch of duplication
* move message into log line
* update comments
* fix field that went missing during rebase
* remove double import
* remove some comments and extra code
* fix pre-commit
* rework deprecations
* WIP converting messages
* WIP converting messages
* remove stray comment
* WIP more message conversion
* WIP more message conversion
* tweak the messages
* convert last message
* rename
* remove warn_or_raise as never used
* add fake calls to all new events
* fix some tests
* put back deprecation
* restore deprecation fully
* fix unit test
* fix log levels
* remove some skipped ids
* fix macro log function
* fix how messages are built to match expected outcome
* fix expected test message
* small fixes from reviews
* fix conflict resolution in UI
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
Co-authored-by: Peter Allen Webb <peter.webb@dbtlabs.com>
* Convert test to functional set.
* Remove old statement tests from integration test set.
* Nix whitespace
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Create functors to initialize event types with str-type member attributes. Before this change, the spec of various classes expected the base_msg and msg params to be strs. This assumption did not always hold true. A post_init hook ensures the spec is obeyed.
* Add new changelog.
* Add msg type change functor to a few other events that could use it.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Updated string formatting on non-f-strings.
Found all cases of strings separated by white space on a single line and
removed white space separation. EX: "hello " "world" -> "hello world".
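The pattern being cleaned up relies on Python concatenating adjacent string literals at compile time, which hides a significant space inside the first literal:

```python
# Adjacent string literals are joined at compile time, so both lines below
# produce the same string -- but the trailing space hidden inside the first
# literal of the old style is easy to misread or drop when editing.
before = "hello " "world"   # old style: two literals, space inside the first
after = "hello world"       # new style: one literal

print(before == after)  # True
```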
* add changelog entry
* CT-625: Fail with clear message for invalid materialized vals
* CT-625: Increase test coverage, run pre-commit checks
* CT-625: run black on problem file
* CT-625: Add changelog entry
* CT-625: Remove test that didn't make sense
* Migrate test
* Remove old integration test.
* Simplify object definitions since we enforce python 3
* Factor many fixtures into a file.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* init query_comment test conversion pr
* importing model and macro, changing to new project_config_update, tests passing locally for core
* delete old integration test
* trying to test against other adapters
* update to main
* file rename
* file rename
* import change
* move query_comment directory to functional/
* move test directory back to adapter zone
* update to main
* updating core test based on feedback from @gshank
* testing removing target checking
* edited comment to correctly specify that views are set, not tables
* updated init test to match starter project change
* added changelog
* update 3 other occurrences of the init test for text update
* clean up debugging
* reword some comments
* changelog
* add more tests
* move around the manifest.node
* fix typos
* all tests passing
* move logic for moving around nodes
* add tests
* more cleanup
* fix failing pp test
* remove comments
* add more tests, patch all disabled nodes
* fix test for windows
* fix node processing to not overwrite enabled nodes
* add checking disabled in pp, fix error msg
* stop deepcopying all nodes when processing
* update error message
* init pr for 026 test conversion
* removing old test; got all tests set up. Need to find the best way to handle regex in the new test, and decide how we actually want to check that we didn't run anything against it
* changes to test_alias_dupe_throws_exception, passing locally now
* adding test cases for final test
* following the create-new-schema method, tests are passing; up for review for core code
* moving alias test to adapter zone
* adding Base Classes
* changing ref to fixtures
* add double check to test
* minor change to alt schema name formation, removal of unneeded setup fixture
* typo in model names
* update to main
* pull models/schemas/macros into a fixtures file
- Python model initial version ([#5261](https://github.com/dbt-labs/dbt-core/issues/5261), [#5421](https://github.com/dbt-labs/dbt-core/pull/5421))
- allows user to include the file extension for .py models in the dbt run -m command. ([#5289](https://github.com/dbt-labs/dbt-core/issues/5289), [#5295](https://github.com/dbt-labs/dbt-core/pull/5295))
- Incremental materialization refactor and cleanup ([#5245](https://github.com/dbt-labs/dbt-core/issues/5245), [#5359](https://github.com/dbt-labs/dbt-core/pull/5359))
- Python models can support incremental logic ([#0](https://github.com/dbt-labs/dbt-core/issues/0), [#35](https://github.com/dbt-labs/dbt-core/pull/35))
- Add reusable function for retrying adapter connections. Utilize said function to add retries for Postgres (and Redshift). ([#5022](https://github.com/dbt-labs/dbt-core/issues/5022), [#5432](https://github.com/dbt-labs/dbt-core/pull/5432))
- add exponential backoff to connection retries on Postgres (and Redshift) ([#5502](https://github.com/dbt-labs/dbt-core/issues/5502), [#5503](https://github.com/dbt-labs/dbt-core/pull/5503))
### Fixes
- Add context to compilation errors generated while rendering generic test configuration values. ([#5294](https://github.com/dbt-labs/dbt-core/issues/5294), [#5393](https://github.com/dbt-labs/dbt-core/pull/5393))
- Rename try to strict for more intuitiveness ([#5475](https://github.com/dbt-labs/dbt-core/issues/5475), [#5477](https://github.com/dbt-labs/dbt-core/pull/5477))
- Ignore empty strings passed in as secrets ([#5312](https://github.com/dbt-labs/dbt-core/issues/5312), [#5518](https://github.com/dbt-labs/dbt-core/pull/5518))
- Fix handling of top-level exceptions ([#5564](https://github.com/dbt-labs/dbt-core/issues/5564), [#5560](https://github.com/dbt-labs/dbt-core/pull/5560))
### Docs
- Update dependency inline-source from ^6.1.5 to ^7.2.0 ([#5574](https://github.com/dbt-labs/dbt-core/issues/5574), [#5577](https://github.com/dbt-labs/dbt-core/pull/5577))
- Update dependency jest from ^26.2.2 to ^28.1.3 ([#5574](https://github.com/dbt-labs/dbt-core/issues/5574), [#5577](https://github.com/dbt-labs/dbt-core/pull/5577))
- Update dependency underscore from ^1.9.0 to ^1.13.4 ([#5574](https://github.com/dbt-labs/dbt-core/issues/5574), [#5577](https://github.com/dbt-labs/dbt-core/pull/5577))
- Update dependency webpack-cli from ^3.3.12 to ^4.7.0 ([#5574](https://github.com/dbt-labs/dbt-core/issues/5574), [#5577](https://github.com/dbt-labs/dbt-core/pull/5577))
- Update dependency webpack-dev-server from ^3.1.11 to ^4.9.3 ([#5574](https://github.com/dbt-labs/dbt-core/issues/5574), [#5577](https://github.com/dbt-labs/dbt-core/pull/5577))
- Searches no longer require perfect matches, and instead consider each word individually. `my model` or `model my` will now find `my_model`, without the need for underscores ([#5574](https://github.com/dbt-labs/dbt-core/issues/5574), [#5577](https://github.com/dbt-labs/dbt-core/pull/5577))
- Support the renaming of SQL to code happening in dbt-core ([#5574](https://github.com/dbt-labs/dbt-core/issues/5574), [#5577](https://github.com/dbt-labs/dbt-core/pull/5577))
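The per-word search behavior described in the Docs entry above can be sketched as follows. This is a hypothetical illustration of the matching rule, not the dbt-docs implementation (which is JavaScript):

```python
def matches(query: str, name: str) -> bool:
    """Hypothetical sketch: a name matches when every whitespace-separated
    query word appears somewhere in it, in any order -- so no perfect
    match or underscores are required."""
    return all(word in name.lower() for word in query.lower().split())


print(matches("my model", "my_model"))   # True
print(matches("model my", "my_model"))   # True, order does not matter
```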
### Under the Hood
- Added language to tracked fields in run_model event ([#5571](https://github.com/dbt-labs/dbt-core/issues/5571), [#5469](https://github.com/dbt-labs/dbt-core/pull/5469))
- Update mashumaro to 3.0.3 ([#4940](https://github.com/dbt-labs/dbt-core/issues/4940), [#5118](https://github.com/dbt-labs/dbt-core/pull/5118))
- Add python incremental materialization test ([#0000](https://github.com/dbt-labs/dbt-core/issues/0000), [#5571](https://github.com/dbt-labs/dbt-core/pull/5571))
### Dependencies
- Upgrade to Jinja2==3.1.2 from Jinja2==2.11.3 ([#4748](https://github.com/dbt-labs/dbt-core/issues/4748), [#5465](https://github.com/dbt-labs/dbt-core/pull/5465))
- Bump mypy from 0.961 to 0.971 ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904), [#5495](https://github.com/dbt-labs/dbt-core/pull/5495))
- Remove pin for MarkupSafe from >=0.23,<2.1 ([#5506](https://github.com/dbt-labs/dbt-core/issues/5506), [#5507](https://github.com/dbt-labs/dbt-core/pull/5507))
- Add `--defer` flag to dbt compile & dbt docs generate ([#4110](https://github.com/dbt-labs/dbt-core/issues/4110), [#4514](https://github.com/dbt-labs/dbt-core/pull/4514))
- use MethodName.File when value ends with .csv ([#5578](https://github.com/dbt-labs/dbt-core/issues/5578), [#5581](https://github.com/dbt-labs/dbt-core/pull/5581))
- Make `docs` configurable in `dbt_project.yml` and add a `node_color` attribute to change the color of nodes in the DAG ([#5333](https://github.com/dbt-labs/dbt-core/issues/5333), [#5397](https://github.com/dbt-labs/dbt-core/pull/5397))
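The `node_color` attribute mentioned above is set under the `docs` config in `dbt_project.yml`, for example (the project name `my_project` is a placeholder):

```yaml
models:
  my_project:
    +docs:
      show: true
      node_color: "gold"
```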
- Adding ResolvedMetricReference helper functions and tests ([#5567](https://github.com/dbt-labs/dbt-core/issues/5567), [#5607](https://github.com/dbt-labs/dbt-core/pull/5607))
- Check dbt-core version requirements when installing Hub packages ([#5648](https://github.com/dbt-labs/dbt-core/issues/5648), [#5651](https://github.com/dbt-labs/dbt-core/pull/5651))
### Fixes
- Remove the default 256 characters limit on postgres character varying type when no limitation is set ([#5238](https://github.com/dbt-labs/dbt-core/issues/5238), [#5292](https://github.com/dbt-labs/dbt-core/pull/5292))
- Include schema file config in unrendered_config ([#5338](https://github.com/dbt-labs/dbt-core/issues/5338), [#5344](https://github.com/dbt-labs/dbt-core/pull/5344))
- Resolves #5351 - Do not consider shorter varchar cols as schema changes ([#5351](https://github.com/dbt-labs/dbt-core/issues/5351), [#5395](https://github.com/dbt-labs/dbt-core/pull/5395))
- Extended validations for the project names ([#5379](https://github.com/dbt-labs/dbt-core/issues/5379), [#5620](https://github.com/dbt-labs/dbt-core/pull/5620))
- Use sys.exit instead of exit ([#5621](https://github.com/dbt-labs/dbt-core/issues/5621), [#5627](https://github.com/dbt-labs/dbt-core/pull/5627))
- Finishing logic upgrade to Redshift for name truncation collisions. ([#5586](https://github.com/dbt-labs/dbt-core/issues/5586), [#5656](https://github.com/dbt-labs/dbt-core/pull/5656))
- multiple args for ref and source ([#5634](https://github.com/dbt-labs/dbt-core/issues/5634), [#5635](https://github.com/dbt-labs/dbt-core/pull/5635))
- Fix Unexpected behavior when chaining methods on dbt-ref'ed/sourced dataframes ([#5646](https://github.com/dbt-labs/dbt-core/issues/5646), [#5677](https://github.com/dbt-labs/dbt-core/pull/5677))
### Docs
- Leverages `docs.node_color` from `dbt-core` to color nodes in the DAG ([dbt-docs/#44](https://github.com/dbt-labs/dbt-docs/issues/44), [dbt-docs/#281](https://github.com/dbt-labs/dbt-docs/pull/281))
### Under the Hood
- Save use of default env vars to manifest to enable partial parsing in those cases. ([#5155](https://github.com/dbt-labs/dbt-core/issues/5155), [#5589](https://github.com/dbt-labs/dbt-core/pull/5589))
- add more information to log line interop test failures ([#5658](https://github.com/dbt-labs/dbt-core/issues/5658), [#5659](https://github.com/dbt-labs/dbt-core/pull/5659))
- Add supported languages to materializations ([#5569](https://github.com/dbt-labs/dbt-core/issues/5569), [#5695](https://github.com/dbt-labs/dbt-core/pull/5695))
### Dependencies
- Bump python from 3.10.5-slim-bullseye to 3.10.6-slim-bullseye in /docker ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904), [#5623](https://github.com/dbt-labs/dbt-core/pull/5623))
- Bump mashumaro[msgpack] from 3.0.3 to 3.0.4 in /core ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904), [#5649](https://github.com/dbt-labs/dbt-core/pull/5649))
- This file provides a full account of all changes to `dbt-core` and `dbt-postgres`
- Changes are listed under the (pre)release in which they first appear. Subsequent releases include changes from previous releases.
- "Breaking changes" listed under a version may require action from end users or external maintainers when upgrading to that version.
- Do not edit this file directly. This file is auto-generated using [changie](https://github.com/miniscruff/changie). For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry)
body: 'Fix: Order-insensitive unit test equality assertion for expected/actual with
  multiple nulls'
time: 2024-05-22T18:28:55.91733-04:00
custom:
  Author: michelleark
  Issue: "10167"