* Add breakpoint.
* Move breakpoint.
* Add fix.
* Add changelog.
* Avoid sorting for the string case.
* Add unit test.
* Fix test.
* Add unit tests for coverage of the sort method.
* Add SQL format coverage.
* Modify behavior to log a warning and proceed.
* Address code review comments.
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Fix exclusive_primary_alt_value_setting to set warn_error_options correctly
* Add test
* Changie
* Fix unit test
* Replace conversion method
* Refactor normalize_warn_error_options
---------
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Correct `isort` configuration to include dbt-semantic-interfaces as internal
We thought we were already doing this. However, we accidentally missed the last
`s` of `dbt-semantic-interfaces`, so imports from dbt-semantic-interfaces were not
being identified as an internal package by isort. This fixes that.
* Run isort using updated configs to mark `dbt-semantic-interfaces` as included
* Fix `test_can_silence` tests in `test_warn_error_options.py` to ensure silencing works
We're fixing an issue wherein `silence` specifications in the `dbt_project.yaml`
weren't being respected. This was odd since we had tests specifically for this.
It turns out the tests were broken: the warning was actually being raised
as an error due to `include: 'all'`, and because it was raised as an error,
the event never went through the logger. We were only asserting in these tests that
the silenced event wasn't going through the logger (which it wasn't), so everything
"appeared" to be working when it actually wasn't. This is now
highlighted because `test_warn_error_options::TestWarnErrorOptionsFromProject::test_can_silence`
fails as of this commit.
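A minimal sketch of the strengthened test, with hypothetical fixture names (`project`, `catcher`) and helpers from `dbt.tests.util`; `DeprecatedModel` is an illustrative event name. The key point: asserting only on the logger misses the case where the warning is promoted to an error and never reaches the logger at all.
```
from dbt.tests.util import run_dbt, update_config_file

def test_can_silence(self, project, catcher) -> None:
    # `include: 'all'` would promote the warning to an error unless silencing works
    update_config_file(
        {"flags": {"warn_error_options": {"include": "all", "silence": ["DeprecatedModel"]}}},
        project.project_root,
        "dbt_project.yml",
    )
    # run_dbt asserts the invocation succeeds by default, which is exactly the
    # check the old tests were missing: a warning raised as an error fails the run
    run_dbt(["run"])
    # the original assertion: the silenced event never went through the logger
    assert len(catcher.caught_events) == 0
```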
* Fix setting `warn_error_options` via `dbt_project.yaml` flags.
Back when I did the work for #10058 (specifically c52d6531) I thought that
the `warn_error_options` would automatically be converted from the yaml
to the `WarnErrorOptions` object as we were building the `ProjectFlags` object,
which holds `warn_error_options`, via `ProjectFlags.from_dict`. And I thought
this was validated by the `test_can_silence` test added in c52d6531. However,
there were two problems:
1. The definition of `warn_error_options` on `ProjectFlags` is a dict, not a
`WarnErrorOptions` object
2. The `test_can_silence` test was broken, and not testing what I thought
The quick fix (this commit) is to ensure `silence` is passed to `WarnErrorOptions`
instantiation in `dbt.cli.flags.convert_config`. The better fix would be to make
the `warn_error_options` of `ProjectFlags` a `WarnErrorOptions` object instead of
a dict. However, to do this we first need to update dbt-common's `WarnErrorOptions`
definition to default `include` to an empty list. Doing so would allow us to get rid
of `convert_config` entirely.
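A sketch of the shape of the quick fix, assuming `WarnErrorOptions` (from dbt-common's `dbt_common.helper_types`) accepts `include`, `exclude`, and `silence`; forwarding `silence` is the piece that was missing:
```
from dbt_common.helper_types import WarnErrorOptions

def convert_config(config_name, config_value):
    # When project flags hand us warn_error_options as a plain dict, build the
    # real WarnErrorOptions object, now forwarding `silence` as well.
    if config_name.lower() == "warn_error_options" and isinstance(config_value, dict):
        return WarnErrorOptions(
            include=config_value.get("include", []),
            exclude=config_value.get("exclude", []),
            silence=config_value.get("silence", []),
        )
    return config_value
```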
* Add unit test for `ModelRunner.print_result_line`
* Add (and skip) unit test for `ModelRunner.execute`
An attempt at testing `ModelRunner.execute`. We should probably also be
asserting that the model has been executed. However, before we can get there,
we're running into runtime errors during `ModelRunner.execute`. Currently the
struggle is ensuring the adapter exists in the global factory when `execute`
goes looking for it (a possible workaround is sketched after the traceback below).
The error we're getting looks like the following:
```
def test_execute(self, table_model: ModelNode, manifest: Manifest, model_runner: ModelRunner) -> None:
> model_runner.execute(model=table_model, manifest=manifest)
tests/unit/task/test_run.py:121:
----
core/dbt/task/run.py:259: in execute
context = generate_runtime_model_context(model, self.config, manifest)
core/dbt/context/providers.py:1636: in generate_runtime_model_context
ctx = ModelContext(model, config, manifest, RuntimeProvider(), None)
core/dbt/context/providers.py:834: in __init__
self.adapter = get_adapter(self.config)
venv/lib/python3.10/site-packages/dbt/adapters/factory.py:207: in get_adapter
return FACTORY.lookup_adapter(config.credentials.type)
----
self = <dbt.adapters.factory.AdapterContainer object at 0x106e73280>, adapter_name = 'postgres'
def lookup_adapter(self, adapter_name: str) -> Adapter:
> return self.adapters[adapter_name]
E KeyError: 'postgres'
venv/lib/python3.10/site-packages/dbt/adapters/factory.py:132: KeyError
```
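One hypothetical way past this `KeyError`, sketched with pytest's `monkeypatch`: place the mocked adapter into the global factory under the credentials type that `execute` will look up (`mock_adapter` is an assumed fixture):
```
from dbt.adapters.factory import FACTORY

def test_execute(self, monkeypatch, table_model, manifest, model_runner, mock_adapter) -> None:
    # register the mock under the name lookup_adapter() will ask for;
    # monkeypatch restores the global factory dict after the test
    monkeypatch.setitem(FACTORY.adapters, "postgres", mock_adapter)
    model_runner.execute(model=table_model, manifest=manifest)
```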
* Add `postgres_adapter` fixture for use in `TestModelRunner`
Previously we were running into an issue where, during `ModelRunner.execute`,
the mock adapter we were using wouldn't be found in the global adapter
factory. We've gotten past this error by supplying a "real" adapter, a
`PostgresAdapter` instance (a fixture sketch follows the traceback below).
However, we're now running into a new error in which the materialization
macro can't be found. This error looks like:
```
model_runner = <dbt.task.run.ModelRunner object at 0x106746650>
def test_execute(
self, table_model: ModelNode, manifest: Manifest, model_runner: ModelRunner
) -> None:
> model_runner.execute(model=table_model, manifest=manifest)
tests/unit/task/test_run.py:129:
----
self = <dbt.task.run.ModelRunner object at 0x106746650>
model = ModelNode(database='dbt', schema='dbt_schema', name='table_model', resource_type=<NodeType.Model: 'model'>, package_na...ected'>, constraints=[], version=None, latest_version=None, deprecation_date=None, defer_relation=None, primary_key=[])
manifest = Manifest(nodes={'seed.pkg.seed': SeedNode(database='dbt', schema='dbt_schema', name='seed', resource_type=<NodeType.Se...s(show=True, node_color=None), patch_path=None, arguments=[], created_at=1718229810.21914, supported_languages=None)}})
def execute(self, model, manifest):
context = generate_runtime_model_context(model, self.config, manifest)
materialization_macro = manifest.find_materialization_macro_by_name(
self.config.project_name, model.get_materialization(), self.adapter.type()
)
if materialization_macro is None:
> raise MissingMaterializationError(
materialization=model.get_materialization(), adapter_type=self.adapter.type()
)
E dbt.adapters.exceptions.compilation.MissingMaterializationError: Compilation Error
E No materialization 'table' was found for adapter postgres! (searched types 'default' and 'postgres')
core/dbt/task/run.py:266: MissingMaterializationError
```
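A sketch of the `postgres_adapter` fixture described above; the constructor signature (runtime config plus a multiprocessing context) is an assumption based on dbt-core 1.8-era adapters, and `runtime_config` is assumed to be an existing fixture:
```
import pytest
from multiprocessing import get_context

from dbt.adapters.factory import FACTORY
from dbt.adapters.postgres import PostgresAdapter

@pytest.fixture
def postgres_adapter(runtime_config):
    adapter = PostgresAdapter(runtime_config, get_context("spawn"))
    # register in the global factory so lookup_adapter("postgres") succeeds
    FACTORY.adapters["postgres"] = adapter
    yield adapter
    # clean up so the global factory doesn't leak state across tests
    FACTORY.adapters.pop("postgres", None)
```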
* Add spoofed macro fixture `materialization_table_default` for `test_execute` test
Previously the `TestModelRunner:test_execute` test was running into a runtime error
due to the `materialization_table_default` macro not existing in the project. This
commit adds that macro to the project (though it should ideally get loaded via interactions
between the manifest and adapter). Manually adding it resolved our previous issue, but created
a new one: the macro appears to not be properly loaded into the manifest, and thus isn't
discoverable later on when getting the macros for the jinja context. This leads to an error
that looks like the following:
```
model_runner = <dbt.task.run.ModelRunner object at 0x1080a4f70>
def test_execute(
self, table_model: ModelNode, manifest: Manifest, model_runner: ModelRunner
) -> None:
> model_runner.execute(model=table_model, manifest=manifest)
tests/unit/task/test_run.py:129:
----
core/dbt/task/run.py:287: in execute
result = MacroGenerator(
core/dbt/clients/jinja.py:82: in __call__
return self.call_macro(*args, **kwargs)
venv/lib/python3.10/site-packages/dbt_common/clients/jinja.py:294: in call_macro
macro = self.get_macro()
----
self = <dbt.clients.jinja.MacroGenerator object at 0x1080f3130>
def get_macro(self):
name = self.get_name()
template = self.get_template()
# make the module. previously we set both vars and local, but that's
# redundant: They both end up in the same place
# make_module is in jinja2.environment. It returns a TemplateModule
module = template.make_module(vars=self.context, shared=False)
> macro = module.__dict__[get_dbt_macro_name(name)]
E KeyError: 'dbt_macro__materialization_table_default'
venv/lib/python3.10/site-packages/dbt_common/clients/jinja.py:277: KeyError
```
It's becoming apparent that we need to find a better way to either mock or legitimately
load the default and adapter macros. At this point I think I've exhausted the time box
I should be using to figure out whether testing the `ModelRunner` class is currently
possible; the takeaway is that more work has yet to be done.
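For reference, a rough sketch of the spoofed macro fixture; field values are illustrative rather than dbt's exact `Macro` contract:
```
import pytest
from dbt.contracts.graph.nodes import Macro
from dbt.node_types import NodeType

@pytest.fixture
def materialization_table_default() -> Macro:
    # a bare-bones default `table` materialization: enough for
    # find_materialization_macro_by_name to find, but with no real body
    macro_sql = "{% materialization table, default %}{% endmaterialization %}"
    return Macro(
        name="materialization_table_default",
        resource_type=NodeType.Macro,
        package_name="dbt",
        path="materializations/table.sql",
        original_file_path="materializations/table.sql",
        unique_id="macro.dbt.materialization_table_default",
        macro_sql=macro_sql,
    )
```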
* Begin adding the `LogModelResult` event catcher to the event manager class fixture
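A sketch of the catcher pattern, with illustrative names: a small callback object appended to the event manager's callbacks so tests can assert on specific events such as `LogModelResult`:
```
from dataclasses import dataclass, field
from typing import List

from dbt_common.events.base_types import EventMsg

@dataclass
class EventCatcher:
    event_to_catch: type
    caught_events: List[EventMsg] = field(default_factory=list)

    def catch(self, event: EventMsg) -> None:
        # event manager callbacks see every event; keep only the target type
        if event.info.name == self.event_to_catch.__name__:
            self.caught_events.append(event)
```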
* Initial push for issue #10198
* Add changelog
* Add unit tests based on Michelle's example
* Add data_tests and post_hook unit tests
* Pull the creation of macro_func out of the try block
* Revert last commit
* Pull the macro_func definition back out of the try block (sketched below)
* Update code formatting
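The restructuring those commits describe, as a fragment with illustrative names (`macro`, `context`, and `handle_macro_failure` are placeholders): building the macro function outside the `try` means construction-time errors surface directly instead of being swallowed by the handler meant for execution-time errors.
```
from dbt.clients.jinja import MacroGenerator
from dbt_common.exceptions import DbtRuntimeError

# build the macro function outside the try: a failure here (e.g. a missing
# macro) now propagates instead of being misreported as an execution error
macro_func = MacroGenerator(macro, context)
try:
    result = macro_func()  # the try now guards only the actual call
except DbtRuntimeError as exc:
    handle_macro_failure(exc)  # placeholder handler, for illustration
```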
* Add basic semantic layer fixture nodes to unit test `manifest` fixture
We're doing this in preparation for a unit test that will be testing
these nodes (as well as others), and thus we want them in the manifest.
* Add `WhereFilterIntersection` to `QueryParams` of `saved_query` fixture
In the previous commit, 58990aa450, we added
the `saved_query` fixture to the `manifest` fixture. This broke the test
`tests/unit/parser/test_manifest.py::TestPartialParse::test_partial_parse_by_version`.
It broke because `Manifest.deepcopy` basically dictifies things. When we were
dictifying the `QueryParams` of the `saved_query`, the `where` key was getting
dropped because it was `None`. We'd then run into a runtime error on re-instantiation of the
`QueryParams` because, although `where` is declared as _optional_, we don't set a default
value for it. And thus, kaboom :( (a minimal sketch of this failure mode follows below)
We should probably provide a default value for `where`. However, that is out of scope
for this branch of work.
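A minimal, self-contained sketch of that failure mode using a stand-in dataclass rather than dbt's real `QueryParams`: an `Optional` field with no default survives being set to `None`, but not the dictify/re-instantiate round trip.
```
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QueryParamsSketch:  # stand-in for the real QueryParams
    metrics: List[str]
    where: Optional[str]  # optional *type*, but no default value

def dictify(obj) -> dict:
    # mimic the serialization described above: None-valued keys are dropped
    return {k: v for k, v in vars(obj).items() if v is not None}

params = QueryParamsSketch(metrics=["revenue"], where=None)
data = dictify(params)     # {'metrics': ['revenue']} -- 'where' is gone
QueryParamsSketch(**data)  # TypeError: missing required argument 'where'
```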
* Fix `test_select_fqn` to account for newly included semantic model
In 58990aa450 we added a semantic model
to the `manifest` fixture. This broke the test
`tests/unit/graph/test_selector_methods.py::test_select_fqn` because in
the test it selects nodes based on the string `*.*.*_model`. The newly
added semantic model matches this, and thus needed to be added to the
expected results.
* Add unit tests for `_warn_for_unused_resource_config_paths` method
Note: At this point, the test fails when run with a `unit_test` config
that should be considered used. This is because the config is not being
properly identified as used.
* Include `unit_tests` in `Manifest.get_resource_fqns`
Because `unit_tests` weren't being included in calls to `Manifest.get_resource_fqns`,
it always appeared to `_warn_for_unused_resource_config_paths` that there were no
unit tests in the manifest. Because of this, `_warn_for_unused_resource_config_paths` thought
that _any_ `unit_test` config in `dbt_project.yaml` was unused.
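A sketch of the shape of the fix, assuming `Manifest.get_resource_fqns` builds its mapping by chaining over the manifest's resource collections; `unit_tests` just needs to join that chain:
```
from itertools import chain
from typing import Dict, Set, Tuple

def get_resource_fqns(self) -> Dict[str, Set[Tuple[str, ...]]]:
    resource_fqns: Dict[str, Set[Tuple[str, ...]]] = {}
    all_resources = chain(
        self.exposures.values(),
        self.nodes.values(),
        self.sources.values(),
        self.metrics.values(),
        self.unit_tests.values(),  # previously missing from the chain
    )
    for resource in all_resources:
        # group fully-qualified names by pluralized resource type, e.g. "unit_tests"
        plural = resource.resource_type.pluralize()
        resource_fqns.setdefault(plural, set()).add(tuple(resource.fqn))
    return resource_fqns
```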