forked from repo-mirrors/dbt-core
Compare commits
52 Commits
postgres-s
...
v0.21.1
| SHA1 |
|---|
| 08b2f16e10 |
| c3fbb3727e |
| d833c8e45a |
| 8cc288df57 |
| bbe5ff9b74 |
| 3fcc19a45a |
| 827059f37a |
| 5f1ccac564 |
| 7b4298f013 |
| 6a17f6e8f0 |
| 8c0867eff9 |
| 1e6a32dda3 |
| ae45ee0702 |
| c66a82beda |
| 61caa79b36 |
| 3964efa6ea |
| cb3890b69e |
| 162f3a1dfb |
| a1c8374ae8 |
| bdc573ec73 |
| c009485de2 |
| 9cdc451bc8 |
| 4718dd3a1e |
| 49796d6e13 |
| ece3d2c105 |
| 52bedbad23 |
| eb079dd818 |
| 2a99431c8d |
| df5953a71d |
| 641b0fa365 |
| 8876afdb14 |
| a97a9c9942 |
| 0070cd99de |
| 78a1bbe3c7 |
| cde82fa2b1 |
| 34cec7c7b0 |
| db5caf97ae |
| 847046171e |
| 5dd37a9fb8 |
| a2bdd08d88 |
| 1807526d0a |
| 362770f5bd |
| af38f51041 |
| efc8ece12e |
| 7471f07431 |
| 6fa30d10ea |
| 35150f914f |
| b477be9eff |
| b67e877cc1 |
| 1c066cd680 |
| ec97b46caf |
| b5bb354929 |
@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 0.21.0b2
+current_version = 0.21.1
 parse = (?P<major>\d+)
     \.(?P<minor>\d+)
     \.(?P<patch>\d+)
2  .github/workflows/integration.yml  vendored
@@ -182,7 +182,7 @@ jobs:
       - name: Install python dependencies
         run: |
-          pip install --upgrade pip
+          pip install --user --upgrade pip
           pip install tox
           pip --version
           tox --version
8  .github/workflows/main.yml  vendored
@@ -61,7 +61,7 @@ jobs:
       - name: Install python dependencies
         run: |
-          pip install --upgrade pip
+          pip install --user --upgrade pip
           pip install tox
           pip --version
           tox --version
@@ -96,7 +96,7 @@ jobs:
       - name: Install python dependencies
         run: |
-          pip install --upgrade pip
+          pip install --user --upgrade pip
           pip install tox
           pip --version
           tox --version
@@ -133,7 +133,7 @@ jobs:
       - name: Install python dependencies
         run: |
-          pip install --upgrade pip
+          pip install --user --upgrade pip
           pip install --upgrade setuptools wheel twine check-wheel-contents
           pip --version
@@ -177,7 +177,7 @@ jobs:
       - name: Install python dependencies
         run: |
-          pip install --upgrade pip
+          pip install --user --upgrade pip
           pip install --upgrade wheel
           pip --version
56  CHANGELOG.md
@@ -1,10 +1,52 @@
-## dbt 0.21.0 (Release TBD)
+## dbt 0.21.1 (November 29, 2021)

-## dbt 0.21.0b2 (August 19, 2021)
+## dbt 0.21.1rc2 (November 15, 2021)
+
+### Fixes
+- Add `get_where_subquery` to test macro namespace, fixing custom generic tests that rely on introspecting the `model` arg at parse time ([#4195](https://github.com/dbt-labs/dbt/issues/4195), [#4197](https://github.com/dbt-labs/dbt/pull/4197))
+
+## dbt 0.21.1rc1 (November 03, 2021)
+
+### Fixes
+- Performance: Use child_map to find tests for nodes in resolve_graph ([#4012](https://github.com/dbt-labs/dbt/issues/4012), [#4057](https://github.com/dbt-labs/dbt/pull/4057))
+- Switch `unique_field` from abstractproperty to optional property. Add docstring ([#4025](https://github.com/dbt-labs/dbt-core/issues/4025), [#4028](https://github.com/dbt-labs/dbt-core/pull/4028))
+- Fix multiple partial parsing errors ([#3996](https://github.com/dbt-labs/dbt/issues/3006), [#4020](https://github.com/dbt-labs/dbt/pull/4018))
+- Include only relational nodes in `database_schema_set` ([#4063](https://github.com/dbt-labs/dbt-core/issues/4063), [#4077](https://github.com/dbt-labs/dbt-core/pull/4077))
+- Fixed bug with `error_if` test option ([#4070](https://github.com/dbt-labs/dbt-core/pull/4070))
+- Added support for tests on databases that lack real boolean types. ([#4084](https://github.com/dbt-labs/dbt-core/issues/4084))
+- Prefer macros defined in the project over the ones in a package by default ([#4106](https://github.com/dbt-labs/dbt-core/issues/4106), [#4114](https://github.com/dbt-labs/dbt-core/pull/4114))
+- Scrub secrets coming from `CommandError`s so they don't get exposed in logs. ([#4138](https://github.com/dbt-labs/dbt-core/pull/4138))
+- Syntax fix in `alter_relation_add_remove_columns` if only removing columns in `on_schema_change: sync_all_columns` ([#4147](https://github.com/dbt-labs/dbt-core/issues/4147))
+- Increase performance of graph subset selection ([#4135](https://github.com/dbt-labs/dbt-core/issues/4135), [#4155](https://github.com/dbt-labs/dbt-core/pull/4155))
+- Add downstream test edges for `build` task _only_. Restore previous graph construction, compilation performance, and node selection behavior (`test+`) for all other tasks ([#4135](https://github.com/dbt-labs/dbt-core/issues/4135), [#4143](https://github.com/dbt-labs/dbt-core/pull/4143))
+- Don't require a strict/proper subset when adding testing edges to specialized graph for `build` ([#4158](https://github.com/dbt-labs/dbt-core/issues/4135), [#4158](https://github.com/dbt-labs/dbt-core/pull/4160))
+- Capping `google-api-core` to version `1.31.3` due to `protobuf` dependency conflict ([#4192](https://github.com/dbt-labs/dbt-core/pull/4192))
+
+Contributors:
+- [@ljhopkins2](https://github.com/ljhopkins2) ([#4077](https://github.com/dbt-labs/dbt-core/pull/4077))
+- [@JCZuurmond](https://github.com/jczuurmond) ([#4114](https://github.com/dbt-labs/dbt-core/pull/4114))
+
+## dbt 0.21.0 (October 04, 2021)
+
+## dbt 0.21.0rc2 (September 27, 2021)
+
+### Fixes
+- Fix batching for large seeds on Snowflake ([#3941](https://github.com/dbt-labs/dbt/issues/3941), [#3942](https://github.com/dbt-labs/dbt/pull/3942))
+- Avoid infinite recursion in `state:modified.macros` check ([#3904](https://github.com/dbt-labs/dbt/issues/3904), [#3957](https://github.com/dbt-labs/dbt/pull/3957))
+- Cast log messages to strings before scrubbing of prefixed env vars ([#3971](https://github.com/dbt-labs/dbt/issues/3971), [#3972](https://github.com/dbt-labs/dbt/pull/3972))
+
+### Under the hood
+- Bump artifact schema versions for 0.21.0 ([#3945](https://github.com/dbt-labs/dbt/pull/3945))
+
+## dbt 0.21.0rc1 (September 20, 2021)

 ### Features

 - Make `--models` and `--select` synonyms, except for `ls` (to preserve existing behavior) ([#3210](https://github.com/dbt-labs/dbt/pull/3210), [#3791](https://github.com/dbt-labs/dbt/pull/3791))
 - Experimental parser now detects macro overrides of ref, source, and config builtins. ([#3581](https://github.com/dbt-labs/dbt/issues/3866), [#3582](https://github.com/dbt-labs/dbt/pull/3877))
 - Add connect_timeout profile configuration for Postgres and Redshift adapters. ([#3581](https://github.com/dbt-labs/dbt/issues/3581), [#3582](https://github.com/dbt-labs/dbt/pull/3582))
 - Enhance BigQuery copy materialization ([#3570](https://github.com/dbt-labs/dbt/issues/3570), [#3606](https://github.com/dbt-labs/dbt/pull/3606)):
@@ -16,6 +58,7 @@
 - Added default field in the `selectors.yml` to allow user to define default selector ([#3448](https://github.com/dbt-labs/dbt/issues/3448), [#3875](https://github.com/dbt-labs/dbt/issues/3875), [#3892](https://github.com/dbt-labs/dbt/issues/3892))
 - Added timing and thread information to sources.json artifact ([#3804](https://github.com/dbt-labs/dbt/issues/3804), [#3894](https://github.com/dbt-labs/dbt/pull/3894))
 - Update cli and rpc flags for the `build` task to align with other commands (`--resource-type`, `--store-failures`) ([#3596](https://github.com/dbt-labs/dbt/issues/3596), [#3884](https://github.com/dbt-labs/dbt/pull/3884))
+- Log tests that are not indirectly selected. Add `--greedy` flag to `test`, `list`, `build` and `greedy` property in yaml selectors ([#3723](https://github.com/dbt-labs/dbt/pull/3723), [#3833](https://github.com/dbt-labs/dbt/pull/3833))

 ### Fixes
@@ -29,13 +72,13 @@
 ### Under the hood

 - Use GitHub Actions for CI ([#3688](https://github.com/dbt-labs/dbt/issues/3688), [#3669](https://github.com/dbt-labs/dbt/pull/3669))
 - Better dbt hub registry packages version logging that prompts the user for upgrades to relevant packages ([#3560](https://github.com/dbt-labs/dbt/issues/3560), [#3763](https://github.com/dbt-labs/dbt/issues/3763), [#3759](https://github.com/dbt-labs/dbt/pull/3759))
 - Allow the default seed macro's SQL parameter, `%s`, to be replaced by dispatching a new macro, `get_binding_char()`. This enables adapters with parameter marker characters such as `?` to not have to override `basic_load_csv_rows`. ([#3622](https://github.com/dbt-labs/dbt/issues/3622), [#3623](https://github.com/dbt-labs/dbt/pull/3623))
 - Alert users on package rename ([hub.getdbt.com#180](https://github.com/dbt-labs/hub.getdbt.com/issues/810), [#3825](https://github.com/dbt-labs/dbt/pull/3825))
 - Add `adapter_unique_id` to invocation context in anonymous usage tracking, to better understand dbt adoption ([#3713](https://github.com/dbt-labs/dbt/issues/3713), [#3796](https://github.com/dbt-labs/dbt/issues/3796))
 - Specify `macro_namespace = 'dbt'` for all dispatched macros in the global project, making it possible to dispatch to macro implementations defined in packages. Dispatch `generate_schema_name` and `generate_alias_name` ([#3456](https://github.com/dbt-labs/dbt/issues/3456), [#3851](https://github.com/dbt-labs/dbt/issues/3851))
-- Retry transient GitHub failures during download ([#3729](https://github.com/dbt-labs/dbt/pull/3729))
+- Retry transient GitHub failures during download ([#3546](https://github.com/dbt-labs/dbt/pull/3546), [#3729](https://github.com/dbt-labs/dbt/pull/3729))
 - Don't reload and validate schema files if they haven't changed ([#3563](https://github.com/dbt-labs/dbt/issues/3563), [#3888](https://github.com/dbt-labs/dbt/issues/3888))

 Contributors:
@@ -44,7 +87,7 @@ Contributors:
 - [@dbrtly](https://github.com/dbrtly) ([#3834](https://github.com/dbt-labs/dbt/pull/3834))
 - [@swanderz](https://github.com/swanderz) [#3623](https://github.com/dbt-labs/dbt/pull/3623)
 - [@JasonGluck](https://github.com/JasonGluck) ([#3582](https://github.com/dbt-labs/dbt/pull/3582))
-- [@joellabes](https://github.com/joellabes) ([#3669](https://github.com/dbt-labs/dbt/pull/3669))
+- [@joellabes](https://github.com/joellabes) ([#3669](https://github.com/dbt-labs/dbt/pull/3669), [#3833](https://github.com/dbt-labs/dbt/pull/3833))
 - [@juma-adoreme](https://github.com/juma-adoreme) ([#3838](https://github.com/dbt-labs/dbt/pull/3838))
 - [@annafil](https://github.com/annafil) ([#3825](https://github.com/dbt-labs/dbt/pull/3825))
 - [@AndreasTA-AW](https://github.com/AndreasTA-AW) ([#3691](https://github.com/dbt-labs/dbt/pull/3691))
@@ -52,6 +95,7 @@ Contributors:
 - [@TeddyCr](https://github.com/TeddyCr) ([#3448](https://github.com/dbt-labs/dbt/pull/3865))
 - [@sdebruyn](https://github.com/sdebruyn) ([#3906](https://github.com/dbt-labs/dbt/pull/3906))
+

 ## dbt 0.21.0b2 (August 19, 2021)

 ### Features
@@ -67,7 +111,6 @@ Contributors:
 ### Under the hood

 - Add `build` RPC method, and a subset of flags for `build` task ([#3595](https://github.com/dbt-labs/dbt/issues/3595), [#3674](https://github.com/dbt-labs/dbt/pull/3674))
-- Get more information on partial parsing version mismatches ([#3757](https://github.com/dbt-labs/dbt/issues/3757), [#3758](https://github.com/dbt-labs/dbt/pull/3758))

 ## dbt 0.21.0b1 (August 03, 2021)
@@ -118,6 +161,7 @@ Contributors:
 - Better error handling for BigQuery job labels that are too long. ([#3612](https://github.com/dbt-labs/dbt/pull/3612), [#3703](https://github.com/dbt-labs/dbt/pull/3703))
+- Get more information on partial parsing version mismatches ([#3757](https://github.com/dbt-labs/dbt/issues/3757), [#3758](https://github.com/dbt-labs/dbt/pull/3758))
 - Switch to full reparse on partial parsing exceptions. Log and report exception information. ([#3725](https://github.com/dbt-labs/dbt/issues/3725), [#3733](https://github.com/dbt-labs/dbt/pull/3733))
 - Use GitHub Actions for CI ([#3688](https://github.com/dbt-labs/dbt/issues/3688), [#3669](https://github.com/dbt-labs/dbt/pull/3669))

 ### Fixes
@@ -111,12 +111,13 @@ def _get_tests_for_node(manifest: Manifest, unique_id: UniqueID) -> List[UniqueI
     """ Get a list of tests that depend on the node with the
     provided unique id """

-    return [
-        node.unique_id
-        for _, node in manifest.nodes.items()
-        if node.resource_type == NodeType.Test and
-        unique_id in node.depends_on_nodes
-    ]
+    tests = []
+    if unique_id in manifest.child_map:
+        for child_unique_id in manifest.child_map[unique_id]:
+            if child_unique_id.startswith('test.'):
+                tests.append(child_unique_id)
+
+    return tests


 class Linker:
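The hunk above is the "Use child_map to find tests for nodes" performance fix from the 0.21.1rc1 changelog: instead of scanning every node in the manifest, the lookup reads the prebuilt child map. A minimal sketch of the shape of that change, using plain dicts rather than dbt's real `Manifest` object:

```python
# Toy stand-ins for manifest.nodes and manifest.child_map; the real objects
# in dbt carry much more information, this only illustrates the lookup shape.
nodes = {
    "model.proj.orders": {"resource_type": "model"},
    "test.proj.not_null_orders_id": {
        "resource_type": "test",
        "depends_on_nodes": ["model.proj.orders"],
    },
}
child_map = {"model.proj.orders": ["test.proj.not_null_orders_id"]}


def tests_for_node_slow(unique_id):
    # old approach: walk every node and inspect its dependencies
    return [
        uid for uid, node in nodes.items()
        if node["resource_type"] == "test"
        and unique_id in node.get("depends_on_nodes", [])
    ]


def tests_for_node_fast(unique_id):
    # new approach: read the precomputed child map and keep only test nodes
    return [child for child in child_map.get(unique_id, [])
            if child.startswith("test.")]


assert tests_for_node_slow("model.proj.orders") == tests_for_node_fast("model.proj.orders")
```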
@@ -417,7 +418,7 @@ class Compiler:
         else:
             dependency_not_found(node, dependency)

-    def link_graph(self, linker: Linker, manifest: Manifest):
+    def link_graph(self, linker: Linker, manifest: Manifest, add_test_edges: bool = False):
         for source in manifest.sources.values():
             linker.add_node(source.unique_id)
         for node in manifest.nodes.values():
@@ -430,27 +431,29 @@ class Compiler:
         if cycle:
             raise RuntimeError("Found a cycle: {}".format(cycle))

-        self.resolve_graph(linker, manifest)
+        if add_test_edges:
+            manifest.build_parent_and_child_maps()
+            self.add_test_edges(linker, manifest)

-    def resolve_graph(self, linker: Linker, manifest: Manifest) -> None:
+    def add_test_edges(self, linker: Linker, manifest: Manifest) -> None:
         """ This method adds additional edges to the DAG. For a given non-test
         executable node, add an edge from an upstream test to the given node if
-        the set of nodes the test depends on is a proper/strict subset of the
-        upstream nodes for the given node. """
+        the set of nodes the test depends on is a subset of the upstream nodes
+        for the given node. """

         # Given a graph:
         # model1 --> model2 --> model3
         # |    |
         # |    \/
         # \/   test 2
         # test1
         #
         # Produce the following graph:
         # model1 --> model2 --> model3
-        # |    |    /\      /\
-        # |    \/   |       |
-        # \/   test2 ------- |
-        # test1 -------------------
+        # |    /\   |   /\   /\
+        # |    |    \/  |    |
+        # \/   |    test2 ----|  |
+        # test1 ----|---------------|

         for node_id in linker.graph:
             # If node is executable (in manifest.nodes) and does _not_
@@ -488,21 +491,19 @@ class Compiler:
                 )

             # If the set of nodes that an upstream test depends on
-            # is a proper (or strict) subset of all upstream nodes of
-            # the current node, add an edge from the upstream test
-            # to the current node. Must be a proper/strict subset to
-            # avoid adding a circular dependency to the graph.
-            if (test_depends_on < upstream_nodes):
+            # is a subset of all upstream nodes of the current node,
+            # add an edge from the upstream test to the current node.
+            if (test_depends_on.issubset(upstream_nodes)):
                 linker.graph.add_edge(
                     upstream_test,
                     node_id
                 )

-    def compile(self, manifest: Manifest, write=True) -> Graph:
+    def compile(self, manifest: Manifest, write=True, add_test_edges=False) -> Graph:
         self.initialize()
         linker = Linker()

-        self.link_graph(linker, manifest)
+        self.link_graph(linker, manifest, add_test_edges)

         stats = _generate_stats(manifest)
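This is the "don't require a strict/proper subset" fix: Python's `<` on sets is a proper-subset test, so a test whose dependencies exactly equal a node's upstream set never got an edge, while `issubset` also accepts equality. The difference in isolation:

```python
# upstream nodes of the current node, and the nodes a test depends on
upstream_nodes = {"model.proj.orders"}
test_depends_on = {"model.proj.orders"}

# proper (strict) subset: False when the two sets are equal,
# so a single-parent test like this one would previously get no edge
print(test_depends_on < upstream_nodes)           # False

# plain subset: True for equal sets, so the testing edge is now added
print(test_depends_on.issubset(upstream_nodes))   # True
```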
@@ -144,7 +144,7 @@ class BaseDatabaseWrapper:
         elif isinstance(namespace, str):
             search_packages = self._adapter.config.get_macro_search_order(namespace)
             if not search_packages and namespace in self._adapter.config.dependencies:
-                search_packages = [namespace]
+                search_packages = [self.config.project_name, namespace]
         if not search_packages:
             raise CompilationException(
                 f'In adapter.dispatch, got a string packages argument '
@@ -164,10 +164,10 @@ class BaseDatabaseWrapper:
                     macro = self._namespace.get_from_package(
                         package_name, search_name
                     )
-                except CompilationException as exc:
-                    raise CompilationException(
-                        f'In dispatch: {exc.msg}',
-                    ) from exc
+                except CompilationException:
+                    # Only raise CompilationException if macro is not found in
+                    # any package
+                    macro = None

                 if package_name is None:
                     attempts.append(search_name)
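Together, these two hunks implement "prefer macros defined in the project over the ones in a package": the root project is searched before the named package, and a miss in one package no longer aborts the search. A rough sketch of the resulting lookup order, under the simplifying assumption that a single lookup callable stands in for dbt's namespace resolution (the real code also tries adapter-specific and `default__` prefixes):

```python
def resolve_dispatched_macro(macro_name, namespace, project_name, find_macro):
    # search the root project first, then the package the macro ships in;
    # find_macro is assumed to return None when a package has no match
    search_packages = [project_name, namespace]
    for package_name in search_packages:
        macro = find_macro(package_name, macro_name)
        if macro is not None:
            return macro
    raise LookupError(f"{macro_name} not found in any of {search_packages}")


# usage: a project-level override of a packaged macro wins
macros = {
    ("my_project", "get_batch_size"): "select 500",
    ("dbt_utils", "get_batch_size"): "select 10000",
}
print(resolve_dispatched_macro(
    "get_batch_size", "dbt_utils", "my_project",
    lambda pkg, name: macros.get((pkg, name)),
))  # -> "select 500"
```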
@@ -1439,8 +1439,13 @@ class TestContext(ProviderContext):
     # 'depends_on.macros' by using the TestMacroNamespace
     def _build_test_namespace(self):
         depends_on_macros = []
+        # all generic tests use a macro named 'get_where_subquery' to wrap 'model' arg
+        # see generic_test_builders.build_model_str
+        get_where_subquery = self.macro_resolver.macros_by_name.get('get_where_subquery')
+        if get_where_subquery:
+            depends_on_macros.append(get_where_subquery.unique_id)
         if self.model.depends_on and self.model.depends_on.macros:
-            depends_on_macros = self.model.depends_on.macros
+            depends_on_macros.extend(self.model.depends_on.macros)
         lookup_macros = depends_on_macros.copy()
         for macro_unique_id in lookup_macros:
             lookup_macro = self.macro_resolver.macros.get(macro_unique_id)
@@ -128,10 +128,14 @@ class Credentials(
                 'type not implemented for base credentials class'
             )

-    @abc.abstractproperty
+    @property
     def unique_field(self) -> str:
+        """Hashed and included in anonymous telemetry to track adapter adoption.
+        Return the field from Credentials that can uniquely identify
+        one team/organization using this adapter
+        """
         raise NotImplementedError(
-            'type not implemented for base credentials class'
+            'unique_field not implemented for base credentials class'
         )

     def hashed_unique_field(self) -> str:
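With `unique_field` now a plain property that raises `NotImplementedError` instead of an abstract property, adapter plugins that predate the attribute keep working and can opt in by overriding it. A hedged sketch of what an adapter's credentials class might do; the class name and the choice of `host` are illustrative, not taken from a real adapter:

```python
from dataclasses import dataclass


@dataclass
class ExampleCredentials:  # stands in for a dbt Credentials subclass
    host: str
    database: str
    schema: str

    @property
    def unique_field(self) -> str:
        # the value is hashed before it is sent with anonymous telemetry,
        # so it should identify the team/org without being a secret itself
        return self.host


creds = ExampleCredentials(host="warehouse.example.com", database="analytics", schema="dbt")
print(creds.unique_field)  # warehouse.example.com
```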
@@ -169,6 +169,43 @@ class RefableLookup(dbtClassMixin):
         return manifest.nodes[unique_id]


+class DisabledLookup(dbtClassMixin):
+    # model, seed, snapshot
+    _lookup_types: ClassVar[set] = set(NodeType.refable())
+
+    def __init__(self, manifest: 'Manifest'):
+        self.storage: Dict[str, Dict[PackageName, List[Any]]] = {}
+        self.populate(manifest)
+
+    def populate(self, manifest):
+        for node in manifest.disabled:
+            self.add_node(node)
+        for node in list(chain.from_iterable(manifest._disabled.values())):
+            self.add_node(node)
+
+    def add_node(self, node: ManifestNode):
+        if node.resource_type in self._lookup_types:
+            if node.name not in self.storage:
+                self.storage[node.name] = {}
+            if node.package_name not in self.storage[node.name]:
+                self.storage[node.name][node.package_name] = []
+            self.storage[node.name][node.package_name].append(node)
+
+    # This should return a list of disabled nodes
+    def find(self, key, package: PackageName):
+        if key not in self.storage:
+            return None
+
+        pkg_dct: Mapping[PackageName, List[ManifestNode]] = self.storage[key]
+
+        if not pkg_dct:
+            return None
+        elif package in pkg_dct:
+            return pkg_dct[package]
+        else:
+            return None
+
+
 class AnalysisLookup(RefableLookup):
     _lookup_types: ClassVar[set] = set([NodeType.Analysis])
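`DisabledLookup` indexes disabled refable nodes by name and then by package, so resolution and partial parsing can find a disabled model without rescanning the manifest. The same two-level index in a minimal, self-contained form (plain dicts standing in for dbt's node objects):

```python
# storage[name][package] -> list of disabled nodes
storage: dict = {}


def add_node(node: dict) -> None:
    storage.setdefault(node["name"], {}).setdefault(node["package_name"], []).append(node)


def find(name: str, package: str):
    pkg_dct = storage.get(name)
    if not pkg_dct:
        return None
    return pkg_dct.get(package)


add_node({"name": "orders", "package_name": "my_project", "enabled": False})
print(find("orders", "my_project"))  # -> [{'name': 'orders', ...}]
print(find("orders", "other_pkg"))   # -> None
```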
@@ -568,7 +605,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
     state_check: ManifestStateCheck = field(default_factory=ManifestStateCheck)
     # Moved from the ParseResult object
     source_patches: MutableMapping[SourceKey, SourcePatch] = field(default_factory=dict)
-    # following is from ParseResult
+    # following contains new disabled nodes until parsing is finished. This changes in 1.0.0
     _disabled: MutableMapping[str, List[CompileResultNode]] = field(default_factory=dict)
     _doc_lookup: Optional[DocLookup] = field(
         default=None, metadata={'serialize': lambda x: None, 'deserialize': lambda x: None}
@@ -579,6 +616,9 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
     _ref_lookup: Optional[RefableLookup] = field(
         default=None, metadata={'serialize': lambda x: None, 'deserialize': lambda x: None}
     )
+    _disabled_lookup: Optional[DisabledLookup] = field(
+        default=None, metadata={'serialize': lambda x: None, 'deserialize': lambda x: None}
+    )
     _analysis_lookup: Optional[AnalysisLookup] = field(
         default=None, metadata={'serialize': lambda x: None, 'deserialize': lambda x: None}
     )
@@ -652,6 +692,15 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
             }
         }

+    def build_disabled_by_file_id(self):
+        disabled_by_file_id = {}
+        for node in self.disabled:
+            disabled_by_file_id[node.file_id] = node
+        for node_list in self._disabled.values():
+            for node in node_list:
+                disabled_by_file_id[node.file_id] = node
+        return disabled_by_file_id
+
     def find_disabled_by_name(
         self, name: str, package: Optional[str] = None
     ) -> Optional[ManifestNode]:
@@ -822,6 +871,15 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
     def rebuild_ref_lookup(self):
         self._ref_lookup = RefableLookup(self)

+    @property
+    def disabled_lookup(self) -> DisabledLookup:
+        if self._disabled_lookup is None:
+            self._disabled_lookup = DisabledLookup(self)
+        return self._disabled_lookup
+
+    def rebuild_disabled_lookup(self):
+        self._disabled_lookup = DisabledLookup(self)
+
     @property
     def analysis_lookup(self) -> AnalysisLookup:
         if self._analysis_lookup is None:
@@ -1054,6 +1112,8 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
             self._doc_lookup,
             self._source_lookup,
             self._ref_lookup,
+            self._disabled_lookup,
             self._analysis_lookup
         )
         return self.__class__, args
@@ -1071,7 +1131,7 @@ AnyManifest = Union[Manifest, MacroManifest]
 @dataclass
-@schema_version('manifest', 2)
+@schema_version('manifest', 3)
 class WritableManifest(ArtifactMixin):
     nodes: Mapping[UniqueID, ManifestNode] = field(
         metadata=dict(description=(
@@ -185,7 +185,7 @@ class RunExecutionResult(
 @dataclass
-@schema_version('run-results', 2)
+@schema_version('run-results', 3)
 class RunResultsArtifact(ExecutionResult, ArtifactMixin):
     results: Sequence[RunResultOutput]
     args: Dict[str, Any] = field(default_factory=dict)
@@ -369,7 +369,7 @@ class FreshnessResult(ExecutionResult):
 @dataclass
-@schema_version('sources', 1)
+@schema_version('sources', 2)
 class FreshnessExecutionResultArtifact(
     ArtifactMixin,
     VersionedSchema,
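These decorators bump the artifact schema versions for 0.21 (manifest v3, run-results v3, sources v2), which is why downstream tooling that validates artifacts should check the version before parsing. A hedged sketch, assuming the artifact JSON carries a `metadata.dbt_schema_version` URL as recent dbt versions do:

```python
import json
from pathlib import Path

# read the run results artifact produced by a dbt invocation
run_results = json.loads(Path("target/run_results.json").read_text())

# e.g. "https://schemas.getdbt.com/dbt/run-results/v3.json" (exact URL is an assumption)
schema_version = run_results["metadata"]["dbt_schema_version"]
if not schema_version.endswith("/run-results/v3.json"):
    raise RuntimeError(f"unexpected run_results schema: {schema_version}")
```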
@@ -3,6 +3,7 @@ import functools
 from typing import NoReturn, Optional, Mapping, Any

 from dbt.logger import GLOBAL_LOGGER as logger
+from dbt.logger import get_secret_env
 from dbt.node_types import NodeType
 from dbt import flags
 from dbt.ui import line_wrap_message
@@ -390,6 +391,8 @@ class CommandError(RuntimeException):
         super().__init__(message)
         self.cwd = cwd
         self.cmd = cmd
+        for secret in get_secret_env():
+            self.cmd = str(self.cmd).replace(secret, "*****")
         self.args = (cwd, cmd, message)

     def __str__(self):
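This is the "scrub secrets coming from CommandErrors" fix, and it uses the same pattern as the `ScrubSecrets` log processor changed later in this compare: every known secret value is replaced before the string can reach logs or exception output. A minimal sketch of that pattern, assuming a helper that returns the values of secret-prefixed environment variables (the prefix shown is illustrative; dbt derives the list from a reserved env-var prefix):

```python
import os

SECRET_PREFIX = "DBT_ENV_SECRET_"  # assumed prefix for the sketch


def get_secret_env() -> list:
    return [v for k, v in os.environ.items() if k.startswith(SECRET_PREFIX) and v]


def scrub(text) -> str:
    # cast to str first so non-string messages (the 0.21.0rc2 fix) don't blow up
    text = str(text)
    for secret in get_secret_env():
        text = text.replace(secret, "*****")
    return text


os.environ["DBT_ENV_SECRET_GIT_TOKEN"] = "hunter2"
print(scrub("cloning https://user:hunter2@example.com/repo.git"))
# -> cloning https://user:*****@example.com/repo.git
```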
@@ -466,6 +469,15 @@ def invalid_type_error(method_name, arg_name, got_value, expected_type,
         got_value=got_value, got_type=got_type))


+def invalid_bool_error(got_value, macro_name) -> NoReturn:
+    """Raise a CompilationException when an macro expects a boolean but gets some
+    other value.
+    """
+    msg = ("Macro '{macro_name}' returns '{got_value}'. It is not type 'bool' "
+           "and cannot not be converted reliably to a bool.")
+    raise_compiler_error(msg.format(macro_name=macro_name, got_value=got_value))
+
+
 def ref_invalid_args(model, args) -> NoReturn:
     raise_compiler_error(
         "ref() takes at most two arguments ({} given)".format(len(args)),
@@ -18,6 +18,7 @@ WRITE_JSON = None
 PARTIAL_PARSE = None
 USE_COLORS = None
 STORE_FAILURES = None
+GREEDY = None


 def env_set_truthy(key: str) -> Optional[str]:
@@ -56,7 +57,7 @@ MP_CONTEXT = _get_context()
 def reset():
     global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
         USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
-        STORE_FAILURES
+        STORE_FAILURES, GREEDY

     STRICT_MODE = False
     FULL_REFRESH = False
@@ -69,12 +70,13 @@ def reset():
     MP_CONTEXT = _get_context()
     USE_COLORS = True
     STORE_FAILURES = False
+    GREEDY = False


 def set_from_args(args):
     global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
         USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
-        STORE_FAILURES
+        STORE_FAILURES, GREEDY

     USE_CACHE = getattr(args, 'use_cache', USE_CACHE)
@@ -99,6 +101,7 @@ def set_from_args(args):
     USE_COLORS = use_colors_override

     STORE_FAILURES = getattr(args, 'store_failures', STORE_FAILURES)
+    GREEDY = getattr(args, 'greedy', GREEDY)


 # initialize everything to the defaults on module load
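The new module-level `GREEDY` flag follows the same pattern as the existing flags: a module default, `reset()` restoring it, and `set_from_args()` reading it off the parsed CLI namespace with `getattr` so commands that never define `--greedy` are unaffected. That pattern in isolation:

```python
import argparse

GREEDY = False  # module-level default


def set_from_args(args: argparse.Namespace) -> None:
    global GREEDY
    # getattr with a fallback keeps commands without a --greedy flag working
    GREEDY = getattr(args, "greedy", GREEDY)


parser = argparse.ArgumentParser()
parser.add_argument("--greedy", action="store_true")

set_from_args(parser.parse_args(["--greedy"]))
print(GREEDY)  # True

set_from_args(argparse.Namespace())  # a command that defines no greedy attribute
print(GREEDY)  # still True: falls back to the current module value
```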
@@ -1,4 +1,5 @@
 # special support for CLI argument parsing.
+from dbt import flags
 import itertools
 from dbt.clients.yaml_helper import yaml, Loader, Dumper  # noqa: F401
@@ -66,7 +67,7 @@ def parse_union_from_default(
 def parse_difference(
     include: Optional[List[str]], exclude: Optional[List[str]]
 ) -> SelectionDifference:
-    included = parse_union_from_default(include, DEFAULT_INCLUDES)
+    included = parse_union_from_default(include, DEFAULT_INCLUDES, greedy=bool(flags.GREEDY))
     excluded = parse_union_from_default(exclude, DEFAULT_EXCLUDES, greedy=True)
     return SelectionDifference(components=[included, excluded])
@@ -180,7 +181,7 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
     union_def_parts = _get_list_dicts(definition, 'union')
     include, exclude = _parse_include_exclude_subdefs(union_def_parts)

-    union = SelectionUnion(components=include)
+    union = SelectionUnion(components=include, greedy_warning=False)

     if exclude is None:
         union.raw = definition
@@ -188,7 +189,8 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
     else:
         return SelectionDifference(
             components=[union, exclude],
-            raw=definition
+            raw=definition,
+            greedy_warning=False
         )
@@ -197,7 +199,7 @@ def parse_intersection_definition(
 ) -> SelectionSpec:
     intersection_def_parts = _get_list_dicts(definition, 'intersection')
     include, exclude = _parse_include_exclude_subdefs(intersection_def_parts)
-    intersection = SelectionIntersection(components=include)
+    intersection = SelectionIntersection(components=include, greedy_warning=False)

     if exclude is None:
         intersection.raw = definition
@@ -205,7 +207,8 @@ def parse_intersection_definition(
     else:
         return SelectionDifference(
             components=[intersection, exclude],
-            raw=definition
+            raw=definition,
+            greedy_warning=False
         )
@@ -239,7 +242,7 @@ def parse_dict_definition(definition: Dict[str, Any]) -> SelectionSpec:
     if diff_arg is None:
         return base
     else:
-        return SelectionDifference(components=[base, diff_arg])
+        return SelectionDifference(components=[base, diff_arg], greedy_warning=False)


 def parse_from_definition(
@@ -1,6 +1,7 @@
 from typing import (
     Set, Iterable, Iterator, Optional, NewType
 )
+from itertools import product
 import networkx as nx  # type: ignore

 from dbt.exceptions import InternalException
@@ -77,17 +78,26 @@ class Graph:
         successors.update(self.graph.successors(node))
         return successors

-    def get_subset_graph(self, selected: Iterable[UniqueId]) -> 'Graph':
+    def get_subset_graph(self, selected: Iterable[UniqueId]) -> "Graph":
         """Create and return a new graph that is a shallow copy of the graph,
         but with only the nodes in include_nodes. Transitive edges across
         removed nodes are preserved as explicit new edges.
         """
-        new_graph = nx.algorithms.transitive_closure(self.graph)

+        new_graph = self.graph.copy()
         include_nodes = set(selected)

         for node in self:
             if node not in include_nodes:
+                source_nodes = [x for x, _ in new_graph.in_edges(node)]
+                target_nodes = [x for _, x in new_graph.out_edges(node)]
+
+                new_edges = product(source_nodes, target_nodes)
+                non_cyclic_new_edges = [
+                    (source, target) for source, target in new_edges if source != target
+                ]  # removes cyclic refs
+
+                new_graph.add_edges_from(non_cyclic_new_edges)
                 new_graph.remove_node(node)

         for node in include_nodes:
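This is the graph-subset performance fix: instead of materializing the full transitive closure up front, connectivity is patched locally, wiring each removed node's predecessors directly to its successors. A runnable sketch of the idea with networkx (toy node names, not dbt's unique IDs):

```python
from itertools import product

import networkx as nx

graph = nx.DiGraph()
graph.add_edges_from([("stg_orders", "orders"), ("orders", "fct_revenue")])

selected = {"stg_orders", "fct_revenue"}
subset = graph.copy()

for node in list(graph.nodes):
    if node not in selected:
        sources = [s for s, _ in subset.in_edges(node)]
        targets = [t for _, t in subset.out_edges(node)]
        # reconnect predecessors to successors, skipping self-loops
        subset.add_edges_from((s, t) for s, t in product(sources, targets) if s != t)
        subset.remove_node(node)

print(list(subset.edges()))  # [('stg_orders', 'fct_revenue')]
```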
@@ -96,6 +106,7 @@ class Graph:
                     "Couldn't find model '{}' -- does it exist or is "
                     "it disabled?".format(node)
                 )
+
         return Graph(new_graph)

     def subgraph(self, nodes: Iterable[UniqueId]) -> 'Graph':
@@ -1,4 +1,3 @@
-
 from typing import Set, List, Optional, Tuple

 from .graph import Graph, UniqueId
@@ -30,6 +29,24 @@ def alert_non_existence(raw_spec, nodes):
     )


+def alert_unused_nodes(raw_spec, node_names):
+    summary_nodes_str = ("\n - ").join(node_names[:3])
+    debug_nodes_str = ("\n - ").join(node_names)
+    and_more_str = f"\n - and {len(node_names) - 3} more" if len(node_names) > 4 else ""
+    summary_msg = (
+        f"\nSome tests were excluded because at least one parent is not selected. "
+        f"Use the --greedy flag to include them."
+        f"\n - {summary_nodes_str}{and_more_str}"
+    )
+    logger.info(summary_msg)
+    if len(node_names) > 4:
+        debug_msg = (
+            f"Full list of tests that were excluded:"
+            f"\n - {debug_nodes_str}"
+        )
+        logger.debug(debug_msg)
+
+
 def can_select_indirectly(node):
     """If a node is not selected itself, but its parent(s) are, it may qualify
     for indirect selection.
@@ -151,16 +168,16 @@ class NodeSelector(MethodManager):
         return direct_nodes, indirect_nodes

-    def select_nodes(self, spec: SelectionSpec) -> Set[UniqueId]:
+    def select_nodes(self, spec: SelectionSpec) -> Tuple[Set[UniqueId], Set[UniqueId]]:
         """Select the nodes in the graph according to the spec.

         This is the main point of entry for turning a spec into a set of nodes:
         - Recurse through spec, select by criteria, combine by set operation
         - Return final (unfiltered) selection set
         """
         direct_nodes, indirect_nodes = self.select_nodes_recursively(spec)
-        return direct_nodes
+        indirect_only = indirect_nodes.difference(direct_nodes)
+        return direct_nodes, indirect_only

     def _is_graph_member(self, unique_id: UniqueId) -> bool:
         if unique_id in self.manifest.sources:
@@ -213,6 +230,8 @@ class NodeSelector(MethodManager):
         # - If ANY parent is missing, return it separately. We'll keep it around
         #   for later and see if its other parents show up.
         # We use this for INCLUSION.
+        # Users can also opt in to inclusive GREEDY mode by passing --greedy flag,
+        # or by specifying `greedy: true` in a yaml selector

         direct_nodes = set(selected)
         indirect_nodes = set()
@@ -251,15 +270,24 @@ class NodeSelector(MethodManager):
         - node selection. Based on the include/exclude sets, the set
             of matched unique IDs is returned
         - expand the graph at each leaf node, before combination
             - selectors might override this. for example, this is where
                 tests are added
+            - includes direct + indirect selection (for tests)
         - filtering:
             - selectors can filter the nodes after all of them have been
                 selected
         """
-        selected_nodes = self.select_nodes(spec)
+        selected_nodes, indirect_only = self.select_nodes(spec)
         filtered_nodes = self.filter_selection(selected_nodes)

+        if indirect_only:
+            filtered_unused_nodes = self.filter_selection(indirect_only)
+            if filtered_unused_nodes and spec.greedy_warning:
+                # log anything that didn't make the cut
+                unused_node_names = []
+                for unique_id in filtered_unused_nodes:
+                    name = self.manifest.nodes[unique_id].name
+                    unused_node_names.append(name)
+                alert_unused_nodes(spec, unused_node_names)
+
         return filtered_nodes

     def get_graph_queue(self, spec: SelectionSpec) -> GraphQueue:
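These selector changes implement the 0.21.0rc1 changelog entry about logging tests that are not indirectly selected: a test is pulled in automatically only when all of its parents are selected; otherwise it lands in `indirect_nodes` and, unless `--greedy` is used, is reported as excluded. The set arithmetic behind `indirect_only` in a toy form:

```python
# a relationships test depends on two models, but only one of them is selected
direct_nodes = {"model.proj.orders"}
indirect_nodes = {"test.proj.relationships_orders_customers"}  # parent 'customers' unselected

# what select_nodes() now returns alongside the direct selection
indirect_only = indirect_nodes.difference(direct_nodes)
print(indirect_only)
# {'test.proj.relationships_orders_customers'}
# without --greedy this set is only logged; with --greedy such tests are
# included in the selection instead of being skipped
```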
@@ -405,27 +405,38 @@ class StateSelectorMethod(SelectorMethod):
         return modified

-    def recursively_check_macros_modified(self, node):
-        # check if there are any changes in macros the first time
-        if self.modified_macros is None:
-            self.modified_macros = self._macros_modified()
-
+    def recursively_check_macros_modified(self, node, previous_macros):
         # loop through all macros that this node depends on
         for macro_uid in node.depends_on.macros:
+            # avoid infinite recursion if we've already seen this macro
+            if macro_uid in previous_macros:
+                continue
+            previous_macros.append(macro_uid)
             # is this macro one of the modified macros?
             if macro_uid in self.modified_macros:
                 return True
             # if not, and this macro depends on other macros, keep looping
-            macro = self.manifest.macros[macro_uid]
-            if len(macro.depends_on.macros) > 0:
-                return self.recursively_check_macros_modified(macro)
+            macro_node = self.manifest.macros[macro_uid]
+            if len(macro_node.depends_on.macros) > 0:
+                return self.recursively_check_macros_modified(macro_node, previous_macros)
             else:
                 return False
         return False

+    def check_macros_modified(self, node):
+        # check if there are any changes in macros the first time
+        if self.modified_macros is None:
+            self.modified_macros = self._macros_modified()
+        # no macros have been modified, skip looping entirely
+        if not self.modified_macros:
+            return False
+        # recursively loop through upstream macros to see if any is modified
+        else:
+            previous_macros = []
+            return self.recursively_check_macros_modified(node, previous_macros)
+
     def check_modified(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
         different_contents = not new.same_contents(old)  # type: ignore
-        upstream_macro_change = self.recursively_check_macros_modified(new)
+        upstream_macro_change = self.check_macros_modified(new)
         return different_contents or upstream_macro_change

     def check_modified_body(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
@@ -457,7 +468,7 @@ class StateSelectorMethod(SelectorMethod):
         return False

     def check_modified_macros(self, _, new: SelectorTarget) -> bool:
-        return self.recursively_check_macros_modified(new)
+        return self.check_macros_modified(new)

     def check_new(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
         return old is None
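This is the `state:modified.macros` infinite-recursion fix from the changelog: macros that depend on each other previously made the recursive walk loop forever, so the traversal now threads a `previous_macros` visited list through each call. The guard in isolation, on a toy macro dependency graph that contains a cycle:

```python
# macro -> macros it depends on; note the a <-> b cycle
depends_on = {
    "macro.proj.a": ["macro.proj.b"],
    "macro.proj.b": ["macro.proj.a"],
}
modified_macros = {"macro.proj.c"}


def any_upstream_modified(macro_uid: str, previous_macros: list) -> bool:
    for dep in depends_on.get(macro_uid, []):
        if dep in previous_macros:   # already visited: break the cycle
            continue
        previous_macros.append(dep)
        if dep in modified_macros:
            return True
        if any_upstream_modified(dep, previous_macros):
            return True
    return False


print(any_upstream_modified("macro.proj.a", []))  # False, and it terminates despite the cycle
```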
@@ -67,6 +67,7 @@ class SelectionCriteria:
     children: bool
     children_depth: Optional[int]
     greedy: bool = False
+    greedy_warning: bool = False  # do not raise warning for yaml selectors

     def __post_init__(self):
         if self.children and self.childrens_parents:
@@ -124,11 +125,11 @@ class SelectionCriteria:
             parents_depth=parents_depth,
             children=bool(dct.get('children')),
             children_depth=children_depth,
-            greedy=greedy
+            greedy=(greedy or bool(dct.get('greedy'))),
         )

     @classmethod
-    def dict_from_single_spec(cls, raw: str, greedy: bool = False):
+    def dict_from_single_spec(cls, raw: str):
         result = RAW_SELECTOR_PATTERN.match(raw)
         if result is None:
             return {'error': 'Invalid selector spec'}
@@ -145,6 +146,8 @@ class SelectionCriteria:
             dct['parents'] = bool(dct.get('parents'))
         if 'children' in dct:
             dct['children'] = bool(dct.get('children'))
+        if 'greedy' in dct:
+            dct['greedy'] = bool(dct.get('greedy'))
         return dct

     @classmethod
@@ -162,10 +165,12 @@ class BaseSelectionGroup(Iterable[SelectionSpec], metaclass=ABCMeta):
         self,
         components: Iterable[SelectionSpec],
         expect_exists: bool = False,
+        greedy_warning: bool = True,
         raw: Any = None,
     ):
         self.components: List[SelectionSpec] = list(components)
         self.expect_exists = expect_exists
+        self.greedy_warning = greedy_warning
         self.raw = raw

     def __iter__(self) -> Iterator[SelectionSpec]:
@@ -331,7 +331,7 @@
   {% for column in add_columns %}
     add column {{ column.name }} {{ column.data_type }}{{ ',' if not loop.last }}
-  {% endfor %}{{ ',' if remove_columns | length > 0 }}
+  {% endfor %}{{ ',' if add_columns and remove_columns }}

   {% for column in remove_columns %}
     drop column {{ column.name }}{{ ',' if not loop.last }}
@@ -1,5 +1,5 @@
 {% macro get_where_subquery(relation) -%}
-    {% do return(adapter.dispatch('get_where_subquery')(relation)) %}
+    {% do return(adapter.dispatch('get_where_subquery', 'dbt')(relation)) %}
 {%- endmacro %}

 {% macro default__get_where_subquery(relation) -%}
@@ -51,7 +51,7 @@
 {% endmacro %}

 {% macro get_batch_size() -%}
-  {{ adapter.dispatch('get_batch_size', 'dbt')() }}
+  {{ return(adapter.dispatch('get_batch_size', 'dbt')()) }}
 {%- endmacro %}

 {% macro default__get_batch_size() %}
@@ -345,7 +345,7 @@ class TimestampNamed(logbook.Processor):
 class ScrubSecrets(logbook.Processor):
     def process(self, record):
         for secret in get_secret_env():
-            record.message = record.message.replace(secret, "*****")
+            record.message = str(record.message).replace(secret, "*****")


 logger = logbook.Logger('dbt')
@@ -406,6 +406,14 @@ def _build_build_subparser(subparsers, base_subparser):
         Store test results (failing rows) in the database
         '''
     )
+    sub.add_argument(
+        '--greedy',
+        action='store_true',
+        help='''
+        Select all tests that touch the selected resources,
+        even if they also depend on unselected resources
+        '''
+    )
     resource_values: List[str] = [
         str(s) for s in build_task.BuildTask.ALL_RESOURCE_VALUES
     ] + ['all']
@@ -637,7 +645,7 @@ def _add_table_mutability_arguments(*subparsers):
         '--full-refresh',
         action='store_true',
         help='''
-        If specified, DBT will drop incremental models and
+        If specified, dbt will drop incremental models and
         fully-recalculate the incremental table from the model definition.
         '''
     )
@@ -753,6 +761,14 @@ def _build_test_subparser(subparsers, base_subparser):
         Store test results (failing rows) in the database
         '''
     )
+    sub.add_argument(
+        '--greedy',
+        action='store_true',
+        help='''
+        Select all tests that touch the selected resources,
+        even if they also depend on unselected resources
+        '''
+    )

     sub.set_defaults(cls=test_task.TestTask, which='test', rpc_method='test')
     return sub
@@ -878,6 +894,14 @@ def _build_list_subparser(subparsers, base_subparser):
         metavar='SELECTOR',
         required=False,
     )
+    sub.add_argument(
+        '--greedy',
+        action='store_true',
+        help='''
+        Select all tests that touch the selected resources,
+        even if they also depend on unselected resources
+        '''
+    )
     _add_common_selector_arguments(sub)

     return sub
@@ -247,12 +247,17 @@ class ManifestLoader:
             # get file info for local logs
             parse_file_type = None
             file_id = partial_parsing.processing_file
-            if file_id and file_id in self.manifest.files:
-                old_file = self.manifest.files[file_id]
-                parse_file_type = old_file.parse_file_type
-                logger.debug(f"Partial parsing exception processing file {file_id}")
-                file_dict = old_file.to_dict()
-                logger.debug(f"PP file: {file_dict}")
+            if file_id:
+                source_file = None
+                if file_id in self.saved_manifest.files:
+                    source_file = self.saved_manifest.files[file_id]
+                elif file_id in self.manifest.files:
+                    source_file = self.manifest.files[file_id]
+                if source_file:
+                    parse_file_type = source_file.parse_file_type
+                    logger.debug(f"Partial parsing exception processing file {file_id}")
+                    file_dict = source_file.to_dict()
+                    logger.debug(f"PP file: {file_dict}")
             exc_info['parse_file_type'] = parse_file_type
             logger.debug(f"PP exception info: {exc_info}")
@@ -310,6 +315,7 @@ class ManifestLoader:
             # aren't in place yet
             self.manifest.rebuild_ref_lookup()
             self.manifest.rebuild_doc_lookup()
+            self.manifest.rebuild_disabled_lookup()

             # Load yaml files
             parser_types = [SchemaParser]
@@ -1,3 +1,5 @@
+from copy import deepcopy
 from dbt.context.context_config import ContextConfig
 from dbt.contracts.graph.parsed import ParsedModelNode
 import dbt.flags as flags
@@ -11,7 +13,11 @@ from dbt_extractor import ExtractionError, py_extract_from_source # type: ignor
 from functools import reduce
 from itertools import chain
 import random
-from typing import Any, Dict, Iterator, List, Optional, Union
+from typing import Any, Dict, Iterator, List, Optional, Tuple, Union

+# debug loglines are used for integration testing. If you change
+# the code at the beginning of the debug line, change the tests in
+# test/integration/072_experimental_parser_tests/test_all_experimental_parser.py
+

 class ModelParser(SimpleSQLParser[ParsedModelNode]):
@@ -40,16 +46,11 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
         experimentally_parsed: Optional[Union[str, Dict[str, List[Any]]]] = None
         config_call_dict: Dict[str, Any] = {}
         source_calls: List[List[str]] = []
+        result: List[str] = []

         # run the experimental parser if the flag is on or if we're sampling
         if flags.USE_EXPERIMENTAL_PARSER or sample:
             if self._has_banned_macro(node):
-                # this log line is used for integration testing. If you change
-                # the code at the beginning of the line change the tests in
-                # test/integration/072_experimental_parser_tests/test_all_experimental_parser.py
-                logger.debug(
-                    f"1601: parser fallback to jinja because of macro override for {node.path}"
-                )
                 experimentally_parsed = "has_banned_macro"
             else:
                 # run the experimental parser and return the results
@@ -75,69 +76,76 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
                     source_calls.append([s[0], s[1]])
                 experimentally_parsed['sources'] = source_calls

+        # if we're sampling during a normal dbt run, populate an entirely new node to compare
+        if sample and isinstance(experimentally_parsed, dict):
+            # if this will _never_ mutate anything `self` we could avoid these deep copies,
+            # but we can't really guarantee that going forward.
+            model_parser_copy = self.partial_deepcopy()
+            exp_sample_node = deepcopy(node)
+            exp_sample_config = deepcopy(config)
+
+            model_parser_copy.populate(
+                exp_sample_node,
+                exp_sample_config,
+                experimentally_parsed
+            )
+
         # normal dbt run
         if not flags.USE_EXPERIMENTAL_PARSER:
             # normal rendering
             super().render_update(node, config)
-            # if we're sampling, compare for correctness
-            if sample:
-                result = _get_sample_result(
-                    experimentally_parsed,
-                    config_call_dict,
-                    source_calls,
-                    node,
-                    config
-                )
-                # fire a tracking event. this fires one event for every sample
-                # so that we have data on a per file basis. Not only can we expect
-                # no false positives or misses, we can expect the number model
-                # files parseable by the experimental parser to match our internal
-                # testing.
-                if tracking.active_user is not None:  # None in some tests
-                    tracking.track_experimental_parser_sample({
-                        "project_id": self.root_project.hashed_name(),
-                        "file_id": utils.get_hash(node),
-                        "status": result
-                    })

         # if the --use-experimental-parser flag was set, and the experimental parser succeeded
-        elif isinstance(experimentally_parsed, Dict):
-            # since it doesn't need python jinja, fit the refs, sources, and configs
-            # into the node. Down the line the rest of the node will be updated with
-            # this information. (e.g. depends_on etc.)
-            config._config_call_dict = config_call_dict
-
-            # this uses the updated config to set all the right things in the node.
-            # if there are hooks present, it WILL render jinja. Will need to change
-            # when the experimental parser supports hooks
-            self.update_parsed_node_config(node, config)
-
-            # update the unrendered config with values from the file.
-            # values from yaml files are in there already
-            node.unrendered_config.update(dict(experimentally_parsed['configs']))
-
-            # set refs and sources on the node object
-            node.refs += experimentally_parsed['refs']
-            node.sources += experimentally_parsed['sources']
-
-            # configs don't need to be merged into the node
-            # setting them in config._config_call_dict is sufficient
+        elif isinstance(experimentally_parsed, dict):
+            # update the unrendered config with values from the static parser.
+            # values from yaml files are in there already
+            self.populate(
+                node,
+                config,
+                experimentally_parsed
+            )
             self.manifest._parsing_info.static_analysis_parsed_path_count += 1
-        # the experimental parser didn't run on this model.
-        # fall back to python jinja rendering.
-        elif experimentally_parsed in ["has_banned_macro"]:
-            # not logging here since the reason should have been logged above
+        elif isinstance(experimentally_parsed, str):
+            if experimentally_parsed == "cannot_parse":
+                result += ["01_stable_parser_cannot_parse"]
+                logger.debug(
+                    f"1602: parser fallback to jinja for {node.path}"
+                )
+            elif experimentally_parsed == "has_banned_macro":
+                result += ["08_has_banned_macro"]
+                logger.debug(
+                    f"1601: parser fallback to jinja because of macro override for {node.path}"
+                )

             super().render_update(node, config)
-        # the experimental parser ran on this model and failed.
-        # fall back to python jinja rendering.
+        # otherwise jinja rendering.
         else:
             logger.debug(
                 f"1602: parser fallback to jinja because of extractor failure for {node.path}"
             )
             super().render_update(node, config)

+        if sample and isinstance(experimentally_parsed, dict):
+            # now that the sample succeeded, is populated and the current
+            # values are rendered, compare the two and collect the tracking messages
+            result += _get_exp_sample_result(
+                exp_sample_node,
+                exp_sample_config,
+                node,
+                config,
+            )
+
+        # fire a tracking event. this fires one event for every sample
+        # so that we have data on a per file basis. Not only can we expect
+        # no false positives or misses, we can expect the number model
+        # files parseable by the experimental parser to match our internal
+        # testing.
+        if result and tracking.active_user is not None:  # None in some tests
+            tracking.track_experimental_parser_sample({
+                "project_id": self.root_project.hashed_name(),
+                "file_id": utils.get_hash(node),
+                "status": result
+            })

     # checks for banned macros
     def _has_banned_macro(
         self, node: ParsedModelNode
@@ -163,64 +171,117 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
                 False
             )

+    # this method updates the model note rendered and unrendered config as well
+    # as the node object. Used to populate these values when circumventing jinja
+    # rendering like the static parser.
+    def populate(
+        self,
+        node: ParsedModelNode,
+        config: ContextConfig,
+        experimentally_parsed: Dict[str, Any]
+    ):
+        # manually fit configs in
+        config._config_call_dict = _get_config_call_dict(experimentally_parsed)
+
+        # if there are hooks present this, it WILL render jinja. Will need to change
+        # when the experimental parser supports hooks
+        self.update_parsed_node_config(node, config)
+
+        # update the unrendered config with values from the file.
+        # values from yaml files are in there already
+        node.unrendered_config.update(experimentally_parsed['configs'])
+
+        # set refs and sources on the node object
+        node.refs += experimentally_parsed['refs']
+        node.sources += experimentally_parsed['sources']
+
+        # configs don't need to be merged into the node because they
+        # are read from config._config_call_dict
+
+    # the manifest is often huge so this method avoids deepcopying it
+    def partial_deepcopy(self):
+        return ModelParser(
+            deepcopy(self.project),
+            self.manifest,
+            deepcopy(self.root_project)
+        )
+

 # pure function. safe to use elsewhere, but unlikely to be useful outside this file.
 def _get_config_call_dict(
     static_parser_result: Dict[str, List[Any]]
 ) -> Dict[str, Any]:
     config_call_dict: Dict[str, Any] = {}

     for c in static_parser_result['configs']:
         ContextConfig._add_config_call(config_call_dict, {c[0]: c[1]})

     return config_call_dict


-# returns a list of string codes to be sent as a tracking event
-def _get_sample_result(
-    sample_output: Optional[Union[str, Dict[str, Any]]],
-    config_call_dict: Dict[str, Any],
-    source_calls: List[List[str]],
+def _get_exp_sample_result(
+    sample_node: ParsedModelNode,
+    sample_config: ContextConfig,
     node: ParsedModelNode,
     config: ContextConfig
 ) -> List[str]:
-    result: List[str] = []
-    # experimental parser didn't run
-    if sample_output is None:
-        result += ["09_experimental_parser_skipped"]
-    # experimental parser couldn't parse
-    elif (isinstance(sample_output, str)):
-        if sample_output == "cannot_parse":
-            result += ["01_experimental_parser_cannot_parse"]
-        elif sample_output == "has_banned_macro":
-            result += ["08_has_banned_macro"]
-    else:
-        # look for false positive configs
-        for k in config_call_dict.keys():
-            if k not in config._config_call_dict:
-                result += ["02_false_positive_config_value"]
-                break
-
-        # look for missed configs
-        for k in config._config_call_dict.keys():
-            if k not in config_call_dict:
-                result += ["03_missed_config_value"]
-                break
-
-        # look for false positive sources
-        for s in sample_output['sources']:
-            if s not in node.sources:
-                result += ["04_false_positive_source_value"]
-                break
-
-        # look for missed sources
-        for s in node.sources:
-            if s not in sample_output['sources']:
-                result += ["05_missed_source_value"]
-                break
-
-        # look for false positive refs
-        for r in sample_output['refs']:
-            if r not in node.refs:
-                result += ["06_false_positive_ref_value"]
-                break
-
-        # look for missed refs
-        for r in node.refs:
-            if r not in sample_output['refs']:
-                result += ["07_missed_ref_value"]
-                break
-
-    # if there are no errors, return a success value
-    if not result:
-        result = ["00_exact_match"]
+    result: List[Tuple[int, str]] = _get_sample_result(sample_node, sample_config, node, config)
+
+    def process(codemsg):
+        code, msg = codemsg
+        return f"0{code}_experimental_{msg}"
+
+    return list(map(process, result))
+
+
+# returns a list of messages and int codes and messages that need a single digit
+# prefix to be prepended before being sent as a tracking event
+def _get_sample_result(
+    sample_node: ParsedModelNode,
+    sample_config: ContextConfig,
+    node: ParsedModelNode,
+    config: ContextConfig
+) -> List[Tuple[int, str]]:
+    result: List[Tuple[int, str]] = []
+    # look for false positive configs
+    for k in sample_config._config_call_dict:
+        if k not in config._config_call_dict:
+            result += [(2, "false_positive_config_value")]
+            break
+
+    # look for missed configs
+    for k in config._config_call_dict.keys():
+        if k not in sample_config._config_call_dict.keys():
+            result += [(3, "missed_config_value")]
+            break
+
+    # look for false positive sources
+    for s in sample_node.sources:
+        if s not in node.sources:
+            result += [(4, "false_positive_source_value")]
+            break
+
+    # look for missed sources
+    for s in node.sources:
+        if s not in sample_node.sources:
+            result += [(5, "missed_source_value")]
+            break
+
+    # look for false positive refs
+    for r in sample_node.refs:
+        if r not in node.refs:
+            result += [(6, "false_positive_ref_value")]
+            break
+
+    # look for missed refs
+    for r in node.refs:
+        if r not in sample_node.refs:
+            result += [(7, "missed_ref_value")]
+            break
+
+    # if there are no errors, return a success value
+    if not result:
+        result = [(0, "exact_match")]

     return result
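The sampling helpers now return `(code, message)` tuples, and the small `process()` step shown above prepends the zero-padded code plus an `experimental_` marker before the strings are sent as tracking events. For instance, mirroring the code in the hunk:

```python
def process(codemsg):
    code, msg = codemsg
    return f"0{code}_experimental_{msg}"


sample_result = [(2, "false_positive_config_value"), (5, "missed_source_value")]
print(list(map(process, sample_result)))
# ['02_experimental_false_positive_config_value', '05_experimental_missed_source_value']
```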
@@ -47,6 +47,7 @@ class PartialParsing:
         self.macro_child_map: Dict[str, List[str]] = {}
         self.build_file_diff()
         self.processing_file = None
+        self.disabled_by_file_id = self.saved_manifest.build_disabled_by_file_id()

     def skip_parsing(self):
         return (
@@ -233,24 +234,38 @@ class PartialParsing:
|
||||
return
|
||||
|
||||
# These files only have one node.
|
||||
unique_id = old_source_file.nodes[0]
|
||||
unique_id = None
|
||||
if old_source_file.nodes:
|
||||
unique_id = old_source_file.nodes[0]
|
||||
else:
|
||||
# It's not clear when this would actually happen.
|
||||
# Logging in case there are other associated errors.
|
||||
logger.debug(f"Partial parsing: node not found for source_file {old_source_file}")
|
||||
|
||||
# replace source_file in saved and add to parsing list
|
||||
file_id = new_source_file.file_id
|
||||
self.deleted_manifest.files[file_id] = old_source_file
|
||||
self.saved_files[file_id] = new_source_file
|
||||
self.add_to_pp_files(new_source_file)
|
||||
self.remove_node_in_saved(new_source_file, unique_id)
|
||||
if unique_id:
|
||||
self.remove_node_in_saved(new_source_file, unique_id)
|
||||
|
||||
def remove_node_in_saved(self, source_file, unique_id):
|
||||
# Has already been deleted by another action
|
||||
if unique_id not in self.saved_manifest.nodes:
|
||||
if unique_id in self.saved_manifest.nodes:
|
||||
# delete node in saved
|
||||
node = self.saved_manifest.nodes.pop(unique_id)
|
||||
self.deleted_manifest.nodes[unique_id] = node
|
||||
elif source_file.file_id in self.disabled_by_file_id:
|
||||
for dis_index, dis_node in self.saved_manifest.disabled:
|
||||
if dis_node.file_id == source_file.file_id:
|
||||
node = dis_node
|
||||
break
|
||||
if dis_node:
|
||||
del self.saved_manifest.disabled[dis_index]
|
||||
else:
|
||||
# Has already been deleted by another action
|
||||
return
|
||||
|
||||
# delete node in saved
|
||||
node = self.saved_manifest.nodes.pop(unique_id)
|
||||
self.deleted_manifest.nodes[unique_id] = node
|
||||
|
||||
# look at patch_path in model node to see if we need
|
||||
# to reapply a patch from a schema_file.
|
||||
if node.patch_path:
|
||||
@@ -261,15 +276,22 @@ class PartialParsing:
|
||||
schema_file = self.saved_files[file_id]
|
||||
dict_key = parse_file_type_to_key[source_file.parse_file_type]
|
||||
# look for a matching entry in the list of yaml dictionaries
|
||||
for elem in schema_file.dict_from_yaml[dict_key]:
|
||||
if elem['name'] == node.name:
|
||||
elem_patch = elem
|
||||
break
|
||||
elem_patch = None
|
||||
if dict_key in schema_file.dict_from_yaml:
|
||||
for elem in schema_file.dict_from_yaml[dict_key]:
|
||||
if elem['name'] == node.name:
|
||||
elem_patch = elem
|
||||
break
|
||||
if elem_patch:
|
||||
self.delete_schema_mssa_links(schema_file, dict_key, elem_patch)
|
||||
self.merge_patch(schema_file, dict_key, elem_patch)
|
||||
if unique_id in schema_file.node_patches:
|
||||
schema_file.node_patches.remove(unique_id)
|
||||
if unique_id in self.saved_manifest.disabled:
|
||||
# We have a patch_path in disabled nodes with a patch so
|
||||
# that we can connect the patch to the node
|
||||
for node in self.saved_manifest.disabled[unique_id]:
|
||||
node.patch_path = None
|
||||
|
||||
def update_macro_in_saved(self, new_source_file, old_source_file):
|
||||
if self.already_scheduled_for_parsing(old_source_file):
|
||||
@@ -290,7 +312,8 @@ class PartialParsing:
|
||||
# nodes [unique_ids] -- SQL files
|
||||
# There should always be a node for a SQL file
|
||||
if not source_file.nodes:
|
||||
raise Exception(f"No nodes found for source file {source_file.file_id}")
|
||||
logger.debug(f"No nodes found for source file {source_file.file_id}")
|
||||
return
|
||||
# There is generally only 1 node for SQL files, except for macros
|
||||
for unique_id in source_file.nodes:
|
||||
self.remove_node_in_saved(source_file, unique_id)
|
||||
@@ -299,7 +322,10 @@ class PartialParsing:
|
||||
# We need to re-parse nodes that reference another removed node
|
||||
def schedule_referencing_nodes_for_parsing(self, unique_id):
|
||||
# Look at "children", i.e. nodes that reference this node
|
||||
self.schedule_nodes_for_parsing(self.saved_manifest.child_map[unique_id])
|
||||
if unique_id in self.saved_manifest.child_map:
|
||||
self.schedule_nodes_for_parsing(self.saved_manifest.child_map[unique_id])
|
||||
else:
|
||||
logger.debug(f"Partial parsing: {unique_id} not found in child_map")
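As a rough illustration of the guarded lookup above (the manifest shape and ids are assumed, not taken from the diff): child_map maps a node's unique_id to the unique_ids of nodes that reference it, and the key may be absent, which is why the lookup is now checked before use.

child_map = {
    "model.jaffle_shop.orders": ["model.jaffle_shop.customers", "test.jaffle_shop.unique_orders_id"],
    "model.jaffle_shop.customers": [],
}

def schedule_referencing_nodes_for_parsing(unique_id):
    if unique_id in child_map:
        for child_id in child_map[unique_id]:
            # in dbt this is where schedule_nodes_for_parsing would be called
            print(f"schedule for re-parsing: {child_id}")
    else:
        print(f"{unique_id} not found in child_map")

schedule_referencing_nodes_for_parsing("model.jaffle_shop.orders")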
|
||||
|
||||
def schedule_nodes_for_parsing(self, unique_ids):
|
||||
for unique_id in unique_ids:
|
||||
|
||||
@@ -358,7 +358,7 @@ class TestBuilder(Generic[Testable]):
|
||||
if self.warn_if is not None:
|
||||
config['warn_if'] = self.warn_if
|
||||
if self.error_if is not None:
|
||||
config['error_id'] = self.error_if
|
||||
config['error_if'] = self.error_if
|
||||
if self.fail_calc is not None:
|
||||
config['fail_calc'] = self.fail_calc
|
||||
if self.store_failures is not None:
|
||||
@@ -369,8 +369,6 @@ class TestBuilder(Generic[Testable]):
|
||||
config['database'] = self.database
|
||||
if self.schema is not None:
|
||||
config['schema'] = self.schema
|
||||
if self.alias is not None:
|
||||
config['alias'] = self.alias
|
||||
return config
|
||||
|
||||
def tags(self) -> List[str]:
|
||||
|
||||
@@ -825,6 +825,10 @@ class NodePatchParser(
|
||||
)
|
||||
if unique_id is None:
|
||||
# This will usually happen when a node is disabled
|
||||
disabled_nodes = self.manifest.disabled_lookup.find(patch.name, patch.package_name)
|
||||
if disabled_nodes:
|
||||
for node in disabled_nodes:
|
||||
node.patch_path = source_file.file_id
|
||||
return
|
||||
|
||||
# patches can't be overwritten
|
||||
|
||||
@@ -3,6 +3,7 @@ from .snapshot import SnapshotRunner as snapshot_model_runner
|
||||
from .seed import SeedRunner as seed_runner
|
||||
from .test import TestRunner as test_runner
|
||||
|
||||
from dbt.adapters.factory import get_adapter
|
||||
from dbt.contracts.results import NodeStatus
|
||||
from dbt.exceptions import InternalException
|
||||
from dbt.graph import ResourceTypeSelector
|
||||
@@ -64,3 +65,12 @@ class BuildTask(RunTask):
|
||||
|
||||
def get_runner_type(self, node):
|
||||
return self.RUNNER_MAP.get(node.resource_type)
|
||||
|
||||
def compile_manifest(self):
|
||||
if self.manifest is None:
|
||||
raise InternalException(
|
||||
'compile_manifest called before manifest was loaded'
|
||||
)
|
||||
adapter = get_adapter(self.config)
|
||||
compiler = adapter.get_compiler()
|
||||
self.graph = compiler.compile(self.manifest, add_test_edges=True)
|
||||
|
||||
@@ -414,13 +414,13 @@ class RunTask(CompileTask):
|
||||
|
||||
def after_run(self, adapter, results):
|
||||
# in on-run-end hooks, provide the value 'database_schemas', which is a
|
||||
# list of unique database, schema pairs that successfully executed
|
||||
# models were in. for backwards compatibility, include the old
|
||||
# list of unique (database, schema) pairs that successfully executed
|
||||
# models were in. For backwards compatibility, include the old
|
||||
# 'schemas', which did not include database information.
|
||||
|
||||
database_schema_set: Set[Tuple[Optional[str], str]] = {
|
||||
(r.node.database, r.node.schema) for r in results
|
||||
if r.status not in (
|
||||
if r.node.is_relational and r.status not in (
|
||||
NodeStatus.Error,
|
||||
NodeStatus.Fail,
|
||||
NodeStatus.Skipped
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
import os
|
||||
import time
|
||||
import json
|
||||
from abc import abstractmethod
|
||||
from concurrent.futures import as_completed
|
||||
from datetime import datetime
|
||||
@@ -14,6 +15,7 @@ from .printer import (
|
||||
)
|
||||
|
||||
from dbt import ui
|
||||
from dbt.clients.system import write_file
|
||||
from dbt.task.base import ConfiguredTask
|
||||
from dbt.adapters.base import BaseRelation
|
||||
from dbt.adapters.factory import get_adapter
|
||||
@@ -69,6 +71,9 @@ class ManifestTask(ConfiguredTask):
|
||||
if flags.WRITE_JSON:
|
||||
path = os.path.join(self.config.target_path, MANIFEST_FILE_NAME)
|
||||
self.manifest.write(path)
|
||||
if os.getenv('DBT_WRITE_FILES'):
|
||||
path = os.path.join(self.config.target_path, 'files.json')
|
||||
write_file(path, json.dumps(self.manifest.files, cls=dbt.utils.JSONEncoder, indent=4))
|
||||
|
||||
def load_manifest(self):
|
||||
self.manifest = ManifestLoader.get_full_manifest(self.config)
|
||||
@@ -438,7 +443,7 @@ class GraphRunnableTask(ManifestTask):
|
||||
)
|
||||
|
||||
if len(self._flattened_nodes) == 0:
|
||||
logger.warning("WARNING: Nothing to do. Try checking your model "
|
||||
logger.warning("\nWARNING: Nothing to do. Try checking your model "
|
||||
"configs and model specification args")
|
||||
result = self.get_result(
|
||||
results=[],
|
||||
|
||||
@@ -1,3 +1,5 @@
|
||||
from distutils.util import strtobool
|
||||
|
||||
from dataclasses import dataclass
|
||||
from dbt import utils
|
||||
from dbt.dataclass_schema import dbtClassMixin
|
||||
@@ -19,6 +21,7 @@ from dbt.context.providers import generate_runtime_model
|
||||
from dbt.clients.jinja import MacroGenerator
|
||||
from dbt.exceptions import (
|
||||
InternalException,
|
||||
invalid_bool_error,
|
||||
missing_materialization
|
||||
)
|
||||
from dbt.graph import (
|
||||
@@ -36,6 +39,23 @@ class TestResultData(dbtClassMixin):
|
||||
should_warn: bool
|
||||
should_error: bool
|
||||
|
||||
@classmethod
|
||||
def validate(cls, data):
|
||||
data['should_warn'] = cls.convert_bool_type(data['should_warn'])
|
||||
data['should_error'] = cls.convert_bool_type(data['should_error'])
|
||||
super().validate(data)
|
||||
|
||||
def convert_bool_type(field) -> bool:
|
||||
# if it's type string let python decide if it's a valid value to convert to bool
|
||||
if isinstance(field, str):
|
||||
try:
|
||||
return bool(strtobool(field)) # type: ignore
|
||||
except ValueError:
|
||||
raise invalid_bool_error(field, 'get_test_sql')
|
||||
|
||||
# need this so we catch both true bools and 0/1
|
||||
return bool(field)
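A quick, hedged illustration of the conversion implemented above (not part of the diff; the sample inputs are invented):

from distutils.util import strtobool

for value in ("true", "False", "0", 1, True):
    if isinstance(value, str):
        # strtobool accepts strings like "y"/"n", "true"/"false", "1"/"0"
        converted = bool(strtobool(value))
    else:
        # catches real bools as well as 0/1
        converted = bool(value)
    print(repr(value), "->", converted)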
|
||||
|
||||
|
||||
class TestRunner(CompileRunner):
|
||||
def describe_node(self):
|
||||
|
||||
@@ -96,5 +96,5 @@ def _get_dbt_plugins_info():
|
||||
yield plugin_name, mod.version
|
||||
|
||||
|
||||
__version__ = '0.21.0b2'
|
||||
__version__ = '0.21.1'
|
||||
installed = get_installed_version()
|
||||
|
||||
@@ -284,12 +284,12 @@ def parse_args(argv=None):
|
||||
parser.add_argument('adapter')
|
||||
parser.add_argument('--title-case', '-t', default=None)
|
||||
parser.add_argument('--dependency', action='append')
|
||||
parser.add_argument('--dbt-core-version', default='0.21.0b2')
|
||||
parser.add_argument('--dbt-core-version', default='0.21.1')
|
||||
parser.add_argument('--email')
|
||||
parser.add_argument('--author')
|
||||
parser.add_argument('--url')
|
||||
parser.add_argument('--sql', action='store_true')
|
||||
parser.add_argument('--package-version', default='0.21.0b2')
|
||||
parser.add_argument('--package-version', default='0.21.1')
|
||||
parser.add_argument('--project-version', default='1.0')
|
||||
parser.add_argument(
|
||||
'--no-dependency', action='store_false', dest='set_dependency'
|
||||
|
||||
@@ -24,7 +24,7 @@ def read(fname):
|
||||
|
||||
|
||||
package_name = "dbt-core"
|
||||
package_version = "0.21.0b2"
|
||||
package_version = "0.21.1"
|
||||
description = """dbt (data build tool) is a command line tool that helps \
|
||||
analysts and engineers transform data in their warehouse more effectively"""
|
||||
|
||||
|
||||
docker/requirements/requirements.0.21.0.txt (new file)
@@ -0,0 +1,75 @@
|
||||
agate==1.6.1
|
||||
asn1crypto==1.4.0
|
||||
attrs==21.2.0
|
||||
azure-common==1.1.27
|
||||
azure-core==1.19.0
|
||||
azure-storage-blob==12.9.0
|
||||
Babel==2.9.1
|
||||
boto3==1.18.53
|
||||
botocore==1.21.53
|
||||
cachetools==4.2.4
|
||||
certifi==2021.5.30
|
||||
cffi==1.14.6
|
||||
chardet==4.0.0
|
||||
charset-normalizer==2.0.6
|
||||
colorama==0.4.4
|
||||
cryptography==3.4.8
|
||||
google-api-core==1.31.3
|
||||
google-auth==1.35.0
|
||||
google-cloud-bigquery==2.28.0
|
||||
google-cloud-core==1.7.2
|
||||
google-crc32c==1.2.0
|
||||
google-resumable-media==2.0.3
|
||||
googleapis-common-protos==1.53.0
|
||||
grpcio==1.41.0
|
||||
hologram==0.0.14
|
||||
idna==3.2
|
||||
importlib-metadata==4.8.1
|
||||
isodate==0.6.0
|
||||
jeepney==0.7.1
|
||||
Jinja2==2.11.3
|
||||
jmespath==0.10.0
|
||||
json-rpc==1.13.0
|
||||
jsonschema==3.1.1
|
||||
keyring==21.8.0
|
||||
leather==0.3.3
|
||||
Logbook==1.5.3
|
||||
MarkupSafe==2.0.1
|
||||
mashumaro==2.5
|
||||
minimal-snowplow-tracker==0.0.2
|
||||
msgpack==1.0.2
|
||||
msrest==0.6.21
|
||||
networkx==2.6.3
|
||||
oauthlib==3.1.1
|
||||
oscrypto==1.2.1
|
||||
packaging==20.9
|
||||
parsedatetime==2.6
|
||||
proto-plus==1.19.2
|
||||
protobuf==3.17.3
|
||||
psycopg2-binary==2.9.1
|
||||
pyasn1==0.4.8
|
||||
pyasn1-modules==0.2.8
|
||||
pycparser==2.20
|
||||
pycryptodomex==3.10.4
|
||||
PyJWT==2.1.0
|
||||
pyOpenSSL==20.0.1
|
||||
pyparsing==2.4.7
|
||||
pyrsistent==0.18.0
|
||||
python-dateutil==2.8.2
|
||||
python-slugify==5.0.2
|
||||
pytimeparse==1.1.8
|
||||
pytz==2021.3
|
||||
PyYAML==5.4.1
|
||||
requests==2.26.0
|
||||
requests-oauthlib==1.3.0
|
||||
rsa==4.7.2
|
||||
s3transfer==0.5.0
|
||||
SecretStorage==3.3.1
|
||||
six==1.16.0
|
||||
snowflake-connector-python==2.5.1
|
||||
sqlparse==0.4.2
|
||||
text-unidecode==1.3
|
||||
typing-extensions==3.10.0.2
|
||||
urllib3==1.26.7
|
||||
Werkzeug==2.0.1
|
||||
zipp==3.6.0
|
||||
docker/requirements/requirements.0.21.0rc1.txt (new file)
@@ -0,0 +1,75 @@
|
||||
agate==1.6.1
|
||||
asn1crypto==1.4.0
|
||||
attrs==21.2.0
|
||||
azure-common==1.1.27
|
||||
azure-core==1.18.0
|
||||
azure-storage-blob==12.8.1
|
||||
Babel==2.9.1
|
||||
boto3==1.18.44
|
||||
botocore==1.21.44
|
||||
cachetools==4.2.2
|
||||
certifi==2021.5.30
|
||||
cffi==1.14.6
|
||||
chardet==4.0.0
|
||||
charset-normalizer==2.0.6
|
||||
colorama==0.4.4
|
||||
cryptography==3.4.8
|
||||
google-api-core==1.31.2
|
||||
google-auth==1.35.0
|
||||
google-cloud-bigquery==2.26.0
|
||||
google-cloud-core==1.7.2
|
||||
google-crc32c==1.1.2
|
||||
google-resumable-media==2.0.2
|
||||
googleapis-common-protos==1.53.0
|
||||
grpcio==1.40.0
|
||||
hologram==0.0.14
|
||||
idna==3.2
|
||||
importlib-metadata==4.8.1
|
||||
isodate==0.6.0
|
||||
jeepney==0.7.1
|
||||
Jinja2==2.11.3
|
||||
jmespath==0.10.0
|
||||
json-rpc==1.13.0
|
||||
jsonschema==3.1.1
|
||||
keyring==21.8.0
|
||||
leather==0.3.3
|
||||
Logbook==1.5.3
|
||||
MarkupSafe==2.0.1
|
||||
mashumaro==2.5
|
||||
minimal-snowplow-tracker==0.0.2
|
||||
msgpack==1.0.2
|
||||
msrest==0.6.21
|
||||
networkx==2.6.3
|
||||
oauthlib==3.1.1
|
||||
oscrypto==1.2.1
|
||||
packaging==20.9
|
||||
parsedatetime==2.6
|
||||
proto-plus==1.19.0
|
||||
protobuf==3.18.0
|
||||
psycopg2-binary==2.9.1
|
||||
pyasn1==0.4.8
|
||||
pyasn1-modules==0.2.8
|
||||
pycparser==2.20
|
||||
pycryptodomex==3.10.1
|
||||
PyJWT==2.1.0
|
||||
pyOpenSSL==20.0.1
|
||||
pyparsing==2.4.7
|
||||
pyrsistent==0.18.0
|
||||
python-dateutil==2.8.2
|
||||
python-slugify==5.0.2
|
||||
pytimeparse==1.1.8
|
||||
pytz==2021.1
|
||||
PyYAML==5.4.1
|
||||
requests==2.26.0
|
||||
requests-oauthlib==1.3.0
|
||||
rsa==4.7.2
|
||||
s3transfer==0.5.0
|
||||
SecretStorage==3.3.1
|
||||
six==1.16.0
|
||||
snowflake-connector-python==2.5.1
|
||||
sqlparse==0.4.2
|
||||
text-unidecode==1.3
|
||||
typing-extensions==3.10.0.2
|
||||
urllib3==1.26.6
|
||||
Werkzeug==2.0.1
|
||||
zipp==3.5.0
|
||||
docker/requirements/requirements.0.21.0rc2.txt (new file)
@@ -0,0 +1,75 @@
|
||||
agate==1.6.1
|
||||
asn1crypto==1.4.0
|
||||
attrs==21.2.0
|
||||
azure-common==1.1.27
|
||||
azure-core==1.18.0
|
||||
azure-storage-blob==12.9.0
|
||||
Babel==2.9.1
|
||||
boto3==1.18.48
|
||||
botocore==1.21.48
|
||||
cachetools==4.2.2
|
||||
certifi==2021.5.30
|
||||
cffi==1.14.6
|
||||
chardet==4.0.0
|
||||
charset-normalizer==2.0.6
|
||||
colorama==0.4.4
|
||||
cryptography==3.4.8
|
||||
google-api-core==1.31.3
|
||||
google-auth==1.35.0
|
||||
google-cloud-bigquery==2.27.0
|
||||
google-cloud-core==1.7.2
|
||||
google-crc32c==1.2.0
|
||||
google-resumable-media==2.0.3
|
||||
googleapis-common-protos==1.53.0
|
||||
grpcio==1.40.0
|
||||
hologram==0.0.14
|
||||
idna==3.2
|
||||
importlib-metadata==4.8.1
|
||||
isodate==0.6.0
|
||||
jeepney==0.7.1
|
||||
Jinja2==2.11.3
|
||||
jmespath==0.10.0
|
||||
json-rpc==1.13.0
|
||||
jsonschema==3.1.1
|
||||
keyring==21.8.0
|
||||
leather==0.3.3
|
||||
Logbook==1.5.3
|
||||
MarkupSafe==2.0.1
|
||||
mashumaro==2.5
|
||||
minimal-snowplow-tracker==0.0.2
|
||||
msgpack==1.0.2
|
||||
msrest==0.6.21
|
||||
networkx==2.6.3
|
||||
oauthlib==3.1.1
|
||||
oscrypto==1.2.1
|
||||
packaging==20.9
|
||||
parsedatetime==2.6
|
||||
proto-plus==1.19.0
|
||||
protobuf==3.17.3
|
||||
psycopg2-binary==2.9.1
|
||||
pyasn1==0.4.8
|
||||
pyasn1-modules==0.2.8
|
||||
pycparser==2.20
|
||||
pycryptodomex==3.10.4
|
||||
PyJWT==2.1.0
|
||||
pyOpenSSL==20.0.1
|
||||
pyparsing==2.4.7
|
||||
pyrsistent==0.18.0
|
||||
python-dateutil==2.8.2
|
||||
python-slugify==5.0.2
|
||||
pytimeparse==1.1.8
|
||||
pytz==2021.1
|
||||
PyYAML==5.4.1
|
||||
requests==2.26.0
|
||||
requests-oauthlib==1.3.0
|
||||
rsa==4.7.2
|
||||
s3transfer==0.5.0
|
||||
SecretStorage==3.3.1
|
||||
six==1.16.0
|
||||
snowflake-connector-python==2.5.1
|
||||
sqlparse==0.4.2
|
||||
text-unidecode==1.3
|
||||
typing-extensions==3.10.0.2
|
||||
urllib3==1.26.7
|
||||
Werkzeug==2.0.1
|
||||
zipp==3.5.0
|
||||
docker/requirements/requirements.0.21.1.txt (new file)
@@ -0,0 +1,75 @@
|
||||
agate==1.6.1
|
||||
asn1crypto==1.4.0
|
||||
attrs==21.2.0
|
||||
azure-common==1.1.27
|
||||
azure-core==1.20.1
|
||||
azure-storage-blob==12.9.0
|
||||
Babel==2.9.1
|
||||
boto3==1.20.15
|
||||
botocore==1.23.15
|
||||
cachetools==4.2.4
|
||||
certifi==2021.10.8
|
||||
cffi==1.15.0
|
||||
chardet==4.0.0
|
||||
charset-normalizer==2.0.8
|
||||
colorama==0.4.4
|
||||
cryptography==3.4.8
|
||||
google-api-core==1.31.2
|
||||
google-auth==1.35.0
|
||||
google-cloud-bigquery==2.30.1
|
||||
google-cloud-core==1.7.2
|
||||
google-crc32c==1.3.0
|
||||
google-resumable-media==2.1.0
|
||||
googleapis-common-protos==1.53.0
|
||||
grpcio==1.42.0
|
||||
hologram==0.0.14
|
||||
idna==3.3
|
||||
importlib-metadata==4.8.2
|
||||
isodate==0.6.0
|
||||
jeepney==0.7.1
|
||||
Jinja2==2.11.3
|
||||
jmespath==0.10.0
|
||||
json-rpc==1.13.0
|
||||
jsonschema==3.1.1
|
||||
keyring==21.8.0
|
||||
leather==0.3.4
|
||||
Logbook==1.5.3
|
||||
MarkupSafe==2.0.1
|
||||
mashumaro==2.5
|
||||
minimal-snowplow-tracker==0.0.2
|
||||
msgpack==1.0.3
|
||||
msrest==0.6.21
|
||||
networkx==2.6.3
|
||||
oauthlib==3.1.1
|
||||
oscrypto==1.2.1
|
||||
packaging==20.9
|
||||
parsedatetime==2.6
|
||||
proto-plus==1.19.8
|
||||
protobuf==3.19.1
|
||||
psycopg2-binary==2.9.2
|
||||
pyasn1==0.4.8
|
||||
pyasn1-modules==0.2.8
|
||||
pycparser==2.21
|
||||
pycryptodomex==3.11.0
|
||||
PyJWT==2.3.0
|
||||
pyOpenSSL==20.0.1
|
||||
pyparsing==3.0.6
|
||||
pyrsistent==0.18.0
|
||||
python-dateutil==2.8.2
|
||||
python-slugify==5.0.2
|
||||
pytimeparse==1.1.8
|
||||
pytz==2021.3
|
||||
PyYAML==6.0
|
||||
requests==2.26.0
|
||||
requests-oauthlib==1.3.0
|
||||
rsa==4.8
|
||||
s3transfer==0.5.0
|
||||
SecretStorage==3.3.1
|
||||
six==1.16.0
|
||||
snowflake-connector-python==2.5.1
|
||||
sqlparse==0.4.2
|
||||
text-unidecode==1.3
|
||||
typing-extensions==3.10.0.2
|
||||
urllib3==1.26.7
|
||||
Werkzeug==2.0.2
|
||||
zipp==3.6.0
|
||||
docker/requirements/requirements.0.21.1rc1.txt (new file)
@@ -0,0 +1,75 @@
|
||||
agate==1.6.1
|
||||
asn1crypto==1.4.0
|
||||
attrs==21.2.0
|
||||
azure-common==1.1.27
|
||||
azure-core==1.19.1
|
||||
azure-storage-blob==12.9.0
|
||||
Babel==2.9.1
|
||||
boto3==1.19.9
|
||||
botocore==1.22.9
|
||||
cachetools==4.2.4
|
||||
certifi==2021.10.8
|
||||
cffi==1.15.0
|
||||
chardet==4.0.0
|
||||
charset-normalizer==2.0.7
|
||||
colorama==0.4.4
|
||||
cryptography==3.4.8
|
||||
google-api-core==1.31.2
|
||||
google-auth==1.35.0
|
||||
google-cloud-bigquery==2.29.0
|
||||
google-cloud-core==1.7.2
|
||||
google-crc32c==1.3.0
|
||||
google-resumable-media==2.1.0
|
||||
googleapis-common-protos==1.53.0
|
||||
grpcio==1.41.1
|
||||
hologram==0.0.14
|
||||
idna==3.3
|
||||
importlib-metadata==4.8.1
|
||||
isodate==0.6.0
|
||||
jeepney==0.7.1
|
||||
Jinja2==2.11.3
|
||||
jmespath==0.10.0
|
||||
json-rpc==1.13.0
|
||||
jsonschema==3.1.1
|
||||
keyring==21.8.0
|
||||
leather==0.3.4
|
||||
Logbook==1.5.3
|
||||
MarkupSafe==2.0.1
|
||||
mashumaro==2.5
|
||||
minimal-snowplow-tracker==0.0.2
|
||||
msgpack==1.0.2
|
||||
msrest==0.6.21
|
||||
networkx==2.6.3
|
||||
oauthlib==3.1.1
|
||||
oscrypto==1.2.1
|
||||
packaging==20.9
|
||||
parsedatetime==2.6
|
||||
proto-plus==1.19.7
|
||||
protobuf==3.19.1
|
||||
psycopg2-binary==2.9.1
|
||||
pyasn1==0.4.8
|
||||
pyasn1-modules==0.2.8
|
||||
pycparser==2.20
|
||||
pycryptodomex==3.11.0
|
||||
PyJWT==2.3.0
|
||||
pyOpenSSL==20.0.1
|
||||
pyparsing==3.0.4
|
||||
pyrsistent==0.18.0
|
||||
python-dateutil==2.8.2
|
||||
python-slugify==5.0.2
|
||||
pytimeparse==1.1.8
|
||||
pytz==2021.3
|
||||
PyYAML==6.0
|
||||
requests==2.26.0
|
||||
requests-oauthlib==1.3.0
|
||||
rsa==4.7.2
|
||||
s3transfer==0.5.0
|
||||
SecretStorage==3.3.1
|
||||
six==1.16.0
|
||||
snowflake-connector-python==2.5.1
|
||||
sqlparse==0.4.2
|
||||
text-unidecode==1.3
|
||||
typing-extensions==3.10.0.2
|
||||
urllib3==1.26.7
|
||||
Werkzeug==2.0.2
|
||||
zipp==3.6.0
|
||||
docker/requirements/requirements.0.21.1rc2.txt (new file)
@@ -0,0 +1,75 @@
|
||||
agate==1.6.1
|
||||
asn1crypto==1.4.0
|
||||
attrs==21.2.0
|
||||
azure-common==1.1.27
|
||||
azure-core==1.20.1
|
||||
azure-storage-blob==12.9.0
|
||||
Babel==2.9.1
|
||||
boto3==1.20.5
|
||||
botocore==1.23.5
|
||||
cachetools==4.2.4
|
||||
certifi==2021.10.8
|
||||
cffi==1.15.0
|
||||
chardet==4.0.0
|
||||
charset-normalizer==2.0.7
|
||||
colorama==0.4.4
|
||||
cryptography==3.4.8
|
||||
google-api-core==1.31.2
|
||||
google-auth==1.35.0
|
||||
google-cloud-bigquery==2.30.1
|
||||
google-cloud-core==1.7.2
|
||||
google-crc32c==1.3.0
|
||||
google-resumable-media==2.1.0
|
||||
googleapis-common-protos==1.53.0
|
||||
grpcio==1.41.1
|
||||
hologram==0.0.14
|
||||
idna==3.3
|
||||
importlib-metadata==4.8.2
|
||||
isodate==0.6.0
|
||||
jeepney==0.7.1
|
||||
Jinja2==2.11.3
|
||||
jmespath==0.10.0
|
||||
json-rpc==1.13.0
|
||||
jsonschema==3.1.1
|
||||
keyring==21.8.0
|
||||
leather==0.3.4
|
||||
Logbook==1.5.3
|
||||
MarkupSafe==2.0.1
|
||||
mashumaro==2.5
|
||||
minimal-snowplow-tracker==0.0.2
|
||||
msgpack==1.0.2
|
||||
msrest==0.6.21
|
||||
networkx==2.6.3
|
||||
oauthlib==3.1.1
|
||||
oscrypto==1.2.1
|
||||
packaging==20.9
|
||||
parsedatetime==2.6
|
||||
proto-plus==1.19.8
|
||||
protobuf==3.19.1
|
||||
psycopg2-binary==2.9.2
|
||||
pyasn1==0.4.8
|
||||
pyasn1-modules==0.2.8
|
||||
pycparser==2.21
|
||||
pycryptodomex==3.11.0
|
||||
PyJWT==2.3.0
|
||||
pyOpenSSL==20.0.1
|
||||
pyparsing==3.0.6
|
||||
pyrsistent==0.18.0
|
||||
python-dateutil==2.8.2
|
||||
python-slugify==5.0.2
|
||||
pytimeparse==1.1.8
|
||||
pytz==2021.3
|
||||
PyYAML==6.0
|
||||
requests==2.26.0
|
||||
requests-oauthlib==1.3.0
|
||||
rsa==4.7.2
|
||||
s3transfer==0.5.0
|
||||
SecretStorage==3.3.1
|
||||
six==1.16.0
|
||||
snowflake-connector-python==2.5.1
|
||||
sqlparse==0.4.2
|
||||
text-unidecode==1.3
|
||||
typing-extensions==3.10.0.2
|
||||
urllib3==1.26.7
|
||||
Werkzeug==2.0.2
|
||||
zipp==3.6.0
|
||||
@@ -1 +1 @@
|
||||
version = '0.21.0b2'
|
||||
version = '0.21.1'
|
||||
|
||||
@@ -20,7 +20,7 @@ except ImportError:
|
||||
|
||||
|
||||
package_name = "dbt-bigquery"
|
||||
package_version = "0.21.0b2"
|
||||
package_version = "0.21.1"
|
||||
description = """The bigquery adapter plugin for dbt (data build tool)"""
|
||||
|
||||
this_directory = os.path.abspath(os.path.dirname(__file__))
|
||||
@@ -50,7 +50,7 @@ setup(
|
||||
'protobuf>=3.13.0,<4',
|
||||
'google-cloud-core>=1.3.0,<2',
|
||||
'google-cloud-bigquery>=1.25.0,<3',
|
||||
'google-api-core>=1.16.0,<2',
|
||||
'google-api-core>=1.16.0,<1.31.3',
|
||||
'googleapis-common-protos>=1.6.0,<2',
|
||||
'six>=1.14.0',
|
||||
],
|
||||
|
||||
@@ -1 +1 @@
|
||||
version = '0.21.0b2'
|
||||
version = '0.21.1'
|
||||
|
||||
@@ -41,7 +41,7 @@ def _dbt_psycopg2_name():
|
||||
|
||||
|
||||
package_name = "dbt-postgres"
|
||||
package_version = "0.21.0b2"
|
||||
package_version = "0.21.1"
|
||||
description = """The postgres adapter plugin for dbt (data build tool)"""
|
||||
|
||||
this_directory = os.path.abspath(os.path.dirname(__file__))
|
||||
|
||||
@@ -1 +1 @@
|
||||
version = '0.21.0b2'
|
||||
version = '0.21.1'
|
||||
|
||||
@@ -20,7 +20,7 @@ except ImportError:
|
||||
|
||||
|
||||
package_name = "dbt-redshift"
|
||||
package_version = "0.21.0b2"
|
||||
package_version = "0.21.1"
|
||||
description = """The redshift adapter plugin for dbt (data build tool)"""
|
||||
|
||||
this_directory = os.path.abspath(os.path.dirname(__file__))
|
||||
|
||||
@@ -1 +1 @@
|
||||
version = '0.21.0b2'
|
||||
version = '0.21.1'
|
||||
|
||||
@@ -1,4 +1,5 @@
|
||||
{% macro snowflake__load_csv_rows(model, agate_table) %}
|
||||
{% set batch_size = get_batch_size() %}
|
||||
{% set cols_sql = get_seed_column_quoted_csv(model, agate_table.column_names) %}
|
||||
{% set bindings = [] %}
|
||||
|
||||
|
||||
@@ -20,7 +20,7 @@ except ImportError:
|
||||
|
||||
|
||||
package_name = "dbt-snowflake"
|
||||
package_version = "0.21.0b2"
|
||||
package_version = "0.21.1"
|
||||
description = """The snowflake adapter plugin for dbt (data build tool)"""
|
||||
|
||||
this_directory = os.path.abspath(os.path.dirname(__file__))
|
||||
|
||||
setup.py
@@ -24,7 +24,7 @@ with open(os.path.join(this_directory, 'README.md')) as f:
|
||||
|
||||
|
||||
package_name = "dbt"
|
||||
package_version = "0.21.0b2"
|
||||
package_version = "0.21.1"
|
||||
description = """With dbt, data analysts and engineers can build analytics \
|
||||
the way engineers build applications."""
|
||||
|
||||
|
||||
test/integration/005_simple_seed_test/data-big/.gitignore (new file)
@@ -0,0 +1 @@
|
||||
*.csv
|
||||
@@ -1,5 +1,5 @@
|
||||
import os
|
||||
|
||||
import csv
|
||||
from test.integration.base import DBTIntegrationTest, use_profile
|
||||
|
||||
|
||||
@@ -311,4 +311,43 @@ class TestSimpleSeedWithDots(DBTIntegrationTest):
|
||||
@use_profile('postgres')
|
||||
def test_postgres_simple_seed(self):
|
||||
results = self.run_dbt(["seed"])
|
||||
self.assertEqual(len(results), 1)
|
||||
self.assertEqual(len(results), 1)
|
||||
|
||||
class TestSimpleBigSeedBatched(DBTIntegrationTest):
|
||||
@property
|
||||
def schema(self):
|
||||
return "simple_seed_005"
|
||||
|
||||
@property
|
||||
def models(self):
|
||||
return "models"
|
||||
|
||||
@property
|
||||
def project_config(self):
|
||||
return {
|
||||
'config-version': 2,
|
||||
"data-paths": ['data-big'],
|
||||
'seeds': {
|
||||
'quote_columns': False,
|
||||
}
|
||||
}
|
||||
|
||||
def test_big_batched_seed(self):
|
||||
with open('data-big/my_seed.csv', 'w') as f:
|
||||
writer = csv.writer(f)
|
||||
writer.writerow(['id'])
|
||||
for i in range(0, 20000):
|
||||
writer.writerow([i])
|
||||
|
||||
results = self.run_dbt(["seed"])
|
||||
self.assertEqual(len(results), 1)
|
||||
|
||||
|
||||
@use_profile('postgres')
|
||||
def test_postgres_big_batched_seed(self):
|
||||
self.test_big_batched_seed()
|
||||
|
||||
@use_profile('snowflake')
|
||||
def test_snowflake_big_batched_seed(self):
|
||||
self.test_big_batched_seed()
|
||||
|
||||
@@ -0,0 +1,10 @@
|
||||
{% macro get_test_sql(main_sql, fail_calc, warn_if, error_if, limit) -%}
  select
    {{ fail_calc }} as failures,
    case when {{ fail_calc }} {{ warn_if }} then 1 else 0 end as should_warn,
    case when {{ fail_calc }} {{ error_if }} then 1 else 0 end as should_error
  from (
    {{ main_sql }}
    {{ "limit " ~ limit if limit != none }}
  ) dbt_internal_test
{%- endmacro %}
@@ -0,0 +1,10 @@
|
||||
{% macro get_test_sql(main_sql, fail_calc, warn_if, error_if, limit) -%}
  select
    {{ fail_calc }} as failures,
    case when {{ fail_calc }} {{ warn_if }} then 'x' else 'y' end as should_warn,
    case when {{ fail_calc }} {{ error_if }} then 'x' else 'y' end as should_error
  from (
    {{ main_sql }}
    {{ "limit " ~ limit if limit != none }}
  ) dbt_internal_test
{% endmacro %}
@@ -0,0 +1,37 @@
|
||||
version: 2
|
||||
|
||||
models:
|
||||
- name: table_limit_null
|
||||
description: "The table has 1 null value, and we're okay with that, until it's more than 1."
|
||||
columns:
|
||||
- name: favorite_color_full_list
|
||||
description: "The favorite color"
|
||||
- name: count
|
||||
description: "The number of responses for this favorite color - purple will be null"
|
||||
tests:
|
||||
- not_null:
|
||||
error_if: '>1'
|
||||
warn_if: '>1'
|
||||
|
||||
- name: table_warning_limit_null
|
||||
description: "The table has 1 null value, and we're okay with 1, but want to know of any."
|
||||
columns:
|
||||
- name: favorite_color_full_list
|
||||
description: "The favorite color"
|
||||
- name: count
|
||||
description: "The number of responses for this favorite color - purple will be null"
|
||||
tests:
|
||||
- not_null:
|
||||
error_if: '>1'
|
||||
|
||||
- name: table_failure_limit_null
|
||||
description: "The table has 2 null values and that's not ok. Fails."
|
||||
columns:
|
||||
- name: favorite_color_full_list
|
||||
description: "The favorite color"
|
||||
- name: count
|
||||
description: "The number of responses for this favorite color - purple will be null"
|
||||
tests:
|
||||
- not_null:
|
||||
error_if: '>1'
|
||||
|
||||
@@ -0,0 +1,11 @@
|
||||
{{
|
||||
config(
|
||||
materialized='table'
|
||||
)
|
||||
}}
|
||||
|
||||
select * from {{ref('table_limit_null')}}
|
||||
|
||||
UNION ALL
|
||||
|
||||
select 'magenta' as favorite_color_full_list, null as count
|
||||
@@ -0,0 +1,13 @@
|
||||
{{
|
||||
config(
|
||||
materialized='table'
|
||||
)
|
||||
}}
|
||||
|
||||
select favorite_color as favorite_color_full_list, count(*) as count
|
||||
from {{ this.schema }}.seed
|
||||
group by 1
|
||||
|
||||
UNION ALL
|
||||
|
||||
select 'purple' as favorite_color_full_list, null as count
|
||||
@@ -0,0 +1,7 @@
|
||||
{{
|
||||
config(
|
||||
materialized='table'
|
||||
)
|
||||
}}
|
||||
|
||||
select * from {{ref('table_limit_null')}}
|
||||
@@ -0,0 +1,3 @@
|
||||
select * from {{ ref('my_model_pass') }}
|
||||
UNION ALL
|
||||
select null as id
|
||||
@@ -0,0 +1,3 @@
|
||||
select 1 as id
|
||||
UNION ALL
|
||||
select null as id
|
||||
@@ -0,0 +1 @@
|
||||
select * from {{ ref('my_model_pass') }}
|
||||
@@ -0,0 +1,31 @@
|
||||
version: 2
|
||||
|
||||
models:
|
||||
- name: my_model_pass
|
||||
description: "The table has 1 null value, and we're okay with that, until it's more than 1."
|
||||
columns:
|
||||
- name: id
|
||||
description: "The number of responses for this favorite color - purple will be null"
|
||||
tests:
|
||||
- not_null:
|
||||
error_if: '>1'
|
||||
warn_if: '>1'
|
||||
|
||||
- name: my_model_warning
|
||||
description: "The table has 1 null value, and we're okay with that, but let us know"
|
||||
columns:
|
||||
- name: id
|
||||
description: "The number of responses for this favorite color - purple will be null"
|
||||
tests:
|
||||
- not_null:
|
||||
error_if: '>1'
|
||||
|
||||
- name: my_model_failure
|
||||
description: "The table has 2 null values, and we're not okay with that"
|
||||
columns:
|
||||
- name: id
|
||||
description: "The number of responses for this favorite color - purple will be null"
|
||||
tests:
|
||||
- not_null:
|
||||
error_if: '>1'
|
||||
|
||||
@@ -0,0 +1,3 @@
|
||||
select 1 as id
|
||||
UNION ALL
|
||||
select null as id
|
||||
@@ -0,0 +1,12 @@
|
||||
version: 2
|
||||
|
||||
models:
|
||||
- name: my_model
|
||||
description: "The table has 1 null value, and we're not okay with that."
|
||||
columns:
|
||||
- name: id
|
||||
description: "The number of responses for this favorite color - purple will be null"
|
||||
tests:
|
||||
- not_null
|
||||
|
||||
|
||||
@@ -0,0 +1,12 @@
|
||||
/*{# This test will fail if get_where_subquery() is missing from TestContext + TestMacroNamespace #}*/
|
||||
|
||||
{% test self_referential(model) %}
|
||||
|
||||
{%- set relation = api.Relation.create(schema=model.schema, identifier=model.table) -%}
|
||||
{%- set columns = adapter.get_columns_in_relation(relation) -%}
|
||||
{%- set columns_csv = columns | map(attribute='name') | list | join(', ') -%}
|
||||
|
||||
select {{ columns_csv }} from {{ model }}
|
||||
limit 0
|
||||
|
||||
{% endtest %}
|
||||
@@ -0,0 +1 @@
|
||||
select 1 as fun
|
||||
@@ -0,0 +1,6 @@
|
||||
version: 2
|
||||
|
||||
models:
|
||||
- name: model_a
|
||||
tests:
|
||||
- self_referential
|
||||
@@ -95,6 +95,241 @@ class TestSchemaTests(DBTIntegrationTest):
|
||||
for result in test_results:
|
||||
self.assertTestFailed(result)
|
||||
|
||||
class TestLimitedSchemaTests(DBTIntegrationTest):
|
||||
|
||||
def setUp(self):
|
||||
DBTIntegrationTest.setUp(self)
|
||||
self.run_sql_file("seed.sql")
|
||||
|
||||
@property
|
||||
def schema(self):
|
||||
return "schema_tests_008"
|
||||
|
||||
@property
|
||||
def models(self):
|
||||
return "models-v2/limit_null"
|
||||
|
||||
def run_schema_validations(self):
|
||||
args = FakeArgs()
|
||||
test_task = TestTask(args, self.config)
|
||||
return test_task.run()
|
||||
|
||||
def assertTestFailed(self, result):
|
||||
self.assertEqual(result.status, "fail")
|
||||
self.assertFalse(result.skipped)
|
||||
self.assertTrue(
|
||||
result.failures > 0,
|
||||
'test {} did not fail'.format(result.node.name)
|
||||
)
|
||||
|
||||
def assertTestWarn(self, result):
|
||||
self.assertEqual(result.status, "warn")
|
||||
self.assertFalse(result.skipped)
|
||||
self.assertTrue(
|
||||
result.failures > 0,
|
||||
'test {} passed without expected warning'.format(result.node.name)
|
||||
)
|
||||
|
||||
def assertTestPassed(self, result):
|
||||
self.assertEqual(result.status, "pass")
|
||||
self.assertFalse(result.skipped)
|
||||
self.assertEqual(
|
||||
result.failures, 0,
|
||||
'test {} failed'.format(result.node.name)
|
||||
)
|
||||
|
||||
@use_profile('postgres')
|
||||
def test_postgres_limit_schema_tests(self):
|
||||
results = self.run_dbt(strict=False)
|
||||
self.assertEqual(len(results), 3)
|
||||
test_results = self.run_schema_validations()
|
||||
self.assertEqual(len(test_results), 3)
|
||||
|
||||
for result in test_results:
|
||||
# assert that all deliberately failing tests actually fail
|
||||
if 'failure' in result.node.name:
|
||||
self.assertTestFailed(result)
|
||||
# assert that tests with warnings have them
|
||||
elif 'warning' in result.node.name:
|
||||
self.assertTestWarn(result)
|
||||
# assert that actual tests pass
|
||||
else:
|
||||
self.assertTestPassed(result)
|
||||
# warnings are also marked as failures
|
||||
self.assertEqual(sum(x.failures for x in test_results), 3)
|
||||
|
||||
|
||||
class TestDefaultBoolType(DBTIntegrationTest):
|
||||
# test with default True/False in get_test_sql macro
|
||||
|
||||
def setUp(self):
|
||||
DBTIntegrationTest.setUp(self)
|
||||
|
||||
@property
|
||||
def schema(self):
|
||||
return "schema_tests_008"
|
||||
|
||||
@property
|
||||
def models(self):
|
||||
return "models-v2/override_get_test_models"
|
||||
|
||||
def run_schema_validations(self):
|
||||
args = FakeArgs()
|
||||
test_task = TestTask(args, self.config)
|
||||
return test_task.run()
|
||||
|
||||
def assertTestFailed(self, result):
|
||||
self.assertEqual(result.status, "fail")
|
||||
self.assertFalse(result.skipped)
|
||||
self.assertTrue(
|
||||
result.failures > 0,
|
||||
'test {} did not fail'.format(result.node.name)
|
||||
)
|
||||
|
||||
def assertTestWarn(self, result):
|
||||
self.assertEqual(result.status, "warn")
|
||||
self.assertFalse(result.skipped)
|
||||
self.assertTrue(
|
||||
result.failures > 0,
|
||||
'test {} passed without expected warning'.format(result.node.name)
|
||||
)
|
||||
|
||||
def assertTestPassed(self, result):
|
||||
self.assertEqual(result.status, "pass")
|
||||
self.assertFalse(result.skipped)
|
||||
self.assertEqual(
|
||||
result.failures, 0,
|
||||
'test {} failed'.format(result.node.name)
|
||||
)
|
||||
|
||||
@use_profile('postgres')
|
||||
def test_postgres_limit_schema_tests(self):
|
||||
results = self.run_dbt(strict=False)
|
||||
self.assertEqual(len(results), 3)
|
||||
test_results = self.run_schema_validations()
|
||||
self.assertEqual(len(test_results), 3)
|
||||
|
||||
for result in test_results:
|
||||
# assert that all deliberately failing tests actually fail
|
||||
if 'failure' in result.node.name:
|
||||
self.assertTestFailed(result)
|
||||
# assert that tests with warnings have them
|
||||
elif 'warning' in result.node.name:
|
||||
self.assertTestWarn(result)
|
||||
# assert that actual tests pass
|
||||
else:
|
||||
self.assertTestPassed(result)
|
||||
# warnings are also marked as failures
|
||||
self.assertEqual(sum(x.failures for x in test_results), 3)
|
||||
|
||||
|
||||
class TestOtherBoolType(DBTIntegrationTest):
|
||||
# test with expected 0/1 in custom get_test_sql macro
|
||||
|
||||
def setUp(self):
|
||||
DBTIntegrationTest.setUp(self)
|
||||
|
||||
@property
|
||||
def schema(self):
|
||||
return "schema_tests_008"
|
||||
|
||||
@property
|
||||
def models(self):
|
||||
return "models-v2/override_get_test_models"
|
||||
|
||||
@property
|
||||
def project_config(self):
|
||||
return {
|
||||
'config-version': 2,
|
||||
"macro-paths": ["macros-v2/override_get_test_macros"],
|
||||
}
|
||||
|
||||
def run_schema_validations(self):
|
||||
args = FakeArgs()
|
||||
test_task = TestTask(args, self.config)
|
||||
return test_task.run()
|
||||
|
||||
def assertTestFailed(self, result):
|
||||
self.assertEqual(result.status, "fail")
|
||||
self.assertFalse(result.skipped)
|
||||
self.assertTrue(
|
||||
result.failures > 0,
|
||||
'test {} did not fail'.format(result.node.name)
|
||||
)
|
||||
|
||||
def assertTestWarn(self, result):
|
||||
self.assertEqual(result.status, "warn")
|
||||
self.assertFalse(result.skipped)
|
||||
self.assertTrue(
|
||||
result.failures > 0,
|
||||
'test {} passed without expected warning'.format(result.node.name)
|
||||
)
|
||||
|
||||
def assertTestPassed(self, result):
|
||||
self.assertEqual(result.status, "pass")
|
||||
self.assertFalse(result.skipped)
|
||||
self.assertEqual(
|
||||
result.failures, 0,
|
||||
'test {} failed'.format(result.node.name)
|
||||
)
|
||||
|
||||
@use_profile('postgres')
|
||||
def test_postgres_limit_schema_tests(self):
|
||||
results = self.run_dbt(strict=False)
|
||||
self.assertEqual(len(results), 3)
|
||||
test_results = self.run_schema_validations()
|
||||
self.assertEqual(len(test_results), 3)
|
||||
|
||||
for result in test_results:
|
||||
# assert that all deliberately failing tests actually fail
|
||||
if 'failure' in result.node.name:
|
||||
self.assertTestFailed(result)
|
||||
# assert that tests with warnings have them
|
||||
elif 'warning' in result.node.name:
|
||||
self.assertTestWarn(result)
|
||||
# assert that actual tests pass
|
||||
else:
|
||||
self.assertTestPassed(result)
|
||||
# warnings are also marked as failures
|
||||
self.assertEqual(sum(x.failures for x in test_results), 3)
|
||||
|
||||
|
||||
class TestNonBoolType(DBTIntegrationTest):
|
||||
# test with invalid 'x'/'y' in custom get_test_sql macro
|
||||
def setUp(self):
|
||||
DBTIntegrationTest.setUp(self)
|
||||
|
||||
@property
|
||||
def schema(self):
|
||||
return "schema_tests_008"
|
||||
|
||||
@property
|
||||
def models(self):
|
||||
return "models-v2/override_get_test_models_fail"
|
||||
|
||||
@property
|
||||
def project_config(self):
|
||||
return {
|
||||
'config-version': 2,
|
||||
"macro-paths": ["macros-v2/override_get_test_macros_fail"],
|
||||
}
|
||||
|
||||
def run_schema_validations(self):
|
||||
args = FakeArgs()
|
||||
|
||||
test_task = TestTask(args, self.config)
|
||||
return test_task.run()
|
||||
|
||||
@use_profile('postgres')
|
||||
def test_postgres_limit_schema_tests(self):
|
||||
results = self.run_dbt()
|
||||
self.assertEqual(len(results), 1)
|
||||
run_result = self.run_dbt(['test'], expect_pass=False)
|
||||
results = run_result.results
|
||||
self.assertEqual(len(results), 1)
|
||||
self.assertEqual(results[0].status, TestStatus.Error)
|
||||
self.assertRegex(results[0].message, r"'get_test_sql' returns 'x'")
|
||||
|
||||
|
||||
class TestMalformedSchemaTests(DBTIntegrationTest):
|
||||
|
||||
@@ -566,3 +801,28 @@ class TestInvalidSchema(DBTIntegrationTest):
|
||||
results = self.run_dbt()
|
||||
self.assertRegex(str(exc.exception), r"'models' is not a list")
|
||||
|
||||
class TestSchemaTestContextWhereSubq(DBTIntegrationTest):
|
||||
@property
|
||||
def schema(self):
|
||||
return "schema_tests_008"
|
||||
|
||||
@property
|
||||
def models(self):
|
||||
return "test-context-where-subq-models"
|
||||
|
||||
@property
|
||||
def project_config(self):
|
||||
return {
|
||||
'config-version': 2,
|
||||
"macro-paths": ["test-context-where-subq-macros"],
|
||||
}
|
||||
|
||||
@use_profile('postgres')
|
||||
def test_postgres_test_context_tests(self):
|
||||
# This test tests that get_where_subquery() is included in TestContext + TestMacroNamespace,
|
||||
# otherwise api.Relation.create() will return an error
|
||||
results = self.run_dbt()
|
||||
self.assertEqual(len(results), 1)
|
||||
|
||||
results = self.run_dbt(['test'])
|
||||
self.assertEqual(len(results), 1)
|
||||
|
||||
@@ -0,0 +1,3 @@
|
||||
{% macro postgres__get_columns_in_relation(relation) %}
|
||||
{{ return('a string') }}
|
||||
{% endmacro %}
|
||||
@@ -133,6 +133,64 @@ class TestMacroOverrideBuiltin(DBTIntegrationTest):
|
||||
self.run_dbt()
|
||||
|
||||
|
||||
class TestMacroOverridePackage(DBTIntegrationTest):
|
||||
"""
|
||||
The macro in `override-postgres-get-columns-macros` should override the
|
||||
`get_columns_in_relation` macro by default.
|
||||
"""
|
||||
|
||||
@property
|
||||
def schema(self):
|
||||
return "test_macros_016"
|
||||
|
||||
@property
|
||||
def models(self):
|
||||
return 'override-get-columns-models'
|
||||
|
||||
@property
|
||||
def project_config(self):
|
||||
return {
|
||||
'config-version': 2,
|
||||
'macro-paths': ['override-postgres-get-columns-macros'],
|
||||
}
|
||||
|
||||
@use_profile('postgres')
|
||||
def test_postgres_overrides(self):
|
||||
# the first time, the model doesn't exist
|
||||
self.run_dbt()
|
||||
self.run_dbt()
|
||||
|
||||
|
||||
class TestMacroNotOverridePackage(DBTIntegrationTest):
|
||||
"""
|
||||
The macro in `override-postgres-get-columns-macros` does NOT override the
|
||||
`get_columns_in_relation` macro because we tell dispatch to not look at the
|
||||
postgres macros.
|
||||
"""
|
||||
|
||||
@property
|
||||
def schema(self):
|
||||
return "test_macros_016"
|
||||
|
||||
@property
|
||||
def models(self):
|
||||
return 'override-get-columns-models'
|
||||
|
||||
@property
|
||||
def project_config(self):
|
||||
return {
|
||||
'config-version': 2,
|
||||
'macro-paths': ['override-postgres-get-columns-macros'],
|
||||
'dispatch': [{'macro_namespace': 'dbt', 'search_order': ['dbt']}],
|
||||
}
|
||||
|
||||
@use_profile('postgres')
|
||||
def test_postgres_overrides(self):
|
||||
# the first time, the model doesn't exist
|
||||
self.run_dbt(expect_pass=False)
|
||||
self.run_dbt(expect_pass=False)
|
||||
|
||||
|
||||
class TestDispatchMacroOverrideBuiltin(TestMacroOverrideBuiltin):
|
||||
# test the same functionality as above, but this time,
|
||||
# dbt.get_columns_in_relation will dispatch to a default__ macro
|
||||
|
||||
@@ -1093,7 +1093,7 @@ class TestDocsGenerate(DBTIntegrationTest):
|
||||
)
|
||||
|
||||
return {
|
||||
'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
|
||||
'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
|
||||
'dbt_version': dbt.version.__version__,
|
||||
'nodes': {
|
||||
'model.test.model': {
|
||||
@@ -1680,7 +1680,7 @@ class TestDocsGenerate(DBTIntegrationTest):
|
||||
snapshot_path = self.dir('snapshot/snapshot_seed.sql')
|
||||
|
||||
return {
|
||||
'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
|
||||
'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
|
||||
'dbt_version': dbt.version.__version__,
|
||||
'nodes': {
|
||||
'model.test.ephemeral_copy': {
|
||||
@@ -2203,7 +2203,7 @@ class TestDocsGenerate(DBTIntegrationTest):
|
||||
my_schema_name = self.unique_schema()
|
||||
|
||||
return {
|
||||
'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
|
||||
'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
|
||||
'dbt_version': dbt.version.__version__,
|
||||
'nodes': {
|
||||
'model.test.clustered': {
|
||||
@@ -2695,7 +2695,7 @@ class TestDocsGenerate(DBTIntegrationTest):
|
||||
snapshot_path = self.dir('snapshot/snapshot_seed.sql')
|
||||
|
||||
return {
|
||||
'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
|
||||
'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
|
||||
'dbt_version': dbt.version.__version__,
|
||||
'nodes': {
|
||||
'model.test.model': {
|
||||
@@ -2959,7 +2959,7 @@ class TestDocsGenerate(DBTIntegrationTest):
|
||||
elif key == 'metadata':
|
||||
metadata = manifest['metadata']
|
||||
self.verify_metadata(
|
||||
metadata, 'https://schemas.getdbt.com/dbt/manifest/v2.json')
|
||||
metadata, 'https://schemas.getdbt.com/dbt/manifest/v3.json')
|
||||
assert 'project_id' in metadata and metadata[
|
||||
'project_id'] == '098f6bcd4621d373cade4e832627b4f6'
|
||||
assert 'send_anonymous_usage_stats' in metadata and metadata[
|
||||
@@ -3100,7 +3100,7 @@ class TestDocsGenerate(DBTIntegrationTest):
|
||||
run_results = _read_json('./target/run_results.json')
|
||||
assert 'metadata' in run_results
|
||||
self.verify_metadata(
|
||||
run_results['metadata'], 'https://schemas.getdbt.com/dbt/run-results/v2.json')
|
||||
run_results['metadata'], 'https://schemas.getdbt.com/dbt/run-results/v3.json')
|
||||
self.assertIn('elapsed_time', run_results)
|
||||
self.assertGreater(run_results['elapsed_time'], 0)
|
||||
self.assertTrue(
|
||||
|
||||
@@ -248,7 +248,7 @@ class TestSourceFreshness(SuccessfulSourcesTest):
|
||||
assert isinstance(data['elapsed_time'], float)
|
||||
self.assertBetween(data['metadata']['generated_at'],
|
||||
self.freshness_start_time)
|
||||
assert data['metadata']['dbt_schema_version'] == 'https://schemas.getdbt.com/dbt/sources/v1.json'
|
||||
assert data['metadata']['dbt_schema_version'] == 'https://schemas.getdbt.com/dbt/sources/v2.json'
|
||||
assert data['metadata']['dbt_version'] == dbt.version.__version__
|
||||
assert data['metadata']['invocation_id'] == dbt.tracking.active_user.invocation_id
|
||||
key = 'key'
|
||||
|
||||
@@ -0,0 +1,13 @@
|
||||
{# trigger infinite recursion if not handled #}
|
||||
|
||||
{% macro my_infinitely_recursive_macro() %}
|
||||
{{ return(adapter.dispatch('my_infinitely_recursive_macro')()) }}
|
||||
{% endmacro %}
|
||||
|
||||
{% macro default__my_infinitely_recursive_macro() %}
|
||||
{% if unmet_condition %}
|
||||
{{ my_infinitely_recursive_macro() }}
|
||||
{% else %}
|
||||
{{ return('') }}
|
||||
{% endif %}
|
||||
{% endmacro %}
|
||||
@@ -1 +1,4 @@
|
||||
select * from {{ ref('seed') }}
|
||||
|
||||
-- establish a macro dependency that trips infinite recursion if not handled
|
||||
-- depends on: {{ my_infinitely_recursive_macro() }}
|
||||
@@ -1,4 +1,5 @@
|
||||
from test.integration.base import DBTIntegrationTest, FakeArgs, use_profile
|
||||
import yaml
|
||||
|
||||
from dbt.task.test import TestTask
|
||||
from dbt.task.list import ListTask
|
||||
@@ -20,12 +21,18 @@ class TestSelectionExpansion(DBTIntegrationTest):
|
||||
"test-paths": ["tests"]
|
||||
}
|
||||
|
||||
def list_tests_and_assert(self, include, exclude, expected_tests):
|
||||
def list_tests_and_assert(self, include, exclude, expected_tests, greedy=False, selector_name=None):
|
||||
list_args = [ 'ls', '--resource-type', 'test']
|
||||
if include:
|
||||
list_args.extend(('--select', include))
|
||||
if exclude:
|
||||
list_args.extend(('--exclude', exclude))
|
||||
if exclude:
|
||||
list_args.extend(('--exclude', exclude))
|
||||
if greedy:
|
||||
list_args.append('--greedy')
|
||||
if selector_name:
|
||||
list_args.extend(('--selector', selector_name))
|
||||
|
||||
listed = self.run_dbt(list_args)
|
||||
assert len(listed) == len(expected_tests)
|
||||
@@ -34,7 +41,7 @@ class TestSelectionExpansion(DBTIntegrationTest):
|
||||
assert sorted(test_names) == sorted(expected_tests)
|
||||
|
||||
def run_tests_and_assert(
|
||||
self, include, exclude, expected_tests, schema = False, data = False
|
||||
self, include, exclude, expected_tests, schema=False, data=False, greedy=False, selector_name=None
|
||||
):
|
||||
results = self.run_dbt(['run'])
|
||||
self.assertEqual(len(results), 2)
|
||||
@@ -48,6 +55,10 @@ class TestSelectionExpansion(DBTIntegrationTest):
|
||||
test_args.append('--schema')
|
||||
if data:
|
||||
test_args.append('--data')
|
||||
if greedy:
|
||||
test_args.append('--greedy')
|
||||
if selector_name:
|
||||
test_args.extend(('--selector', selector_name))
|
||||
|
||||
results = self.run_dbt(test_args)
|
||||
tests_run = [r.node.name for r in results]
|
||||
@@ -228,3 +239,80 @@ class TestSelectionExpansion(DBTIntegrationTest):
|
||||
|
||||
self.list_tests_and_assert(select, exclude, expected)
|
||||
self.run_tests_and_assert(select, exclude, expected)
|
||||
|
||||
@use_profile('postgres')
|
||||
def test__postgres__model_a_greedy(self):
|
||||
select = 'model_a'
|
||||
exclude = None
|
||||
greedy = True
|
||||
expected = [
|
||||
'cf_a_b', 'cf_a_src', 'just_a',
|
||||
'relationships_model_a_fun__fun__ref_model_b_',
|
||||
'relationships_model_a_fun__fun__source_my_src_my_tbl_',
|
||||
'unique_model_a_fun'
|
||||
]
|
||||
|
||||
self.list_tests_and_assert(select, exclude, expected, greedy)
|
||||
self.run_tests_and_assert(select, exclude, expected, greedy=greedy)
|
||||
|
||||
@use_profile('postgres')
|
||||
def test__postgres__model_a_greedy_exclude_unique_tests(self):
|
||||
select = 'model_a'
|
||||
exclude = 'test_name:unique'
|
||||
greedy = True
|
||||
expected = [
|
||||
'cf_a_b', 'cf_a_src', 'just_a',
|
||||
'relationships_model_a_fun__fun__ref_model_b_',
|
||||
'relationships_model_a_fun__fun__source_my_src_my_tbl_',
|
||||
]
|
||||
|
||||
self.list_tests_and_assert(select, exclude, expected, greedy)
|
||||
self.run_tests_and_assert(select, exclude, expected, greedy=greedy)
|
||||
|
||||
class TestExpansionWithSelectors(TestSelectionExpansion):
|
||||
|
||||
@property
|
||||
def selectors_config(self):
|
||||
return yaml.safe_load('''
|
||||
selectors:
|
||||
- name: model_a_greedy_none
|
||||
definition:
|
||||
method: fqn
|
||||
value: model_a
|
||||
- name: model_a_greedy_false
|
||||
definition:
|
||||
method: fqn
|
||||
value: model_a
|
||||
greedy: false
|
||||
- name: model_a_greedy_true
|
||||
definition:
|
||||
method: fqn
|
||||
value: model_a
|
||||
greedy: true
|
||||
''')
|
||||
|
||||
@use_profile('postgres')
|
||||
def test__postgres__selector_model_a_not_greedy(self):
|
||||
expected = ['just_a','unique_model_a_fun']
|
||||
|
||||
# when greedy is not specified, it is implicitly False
|
||||
self.list_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_none')
|
||||
self.run_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_none')
|
||||
|
||||
# when greedy is explicitly False
|
||||
self.list_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_false')
|
||||
self.run_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_false')
|
||||
|
||||
|
||||
@use_profile('postgres')
|
||||
def test__postgres__selector_model_a_yes_greedy(self):
|
||||
expected = [
|
||||
'cf_a_b', 'cf_a_src', 'just_a',
|
||||
'relationships_model_a_fun__fun__ref_model_b_',
|
||||
'relationships_model_a_fun__fun__source_my_src_my_tbl_',
|
||||
'unique_model_a_fun'
|
||||
]
|
||||
|
||||
# when greedy is explicitly True
|
||||
self.list_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_true')
|
||||
self.run_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_true')
|
||||
|
||||
@@ -0,0 +1,12 @@
|
||||
{{ config(materialized='table', enabled=False) }}
|
||||
|
||||
with source_data as (
|
||||
|
||||
select 1 as id
|
||||
union all
|
||||
select null as id
|
||||
|
||||
)
|
||||
|
||||
select *
|
||||
from source_data
|
||||
@@ -0,0 +1,13 @@
|
||||
- Disabled model
|
||||
{{ config(materialized='table', enabled=False) }}
|
||||
|
||||
with source_data as (
|
||||
|
||||
select 1 as id
|
||||
union all
|
||||
select null as id
|
||||
|
||||
)
|
||||
|
||||
select *
|
||||
from source_data
|
||||
@@ -0,0 +1,14 @@
|
||||
{{ config(materialized='table') }}
|
||||
|
||||
with source_data as (
|
||||
|
||||
{#- This is model three #}
|
||||
|
||||
select 1 as id
|
||||
union all
|
||||
select null as id
|
||||
|
||||
)
|
||||
|
||||
select *
|
||||
from source_data
|
||||
@@ -0,0 +1,15 @@
|
||||
{% snapshot orders_snapshot %}
|
||||
|
||||
{{
|
||||
config(
|
||||
target_schema=schema,
|
||||
strategy='check',
|
||||
unique_key='id',
|
||||
check_cols=['status'],
|
||||
)
|
||||
}}
|
||||
|
||||
select * from {{ ref('orders') }}
|
||||
|
||||
{% endsnapshot %}
|
||||
|
||||
@@ -0,0 +1,14 @@
|
||||
{% snapshot orders_snapshot %}
|
||||
|
||||
{{
|
||||
config(
|
||||
target_schema=schema,
|
||||
strategy='check',
|
||||
unique_key='id',
|
||||
check_cols=['status'],
|
||||
)
|
||||
}}
|
||||
select * from {{ ref('orders') }}
|
||||
|
||||
{% endsnapshot %}
|
||||
|
||||
@@ -0,0 +1 @@
|
||||
select 1 as id, 101 as user_id, 'pending' as status
|
||||
@@ -2,7 +2,7 @@ from dbt.exceptions import CompilationException
|
||||
from dbt.contracts.graph.manifest import Manifest
|
||||
from dbt.contracts.files import ParseFileType
|
||||
from dbt.contracts.results import TestStatus
|
||||
from test.integration.base import DBTIntegrationTest, use_profile, normalize
|
||||
from test.integration.base import DBTIntegrationTest, use_profile, normalize, get_manifest
|
||||
import shutil
|
||||
import os
|
||||
|
||||
@@ -10,16 +10,6 @@ import os
|
||||
# Note: every test case needs to have separate directories, otherwise
|
||||
# they will interfere with each other when tests are multi-threaded
|
||||
|
||||
def get_manifest():
|
||||
path = './target/partial_parse.msgpack'
|
||||
if os.path.exists(path):
|
||||
with open(path, 'rb') as fp:
|
||||
manifest_mp = fp.read()
|
||||
manifest: Manifest = Manifest.from_msgpack(manifest_mp)
|
||||
return manifest
|
||||
else:
|
||||
return None
|
||||
|
||||
class TestModels(DBTIntegrationTest):
|
||||
|
||||
@property
|
||||
@@ -78,6 +68,15 @@ class TestModels(DBTIntegrationTest):
|
||||
unique_test_id = tests[0]
|
||||
self.assertIn(unique_test_id, manifest.nodes)
|
||||
|
||||
# modify model sql file, ensure description still there
|
||||
shutil.copyfile('extra-files/model_three_modified.sql', 'models-a/model_three.sql')
|
||||
results = self.run_dbt(["--partial-parse", "run"])
|
||||
manifest = get_manifest()
|
||||
model_id = 'model.test.model_three'
|
||||
self.assertIn(model_id, manifest.nodes)
|
||||
model_three_node = manifest.nodes[model_id]
|
||||
self.assertEqual(model_three_node.description, 'The third model')
|
||||
|
||||
# Change the model 3 test from unique to not_null
|
||||
shutil.copyfile('extra-files/models-schema2b.yml', 'models-a/schema.yml')
|
||||
results = self.run_dbt(["--partial-parse", "test"], expect_pass=False)
|
||||
@@ -187,6 +186,41 @@ class TestModels(DBTIntegrationTest):
        results = self.run_dbt(["--partial-parse", "run"])
        self.assertEqual(len(results), 3)

        # Disable model_three
        shutil.copyfile('extra-files/model_three_disabled.sql', 'models-a/model_three.sql')
        results = self.run_dbt(["--partial-parse", "run"])
        self.assertEqual(len(results), 2)
        manifest = get_manifest()
        model_id = 'model.test.model_three'
        found_in_disabled = False
        for model in manifest.disabled:
            if model_id == model.unique_id:
                found_in_disabled = True
        self.assertTrue(found_in_disabled)
        self.assertNotIn(model_id, manifest.nodes)

        # Edit disabled model three
        shutil.copyfile('extra-files/model_three_disabled2.sql', 'models-a/model_three.sql')
        results = self.run_dbt(["--partial-parse", "run"])
        self.assertEqual(len(results), 2)
        manifest = get_manifest()
        model_id = 'model.test.model_three'
        found_in_disabled = False
        for model in manifest.disabled:
            if model_id == model.unique_id:
                found_in_disabled = True
        self.assertTrue(found_in_disabled)
        self.assertNotIn(model_id, manifest.nodes)

        # Remove disabled from model three
        shutil.copyfile('extra-files/model_three.sql', 'models-a/model_three.sql')
        results = self.run_dbt(["--partial-parse", "run"])
        self.assertEqual(len(results), 3)
        manifest = get_manifest()
        model_id = 'model.test.model_three'
        self.assertIn(model_id, manifest.nodes)
        self.assertNotIn(model_id, manifest.disabled)

    def tearDown(self):
        if os.path.exists(normalize('models-a/model_two.sql')):
            os.remove(normalize('models-a/model_two.sql'))
@@ -468,3 +502,53 @@ class TestMacros(DBTIntegrationTest):
        self.assertEqual(results[1].status, TestStatus.Fail)
        self.assertEqual(results[1].node.config.severity, 'ERROR')


class TestSnapshots(DBTIntegrationTest):

    @property
    def schema(self):
        return "test_068A"

    @property
    def models(self):
        return "models-d"

    @property
    def project_config(self):
        return {
            'config-version': 2,
            'snapshot-paths': ['snapshots-d'],
        }

    def tearDown(self):
        if os.path.exists(normalize('snapshots-d/snapshot.sql')):
            os.remove(normalize('snapshots-d/snapshot.sql'))

    @use_profile('postgres')
    def test_postgres_pp_snapshots(self):

        # initial run
        results = self.run_dbt()
        self.assertEqual(len(results), 1)

        # add snapshot
        shutil.copyfile('extra-files/snapshot.sql', 'snapshots-d/snapshot.sql')
        results = self.run_dbt(["--partial-parse", "run"])
        self.assertEqual(len(results), 1)
        manifest = get_manifest()
        snapshot_id = 'snapshot.test.orders_snapshot'
        self.assertIn(snapshot_id, manifest.nodes)

        # run snapshot
        results = self.run_dbt(["--partial-parse", "snapshot"])
        self.assertEqual(len(results), 1)

        # modify snapshot
        shutil.copyfile('extra-files/snapshot2.sql', 'snapshots-d/snapshot.sql')
        results = self.run_dbt(["--partial-parse", "run"])
        self.assertEqual(len(results), 1)

        # delete snapshot
        os.remove(normalize('snapshots-d/snapshot.sql'))
        results = self.run_dbt(["--partial-parse", "run"])
        self.assertEqual(len(results), 1)
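The snapshot assertions above lean on the get_manifest() helper that this changeset moves into test.integration.base. As a rough standalone sketch only (mirroring the module-level helper removed earlier in this diff, not necessarily the exact shared implementation), it deserializes the partial-parse artifact that dbt writes under the project's target/ directory:

# Sketch: mirrors the removed module-level get_manifest() shown above.
import os

from dbt.contracts.graph.manifest import Manifest


def get_manifest():
    # dbt writes this artifact after parsing the project
    path = './target/partial_parse.msgpack'
    if os.path.exists(path):
        with open(path, 'rb') as fp:
            manifest_mp = fp.read()
        # turn the msgpack payload back into a Manifest object
        return Manifest.from_msgpack(manifest_mp)
    return None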
@@ -0,0 +1 @@
select 1 as id
@@ -0,0 +1 @@
select * from {{ ref('model_b') }}
@@ -0,0 +1,41 @@
version: 2

models:
  - name: model_a
    columns:
      - name: id
        tests:
          - unique
          - not_null
          - relationships:
              to: ref('model_b')
              field: id
          - relationships:
              to: ref('model_c')
              field: id

  - name: model_b
    columns:
      - name: id
        tests:
          - unique
          - not_null
          - relationships:
              to: ref('model_a')
              field: id
          - relationships:
              to: ref('model_c')
              field: id

  - name: model_c
    columns:
      - name: id
        tests:
          - unique
          - not_null
          - relationships:
              to: ref('model_a')
              field: id
          - relationships:
              to: ref('model_b')
              field: id
@@ -0,0 +1 @@
select null as id
@@ -0,0 +1 @@
select * from {{ ref('model_a') }}
@@ -0,0 +1,8 @@
version: 2

models:
  - name: model_a
    columns:
      - name: id
        tests:
          - not_null
1
test/integration/069_build_test/test-files/model_b.sql
Normal file
@@ -0,0 +1 @@
select * from {{ ref('model_a') }}
@@ -0,0 +1 @@
select null from {{ ref('model_a') }}
@@ -1,5 +1,7 @@
from test.integration.base import DBTIntegrationTest, use_profile
from test.integration.base import DBTIntegrationTest, use_profile, normalize
import yaml
import shutil
import os


class TestBuildBase(DBTIntegrationTest):
@@ -79,3 +81,63 @@ class TestCircularRelationshipTestsBuild(TestBuildBase):
        actual = [r.status for r in results]
        expected = ['success']*7 + ['pass']*2
        self.assertEqual(sorted(actual), sorted(expected))


class TestSimpleBlockingTest(TestBuildBase):
    @property
    def models(self):
        return "models-simple-blocking"

    @property
    def project_config(self):
        return {
            "config-version": 2,
            "snapshot-paths": ["does-not-exist"],
            "data-paths": ["does-not-exist"],
        }

    @use_profile("postgres")
    def test__postgres_simple_blocking_test(self):
        """ Ensure that a failed test on model_a always blocks model_b """
        results = self.build(expect_pass=False)
        actual = [r.status for r in results]
        expected = ['success', 'fail', 'skipped']
        self.assertEqual(sorted(actual), sorted(expected))


class TestInterdependentModels(TestBuildBase):

    @property
    def project_config(self):
        return {
            "config-version": 2,
            "snapshot-paths": ["snapshots-none"],
            "seeds": {
                "quote_columns": False,
            },
        }

    @property
    def models(self):
        return "models-interdependent"

    def tearDown(self):
        if os.path.exists(normalize('models-interdependent/model_b.sql')):
            os.remove(normalize('models-interdependent/model_b.sql'))


    @use_profile("postgres")
    def test__postgres_interdependent_models(self):
        # check that basic build works
        shutil.copyfile('test-files/model_b.sql', 'models-interdependent/model_b.sql')
        results = self.build()
        self.assertEqual(len(results), 16)

        # return null from model_b
        shutil.copyfile('test-files/model_b_null.sql', 'models-interdependent/model_b.sql')
        results = self.build(expect_pass=False)
        self.assertEqual(len(results), 16)
        actual = [str(r.status) for r in results]
        expected = ['error']*4 + ['skipped']*7 + ['pass']*2 + ['success']*3
        self.assertEqual(sorted(actual), sorted(expected))

@@ -0,0 +1,29 @@
{{
    config(
        materialized='incremental',
        unique_key='id',
        on_schema_change='sync_all_columns'

    )
}}

WITH source_data AS (SELECT * FROM {{ ref('model_a') }} )

{% set string_type = 'string' if target.type == 'bigquery' else 'varchar(10)' %}

{% if is_incremental() %}

SELECT id,
       cast(field1 as {{string_type}}) as field1

FROM source_data WHERE id NOT IN (SELECT id from {{ this }} )

{% else %}

select id,
       cast(field1 as {{string_type}}) as field1,
       cast(field2 as {{string_type}}) as field2

from source_data where id <= 3

{% endif %}
@@ -0,0 +1,17 @@
{{
    config(materialized='table')
}}

with source_data as (

    select * from {{ ref('model_a') }}

)

{% set string_type = 'string' if target.type == 'bigquery' else 'varchar(10)' %}

select id
       ,cast(field1 as {{string_type}}) as field1

from source_data
order by id
@@ -1,7 +1,7 @@
from test.integration.base import DBTIntegrationTest, FakeArgs, use_profile


class TestSelectionExpansion(DBTIntegrationTest):
class TestIncrementalSchemaChange(DBTIntegrationTest):
    @property
    def schema(self):
        return "test_incremental_schema_069"
@@ -17,24 +17,11 @@ class TestSelectionExpansion(DBTIntegrationTest):
            "test-paths": ["tests"]
        }

    def list_tests_and_assert(self, include, exclude, expected_tests):
        list_args = [ 'ls', '--resource-type', 'test']
        if include:
            list_args.extend(('--select', include))
        if exclude:
            list_args.extend(('--exclude', exclude))

        listed = self.run_dbt(list_args)
        print(listed)
        assert len(listed) == len(expected_tests)

        test_names = [name.split('.')[2] for name in listed]
        assert sorted(test_names) == sorted(expected_tests)

    def run_tests_and_assert(
        self, include, exclude, expected_tests, compare_source, compare_target, schema = False, data = False
    ):
    def run_twice_and_assert(
        self, include, compare_source, compare_target
    ):

        # dbt run (twice)
        run_args = ['run']
        if include:
            run_args.extend(('--models', include))
@@ -44,74 +31,33 @@ class TestSelectionExpansion(DBTIntegrationTest):

        self.assertEqual(len(results_one), 3)
        self.assertEqual(len(results_two), 3)

        test_args = ['test']
        if include:
            test_args.extend(('--models', include))
        if exclude:
            test_args.extend(('--exclude', exclude))
        if schema:
            test_args.append('--schema')
        if data:
            test_args.append('--data')

        results = self.run_dbt(test_args)
        tests_run = [r.node.name for r in results]
        assert len(tests_run) == len(expected_tests)
        assert sorted(tests_run) == sorted(expected_tests)
        self.assertTablesEqual(compare_source, compare_target)

    def run_incremental_ignore(self):
        select = 'model_a incremental_ignore incremental_ignore_target'
        compare_source = 'incremental_ignore'
        compare_target = 'incremental_ignore_target'
        exclude = None
        expected = [
            'select_from_a',
            'select_from_incremental_ignore',
            'select_from_incremental_ignore_target',
            'unique_model_a_id',
            'unique_incremental_ignore_id',
            'unique_incremental_ignore_target_id'
        ]

        self.list_tests_and_assert(select, exclude, expected)
        self.run_tests_and_assert(select, exclude, expected, compare_source, compare_target)
        self.run_twice_and_assert(select, compare_source, compare_target)

    def run_incremental_append_new_columns(self):
        select = 'model_a incremental_append_new_columns incremental_append_new_columns_target'
        compare_source = 'incremental_append_new_columns'
        compare_target = 'incremental_append_new_columns_target'
        exclude = None
        expected = [
            'select_from_a',
            'select_from_incremental_append_new_columns',
            'select_from_incremental_append_new_columns_target',
            'unique_model_a_id',
            'unique_incremental_append_new_columns_id',
            'unique_incremental_append_new_columns_target_id'
        ]

        self.list_tests_and_assert(select, exclude, expected)
        self.run_tests_and_assert(select, exclude, expected, compare_source, compare_target)
        self.run_twice_and_assert(select, compare_source, compare_target)

    def run_incremental_sync_all_columns(self):
        select = 'model_a incremental_sync_all_columns incremental_sync_all_columns_target'
        compare_source = 'incremental_sync_all_columns'
        compare_target = 'incremental_sync_all_columns_target'
        exclude = None
        expected = [
            'select_from_a',
            'select_from_incremental_sync_all_columns',
            'select_from_incremental_sync_all_columns_target',
            'unique_model_a_id',
            'unique_incremental_sync_all_columns_id',
            'unique_incremental_sync_all_columns_target_id'
        ]

        self.list_tests_and_assert(select, exclude, expected)
        self.run_tests_and_assert(select, exclude, expected, compare_source, compare_target)
        self.run_twice_and_assert(select, compare_source, compare_target)

    def run_incremental_sync_remove_only(self):
        select = 'model_a incremental_sync_remove_only incremental_sync_remove_only_target'
        compare_source = 'incremental_sync_remove_only'
        compare_target = 'incremental_sync_remove_only_target'
        self.run_twice_and_assert(select, compare_source, compare_target)

    def run_incremental_fail_on_schema_change(self):
        select = 'model_a incremental_fail'
        results_one = self.run_dbt(['run', '--models', select, '--full-refresh'])
@@ -130,7 +76,8 @@ class TestSelectionExpansion(DBTIntegrationTest):
    @use_profile('postgres')
    def test__postgres__run_incremental_sync_all_columns(self):
        self.run_incremental_sync_all_columns()
        self.run_incremental_sync_remove_only()

    @use_profile('postgres')
    def test__postgres__run_incremental_fail_on_schema_change(self):
        self.run_incremental_fail_on_schema_change()
@@ -147,6 +94,7 @@ class TestSelectionExpansion(DBTIntegrationTest):
    @use_profile('redshift')
    def test__redshift__run_incremental_sync_all_columns(self):
        self.run_incremental_sync_all_columns()
        self.run_incremental_sync_remove_only()

    @use_profile('redshift')
    def test__redshift__run_incremental_fail_on_schema_change(self):
@@ -164,6 +112,7 @@ class TestSelectionExpansion(DBTIntegrationTest):
    @use_profile('snowflake')
    def test__snowflake__run_incremental_sync_all_columns(self):
        self.run_incremental_sync_all_columns()
        self.run_incremental_sync_remove_only()

    @use_profile('snowflake')
    def test__snowflake__run_incremental_fail_on_schema_change(self):
@@ -181,6 +130,7 @@ class TestSelectionExpansion(DBTIntegrationTest):
    @use_profile('bigquery')
    def test__bigquery__run_incremental_sync_all_columns(self):
        self.run_incremental_sync_all_columns()
        self.run_incremental_sync_remove_only()

    @use_profile('bigquery')
    def test__bigquery__run_incremental_fail_on_schema_change(self):

@@ -120,7 +120,7 @@ def test_run_specs(include, exclude, expected):
    manifest = _get_manifest(graph)
    selector = graph_selector.NodeSelector(graph, manifest)
    spec = graph_cli.parse_difference(include, exclude)
    selected = selector.select_nodes(spec)
    selected, _ = selector.select_nodes(spec)

    assert selected == expected

@@ -273,7 +273,7 @@ class ManifestTest(unittest.TestCase):
            'child_map': {},
            'metadata': {
                'generated_at': '2018-02-14T09:15:13Z',
                'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
                'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
                'dbt_version': dbt.version.__version__,
                'env': {ENV_KEY_NAME: 'value'},
                # invocation_id is None, so it will not be present
@@ -419,7 +419,7 @@ class ManifestTest(unittest.TestCase):
            'docs': {},
            'metadata': {
                'generated_at': '2018-02-14T09:15:13Z',
                'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
                'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
                'dbt_version': dbt.version.__version__,
                'project_id': '098f6bcd4621d373cade4e832627b4f6',
                'user_id': 'cfc9500f-dc7f-4c83-9ea7-2c581c1b38cf',
@@ -662,7 +662,7 @@ class MixedManifestTest(unittest.TestCase):
            'child_map': {},
            'metadata': {
                'generated_at': '2018-02-14T09:15:13Z',
                'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
                'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
                'dbt_version': dbt.version.__version__,
                'invocation_id': '01234567-0123-0123-0123-0123456789ab',
                'env': {ENV_KEY_NAME: 'value'},

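The expected-value changes in the three hunks above all track the manifest schema bump from v2 to v3 in this release. As a minimal sketch (not part of this changeset; it assumes the standard dbt artifact layout, where the manifest is written to target/manifest.json with a metadata.dbt_schema_version field), checking which schema version a project's artifact carries looks like:

# Sketch: inspect the schema version of a generated manifest artifact.
import json

with open('target/manifest.json') as fp:
    manifest = json.load(fp)

# prints e.g. 'https://schemas.getdbt.com/dbt/manifest/v3.json' on dbt 0.21.x
print(manifest['metadata']['dbt_schema_version'])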
@@ -5,9 +5,11 @@ from unittest import mock
import os
import yaml

from copy import deepcopy
import dbt.flags
import dbt.parser
from dbt import tracking
from dbt.context.context_config import ContextConfig
from dbt.exceptions import CompilationException
from dbt.parser import (
    ModelParser, MacroParser, DataTestParser, SchemaParser,
@@ -32,6 +34,9 @@ from dbt.contracts.graph.parsed import (
    UnpatchedSourceDefinition
)
from dbt.contracts.graph.unparsed import Docs
from dbt.parser.models import (
    _get_config_call_dict, _get_exp_sample_result, _get_sample_result
)
import itertools
from .utils import config_from_parts_or_dicts, normalize, generate_name_macros, MockNode, MockSource, MockDocumentation

@@ -573,6 +578,144 @@ class StaticModelParserTest(BaseParserTest):
        assert(self.parser._has_banned_macro(node))


class StaticModelParserUnitTest(BaseParserTest):

    def setUp(self):
        super().setUp()
        self.parser = ModelParser(
            project=self.snowplow_project_config,
            manifest=self.manifest,
            root_project=self.root_project_config,
        )
        self.example_node = ParsedModelNode(
            alias='model_1',
            name='model_1',
            database='test',
            schema='analytics',
            resource_type=NodeType.Model,
            unique_id='model.snowplow.model_1',
            fqn=['snowplow', 'nested', 'model_1'],
            package_name='snowplow',
            original_file_path=normalize('models/nested/model_1.sql'),
            root_path=get_abs_os_path('./dbt_packages/snowplow'),
            config=NodeConfig(materialized='table'),
            path=normalize('nested/model_1.sql'),
            raw_sql='{{ config(materialized="table") }}select 1 as id',
            checksum=None,
            unrendered_config={'materialized': 'table'},
        )
        self.example_config = ContextConfig(
            self.root_project_config,
            self.example_node.fqn,
            self.example_node.resource_type,
            self.snowplow_project_config,
        )

    def file_block_for(self, data, filename):
        return super().file_block_for(data, filename, 'models')

    # tests that configs get extracted properly. the function should respect merge behavior,
    # but because it's only reading from one dictionary it won't matter except in edge cases
    # like this example with tags changing type to a list.
    def test_config_shifting(self):
        static_parser_result = {
            'configs': [
                ('hello', 'world'),
                ('flag', True),
                ('tags', 'tag1'),
                ('tags', 'tag2')
            ]
        }
        expected = {
            'hello': 'world',
            'flag': True,
            'tags': ['tag1', 'tag2']
        }
        got = _get_config_call_dict(static_parser_result)
        self.assertEqual(expected, got)

    def test_sample_results(self):
        # --- missed ref --- #
        node = deepcopy(self.example_node)
        config = deepcopy(self.example_config)
        sample_node = deepcopy(self.example_node)
        sample_config = deepcopy(self.example_config)

        sample_node.refs = []
        node.refs = ['myref']

        result = _get_sample_result(sample_node, sample_config, node, config)
        self.assertEqual([(7, "missed_ref_value")], result)

        # --- false positive ref --- #
        node = deepcopy(self.example_node)
        config = deepcopy(self.example_config)
        sample_node = deepcopy(self.example_node)
        sample_config = deepcopy(self.example_config)

        sample_node.refs = ['myref']
        node.refs = []

        result = _get_sample_result(sample_node, sample_config, node, config)
        self.assertEqual([(6, "false_positive_ref_value")], result)

        # --- missed source --- #
        node = deepcopy(self.example_node)
        config = deepcopy(self.example_config)
        sample_node = deepcopy(self.example_node)
        sample_config = deepcopy(self.example_config)

        sample_node.sources = []
        node.sources = [['abc', 'def']]

        result = _get_sample_result(sample_node, sample_config, node, config)
        self.assertEqual([(5, 'missed_source_value')], result)

        # --- false positive source --- #
        node = deepcopy(self.example_node)
        config = deepcopy(self.example_config)
        sample_node = deepcopy(self.example_node)
        sample_config = deepcopy(self.example_config)

        sample_node.sources = [['abc', 'def']]
        node.sources = []

        result = _get_sample_result(sample_node, sample_config, node, config)
        self.assertEqual([(4, 'false_positive_source_value')], result)

        # --- missed config --- #
        node = deepcopy(self.example_node)
        config = deepcopy(self.example_config)
        sample_node = deepcopy(self.example_node)
        sample_config = deepcopy(self.example_config)

        sample_config._config_call_dict = {}
        config._config_call_dict = {'key': 'value'}

        result = _get_sample_result(sample_node, sample_config, node, config)
        self.assertEqual([(3, 'missed_config_value')], result)

        # --- false positive config --- #
        node = deepcopy(self.example_node)
        config = deepcopy(self.example_config)
        sample_node = deepcopy(self.example_node)
        sample_config = deepcopy(self.example_config)

        sample_config._config_call_dict = {'key': 'value'}
        config._config_call_dict = {}

        result = _get_sample_result(sample_node, sample_config, node, config)
        self.assertEqual([(2, "false_positive_config_value")], result)

    def test_exp_sample_results(self):
        node = deepcopy(self.example_node)
        config = deepcopy(self.example_config)
        sample_node = deepcopy(self.example_node)
        sample_config = deepcopy(self.example_config)
        result = _get_exp_sample_result(sample_node, sample_config, node, config)
        self.assertEqual(["00_experimental_exact_match"], result)


class SnapshotParserTest(BaseParserTest):
    def setUp(self):
        super().setUp()
