Compare commits

...

6 Commits

Author SHA1 Message Date
github-actions[bot]
d934e713db Bumping version to 1.4.0rc2 and generate CHANGELOG (#6661)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2023-01-19 12:13:06 -05:00
github-actions[bot]
ef9bb925d3 add backwards compatibility and default argument for incremental_predicates (#6628) (#6660)
* add backwards compatibility and default argument

* changie <3

* Update .changes/unreleased/Fixes-20230117-101342.yaml

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
(cherry picked from commit f841a7ca76)

Co-authored-by: dave-connors-3 <73915542+dave-connors-3@users.noreply.github.com>
2023-01-19 10:24:49 -05:00
github-actions[bot]
f73359b87c [Backport 1.4.latest] convert 062_defer_state_tests (#6657)
* convert 062_defer_state_tests (#6616)

* Fix --favor-state flag

* Convert 062_defer_state_tests

* Revert "Fix --favor-state flag"

This reverts commit ccbdcbad98b26822629364e6fdbd2780db0c20d3.

* Reformat

* Revert "Revert "Fix --favor-state flag""

This reverts commit fa9d2a09d6.

(cherry picked from commit 07a004b301)

* Add changelog entry

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2023-01-19 13:27:23 +01:00
Emily Rockman
b4706c4dec finish message rename in types.proto (#6594) (#6596)
* finish message rename in types.proto

* add new parameter
2023-01-13 10:20:34 -06:00
github-actions[bot]
b46d35c13f Call update_event_status earlier + rename an event (#6572) (#6591)
* Rename HookFinished -> FinishedRunningStats

* Move update_event_status earlier when node finishes

* Add changelog entry

* Add update_event_status for skip

* Update changelog entry

(cherry picked from commit 86e8722cd8)

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2023-01-13 11:53:14 +01:00
github-actions[bot]
eba90863ed Bumping version to 1.4.0rc1 and generate CHANGELOG (#6569)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2023-01-10 22:04:55 -05:00
79 changed files with 1332 additions and 1159 deletions

View File

@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 1.4.0b1
+current_version = 1.4.0rc2
parse = (?P<major>\d+)
	\.(?P<minor>\d+)
	\.(?P<patch>\d+)
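For orientation, bumpversion's `parse` option is a named-group regex applied to the current version string. A standalone Python sketch of how such a pattern decomposes `1.4.0rc2` (the `prerelease` group here is an assumption for illustration, since the hunk shows only the first lines of the multi-line pattern):

```python
import re

# Sketch of a bumpversion-style parse pattern; the prerelease group is assumed,
# as the hunk above only shows the major/minor/patch groups.
VERSION_RE = re.compile(
    r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(?P<prerelease>[a-z]+\d+)?"
)

match = VERSION_RE.match("1.4.0rc2")
assert match is not None
print(match.groupdict())
# {'major': '1', 'minor': '4', 'patch': '0', 'prerelease': 'rc2'}
```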

.changes/1.4.0-rc1.md (new file)
View File

@@ -0,0 +1,55 @@
## dbt-core 1.4.0-rc1 - January 11, 2023
### Breaking Changes
- Cleaned up exceptions to directly raise in code. Also updated the existing exception to meet PEP guidelines. Removed use of all exception functions in the code base and marked them all as deprecated to be removed next minor release. ([#6339](https://github.com/dbt-labs/dbt-core/issues/6339), [#6393](https://github.com/dbt-labs/dbt-core/issues/6393), [#6460](https://github.com/dbt-labs/dbt-core/issues/6460))
### Features
- Making timestamp optional for metrics ([#6398](https://github.com/dbt-labs/dbt-core/issues/6398))
- The meta configuration field is now included in the node_info property of structured logs. ([#6216](https://github.com/dbt-labs/dbt-core/issues/6216))
- Adds buildable selection mode ([#6365](https://github.com/dbt-labs/dbt-core/issues/6365))
- --warn-error-options: Treat warnings as errors for specific events, based on user configuration ([#6165](https://github.com/dbt-labs/dbt-core/issues/6165))
### Fixes
- fix missing f-strings, convert old .format() messages to f-strings for consistency ([#6241](https://github.com/dbt-labs/dbt-core/issues/6241))
- Fix typo in util.py ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904))
- add pre-commit install to make dev script in Makefile ([#6269](https://github.com/dbt-labs/dbt-core/issues/6269))
- Late-rendering for `pre_` and `post_hook`s in `dbt_project.yml` ([#6411](https://github.com/dbt-labs/dbt-core/issues/6411))
- [CT-1591] Don't parse empty Python files ([#6345](https://github.com/dbt-labs/dbt-core/issues/6345))
- fix docs generate --defer by adding defer_to_manifest to before_run ([#6488](https://github.com/dbt-labs/dbt-core/issues/6488))
- Bug when partial parsing with an empty schema file ([#4850](https://github.com/dbt-labs/dbt-core/issues/4850))
- Fix DBT_FAVOR_STATE env var ([#5859](https://github.com/dbt-labs/dbt-core/issues/5859))
- Restore historical behavior of certain disabled test messages, so that they are at the less obtrusive debug level, rather than the warning level. ([#6501](https://github.com/dbt-labs/dbt-core/issues/6501))
- Bump mashumuro version to get regression fix and add unit test to verify that fix. ([#6428](https://github.com/dbt-labs/dbt-core/issues/6428))
### Docs
- Updated minor typos encountered when skipping profile setup ([dbt-docs/#6529](https://github.com/dbt-labs/dbt-docs/issues/6529))
### Under the Hood
- Treat dense text blobs as binary for `git grep` ([#6294](https://github.com/dbt-labs/dbt-core/issues/6294))
- Prune partial parsing logging events ([#6313](https://github.com/dbt-labs/dbt-core/issues/6313))
- Updating the deprecation warning in the metric attributes renamed event ([#6507](https://github.com/dbt-labs/dbt-core/issues/6507))
- [CT-1693] Port severity test to Pytest ([#6466](https://github.com/dbt-labs/dbt-core/issues/6466))
- [CT-1694] Deprecate event tracking tests ([#6467](https://github.com/dbt-labs/dbt-core/issues/6467))
- Reorganize structured logging events to have two top keys ([#6311](https://github.com/dbt-labs/dbt-core/issues/6311))
- Combine some logging events ([#1716](https://github.com/dbt-labs/dbt-core/issues/1716), [#1717](https://github.com/dbt-labs/dbt-core/issues/1717), [#1719](https://github.com/dbt-labs/dbt-core/issues/1719))
- Check length of escaped strings in the adapter test ([#6566](https://github.com/dbt-labs/dbt-core/issues/6566))
### Dependencies
- Update agate requirement from <1.6.4,>=1.6 to >=1.6,<1.7.1 in /core ([#6506](https://github.com/dbt-labs/dbt-core/pull/6506))
### Contributors
- [@NiallRees](https://github.com/NiallRees) ([#5859](https://github.com/dbt-labs/dbt-core/issues/5859))
- [@agpapa](https://github.com/agpapa) ([#6365](https://github.com/dbt-labs/dbt-core/issues/6365))
- [@callum-mcdata](https://github.com/callum-mcdata) ([#6398](https://github.com/dbt-labs/dbt-core/issues/6398), [#6507](https://github.com/dbt-labs/dbt-core/issues/6507))
- [@dbeatty10](https://github.com/dbeatty10) ([#6411](https://github.com/dbt-labs/dbt-core/issues/6411), [#6294](https://github.com/dbt-labs/dbt-core/issues/6294), [#6566](https://github.com/dbt-labs/dbt-core/issues/6566))
- [@eltociear](https://github.com/eltociear) ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904))
- [@justbldwn](https://github.com/justbldwn) ([#6241](https://github.com/dbt-labs/dbt-core/issues/6241), [#6269](https://github.com/dbt-labs/dbt-core/issues/6269))
- [@mivanicova](https://github.com/mivanicova) ([#6488](https://github.com/dbt-labs/dbt-core/issues/6488))
- [@nshuman1](https://github.com/nshuman1) ([dbt-docs/#6529](https://github.com/dbt-labs/dbt-docs/issues/6529))
- [@tmastny](https://github.com/tmastny) ([#6216](https://github.com/dbt-labs/dbt-core/issues/6216))

.changes/1.4.0-rc2.md (new file)
View File

@@ -0,0 +1,10 @@
## dbt-core 1.4.0-rc2 - January 19, 2023
### Fixes
- Call update_event_status earlier for node results. Rename event 'HookFinished' -> FinishedRunningStats ([#6571](https://github.com/dbt-labs/dbt-core/issues/6571))
- Provide backward compatibility for `get_merge_sql` arguments ([#6625](https://github.com/dbt-labs/dbt-core/issues/6625))
- Fix behavior of --favor-state with --defer ([#6617](https://github.com/dbt-labs/dbt-core/issues/6617))
### Contributors
- [@dave-connors-3](https://github.com/dave-connors-3) ([#6625](https://github.com/dbt-labs/dbt-core/issues/6625))

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Call update_event_status earlier for node results. Rename event 'HookFinished' -> FinishedRunningStats
time: 2023-01-11T13:40:58.577722+01:00
custom:
Author: jtcohen6
Issue: "6571"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Provide backward compatibility for `get_merge_sql` arguments
time: 2023-01-17T10:13:42.118336-06:00
custom:
Author: dave-connors-3
Issue: "6625"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Fix behavior of --favor-state with --defer
time: 2023-01-19T11:11:01.354227+01:00
custom:
Author: jtcohen6
Issue: "6617"

View File

@@ -5,6 +5,74 @@
- "Breaking changes" listed under a version may require action from end users or external maintainers when upgrading to that version. - "Breaking changes" listed under a version may require action from end users or external maintainers when upgrading to that version.
- Do not edit this file directly. This file is auto-generated using [changie](https://github.com/miniscruff/changie). For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry) - Do not edit this file directly. This file is auto-generated using [changie](https://github.com/miniscruff/changie). For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry)
## dbt-core 1.4.0-rc2 - January 19, 2023
### Fixes
- Call update_event_status earlier for node results. Rename event 'HookFinished' -> FinishedRunningStats ([#6571](https://github.com/dbt-labs/dbt-core/issues/6571))
- Provide backward compatibility for `get_merge_sql` arguments ([#6625](https://github.com/dbt-labs/dbt-core/issues/6625))
- Fix behavior of --favor-state with --defer ([#6617](https://github.com/dbt-labs/dbt-core/issues/6617))
### Contributors
- [@dave-connors-3](https://github.com/dave-connors-3) ([#6625](https://github.com/dbt-labs/dbt-core/issues/6625))
## dbt-core 1.4.0-rc1 - January 11, 2023
### Breaking Changes
- Cleaned up exceptions to directly raise in code. Also updated the existing exception to meet PEP guidelines. Removed use of all exception functions in the code base and marked them all as deprecated to be removed next minor release. ([#6339](https://github.com/dbt-labs/dbt-core/issues/6339), [#6393](https://github.com/dbt-labs/dbt-core/issues/6393), [#6460](https://github.com/dbt-labs/dbt-core/issues/6460))
### Features
- Making timestamp optional for metrics ([#6398](https://github.com/dbt-labs/dbt-core/issues/6398))
- The meta configuration field is now included in the node_info property of structured logs. ([#6216](https://github.com/dbt-labs/dbt-core/issues/6216))
- Adds buildable selection mode ([#6365](https://github.com/dbt-labs/dbt-core/issues/6365))
- --warn-error-options: Treat warnings as errors for specific events, based on user configuration ([#6165](https://github.com/dbt-labs/dbt-core/issues/6165))
### Fixes
- fix missing f-strings, convert old .format() messages to f-strings for consistency ([#6241](https://github.com/dbt-labs/dbt-core/issues/6241))
- Fix typo in util.py ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904))
- add pre-commit install to make dev script in Makefile ([#6269](https://github.com/dbt-labs/dbt-core/issues/6269))
- Late-rendering for `pre_` and `post_hook`s in `dbt_project.yml` ([#6411](https://github.com/dbt-labs/dbt-core/issues/6411))
- [CT-1591] Don't parse empty Python files ([#6345](https://github.com/dbt-labs/dbt-core/issues/6345))
- fix docs generate --defer by adding defer_to_manifest to before_run ([#6488](https://github.com/dbt-labs/dbt-core/issues/6488))
- Bug when partial parsing with an empty schema file ([#4850](https://github.com/dbt-labs/dbt-core/issues/4850))
- Fix DBT_FAVOR_STATE env var ([#5859](https://github.com/dbt-labs/dbt-core/issues/5859))
- Restore historical behavior of certain disabled test messages, so that they are at the less obtrusive debug level, rather than the warning level. ([#6501](https://github.com/dbt-labs/dbt-core/issues/6501))
- Bump mashumuro version to get regression fix and add unit test to verify that fix. ([#6428](https://github.com/dbt-labs/dbt-core/issues/6428))
### Docs
- Updated minor typos encountered when skipping profile setup ([dbt-docs/#6529](https://github.com/dbt-labs/dbt-docs/issues/6529))
### Under the Hood
- Treat dense text blobs as binary for `git grep` ([#6294](https://github.com/dbt-labs/dbt-core/issues/6294))
- Prune partial parsing logging events ([#6313](https://github.com/dbt-labs/dbt-core/issues/6313))
- Updating the deprecation warning in the metric attributes renamed event ([#6507](https://github.com/dbt-labs/dbt-core/issues/6507))
- [CT-1693] Port severity test to Pytest ([#6466](https://github.com/dbt-labs/dbt-core/issues/6466))
- [CT-1694] Deprecate event tracking tests ([#6467](https://github.com/dbt-labs/dbt-core/issues/6467))
- Reorganize structured logging events to have two top keys ([#6311](https://github.com/dbt-labs/dbt-core/issues/6311))
- Combine some logging events ([#1716](https://github.com/dbt-labs/dbt-core/issues/1716), [#1717](https://github.com/dbt-labs/dbt-core/issues/1717), [#1719](https://github.com/dbt-labs/dbt-core/issues/1719))
- Check length of escaped strings in the adapter test ([#6566](https://github.com/dbt-labs/dbt-core/issues/6566))
### Dependencies
- Update agate requirement from <1.6.4,>=1.6 to >=1.6,<1.7.1 in /core ([#6506](https://github.com/dbt-labs/dbt-core/pull/6506))
### Contributors
- [@NiallRees](https://github.com/NiallRees) ([#5859](https://github.com/dbt-labs/dbt-core/issues/5859))
- [@agpapa](https://github.com/agpapa) ([#6365](https://github.com/dbt-labs/dbt-core/issues/6365))
- [@callum-mcdata](https://github.com/callum-mcdata) ([#6398](https://github.com/dbt-labs/dbt-core/issues/6398), [#6507](https://github.com/dbt-labs/dbt-core/issues/6507))
- [@dbeatty10](https://github.com/dbeatty10) ([#6411](https://github.com/dbt-labs/dbt-core/issues/6411), [#6294](https://github.com/dbt-labs/dbt-core/issues/6294), [#6566](https://github.com/dbt-labs/dbt-core/issues/6566))
- [@eltociear](https://github.com/eltociear) ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904))
- [@justbldwn](https://github.com/justbldwn) ([#6241](https://github.com/dbt-labs/dbt-core/issues/6241), [#6269](https://github.com/dbt-labs/dbt-core/issues/6269))
- [@mivanicova](https://github.com/mivanicova) ([#6488](https://github.com/dbt-labs/dbt-core/issues/6488))
- [@nshuman1](https://github.com/nshuman1) ([dbt-docs/#6529](https://github.com/dbt-labs/dbt-docs/issues/6529))
- [@tmastny](https://github.com/tmastny) ([#6216](https://github.com/dbt-labs/dbt-core/issues/6216))
## dbt-core 1.4.0-b1 - December 15, 2022
### Features
@@ -94,7 +162,6 @@
- [@timle2](https://github.com/timle2) ([#4205](https://github.com/dbt-labs/dbt-core/issues/4205))
- [@dave-connors-3](https://github.com/dave-connors-3) ([#5680](https://github.com/dbt-labs/dbt-core/issues/5680))
## Previous Releases
For information on prior major and minor releases, see their changelogs:

View File

@@ -9,7 +9,7 @@ from dbt.config import Profile, Project, read_user_config
from dbt.config.renderer import DbtProjectYamlRenderer, ProfileRenderer
from dbt.events.functions import fire_event
from dbt.events.types import InvalidOptionYAML
-from dbt.exceptions import DbtValidationError, OptionNotYamlDict
+from dbt.exceptions import DbtValidationError, OptionNotYamlDictError
def parse_cli_vars(var_string: str) -> Dict[str, Any]:
@@ -23,7 +23,7 @@ def parse_cli_yaml_string(var_string: str, cli_option_name: str) -> Dict[str, An
        if var_type is dict:
            return cli_vars
        else:
-            raise OptionNotYamlDict(var_type, cli_option_name)
+            raise OptionNotYamlDictError(var_type, cli_option_name)
    except DbtValidationError:
        fire_event(InvalidOptionYAML(option_name=cli_option_name))
        raise

View File

@@ -47,7 +47,9 @@ class NodeInfo(betterproto.Message):
    node_status: str = betterproto.string_field(6)
    node_started_at: str = betterproto.string_field(7)
    node_finished_at: str = betterproto.string_field(8)
-    meta: str = betterproto.string_field(9)
+    meta: Dict[str, str] = betterproto.map_field(
+        9, betterproto.TYPE_STRING, betterproto.TYPE_STRING
+    )
@dataclass
@@ -945,7 +947,7 @@ class HooksRunningMsg(betterproto.Message):
@dataclass
-class HookFinished(betterproto.Message):
+class FinishedRunningStats(betterproto.Message):
    """E047"""
    stat_line: str = betterproto.string_field(1)
@@ -954,9 +956,9 @@ class HookFinished(betterproto.Message):
@dataclass
-class HookFinishedMsg(betterproto.Message):
+class FinishedRunningStatsMsg(betterproto.Message):
    info: "EventInfo" = betterproto.message_field(1)
-    data: "HookFinished" = betterproto.message_field(2)
+    data: "FinishedRunningStats" = betterproto.message_field(2)
@dataclass
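A trimmed sketch of the regenerated message in use: the `map_field` call and field numbers are taken from the hunk above, the other `NodeInfo` fields are omitted for brevity, and the example values are invented. With `meta` now a string-to-string map, structured logs can carry a node's `meta` config as real key/value pairs rather than one opaque string.

```python
from dataclasses import dataclass
from typing import Dict

import betterproto


@dataclass
class NodeInfo(betterproto.Message):
    # Only two of the fields from the real message; numbers match the hunk.
    node_status: str = betterproto.string_field(6)
    meta: Dict[str, str] = betterproto.map_field(
        9, betterproto.TYPE_STRING, betterproto.TYPE_STRING
    )


info = NodeInfo(node_status="success", meta={"owner": "analytics"})
print(info.meta["owner"])  # per-node meta is now addressable by key
```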

View File

@@ -124,12 +124,13 @@ message MissingProfileTargetMsg {
// Skipped A006, A007
// A008
-message InvalidVarsYAML {
+message InvalidOptionYAML {
+  string option_name = 1;
}
-message InvalidVarsYAMLMsg {
+message InvalidOptionYAMLMsg {
  EventInfo info = 1;
-  InvalidVarsYAML data = 2;
+  InvalidOptionYAML data = 2;
}
// A009
@@ -747,15 +748,15 @@ message HooksRunningMsg {
}
// E047
-message HookFinished {
+message FinishedRunningStats {
  string stat_line = 1;
  string execution = 2;
  float execution_time = 3;
}
-message HookFinishedMsg {
+message FinishedRunningStatsMsg {
  EventInfo info = 1;
-  HookFinished data = 2;
+  FinishedRunningStats data = 2;
}

View File

@@ -755,7 +755,7 @@ class HooksRunning(InfoLevel, pt.HooksRunning):
@dataclass
-class HookFinished(InfoLevel, pt.HookFinished):
+class FinishedRunningStats(InfoLevel, pt.FinishedRunningStats):
    def code(self):
        return "E047"

View File

@@ -1703,7 +1703,7 @@ class UninstalledPackagesFoundError(CompilationError):
        return msg
-class OptionNotYamlDict(CompilationError):
+class OptionNotYamlDictError(CompilationError):
    def __init__(self, var_type, option_name):
        self.var_type = var_type
        self.option_name = option_name

View File

@@ -1,8 +1,10 @@
-{% macro get_merge_sql(target, source, unique_key, dest_columns, incremental_predicates) -%}
+{% macro get_merge_sql(target, source, unique_key, dest_columns, incremental_predicates=none) -%}
+  -- back compat for old kwarg name
+  {% set incremental_predicates = kwargs.get('predicates', incremental_predicates) %}
  {{ adapter.dispatch('get_merge_sql', 'dbt')(target, source, unique_key, dest_columns, incremental_predicates) }}
{%- endmacro %}
-{% macro default__get_merge_sql(target, source, unique_key, dest_columns, incremental_predicates) -%}
+{% macro default__get_merge_sql(target, source, unique_key, dest_columns, incremental_predicates=none) -%}
    {%- set predicates = [] if incremental_predicates is none else [] + incremental_predicates -%}
    {%- set dest_cols_csv = get_quoted_csv(dest_columns | map(attribute="name")) -%}
    {%- set merge_update_columns = config.get('merge_update_columns') -%}
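The same back-compat pattern, sketched as plain Python rather than Jinja (the function body and SQL string are stand-ins, not dbt's macro): the new keyword defaults to none, and the legacy `predicates` kwarg is still honored when a caller passes it.

```python
# Python analogue of the Jinja back-compat pattern above; names mirror the
# macro, but the body is a stand-in for illustration only.
def get_merge_sql(target, source, unique_key, dest_columns,
                  incremental_predicates=None, **kwargs):
    # back compat for old kwarg name
    incremental_predicates = kwargs.get("predicates", incremental_predicates)
    predicates = [] if incremental_predicates is None else list(incremental_predicates)
    return f"merge into {target} using {source} ... -- {len(predicates)} extra predicate(s)"


# Both the new and the legacy spelling resolve to the same argument:
print(get_merge_sql("tgt", "src", "id", ["id"], incremental_predicates=["d.x > 1"]))
print(get_merge_sql("tgt", "src", "id", ["id"], predicates=["d.x > 1"]))
```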

View File

@@ -486,7 +486,7 @@ def _build_snapshot_subparser(subparsers, base_subparser):
    return sub
-def _add_defer_argument(*subparsers):
+def _add_defer_arguments(*subparsers):
    for sub in subparsers:
        sub.add_optional_argument_inverse(
            "--defer",
@@ -499,10 +499,6 @@ def _add_defer_argument(*subparsers):
""", """,
default=flags.DEFER_MODE, default=flags.DEFER_MODE,
) )
def _add_favor_state_argument(*subparsers):
for sub in subparsers:
sub.add_optional_argument_inverse( sub.add_optional_argument_inverse(
"--favor-state", "--favor-state",
enable_help=""" enable_help="""
@@ -580,7 +576,7 @@ def _build_docs_generate_subparser(subparsers, base_subparser):
            Do not run "dbt compile" as part of docs generation
            """,
    )
-    _add_defer_argument(generate_sub)
+    _add_defer_arguments(generate_sub)
    return generate_sub
@@ -1192,9 +1188,7 @@ def parse_args(args, cls=DBTArgumentParser):
    # list_sub sets up its own arguments.
    _add_selection_arguments(run_sub, compile_sub, generate_sub, test_sub, snapshot_sub, seed_sub)
    # --defer
-    _add_defer_argument(run_sub, test_sub, build_sub, snapshot_sub, compile_sub)
+    _add_defer_arguments(run_sub, test_sub, build_sub, snapshot_sub, compile_sub)
-    # --favor-state
-    _add_favor_state_argument(run_sub, test_sub, build_sub, snapshot_sub)
    # --full-refresh
    _add_table_mutability_arguments(run_sub, compile_sub, build_sub)
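A stock-argparse sketch of the consolidation (dbt's parser uses its own `add_optional_argument_inverse` helper, so this is an analogue rather than the actual code): one helper now registers both flags, so every subcommand that accepts `--defer` also accepts `--favor-state`.

```python
import argparse

def _add_defer_arguments(*subparsers):
    # One helper registers both flags, mirroring the merged function above.
    for sub in subparsers:
        sub.add_argument("--defer", action="store_true")
        sub.add_argument("--favor-state", action="store_true", dest="favor_state")

parser = argparse.ArgumentParser(prog="dbt")
subs = parser.add_subparsers(dest="command")
run_sub, test_sub = subs.add_parser("run"), subs.add_parser("test")
_add_defer_arguments(run_sub, test_sub)

args = parser.parse_args(["run", "--defer", "--favor-state"])
print(args.defer, args.favor_state)  # True True
```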

View File

@@ -4,6 +4,7 @@ import time
import traceback
from abc import ABCMeta, abstractmethod
from typing import Type, Union, Dict, Any, Optional
+from datetime import datetime
from dbt import tracking
from dbt import flags
@@ -208,6 +209,9 @@ class BaseRunner(metaclass=ABCMeta):
        self.before_execute()
        result = self.safe_run(manifest)
+        self.node.update_event_status(
+            node_status=result.status, finished_at=datetime.utcnow().isoformat()
+        )
        if not self.node.is_ephemeral_model:
            self.after_execute(result)
@@ -448,6 +452,9 @@ class BaseRunner(metaclass=ABCMeta):
                )
            )
        else:
+            # 'skipped' nodes should not have a value for 'node_finished_at'
+            # they do have 'node_started_at', which is set in GraphRunnableTask.call_runner
+            self.node.update_event_status(node_status=RunStatus.Skipped)
            fire_event(
                SkippingDetails(
                    resource_type=self.node.resource_type,
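A toy sketch (stand-in classes, not dbt's) of why the call moved here from `GraphRunnableTask.call_runner`: updating the node's status before `after_execute` fires means the result events emitted there already carry the final `node_status` and `finished_at` in their `node_info`.

```python
from datetime import datetime

class Node:
    """Stand-in for a dbt node that accumulates event status."""
    def __init__(self):
        self.event_status = {}

    def update_event_status(self, **kwargs):
        self.event_status.update(kwargs)

def after_execute(node):
    # In dbt this fires the result event; here we just show what it would see.
    print("result event node_info:", node.event_status)

def run_with_hooks(node, result_status):
    # ... before_execute() and safe_run() would happen here ...
    node.update_event_status(
        node_status=result_status, finished_at=datetime.utcnow().isoformat()
    )
    after_execute(node)  # now observes the final status

run_with_hooks(Node(), "success")
```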

View File

@@ -83,6 +83,7 @@ class CompileTask(GraphRunnableTask):
                adapter=adapter,
                other=deferred_manifest,
                selected=selected_uids,
+                favor_state=bool(self.args.favor_state),
            )
        # TODO: is it wrong to write the manifest here? I think it's right...
        self.write_manifest()
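A toy illustration (not dbt's implementation) of what threading `favor_state` into the deferral step changes: without it, the state manifest is consulted only when a relation is missing from the current target; with it, the state-manifest version is preferred whenever one exists.

```python
def resolve_relation(name, current_nodes, state_nodes, favor_state=False):
    # Hypothetical resolution order for a deferred reference.
    if favor_state and name in state_nodes:
        return state_nodes[name]
    if name in current_nodes:
        return current_nodes[name]
    return state_nodes.get(name)

current = {"view_model": "dev_schema.view_model"}
state = {"view_model": "prod_schema.view_model", "seed": "prod_schema.seed"}

print(resolve_relation("seed", current, state))                          # defers: missing locally
print(resolve_relation("view_model", current, state))                    # dev_schema wins
print(resolve_relation("view_model", current, state, favor_state=True))  # prod_schema wins
```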

View File

@@ -32,7 +32,7 @@ from dbt.events.types import (
    DatabaseErrorRunningHook,
    EmptyLine,
    HooksRunning,
-    HookFinished,
+    FinishedRunningStats,
    LogModelResult,
    LogStartLine,
    LogHookEndLine,
@@ -421,7 +421,9 @@ class RunTask(CompileTask):
        with TextOnly():
            fire_event(EmptyLine())
        fire_event(
-            HookFinished(stat_line=stat_line, execution=execution, execution_time=execution_time)
+            FinishedRunningStats(
+                stat_line=stat_line, execution=execution, execution_time=execution_time
+            )
        )
    def before_run(self, adapter, selected_uids: AbstractSet[str]):

View File

@@ -226,10 +226,6 @@ class GraphRunnableTask(ManifestTask):
        status: Dict[str, str] = {}
        try:
            result = runner.run_with_hooks(self.manifest)
-            status = runner.get_result_status(result)
-            runner.node.update_event_status(
-                node_status=result.status, finished_at=datetime.utcnow().isoformat()
-            )
        finally:
            finishctx = TimestampNamed("finished_at")
            with finishctx, DbtModelState(status):

View File

@@ -235,5 +235,5 @@ def _get_adapter_plugin_names() -> Iterator[str]:
    yield plugin_name
-__version__ = "1.4.0b1"
+__version__ = "1.4.0rc2"
installed = get_installed_version()

View File

@@ -25,7 +25,7 @@ with open(os.path.join(this_directory, "README.md")) as f:
package_name = "dbt-core"
-package_version = "1.4.0b1"
+package_version = "1.4.0rc2"
description = """With dbt, data analysts and engineers can build analytics \
the way engineers build applications."""

View File

@@ -14,12 +14,12 @@ FROM --platform=$build_for python:3.10.7-slim-bullseye as base
# N.B. The refs updated automagically every release via bumpversion
# N.B. dbt-postgres is currently found in the core codebase so a value of dbt-core@<some_version> is correct
-ARG dbt_core_ref=dbt-core@v1.4.0b1
+ARG dbt_core_ref=dbt-core@v1.4.0rc2
-ARG dbt_postgres_ref=dbt-core@v1.4.0b1
+ARG dbt_postgres_ref=dbt-core@v1.4.0rc2
-ARG dbt_redshift_ref=dbt-redshift@v1.4.0b1
+ARG dbt_redshift_ref=dbt-redshift@v1.4.0rc2
-ARG dbt_bigquery_ref=dbt-bigquery@v1.4.0b1
+ARG dbt_bigquery_ref=dbt-bigquery@v1.4.0rc2
-ARG dbt_snowflake_ref=dbt-snowflake@v1.4.0b1
+ARG dbt_snowflake_ref=dbt-snowflake@v1.4.0rc2
-ARG dbt_spark_ref=dbt-spark@v1.4.0b1
+ARG dbt_spark_ref=dbt-spark@v1.4.0rc2
# special case args
ARG dbt_spark_version=all
ARG dbt_third_party

View File

@@ -1 +1 @@
-version = "1.4.0b1"
+version = "1.4.0rc2"

View File

@@ -41,7 +41,7 @@ def _dbt_psycopg2_name():
package_name = "dbt-postgres"
-package_version = "1.4.0b1"
+package_version = "1.4.0rc2"
description = """The postgres adapter plugin for dbt (data build tool)"""
this_directory = os.path.abspath(os.path.dirname(__file__))

View File

@@ -1,2 +0,0 @@
{{ config(materialized='ephemeral') }}
select * from {{ ref('view_model') }}

View File

@@ -1,9 +0,0 @@
version: 2
models:
- name: view_model
columns:
- name: id
tests:
- unique
- not_null
- name: name

View File

@@ -1,5 +0,0 @@
{{ config(materialized='table') }}
select * from {{ ref('ephemeral_model') }}
-- establish a macro dependency to trigger state:modified.macros
-- depends on: {{ my_macro() }}

View File

@@ -1 +0,0 @@
select * from no.such.table

View File

@@ -1,2 +0,0 @@
{{ config(materialized='ephemeral') }}
select * from no.such.table

View File

@@ -1,9 +0,0 @@
version: 2
models:
- name: view_model
columns:
- name: id
tests:
- unique
- not_null
- name: name

View File

@@ -1,5 +0,0 @@
{{ config(materialized='table') }}
select * from {{ ref('ephemeral_model') }}
-- establish a macro dependency to trigger state:modified.macros
-- depends on: {{ my_macro() }}

View File

@@ -1 +0,0 @@
select * from no.such.table

View File

@@ -1,9 +0,0 @@
version: 2
models:
- name: view_model
columns:
- name: id
tests:
- unique
- not_null
- name: name

View File

@@ -1,2 +0,0 @@
{{ config(materialized='table') }}
select 1 as fun

View File

@@ -1 +0,0 @@
select * from {{ ref('seed') }}

View File

@@ -1,13 +0,0 @@
{# trigger infinite recursion if not handled #}
{% macro my_infinitely_recursive_macro() %}
{{ return(adapter.dispatch('my_infinitely_recursive_macro')()) }}
{% endmacro %}
{% macro default__my_infinitely_recursive_macro() %}
{% if unmet_condition %}
{{ my_infinitely_recursive_macro() }}
{% else %}
{{ return('') }}
{% endif %}
{% endmacro %}

View File

@@ -1,3 +0,0 @@
{% macro my_macro() %}
{% do log('in a macro' ) %}
{% endmacro %}

View File

@@ -1,2 +0,0 @@
{{ config(materialized='ephemeral') }}
select * from {{ ref('view_model') }}

View File

@@ -1,8 +0,0 @@
version: 2
exposures:
- name: my_exposure
type: application
depends_on:
- ref('view_model')
owner:
email: test@example.com

View File

@@ -1,10 +0,0 @@
version: 2
models:
- name: view_model
columns:
- name: id
tests:
- unique:
severity: error
- not_null
- name: name

View File

@@ -1,5 +0,0 @@
{{ config(materialized='table') }}
select * from {{ ref('ephemeral_model') }}
-- establish a macro dependency to trigger state:modified.macros
-- depends on: {{ my_macro() }}

View File

@@ -1,4 +0,0 @@
select * from {{ ref('seed') }}
-- establish a macro dependency that trips infinite recursion if not handled
-- depends on: {{ my_infinitely_recursive_macro() }}

View File

@@ -1,6 +0,0 @@
{
"metadata": {
"dbt_schema_version": "https://schemas.getdbt.com/dbt/manifest/v3.json",
"dbt_version": "0.21.1"
}
}

View File

@@ -1,3 +0,0 @@
id,name
1,Alice
2,Bob

View File

@@ -1,14 +0,0 @@
{% snapshot my_cool_snapshot %}
{{
config(
target_database=database,
target_schema=schema,
unique_key='id',
strategy='check',
check_cols=['id'],
)
}}
select * from {{ ref('view_model') }}
{% endsnapshot %}

View File

@@ -1,354 +0,0 @@
from test.integration.base import DBTIntegrationTest, use_profile
import copy
import json
import os
import shutil
import pytest
import dbt.exceptions
class TestDeferState(DBTIntegrationTest):
@property
def schema(self):
return "defer_state_062"
@property
def models(self):
return "models"
def setUp(self):
self.other_schema = None
super().setUp()
self._created_schemas.add(self.other_schema)
@property
def project_config(self):
return {
'config-version': 2,
'seeds': {
'test': {
'quote_columns': False,
}
}
}
def get_profile(self, adapter_type):
if self.other_schema is None:
self.other_schema = self.unique_schema() + '_other'
profile = super().get_profile(adapter_type)
default_name = profile['test']['target']
profile['test']['outputs']['otherschema'] = copy.deepcopy(profile['test']['outputs'][default_name])
profile['test']['outputs']['otherschema']['schema'] = self.other_schema
return profile
def copy_state(self):
assert not os.path.exists('state')
os.makedirs('state')
shutil.copyfile('target/manifest.json', 'state/manifest.json')
def run_and_compile_defer(self):
results = self.run_dbt(['seed'])
assert len(results) == 1
assert not any(r.node.deferred for r in results)
results = self.run_dbt(['run'])
assert len(results) == 2
assert not any(r.node.deferred for r in results)
results = self.run_dbt(['test'])
assert len(results) == 2
# copy files
self.copy_state()
# defer test, it succeeds
results, success = self.run_dbt_and_check(['compile', '--state', 'state', '--defer'])
self.assertEqual(len(results.results), 6)
self.assertEqual(results.results[0].node.name, "seed")
self.assertTrue(success)
def run_and_snapshot_defer(self):
results = self.run_dbt(['seed'])
assert len(results) == 1
assert not any(r.node.deferred for r in results)
results = self.run_dbt(['run'])
assert len(results) == 2
assert not any(r.node.deferred for r in results)
results = self.run_dbt(['test'])
assert len(results) == 2
# snapshot succeeds without --defer
results = self.run_dbt(['snapshot'])
# no state, snapshot fails
with pytest.raises(dbt.exceptions.DbtRuntimeError):
results = self.run_dbt(['snapshot', '--state', 'state', '--defer'])
# copy files
self.copy_state()
# defer test, it succeeds
results = self.run_dbt(['snapshot', '--state', 'state', '--defer'])
# favor_state test, it succeeds
results = self.run_dbt(['snapshot', '--state', 'state', '--defer', '--favor-state'])
def run_and_defer(self):
results = self.run_dbt(['seed'])
assert len(results) == 1
assert not any(r.node.deferred for r in results)
results = self.run_dbt(['run'])
assert len(results) == 2
assert not any(r.node.deferred for r in results)
results = self.run_dbt(['test'])
assert len(results) == 2
# copy files over from the happy times when we had a good target
self.copy_state()
# test tests first, because run will change things
# no state, wrong schema, failure.
self.run_dbt(['test', '--target', 'otherschema'], expect_pass=False)
# test generate docs
# no state, wrong schema, empty nodes
catalog = self.run_dbt(['docs','generate','--target', 'otherschema'])
assert not catalog.nodes
# no state, run also fails
self.run_dbt(['run', '--target', 'otherschema'], expect_pass=False)
# defer test, it succeeds
results = self.run_dbt(['test', '-m', 'view_model+', '--state', 'state', '--defer', '--target', 'otherschema'])
# defer docs generate with state, catalog refers schema from the happy times
catalog = self.run_dbt(['docs','generate', '-m', 'view_model+', '--state', 'state', '--defer','--target', 'otherschema'])
assert self.other_schema not in catalog.nodes["seed.test.seed"].metadata.schema
assert self.unique_schema() in catalog.nodes["seed.test.seed"].metadata.schema
# with state it should work though
results = self.run_dbt(['run', '-m', 'view_model', '--state', 'state', '--defer', '--target', 'otherschema'])
assert self.other_schema not in results[0].node.compiled_code
assert self.unique_schema() in results[0].node.compiled_code
with open('target/manifest.json') as fp:
data = json.load(fp)
assert data['nodes']['seed.test.seed']['deferred']
assert len(results) == 1
def run_and_defer_favor_state(self):
results = self.run_dbt(['seed'])
assert len(results) == 1
assert not any(r.node.deferred for r in results)
results = self.run_dbt(['run'])
assert len(results) == 2
assert not any(r.node.deferred for r in results)
results = self.run_dbt(['test'])
assert len(results) == 2
# copy files over from the happy times when we had a good target
self.copy_state()
# test tests first, because run will change things
# no state, wrong schema, failure.
self.run_dbt(['test', '--target', 'otherschema'], expect_pass=False)
# no state, run also fails
self.run_dbt(['run', '--target', 'otherschema'], expect_pass=False)
# defer test, it succeeds
results = self.run_dbt(['test', '-m', 'view_model+', '--state', 'state', '--defer', '--favor-state', '--target', 'otherschema'])
# with state it should work though
results = self.run_dbt(['run', '-m', 'view_model', '--state', 'state', '--defer', '--favor-state', '--target', 'otherschema'])
assert self.other_schema not in results[0].node.compiled_code
assert self.unique_schema() in results[0].node.compiled_code
with open('target/manifest.json') as fp:
data = json.load(fp)
assert data['nodes']['seed.test.seed']['deferred']
assert len(results) == 1
def run_switchdirs_defer(self):
results = self.run_dbt(['seed'])
assert len(results) == 1
results = self.run_dbt(['run'])
assert len(results) == 2
# copy files over from the happy times when we had a good target
self.copy_state()
self.use_default_project({'model-paths': ['changed_models']})
# the sql here is just wrong, so it should fail
self.run_dbt(
['run', '-m', 'view_model', '--state', 'state', '--defer', '--target', 'otherschema'],
expect_pass=False,
)
# but this should work since we just use the old happy model
self.run_dbt(
['run', '-m', 'table_model', '--state', 'state', '--defer', '--target', 'otherschema'],
expect_pass=True,
)
self.use_default_project({'model-paths': ['changed_models_bad']})
# this should fail because the table model refs a broken ephemeral
# model, which it should see
self.run_dbt(
['run', '-m', 'table_model', '--state', 'state', '--defer', '--target', 'otherschema'],
expect_pass=False,
)
def run_switchdirs_defer_favor_state(self):
results = self.run_dbt(['seed'])
assert len(results) == 1
results = self.run_dbt(['run'])
assert len(results) == 2
# copy files over from the happy times when we had a good target
self.copy_state()
self.use_default_project({'model-paths': ['changed_models']})
# the sql here is just wrong, so it should fail
self.run_dbt(
['run', '-m', 'view_model', '--state', 'state', '--defer', '--favor-state', '--target', 'otherschema'],
expect_pass=False,
)
# but this should work since we just use the old happy model
self.run_dbt(
['run', '-m', 'table_model', '--state', 'state', '--defer', '--favor-state', '--target', 'otherschema'],
expect_pass=True,
)
self.use_default_project({'model-paths': ['changed_models_bad']})
# this should fail because the table model refs a broken ephemeral
# model, which it should see
self.run_dbt(
['run', '-m', 'table_model', '--state', 'state', '--defer', '--favor-state', '--target', 'otherschema'],
expect_pass=False,
)
def run_defer_iff_not_exists(self):
results = self.run_dbt(['seed', '--target', 'otherschema'])
assert len(results) == 1
results = self.run_dbt(['run', '--target', 'otherschema'])
assert len(results) == 2
# copy files over from the happy times when we had a good target
self.copy_state()
results = self.run_dbt(['seed'])
assert len(results) == 1
results = self.run_dbt(['run', '--state', 'state', '--defer'])
assert len(results) == 2
# because the seed now exists in our schema, we shouldn't defer it
assert self.other_schema not in results[0].node.compiled_code
assert self.unique_schema() in results[0].node.compiled_code
def run_defer_iff_not_exists_favor_state(self):
results = self.run_dbt(['seed'])
assert len(results) == 1
results = self.run_dbt(['run'])
assert len(results) == 2
# copy files over from the happy times when we had a good target
self.copy_state()
results = self.run_dbt(['seed'])
assert len(results) == 1
results = self.run_dbt(['run', '--state', 'state', '--defer', '--favor-state', '--target', 'otherschema'])
assert len(results) == 2
# because the seed exists in other schema, we should defer it
assert self.other_schema not in results[0].node.compiled_code
assert self.unique_schema() in results[0].node.compiled_code
def run_defer_deleted_upstream(self):
results = self.run_dbt(['seed'])
assert len(results) == 1
results = self.run_dbt(['run'])
assert len(results) == 2
# copy files over from the happy times when we had a good target
self.copy_state()
self.use_default_project({'model-paths': ['changed_models_missing']})
# ephemeral_model is now gone. previously this caused a
# keyerror (dbt#2875), now it should pass
self.run_dbt(
['run', '-m', 'view_model', '--state', 'state', '--defer', '--target', 'otherschema'],
expect_pass=True,
)
# despite deferral, test should use models just created in our schema
results = self.run_dbt(['test', '--state', 'state', '--defer'])
assert self.other_schema not in results[0].node.compiled_code
assert self.unique_schema() in results[0].node.compiled_code
def run_defer_deleted_upstream_favor_state(self):
results = self.run_dbt(['seed'])
assert len(results) == 1
results = self.run_dbt(['run'])
assert len(results) == 2
# copy files over from the happy times when we had a good target
self.copy_state()
self.use_default_project({'model-paths': ['changed_models_missing']})
self.run_dbt(
['run', '-m', 'view_model', '--state', 'state', '--defer', '--favor-state', '--target', 'otherschema'],
expect_pass=True,
)
# despite deferral, test should use models just created in our schema
results = self.run_dbt(['test', '--state', 'state', '--defer', '--favor-state'])
assert self.other_schema not in results[0].node.compiled_code
assert self.unique_schema() in results[0].node.compiled_code
@use_profile('postgres')
def test_postgres_state_changetarget(self):
self.run_and_defer()
# make sure these commands don't work with --defer
with pytest.raises(SystemExit):
self.run_dbt(['seed', '--defer'])
@use_profile('postgres')
def test_postgres_state_changetarget_favor_state(self):
self.run_and_defer_favor_state()
# make sure these commands don't work with --defer
with pytest.raises(SystemExit):
self.run_dbt(['seed', '--defer'])
@use_profile('postgres')
def test_postgres_state_changedir(self):
self.run_switchdirs_defer()
@use_profile('postgres')
def test_postgres_state_changedir_favor_state(self):
self.run_switchdirs_defer_favor_state()
@use_profile('postgres')
def test_postgres_state_defer_iffnotexists(self):
self.run_defer_iff_not_exists()
@use_profile('postgres')
def test_postgres_state_defer_iffnotexists_favor_state(self):
self.run_defer_iff_not_exists_favor_state()
@use_profile('postgres')
def test_postgres_state_defer_deleted_upstream(self):
self.run_defer_deleted_upstream()
@use_profile('postgres')
def test_postgres_state_defer_deleted_upstream_favor_state(self):
self.run_defer_deleted_upstream_favor_state()
@use_profile('postgres')
def test_postgres_state_snapshot_defer(self):
self.run_and_snapshot_defer()
@use_profile('postgres')
def test_postgres_state_compile_defer(self):
self.run_and_compile_defer()

View File

@@ -1,211 +0,0 @@
from test.integration.base import DBTIntegrationTest, use_profile
import os
import random
import shutil
import string
import pytest
from dbt.exceptions import CompilationError, IncompatibleSchemaError
class TestModifiedState(DBTIntegrationTest):
@property
def schema(self):
return "modified_state_062"
@property
def models(self):
return "models"
@property
def project_config(self):
return {
'config-version': 2,
'macro-paths': ['macros'],
'seeds': {
'test': {
'quote_columns': True,
}
}
}
def _symlink_test_folders(self):
# dbt's normal symlink behavior breaks this test. Copy the files
# so we can freely modify them.
for entry in os.listdir(self.test_original_source_path):
src = os.path.join(self.test_original_source_path, entry)
tst = os.path.join(self.test_root_dir, entry)
if entry in {'models', 'seeds', 'macros', 'previous_state'}:
shutil.copytree(src, tst)
elif os.path.isdir(entry) or entry.endswith('.sql'):
os.symlink(src, tst)
def copy_state(self):
assert not os.path.exists('state')
os.makedirs('state')
shutil.copyfile('target/manifest.json', 'state/manifest.json')
def setUp(self):
super().setUp()
self.run_dbt(['seed'])
self.run_dbt(['run'])
self.copy_state()
@use_profile('postgres')
def test_postgres_changed_seed_contents_state(self):
results = self.run_dbt(['ls', '--resource-type', 'seed', '--select', 'state:modified', '--state', './state'], expect_pass=True)
assert len(results) == 0
with open('seeds/seed.csv') as fp:
fp.readline()
newline = fp.newlines
with open('seeds/seed.csv', 'a') as fp:
fp.write(f'3,carl{newline}')
results = self.run_dbt(['ls', '--resource-type', 'seed', '--select', 'state:modified', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.seed'
results = self.run_dbt(['ls', '--select', 'state:modified', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.seed'
results = self.run_dbt(['ls', '--select', 'state:modified+', '--state', './state'])
assert len(results) == 7
assert set(results) == {'test.seed', 'test.table_model', 'test.view_model', 'test.ephemeral_model', 'test.not_null_view_model_id', 'test.unique_view_model_id', 'exposure:test.my_exposure'}
shutil.rmtree('./state')
self.copy_state()
with open('seeds/seed.csv', 'a') as fp:
# assume each line is ~2 bytes + len(name)
target_size = 1*1024*1024
line_size = 64
num_lines = target_size // line_size
maxlines = num_lines + 4
for idx in range(4, maxlines):
value = ''.join(random.choices(string.ascii_letters, k=62))
fp.write(f'{idx},{value}{newline}')
# now if we run again, we should get a warning
results = self.run_dbt(['ls', '--resource-type', 'seed', '--select', 'state:modified', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.seed'
with pytest.raises(CompilationError) as exc:
self.run_dbt(['--warn-error', 'ls', '--resource-type', 'seed', '--select', 'state:modified', '--state', './state'])
assert '>1MB' in str(exc.value)
shutil.rmtree('./state')
self.copy_state()
# once it's in path mode, we don't mark it as modified if it changes
with open('seeds/seed.csv', 'a') as fp:
fp.write(f'{random},test{newline}')
results = self.run_dbt(['ls', '--resource-type', 'seed', '--select', 'state:modified', '--state', './state'], expect_pass=True)
assert len(results) == 0
@use_profile('postgres')
def test_postgres_changed_seed_config(self):
results = self.run_dbt(['ls', '--resource-type', 'seed', '--select', 'state:modified', '--state', './state'], expect_pass=True)
assert len(results) == 0
self.use_default_project({'seeds': {'test': {'quote_columns': False}}})
# quoting change -> seed changed
results = self.run_dbt(['ls', '--resource-type', 'seed', '--select', 'state:modified', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.seed'
@use_profile('postgres')
def test_postgres_unrendered_config_same(self):
results = self.run_dbt(['ls', '--resource-type', 'model', '--select', 'state:modified', '--state', './state'], expect_pass=True)
assert len(results) == 0
# although this is the default value, dbt will recognize it as a change
# for previously-unconfigured models, because it's been explicitly set
self.use_default_project({'models': {'test': {'materialized': 'view'}}})
results = self.run_dbt(['ls', '--resource-type', 'model', '--select', 'state:modified', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.view_model'
@use_profile('postgres')
def test_postgres_changed_model_contents(self):
results = self.run_dbt(['run', '--models', 'state:modified', '--state', './state'])
assert len(results) == 0
with open('models/table_model.sql') as fp:
fp.readline()
newline = fp.newlines
with open('models/table_model.sql', 'w') as fp:
fp.write("{{ config(materialized='table') }}")
fp.write(newline)
fp.write("select * from {{ ref('seed') }}")
fp.write(newline)
results = self.run_dbt(['run', '--models', 'state:modified', '--state', './state'])
assert len(results) == 1
assert results[0].node.name == 'table_model'
@use_profile('postgres')
def test_postgres_new_macro(self):
with open('macros/macros.sql') as fp:
fp.readline()
newline = fp.newlines
new_macro = '{% macro my_other_macro() %}{% endmacro %}' + newline
# add a new macro to a new file
with open('macros/second_macro.sql', 'w') as fp:
fp.write(new_macro)
results, stdout = self.run_dbt_and_capture(['run', '--models', 'state:modified', '--state', './state'])
assert len(results) == 0
os.remove('macros/second_macro.sql')
# add a new macro to the existing file
with open('macros/macros.sql', 'a') as fp:
fp.write(new_macro)
results, stdout = self.run_dbt_and_capture(['run', '--models', 'state:modified', '--state', './state'])
assert len(results) == 0
@use_profile('postgres')
def test_postgres_changed_macro_contents(self):
with open('macros/macros.sql') as fp:
fp.readline()
newline = fp.newlines
# modify an existing macro
with open('macros/macros.sql', 'w') as fp:
fp.write("{% macro my_macro() %}")
fp.write(newline)
fp.write(" {% do log('in a macro', info=True) %}")
fp.write(newline)
fp.write('{% endmacro %}')
fp.write(newline)
# table_model calls this macro
results, stdout = self.run_dbt_and_capture(['run', '--models', 'state:modified', '--state', './state'])
assert len(results) == 1
@use_profile('postgres')
def test_postgres_changed_exposure(self):
with open('models/exposures.yml', 'a') as fp:
fp.write(' name: John Doe\n')
results, stdout = self.run_dbt_and_capture(['run', '--models', '+state:modified', '--state', './state'])
assert len(results) == 1
assert results[0].node.name == 'view_model'
@use_profile('postgres')
def test_postgres_previous_version_manifest(self):
# This tests that a different schema version in the file throws an error
with self.assertRaises(IncompatibleSchemaError) as exc:
results = self.run_dbt(['ls', '-s', 'state:modified', '--state', './previous_state'])
self.assertEqual(exc.CODE, 10014)

View File

@@ -1,434 +0,0 @@
from test.integration.base import DBTIntegrationTest, use_profile
import os
import random
import shutil
import string
import pytest
class TestRunResultsState(DBTIntegrationTest):
@property
def schema(self):
return "run_results_state_062"
@property
def models(self):
return "models"
@property
def project_config(self):
return {
'config-version': 2,
'macro-paths': ['macros'],
'seeds': {
'test': {
'quote_columns': True,
}
}
}
def _symlink_test_folders(self):
# dbt's normal symlink behavior breaks this test. Copy the files
# so we can freely modify them.
for entry in os.listdir(self.test_original_source_path):
src = os.path.join(self.test_original_source_path, entry)
tst = os.path.join(self.test_root_dir, entry)
if entry in {'models', 'seeds', 'macros'}:
shutil.copytree(src, tst)
elif os.path.isdir(entry) or entry.endswith('.sql'):
os.symlink(src, tst)
def copy_state(self):
assert not os.path.exists('state')
os.makedirs('state')
shutil.copyfile('target/manifest.json', 'state/manifest.json')
shutil.copyfile('target/run_results.json', 'state/run_results.json')
def setUp(self):
super().setUp()
self.run_dbt(['build'])
self.copy_state()
def rebuild_run_dbt(self, expect_pass=True):
shutil.rmtree('./state')
self.run_dbt(['build'], expect_pass=expect_pass)
self.copy_state()
@use_profile('postgres')
def test_postgres_seed_run_results_state(self):
shutil.rmtree('./state')
self.run_dbt(['seed'])
self.copy_state()
results = self.run_dbt(['ls', '--resource-type', 'seed', '--select', 'result:success', '--state', './state'], expect_pass=True)
assert len(results) == 1
assert results[0] == 'test.seed'
results = self.run_dbt(['ls', '--select', 'result:success', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.seed'
results = self.run_dbt(['ls', '--select', 'result:success+', '--state', './state'])
assert len(results) == 7
assert set(results) == {'test.seed', 'test.table_model', 'test.view_model', 'test.ephemeral_model', 'test.not_null_view_model_id', 'test.unique_view_model_id', 'exposure:test.my_exposure'}
with open('seeds/seed.csv') as fp:
fp.readline()
newline = fp.newlines
with open('seeds/seed.csv', 'a') as fp:
fp.write(f'\"\'\'3,carl{newline}')
shutil.rmtree('./state')
self.run_dbt(['seed'], expect_pass=False)
self.copy_state()
results = self.run_dbt(['ls', '--resource-type', 'seed', '--select', 'result:error', '--state', './state'], expect_pass=True)
assert len(results) == 1
assert results[0] == 'test.seed'
results = self.run_dbt(['ls', '--select', 'result:error', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.seed'
results = self.run_dbt(['ls', '--select', 'result:error+', '--state', './state'])
assert len(results) == 7
assert set(results) == {'test.seed', 'test.table_model', 'test.view_model', 'test.ephemeral_model', 'test.not_null_view_model_id', 'test.unique_view_model_id', 'exposure:test.my_exposure'}
with open('seeds/seed.csv') as fp:
fp.readline()
newline = fp.newlines
with open('seeds/seed.csv', 'a') as fp:
# assume each line is ~2 bytes + len(name)
target_size = 1*1024*1024
line_size = 64
num_lines = target_size // line_size
maxlines = num_lines + 4
for idx in range(4, maxlines):
value = ''.join(random.choices(string.ascii_letters, k=62))
fp.write(f'{idx},{value}{newline}')
shutil.rmtree('./state')
self.run_dbt(['seed'], expect_pass=False)
self.copy_state()
results = self.run_dbt(['ls', '--resource-type', 'seed', '--select', 'result:error', '--state', './state'], expect_pass=True)
assert len(results) == 1
assert results[0] == 'test.seed'
results = self.run_dbt(['ls', '--select', 'result:error', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.seed'
results = self.run_dbt(['ls', '--select', 'result:error+', '--state', './state'])
assert len(results) == 7
assert set(results) == {'test.seed', 'test.table_model', 'test.view_model', 'test.ephemeral_model', 'test.not_null_view_model_id', 'test.unique_view_model_id', 'exposure:test.my_exposure'}
@use_profile('postgres')
def test_postgres_build_run_results_state(self):
results = self.run_dbt(['build', '--select', 'result:error', '--state', './state'])
assert len(results) == 0
with open('models/view_model.sql') as fp:
fp.readline()
newline = fp.newlines
with open('models/view_model.sql', 'w') as fp:
fp.write(newline)
fp.write("select * from forced_error")
fp.write(newline)
self.rebuild_run_dbt(expect_pass=False)
results = self.run_dbt(['build', '--select', 'result:error', '--state', './state'], expect_pass=False)
assert len(results) == 3
nodes = set([elem.node.name for elem in results])
assert nodes == {'view_model', 'not_null_view_model_id','unique_view_model_id'}
results = self.run_dbt(['ls', '--select', 'result:error', '--state', './state'])
assert len(results) == 3
assert set(results) == {'test.view_model', 'test.not_null_view_model_id', 'test.unique_view_model_id'}
results = self.run_dbt(['build', '--select', 'result:error+', '--state', './state'], expect_pass=False)
assert len(results) == 4
nodes = set([elem.node.name for elem in results])
assert nodes == {'table_model','view_model', 'not_null_view_model_id','unique_view_model_id'}
results = self.run_dbt(['ls', '--select', 'result:error+', '--state', './state'])
assert len(results) == 6 # includes exposure
assert set(results) == {'test.table_model', 'test.view_model', 'test.ephemeral_model', 'test.not_null_view_model_id', 'test.unique_view_model_id', 'exposure:test.my_exposure'}
# test failure on build tests
# fail the unique test
with open('models/view_model.sql', 'w') as fp:
fp.write(newline)
fp.write("select 1 as id union all select 1 as id")
fp.write(newline)
self.rebuild_run_dbt(expect_pass=False)
results = self.run_dbt(['build', '--select', 'result:fail', '--state', './state'], expect_pass=False)
assert len(results) == 1
assert results[0].node.name == 'unique_view_model_id'
results = self.run_dbt(['ls', '--select', 'result:fail', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.unique_view_model_id'
results = self.run_dbt(['build', '--select', 'result:fail+', '--state', './state'], expect_pass=False)
assert len(results) == 2
nodes = set([elem.node.name for elem in results])
assert nodes == {'table_model', 'unique_view_model_id'}
results = self.run_dbt(['ls', '--select', 'result:fail+', '--state', './state'])
assert len(results) == 1
assert set(results) == {'test.unique_view_model_id'}
# change the unique test severity from error to warn and reuse the same view_model.sql changes above
f = open('models/schema.yml', 'r')
filedata = f.read()
f.close()
newdata = filedata.replace('error','warn')
f = open('models/schema.yml', 'w')
f.write(newdata)
f.close()
self.rebuild_run_dbt(expect_pass=True)
results = self.run_dbt(['build', '--select', 'result:warn', '--state', './state'], expect_pass=True)
assert len(results) == 1
assert results[0].node.name == 'unique_view_model_id'
results = self.run_dbt(['ls', '--select', 'result:warn', '--state', './state'])
assert len(results) == 1
assert results[0] == 'test.unique_view_model_id'
results = self.run_dbt(['build', '--select', 'result:warn+', '--state', './state'], expect_pass=True)
assert len(results) == 2 # includes table_model to be run
nodes = set([elem.node.name for elem in results])
assert nodes == {'table_model', 'unique_view_model_id'}
results = self.run_dbt(['ls', '--select', 'result:warn+', '--state', './state'])
assert len(results) == 1
assert set(results) == {'test.unique_view_model_id'}
@use_profile('postgres')
def test_postgres_run_run_results_state(self):
results = self.run_dbt(['run', '--select', 'result:success', '--state', './state'], expect_pass=True)
assert len(results) == 2
assert results[0].node.name == 'view_model'
assert results[1].node.name == 'table_model'
# clear state and rerun upstream view model to test + operator
shutil.rmtree('./state')
self.run_dbt(['run', '--select', 'view_model'], expect_pass=True)
self.copy_state()
results = self.run_dbt(['run', '--select', 'result:success+', '--state', './state'], expect_pass=True)
assert len(results) == 2
assert results[0].node.name == 'view_model'
assert results[1].node.name == 'table_model'
# check we are starting from a place with 0 errors
results = self.run_dbt(['run', '--select', 'result:error', '--state', './state'])
assert len(results) == 0
# force an error in the view model to test error and skipped states
with open('models/view_model.sql') as fp:
fp.readline()
newline = fp.newlines
with open('models/view_model.sql', 'w') as fp:
fp.write(newline)
fp.write("select * from forced_error")
fp.write(newline)
shutil.rmtree('./state')
self.run_dbt(['run'], expect_pass=False)
self.copy_state()
# test single result selector on error
results = self.run_dbt(['run', '--select', 'result:error', '--state', './state'], expect_pass=False)
assert len(results) == 1
assert results[0].node.name == 'view_model'
# test + operator selection on error
results = self.run_dbt(['run', '--select', 'result:error+', '--state', './state'], expect_pass=False)
assert len(results) == 2
assert results[0].node.name == 'view_model'
assert results[1].node.name == 'table_model'
# single result selector on skipped. Expect this to pass because the underlying view was already defined above
results = self.run_dbt(['run', '--select', 'result:skipped', '--state', './state'], expect_pass=True)
assert len(results) == 1
assert results[0].node.name == 'table_model'
# add a downstream model that depends on table_model for skipped+ selector
with open('models/table_model_downstream.sql', 'w') as fp:
fp.write("select * from {{ref('table_model')}}")
shutil.rmtree('./state')
self.run_dbt(['run'], expect_pass=False)
self.copy_state()
results = self.run_dbt(['run', '--select', 'result:skipped+', '--state', './state'], expect_pass=True)
assert len(results) == 2
assert results[0].node.name == 'table_model'
assert results[1].node.name == 'table_model_downstream'
@use_profile('postgres')
def test_postgres_test_run_results_state(self):
# run passed nodes
results = self.run_dbt(['test', '--select', 'result:pass', '--state', './state'], expect_pass=True)
assert len(results) == 2
nodes = set([elem.node.name for elem in results])
assert nodes == {'unique_view_model_id', 'not_null_view_model_id'}
# run passed nodes with + operator
results = self.run_dbt(['test', '--select', 'result:pass+', '--state', './state'], expect_pass=True)
assert len(results) == 2
nodes = set([elem.node.name for elem in results])
assert nodes == {'unique_view_model_id', 'not_null_view_model_id'}
# update view model to generate a failure case
os.remove('./models/view_model.sql')
with open('models/view_model.sql', 'w') as fp:
fp.write("select 1 as id union all select 1 as id")
self.rebuild_run_dbt(expect_pass=False)
# test with failure selector
results = self.run_dbt(['test', '--select', 'result:fail', '--state', './state'], expect_pass=False)
assert len(results) == 1
assert results[0].node.name == 'unique_view_model_id'
# test with failure selector and + operator
results = self.run_dbt(['test', '--select', 'result:fail+', '--state', './state'], expect_pass=False)
assert len(results) == 1
assert results[0].node.name == 'unique_view_model_id'
# change the unique test severity from error to warn and reuse the same view_model.sql changes above
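# the 'r+' block below rewrites the file in place: read everything, rewind with seek(0), write the new contents, and truncate() any leftover tail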
with open('models/schema.yml', 'r+') as f:
filedata = f.read()
newdata = filedata.replace('error','warn')
f.seek(0)
f.write(newdata)
f.truncate()
# rebuild - expect_pass = True because we changed the error to a warning this time around
self.rebuild_run_dbt(expect_pass=True)
# test with warn selector
results = self.run_dbt(['test', '--select', 'result:warn', '--state', './state'], expect_pass=True)
assert len(results) == 1
assert results[0].node.name == 'unique_view_model_id'
# test with warn selector and + operator
results = self.run_dbt(['test', '--select', 'result:warn+', '--state', './state'], expect_pass=True)
assert len(results) == 1
assert results[0].node.name == 'unique_view_model_id'
@use_profile('postgres')
def test_postgres_concurrent_selectors_run_run_results_state(self):
results = self.run_dbt(['run', '--select', 'state:modified+', 'result:error+', '--state', './state'])
assert len(results) == 0
# force an error on a dbt model
with open('models/view_model.sql') as fp:
fp.readline()
newline = fp.newlines
with open('models/view_model.sql', 'w') as fp:
fp.write(newline)
fp.write("select * from forced_error")
fp.write(newline)
shutil.rmtree('./state')
self.run_dbt(['run'], expect_pass=False)
self.copy_state()
# modify another dbt model
with open('models/table_model_modified_example.sql', 'w') as fp:
fp.write(newline)
fp.write("select * from forced_error")
fp.write(newline)
results = self.run_dbt(['run', '--select', 'state:modified+', 'result:error+', '--state', './state'], expect_pass=False)
assert len(results) == 3
nodes = set([elem.node.name for elem in results])
assert nodes == {'view_model', 'table_model_modified_example', 'table_model'}
@use_profile('postgres')
def test_postgres_concurrent_selectors_test_run_results_state(self):
# create failure test case for result:fail selector
os.remove('./models/view_model.sql')
with open('./models/view_model.sql', 'w') as f:
f.write('select 1 as id union all select 1 as id union all select null as id')
# run dbt build again to trigger test errors
self.rebuild_run_dbt(expect_pass=False)
# get the failures from the run results, excluding the not_null test so only the unique test is selected
results = self.run_dbt(['test', '--select', 'result:fail', '--exclude', 'not_null_view_model_id', '--state', './state'], expect_pass=False)
assert len(results) == 1
nodes = set([elem.node.name for elem in results])
assert nodes == {'unique_view_model_id'}
@use_profile('postgres')
def test_postgres_concurrent_selectors_build_run_results_state(self):
results = self.run_dbt(['build', '--select', 'state:modified+', 'result:error+', '--state', './state'])
assert len(results) == 0
# force an error on a dbt model
with open('models/view_model.sql') as fp:
fp.readline()
newline = fp.newlines
with open('models/view_model.sql', 'w') as fp:
fp.write(newline)
fp.write("select * from forced_error")
fp.write(newline)
self.rebuild_run_dbt(expect_pass=False)
# modify another dbt model
with open('models/table_model_modified_example.sql', 'w') as fp:
fp.write(newline)
fp.write("select * from forced_error")
fp.write(newline)
results = self.run_dbt(['build', '--select', 'state:modified+', 'result:error+', '--state', './state'], expect_pass=False)
assert len(results) == 5
nodes = set([elem.node.name for elem in results])
assert nodes == {'table_model_modified_example', 'view_model', 'table_model', 'not_null_view_model_id', 'unique_view_model_id'}
# create failure test case for result:fail selector
os.remove('./models/view_model.sql')
with open('./models/view_model.sql', 'w') as f:
f.write('select 1 as id union all select 1 as id')
# create error model case for result:error selector
with open('./models/error_model.sql', 'w') as f:
f.write('select 1 as id from not_exists')
# create something downstream from the error model to rerun
with open('./models/downstream_of_error_model.sql', 'w') as f:
f.write('select * from {{ ref("error_model") }} )')
# regenerate build state
self.rebuild_run_dbt(expect_pass=False)
# modify model again to trigger the state:modified selector
with open('models/table_model_modified_example.sql', 'w') as fp:
fp.write(newline)
fp.write("select * from forced_another_error")
fp.write(newline)
results = self.run_dbt(['build', '--select', 'state:modified+', 'result:error+', 'result:fail+', '--state', './state'], expect_pass=False)
assert len(results) == 5
nodes = set([elem.node.name for elem in results])
assert nodes == {'error_model', 'downstream_of_error_model', 'table_model_modified_example', 'table_model', 'unique_view_model_id'}

View File

@@ -1 +1 @@
-version = "1.4.0b1"
+version = "1.4.0rc2"

View File

@@ -20,7 +20,7 @@ except ImportError:
 package_name = "dbt-tests-adapter"
-package_version = "1.4.0b1"
+package_version = "1.4.0rc2"
 description = """The dbt adapter tests for adapter plugins"""
 this_directory = os.path.abspath(os.path.dirname(__file__))

View File

@@ -0,0 +1,101 @@
seed_csv = """id,name
1,Alice
2,Bob
"""
table_model_sql = """
{{ config(materialized='table') }}
select * from {{ ref('ephemeral_model') }}
-- establish a macro dependency to trigger state:modified.macros
-- depends on: {{ my_macro() }}
"""
changed_table_model_sql = """
{{ config(materialized='table') }}
select 1 as fun
"""
view_model_sql = """
select * from {{ ref('seed') }}
-- establish a macro dependency that trips infinite recursion if not handled
-- depends on: {{ my_infinitely_recursive_macro() }}
"""
changed_view_model_sql = """
select * from no.such.table
"""
ephemeral_model_sql = """
{{ config(materialized='ephemeral') }}
select * from {{ ref('view_model') }}
"""
changed_ephemeral_model_sql = """
{{ config(materialized='ephemeral') }}
select * from no.such.table
"""
schema_yml = """
version: 2
models:
- name: view_model
columns:
- name: id
tests:
- unique:
severity: error
- not_null
- name: name
"""
exposures_yml = """
version: 2
exposures:
- name: my_exposure
type: application
depends_on:
- ref('view_model')
owner:
email: test@example.com
"""
macros_sql = """
{% macro my_macro() %}
{% do log('in a macro') %}
{% endmacro %}
"""
infinite_macros_sql = """
{# trigger infinite recursion if not handled #}
{% macro my_infinitely_recursive_macro() %}
{{ return(adapter.dispatch('my_infinitely_recursive_macro')()) }}
{% endmacro %}
{% macro default__my_infinitely_recursive_macro() %}
{% if unmet_condition %}
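{# unmet_condition is never defined, and an undefined variable is falsy in Jinja, so this branch never executes #}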
{{ my_infinitely_recursive_macro() }}
{% else %}
{{ return('') }}
{% endif %}
{% endmacro %}
"""
snapshot_sql = """
{% snapshot my_cool_snapshot %}
{{
config(
target_database=database,
target_schema=schema,
unique_key='id',
strategy='check',
check_cols=['id'],
)
}}
select * from {{ ref('view_model') }}
{% endsnapshot %}
"""

View File

@@ -0,0 +1,273 @@
import json
import os
import shutil
from copy import deepcopy
import pytest
from dbt.tests.util import run_dbt, write_file, rm_file
from dbt.exceptions import DbtRuntimeError
from tests.functional.defer_state.fixtures import (
seed_csv,
table_model_sql,
changed_table_model_sql,
view_model_sql,
changed_view_model_sql,
ephemeral_model_sql,
changed_ephemeral_model_sql,
schema_yml,
exposures_yml,
macros_sql,
infinite_macros_sql,
snapshot_sql,
)
class BaseDeferState:
@pytest.fixture(scope="class")
def models(self):
return {
"table_model.sql": table_model_sql,
"view_model.sql": view_model_sql,
"ephemeral_model.sql": ephemeral_model_sql,
"schema.yml": schema_yml,
"exposures.yml": exposures_yml,
}
@pytest.fixture(scope="class")
def macros(self):
return {
"macros.sql": macros_sql,
"infinite_macros.sql": infinite_macros_sql,
}
@pytest.fixture(scope="class")
def seeds(self):
return {
"seed.csv": seed_csv,
}
@pytest.fixture(scope="class")
def snapshots(self):
return {
"snapshot.sql": snapshot_sql,
}
@pytest.fixture(scope="class")
def other_schema(self, unique_schema):
return unique_schema + "_other"
@property
def project_config_update(self):
return {
"seeds": {
"test": {
"quote_columns": False,
}
}
}
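# the "otherschema" target points at a second schema, simulating a separate environment for the --defer and --favor-state runs below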
@pytest.fixture(scope="class")
def profiles_config_update(self, dbt_profile_target, unique_schema, other_schema):
outputs = {"default": dbt_profile_target, "otherschema": deepcopy(dbt_profile_target)}
outputs["default"]["schema"] = unique_schema
outputs["otherschema"]["schema"] = other_schema
return {"test": {"outputs": outputs, "target": "default"}}
def copy_state(self):
if not os.path.exists("state"):
os.makedirs("state")
shutil.copyfile("target/manifest.json", "state/manifest.json")
def run_and_save_state(self):
results = run_dbt(["seed"])
assert len(results) == 1
assert not any(r.node.deferred for r in results)
results = run_dbt(["run"])
assert len(results) == 2
assert not any(r.node.deferred for r in results)
results = run_dbt(["test"])
assert len(results) == 2
# copy files
self.copy_state()
class TestDeferStateUnsupportedCommands(BaseDeferState):
def test_unsupported_commands(self, project):
# make sure these commands don"t work with --defer
with pytest.raises(SystemExit):
run_dbt(["seed", "--defer"])
def test_no_state(self, project):
# no "state" files present, snapshot fails
with pytest.raises(DbtRuntimeError):
run_dbt(["snapshot", "--state", "state", "--defer"])
class TestRunCompileState(BaseDeferState):
def test_run_and_compile_defer(self, project):
self.run_and_save_state()
# defer test, it succeeds
results = run_dbt(["compile", "--state", "state", "--defer"])
assert len(results.results) == 6
assert results.results[0].node.name == "seed"
class TestSnapshotState(BaseDeferState):
def test_snapshot_state_defer(self, project):
self.run_and_save_state()
# snapshot succeeds without --defer
run_dbt(["snapshot"])
# copy files
self.copy_state()
# defer test, it succeeds
run_dbt(["snapshot", "--state", "state", "--defer"])
# favor_state test, it succeeds
run_dbt(["snapshot", "--state", "state", "--defer", "--favor-state"])
class TestRunDeferState(BaseDeferState):
def test_run_and_defer(self, project, unique_schema, other_schema):
project.create_test_schema(other_schema)
self.run_and_save_state()
# test tests first, because run will change things
# no state, wrong schema, failure.
run_dbt(["test", "--target", "otherschema"], expect_pass=False)
# test generate docs
# no state, wrong schema, empty nodes
catalog = run_dbt(["docs", "generate", "--target", "otherschema"])
assert not catalog.nodes
# no state, run also fails
run_dbt(["run", "--target", "otherschema"], expect_pass=False)
# defer test, it succeeds
results = run_dbt(
["test", "-m", "view_model+", "--state", "state", "--defer", "--target", "otherschema"]
)
# defer docs generate with state; the catalog should reference the schema from the earlier healthy run
catalog = run_dbt(
[
"docs",
"generate",
"-m",
"view_model+",
"--state",
"state",
"--defer",
"--target",
"otherschema",
]
)
assert other_schema not in catalog.nodes["seed.test.seed"].metadata.schema
assert unique_schema in catalog.nodes["seed.test.seed"].metadata.schema
# with state it should work though
results = run_dbt(
["run", "-m", "view_model", "--state", "state", "--defer", "--target", "otherschema"]
)
assert other_schema not in results[0].node.compiled_code
assert unique_schema in results[0].node.compiled_code
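# the seed was only ever built in the default schema, so the deferred run marks it as deferred in the manifest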
with open("target/manifest.json") as fp:
data = json.load(fp)
assert data["nodes"]["seed.test.seed"]["deferred"]
assert len(results) == 1
class TestRunDeferStateChangedModel(BaseDeferState):
def test_run_defer_state_changed_model(self, project):
self.run_and_save_state()
# change "view_model"
write_file(changed_view_model_sql, "models", "view_model.sql")
# the sql here is just wrong, so it should fail
run_dbt(
["run", "-m", "view_model", "--state", "state", "--defer", "--target", "otherschema"],
expect_pass=False,
)
# but this should work since we just use the old happy model
run_dbt(
["run", "-m", "table_model", "--state", "state", "--defer", "--target", "otherschema"],
expect_pass=True,
)
# change "ephemeral_model"
write_file(changed_ephemeral_model_sql, "models", "ephemeral_model.sql")
# this should fail because the table model refs a broken ephemeral
# model, which it should see
run_dbt(
["run", "-m", "table_model", "--state", "state", "--defer", "--target", "otherschema"],
expect_pass=False,
)
class TestRunDeferStateIFFNotExists(BaseDeferState):
def test_run_defer_iff_not_exists(self, project, unique_schema, other_schema):
project.create_test_schema(other_schema)
self.run_and_save_state()
results = run_dbt(["seed", "--target", "otherschema"])
assert len(results) == 1
results = run_dbt(["run", "--state", "state", "--defer", "--target", "otherschema"])
assert len(results) == 2
# because the seed now exists in our "other" schema, we should prefer it over the one
# available from state
assert other_schema in results[0].node.compiled_code
# this time with --favor-state: even though the seed now exists in our "other" schema,
# we should still favor the one available from state
results = run_dbt(
["run", "--state", "state", "--defer", "--favor-state", "--target", "otherschema"]
)
assert len(results) == 2
assert other_schema not in results[0].node.compiled_code
class TestDeferStateDeletedUpstream(BaseDeferState):
def test_run_defer_deleted_upstream(self, project, unique_schema, other_schema):
project.create_test_schema(other_schema)
self.run_and_save_state()
# remove "ephemeral_model" + change "table_model"
rm_file("models", "ephemeral_model.sql")
write_file(changed_table_model_sql, "models", "table_model.sql")
# ephemeral_model is now gone. previously this caused a
# keyerror (dbt#2875), now it should pass
run_dbt(
["run", "-m", "view_model", "--state", "state", "--defer", "--target", "otherschema"],
expect_pass=True,
)
# despite deferral, we should use models just created in our schema
results = run_dbt(["test", "--state", "state", "--defer", "--target", "otherschema"])
assert other_schema in results[0].node.compiled_code
# this time with --favor-state: prefer the models in the "other" schema, even though they exist in ours
run_dbt(
[
"run",
"-m",
"view_model",
"--state",
"state",
"--defer",
"--favor-state",
"--target",
"otherschema",
],
expect_pass=True,
)
results = run_dbt(["test", "--state", "state", "--defer", "--favor-state"])
assert other_schema not in results[0].node.compiled_code

View File

@@ -0,0 +1,263 @@
import os
import random
import shutil
import string
import pytest
from dbt.tests.util import run_dbt, update_config_file, write_file
from dbt.exceptions import CompilationError
from tests.functional.defer_state.fixtures import (
seed_csv,
table_model_sql,
view_model_sql,
ephemeral_model_sql,
schema_yml,
exposures_yml,
macros_sql,
infinite_macros_sql,
)
class BaseModifiedState:
@pytest.fixture(scope="class")
def models(self):
return {
"table_model.sql": table_model_sql,
"view_model.sql": view_model_sql,
"ephemeral_model.sql": ephemeral_model_sql,
"schema.yml": schema_yml,
"exposures.yml": exposures_yml,
}
@pytest.fixture(scope="class")
def macros(self):
return {
"macros.sql": macros_sql,
"infinite_macros.sql": infinite_macros_sql,
}
@pytest.fixture(scope="class")
def seeds(self):
return {
"seed.csv": seed_csv,
}
@property
def project_config_update(self):
return {
"seeds": {
"test": {
"quote_columns": False,
}
}
}
def copy_state(self):
if not os.path.exists("state"):
os.makedirs("state")
shutil.copyfile("target/manifest.json", "state/manifest.json")
def run_and_save_state(self):
run_dbt(["seed"])
run_dbt(["run"])
self.copy_state()
class TestChangedSeedContents(BaseModifiedState):
def test_changed_seed_contents_state(self, project):
self.run_and_save_state()
results = run_dbt(
["ls", "--resource-type", "seed", "--select", "state:modified", "--state", "./state"],
expect_pass=True,
)
assert len(results) == 0
# add a new row to the seed
changed_seed_contents = seed_csv + "\n" + "3,carl"
write_file(changed_seed_contents, "seeds", "seed.csv")
results = run_dbt(
["ls", "--resource-type", "seed", "--select", "state:modified", "--state", "./state"]
)
assert len(results) == 1
assert results[0] == "test.seed"
results = run_dbt(["ls", "--select", "state:modified", "--state", "./state"])
assert len(results) == 1
assert results[0] == "test.seed"
results = run_dbt(["ls", "--select", "state:modified+", "--state", "./state"])
assert len(results) == 7
assert set(results) == {
"test.seed",
"test.table_model",
"test.view_model",
"test.ephemeral_model",
"test.not_null_view_model_id",
"test.unique_view_model_id",
"exposure:test.my_exposure",
}
shutil.rmtree("./state")
self.copy_state()
# make a very big seed
# assume each line is ~2 bytes + len(name)
target_size = 1 * 1024 * 1024
line_size = 64
num_lines = target_size // line_size
maxlines = num_lines + 4
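# ~16K lines of roughly 64 bytes apiece pushes the seed past 1MB, the threshold where dbt switches to path-based change detection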
seed_lines = [seed_csv]
for idx in range(4, maxlines):
value = "".join(random.choices(string.ascii_letters, k=62))
seed_lines.append(f"{idx},{value}")
seed_contents = "\n".join(seed_lines)
write_file(seed_contents, "seeds", "seed.csv")
# now if we run again, we should get a warning
results = run_dbt(
["ls", "--resource-type", "seed", "--select", "state:modified", "--state", "./state"]
)
assert len(results) == 1
assert results[0] == "test.seed"
with pytest.raises(CompilationError) as exc:
run_dbt(
[
"--warn-error",
"ls",
"--resource-type",
"seed",
"--select",
"state:modified",
"--state",
"./state",
]
)
assert ">1MB" in str(exc.value)
shutil.rmtree("./state")
self.copy_state()
# once it"s in path mode, we don"t mark it as modified if it changes
write_file(seed_contents + "\n1,test", "seeds", "seed.csv")
results = run_dbt(
["ls", "--resource-type", "seed", "--select", "state:modified", "--state", "./state"],
expect_pass=True,
)
assert len(results) == 0
class TestChangedSeedConfig(BaseModifiedState):
def test_changed_seed_config(self, project):
self.run_and_save_state()
results = run_dbt(
["ls", "--resource-type", "seed", "--select", "state:modified", "--state", "./state"],
expect_pass=True,
)
assert len(results) == 0
update_config_file({"seeds": {"test": {"quote_columns": False}}}, "dbt_project.yml")
# quoting change -> seed changed
results = run_dbt(
["ls", "--resource-type", "seed", "--select", "state:modified", "--state", "./state"]
)
assert len(results) == 1
assert results[0] == "test.seed"
class TestUnrenderedConfigSame(BaseModifiedState):
def test_unrendered_config_same(self, project):
self.run_and_save_state()
results = run_dbt(
["ls", "--resource-type", "model", "--select", "state:modified", "--state", "./state"],
expect_pass=True,
)
assert len(results) == 0
# although this is the default value, dbt will recognize it as a change
# for previously-unconfigured models, because it"s been explicitly set
update_config_file({"models": {"test": {"materialized": "view"}}}, "dbt_project.yml")
results = run_dbt(
["ls", "--resource-type", "model", "--select", "state:modified", "--state", "./state"]
)
assert len(results) == 1
assert results[0] == "test.view_model"
class TestChangedModelContents(BaseModifiedState):
def test_changed_model_contents(self, project):
self.run_and_save_state()
results = run_dbt(["run", "--models", "state:modified", "--state", "./state"])
assert len(results) == 0
table_model_update = """
{{ config(materialized="table") }}
select * from {{ ref("seed") }}
"""
write_file(table_model_update, "models", "table_model.sql")
results = run_dbt(["run", "--models", "state:modified", "--state", "./state"])
assert len(results) == 1
assert results[0].node.name == "table_model"
class TestNewMacro(BaseModifiedState):
def test_new_macro(self, project):
self.run_and_save_state()
new_macro = """
{% macro my_other_macro() %}
{% endmacro %}
"""
# add a new macro to a new file
write_file(new_macro, "macros", "second_macro.sql")
results = run_dbt(["run", "--models", "state:modified", "--state", "./state"])
assert len(results) == 0
os.remove("macros/second_macro.sql")
# add a new macro to the existing file
with open("macros/macros.sql", "a") as fp:
fp.write(new_macro)
results = run_dbt(["run", "--models", "state:modified", "--state", "./state"])
assert len(results) == 0
class TestChangedMacroContents(BaseModifiedState):
def test_changed_macro_contents(self, project):
self.run_and_save_state()
# modify an existing macro
updated_macro = """
{% macro my_macro() %}
{% do log("in a macro", info=True) %}
{% endmacro %}
"""
write_file(updated_macro, "macros", "macros.sql")
# table_model calls this macro
results = run_dbt(["run", "--models", "state:modified", "--state", "./state"])
assert len(results) == 1
class TestChangedExposure(BaseModifiedState):
def test_changed_exposure(self, project):
self.run_and_save_state()
# add an "owner.name" to existing exposure
updated_exposure = exposures_yml + "\n name: John Doe\n"
write_file(updated_exposure, "models", "exposures.yml")
results = run_dbt(["run", "--models", "+state:modified", "--state", "./state"])
assert len(results) == 1
assert results[0].node.name == "view_model"

View File

@@ -0,0 +1,494 @@
import os
import shutil
import pytest
from dbt.tests.util import run_dbt, write_file
from tests.functional.defer_state.fixtures import (
seed_csv,
table_model_sql,
view_model_sql,
ephemeral_model_sql,
schema_yml,
exposures_yml,
macros_sql,
infinite_macros_sql,
)
class BaseRunResultsState:
@pytest.fixture(scope="class")
def models(self):
return {
"table_model.sql": table_model_sql,
"view_model.sql": view_model_sql,
"ephemeral_model.sql": ephemeral_model_sql,
"schema.yml": schema_yml,
"exposures.yml": exposures_yml,
}
@pytest.fixture(scope="class")
def macros(self):
return {
"macros.sql": macros_sql,
"infinite_macros.sql": infinite_macros_sql,
}
@pytest.fixture(scope="class")
def seeds(self):
return {
"seed.csv": seed_csv,
}
@property
def project_config_update(self):
return {
"seeds": {
"test": {
"quote_columns": False,
}
}
}
def clear_state(self):
shutil.rmtree("./state")
def copy_state(self):
if not os.path.exists("state"):
os.makedirs("state")
shutil.copyfile("target/manifest.json", "state/manifest.json")
shutil.copyfile("target/run_results.json", "state/run_results.json")
def run_and_save_state(self):
run_dbt(["build"])
self.copy_state()
def rebuild_run_dbt(self, expect_pass=True):
self.clear_state()
run_dbt(["build"], expect_pass=expect_pass)
self.copy_state()
def update_view_model_bad_sql(self):
# update view model to generate a failure case
not_unique_sql = "select * from forced_error"
write_file(not_unique_sql, "models", "view_model.sql")
def update_view_model_failing_tests(self, with_dupes=True, with_nulls=False):
# test failure on build tests
# fail the unique test
select_1 = "select 1 as id"
select_stmts = [select_1]
if with_dupes:
select_stmts.append(select_1)
if with_nulls:
select_stmts.append("select null as id")
failing_tests_sql = " union all ".join(select_stmts)
write_file(failing_tests_sql, "models", "view_model.sql")
def update_unique_test_severity_warn(self):
# change the unique test severity from error to warn and reuse the same view_model.sql changes above
new_config = schema_yml.replace("error", "warn")
write_file(new_config, "models", "schema.yml")
class TestSeedRunResultsState(BaseRunResultsState):
def test_seed_run_results_state(self, project):
self.run_and_save_state()
self.clear_state()
run_dbt(["seed"])
self.copy_state()
results = run_dbt(
["ls", "--resource-type", "seed", "--select", "result:success", "--state", "./state"],
expect_pass=True,
)
assert len(results) == 1
assert results[0] == "test.seed"
results = run_dbt(["ls", "--select", "result:success", "--state", "./state"])
assert len(results) == 1
assert results[0] == "test.seed"
results = run_dbt(["ls", "--select", "result:success+", "--state", "./state"])
assert len(results) == 7
assert set(results) == {
"test.seed",
"test.table_model",
"test.view_model",
"test.ephemeral_model",
"test.not_null_view_model_id",
"test.unique_view_model_id",
"exposure:test.my_exposure",
}
# add a new faulty row to the seed
changed_seed_contents = seed_csv + "\n" + "\\\3,carl"
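# "\\\3" expands to a literal backslash plus an ASCII control byte, giving the seed a row the loader cannot handle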
write_file(changed_seed_contents, "seeds", "seed.csv")
self.clear_state()
run_dbt(["seed"], expect_pass=False)
self.copy_state()
results = run_dbt(
["ls", "--resource-type", "seed", "--select", "result:error", "--state", "./state"],
expect_pass=True,
)
assert len(results) == 1
assert results[0] == "test.seed"
results = run_dbt(["ls", "--select", "result:error", "--state", "./state"])
assert len(results) == 1
assert results[0] == "test.seed"
results = run_dbt(["ls", "--select", "result:error+", "--state", "./state"])
assert len(results) == 7
assert set(results) == {
"test.seed",
"test.table_model",
"test.view_model",
"test.ephemeral_model",
"test.not_null_view_model_id",
"test.unique_view_model_id",
"exposure:test.my_exposure",
}
class TestBuildRunResultsState(BaseRunResultsState):
def test_build_run_results_state(self, project):
self.run_and_save_state()
results = run_dbt(["build", "--select", "result:error", "--state", "./state"])
assert len(results) == 0
self.update_view_model_bad_sql()
self.rebuild_run_dbt(expect_pass=False)
results = run_dbt(
["build", "--select", "result:error", "--state", "./state"], expect_pass=False
)
assert len(results) == 3
nodes = set([elem.node.name for elem in results])
assert nodes == {"view_model", "not_null_view_model_id", "unique_view_model_id"}
results = run_dbt(["ls", "--select", "result:error", "--state", "./state"])
assert len(results) == 3
assert set(results) == {
"test.view_model",
"test.not_null_view_model_id",
"test.unique_view_model_id",
}
results = run_dbt(
["build", "--select", "result:error+", "--state", "./state"], expect_pass=False
)
assert len(results) == 4
nodes = set([elem.node.name for elem in results])
assert nodes == {
"table_model",
"view_model",
"not_null_view_model_id",
"unique_view_model_id",
}
results = run_dbt(["ls", "--select", "result:error+", "--state", "./state"])
assert len(results) == 6 # includes exposure
assert set(results) == {
"test.table_model",
"test.view_model",
"test.ephemeral_model",
"test.not_null_view_model_id",
"test.unique_view_model_id",
"exposure:test.my_exposure",
}
self.update_view_model_failing_tests()
self.rebuild_run_dbt(expect_pass=False)
results = run_dbt(
["build", "--select", "result:fail", "--state", "./state"], expect_pass=False
)
assert len(results) == 1
assert results[0].node.name == "unique_view_model_id"
results = run_dbt(["ls", "--select", "result:fail", "--state", "./state"])
assert len(results) == 1
assert results[0] == "test.unique_view_model_id"
results = run_dbt(
["build", "--select", "result:fail+", "--state", "./state"], expect_pass=False
)
assert len(results) == 2
nodes = set([elem.node.name for elem in results])
assert nodes == {"table_model", "unique_view_model_id"}
results = run_dbt(["ls", "--select", "result:fail+", "--state", "./state"])
assert len(results) == 1
assert set(results) == {"test.unique_view_model_id"}
self.update_unique_test_severity_warn()
self.rebuild_run_dbt(expect_pass=True)
results = run_dbt(
["build", "--select", "result:warn", "--state", "./state"], expect_pass=True
)
assert len(results) == 1
assert results[0].node.name == "unique_view_model_id"
results = run_dbt(["ls", "--select", "result:warn", "--state", "./state"])
assert len(results) == 1
assert results[0] == "test.unique_view_model_id"
results = run_dbt(
["build", "--select", "result:warn+", "--state", "./state"], expect_pass=True
)
assert len(results) == 2 # includes table_model to be run
nodes = set([elem.node.name for elem in results])
assert nodes == {"table_model", "unique_view_model_id"}
results = run_dbt(["ls", "--select", "result:warn+", "--state", "./state"])
assert len(results) == 1
assert set(results) == {"test.unique_view_model_id"}
class TestRunRunResultsState(BaseRunResultsState):
def test_run_run_results_state(self, project):
self.run_and_save_state()
results = run_dbt(
["run", "--select", "result:success", "--state", "./state"], expect_pass=True
)
assert len(results) == 2
assert results[0].node.name == "view_model"
assert results[1].node.name == "table_model"
# clear state and rerun upstream view model to test + operator
self.clear_state()
run_dbt(["run", "--select", "view_model"], expect_pass=True)
self.copy_state()
results = run_dbt(
["run", "--select", "result:success+", "--state", "./state"], expect_pass=True
)
assert len(results) == 2
assert results[0].node.name == "view_model"
assert results[1].node.name == "table_model"
# check we are starting from a place with 0 errors
results = run_dbt(["run", "--select", "result:error", "--state", "./state"])
assert len(results) == 0
self.update_view_model_bad_sql()
self.clear_state()
run_dbt(["run"], expect_pass=False)
self.copy_state()
# test single result selector on error
results = run_dbt(
["run", "--select", "result:error", "--state", "./state"], expect_pass=False
)
assert len(results) == 1
assert results[0].node.name == "view_model"
# test + operator selection on error
results = run_dbt(
["run", "--select", "result:error+", "--state", "./state"], expect_pass=False
)
assert len(results) == 2
assert results[0].node.name == "view_model"
assert results[1].node.name == "table_model"
# single result selector on skipped. Expect this to pass because the underlying view was already defined above
results = run_dbt(
["run", "--select", "result:skipped", "--state", "./state"], expect_pass=True
)
assert len(results) == 1
assert results[0].node.name == "table_model"
# add a downstream model that depends on table_model for skipped+ selector
downstream_model_sql = "select * from {{ref('table_model')}}"
write_file(downstream_model_sql, "models", "table_model_downstream.sql")
self.clear_state()
run_dbt(["run"], expect_pass=False)
self.copy_state()
results = run_dbt(
["run", "--select", "result:skipped+", "--state", "./state"], expect_pass=True
)
assert len(results) == 2
assert results[0].node.name == "table_model"
assert results[1].node.name == "table_model_downstream"
class TestTestRunResultsState(BaseRunResultsState):
def test_test_run_results_state(self, project):
self.run_and_save_state()
# run passed nodes
results = run_dbt(
["test", "--select", "result:pass", "--state", "./state"], expect_pass=True
)
assert len(results) == 2
nodes = set([elem.node.name for elem in results])
assert nodes == {"unique_view_model_id", "not_null_view_model_id"}
# run passed nodes with + operator
results = run_dbt(
["test", "--select", "result:pass+", "--state", "./state"], expect_pass=True
)
assert len(results) == 2
nodes = set([elem.node.name for elem in results])
assert nodes == {"unique_view_model_id", "not_null_view_model_id"}
self.update_view_model_failing_tests()
self.rebuild_run_dbt(expect_pass=False)
# test with failure selector
results = run_dbt(
["test", "--select", "result:fail", "--state", "./state"], expect_pass=False
)
assert len(results) == 1
assert results[0].node.name == "unique_view_model_id"
# test with failure selector and + operator
results = run_dbt(
["test", "--select", "result:fail+", "--state", "./state"], expect_pass=False
)
assert len(results) == 1
assert results[0].node.name == "unique_view_model_id"
self.update_unique_test_severity_warn()
# rebuild - expect_pass = True because we changed the error to a warning this time around
self.rebuild_run_dbt(expect_pass=True)
# test with warn selector
results = run_dbt(
["test", "--select", "result:warn", "--state", "./state"], expect_pass=True
)
assert len(results) == 1
assert results[0].node.name == "unique_view_model_id"
# test with warn selector and + operator
results = run_dbt(
["test", "--select", "result:warn+", "--state", "./state"], expect_pass=True
)
assert len(results) == 1
assert results[0].node.name == "unique_view_model_id"
class TestConcurrentSelectionRunResultsState(BaseRunResultsState):
def test_concurrent_selection_run_run_results_state(self, project):
self.run_and_save_state()
results = run_dbt(
["run", "--select", "state:modified+", "result:error+", "--state", "./state"]
)
assert len(results) == 0
self.update_view_model_bad_sql()
self.clear_state()
run_dbt(["run"], expect_pass=False)
self.copy_state()
# add a new failing dbt model
bad_sql = "select * from forced_error"
write_file(bad_sql, "models", "table_model_modified_example.sql")
results = run_dbt(
["run", "--select", "state:modified+", "result:error+", "--state", "./state"],
expect_pass=False,
)
assert len(results) == 3
nodes = set([elem.node.name for elem in results])
assert nodes == {"view_model", "table_model_modified_example", "table_model"}
class TestConcurrentSelectionTestRunResultsState(BaseRunResultsState):
def test_concurrent_selection_test_run_results_state(self, project):
self.run_and_save_state()
# create failure test case for result:fail selector
self.update_view_model_failing_tests(with_nulls=True)
# run dbt build again to trigger test errors
self.rebuild_run_dbt(expect_pass=False)
# get the failures from the run results, excluding the not_null test so only the unique test is selected
results = run_dbt(
[
"test",
"--select",
"result:fail",
"--exclude",
"not_null_view_model_id",
"--state",
"./state",
],
expect_pass=False,
)
assert len(results) == 1
nodes = set([elem.node.name for elem in results])
assert nodes == {"unique_view_model_id"}
class TestConcurrentSelectionBuildRunResultsState(BaseRunResultsState):
def test_concurrent_selectors_build_run_results_state(self, project):
self.run_and_save_state()
results = run_dbt(
["build", "--select", "state:modified+", "result:error+", "--state", "./state"]
)
assert len(results) == 0
self.update_view_model_bad_sql()
self.rebuild_run_dbt(expect_pass=False)
# add a new failing dbt model
bad_sql = "select * from forced_error"
write_file(bad_sql, "models", "table_model_modified_example.sql")
results = run_dbt(
["build", "--select", "state:modified+", "result:error+", "--state", "./state"],
expect_pass=False,
)
assert len(results) == 5
nodes = set([elem.node.name for elem in results])
assert nodes == {
"table_model_modified_example",
"view_model",
"table_model",
"not_null_view_model_id",
"unique_view_model_id",
}
self.update_view_model_failing_tests()
# create error model case for result:error selector
more_bad_sql = "select 1 as id from not_exists"
write_file(more_bad_sql, "models", "error_model.sql")
# create something downstream from the error model to rerun
downstream_model_sql = "select * from {{ ref('error_model') }} )"
write_file(downstream_model_sql, "models", "downstream_of_error_model.sql")
# regenerate build state
self.rebuild_run_dbt(expect_pass=False)
# modify model again to trigger the state:modified selector
bad_again_sql = "select * from forced_anothererror"
write_file(bad_again_sql, "models", "table_model_modified_example.sql")
results = run_dbt(
[
"build",
"--select",
"state:modified+",
"result:error+",
"result:fail+",
"--state",
"./state",
],
expect_pass=False,
)
assert len(results) == 5
nodes = set([elem.node.name for elem in results])
assert nodes == {
"error_model",
"downstream_of_error_model",
"table_model_modified_example",
"table_model",
"unique_view_model_id",
}

View File

@@ -177,7 +177,7 @@ sample_values = [
 BuildingCatalog(),
 DatabaseErrorRunningHook(hook_type=""),
 HooksRunning(num_hooks=0, hook_type=""),
-HookFinished(stat_line="", execution="", execution_time=0),
+FinishedRunningStats(stat_line="", execution="", execution_time=0),
 # I - Project parsing ======================
 ParseCmdOut(msg="testing"),