Mirror of https://github.com/dbt-labs/dbt-core
Synced 2025-12-21 15:41:28 +00:00

Compare commits: db-setup-w...v0.21.0 (51 commits)
| SHA1 |
|---|
| c009485de2 |
| 9cdc451bc8 |
| 4718dd3a1e |
| 49796d6e13 |
| ece3d2c105 |
| 52bedbad23 |
| eb079dd818 |
| 2a99431c8d |
| df5953a71d |
| 641b0fa365 |
| 8876afdb14 |
| a97a9c9942 |
| 0070cd99de |
| 78a1bbe3c7 |
| cde82fa2b1 |
| 05dea18b62 |
| d7177c7d89 |
| 35f0fea804 |
| 8953c7c533 |
| 76c59a5545 |
| 5c0a31b829 |
| 243bc3d41d |
| 67b594a950 |
| 2493c21649 |
| d3826e670f |
| 4b5b1696b7 |
| abb59ef14f |
| 3b7c2816b9 |
| 484517416f |
| 39447055d3 |
| 95cca277c9 |
| 96083dcaf5 |
| 75b4cf691b |
| 7c9171b00b |
| 34cec7c7b0 |
| db5caf97ae |
| 847046171e |
| 5dd37a9fb8 |
| a2bdd08d88 |
| 1807526d0a |
| 362770f5bd |
| af38f51041 |
| efc8ece12e |
| 7471f07431 |
| 6fa30d10ea |
| 35150f914f |
| b477be9eff |
| b67e877cc1 |
| 1c066cd680 |
| ec97b46caf |
| b5bb354929 |
@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 0.21.0b1
+current_version = 0.21.0
 parse = (?P<major>\d+)
 	\.(?P<minor>\d+)
 	\.(?P<patch>\d+)

CHANGELOG.md (41 lines changed)
@@ -1,12 +1,33 @@
-## dbt 0.21.0 (Release TBD)
+## dbt 0.21.0 (October 04, 2021)
+
+## dbt 0.21.0rc2 (September 27, 2021)
+
+### Fixes
+- Fix batching for large seeds on Snowflake ([#3941](https://github.com/dbt-labs/dbt/issues/3941), [#3942](https://github.com/dbt-labs/dbt/pull/3942))
+- Avoid infinite recursion in `state:modified.macros` check ([#3904](https://github.com/dbt-labs/dbt/issues/3904), [#3957](https://github.com/dbt-labs/dbt/pull/3957))
+- Cast log messages to strings before scrubbing of prefixed env vars ([#3971](https://github.com/dbt-labs/dbt/issues/3971), [#3972](https://github.com/dbt-labs/dbt/pull/3972))
+
+### Under the hood
+- Bump artifact schema versions for 0.21.0 ([#3945](https://github.com/dbt-labs/dbt/pull/3945))
+
+## dbt 0.21.0rc1 (September 20, 2021)

 ### Features

 - Make `--models` and `--select` synonyms, except for `ls` (to preserve existing behavior) ([#3210](https://github.com/dbt-labs/dbt/pull/3210), [#3791](https://github.com/dbt-labs/dbt/pull/3791))
 - Experimental parser now detects macro overrides of ref, source, and config builtins. ([#3866](https://github.com/dbt-labs/dbt/issues/3866), [#3877](https://github.com/dbt-labs/dbt/pull/3877))
 - Add connect_timeout profile configuration for Postgres and Redshift adapters. ([#3581](https://github.com/dbt-labs/dbt/issues/3581), [#3582](https://github.com/dbt-labs/dbt/pull/3582))
 - Enhance BigQuery copy materialization ([#3570](https://github.com/dbt-labs/dbt/issues/3570), [#3606](https://github.com/dbt-labs/dbt/pull/3606)):
   - to simplify config (default usage of `copy_materialization='table'` if it is not found in global or local config)
   - to let copy several source tables into a single target table at a time. ([Google doc reference](https://cloud.google.com/bigquery/docs/managing-tables#copying_multiple_source_tables))
 - Customize ls task JSON output by adding new flag `--output-keys` ([#3778](https://github.com/dbt-labs/dbt/issues/3778), [#3395](https://github.com/dbt-labs/dbt/issues/3395))
 - Add support for execution project on BigQuery through profile configuration ([#3707](https://github.com/dbt-labs/dbt/issues/3707), [#3708](https://github.com/dbt-labs/dbt/issues/3708))
 - Skip downstream nodes during the `build` task when a test fails. ([#3597](https://github.com/dbt-labs/dbt/issues/3597), [#3792](https://github.com/dbt-labs/dbt/pull/3792))
 - Added a default field in `selectors.yml` to allow the user to define a default selector ([#3448](https://github.com/dbt-labs/dbt/issues/3448), [#3875](https://github.com/dbt-labs/dbt/issues/3875), [#3892](https://github.com/dbt-labs/dbt/issues/3892))
 - Added timing and thread information to the sources.json artifact ([#3804](https://github.com/dbt-labs/dbt/issues/3804), [#3894](https://github.com/dbt-labs/dbt/pull/3894))
 - Update cli and rpc flags for the `build` task to align with other commands (`--resource-type`, `--store-failures`) ([#3596](https://github.com/dbt-labs/dbt/issues/3596), [#3884](https://github.com/dbt-labs/dbt/pull/3884))
 - Log tests that are not indirectly selected. Add `--greedy` flag to `test`, `list`, `build` and a `greedy` property in yaml selectors ([#3723](https://github.com/dbt-labs/dbt/pull/3723), [#3833](https://github.com/dbt-labs/dbt/pull/3833))

 ### Fixes
@@ -15,19 +36,18 @@
 - Fix issue when running the `deps` task after the `list` task in the RPC server ([#3846](https://github.com/dbt-labs/dbt/issues/3846), [#3848](https://github.com/dbt-labs/dbt/pull/3848), [#3850](https://github.com/dbt-labs/dbt/pull/3850))
 - Fix bug with initializing a dataclass that inherits from `typing.Protocol`, specifically for `dbt.config.profile.Profile` ([#3843](https://github.com/dbt-labs/dbt/issues/3843), [#3855](https://github.com/dbt-labs/dbt/pull/3855))
 - Introduce a macro, `get_where_subquery`, for tests that use the `where` config. Alias the filtering subquery as `dbt_subquery` instead of the resource identifier ([#3857](https://github.com/dbt-labs/dbt/issues/3857), [#3859](https://github.com/dbt-labs/dbt/issues/3859))

 ### Fixes

 - Use group by column_name in the accepted_values test for compatibility with most database engines ([#3905](https://github.com/dbt-labs/dbt/issues/3905), [#3906](https://github.com/dbt-labs/dbt/pull/3906))
 - Separated table vs view configuration for BigQuery, since some configuration is not possible to set for tables vs views. ([#3682](https://github.com/dbt-labs/dbt/issues/3682), [#3691](https://github.com/dbt-labs/dbt/pull/3691))

 ### Under the hood

 - Use GitHub Actions for CI ([#3688](https://github.com/dbt-labs/dbt/issues/3688), [#3669](https://github.com/dbt-labs/dbt/pull/3669))
 - Better dbt hub registry packages version logging that prompts the user for upgrades to relevant packages ([#3560](https://github.com/dbt-labs/dbt/issues/3560), [#3763](https://github.com/dbt-labs/dbt/issues/3763), [#3759](https://github.com/dbt-labs/dbt/pull/3759))
-- Allow the default seed macro's SQL parameter, `%s`, to be replaced by dispatching a new macro, `get_binding_char()`. This enables adapters with parameter marker characters such as `?` to not have to override `basic_load_csv_rows`. ([#3622](https://github.com/fishtown-analytics/dbt/issues/3622), [#3623](https://github.com/fishtown-analytics/dbt/pull/3623))
+- Allow the default seed macro's SQL parameter, `%s`, to be replaced by dispatching a new macro, `get_binding_char()`. This enables adapters with parameter marker characters such as `?` to not have to override `basic_load_csv_rows`. ([#3622](https://github.com/dbt-labs/dbt/issues/3622), [#3623](https://github.com/dbt-labs/dbt/pull/3623))
 - Alert users on package rename ([hub.getdbt.com#810](https://github.com/dbt-labs/hub.getdbt.com/issues/810), [#3825](https://github.com/dbt-labs/dbt/pull/3825))
 - Add `adapter_unique_id` to invocation context in anonymous usage tracking, to better understand dbt adoption ([#3713](https://github.com/dbt-labs/dbt/issues/3713), [#3796](https://github.com/dbt-labs/dbt/issues/3796))
 - Specify `macro_namespace = 'dbt'` for all dispatched macros in the global project, making it possible to dispatch to macro implementations defined in packages. Dispatch `generate_schema_name` and `generate_alias_name` ([#3456](https://github.com/dbt-labs/dbt/issues/3456), [#3851](https://github.com/dbt-labs/dbt/issues/3851))
 - Retry transient GitHub failures during download ([#3546](https://github.com/dbt-labs/dbt/pull/3546), [#3729](https://github.com/dbt-labs/dbt/pull/3729))
 - Don't reload and validate schema files if they haven't changed ([#3563](https://github.com/dbt-labs/dbt/issues/3563), [#3888](https://github.com/dbt-labs/dbt/issues/3888))

 Contributors:
@@ -36,10 +56,14 @@ Contributors:
 - [@dbrtly](https://github.com/dbrtly) ([#3834](https://github.com/dbt-labs/dbt/pull/3834))
 - [@swanderz](https://github.com/swanderz) ([#3623](https://github.com/dbt-labs/dbt/pull/3623))
 - [@JasonGluck](https://github.com/JasonGluck) ([#3582](https://github.com/dbt-labs/dbt/pull/3582))
-- [@joellabes](https://github.com/joellabes) ([#3669](https://github.com/dbt-labs/dbt/pull/3669))
+- [@joellabes](https://github.com/joellabes) ([#3669](https://github.com/dbt-labs/dbt/pull/3669), [#3833](https://github.com/dbt-labs/dbt/pull/3833))
 - [@juma-adoreme](https://github.com/juma-adoreme) ([#3838](https://github.com/dbt-labs/dbt/pull/3838))
 - [@annafil](https://github.com/annafil) ([#3825](https://github.com/dbt-labs/dbt/pull/3825))
+- [@AndreasTA-AW](https://github.com/AndreasTA-AW) ([#3691](https://github.com/dbt-labs/dbt/pull/3691))
+- [@Kayrnt](https://github.com/Kayrnt) ([#3707](https://github.com/dbt-labs/dbt/pull/3707))
+- [@TeddyCr](https://github.com/TeddyCr) ([#3448](https://github.com/dbt-labs/dbt/pull/3865))
+- [@sdebruyn](https://github.com/sdebruyn) ([#3906](https://github.com/dbt-labs/dbt/pull/3906))


 ## dbt 0.21.0b2 (August 19, 2021)
@@ -56,7 +80,6 @@ Contributors:
 ### Under the hood

 - Add `build` RPC method, and a subset of flags for `build` task ([#3595](https://github.com/dbt-labs/dbt/issues/3595), [#3674](https://github.com/dbt-labs/dbt/pull/3674))
-- Get more information on partial parsing version mismatches ([#3757](https://github.com/dbt-labs/dbt/issues/3757), [#3758](https://github.com/dbt-labs/dbt/pull/3758))

 ## dbt 0.21.0b1 (August 03, 2021)

@@ -106,6 +129,8 @@ Contributors:
 - Better error handling for BigQuery job labels that are too long. ([#3612](https://github.com/dbt-labs/dbt/pull/3612), [#3703](https://github.com/dbt-labs/dbt/pull/3703))
+- Get more information on partial parsing version mismatches ([#3757](https://github.com/dbt-labs/dbt/issues/3757), [#3758](https://github.com/dbt-labs/dbt/pull/3758))
+- Switch to full reparse on partial parsing exceptions. Log and report exception information. ([#3725](https://github.com/dbt-labs/dbt/issues/3725), [#3733](https://github.com/dbt-labs/dbt/pull/3733))
 - Use GitHub Actions for CI ([#3688](https://github.com/dbt-labs/dbt/issues/3688), [#3669](https://github.com/dbt-labs/dbt/pull/3669))

 ### Fixes
@@ -68,7 +68,7 @@ The `dbt` maintainers use labels to categorize open issues. Some labels indicate

 - **Trunks** are where active development of the next release takes place. There is one trunk, named `develop` at the time of writing, and it is the default branch of the repository.
 - **Release Branches** track a specific, not yet complete release of `dbt`. Each minor version release has a corresponding release branch. For example, the `0.11.x` series of releases has a branch called `0.11.latest`. This allows us to release new patch versions under `0.11` without necessarily needing to pull them into the latest version of `dbt`.
-- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk brnach or a specific release branch.
+- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk branch or a specific release branch.

 ## Getting the code

@@ -135,7 +135,7 @@ brew install postgresql

 ### Installation

-First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Next, install `dbt` (and its dependencies) with:
+First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Also ensure you have the latest version of pip installed with `pip install --upgrade pip`. Next, install `dbt` (and its dependencies) with:

 ```sh
 make dev
 ```

@@ -170,6 +170,8 @@ docker-compose up -d database
 PGHOST=localhost PGUSER=root PGPASSWORD=password PGDATABASE=postgres bash test/setup_db.sh
 ```

+Note that you may need to run the previous command twice as it does not currently wait for the database to be running before attempting to run commands against it. This will be fixed with [#3876](https://github.com/dbt-labs/dbt/issues/3876).
+
 `dbt` uses test credentials specified in a `test.env` file in the root of the repository for non-Postgres databases. This `test.env` file is git-ignored, but please be _extra_ careful to never check in credentials or other sensitive information when developing against `dbt`. To create your `test.env` file, copy the provided sample file, then supply your relevant credentials. This step is only required to use non-Postgres databases.
@@ -30,7 +30,7 @@ def find_matching(
     root_path: str,
     relative_paths_to_search: List[str],
     file_pattern: str,
-) -> List[Dict[str, str]]:
+) -> List[Dict[str, Any]]:
     """
     Given an absolute `root_path`, a list of relative paths to that
    absolute root path (`relative_paths_to_search`), and a `file_pattern`
@@ -61,11 +61,19 @@ def find_matching(
             relative_path = os.path.relpath(
                 absolute_path, absolute_path_to_search
             )
+            modification_time = 0.0
+            try:
+                modification_time = os.path.getmtime(absolute_path)
+            except OSError:
+                logger.exception(
+                    f"Error retrieving modification time for file {absolute_path}"
+                )
             if reobj.match(local_file):
                 matching.append({
                     'searched_path': relative_path_to_search,
                     'absolute_path': absolute_path,
                     'relative_path': relative_path,
+                    'modification_time': modification_time,
                 })

     return matching
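A quick sketch of the fallback behavior added above: `os.path.getmtime` can raise `OSError` (for example, if the file vanishes between listing and stat), in which case the `0.0` sentinel survives. The temp-file demo is illustrative only:

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

modification_time = 0.0
try:
    modification_time = os.path.getmtime(path)
except OSError:
    pass  # dbt logs the exception; the 0.0 sentinel remains
print(modification_time > 0.0)  # True for an existing file

os.remove(path)
try:
    modification_time = os.path.getmtime(path)
except OSError:
    modification_time = 0.0
print(modification_time)  # 0.0 once the file is gone
```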
@@ -10,7 +10,7 @@ from dbt.adapters.factory import get_adapter
 from dbt.clients import jinja
 from dbt.clients.system import make_directory
 from dbt.context.providers import generate_runtime_model
-from dbt.contracts.graph.manifest import Manifest
+from dbt.contracts.graph.manifest import Manifest, UniqueID
 from dbt.contracts.graph.compiled import (
     COMPILED_TYPES,
     CompiledSchemaTestNode,
@@ -107,6 +107,18 @@ def _extend_prepended_ctes(prepended_ctes, new_prepended_ctes):
         _add_prepended_cte(prepended_ctes, new_cte)


+def _get_tests_for_node(manifest: Manifest, unique_id: UniqueID) -> List[UniqueID]:
+    """Get a list of tests that depend on the node with the
+    provided unique id."""
+    return [
+        node.unique_id
+        for _, node in manifest.nodes.items()
+        if node.resource_type == NodeType.Test and
+        unique_id in node.depends_on_nodes
+    ]
+
+
 class Linker:
     def __init__(self, data=None):
         if data is None:
@@ -142,7 +154,7 @@ class Linker:
         include all nodes in their corresponding graph entries.
         """
         out_graph = self.graph.copy()
-        for node_id in self.graph.nodes():
+        for node_id in self.graph:
             data = manifest.expect(node_id).to_dict(omit_none=True)
             out_graph.add_node(node_id, **data)
         nx.write_gpickle(out_graph, outfile)
@@ -412,13 +424,80 @@ class Compiler:
             self.link_node(linker, node, manifest)
         for exposure in manifest.exposures.values():
             self.link_node(linker, exposure, manifest)
             # linker.add_node(exposure.unique_id)

         cycle = linker.find_cycles()

         if cycle:
             raise RuntimeError("Found a cycle: {}".format(cycle))

+        self.resolve_graph(linker, manifest)
+
+    def resolve_graph(self, linker: Linker, manifest: Manifest) -> None:
+        """This method adds additional edges to the DAG. For a given non-test
+        executable node, add an edge from an upstream test to the given node if
+        the set of nodes the test depends on is a proper/strict subset of the
+        upstream nodes for the given node."""
+
+        # Given a graph:
+        # model1 --> model2 --> model3
+        #   |          |
+        #   |          \/
+        #   \/        test2
+        # test1
+        #
+        # Produce the following graph:
+        # model1 --> model2 --> model3
+        #   |          |        /\ /\
+        #   |          \/        |  |
+        #   \/        test2 -----+  |
+        # test1 --------------------+
+
+        for node_id in linker.graph:
+            # If node is executable (in manifest.nodes) and does _not_
+            # represent a test, continue.
+            if (
+                node_id in manifest.nodes and
+                manifest.nodes[node_id].resource_type != NodeType.Test
+            ):
+                # Get *everything* upstream of the node
+                all_upstream_nodes = nx.traversal.bfs_tree(
+                    linker.graph, node_id, reverse=True
+                )
+                # Get the set of upstream nodes not including the current node.
+                upstream_nodes = set([
+                    n for n in all_upstream_nodes if n != node_id
+                ])
+
+                # Get all tests that depend on any upstream nodes.
+                upstream_tests = []
+                for upstream_node in upstream_nodes:
+                    upstream_tests += _get_tests_for_node(
+                        manifest,
+                        upstream_node
+                    )
+
+                for upstream_test in upstream_tests:
+                    # Get the set of all nodes that the test depends on,
+                    # including the upstream_node itself. This is necessary
+                    # because tests can depend on multiple nodes (ex:
+                    # relationship tests). Test nodes do not distinguish
+                    # between what node the test is "testing" and what
+                    # node(s) it depends on.
+                    test_depends_on = set(
+                        manifest.nodes[upstream_test].depends_on_nodes
+                    )
+
+                    # If the set of nodes that an upstream test depends on
+                    # is a proper (or strict) subset of all upstream nodes of
+                    # the current node, add an edge from the upstream test
+                    # to the current node. Must be a proper/strict subset to
+                    # avoid adding a circular dependency to the graph.
+                    if (test_depends_on < upstream_nodes):
+                        linker.graph.add_edge(
+                            upstream_test,
+                            node_id
+                        )
+
     def compile(self, manifest: Manifest, write=True) -> Graph:
         self.initialize()
         linker = Linker()
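The proper-subset rule in `resolve_graph` is easier to see on a toy DAG. A minimal sketch using networkx directly; the model/test names and the tiny dependency map are hypothetical, not taken from dbt itself:

```python
import networkx as nx

graph = nx.DiGraph()
graph.add_edges_from([
    ("model1", "model2"),   # model1 --> model2 --> model3
    ("model2", "model3"),
    ("model1", "test1"),    # test1 depends only on model1
    ("model2", "test2"),    # test2 depends only on model2
])
tests = {"test1": {"model1"}, "test2": {"model2"}}

for node_id in list(graph):
    if node_id in tests:
        continue  # only add edges *into* non-test nodes
    upstream = set(nx.traversal.bfs_tree(graph, node_id, reverse=True)) - {node_id}
    for test, deps in tests.items():
        # proper subset (<) avoids adding a cycle when deps == upstream
        if deps < upstream:
            graph.add_edge(test, node_id)

# model3 now runs only after test1 and test2 have passed
print(sorted(graph.predecessors("model3")))  # ['model2', 'test1', 'test2']
```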
@@ -645,13 +645,24 @@ class Project:
     def hashed_name(self):
         return hashlib.md5(self.project_name.encode('utf-8')).hexdigest()

-    def get_selector(self, name: str) -> SelectionSpec:
+    def get_selector(self, name: str) -> Union[SelectionSpec, bool]:
         if name not in self.selectors:
             raise RuntimeException(
                 f'Could not find selector named {name}, expected one of '
                 f'{list(self.selectors)}'
             )
-        return self.selectors[name]
+        return self.selectors[name]["definition"]
+
+    def get_default_selector_name(self) -> Union[str, None]:
+        """This function fetches the default selector to use on `dbt run` (if any)
+        :return: either a selector name if a default is set, or None
+        :rtype: Union[str, None]
+        """
+        for selector_name, selector in self.selectors.items():
+            if selector["default"] is True:
+                return selector_name
+
+        return None

     def get_macro_search_order(self, macro_namespace: str):
         for dispatch_entry in self.dispatch:
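A minimal sketch of the selector shape this change introduces — each entry now carries both a `default` flag and the parsed `definition`. The selector names and string definitions here are hypothetical:

```python
from typing import Any, Dict, Optional

selectors: Dict[str, Dict[str, Any]] = {
    "nightly": {"default": True, "definition": "tag:nightly"},
    "smoke": {"default": False, "definition": "tag:smoke"},
}

def get_selector(name: str) -> Any:
    # callers now unwrap the "definition" key rather than the raw entry
    return selectors[name]["definition"]

def get_default_selector_name() -> Optional[str]:
    for name, entry in selectors.items():
        if entry["default"] is True:
            return name
    return None

assert get_selector("smoke") == "tag:smoke"
assert get_default_selector_name() == "nightly"
```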
@@ -1,5 +1,5 @@
 from pathlib import Path
-from typing import Dict, Any
+from typing import Dict, Any, Union
 from dbt.clients.yaml_helper import (  # noqa: F401
     yaml, Loader, Dumper, load_yaml_text
 )
@@ -29,13 +29,14 @@ Validator Error:
 """


-class SelectorConfig(Dict[str, SelectionSpec]):
+class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):

     @classmethod
     def selectors_from_dict(cls, data: Dict[str, Any]) -> 'SelectorConfig':
         try:
             SelectorFile.validate(data)
             selector_file = SelectorFile.from_dict(data)
+            validate_selector_default(selector_file)
             selectors = parse_from_selectors_definition(selector_file)
         except ValidationError as exc:
             yaml_sel_cfg = yaml.dump(exc.instance)
@@ -118,6 +119,24 @@ def selector_config_from_data(
     return selectors


+def validate_selector_default(selector_file: SelectorFile) -> None:
+    """Check if a selectors.yml file has more than one default key set to true."""
+    default_set: bool = False
+    default_selector_name: Union[str, None] = None
+
+    for selector in selector_file.selectors:
+        if selector.default is True and default_set is False:
+            default_set = True
+            default_selector_name = selector.name
+            continue
+        if selector.default is True and default_set is True:
+            raise DbtSelectorsError(
+                "Error when parsing the selector file. "
+                "Found multiple selectors with `default: true`: "
+                f"{default_selector_name} and {selector.name}"
+            )
+
+
 # These are utilities to clean up the dictionary created from
 # selectors.yml by turning the cli-string format entries into
 # normalized dictionary entries. It parallels the flow in
@@ -42,6 +42,7 @@ parse_file_type_to_parser = {
 class FilePath(dbtClassMixin):
     searched_path: str
     relative_path: str
+    modification_time: float
     project_root: str

     @property
@@ -132,6 +133,10 @@ class RemoteFile(dbtClassMixin):
     def original_file_path(self):
         return 'from remote system'

+    @property
+    def modification_time(self):
+        return 'from remote system'
+

 @dataclass
 class BaseSourceFile(dbtClassMixin, SerializableType):
@@ -150,8 +155,6 @@ class BaseSourceFile(dbtClassMixin, SerializableType):
     def file_id(self):
         if isinstance(self.path, RemoteFile):
             return None
-        if self.checksum.name == 'none':
-            return None
         return f'{self.project_name}://{self.path.original_file_path}'

     def _serialize(self):
@@ -1071,7 +1071,7 @@ AnyManifest = Union[Manifest, MacroManifest]


 @dataclass
-@schema_version('manifest', 2)
+@schema_version('manifest', 3)
 class WritableManifest(ArtifactMixin):
     nodes: Mapping[UniqueID, ManifestNode] = field(
         metadata=dict(description=(

@@ -185,7 +185,7 @@ class RunExecutionResult(

 @dataclass
-@schema_version('run-results', 2)
+@schema_version('run-results', 3)
 class RunResultsArtifact(ExecutionResult, ArtifactMixin):
     results: Sequence[RunResultOutput]
     args: Dict[str, Any] = field(default_factory=dict)
@@ -285,6 +285,9 @@ class SourceFreshnessOutput(dbtClassMixin):
     status: FreshnessStatus
     criteria: FreshnessThreshold
     adapter_response: Dict[str, Any]
+    timing: List[TimingInfo]
+    thread_id: str
+    execution_time: float


 @dataclass
@@ -333,7 +336,10 @@ def process_freshness_result(
         max_loaded_at_time_ago_in_s=result.age,
         status=result.status,
         criteria=criteria,
-        adapter_response=result.adapter_response
+        adapter_response=result.adapter_response,
+        timing=result.timing,
+        thread_id=result.thread_id,
+        execution_time=result.execution_time,
     )


@@ -363,7 +369,7 @@ class FreshnessResult(ExecutionResult):


 @dataclass
-@schema_version('sources', 1)
+@schema_version('sources', 2)
 class FreshnessExecutionResultArtifact(
     ArtifactMixin,
     VersionedSchema,
@@ -121,9 +121,9 @@ class RPCDocsGenerateParameters(RPCParameters):

 @dataclass
 class RPCBuildParameters(RPCParameters):
-    threads: Optional[int] = None
-    models: Union[None, str, List[str]] = None
+    resource_types: Optional[List[str]] = None
+    select: Union[None, str, List[str]] = None
+    threads: Optional[int] = None
     exclude: Union[None, str, List[str]] = None
     selector: Optional[str] = None
     state: Optional[str] = None

@@ -9,6 +9,7 @@ class SelectorDefinition(dbtClassMixin):
     name: str
     definition: Union[str, Dict[str, Any]]
     description: str = ''
+    default: bool = False


 @dataclass
@@ -18,6 +18,7 @@ WRITE_JSON = None
 PARTIAL_PARSE = None
 USE_COLORS = None
 STORE_FAILURES = None
+GREEDY = None


 def env_set_truthy(key: str) -> Optional[str]:
@@ -56,7 +57,7 @@ MP_CONTEXT = _get_context()
 def reset():
     global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
         USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
-        STORE_FAILURES
+        STORE_FAILURES, GREEDY

     STRICT_MODE = False
     FULL_REFRESH = False
@@ -69,12 +70,13 @@ def reset():
     MP_CONTEXT = _get_context()
     USE_COLORS = True
     STORE_FAILURES = False
+    GREEDY = False


 def set_from_args(args):
     global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
         USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
-        STORE_FAILURES
+        STORE_FAILURES, GREEDY

     USE_CACHE = getattr(args, 'use_cache', USE_CACHE)

@@ -99,6 +101,7 @@ def set_from_args(args):
         USE_COLORS = use_colors_override

     STORE_FAILURES = getattr(args, 'store_failures', STORE_FAILURES)
+    GREEDY = getattr(args, 'greedy', GREEDY)


 # initialize everything to the defaults on module load
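A small sketch of how the new GREEDY flag flows through `set_from_args` and `reset`; the sparse `Namespace` is hypothetical and leans on the `getattr(args, ..., DEFAULT)` pattern visible above, so unset attributes keep their defaults:

```python
import argparse
import dbt.flags as flags

args = argparse.Namespace(greedy=True, store_failures=False, use_cache=True)
flags.set_from_args(args)
print(flags.GREEDY)  # True

flags.reset()        # back to module-load defaults
print(flags.GREEDY)  # False
```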
@@ -1,4 +1,5 @@
 # special support for CLI argument parsing.
+from dbt import flags
 import itertools
 from dbt.clients.yaml_helper import yaml, Loader, Dumper  # noqa: F401

@@ -66,7 +67,7 @@ def parse_union_from_default(
 def parse_difference(
     include: Optional[List[str]], exclude: Optional[List[str]]
 ) -> SelectionDifference:
-    included = parse_union_from_default(include, DEFAULT_INCLUDES)
+    included = parse_union_from_default(include, DEFAULT_INCLUDES, greedy=bool(flags.GREEDY))
     excluded = parse_union_from_default(exclude, DEFAULT_EXCLUDES, greedy=True)
     return SelectionDifference(components=[included, excluded])

@@ -180,7 +181,7 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
     union_def_parts = _get_list_dicts(definition, 'union')
     include, exclude = _parse_include_exclude_subdefs(union_def_parts)

-    union = SelectionUnion(components=include)
+    union = SelectionUnion(components=include, greedy_warning=False)

     if exclude is None:
         union.raw = definition
@@ -188,7 +189,8 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
     else:
         return SelectionDifference(
             components=[union, exclude],
-            raw=definition
+            raw=definition,
+            greedy_warning=False
         )


@@ -197,7 +199,7 @@ def parse_intersection_definition(
 ) -> SelectionSpec:
     intersection_def_parts = _get_list_dicts(definition, 'intersection')
     include, exclude = _parse_include_exclude_subdefs(intersection_def_parts)
-    intersection = SelectionIntersection(components=include)
+    intersection = SelectionIntersection(components=include, greedy_warning=False)

     if exclude is None:
         intersection.raw = definition
@@ -205,7 +207,8 @@ def parse_intersection_definition(
     else:
         return SelectionDifference(
             components=[intersection, exclude],
-            raw=definition
+            raw=definition,
+            greedy_warning=False
         )


@@ -239,7 +242,7 @@ def parse_dict_definition(definition: Dict[str, Any]) -> SelectionSpec:
     if diff_arg is None:
         return base
     else:
-        return SelectionDifference(components=[base, diff_arg])
+        return SelectionDifference(components=[base, diff_arg], greedy_warning=False)


 def parse_from_definition(
@@ -271,10 +274,12 @@ def parse_from_definition(

 def parse_from_selectors_definition(
     source: SelectorFile
-) -> Dict[str, SelectionSpec]:
-    result: Dict[str, SelectionSpec] = {}
+) -> Dict[str, Dict[str, Union[SelectionSpec, bool]]]:
+    result: Dict[str, Dict[str, Union[SelectionSpec, bool]]] = {}
     selector: SelectorDefinition
     for selector in source.selectors:
-        result[selector.name] = parse_from_definition(selector.definition,
-                                                      rootlevel=True)
+        result[selector.name] = {
+            "default": selector.default,
+            "definition": parse_from_definition(selector.definition, rootlevel=True)
+        }
     return result
@@ -1,4 +1,3 @@
-
 from typing import Set, List, Optional, Tuple

 from .graph import Graph, UniqueId
@@ -30,6 +29,24 @@ def alert_non_existence(raw_spec, nodes):
     )


+def alert_unused_nodes(raw_spec, node_names):
+    summary_nodes_str = ("\n  - ").join(node_names[:3])
+    debug_nodes_str = ("\n  - ").join(node_names)
+    and_more_str = f"\n  - and {len(node_names) - 3} more" if len(node_names) > 4 else ""
+    summary_msg = (
+        f"\nSome tests were excluded because at least one parent is not selected. "
+        f"Use the --greedy flag to include them."
+        f"\n  - {summary_nodes_str}{and_more_str}"
+    )
+    logger.info(summary_msg)
+    if len(node_names) > 4:
+        debug_msg = (
+            f"Full list of tests that were excluded:"
+            f"\n  - {debug_nodes_str}"
+        )
+        logger.debug(debug_msg)
+
+
 def can_select_indirectly(node):
     """If a node is not selected itself, but its parent(s) are, it may qualify
     for indirect selection.
@@ -151,16 +168,16 @@ class NodeSelector(MethodManager):

         return direct_nodes, indirect_nodes

-    def select_nodes(self, spec: SelectionSpec) -> Set[UniqueId]:
+    def select_nodes(self, spec: SelectionSpec) -> Tuple[Set[UniqueId], Set[UniqueId]]:
         """Select the nodes in the graph according to the spec.

         This is the main point of entry for turning a spec into a set of nodes:
         - Recurse through spec, select by criteria, combine by set operation
         - Return final (unfiltered) selection set
         """
         direct_nodes, indirect_nodes = self.select_nodes_recursively(spec)
-        return direct_nodes
+        indirect_only = indirect_nodes.difference(direct_nodes)
+        return direct_nodes, indirect_only

     def _is_graph_member(self, unique_id: UniqueId) -> bool:
         if unique_id in self.manifest.sources:
@@ -213,6 +230,8 @@ class NodeSelector(MethodManager):
         # - If ANY parent is missing, return it separately. We'll keep it around
         #   for later and see if its other parents show up.
         # We use this for INCLUSION.
+        # Users can also opt in to inclusive GREEDY mode by passing the --greedy flag,
+        # or by specifying `greedy: true` in a yaml selector.

         direct_nodes = set(selected)
         indirect_nodes = set()
@@ -251,15 +270,24 @@ class NodeSelector(MethodManager):

         - node selection. Based on the include/exclude sets, the set
           of matched unique IDs is returned
           - expand the graph at each leaf node, before combination
             - selectors might override this. for example, this is where
               tests are added
           - includes direct + indirect selection (for tests)
         - filtering:
           - selectors can filter the nodes after all of them have been
             selected
         """
-        selected_nodes = self.select_nodes(spec)
+        selected_nodes, indirect_only = self.select_nodes(spec)
         filtered_nodes = self.filter_selection(selected_nodes)

+        if indirect_only:
+            filtered_unused_nodes = self.filter_selection(indirect_only)
+            if filtered_unused_nodes and spec.greedy_warning:
+                # log anything that didn't make the cut
+                unused_node_names = []
+                for unique_id in filtered_unused_nodes:
+                    name = self.manifest.nodes[unique_id].name
+                    unused_node_names.append(name)
+                alert_unused_nodes(spec, unused_node_names)
+
         return filtered_nodes

     def get_graph_queue(self, spec: SelectionSpec) -> GraphQueue:
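A self-contained sketch of the direct/indirect split and the "and N more" summary truncation used by `alert_unused_nodes`; node names are hypothetical:

```python
def split_selection(selected, tests_parents):
    """Tests whose parents are all selected are pulled in directly;
    tests with any unselected parent become indirect-only."""
    direct = set(selected)
    indirect_only = set()
    for test, parents in tests_parents.items():
        if parents <= direct:
            direct.add(test)
        else:
            indirect_only.add(test)
    return direct, indirect_only

def summarize_excluded(node_names):
    shown = "\n  - ".join(node_names[:3])
    more = f"\n  - and {len(node_names) - 3} more" if len(node_names) > 4 else ""
    return f"Some tests were excluded:\n  - {shown}{more}"

direct, skipped = split_selection(
    {"model_a"},
    {"test_a": {"model_a"}, "test_ab": {"model_a", "model_b"}},
)
print(sorted(direct))   # ['model_a', 'test_a']
print(sorted(skipped))  # ['test_ab'] -- only reachable with --greedy
print(summarize_excluded(sorted(skipped)))
```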
@@ -405,27 +405,38 @@ class StateSelectorMethod(SelectorMethod):

         return modified

-    def recursively_check_macros_modified(self, node):
-        # check if there are any changes in macros the first time
-        if self.modified_macros is None:
-            self.modified_macros = self._macros_modified()
-
+    def recursively_check_macros_modified(self, node, previous_macros):
         # loop through all macros that this node depends on
         for macro_uid in node.depends_on.macros:
+            # avoid infinite recursion if we've already seen this macro
+            if macro_uid in previous_macros:
+                continue
+            previous_macros.append(macro_uid)
             # is this macro one of the modified macros?
             if macro_uid in self.modified_macros:
                 return True
             # if not, and this macro depends on other macros, keep looping
-            macro = self.manifest.macros[macro_uid]
-            if len(macro.depends_on.macros) > 0:
-                return self.recursively_check_macros_modified(macro)
+            macro_node = self.manifest.macros[macro_uid]
+            if len(macro_node.depends_on.macros) > 0:
+                return self.recursively_check_macros_modified(macro_node, previous_macros)
             else:
                 return False
         return False

+    def check_macros_modified(self, node):
+        # check if there are any changes in macros the first time
+        if self.modified_macros is None:
+            self.modified_macros = self._macros_modified()
+        # no macros have been modified, skip looping entirely
+        if not self.modified_macros:
+            return False
+        # recursively loop through upstream macros to see if any is modified
+        else:
+            previous_macros = []
+            return self.recursively_check_macros_modified(node, previous_macros)
+
     def check_modified(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
         different_contents = not new.same_contents(old)  # type: ignore
-        upstream_macro_change = self.recursively_check_macros_modified(new)
+        upstream_macro_change = self.check_macros_modified(new)
         return different_contents or upstream_macro_change

     def check_modified_body(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
@@ -457,7 +468,7 @@ class StateSelectorMethod(SelectorMethod):
         return False

     def check_modified_macros(self, _, new: SelectorTarget) -> bool:
-        return self.recursively_check_macros_modified(new)
+        return self.check_macros_modified(new)

     def check_new(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
         return old is None
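A standalone sketch of the infinite-recursion fix (#3904): tracking visited macros lets the walk terminate even when macros form a dependency cycle. The dependency map is hypothetical:

```python
def any_modified(start, depends_on, modified, seen=None):
    seen = [] if seen is None else seen
    for uid in depends_on.get(start, []):
        if uid in seen:  # already visited: break the cycle
            continue
        seen.append(uid)
        if uid in modified:
            return True
        if any_modified(uid, depends_on, modified, seen):
            return True
    return False

# macro_a -> macro_b -> macro_a forms a cycle; the walk still terminates
deps = {"model_x": ["macro_a"], "macro_a": ["macro_b"], "macro_b": ["macro_a"]}
print(any_modified("model_x", deps, modified={"macro_b"}))  # True
print(any_modified("model_x", deps, modified=set()))        # False
```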
@@ -67,6 +67,7 @@ class SelectionCriteria:
     children: bool
     children_depth: Optional[int]
     greedy: bool = False
+    greedy_warning: bool = False  # do not raise warning for yaml selectors

     def __post_init__(self):
         if self.children and self.childrens_parents:
@@ -124,11 +125,11 @@ class SelectionCriteria:
             parents_depth=parents_depth,
             children=bool(dct.get('children')),
             children_depth=children_depth,
-            greedy=greedy
+            greedy=(greedy or bool(dct.get('greedy'))),
         )

     @classmethod
-    def dict_from_single_spec(cls, raw: str, greedy: bool = False):
+    def dict_from_single_spec(cls, raw: str):
         result = RAW_SELECTOR_PATTERN.match(raw)
         if result is None:
             return {'error': 'Invalid selector spec'}
@@ -145,6 +146,8 @@ class SelectionCriteria:
         dct['parents'] = bool(dct.get('parents'))
         if 'children' in dct:
             dct['children'] = bool(dct.get('children'))
+        if 'greedy' in dct:
+            dct['greedy'] = bool(dct.get('greedy'))
         return dct

     @classmethod
@@ -162,10 +165,12 @@ class BaseSelectionGroup(Iterable[SelectionSpec], metaclass=ABCMeta):
         self,
         components: Iterable[SelectionSpec],
         expect_exists: bool = False,
+        greedy_warning: bool = True,
         raw: Any = None,
     ):
         self.components: List[SelectionSpec] = list(components)
         self.expect_exists = expect_exists
+        self.greedy_warning = greedy_warning
         self.raw = raw

     def __iter__(self) -> Iterator[SelectionSpec]:
@@ -51,7 +51,7 @@
 {% endmacro %}

 {% macro get_batch_size() -%}
-  {{ adapter.dispatch('get_batch_size', 'dbt')() }}
+  {{ return(adapter.dispatch('get_batch_size', 'dbt')()) }}
 {%- endmacro %}

 {% macro default__get_batch_size() %}

@@ -7,7 +7,7 @@ with all_values as (
         count(*) as n_records

     from {{ model }}
-    group by 1
+    group by {{ column_name }}

 )


@@ -345,7 +345,7 @@ class TimestampNamed(logbook.Processor):
 class ScrubSecrets(logbook.Processor):
     def process(self, record):
         for secret in get_secret_env():
-            record.message = record.message.replace(secret, "*****")
+            record.message = str(record.message).replace(secret, "*****")


 logger = logbook.Logger('dbt')
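A small illustration of why the `str()` cast matters (#3971): a log record can carry a non-string message, and `.replace` is not defined on it. The dict message is hypothetical:

```python
secret = "hunter2"
message = {"sql": "select '" + secret + "'"}  # a non-string log message

# message.replace(...) would raise AttributeError on a dict;
# casting first always yields a scrubbable string.
scrubbed = str(message).replace(secret, "*****")
print(scrubbed)  # {'sql': "select '*****'"}
```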
@@ -10,23 +10,23 @@ from pathlib import Path

 import dbt.version
 import dbt.flags as flags
-import dbt.task.run as run_task
 import dbt.task.build as build_task
 import dbt.task.clean as clean_task
 import dbt.task.compile as compile_task
 import dbt.task.debug as debug_task
-import dbt.task.clean as clean_task
 import dbt.task.deps as deps_task
-import dbt.task.init as init_task
-import dbt.task.seed as seed_task
-import dbt.task.test as test_task
-import dbt.task.snapshot as snapshot_task
-import dbt.task.generate as generate_task
-import dbt.task.serve as serve_task
 import dbt.task.freshness as freshness_task
-import dbt.task.run_operation as run_operation_task
+import dbt.task.generate as generate_task
+import dbt.task.init as init_task
+import dbt.task.list as list_task
 import dbt.task.parse as parse_task
+import dbt.task.run as run_task
+import dbt.task.run_operation as run_operation_task
+import dbt.task.seed as seed_task
+import dbt.task.serve as serve_task
+import dbt.task.snapshot as snapshot_task
+import dbt.task.test as test_task
 from dbt.profiler import profiler
 from dbt.task.list import ListTask
 from dbt.task.rpc.server import RPCServerTask
 from dbt.adapters.factory import reset_adapters, cleanup_connections
@@ -399,6 +399,40 @@ def _build_build_subparser(subparsers, base_subparser):
         Stop execution upon a first failure.
         '''
     )
+    sub.add_argument(
+        '--store-failures',
+        action='store_true',
+        help='''
+        Store test results (failing rows) in the database
+        '''
+    )
+    sub.add_argument(
+        '--greedy',
+        action='store_true',
+        help='''
+        Select all tests that touch the selected resources,
+        even if they also depend on unselected resources
+        '''
+    )
+    resource_values: List[str] = [
+        str(s) for s in build_task.BuildTask.ALL_RESOURCE_VALUES
+    ] + ['all']
+    sub.add_argument('--resource-type',
+                     choices=resource_values,
+                     action='append',
+                     default=[],
+                     dest='resource_types')
+    # explicitly don't support --models
+    sub.add_argument(
+        '-s',
+        '--select',
+        dest='select',
+        nargs='+',
+        help='''
+        Specify the nodes to include.
+        ''',
+    )
     _add_common_selector_arguments(sub)
     return sub


@@ -611,7 +645,7 @@ def _add_table_mutability_arguments(*subparsers):
         '--full-refresh',
         action='store_true',
         help='''
-        If specified, DBT will drop incremental models and
+        If specified, dbt will drop incremental models and
         fully-recalculate the incremental table from the model definition.
         '''
     )
@@ -727,6 +761,14 @@ def _build_test_subparser(subparsers, base_subparser):
         Store test results (failing rows) in the database
         '''
     )
+    sub.add_argument(
+        '--greedy',
+        action='store_true',
+        help='''
+        Select all tests that touch the selected resources,
+        even if they also depend on unselected resources
+        '''
+    )

     sub.set_defaults(cls=test_task.TestTask, which='test', rpc_method='test')
     return sub
@@ -815,9 +857,9 @@ def _build_list_subparser(subparsers, base_subparser):
         ''',
         aliases=['ls'],
     )
-    sub.set_defaults(cls=ListTask, which='list', rpc_method=None)
+    sub.set_defaults(cls=list_task.ListTask, which='list', rpc_method=None)
     resource_values: List[str] = [
-        str(s) for s in ListTask.ALL_RESOURCE_VALUES
+        str(s) for s in list_task.ListTask.ALL_RESOURCE_VALUES
     ] + ['default', 'all']
     sub.add_argument('--resource-type',
                      choices=resource_values,
@@ -852,6 +894,14 @@ def _build_list_subparser(subparsers, base_subparser):
                      metavar='SELECTOR',
                      required=False,
     )
+    sub.add_argument(
+        '--greedy',
+        action='store_true',
+        help='''
+        Select all tests that touch the selected resources,
+        even if they also depend on unselected resources
+        '''
+    )
     _add_common_selector_arguments(sub)

     return sub
@@ -1062,7 +1112,7 @@ def parse_args(args, cls=DBTArgumentParser):
     # --select, --exclude
     # list_sub sets up its own arguments.
     _add_selection_arguments(
-        build_sub, run_sub, compile_sub, generate_sub, test_sub, snapshot_sub, seed_sub)
+        run_sub, compile_sub, generate_sub, test_sub, snapshot_sub, seed_sub)
     # --defer
     _add_defer_argument(run_sub, test_sub, build_sub)
     # --full-refresh
@@ -72,10 +72,13 @@ class HookParser(SimpleParser[HookBlock, ParsedHookNode]):

     # Hooks are only in the dbt_project.yml file for the project
     def get_path(self) -> FilePath:
+        # There ought to be an existing file object for this, but
+        # until that is implemented use a dummy modification time
         path = FilePath(
             project_root=self.project.project_root,
             searched_path='.',
             relative_path='dbt_project.yml',
+            modification_time=0.0,
         )
         return path

@@ -203,8 +203,11 @@ class ManifestLoader:
         # used to get the SourceFiles from the manifest files.
         start_read_files = time.perf_counter()
         project_parser_files = {}
+        saved_files = {}
+        if self.saved_manifest:
+            saved_files = self.saved_manifest.files
         for project in self.all_projects.values():
-            read_files(project, self.manifest.files, project_parser_files)
+            read_files(project, self.manifest.files, project_parser_files, saved_files)
         self._perf_info.path_count = len(self.manifest.files)
         self._perf_info.read_files_elapsed = (time.perf_counter() - start_read_files)

@@ -423,7 +426,7 @@ class ManifestLoader:
         if not self.partially_parsing and HookParser in parser_types:
             hook_parser = HookParser(project, self.manifest, self.root_project)
             path = hook_parser.get_path()
-            file = load_source_file(path, ParseFileType.Hook, project.project_name)
+            file = load_source_file(path, ParseFileType.Hook, project.project_name, {})
             if file:
                 file_block = FileBlock(file)
                 hook_parser.parse_file(file_block)
@@ -648,7 +651,7 @@ class ManifestLoader:
             macro_parser = MacroParser(project, self.manifest)
             for path in macro_parser.get_paths():
                 source_file = load_source_file(
-                    path, ParseFileType.Macro, project.project_name)
+                    path, ParseFileType.Macro, project.project_name, {})
                 block = FileBlock(source_file)
                 # This does not add the file to the manifest.files,
                 # but that shouldn't be necessary here.
@@ -1,15 +1,17 @@
 from dbt.context.context_config import ContextConfig
 from dbt.contracts.graph.parsed import ParsedModelNode
 import dbt.flags as flags
-import dbt.tracking
+from dbt.logger import GLOBAL_LOGGER as logger
 from dbt.node_types import NodeType
 from dbt.parser.base import SimpleSQLParser
 from dbt.parser.search import FileBlock
+import dbt.tracking as tracking
 from dbt import utils
 from dbt_extractor import ExtractionError, py_extract_from_source  # type: ignore
+from functools import reduce
+from itertools import chain
 import random
-from typing import Any, Dict, List
+from typing import Any, Dict, Iterator, List, Optional, Union


 class ModelParser(SimpleSQLParser[ParsedModelNode]):
@@ -26,32 +28,52 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
     def get_compiled_path(cls, block: FileBlock):
         return block.path.relative_path

-    # TODO when this is turned on by default, simplify the nasty if/else tree inside this method.
     def render_update(
         self, node: ParsedModelNode, config: ContextConfig
     ) -> None:
         self.manifest._parsing_info.static_analysis_path_count += 1
-        # TODO go back to 1/100 when this is turned on by default.
-        # `True` roughly 1/50 times this function is called
-        sample: bool = random.randint(1, 51) == 50

+        # `True` roughly 1/100 times this function is called
+        sample: bool = random.randint(1, 101) == 100
+        # top-level declaration of variables
+        experimentally_parsed: Optional[Union[str, Dict[str, List[Any]]]] = None
+        config_call_dict: Dict[str, Any] = {}
+        source_calls: List[List[str]] = []

         # run the experimental parser if the flag is on or if we're sampling
         if flags.USE_EXPERIMENTAL_PARSER or sample:
-            try:
-                experimentally_parsed: Dict[str, List[Any]] = py_extract_from_source(node.raw_sql)
+            if self._has_banned_macro(node):
+                # this log line is used for integration testing. If you change
+                # the code at the beginning of the line change the tests in
+                # test/integration/072_experimental_parser_tests/test_all_experimental_parser.py
+                logger.debug(
+                    f"1601: parser fallback to jinja because of macro override for {node.path}"
+                )
+                experimentally_parsed = "has_banned_macro"
+            else:
+                # run the experimental parser and return the results
+                try:
+                    experimentally_parsed = py_extract_from_source(
+                        node.raw_sql
+                    )
+                    logger.debug(f"1699: statically parsed {node.path}")
+                # if we want information on what features are barring the experimental
+                # parser from reading model files, this is where we would add that
+                # since that information is stored in the `ExtractionError`.
+                except ExtractionError:
+                    experimentally_parsed = "cannot_parse"

-                # second config format
-                config_call_dict: Dict[str, Any] = {}
-                for c in experimentally_parsed['configs']:
-                    ContextConfig._add_config_call(config_call_dict, {c[0]: c[1]})
+        # if the parser succeeded, extract some data in easy-to-compare formats
+        if isinstance(experimentally_parsed, dict):
+            # create second config format
+            for c in experimentally_parsed['configs']:
+                ContextConfig._add_config_call(config_call_dict, {c[0]: c[1]})

-                # format sources TODO change extractor to match this type
-                source_calls: List[List[str]] = []
-                for s in experimentally_parsed['sources']:
-                    source_calls.append([s[0], s[1]])
-                experimentally_parsed['sources'] = source_calls
-
-            except ExtractionError as e:
-                experimentally_parsed = e
+            # format sources TODO change extractor to match this type
+            for s in experimentally_parsed['sources']:
+                source_calls.append([s[0], s[1]])
+            experimentally_parsed['sources'] = source_calls

         # normal dbt run
         if not flags.USE_EXPERIMENTAL_PARSER:
@@ -59,57 +81,19 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
             super().render_update(node, config)
             # if we're sampling, compare for correctness
             if sample:
-                result: List[str] = []
-                # experimental parser couldn't parse
-                if isinstance(experimentally_parsed, Exception):
-                    result += ["01_experimental_parser_cannot_parse"]
-                else:
-                    # look for false positive configs
-                    for k in config_call_dict.keys():
-                        if k not in config._config_call_dict:
-                            result += ["02_false_positive_config_value"]
-                            break
-
-                    # look for missed configs
-                    for k in config._config_call_dict.keys():
-                        if k not in config_call_dict:
-                            result += ["03_missed_config_value"]
-                            break
-
-                    # look for false positive sources
-                    for s in experimentally_parsed['sources']:
-                        if s not in node.sources:
-                            result += ["04_false_positive_source_value"]
-                            break
-
-                    # look for missed sources
-                    for s in node.sources:
-                        if s not in experimentally_parsed['sources']:
-                            result += ["05_missed_source_value"]
-                            break
-
-                    # look for false positive refs
-                    for r in experimentally_parsed['refs']:
-                        if r not in node.refs:
-                            result += ["06_false_positive_ref_value"]
-                            break
-
-                    # look for missed refs
-                    for r in node.refs:
-                        if r not in experimentally_parsed['refs']:
-                            result += ["07_missed_ref_value"]
-                            break
-
-                    # if there are no errors, return a success value
-                    if not result:
-                        result = ["00_exact_match"]
-
+                result = _get_sample_result(
+                    experimentally_parsed,
+                    config_call_dict,
+                    source_calls,
+                    node,
+                    config
+                )
                 # fire a tracking event. this fires one event for every sample
                 # so that we have data on a per file basis. Not only can we expect
                 # no false positives or misses, we can expect the number model
                 # files parseable by the experimental parser to match our internal
                 # testing.
-                if dbt.tracking.active_user is not None:  # None in some tests
+                if tracking.active_user is not None:  # None in some tests
                     tracking.track_experimental_parser_sample({
                         "project_id": self.root_project.hashed_name(),
                         "file_id": utils.get_hash(node),
@@ -117,7 +101,7 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
                     })

         # if the --use-experimental-parser flag was set, and the experimental parser succeeded
-        elif not isinstance(experimentally_parsed, Exception):
+        elif isinstance(experimentally_parsed, Dict):
             # since it doesn't need python jinja, fit the refs, sources, and configs
             # into the node. Down the line the rest of the node will be updated with
             # this information. (e.g. depends_on etc.)
@@ -141,7 +125,102 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):

             self.manifest._parsing_info.static_analysis_parsed_path_count += 1

-        # the experimental parser tried and failed on this model.
+        # the experimental parser didn't run on this model.
         # fall back to python jinja rendering.
+        elif experimentally_parsed in ["has_banned_macro"]:
+            # not logging here since the reason should have been logged above
+            super().render_update(node, config)
+        # the experimental parser ran on this model and failed.
+        # fall back to python jinja rendering.
         else:
             logger.debug(
                 f"1602: parser fallback to jinja because of extractor failure for {node.path}"
             )
             super().render_update(node, config)

+    # checks for banned macros
+    def _has_banned_macro(
+        self, node: ParsedModelNode
+    ) -> bool:
+        # first check if there is a banned macro defined in scope for this model file
+        root_project_name = self.root_project.project_name
+        project_name = node.package_name
+        banned_macros = ['ref', 'source', 'config']
+
+        all_banned_macro_keys: Iterator[str] = chain.from_iterable(
+            map(
+                lambda name: [
+                    f"macro.{project_name}.{name}",
+                    f"macro.{root_project_name}.{name}"
+                ],
+                banned_macros
+            )
+        )
+
+        return reduce(
+            lambda z, key: z or (key in self.manifest.macros),
+            all_banned_macro_keys,
+            False
+        )
+
+
+# returns a list of string codes to be sent as a tracking event
+def _get_sample_result(
+    sample_output: Optional[Union[str, Dict[str, Any]]],
+    config_call_dict: Dict[str, Any],
+    source_calls: List[List[str]],
+    node: ParsedModelNode,
+    config: ContextConfig
+) -> List[str]:
+    result: List[str] = []
+    # experimental parser didn't run
+    if sample_output is None:
+        result += ["09_experimental_parser_skipped"]
+    # experimental parser couldn't parse
+    elif (isinstance(sample_output, str)):
+        if sample_output == "cannot_parse":
+            result += ["01_experimental_parser_cannot_parse"]
+        elif sample_output == "has_banned_macro":
+            result += ["08_has_banned_macro"]
+    else:
+        # look for false positive configs
+        for k in config_call_dict.keys():
+            if k not in config._config_call_dict:
+                result += ["02_false_positive_config_value"]
+                break
+
+        # look for missed configs
+        for k in config._config_call_dict.keys():
+            if k not in config_call_dict:
+                result += ["03_missed_config_value"]
+                break
+
+        # look for false positive sources
+        for s in sample_output['sources']:
+            if s not in node.sources:
+                result += ["04_false_positive_source_value"]
+                break
+
+        # look for missed sources
+        for s in node.sources:
+            if s not in sample_output['sources']:
+                result += ["05_missed_source_value"]
+                break
+
+        # look for false positive refs
+        for r in sample_output['refs']:
+            if r not in node.refs:
+                result += ["06_false_positive_ref_value"]
+                break
+
+        # look for missed refs
+        for r in node.refs:
+            if r not in sample_output['refs']:
+                result += ["07_missed_ref_value"]
+                break

+        # if there are no errors, return a success value
+        if not result:
+            result = ["00_exact_match"]
+
+    return result
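The `chain`/`reduce` pipeline in `_has_banned_macro` has a shorter equivalent with `any()`. A sketch, with `manifest_macros` standing in for `self.manifest.macros`:

```python
from itertools import chain

def has_banned_macro(project_name, root_project_name, manifest_macros):
    banned = ['ref', 'source', 'config']
    keys = chain.from_iterable(
        (f"macro.{project_name}.{name}", f"macro.{root_project_name}.{name}")
        for name in banned
    )
    # reduce(lambda z, key: z or key in macros, keys, False) == any(...)
    return any(key in manifest_macros for key in keys)

print(has_banned_macro("my_pkg", "root", {"macro.root.ref": object()}))       # True
print(has_banned_macro("my_pkg", "root", {"macro.root.my_macro": object()}))  # False
```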
@@ -12,13 +12,27 @@ from typing import Optional

 # This loads the file contents and creates the SourceFile object
 def load_source_file(
         path: FilePath, parse_file_type: ParseFileType,
-        project_name: str) -> Optional[AnySourceFile]:
-    file_contents = load_file_contents(path.absolute_path, strip=False)
-    checksum = FileHash.from_contents(file_contents)
+        project_name: str, saved_files,) -> Optional[AnySourceFile]:
+
     sf_cls = SchemaSourceFile if parse_file_type == ParseFileType.Schema else SourceFile
-    source_file = sf_cls(path=path, checksum=checksum,
+    source_file = sf_cls(path=path, checksum=FileHash.empty(),
                          parse_file_type=parse_file_type, project_name=project_name)
-    source_file.contents = file_contents.strip()
+
+    skip_loading_schema_file = False
+    if (parse_file_type == ParseFileType.Schema and
+            saved_files and source_file.file_id in saved_files):
+        old_source_file = saved_files[source_file.file_id]
+        if (source_file.path.modification_time != 0.0 and
+                old_source_file.path.modification_time == source_file.path.modification_time):
+            source_file.checksum = old_source_file.checksum
+            source_file.dfy = old_source_file.dfy
+            skip_loading_schema_file = True
+
+    if not skip_loading_schema_file:
+        file_contents = load_file_contents(path.absolute_path, strip=False)
+        source_file.checksum = FileHash.from_contents(file_contents)
+        source_file.contents = file_contents.strip()

     if parse_file_type == ParseFileType.Schema and source_file.contents:
         dfy = yaml_from_file(source_file)
         if dfy:
@@ -69,7 +83,7 @@ def load_seed_source_file(match: FilePath, project_name) -> SourceFile:

# Use the FilesystemSearcher to get a bunch of FilePaths, then turn
# them into a bunch of FileSource objects
-def get_source_files(project, paths, extension, parse_file_type):
+def get_source_files(project, paths, extension, parse_file_type, saved_files):
    # file path list
    fp_list = list(FilesystemSearcher(
        project, paths, extension
@@ -80,17 +94,17 @@ def get_source_files(project, paths, extension, parse_file_type):
        if parse_file_type == ParseFileType.Seed:
            fb_list.append(load_seed_source_file(fp, project.project_name))
        else:
-            file = load_source_file(fp, parse_file_type, project.project_name)
+            file = load_source_file(fp, parse_file_type, project.project_name, saved_files)
            # only append to the list if the file has contents. added to fix #3568
            if file:
                fb_list.append(file)
    return fb_list


-def read_files_for_parser(project, files, dirs, extension, parse_ft):
+def read_files_for_parser(project, files, dirs, extension, parse_ft, saved_files):
    parser_files = []
    source_files = get_source_files(
-        project, dirs, extension, parse_ft
+        project, dirs, extension, parse_ft, saved_files
    )
    for sf in source_files:
        files[sf.file_id] = sf
@@ -102,46 +116,46 @@ def read_files_for_parser(project, files, dirs, extension, parse_ft):
# dictionary needs to be passed in. What determines the order of
# the various projects? Is the root project always last? Do the
# non-root projects need to be done separately in order?
-def read_files(project, files, parser_files):
+def read_files(project, files, parser_files, saved_files):

    project_files = {}

    project_files['MacroParser'] = read_files_for_parser(
-        project, files, project.macro_paths, '.sql', ParseFileType.Macro,
+        project, files, project.macro_paths, '.sql', ParseFileType.Macro, saved_files
    )

    project_files['ModelParser'] = read_files_for_parser(
-        project, files, project.source_paths, '.sql', ParseFileType.Model,
+        project, files, project.source_paths, '.sql', ParseFileType.Model, saved_files
    )

    project_files['SnapshotParser'] = read_files_for_parser(
-        project, files, project.snapshot_paths, '.sql', ParseFileType.Snapshot,
+        project, files, project.snapshot_paths, '.sql', ParseFileType.Snapshot, saved_files
    )

    project_files['AnalysisParser'] = read_files_for_parser(
-        project, files, project.analysis_paths, '.sql', ParseFileType.Analysis,
+        project, files, project.analysis_paths, '.sql', ParseFileType.Analysis, saved_files
    )

    project_files['DataTestParser'] = read_files_for_parser(
-        project, files, project.test_paths, '.sql', ParseFileType.Test,
+        project, files, project.test_paths, '.sql', ParseFileType.Test, saved_files
    )

    project_files['SeedParser'] = read_files_for_parser(
-        project, files, project.data_paths, '.csv', ParseFileType.Seed,
+        project, files, project.data_paths, '.csv', ParseFileType.Seed, saved_files
    )

    project_files['DocumentationParser'] = read_files_for_parser(
-        project, files, project.docs_paths, '.md', ParseFileType.Documentation,
+        project, files, project.docs_paths, '.md', ParseFileType.Documentation, saved_files
    )

    project_files['SchemaParser'] = read_files_for_parser(
-        project, files, project.all_source_paths, '.yml', ParseFileType.Schema,
+        project, files, project.all_source_paths, '.yml', ParseFileType.Schema, saved_files
    )

    # Also read .yaml files for schema files. Might be better to change
    # 'read_files_for_parser' to accept an array in the future.
    yaml_files = read_files_for_parser(
-        project, files, project.all_source_paths, '.yaml', ParseFileType.Schema,
+        project, files, project.all_source_paths, '.yaml', ParseFileType.Schema, saved_files
    )
    project_files['SchemaParser'].extend(yaml_files)
@@ -84,6 +84,7 @@ class FilesystemSearcher(Iterable[FilePath]):
            file_match = FilePath(
                searched_path=result['searched_path'],
                relative_path=result['relative_path'],
+                modification_time=result['modification_time'],
                project_root=root,
            )
            yield file_match
@@ -3,19 +3,22 @@ from .snapshot import SnapshotRunner as snapshot_model_runner
from .seed import SeedRunner as seed_runner
from .test import TestRunner as test_runner

-from dbt.graph import ResourceTypeSelector
+from dbt.contracts.results import NodeStatus
from dbt.exceptions import InternalException
+from dbt.graph import ResourceTypeSelector
from dbt.node_types import NodeType
+from dbt.task.test import TestSelector


class BuildTask(RunTask):
-    """The Build task processes all assets of a given process and attempts to 'build'
-    them in an opinionated fashion. Every resource type outlined in RUNNER_MAP
-    will be processed by the mapped runner class.
+    """The Build task processes all assets of a given process and attempts to
+    'build' them in an opinionated fashion. Every resource type outlined in
+    RUNNER_MAP will be processed by the mapped runner class.

-    I.E. a resource of type Model is handled by the ModelRunner which is imported
-    as run_model_runner.
-    """
+    I.E. a resource of type Model is handled by the ModelRunner which is
+    imported as run_model_runner. """

+    MARK_DEPENDENT_ERRORS_STATUSES = [NodeStatus.Error, NodeStatus.Fail]
+
    RUNNER_MAP = {
        NodeType.Model: run_model_runner,
@@ -23,6 +26,20 @@ class BuildTask(RunTask):
        NodeType.Seed: seed_runner,
        NodeType.Test: test_runner,
    }
+    ALL_RESOURCE_VALUES = frozenset({x for x in RUNNER_MAP.keys()})
+
+    @property
+    def resource_types(self):
+        if not self.args.resource_types:
+            return list(self.ALL_RESOURCE_VALUES)
+
+        values = set(self.args.resource_types)
+
+        if 'all' in values:
+            values.remove('all')
+            values.update(self.ALL_RESOURCE_VALUES)
+
+        return list(values)

    def get_node_selector(self) -> ResourceTypeSelector:
        if self.manifest is None or self.graph is None:
@@ -30,11 +47,19 @@ class BuildTask(RunTask):
            'manifest and graph must be set to get node selection'
        )

+        resource_types = self.resource_types
+
+        if resource_types == [NodeType.Test]:
+            return TestSelector(
+                graph=self.graph,
+                manifest=self.manifest,
+                previous_state=self.previous_state,
+            )
        return ResourceTypeSelector(
            graph=self.graph,
            manifest=self.manifest,
            previous_state=self.previous_state,
-            resource_types=[x for x in self.RUNNER_MAP.keys()],
+            resource_types=resource_types,
        )

    def get_runner_type(self, node):
@@ -4,7 +4,7 @@ from .base import BaseRunner

from dbt.contracts.results import RunStatus, RunResult
from dbt.exceptions import InternalException
-from dbt.graph import ResourceTypeSelector, SelectionSpec, parse_difference
+from dbt.graph import ResourceTypeSelector
from dbt.logger import print_timestamped_line
from dbt.node_types import NodeType

@@ -37,13 +37,6 @@ class CompileTask(GraphRunnableTask):
    def raise_on_first_error(self):
        return True

-    def get_selection_spec(self) -> SelectionSpec:
-        if self.args.selector_name:
-            spec = self.config.get_selector(self.args.selector_name)
-        else:
-            spec = parse_difference(self.args.select, self.args.exclude)
-        return spec
-
    def get_node_selector(self) -> ResourceTypeSelector:
        if self.manifest is None or self.graph is None:
            raise InternalException(
@@ -19,7 +19,7 @@ from dbt.exceptions import RuntimeException, InternalException
from dbt.logger import print_timestamped_line
from dbt.node_types import NodeType

-from dbt.graph import ResourceTypeSelector, SelectionSpec, parse_difference
+from dbt.graph import ResourceTypeSelector
from dbt.contracts.graph.parsed import ParsedSourceDefinition


@@ -136,19 +136,6 @@ class FreshnessTask(GraphRunnableTask):
    def raise_on_first_error(self):
        return False

-    def get_selection_spec(self) -> SelectionSpec:
-        """Generates a selection spec from task arguments to use when
-        processing graph. A SelectionSpec describes what nodes to select
-        when creating queue from graph of nodes.
-        """
-        if self.args.selector_name:
-            # use pre-defined selector (--selector) to create selection spec
-            spec = self.config.get_selector(self.args.selector_name)
-        else:
-            # use --select and --exclude args to create selection spec
-            spec = parse_difference(self.args.select, self.args.exclude)
-        return spec
-
    def get_node_selector(self):
        if self.manifest is None or self.graph is None:
            raise InternalException(
@@ -1,15 +1,10 @@
import json
from typing import Type

from dbt.contracts.graph.parsed import (
    ParsedExposure,
    ParsedSourceDefinition
)
-from dbt.graph import (
-    parse_difference,
-    ResourceTypeSelector,
-    SelectionSpec,
-)
+from dbt.graph import ResourceTypeSelector
from dbt.task.runnable import GraphRunnableTask, ManifestTask
from dbt.task.test import TestSelector
from dbt.node_types import NodeType
@@ -165,25 +160,19 @@ class ListTask(GraphRunnableTask):
        return list(values)

    @property
-    def selector(self):
+    def selection_arg(self):
        # for backwards compatibility, list accepts both --models and --select,
        # with slightly different behavior: --models implies --resource-type model
        if self.args.models:
            return self.args.models
        else:
            return self.args.select

-    def get_selection_spec(self) -> SelectionSpec:
-        if self.args.selector_name:
-            spec = self.config.get_selector(self.args.selector_name)
-        else:
-            spec = parse_difference(self.selector, self.args.exclude)
-        return spec
-
    def get_node_selector(self):
        if self.manifest is None or self.graph is None:
            raise InternalException(
                'manifest and graph must be set to perform node selection'
            )
        cls: Type[ResourceTypeSelector]
        if self.resource_types == [NodeType.Test]:
            return TestSelector(
                graph=self.graph,
@@ -320,13 +320,12 @@ class RemoteListTask(


class RemoteBuildProjectTask(RPCCommandTask[RPCBuildParameters], BuildTask):

    METHOD_NAME = 'build'

    def set_args(self, params: RPCBuildParameters) -> None:
-        if params.models:
-            self.args.select = self._listify(params.models)
-        else:
-            self.args.select = self._listify(params.select)
+        self.args.resource_types = self._listify(params.resource_types)
+        self.args.select = self._listify(params.select)
        self.args.exclude = self._listify(params.exclude)
        self.args.selector_name = params.selector
@@ -41,7 +41,13 @@ from dbt.exceptions import (
    FailFastException,
)

-from dbt.graph import GraphQueue, NodeSelector, SelectionSpec, Graph
+from dbt.graph import (
+    GraphQueue,
+    NodeSelector,
+    SelectionSpec,
+    parse_difference,
+    Graph
+)
from dbt.parser.manifest import ManifestLoader

import dbt.exceptions
@@ -83,6 +89,9 @@ class ManifestTask(ConfiguredTask):


class GraphRunnableTask(ManifestTask):

+    MARK_DEPENDENT_ERRORS_STATUSES = [NodeStatus.Error]
+
    def __init__(self, args, config):
        super().__init__(args, config)
        self.job_queue: Optional[GraphQueue] = None
@@ -103,11 +112,27 @@ class GraphRunnableTask(ManifestTask):
    def index_offset(self, value: int) -> int:
        return value

-    @abstractmethod
+    @property
+    def selection_arg(self):
+        return self.args.select
+
+    @property
+    def exclusion_arg(self):
+        return self.args.exclude
+
    def get_selection_spec(self) -> SelectionSpec:
-        raise NotImplementedException(
-            f'get_selection_spec not implemented for task {type(self)}'
-        )
+        default_selector_name = self.config.get_default_selector_name()
+        if self.args.selector_name:
+            # use pre-defined selector (--selector)
+            spec = self.config.get_selector(self.args.selector_name)
+        elif not (self.selection_arg or self.exclusion_arg) and default_selector_name:
+            # use pre-defined selector (--selector) with default: true
+            logger.info(f"Using default selector {default_selector_name}")
+            spec = self.config.get_selector(default_selector_name)
+        else:
+            # use --select and --exclude args
+            spec = parse_difference(self.selection_arg, self.exclusion_arg)
+        return spec

    @abstractmethod
    def get_node_selector(self) -> NodeSelector:
@@ -289,7 +314,7 @@ class GraphRunnableTask(ManifestTask):
        else:
            self.manifest.update_node(node)

-        if result.status == NodeStatus.Error:
+        if result.status in self.MARK_DEPENDENT_ERRORS_STATUSES:
            if is_ephemeral:
                cause = result
            else:
@@ -413,7 +438,7 @@ class GraphRunnableTask(ManifestTask):
        )

        if len(self._flattened_nodes) == 0:
-            logger.warning("WARNING: Nothing to do. Try checking your model "
+            logger.warning("\nWARNING: Nothing to do. Try checking your model "
                           "configs and model specification args")
            result = self.get_result(
                results=[],
@@ -96,5 +96,5 @@ def _get_dbt_plugins_info():
        yield plugin_name, mod.version


-__version__ = '0.21.0b1'
+__version__ = '0.21.0'
installed = get_installed_version()
@@ -284,12 +284,12 @@ def parse_args(argv=None):
    parser.add_argument('adapter')
    parser.add_argument('--title-case', '-t', default=None)
    parser.add_argument('--dependency', action='append')
-    parser.add_argument('--dbt-core-version', default='0.21.0b1')
+    parser.add_argument('--dbt-core-version', default='0.21.0')
    parser.add_argument('--email')
    parser.add_argument('--author')
    parser.add_argument('--url')
    parser.add_argument('--sql', action='store_true')
-    parser.add_argument('--package-version', default='0.21.0b1')
+    parser.add_argument('--package-version', default='0.21.0')
    parser.add_argument('--project-version', default='1.0')
    parser.add_argument(
        '--no-dependency', action='store_false', dest='set_dependency'
@@ -24,7 +24,7 @@ def read(fname):


package_name = "dbt-core"
-package_version = "0.21.0b1"
+package_version = "0.21.0"
description = """dbt (data build tool) is a command line tool that helps \
analysts and engineers transform data in their warehouse more effectively"""
docker/requirements/requirements.0.21.0.txt (new file, 75 lines)
@@ -0,0 +1,75 @@
agate==1.6.1
asn1crypto==1.4.0
attrs==21.2.0
azure-common==1.1.27
azure-core==1.19.0
azure-storage-blob==12.9.0
Babel==2.9.1
boto3==1.18.53
botocore==1.21.53
cachetools==4.2.4
certifi==2021.5.30
cffi==1.14.6
chardet==4.0.0
charset-normalizer==2.0.6
colorama==0.4.4
cryptography==3.4.8
google-api-core==1.31.3
google-auth==1.35.0
google-cloud-bigquery==2.28.0
google-cloud-core==1.7.2
google-crc32c==1.2.0
google-resumable-media==2.0.3
googleapis-common-protos==1.53.0
grpcio==1.41.0
hologram==0.0.14
idna==3.2
importlib-metadata==4.8.1
isodate==0.6.0
jeepney==0.7.1
Jinja2==2.11.3
jmespath==0.10.0
json-rpc==1.13.0
jsonschema==3.1.1
keyring==21.8.0
leather==0.3.3
Logbook==1.5.3
MarkupSafe==2.0.1
mashumaro==2.5
minimal-snowplow-tracker==0.0.2
msgpack==1.0.2
msrest==0.6.21
networkx==2.6.3
oauthlib==3.1.1
oscrypto==1.2.1
packaging==20.9
parsedatetime==2.6
proto-plus==1.19.2
protobuf==3.17.3
psycopg2-binary==2.9.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pycryptodomex==3.10.4
PyJWT==2.1.0
pyOpenSSL==20.0.1
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.2
python-slugify==5.0.2
pytimeparse==1.1.8
pytz==2021.3
PyYAML==5.4.1
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
s3transfer==0.5.0
SecretStorage==3.3.1
six==1.16.0
snowflake-connector-python==2.5.1
sqlparse==0.4.2
text-unidecode==1.3
typing-extensions==3.10.0.2
urllib3==1.26.7
Werkzeug==2.0.1
zipp==3.6.0
docker/requirements/requirements.0.21.0b2.txt (new file, 75 lines)
@@ -0,0 +1,75 @@
agate==1.6.1
asn1crypto==1.4.0
attrs==21.2.0
azure-common==1.1.27
azure-core==1.17.0
azure-storage-blob==12.8.1
Babel==2.9.1
boto3==1.18.25
botocore==1.21.25
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
chardet==4.0.0
charset-normalizer==2.0.4
colorama==0.4.4
cryptography==3.4.7
google-api-core==1.31.2
google-auth==1.35.0
google-cloud-bigquery==2.24.1
google-cloud-core==1.7.2
google-crc32c==1.1.2
google-resumable-media==2.0.0
googleapis-common-protos==1.53.0
grpcio==1.39.0
hologram==0.0.14
idna==3.2
importlib-metadata==4.6.4
isodate==0.6.0
jeepney==0.7.1
Jinja2==2.11.3
jmespath==0.10.0
json-rpc==1.13.0
jsonschema==3.1.1
keyring==21.8.0
leather==0.3.3
Logbook==1.5.3
MarkupSafe==2.0.1
mashumaro==2.5
minimal-snowplow-tracker==0.0.2
msgpack==1.0.2
msrest==0.6.21
networkx==2.6.2
oauthlib==3.1.1
oscrypto==1.2.1
packaging==20.9
parsedatetime==2.6
proto-plus==1.19.0
protobuf==3.17.3
psycopg2-binary==2.9.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pycryptodomex==3.10.1
PyJWT==2.1.0
pyOpenSSL==20.0.1
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.2
python-slugify==5.0.2
pytimeparse==1.1.8
pytz==2021.1
PyYAML==5.4.1
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
s3transfer==0.5.0
SecretStorage==3.3.1
six==1.16.0
snowflake-connector-python==2.5.1
sqlparse==0.3.1
text-unidecode==1.3
typing-extensions==3.10.0.0
urllib3==1.26.6
Werkzeug==2.0.1
zipp==3.5.0
docker/requirements/requirements.0.21.0rc1.txt (new file, 75 lines)
@@ -0,0 +1,75 @@
agate==1.6.1
asn1crypto==1.4.0
attrs==21.2.0
azure-common==1.1.27
azure-core==1.18.0
azure-storage-blob==12.8.1
Babel==2.9.1
boto3==1.18.44
botocore==1.21.44
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
chardet==4.0.0
charset-normalizer==2.0.6
colorama==0.4.4
cryptography==3.4.8
google-api-core==1.31.2
google-auth==1.35.0
google-cloud-bigquery==2.26.0
google-cloud-core==1.7.2
google-crc32c==1.1.2
google-resumable-media==2.0.2
googleapis-common-protos==1.53.0
grpcio==1.40.0
hologram==0.0.14
idna==3.2
importlib-metadata==4.8.1
isodate==0.6.0
jeepney==0.7.1
Jinja2==2.11.3
jmespath==0.10.0
json-rpc==1.13.0
jsonschema==3.1.1
keyring==21.8.0
leather==0.3.3
Logbook==1.5.3
MarkupSafe==2.0.1
mashumaro==2.5
minimal-snowplow-tracker==0.0.2
msgpack==1.0.2
msrest==0.6.21
networkx==2.6.3
oauthlib==3.1.1
oscrypto==1.2.1
packaging==20.9
parsedatetime==2.6
proto-plus==1.19.0
protobuf==3.18.0
psycopg2-binary==2.9.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pycryptodomex==3.10.1
PyJWT==2.1.0
pyOpenSSL==20.0.1
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.2
python-slugify==5.0.2
pytimeparse==1.1.8
pytz==2021.1
PyYAML==5.4.1
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
s3transfer==0.5.0
SecretStorage==3.3.1
six==1.16.0
snowflake-connector-python==2.5.1
sqlparse==0.4.2
text-unidecode==1.3
typing-extensions==3.10.0.2
urllib3==1.26.6
Werkzeug==2.0.1
zipp==3.5.0
docker/requirements/requirements.0.21.0rc2.txt (new file, 75 lines)
@@ -0,0 +1,75 @@
agate==1.6.1
asn1crypto==1.4.0
attrs==21.2.0
azure-common==1.1.27
azure-core==1.18.0
azure-storage-blob==12.9.0
Babel==2.9.1
boto3==1.18.48
botocore==1.21.48
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
chardet==4.0.0
charset-normalizer==2.0.6
colorama==0.4.4
cryptography==3.4.8
google-api-core==1.31.3
google-auth==1.35.0
google-cloud-bigquery==2.27.0
google-cloud-core==1.7.2
google-crc32c==1.2.0
google-resumable-media==2.0.3
googleapis-common-protos==1.53.0
grpcio==1.40.0
hologram==0.0.14
idna==3.2
importlib-metadata==4.8.1
isodate==0.6.0
jeepney==0.7.1
Jinja2==2.11.3
jmespath==0.10.0
json-rpc==1.13.0
jsonschema==3.1.1
keyring==21.8.0
leather==0.3.3
Logbook==1.5.3
MarkupSafe==2.0.1
mashumaro==2.5
minimal-snowplow-tracker==0.0.2
msgpack==1.0.2
msrest==0.6.21
networkx==2.6.3
oauthlib==3.1.1
oscrypto==1.2.1
packaging==20.9
parsedatetime==2.6
proto-plus==1.19.0
protobuf==3.17.3
psycopg2-binary==2.9.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pycryptodomex==3.10.4
PyJWT==2.1.0
pyOpenSSL==20.0.1
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.2
python-slugify==5.0.2
pytimeparse==1.1.8
pytz==2021.1
PyYAML==5.4.1
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
s3transfer==0.5.0
SecretStorage==3.3.1
six==1.16.0
snowflake-connector-python==2.5.1
sqlparse==0.4.2
text-unidecode==1.3
typing-extensions==3.10.0.2
urllib3==1.26.7
Werkzeug==2.0.1
zipp==3.5.0
@@ -1 +1 @@
-version = '0.21.0b1'
+version = '0.21.0'
@@ -83,6 +83,7 @@ class BigQueryCredentials(Credentials):
    # BigQuery allows an empty database / project, where it defers to the
    # environment for the project
    database: Optional[str]
+    execution_project: Optional[str] = None
    timeout_seconds: Optional[int] = 300
    location: Optional[str] = None
    priority: Optional[Priority] = None
@@ -130,6 +131,9 @@ class BigQueryCredentials(Credentials):
        if 'database' not in d:
            _, database = get_bigquery_defaults()
            d['database'] = database
+        # `execution_project` defaults to dataset/project
+        if 'execution_project' not in d:
+            d['execution_project'] = d['database']
        return d
@@ -252,12 +256,12 @@ class BigQueryConnectionManager(BaseConnectionManager):
                cls.get_impersonated_bigquery_credentials(profile_credentials)
        else:
            creds = cls.get_bigquery_credentials(profile_credentials)
-        database = profile_credentials.database
+        execution_project = profile_credentials.execution_project
        location = getattr(profile_credentials, 'location', None)

        info = client_info.ClientInfo(user_agent=f'dbt-{dbt_version}')
        return google.cloud.bigquery.Client(
-            database,
+            execution_project,
            creds,
            location=location,
            client_info=info,
@@ -20,7 +20,7 @@ except ImportError:


package_name = "dbt-bigquery"
-package_version = "0.21.0b1"
+package_version = "0.21.0"
description = """The bigquery adapter plugin for dbt (data build tool)"""

this_directory = os.path.abspath(os.path.dirname(__file__))
@@ -1 +1 @@
-version = '0.21.0b1'
+version = '0.21.0'
@@ -41,7 +41,7 @@ def _dbt_psycopg2_name():


package_name = "dbt-postgres"
-package_version = "0.21.0b1"
+package_version = "0.21.0"
description = """The postgres adapter plugin for dbt (data build tool)"""

this_directory = os.path.abspath(os.path.dirname(__file__))
@@ -1 +1 @@
-version = '0.21.0b1'
+version = '0.21.0'
@@ -20,7 +20,7 @@ except ImportError:


package_name = "dbt-redshift"
-package_version = "0.21.0b1"
+package_version = "0.21.0"
description = """The redshift adapter plugin for dbt (data build tool)"""

this_directory = os.path.abspath(os.path.dirname(__file__))
@@ -1 +1 @@
-version = '0.21.0b1'
+version = '0.21.0'
@@ -1,4 +1,5 @@
{% macro snowflake__load_csv_rows(model, agate_table) %}
+    {% set batch_size = get_batch_size() %}
    {% set cols_sql = get_seed_column_quoted_csv(model, agate_table.column_names) %}
    {% set bindings = [] %}
@@ -20,7 +20,7 @@ except ImportError:


package_name = "dbt-snowflake"
-package_version = "0.21.0b1"
+package_version = "0.21.0"
description = """The snowflake adapter plugin for dbt (data build tool)"""

this_directory = os.path.abspath(os.path.dirname(__file__))
setup.py (2 lines changed)
@@ -24,7 +24,7 @@ with open(os.path.join(this_directory, 'README.md')) as f:


package_name = "dbt"
-package_version = "0.21.0b1"
+package_version = "0.21.0"
description = """With dbt, data analysts and engineers can build analytics \
the way engineers build applications."""
test/integration/005_simple_seed_test/data-big/.gitignore (new file, vendored, 1 line)
@@ -0,0 +1 @@
*.csv
@@ -1,5 +1,5 @@
import os
-
+import csv
from test.integration.base import DBTIntegrationTest, use_profile


@@ -311,4 +311,43 @@ class TestSimpleSeedWithDots(DBTIntegrationTest):
    @use_profile('postgres')
    def test_postgres_simple_seed(self):
        results = self.run_dbt(["seed"])
        self.assertEqual(len(results), 1)
+
+
+class TestSimpleBigSeedBatched(DBTIntegrationTest):
+    @property
+    def schema(self):
+        return "simple_seed_005"
+
+    @property
+    def models(self):
+        return "models"
+
+    @property
+    def project_config(self):
+        return {
+            'config-version': 2,
+            "data-paths": ['data-big'],
+            'seeds': {
+                'quote_columns': False,
+            }
+        }
+
+    def test_big_batched_seed(self):
+        with open('data-big/my_seed.csv', 'w') as f:
+            writer = csv.writer(f)
+            writer.writerow(['id'])
+            for i in range(0, 20000):
+                writer.writerow([i])
+
+        results = self.run_dbt(["seed"])
+        self.assertEqual(len(results), 1)
+
+    @use_profile('postgres')
+    def test_postgres_big_batched_seed(self):
+        self.test_big_batched_seed()
+
+    @use_profile('snowflake')
+    def test_snowflake_big_batched_seed(self):
+        self.test_big_batched_seed()
@@ -0,0 +1 @@
select 1 as id
@@ -0,0 +1,10 @@
version: 2
models:
  - name: model
    description: |
      I'm testing the profile execution_project
    tests:
      - project_for_job_id:
          region: region-us
          project_id: "{{ project_id }}"
          unique_schema_id: "{{ unique_schema_id }}"
@@ -0,0 +1,7 @@
{% test project_for_job_id(model, region, unique_schema_id, project_id) %}
select 1
from `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
where date(creation_time) = current_date
and job_project = {{project_id}}
and destination_table.dataset_id = {{unique_schema_id}}
{% endtest %}
@@ -0,0 +1,23 @@
import os
from test.integration.base import DBTIntegrationTest, use_profile


class TestAlternateExecutionProjectBigQueryRun(DBTIntegrationTest):
    @property
    def schema(self):
        return "bigquery_test_022"

    @property
    def models(self):
        return "execution-project-models"

    @use_profile('bigquery')
    def test__bigquery_execute_project(self):
        results = self.run_dbt(['run', '--models', 'model'])
        self.assertEqual(len(results), 1)
        execution_project = os.environ['BIGQUERY_TEST_ALT_DATABASE']
        self.run_dbt(['test',
                      '--target', 'alternate',
                      '--vars', '{ project_id: %s, unique_schema_id: %s }'
                      % (execution_project, self.unique_schema())],
                     expect_pass=False)
@@ -1093,7 +1093,7 @@ class TestDocsGenerate(DBTIntegrationTest):
        )

        return {
-            'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
+            'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
            'dbt_version': dbt.version.__version__,
            'nodes': {
                'model.test.model': {
@@ -1680,7 +1680,7 @@ class TestDocsGenerate(DBTIntegrationTest):
        snapshot_path = self.dir('snapshot/snapshot_seed.sql')

        return {
-            'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
+            'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
            'dbt_version': dbt.version.__version__,
            'nodes': {
                'model.test.ephemeral_copy': {
@@ -2203,7 +2203,7 @@ class TestDocsGenerate(DBTIntegrationTest):
        my_schema_name = self.unique_schema()

        return {
-            'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
+            'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
            'dbt_version': dbt.version.__version__,
            'nodes': {
                'model.test.clustered': {
@@ -2695,7 +2695,7 @@ class TestDocsGenerate(DBTIntegrationTest):
        snapshot_path = self.dir('snapshot/snapshot_seed.sql')

        return {
-            'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
+            'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
            'dbt_version': dbt.version.__version__,
            'nodes': {
                'model.test.model': {
@@ -2959,7 +2959,7 @@ class TestDocsGenerate(DBTIntegrationTest):
        elif key == 'metadata':
            metadata = manifest['metadata']
            self.verify_metadata(
-                metadata, 'https://schemas.getdbt.com/dbt/manifest/v2.json')
+                metadata, 'https://schemas.getdbt.com/dbt/manifest/v3.json')
            assert 'project_id' in metadata and metadata[
                'project_id'] == '098f6bcd4621d373cade4e832627b4f6'
            assert 'send_anonymous_usage_stats' in metadata and metadata[
@@ -3100,7 +3100,7 @@ class TestDocsGenerate(DBTIntegrationTest):
        run_results = _read_json('./target/run_results.json')
        assert 'metadata' in run_results
        self.verify_metadata(
-            run_results['metadata'], 'https://schemas.getdbt.com/dbt/run-results/v2.json')
+            run_results['metadata'], 'https://schemas.getdbt.com/dbt/run-results/v3.json')
        self.assertIn('elapsed_time', run_results)
        self.assertGreater(run_results['elapsed_time'], 0)
        self.assertTrue(
@@ -248,7 +248,7 @@ class TestSourceFreshness(SuccessfulSourcesTest):
        assert isinstance(data['elapsed_time'], float)
        self.assertBetween(data['metadata']['generated_at'],
                           self.freshness_start_time)
-        assert data['metadata']['dbt_schema_version'] == 'https://schemas.getdbt.com/dbt/sources/v1.json'
+        assert data['metadata']['dbt_schema_version'] == 'https://schemas.getdbt.com/dbt/sources/v2.json'
        assert data['metadata']['dbt_version'] == dbt.version.__version__
        assert data['metadata']['invocation_id'] == dbt.tracking.active_user.invocation_id
        key = 'key'
@@ -272,7 +272,21 @@ class TestSourceFreshness(SuccessfulSourcesTest):
                'warn_after': {'count': 10, 'period': 'hour'},
                'error_after': {'count': 18, 'period': 'hour'},
            },
-            'adapter_response': {}
+            'adapter_response': {},
+            'thread_id': AnyStringWith('Thread-'),
+            'execution_time': AnyFloat(),
+            'timing': [
+                {
+                    'name': 'compile',
+                    'started_at': AnyStringWith(),
+                    'completed_at': AnyStringWith(),
+                },
+                {
+                    'name': 'execute',
+                    'started_at': AnyStringWith(),
+                    'completed_at': AnyStringWith(),
+                }
+            ]
        }
    ])
@@ -0,0 +1,13 @@
{# trigger infinite recursion if not handled #}

{% macro my_infinitely_recursive_macro() %}
    {{ return(adapter.dispatch('my_infinitely_recursive_macro')()) }}
{% endmacro %}

{% macro default__my_infinitely_recursive_macro() %}
    {% if unmet_condition %}
        {{ my_infinitely_recursive_macro() }}
    {% else %}
        {{ return('') }}
    {% endif %}
{% endmacro %}
@@ -1 +1,4 @@
select * from {{ ref('seed') }}
+
+-- establish a macro dependency that trips infinite recursion if not handled
+-- depends on: {{ my_infinitely_recursive_macro() }}
@@ -1,4 +1,5 @@
from test.integration.base import DBTIntegrationTest, FakeArgs, use_profile
+import yaml

from dbt.task.test import TestTask
from dbt.task.list import ListTask
@@ -20,12 +21,18 @@ class TestSelectionExpansion(DBTIntegrationTest):
            "test-paths": ["tests"]
        }

-    def list_tests_and_assert(self, include, exclude, expected_tests):
+    def list_tests_and_assert(self, include, exclude, expected_tests, greedy=False, selector_name=None):
        list_args = ['ls', '--resource-type', 'test']
        if include:
            list_args.extend(('--select', include))
        if exclude:
            list_args.extend(('--exclude', exclude))
+        if greedy:
+            list_args.append('--greedy')
+        if selector_name:
+            list_args.extend(('--selector', selector_name))

        listed = self.run_dbt(list_args)
        assert len(listed) == len(expected_tests)
@@ -34,7 +41,7 @@ class TestSelectionExpansion(DBTIntegrationTest):
        assert sorted(test_names) == sorted(expected_tests)

    def run_tests_and_assert(
-        self, include, exclude, expected_tests, schema=False, data=False
+        self, include, exclude, expected_tests, schema=False, data=False, greedy=False, selector_name=None
    ):
        results = self.run_dbt(['run'])
        self.assertEqual(len(results), 2)
@@ -48,6 +55,10 @@ class TestSelectionExpansion(DBTIntegrationTest):
            test_args.append('--schema')
        if data:
            test_args.append('--data')
+        if greedy:
+            test_args.append('--greedy')
+        if selector_name:
+            test_args.extend(('--selector', selector_name))

        results = self.run_dbt(test_args)
        tests_run = [r.node.name for r in results]
@@ -228,3 +239,80 @@ class TestSelectionExpansion(DBTIntegrationTest):

        self.list_tests_and_assert(select, exclude, expected)
        self.run_tests_and_assert(select, exclude, expected)
+
+    @use_profile('postgres')
+    def test__postgres__model_a_greedy(self):
+        select = 'model_a'
+        exclude = None
+        greedy = True
+        expected = [
+            'cf_a_b', 'cf_a_src', 'just_a',
+            'relationships_model_a_fun__fun__ref_model_b_',
+            'relationships_model_a_fun__fun__source_my_src_my_tbl_',
+            'unique_model_a_fun'
+        ]
+
+        self.list_tests_and_assert(select, exclude, expected, greedy)
+        self.run_tests_and_assert(select, exclude, expected, greedy=greedy)
+
+    @use_profile('postgres')
+    def test__postgres__model_a_greedy_exclude_unique_tests(self):
+        select = 'model_a'
+        exclude = 'test_name:unique'
+        greedy = True
+        expected = [
+            'cf_a_b', 'cf_a_src', 'just_a',
+            'relationships_model_a_fun__fun__ref_model_b_',
+            'relationships_model_a_fun__fun__source_my_src_my_tbl_',
+        ]
+
+        self.list_tests_and_assert(select, exclude, expected, greedy)
+        self.run_tests_and_assert(select, exclude, expected, greedy=greedy)
+
+
+class TestExpansionWithSelectors(TestSelectionExpansion):
+
+    @property
+    def selectors_config(self):
+        return yaml.safe_load('''
+            selectors:
+            - name: model_a_greedy_none
+              definition:
+                method: fqn
+                value: model_a
+            - name: model_a_greedy_false
+              definition:
+                method: fqn
+                value: model_a
+                greedy: false
+            - name: model_a_greedy_true
+              definition:
+                method: fqn
+                value: model_a
+                greedy: true
+        ''')
+
+    @use_profile('postgres')
+    def test__postgres__selector_model_a_not_greedy(self):
+        expected = ['just_a', 'unique_model_a_fun']
+
+        # greedy is not specified, so implicitly False
+        self.list_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_none')
+        self.run_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_none')
+
+        # greedy is explicitly False
+        self.list_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_false')
+        self.run_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_false')
+
+    @use_profile('postgres')
+    def test__postgres__selector_model_a_yes_greedy(self):
+        expected = [
+            'cf_a_b', 'cf_a_src', 'just_a',
+            'relationships_model_a_fun__fun__ref_model_b_',
+            'relationships_model_a_fun__fun__source_my_src_my_tbl_',
+            'unique_model_a_fun'
+        ]
+
+        # greedy is explicitly True
+        self.list_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_true')
+        self.run_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_true')
@@ -0,0 +1,3 @@
{{ config(materialized='table') }}

select * from {{ ref('countries') }}
@@ -0,0 +1,3 @@
{{ config(materialized='table') }}

select * from {{ ref('model_0') }}
@@ -0,0 +1,4 @@
{{ config(materialized='table') }}

select '1' as "num"

@@ -0,0 +1,18 @@
version: 2

models:
  - name: model_0
    columns:
      - name: iso3
        tests:
          - relationships:
              to: ref('model_1')
              field: iso3

  - name: model_1
    columns:
      - name: iso3
        tests:
          - relationships:
              to: ref('model_0')
              field: iso3
@@ -0,0 +1,3 @@
{{ config(materialized='table') }}

select * from {{ ref('model_1') }}
@@ -2,15 +2,11 @@ from test.integration.base import DBTIntegrationTest, use_profile
import yaml


-class TestBuild(DBTIntegrationTest):
+class TestBuildBase(DBTIntegrationTest):
    @property
    def schema(self):
        return "build_test_069"

-    @property
-    def models(self):
-        return "models"
-
    @property
    def project_config(self):
        return {
@@ -31,24 +27,55 @@ class TestBuild(DBTIntegrationTest):

        return self.run_dbt(args, expect_pass=expect_pass)


+class TestPassingBuild(TestBuildBase):
+    @property
+    def models(self):
+        return "models"
+
    @use_profile("postgres")
    def test__postgres_build_happy_path(self):
        self.build()


-class TestFailingBuild(TestBuild):
-    @property
-    def schema(self):
-        return "build_test_069"
-
+class TestFailingBuild(TestBuildBase):
    @property
    def models(self):
        return "models-failing"

    @use_profile("postgres")
    def test__postgres_build_happy_path(self):
        results = self.build(expect_pass=False)
-        self.assertEqual(len(results), 12)
+        self.assertEqual(len(results), 13)
        actual = [r.status for r in results]
-        expected = ['error']*1 + ['skipped']*4 + ['pass']*2 + ['success']*5
+        expected = ['error']*1 + ['skipped']*5 + ['pass']*2 + ['success']*5
        self.assertEqual(sorted(actual), sorted(expected))
+
+
+class TestFailingTestsBuild(TestBuildBase):
+    @property
+    def models(self):
+        return "tests-failing"
+
+    @use_profile("postgres")
+    def test__postgres_failing_test_skips_downstream(self):
+        results = self.build(expect_pass=False)
+        self.assertEqual(len(results), 13)
+        actual = [str(r.status) for r in results]
+        expected = ['fail'] + ['skipped']*6 + ['pass']*2 + ['success']*4
+        self.assertEqual(sorted(actual), sorted(expected))
+
+
+class TestCircularRelationshipTestsBuild(TestBuildBase):
+    @property
+    def models(self):
+        return "models-circular-relationship"
+
+    @use_profile("postgres")
+    def test__postgres_circular_relationship_test_success(self):
+        """ Ensure that tests that refer to each other's model don't create
+        a circular dependency. """
+        results = self.build()
+        actual = [r.status for r in results]
+        expected = ['success']*7 + ['pass']*2
+        self.assertEqual(sorted(actual), sorted(expected))
@@ -0,0 +1,3 @@
{{ config(materialized='table') }}

select * from {{ ref('countries') }}
@@ -0,0 +1,3 @@
{{ config(materialized='table') }}

select * from {{ ref('snap_0') }}
@@ -0,0 +1,3 @@
{{ config(materialized='table') }}

select * from {{ ref('snap_1') }}
@@ -0,0 +1,3 @@
{{ config(materialized='table') }}

select '1' as "num"
test/integration/069_build_test/tests-failing/test.yml (new file, 18 lines)
@@ -0,0 +1,18 @@
version: 2

models:
  - name: model_0
    columns:
      - name: iso3
        tests:
          - unique
          - not_null
      - name: historical_iso_numeric
        tests:
          - not_null
  - name: model_2
    columns:
      - name: iso3
        tests:
          - unique
          - not_null
@@ -0,0 +1,3 @@
{% macro config() %}

{% endmacro %}
@@ -0,0 +1 @@
select 1 as id
@@ -0,0 +1 @@
version: 2
@@ -0,0 +1,3 @@
{% macro ref(model_name) %}

{% endmacro %}
@@ -0,0 +1 @@
select 1 as id
@@ -0,0 +1 @@
version: 2
@@ -0,0 +1,3 @@
{% macro source(source_name, table_name) %}

{% endmacro %}
@@ -0,0 +1 @@
select 1 as id
@@ -0,0 +1 @@
version: 2
@@ -14,17 +14,18 @@ def get_manifest():
        return None


-class TestAllExperimentalParser(DBTIntegrationTest):
+class TestBasicExperimentalParser(DBTIntegrationTest):
    @property
    def schema(self):
-        return "072_experimental_parser"
+        return "072_basic"

    @property
    def models(self):
-        return "models"
+        return "basic"

    # test that the experimental parser extracts some basic ref, source, and config calls.
    @use_profile('postgres')
-    def test_postgres_experimental_parser(self):
+    def test_postgres_experimental_parser_basic(self):
        results = self.run_dbt(['--use-experimental-parser', 'parse'])
        manifest = get_manifest()
        node = manifest.nodes['model.test.model_a']
@@ -32,4 +33,93 @@ class TestAllExperimentalParser(DBTIntegrationTest):
        self.assertEqual(node.sources, [['my_src', 'my_tbl']])
        self.assertEqual(node.config._extra, {'x': True})
        self.assertEqual(node.config.tags, ['hello', 'world'])
+
+
+class TestRefOverrideExperimentalParser(DBTIntegrationTest):
+    @property
+    def schema(self):
+        return "072_ref_macro"
+
+    @property
+    def models(self):
+        return "ref_macro/models"
+
+    @property
+    def project_config(self):
+        return {
+            'config-version': 2,
+            'macro-paths': ['source_macro', 'macros'],
+        }
+
+    # test that the experimental parser doesn't run if the ref built-in is overridden with a macro
+    @use_profile('postgres')
+    def test_postgres_experimental_parser_ref_override(self):
+        _, log_output = self.run_dbt_and_capture(['--debug', '--use-experimental-parser', 'parse'])
+
+        print(log_output)
+
+        # successful static parsing
+        self.assertFalse("1699: " in log_output)
+        # ran static parser but failed
+        self.assertFalse("1602: " in log_output)
+        # didn't run static parser because dbt detected a built-in macro override
+        self.assertTrue("1601: " in log_output)
+
+
+class TestSourceOverrideExperimentalParser(DBTIntegrationTest):
+    @property
+    def schema(self):
+        return "072_source_macro"
+
+    @property
+    def models(self):
+        return "source_macro/models"
+
+    @property
+    def project_config(self):
+        return {
+            'config-version': 2,
+            'macro-paths': ['source_macro', 'macros'],
+        }
+
+    # test that the experimental parser doesn't run if the source built-in is overridden with a macro
+    @use_profile('postgres')
+    def test_postgres_experimental_parser_source_override(self):
+        _, log_output = self.run_dbt_and_capture(['--debug', '--use-experimental-parser', 'parse'])
+
+        print(log_output)
+
+        # successful static parsing
+        self.assertFalse("1699: " in log_output)
+        # ran static parser but failed
+        self.assertFalse("1602: " in log_output)
+        # didn't run static parser because dbt detected a built-in macro override
+        self.assertTrue("1601: " in log_output)
+
+
+class TestConfigOverrideExperimentalParser(DBTIntegrationTest):
+    @property
+    def schema(self):
+        return "072_config_macro"
+
+    @property
+    def models(self):
+        return "config_macro/models"
+
+    @property
+    def project_config(self):
+        return {
+            'config-version': 2,
+            'macro-paths': ['config_macro', 'macros'],
+        }
+
+    # test that the experimental parser doesn't run if the config built-in is overridden with a macro
+    @use_profile('postgres')
+    def test_postgres_experimental_parser_config_override(self):
+        _, log_output = self.run_dbt_and_capture(['--debug', '--use-experimental-parser', 'parse'])
+
+        print(log_output)
+
+        # successful static parsing
+        self.assertFalse("1699: " in log_output)
+        # ran static parser but failed
+        self.assertFalse("1602: " in log_output)
+        # didn't run static parser because dbt detected a built-in macro override
+        self.assertTrue("1601: " in log_output)
@@ -0,0 +1,2 @@
fun,_loaded_at
1,2021-04-19 01:00:00
@@ -0,0 +1 @@
SELECT 1 AS fun
@@ -0,0 +1 @@
SELECT 1 AS fun
@@ -0,0 +1,35 @@
version: 2

sources:
  - name: src
    schema: "{{ target.schema }}"
    freshness:
      warn_after: {count: 24, period: hour}
    loaded_at_field: _loaded_at
    tables:
      - name: source_a
        identifier: model_c
        columns:
          - name: fun
          - name: _loaded_at
  - name: src
    schema: "{{ target.schema }}"
    freshness:
      warn_after: {count: 24, period: hour}
    loaded_at_field: _loaded_at
    tables:
      - name: source_b
        identifier: model_c
        columns:
          - name: fun
          - name: _loaded_at

models:
  - name: model_a
    columns:
      - name: fun
        tags: [marketing]
  - name: model_b
    columns:
      - name: fun
        tags: [finance]
@@ -0,0 +1,77 @@
import yaml
from test.integration.base import DBTIntegrationTest, use_profile


class TestDefaultSelectors(DBTIntegrationTest):
    '''Test the selectors default argument'''
    @property
    def schema(self):
        return 'test_default_selectors_101'

    @property
    def models(self):
        return 'models'

    @property
    def project_config(self):
        return {
            'config-version': 2,
            'source-paths': ['models'],
            'data-paths': ['data'],
            'seeds': {
                'quote_columns': False,
            },
        }

    @property
    def selectors_config(self):
        return yaml.safe_load('''
            selectors:
            - name: default_selector
              description: test default selector
              definition:
                union:
                  - method: source
                    value: "test.src.source_a"
                  - method: fqn
                    value: "model_a"
              default: true
        ''')

    def list_and_assert(self, expected):
        '''list resources in the project with the selectors default'''
        listed = self.run_dbt(['ls', '--resource-type', 'model'])

        assert len(listed) == len(expected)

    def compile_and_assert(self, expected):
        '''Compile project with the selectors default'''
        compiled = self.run_dbt(['compile'])

        assert len(compiled.results) == len(expected)
        assert compiled.results[0].node.name == expected[0]

    def run_and_assert(self, expected):
        run = self.run_dbt(['run'])

        assert len(run.results) == len(expected)
        assert run.results[0].node.name == expected[0]

    def freshness_and_assert(self, expected):
        self.run_dbt(['seed', '-s', 'test.model_c'])
        freshness = self.run_dbt(['source', 'freshness'])

        assert len(freshness.results) == len(expected)
        assert freshness.results[0].node.name == expected[0]

    @use_profile('postgres')
    def test__postgres__model_a_only(self):
        expected_model = ['model_a']

        self.list_and_assert(expected_model)
        self.compile_and_assert(expected_model)

    def test__postgres__source_a_only(self):
        expected_source = ['source_a']

        self.freshness_and_assert(expected_source)
@@ -263,6 +263,15 @@ class DBTIntegrationTest(unittest.TestCase):
                    'keyfile_json': credentials,
                    'schema': self.unique_schema(),
                },
+                'alternate': {
+                    'type': 'bigquery',
+                    'method': 'service-account-json',
+                    'threads': 1,
+                    'project': project_id,
+                    'keyfile_json': credentials,
+                    'schema': self.unique_schema(),
+                    'execution_project': self.alternative_database,
+                },
            },
            'target': 'default2'
        }
@@ -26,6 +26,7 @@ snapshot_data = '''
|
||||
{% endsnapshot %}
|
||||
'''
|
||||
|
||||
|
||||
@pytest.mark.supported('postgres')
|
||||
def test_rpc_build_threads(
|
||||
project_root, profiles_root, dbt_profile, unique_schema
|
||||
@@ -112,25 +113,84 @@ def test_rpc_build_state(

        get_write_manifest(querier, os.path.join(state_dir, 'manifest.json'))

        project.models['my_model.sql'] = 'select * from {{ ref("data" )}} where id = 2'
        project.models['my_model.sql'] = \
            'select * from {{ ref("data" )}} where id = 2'
        project.write_models(project_root, remove=True)
        querier.sighup()
        assert querier.wait_for_status('ready') is True

        results = querier.async_wait_for_result(
            querier.build(state='./state', models=['state:modified'])
            querier.build(state='./state', select=['state:modified'])
        )
        assert len(results['results']) == 3

        get_write_manifest(querier, os.path.join(state_dir, 'manifest.json'))

        results = querier.async_wait_for_result(
            querier.build(state='./state', models=['state:modified']),
            querier.build(state='./state', select=['state:modified']),
        )
        assert len(results['results']) == 0

        # a better test of defer would require multiple targets
        results = querier.async_wait_for_result(
            querier.build(state='./state', models=['state:modified'], defer=True)
            querier.build(
                state='./state',
                select=['state:modified'],
                defer=True
            )
        )
        assert len(results['results']) == 0


@pytest.mark.supported('postgres')
def test_rpc_build_selectors(
    project_root, profiles_root, dbt_profile, unique_schema
):
    schema_yaml = {
        'version': 2,
        'models': [{
            'name': 'my_model',
            'columns': [
                {
                    'name': 'id',
                    'tests': ['not_null', 'unique'],
                },
            ],
        }],
    }
    project = ProjectDefinition(
        name='test',
        project_data={
            'seeds': {'+quote_columns': False},
            'models': {'test': {'my_model': {'+tags': 'example_tag'}}}
        },
        models={
            'my_model.sql': 'select * from {{ ref("data") }}',
            'schema.yml': yaml.safe_dump(schema_yaml)
        },
        seeds={'data.csv': 'id,message\n1,hello\n2,goodbye'},
        snapshots={'my_snapshots.sql': snapshot_data},
    )
    querier_ctx = get_querier(
        project_def=project,
        project_dir=project_root,
        profiles_dir=profiles_root,
        schema=unique_schema,
        test_kwargs={},
    )
    with querier_ctx as querier:
        # test simple resource_types param
        results = querier.async_wait_for_result(
            querier.build(resource_types=['seed'])
        )
        assert len(results['results']) == 1
        assert results['results'][0]['node']['resource_type'] == 'seed'

        # test simple select param (should select tagged model and its tests)
        results = querier.async_wait_for_result(
            querier.build(select=['tag:example_tag'])
        )
        assert len(results['results']) == 3
        assert sorted(
            [result['node']['resource_type'] for result in results['results']]
        ) == ['model', 'test', 'test']

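As a rough illustration of what these tests drive, a call like `querier.build(select=['tag:example_tag'])` presumably ends up as a JSON-RPC request whose params mirror the keyword arguments. The envelope below is an assumption about the wire format, not something shown in this diff:

```python
import json

# Hypothetical wire shape; the 'params' keys echo the Querier.build
# plumbing shown further down in this diff.
request = {
    'jsonrpc': '2.0',
    'id': 1,
    'method': 'build',
    'params': {'select': ['tag:example_tag']},
}
print(json.dumps(request, indent=2))
```
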
@@ -105,7 +105,7 @@ def test_rpc_test_state(
            querier.test(state='./state', models=['state:modified']),
        )
        assert len(results['results']) == 0

        # a better test of defer would require multiple targets
        results = querier.async_wait_for_result(
            querier.run(state='./state', models=['state:modified'], defer=True)

@@ -260,10 +260,10 @@ class Querier:
    def run_operation(
        self,
        macro: str,
        args: Optional[Dict[str, Any]],
        args: Optional[Dict[str, Any]] = None,
        request_id: int = 1,
    ):
        params = {'macro': macro}
        params: Dict[str, Any] = {'macro': macro}
        if args is not None:
            params['args'] = args
        return self.request(

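The signature fix above makes `args` genuinely optional. A minimal, self-contained sketch of the same params assembly (the macro name is illustrative):

```python
from typing import Any, Dict, Optional

def run_operation_params(macro: str,
                         args: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # With a None default, callers may omit args entirely; the explicit
    # None check keeps 'args' out of the request params when unused.
    params: Dict[str, Any] = {'macro': macro}
    if args is not None:
        params['args'] = args
    return params

assert run_operation_params('my_macro') == {'macro': 'my_macro'}
assert run_operation_params('my_macro', {'k': 1}) == {
    'macro': 'my_macro', 'args': {'k': 1},
}
```
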
@@ -277,7 +277,7 @@ class Querier:
        show: bool = None,
        threads: Optional[int] = None,
        request_id: int = 1,
        state: Optional[bool] = None,
        state: Optional[str] = None,
    ):
        params = {}
        if select is not None:

@@ -300,7 +300,7 @@ class Querier:
        exclude: Optional[Union[str, List[str]]] = None,
        threads: Optional[int] = None,
        request_id: int = 1,
        state: Optional[bool] = None,
        state: Optional[str] = None,
    ):
        params = {}
        if select is not None:

@@ -337,7 +337,7 @@ class Querier:
        exclude: Optional[Union[str, List[str]]] = None,
        threads: Optional[int] = None,
        request_id: int = 1,
        state: Optional[bool] = None,
        state: Optional[str] = None,
    ):
        params = {}
        if select is not None:

@@ -361,7 +361,7 @@ class Querier:
        schema: bool = None,
        request_id: int = 1,
        defer: Optional[bool] = None,
        state: Optional[bool] = None,
        state: Optional[str] = None,
    ):
        params = {}
        if models is not None:

@@ -384,18 +384,21 @@ class Querier:

    def build(
        self,
        models: Optional[Union[str, List[str]]] = None,
        select: Optional[Union[str, List[str]]] = None,
        exclude: Optional[Union[str, List[str]]] = None,
        resource_types: Optional[Union[str, List[str]]] = None,
        threads: Optional[int] = None,
        request_id: int = 1,
        defer: Optional[bool] = None,
        state: Optional[bool] = None,
        state: Optional[str] = None,
    ):
        params = {}
        if models is not None:
            params['models'] = models
        if select is not None:
            params['select'] = select
        if exclude is not None:
            params['exclude'] = exclude
        if resource_types is not None:
            params['resource_types'] = resource_types
        if threads is not None:
            params['threads'] = threads
        if defer is not None:

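The recurring `Optional[bool]` to `Optional[str]` fix reflects that `state` is a path to a directory of saved artifacts, not a flag. A sketch of the params a deferred, state-selected build would carry (values illustrative, logic copied from the method above):

```python
from typing import Any, Dict

select = ['state:modified']  # rerun only nodes changed vs. saved state
defer = True                 # resolve unselected refs from the old manifest
state = './state'            # directory holding a previously written manifest.json

params: Dict[str, Any] = {}
if select is not None:
    params['select'] = select
if defer is not None:
    params['defer'] = defer
if state is not None:
    params['state'] = state

assert params == {'select': ['state:modified'], 'defer': True, 'state': './state'}
```
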
@@ -136,6 +136,7 @@ class DocumentationParserTest(unittest.TestCase):
            relative_path=relative_path,
            project_root=self.root_path,
            searched_path=self.subdir_path,
            modification_time=0.0,
        )
        source_file = SourceFile(path=match, checksum=FileHash.empty())
        source_file.contents = contents

@@ -122,7 +122,7 @@ class GraphTest(unittest.TestCase):
        # Create the source file patcher
        self.load_source_file_patcher = patch('dbt.parser.read_files.load_source_file')
        self.mock_source_file = self.load_source_file_patcher.start()
        def mock_load_source_file(path, parse_file_type, project_name):
        def mock_load_source_file(path, parse_file_type, project_name, saved_files):
            for sf in self.mock_models:
                if sf.path == path:
                    source_file = sf

@@ -137,6 +137,7 @@ class GraphTest(unittest.TestCase):
            searched_path='.',
            project_root=os.path.normcase(os.getcwd()),
            relative_path='dbt_project.yml',
            modification_time=0.0,
        )
        return path

@@ -165,6 +166,7 @@ class GraphTest(unittest.TestCase):
            searched_path='models',
            project_root=os.path.normcase(os.getcwd()),
            relative_path='{}.sql'.format(k),
            modification_time=0.0,
        )
        # FileHash can't be empty or 'search_key' will be None
        source_file = SourceFile(path=path, checksum=FileHash.from_contents('abc'))

@@ -120,7 +120,7 @@ def test_run_specs(include, exclude, expected):
    manifest = _get_manifest(graph)
    selector = graph_selector.NodeSelector(graph, manifest)
    spec = graph_cli.parse_difference(include, exclude)
    selected = selector.select_nodes(spec)
    selected, _ = selector.select_nodes(spec)

    assert selected == expected

@@ -115,7 +115,7 @@ def test_parse_simple():
        childrens_parents=False,
        children_depth=None,
        parents_depth=None,
    ) == parsed['tagged_foo']
    ) == parsed['tagged_foo']["definition"]


def test_parse_simple_childrens_parents():

@@ -141,7 +141,7 @@ def test_parse_simple_childrens_parents():
        childrens_parents=True,
        children_depth=None,
        parents_depth=None,
    ) == parsed['tagged_foo']
    ) == parsed['tagged_foo']["definition"]


def test_parse_simple_arguments_with_modifiers():

@@ -169,7 +169,7 @@ def test_parse_simple_arguments_with_modifiers():
        childrens_parents=False,
        children_depth=2,
        parents_depth=None,
    ) == parsed['configured_view']
    ) == parsed['configured_view']["definition"]


def test_parse_union():

@@ -188,7 +188,7 @@ def test_parse_union():
    assert Union(
        Criteria(method=MethodName.Config, value='view', method_arguments=['materialized']),
        Criteria(method=MethodName.Tag, value='foo', method_arguments=[])
    ) == parsed['views-or-foos']
    ) == parsed['views-or-foos']["definition"]


def test_parse_intersection():

@@ -208,7 +208,7 @@ def test_parse_intersection():
    assert Intersection(
        Criteria(method=MethodName.Config, value='view', method_arguments=['materialized']),
        Criteria(method=MethodName.Tag, value='foo', method_arguments=[]),
    ) == parsed['views-and-foos']
    ) == parsed['views-and-foos']["definition"]


def test_parse_union_excluding():

@@ -232,7 +232,7 @@ def test_parse_union_excluding():
            Criteria(method=MethodName.Tag, value='foo', method_arguments=[])
        ),
        Criteria(method=MethodName.Tag, value='bar', method_arguments=[]),
    ) == parsed['views-or-foos-not-bars']
    ) == parsed['views-or-foos-not-bars']["definition"]


def test_parse_yaml_complex():

@@ -272,7 +272,7 @@ def test_parse_yaml_complex():
    assert Union(
        Criteria(method=MethodName.Tag, value='nightly'),
        Criteria(method=MethodName.Tag, value='weeknights_only'),
    ) == parsed['weeknights']
    ) == parsed['weeknights']["definition"]

    assert Union(
        Intersection(

@@ -300,4 +300,4 @@ def test_parse_yaml_complex():
                ),
            ),
        ),
    ) == parsed['test_name']
    ) == parsed['test_name']["definition"]

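Taken together, these assertion updates indicate that parsed YAML selectors are now stored as dicts keyed by selector name, with the parsed spec under a "definition" key. A plain-dict sketch of the shape the tests index into (any sibling keys are an assumption):

```python
# The string stands in for the parsed Union/Intersection/Criteria spec
# asserted above; other metadata keys, if any, are not shown in this diff.
parsed = {
    'views-or-foos': {
        'definition': '<parsed Union(...) spec>',
    },
}
spec = parsed['views-or-foos']['definition']
```
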
@@ -273,7 +273,7 @@ class ManifestTest(unittest.TestCase):
                'child_map': {},
                'metadata': {
                    'generated_at': '2018-02-14T09:15:13Z',
                    'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
                    'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
                    'dbt_version': dbt.version.__version__,
                    'env': {ENV_KEY_NAME: 'value'},
                    # invocation_id is None, so it will not be present

@@ -419,7 +419,7 @@ class ManifestTest(unittest.TestCase):
                'docs': {},
                'metadata': {
                    'generated_at': '2018-02-14T09:15:13Z',
                    'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
                    'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
                    'dbt_version': dbt.version.__version__,
                    'project_id': '098f6bcd4621d373cade4e832627b4f6',
                    'user_id': 'cfc9500f-dc7f-4c83-9ea7-2c581c1b38cf',

@@ -662,7 +662,7 @@ class MixedManifestTest(unittest.TestCase):
                'child_map': {},
                'metadata': {
                    'generated_at': '2018-02-14T09:15:13Z',
                    'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v2.json',
                    'dbt_schema_version': 'https://schemas.getdbt.com/dbt/manifest/v3.json',
                    'dbt_version': dbt.version.__version__,
                    'invocation_id': '01234567-0123-0123-0123-0123456789ab',
                    'env': {ENV_KEY_NAME: 'value'},

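These expectations track the artifact schema bump from manifest v2 to v3. A quick, hedged way to check which schema a generated manifest declares (the path assumes dbt's default `target/` directory):

```python
import json

with open('target/manifest.json') as f:
    metadata = json.load(f)['metadata']

# For a 0.21.x run this should print the v3 URL asserted above.
print(metadata['dbt_schema_version'])
```
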
@@ -32,7 +32,7 @@ from dbt.contracts.graph.parsed import (
    UnpatchedSourceDefinition
)
from dbt.contracts.graph.unparsed import Docs

import itertools
from .utils import config_from_parts_or_dicts, normalize, generate_name_macros, MockNode, MockSource, MockDocumentation

@@ -146,6 +146,7 @@ class BaseParserTest(unittest.TestCase):
            searched_path=searched,
            relative_path=filename,
            project_root=root_dir,
            modification_time=0.0,
        )
        sf_cls = SchemaSourceFile if filename.endswith('.yml') else SourceFile
        source_file = sf_cls(

@@ -521,6 +522,57 @@ class ModelParserTest(BaseParserTest):
            self.parser.parse_file(block)


class StaticModelParserTest(BaseParserTest):
    def setUp(self):
        super().setUp()
        self.parser = ModelParser(
            project=self.snowplow_project_config,
            manifest=self.manifest,
            root_project=self.root_project_config,
        )

    def file_block_for(self, data, filename):
        return super().file_block_for(data, filename, 'models')

    # Tests that the ModelParser can detect when the ref built-in is
    # overridden by a macro definition. This does not test that the static
    # parser is skipped in that case; that is covered in integration test
    # suite 072.
    def test_built_in_macro_override_detection(self):
        macro_unique_id = 'macro.root.ref'
        self.parser.manifest.macros[macro_unique_id] = ParsedMacro(
            name='ref',
            resource_type=NodeType.Macro,
            unique_id=macro_unique_id,
            package_name='root',
            original_file_path=normalize('macros/macro.sql'),
            root_path=get_abs_os_path('./dbt_modules/root'),
            path=normalize('macros/macro.sql'),
            macro_sql='{% macro ref(model_name) %}{% set x = raise("boom") %}{% endmacro %}',
        )

        raw_sql = '{{ config(materialized="table") }}select 1 as id'
        block = self.file_block_for(raw_sql, 'nested/model_1.sql')
        node = ParsedModelNode(
            alias='model_1',
            name='model_1',
            database='test',
            schema='analytics',
            resource_type=NodeType.Model,
            unique_id='model.snowplow.model_1',
            fqn=['snowplow', 'nested', 'model_1'],
            package_name='snowplow',
            original_file_path=normalize('models/nested/model_1.sql'),
            root_path=get_abs_os_path('./dbt_modules/snowplow'),
            config=NodeConfig(materialized='table'),
            path=normalize('nested/model_1.sql'),
            raw_sql=raw_sql,
            checksum=block.file.checksum,
            unrendered_config={'materialized': 'table'},
        )

        assert self.parser._has_banned_macro(node)


class SnapshotParserTest(BaseParserTest):
    def setUp(self):
        super().setUp()

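A rough sketch of the check this test exercises (assumed shape, not the actual `_has_banned_macro` implementation): the static parser has to stand down when a project overrides the `ref`, `source`, or `config` builtins, because it can no longer trust their semantics.

```python
from types import SimpleNamespace

BANNED = {'ref', 'source', 'config'}

def has_banned_macro(manifest_macros, packages) -> bool:
    # Flag any macro that shadows a builtin within the given packages.
    return any(
        m.name in BANNED and m.package_name in packages
        for m in manifest_macros.values()
    )

macros = {'macro.root.ref': SimpleNamespace(name='ref', package_name='root')}
assert has_banned_macro(macros, {'root'})
```
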
@@ -1,5 +1,6 @@
import unittest
from unittest import mock
import time

import dbt.exceptions
from dbt.parser.partial import PartialParsing

@@ -17,14 +18,14 @@ class TestPartialParsing(unittest.TestCase):
    project_name = 'my_test'
    project_root = '/users/root'
    model_file = SourceFile(
        path=FilePath(project_root=project_root, searched_path='models', relative_path='my_model.sql'),
        path=FilePath(project_root=project_root, searched_path='models', relative_path='my_model.sql', modification_time=time.time()),
        checksum=FileHash.from_contents('abcdef'),
        project_name=project_name,
        parse_file_type=ParseFileType.Model,
        nodes=['model.my_test.my_model'],
    )
    schema_file = SchemaSourceFile(
        path=FilePath(project_root=project_root, searched_path='models', relative_path='schema.yml'),
        path=FilePath(project_root=project_root, searched_path='models', relative_path='schema.yml', modification_time=time.time()),
        checksum=FileHash.from_contents('ghijkl'),
        project_name=project_name,
        parse_file_type=ParseFileType.Schema,

Some files were not shown because too many files have changed in this diff.