Mirror of https://github.com/dbt-labs/dbt-core (synced 2025-12-20 04:21:26 +00:00)

Compare commits: regression...db-setup-w (1 commit)

| Author | SHA1 | Date |
|---|---|---|
|  | 03dfb11d2b |  |
@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 0.21.0b2
+current_version = 0.21.0b1
 parse = (?P<major>\d+)
     \.(?P<minor>\d+)
     \.(?P<patch>\d+)
.github/workflows/performance.yml (vendored, 6 changed lines)

@@ -164,13 +164,11 @@ jobs:
       name: runner
     - name: change permissions
       run: chmod +x ./runner
-    - name: make results directory
-      run: mkdir ./final-output/
     - name: run calculation
-      run: ./runner calculate -r ./ -o ./final-output/
+      run: ./runner calculate -r ./
     # always attempt to upload the results even if there were regressions found
     - uses: actions/upload-artifact@v2
       if: ${{ always() }}
       with:
         name: final-calculations
-        path: ./final-output/*
+        path: ./final_calculations.json
CHANGELOG.md (23 changed lines)

@@ -1,22 +1,12 @@
 ## dbt 0.21.0 (Release TBD)
 
-## dbt 0.21.0b2 (August 19, 2021)
-
-
 ### Features
 
-- Experimental parser now detects macro overrides of ref, source, and config builtins. ([#3581](https://github.com/dbt-labs/dbt/issues/3866), [#3582](https://github.com/dbt-labs/dbt/pull/3877))
 - Add connect_timeout profile configuration for Postgres and Redshift adapters. ([#3581](https://github.com/dbt-labs/dbt/issues/3581), [#3582](https://github.com/dbt-labs/dbt/pull/3582))
 - Enhance BigQuery copy materialization ([#3570](https://github.com/dbt-labs/dbt/issues/3570), [#3606](https://github.com/dbt-labs/dbt/pull/3606)):
   - to simplify config (default usage of `copy_materialization='table'` if is is not found in global or local config)
   - to let copy several source tables into single target table at a time. ([Google doc reference](https://cloud.google.com/bigquery/docs/managing-tables#copying_multiple_source_tables))
 - Customize ls task JSON output by adding new flag `--output-keys` ([#3778](https://github.com/dbt-labs/dbt/issues/3778), [#3395](https://github.com/dbt-labs/dbt/issues/3395))
-- Add support for execution project on BigQuery through profile configuration ([#3707](https://github.com/dbt-labs/dbt/issues/3707), [#3708](https://github.com/dbt-labs/dbt/issues/3708))
-- Skip downstream nodes during the `build` task when a test fails. ([#3597](https://github.com/dbt-labs/dbt/issues/3597), [#3792](https://github.com/dbt-labs/dbt/pull/3792))
-- Added default field in the `selectors.yml` to allow user to define default selector ([#3448](https://github.com/dbt-labs/dbt/issues/3448), [#3875](https://github.com/dbt-labs/dbt/issues/3875), [#3892](https://github.com/dbt-labs/dbt/issues/3892))
-- Added timing and thread information to sources.json artifact ([#3804](https://github.com/dbt-labs/dbt/issues/3804), [#3894](https://github.com/dbt-labs/dbt/pull/3894))
-- Update cli and rpc flags for the `build` task to align with other commands (`--resource-type`, `--store-failures`) ([#3596](https://github.com/dbt-labs/dbt/issues/3596), [#3884](https://github.com/dbt-labs/dbt/pull/3884))
-- Log tests that are not indirectly selected. Add `--greedy` flag to `test`, `list`, `build` and `greedy` property in yaml selectors ([#3723](https://github.com/dbt-labs/dbt/pull/3723), [#3833](https://github.com/dbt-labs/dbt/pull/3833))
 
 ### Fixes
 

@@ -25,18 +15,19 @@
 - Fix issue when running the `deps` task after the `list` task in the RPC server ([#3846](https://github.com/dbt-labs/dbt/issues/3846), [#3848](https://github.com/dbt-labs/dbt/pull/3848), [#3850](https://github.com/dbt-labs/dbt/pull/3850))
 - Fix bug with initializing a dataclass that inherits from `typing.Protocol`, specifically for `dbt.config.profile.Profile` ([#3843](https://github.com/dbt-labs/dbt/issues/3843), [#3855](https://github.com/dbt-labs/dbt/pull/3855))
 - Introduce a macro, `get_where_subquery`, for tests that use `where` config. Alias filtering subquery as `dbt_subquery` instead of resource identifier ([#3857](https://github.com/dbt-labs/dbt/issues/3857), [#3859](https://github.com/dbt-labs/dbt/issues/3859))
-- Use group by column_name in accepted_values test for compatibility with most database engines ([#3905](https://github.com/dbt-labs/dbt/issues/3905), [#3906](https://github.com/dbt-labs/dbt/pull/3906))
+### Fixes
 
 - Separated table vs view configuration for BigQuery since some configuration is not possible to set for tables vs views. ([#3682](https://github.com/dbt-labs/dbt/issues/3682), [#3691](https://github.com/dbt-labs/dbt/issues/3682))
 
 ### Under the hood
 
 - Use GitHub Actions for CI ([#3688](https://github.com/dbt-labs/dbt/issues/3688), [#3669](https://github.com/dbt-labs/dbt/pull/3669))
 - Better dbt hub registry packages version logging that prompts the user for upgrades to relevant packages ([#3560](https://github.com/dbt-labs/dbt/issues/3560), [#3763](https://github.com/dbt-labs/dbt/issues/3763), [#3759](https://github.com/dbt-labs/dbt/pull/3759))
-- Allow the default seed macro's SQL parameter, `%s`, to be replaced by dispatching a new macro, `get_binding_char()`. This enables adapters with parameter marker characters such as `?` to not have to override `basic_load_csv_rows`. ([#3622](https://github.com/dbt-labs/dbt/issues/3622), [#3623](https://github.com/dbt-labs/dbt/pull/3623))
+- Allow the default seed macro's SQL parameter, `%s`, to be replaced by dispatching a new macro, `get_binding_char()`. This enables adapters with parameter marker characters such as `?` to not have to override `basic_load_csv_rows`. ([#3622](https://github.com/fishtown-analytics/dbt/issues/3622), [#3623](https://github.com/fishtown-analytics/dbt/pull/3623))
 - Alert users on package rename ([hub.getdbt.com#180](https://github.com/dbt-labs/hub.getdbt.com/issues/810), [#3825](https://github.com/dbt-labs/dbt/pull/3825))
 - Add `adapter_unique_id` to invocation context in anonymous usage tracking, to better understand dbt adoption ([#3713](https://github.com/dbt-labs/dbt/issues/3713), [#3796](https://github.com/dbt-labs/dbt/issues/3796))
 - Specify `macro_namespace = 'dbt'` for all dispatched macros in the global project, making it possible to dispatch to macro implementations defined in packages. Dispatch `generate_schema_name` and `generate_alias_name` ([#3456](https://github.com/dbt-labs/dbt/issues/3456), [#3851](https://github.com/dbt-labs/dbt/issues/3851))
-- Retry transient GitHub failures during download ([#3729](https://github.com/dbt-labs/dbt/pull/3729))
 
 Contributors:
 

@@ -45,13 +36,10 @@ Contributors:
 - [@dbrtly](https://github.com/dbrtly) ([#3834](https://github.com/dbt-labs/dbt/pull/3834))
 - [@swanderz](https://github.com/swanderz) [#3623](https://github.com/dbt-labs/dbt/pull/3623)
 - [@JasonGluck](https://github.com/JasonGluck) ([#3582](https://github.com/dbt-labs/dbt/pull/3582))
-- [@joellabes](https://github.com/joellabes) ([#3669](https://github.com/dbt-labs/dbt/pull/3669), [#3833](https://github.com/dbt-labs/dbt/pull/3833))
+- [@joellabes](https://github.com/joellabes) ([#3669](https://github.com/dbt-labs/dbt/pull/3669))
 - [@juma-adoreme](https://github.com/juma-adoreme) ([#3838](https://github.com/dbt-labs/dbt/pull/3838))
 - [@annafil](https://github.com/annafil) ([#3825](https://github.com/dbt-labs/dbt/pull/3825))
 - [@AndreasTA-AW](https://github.com/AndreasTA-AW) ([#3691](https://github.com/dbt-labs/dbt/pull/3691))
-- [@Kayrnt](https://github.com/Kayrnt) ([3707](https://github.com/dbt-labs/dbt/pull/3707))
-- [@TeddyCr](https://github.com/TeddyCr) ([#3448](https://github.com/dbt-labs/dbt/pull/3865))
-- [@sdebruyn](https://github.com/sdebruyn) ([#3906](https://github.com/dbt-labs/dbt/pull/3906))
 
 ## dbt 0.21.0b2 (August 19, 2021)
 

@@ -118,7 +106,6 @@ Contributors:
 
 - Better error handling for BigQuery job labels that are too long. ([#3612](https://github.com/dbt-labs/dbt/pull/3612), [#3703](https://github.com/dbt-labs/dbt/pull/3703))
 - Get more information on partial parsing version mismatches ([#3757](https://github.com/dbt-labs/dbt/issues/3757), [#3758](https://github.com/dbt-labs/dbt/pull/3758))
-- Switch to full reparse on partial parsing exceptions. Log and report exception information. ([#3725](https://github.com/dbt-labs/dbt/issues/3725), [#3733](https://github.com/dbt-labs/dbt/pull/3733))
 
 ### Fixes
 
@@ -68,7 +68,7 @@ The `dbt` maintainers use labels to categorize open issues. Some labels indicate
 
 - **Trunks** are where active development of the next release takes place. There is one trunk named `develop` at the time of writing this, and will be the default branch of the repository.
 - **Release Branches** track a specific, not yet complete release of `dbt`. Each minor version release has a corresponding release branch. For example, the `0.11.x` series of releases has a branch called `0.11.latest`. This allows us to release new patch versions under `0.11` without necessarily needing to pull them into the latest version of `dbt`.
-- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk branch or a specific release branch.
+- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk brnach or a specific release branch.
 
 ## Getting the code
 

@@ -135,7 +135,7 @@ brew install postgresql
 
 ### Installation
 
-First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Also ensure you have the latest version of pip installed with `pip install --upgrade pip`. Next, install `dbt` (and its dependencies) with:
+First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Next, install `dbt` (and its dependencies) with:
 
 ```sh
 make dev

@@ -170,8 +170,6 @@ docker-compose up -d database
 PGHOST=localhost PGUSER=root PGPASSWORD=password PGDATABASE=postgres bash test/setup_db.sh
 ```
 
-Note that you may need to run the previous command twice as it does not currently wait for the database to be running before attempting to run commands against it. This will be fixed with [#3876](https://github.com/dbt-labs/dbt/issues/3876).
-
 `dbt` uses test credentials specified in a `test.env` file in the root of the repository for non-Postgres databases. This `test.env` file is git-ignored, but please be _extra_ careful to never check in credentials or other sensitive information when developing against `dbt`. To create your `test.env` file, copy the provided sample file, then supply your relevant credentials. This step is only required to use non-Postgres databases.
 
 ```
@@ -30,7 +30,7 @@ def find_matching(
     root_path: str,
     relative_paths_to_search: List[str],
     file_pattern: str,
-) -> List[Dict[str, Any]]:
+) -> List[Dict[str, str]]:
     """
     Given an absolute `root_path`, a list of relative paths to that
     absolute root path (`relative_paths_to_search`), and a `file_pattern`

@@ -61,19 +61,11 @@ def find_matching(
             relative_path = os.path.relpath(
                 absolute_path, absolute_path_to_search
             )
-            modification_time = 0.0
-            try:
-                modification_time = os.path.getmtime(absolute_path)
-            except OSError:
-                logger.exception(
-                    f"Error retrieving modification time for file {absolute_path}"
-                )
             if reobj.match(local_file):
                 matching.append({
                     'searched_path': relative_path_to_search,
                     'absolute_path': absolute_path,
                     'relative_path': relative_path,
-                    'modification_time': modification_time,
                 })
 
     return matching
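The lines removed above guarded `os.path.getmtime` so that a file deleted mid-scan could not abort parsing. A minimal standalone sketch of that pattern (the name `safe_mtime` is hypothetical, not dbt's helper):

```python
import os

def safe_mtime(path: str) -> float:
    # Default to 0.0 so a missing or unreadable file yields a harmless
    # sentinel instead of raising in the middle of a directory scan.
    modification_time = 0.0
    try:
        modification_time = os.path.getmtime(path)
    except OSError:
        # dbt logged this via logger.exception(); here we just fall
        # through to the sentinel value.
        pass
    return modification_time
```

`FileNotFoundError` and `PermissionError` are both subclasses of `OSError`, so a single except clause covers the common failure modes.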
@@ -10,7 +10,7 @@ from dbt.adapters.factory import get_adapter
 from dbt.clients import jinja
 from dbt.clients.system import make_directory
 from dbt.context.providers import generate_runtime_model
-from dbt.contracts.graph.manifest import Manifest, UniqueID
+from dbt.contracts.graph.manifest import Manifest
 from dbt.contracts.graph.compiled import (
     COMPILED_TYPES,
     CompiledSchemaTestNode,

@@ -107,18 +107,6 @@ def _extend_prepended_ctes(prepended_ctes, new_prepended_ctes):
         _add_prepended_cte(prepended_ctes, new_cte)
 
 
-def _get_tests_for_node(manifest: Manifest, unique_id: UniqueID) -> List[UniqueID]:
-    """ Get a list of tests that depend on the node with the
-    provided unique id """
-
-    return [
-        node.unique_id
-        for _, node in manifest.nodes.items()
-        if node.resource_type == NodeType.Test and
-        unique_id in node.depends_on_nodes
-    ]
-
-
 class Linker:
     def __init__(self, data=None):
         if data is None:

@@ -154,7 +142,7 @@ class Linker:
         include all nodes in their corresponding graph entries.
         """
         out_graph = self.graph.copy()
-        for node_id in self.graph:
+        for node_id in self.graph.nodes():
             data = manifest.expect(node_id).to_dict(omit_none=True)
             out_graph.add_node(node_id, **data)
         nx.write_gpickle(out_graph, outfile)

@@ -424,80 +412,13 @@ class Compiler:
             self.link_node(linker, node, manifest)
         for exposure in manifest.exposures.values():
             self.link_node(linker, exposure, manifest)
+            # linker.add_node(exposure.unique_id)
 
         cycle = linker.find_cycles()
 
         if cycle:
             raise RuntimeError("Found a cycle: {}".format(cycle))
 
-        self.resolve_graph(linker, manifest)
-
-    def resolve_graph(self, linker: Linker, manifest: Manifest) -> None:
-        """ This method adds additional edges to the DAG. For a given non-test
-        executable node, add an edge from an upstream test to the given node if
-        the set of nodes the test depends on is a proper/strict subset of the
-        upstream nodes for the given node. """
-
-        # Given a graph:
-        # model1 --> model2 --> model3
-        #   |          |
-        #   |          \/
-        #   \/       test 2
-        # test1
-        #
-        # Produce the following graph:
-        # model1 --> model2 --> model3
-        #   |          |     /\    /\
-        #   |          \/    |     |
-        #   \/       test2 ----    |
-        # test1 -------------------
-
-        for node_id in linker.graph:
-            # If node is executable (in manifest.nodes) and does _not_
-            # represent a test, continue.
-            if (
-                node_id in manifest.nodes and
-                manifest.nodes[node_id].resource_type != NodeType.Test
-            ):
-                # Get *everything* upstream of the node
-                all_upstream_nodes = nx.traversal.bfs_tree(
-                    linker.graph, node_id, reverse=True
-                )
-                # Get the set of upstream nodes not including the current node.
-                upstream_nodes = set([
-                    n for n in all_upstream_nodes if n != node_id
-                ])
-
-                # Get all tests that depend on any upstream nodes.
-                upstream_tests = []
-                for upstream_node in upstream_nodes:
-                    upstream_tests += _get_tests_for_node(
-                        manifest,
-                        upstream_node
-                    )
-
-                for upstream_test in upstream_tests:
-                    # Get the set of all nodes that the test depends on
-                    # including the upstream_node itself. This is necessary
-                    # because tests can depend on multiple nodes (ex:
-                    # relationship tests). Test nodes do not distinguish
-                    # between what node the test is "testing" and what
-                    # node(s) it depends on.
-                    test_depends_on = set(
-                        manifest.nodes[upstream_test].depends_on_nodes
-                    )
-
-                    # If the set of nodes that an upstream test depends on
-                    # is a proper (or strict) subset of all upstream nodes of
-                    # the current node, add an edge from the upstream test
-                    # to the current node. Must be a proper/strict subset to
-                    # avoid adding a circular dependency to the graph.
-                    if (test_depends_on < upstream_nodes):
-                        linker.graph.add_edge(
-                            upstream_test,
-                            node_id
-                        )
-
     def compile(self, manifest: Manifest, write=True) -> Graph:
         self.initialize()
         linker = Linker()
@@ -645,24 +645,13 @@ class Project:
     def hashed_name(self):
         return hashlib.md5(self.project_name.encode('utf-8')).hexdigest()
 
-    def get_selector(self, name: str) -> Union[SelectionSpec, bool]:
+    def get_selector(self, name: str) -> SelectionSpec:
         if name not in self.selectors:
             raise RuntimeException(
                 f'Could not find selector named {name}, expected one of '
                 f'{list(self.selectors)}'
             )
-        return self.selectors[name]["definition"]
+        return self.selectors[name]
 
-    def get_default_selector_name(self) -> Union[str, None]:
-        """This function fetch the default selector to use on `dbt run` (if any)
-        :return: either a selector if default is set or None
-        :rtype: Union[SelectionSpec, None]
-        """
-        for selector_name, selector in self.selectors.items():
-            if selector["default"] is True:
-                return selector_name
-
-        return None
-
     def get_macro_search_order(self, macro_namespace: str):
         for dispatch_entry in self.dispatch:
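The removed `get_default_selector_name` scans the selectors mapping and returns the first name flagged `default: true`, or `None`. A standalone sketch, assuming the old storage shape `{name: {"default": bool, "definition": ...}}` implied by `return self.selectors[name]["definition"]`:

```python
from typing import Any, Dict, Optional

def get_default_selector_name(selectors: Dict[str, Dict[str, Any]]) -> Optional[str]:
    # Return the first selector marked `default: true`; fall back to None
    # when no selector is flagged (caller then uses its normal selection).
    for name, entry in selectors.items():
        if entry.get("default") is True:
            return name
    return None
```

Usage: with `{"smoke": {"default": True, "definition": "tag:smoke"}}` this returns `"smoke"`; with an empty mapping it returns `None`.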
@@ -1,5 +1,5 @@
 from pathlib import Path
-from typing import Dict, Any, Union
+from typing import Dict, Any
 from dbt.clients.yaml_helper import (  # noqa: F401
     yaml, Loader, Dumper, load_yaml_text
 )

@@ -29,14 +29,13 @@ Validator Error:
 """
 
 
-class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):
+class SelectorConfig(Dict[str, SelectionSpec]):
 
     @classmethod
     def selectors_from_dict(cls, data: Dict[str, Any]) -> 'SelectorConfig':
         try:
             SelectorFile.validate(data)
             selector_file = SelectorFile.from_dict(data)
-            validate_selector_default(selector_file)
             selectors = parse_from_selectors_definition(selector_file)
         except ValidationError as exc:
             yaml_sel_cfg = yaml.dump(exc.instance)

@@ -119,24 +118,6 @@ def selector_config_from_data(
     return selectors
 
 
-def validate_selector_default(selector_file: SelectorFile) -> None:
-    """Check if a selector.yml file has more than 1 default key set to true"""
-    default_set: bool = False
-    default_selector_name: Union[str, None] = None
-
-    for selector in selector_file.selectors:
-        if selector.default is True and default_set is False:
-            default_set = True
-            default_selector_name = selector.name
-            continue
-        if selector.default is True and default_set is True:
-            raise DbtSelectorsError(
-                "Error when parsing the selector file. "
-                "Found multiple selectors with `default: true`:"
-                f"{default_selector_name} and {selector.name}"
-            )
-
-
 # These are utilities to clean up the dictionary created from
 # selectors.yml by turning the cli-string format entries into
 # normalized dictionary entries. It parallels the flow in
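The removed `validate_selector_default` rejects a selectors file that flags more than one selector as the default. The same logic in a self-contained sketch (plain dataclass and `ValueError` standing in for dbt's `SelectorFile` and `DbtSelectorsError`):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Selector:
    name: str
    default: bool = False

def validate_single_default(selectors: List[Selector]) -> None:
    # Remember the first selector flagged default; raise as soon as a
    # second one appears, naming both offenders in the message.
    first: Optional[str] = None
    for sel in selectors:
        if sel.default:
            if first is not None:
                raise ValueError(
                    "Found multiple selectors with `default: true`: "
                    f"{first} and {sel.name}"
                )
            first = sel.name
```

Zero or one default passes silently; validation only fails on the second `default: true`.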
@@ -42,7 +42,6 @@ parse_file_type_to_parser = {
 class FilePath(dbtClassMixin):
     searched_path: str
     relative_path: str
-    modification_time: float
     project_root: str
 
     @property

@@ -133,10 +132,6 @@ class RemoteFile(dbtClassMixin):
     def original_file_path(self):
         return 'from remote system'
 
-    @property
-    def modification_time(self):
-        return 'from remote system'
-
 
 @dataclass
 class BaseSourceFile(dbtClassMixin, SerializableType):

@@ -155,6 +150,8 @@ class BaseSourceFile(dbtClassMixin, SerializableType):
     def file_id(self):
         if isinstance(self.path, RemoteFile):
             return None
+        if self.checksum.name == 'none':
+            return None
         return f'{self.project_name}://{self.path.original_file_path}'
 
     def _serialize(self):
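The two added lines extend `file_id` so that a file whose checksum method is `'none'` gets no stable identifier, just like a remote file. A toy sketch of the resulting control flow (simplified classes, not dbt's real contracts; `path` is a plain string here):

```python
class RemoteFile:
    """Stands in for dbt's RemoteFile marker type."""

class SourceFile:
    def __init__(self, project_name, path, checksum_name):
        self.project_name = project_name
        self.path = path              # str, or RemoteFile for RPC input
        self.checksum_name = checksum_name

    @property
    def file_id(self):
        # Remote files and unhashable ('none') checksums have no stable
        # id, so partial parsing cannot key on them.
        if isinstance(self.path, RemoteFile):
            return None
        if self.checksum_name == 'none':
            return None
        return f'{self.project_name}://{self.path}'
```

So `SourceFile("proj", "models/a.sql", "sha256").file_id` yields `"proj://models/a.sql"`, while either guard yields `None`.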
@@ -285,9 +285,6 @@ class SourceFreshnessOutput(dbtClassMixin):
     status: FreshnessStatus
     criteria: FreshnessThreshold
     adapter_response: Dict[str, Any]
-    timing: List[TimingInfo]
-    thread_id: str
-    execution_time: float
 
 
 @dataclass

@@ -336,10 +333,7 @@ def process_freshness_result(
         max_loaded_at_time_ago_in_s=result.age,
         status=result.status,
         criteria=criteria,
-        adapter_response=result.adapter_response,
-        timing=result.timing,
-        thread_id=result.thread_id,
-        execution_time=result.execution_time,
+        adapter_response=result.adapter_response
     )
 
 
@@ -121,9 +121,9 @@ class RPCDocsGenerateParameters(RPCParameters):
 
 @dataclass
 class RPCBuildParameters(RPCParameters):
-    resource_types: Optional[List[str]] = None
-    select: Union[None, str, List[str]] = None
     threads: Optional[int] = None
+    models: Union[None, str, List[str]] = None
+    select: Union[None, str, List[str]] = None
     exclude: Union[None, str, List[str]] = None
     selector: Optional[str] = None
     state: Optional[str] = None
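Fields typed `Union[None, str, List[str]]`, like `models`, `select`, and `exclude` above, accept no value, one selector string, or a list. Handlers for such parameters commonly normalize all three shapes to a list before use; a hedged sketch of that normalization (the helper name is illustrative, not a dbt API):

```python
from typing import List, Union

def normalize_selection(value: Union[None, str, List[str]]) -> List[str]:
    # None -> empty selection; a bare string -> one-element list;
    # an existing list is copied through unchanged.
    if value is None:
        return []
    if isinstance(value, str):
        return [value]
    return list(value)
```

This keeps every downstream consumer working with a single shape, `List[str]`, regardless of how the RPC caller spelled the argument.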
@@ -9,7 +9,6 @@ class SelectorDefinition(dbtClassMixin):
     name: str
     definition: Union[str, Dict[str, Any]]
     description: str = ''
-    default: bool = False
 
 
 @dataclass
@@ -18,7 +18,6 @@ WRITE_JSON = None
 PARTIAL_PARSE = None
 USE_COLORS = None
 STORE_FAILURES = None
-GREEDY = None


 def env_set_truthy(key: str) -> Optional[str]:
@@ -57,7 +56,7 @@ MP_CONTEXT = _get_context()
 def reset():
     global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
         USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
-        STORE_FAILURES, GREEDY
+        STORE_FAILURES

     STRICT_MODE = False
     FULL_REFRESH = False
@@ -70,13 +69,12 @@ def reset():
     MP_CONTEXT = _get_context()
     USE_COLORS = True
     STORE_FAILURES = False
-    GREEDY = False


 def set_from_args(args):
     global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
         USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
-        STORE_FAILURES, GREEDY
+        STORE_FAILURES

     USE_CACHE = getattr(args, 'use_cache', USE_CACHE)

@@ -101,7 +99,6 @@ def set_from_args(args):
     USE_COLORS = use_colors_override

     STORE_FAILURES = getattr(args, 'store_failures', STORE_FAILURES)
-    GREEDY = getattr(args, 'greedy', GREEDY)


 # initialize everything to the defaults on module load
@@ -1,5 +1,4 @@
 # special support for CLI argument parsing.
-from dbt import flags
 import itertools
 from dbt.clients.yaml_helper import yaml, Loader, Dumper  # noqa: F401

@@ -67,7 +66,7 @@ def parse_union_from_default(
 def parse_difference(
     include: Optional[List[str]], exclude: Optional[List[str]]
 ) -> SelectionDifference:
-    included = parse_union_from_default(include, DEFAULT_INCLUDES, greedy=bool(flags.GREEDY))
+    included = parse_union_from_default(include, DEFAULT_INCLUDES)
     excluded = parse_union_from_default(exclude, DEFAULT_EXCLUDES, greedy=True)
     return SelectionDifference(components=[included, excluded])

@@ -181,7 +180,7 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
     union_def_parts = _get_list_dicts(definition, 'union')
     include, exclude = _parse_include_exclude_subdefs(union_def_parts)

-    union = SelectionUnion(components=include, greedy_warning=False)
+    union = SelectionUnion(components=include)

     if exclude is None:
         union.raw = definition
@@ -189,8 +188,7 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
     else:
         return SelectionDifference(
             components=[union, exclude],
-            raw=definition,
-            greedy_warning=False
+            raw=definition
         )


@@ -199,7 +197,7 @@ def parse_intersection_definition(
 ) -> SelectionSpec:
     intersection_def_parts = _get_list_dicts(definition, 'intersection')
     include, exclude = _parse_include_exclude_subdefs(intersection_def_parts)
-    intersection = SelectionIntersection(components=include, greedy_warning=False)
+    intersection = SelectionIntersection(components=include)

     if exclude is None:
         intersection.raw = definition
@@ -207,8 +205,7 @@ def parse_intersection_definition(
     else:
         return SelectionDifference(
             components=[intersection, exclude],
-            raw=definition,
-            greedy_warning=False
+            raw=definition
         )


@@ -242,7 +239,7 @@ def parse_dict_definition(definition: Dict[str, Any]) -> SelectionSpec:
     if diff_arg is None:
         return base
     else:
-        return SelectionDifference(components=[base, diff_arg], greedy_warning=False)
+        return SelectionDifference(components=[base, diff_arg])


 def parse_from_definition(
@@ -274,12 +271,10 @@ def parse_from_definition(

 def parse_from_selectors_definition(
     source: SelectorFile
-) -> Dict[str, Dict[str, Union[SelectionSpec, bool]]]:
-    result: Dict[str, Dict[str, Union[SelectionSpec, bool]]] = {}
+) -> Dict[str, SelectionSpec]:
+    result: Dict[str, SelectionSpec] = {}
     selector: SelectorDefinition
     for selector in source.selectors:
-        result[selector.name] = {
-            "default": selector.default,
-            "definition": parse_from_definition(selector.definition, rootlevel=True)
-        }
+        result[selector.name] = parse_from_definition(selector.definition,
+                                                      rootlevel=True)
     return result
@@ -1,3 +1,4 @@
+
 from typing import Set, List, Optional, Tuple

 from .graph import Graph, UniqueId
@@ -29,24 +30,6 @@ def alert_non_existence(raw_spec, nodes):
     )


-def alert_unused_nodes(raw_spec, node_names):
-    summary_nodes_str = ("\n - ").join(node_names[:3])
-    debug_nodes_str = ("\n - ").join(node_names)
-    and_more_str = f"\n - and {len(node_names) - 3} more" if len(node_names) > 4 else ""
-    summary_msg = (
-        f"\nSome tests were excluded because at least one parent is not selected. "
-        f"Use the --greedy flag to include them."
-        f"\n - {summary_nodes_str}{and_more_str}"
-    )
-    logger.info(summary_msg)
-    if len(node_names) > 4:
-        debug_msg = (
-            f"Full list of tests that were excluded:"
-            f"\n - {debug_nodes_str}"
-        )
-        logger.debug(debug_msg)
-
-
 def can_select_indirectly(node):
     """If a node is not selected itself, but its parent(s) are, it may qualify
     for indirect selection.
@@ -168,16 +151,16 @@ class NodeSelector(MethodManager):

         return direct_nodes, indirect_nodes

-    def select_nodes(self, spec: SelectionSpec) -> Tuple[Set[UniqueId], Set[UniqueId]]:
+    def select_nodes(self, spec: SelectionSpec) -> Set[UniqueId]:
         """Select the nodes in the graph according to the spec.

         This is the main point of entry for turning a spec into a set of nodes:
         - Recurse through spec, select by criteria, combine by set operation
         - Return final (unfiltered) selection set
         """
+
         direct_nodes, indirect_nodes = self.select_nodes_recursively(spec)
-        indirect_only = indirect_nodes.difference(direct_nodes)
-        return direct_nodes, indirect_only
+        return direct_nodes

     def _is_graph_member(self, unique_id: UniqueId) -> bool:
         if unique_id in self.manifest.sources:
@@ -230,8 +213,6 @@ class NodeSelector(MethodManager):
         # - If ANY parent is missing, return it separately. We'll keep it around
         # for later and see if its other parents show up.
         # We use this for INCLUSION.
-        # Users can also opt in to inclusive GREEDY mode by passing --greedy flag,
-        # or by specifying `greedy: true` in a yaml selector

         direct_nodes = set(selected)
         indirect_nodes = set()
@@ -270,24 +251,15 @@ class NodeSelector(MethodManager):

         - node selection. Based on the include/exclude sets, the set
           of matched unique IDs is returned
-        - includes direct + indirect selection (for tests)
+        - expand the graph at each leaf node, before combination
+            - selectors might override this. for example, this is where
+              tests are added
         - filtering:
             - selectors can filter the nodes after all of them have been
               selected
         """
-        selected_nodes, indirect_only = self.select_nodes(spec)
+        selected_nodes = self.select_nodes(spec)
         filtered_nodes = self.filter_selection(selected_nodes)

-        if indirect_only:
-            filtered_unused_nodes = self.filter_selection(indirect_only)
-            if filtered_unused_nodes and spec.greedy_warning:
-                # log anything that didn't make the cut
-                unused_node_names = []
-                for unique_id in filtered_unused_nodes:
-                    name = self.manifest.nodes[unique_id].name
-                    unused_node_names.append(name)
-                alert_unused_nodes(spec, unused_node_names)
-
         return filtered_nodes

     def get_graph_queue(self, spec: SelectionSpec) -> GraphQueue:
@@ -67,7 +67,6 @@ class SelectionCriteria:
     children: bool
     children_depth: Optional[int]
     greedy: bool = False
-    greedy_warning: bool = False  # do not raise warning for yaml selectors

     def __post_init__(self):
         if self.children and self.childrens_parents:
@@ -125,11 +124,11 @@ class SelectionCriteria:
             parents_depth=parents_depth,
             children=bool(dct.get('children')),
             children_depth=children_depth,
-            greedy=(greedy or bool(dct.get('greedy'))),
+            greedy=greedy
         )

     @classmethod
-    def dict_from_single_spec(cls, raw: str):
+    def dict_from_single_spec(cls, raw: str, greedy: bool = False):
         result = RAW_SELECTOR_PATTERN.match(raw)
         if result is None:
             return {'error': 'Invalid selector spec'}
@@ -146,8 +145,6 @@ class SelectionCriteria:
         dct['parents'] = bool(dct.get('parents'))
         if 'children' in dct:
             dct['children'] = bool(dct.get('children'))
-        if 'greedy' in dct:
-            dct['greedy'] = bool(dct.get('greedy'))
         return dct

     @classmethod
@@ -165,12 +162,10 @@ class BaseSelectionGroup(Iterable[SelectionSpec], metaclass=ABCMeta):
         self,
         components: Iterable[SelectionSpec],
         expect_exists: bool = False,
-        greedy_warning: bool = True,
         raw: Any = None,
     ):
         self.components: List[SelectionSpec] = list(components)
         self.expect_exists = expect_exists
-        self.greedy_warning = greedy_warning
         self.raw = raw

     def __iter__(self) -> Iterator[SelectionSpec]:
@@ -7,7 +7,7 @@ with all_values as (
         count(*) as n_records

     from {{ model }}
-    group by {{ column_name }}
+    group by 1

 )

@@ -10,23 +10,23 @@ from pathlib import Path

 import dbt.version
 import dbt.flags as flags
+import dbt.task.run as run_task
 import dbt.task.build as build_task
-import dbt.task.clean as clean_task
 import dbt.task.compile as compile_task
 import dbt.task.debug as debug_task
+import dbt.task.clean as clean_task
 import dbt.task.deps as deps_task
-import dbt.task.freshness as freshness_task
-import dbt.task.generate as generate_task
 import dbt.task.init as init_task
-import dbt.task.list as list_task
-import dbt.task.parse as parse_task
-import dbt.task.run as run_task
-import dbt.task.run_operation as run_operation_task
 import dbt.task.seed as seed_task
-import dbt.task.serve as serve_task
-import dbt.task.snapshot as snapshot_task
 import dbt.task.test as test_task
+import dbt.task.snapshot as snapshot_task
+import dbt.task.generate as generate_task
+import dbt.task.serve as serve_task
+import dbt.task.freshness as freshness_task
+import dbt.task.run_operation as run_operation_task
+import dbt.task.parse as parse_task
 from dbt.profiler import profiler
+from dbt.task.list import ListTask
 from dbt.task.rpc.server import RPCServerTask
 from dbt.adapters.factory import reset_adapters, cleanup_connections

@@ -399,40 +399,6 @@ def _build_build_subparser(subparsers, base_subparser):
         Stop execution upon a first failure.
         '''
     )
-    sub.add_argument(
-        '--store-failures',
-        action='store_true',
-        help='''
-        Store test results (failing rows) in the database
-        '''
-    )
-    sub.add_argument(
-        '--greedy',
-        action='store_true',
-        help='''
-        Select all tests that touch the selected resources,
-        even if they also depend on unselected resources
-        '''
-    )
-    resource_values: List[str] = [
-        str(s) for s in build_task.BuildTask.ALL_RESOURCE_VALUES
-    ] + ['all']
-    sub.add_argument('--resource-type',
-                     choices=resource_values,
-                     action='append',
-                     default=[],
-                     dest='resource_types')
-    # explicity don't support --models
-    sub.add_argument(
-        '-s',
-        '--select',
-        dest='select',
-        nargs='+',
-        help='''
-        Specify the nodes to include.
-        ''',
-    )
-    _add_common_selector_arguments(sub)
     return sub


@@ -645,7 +611,7 @@ def _add_table_mutability_arguments(*subparsers):
         '--full-refresh',
         action='store_true',
         help='''
-        If specified, dbt will drop incremental models and
+        If specified, DBT will drop incremental models and
         fully-recalculate the incremental table from the model definition.
         '''
     )
@@ -761,14 +727,6 @@ def _build_test_subparser(subparsers, base_subparser):
         Store test results (failing rows) in the database
         '''
     )
-    sub.add_argument(
-        '--greedy',
-        action='store_true',
-        help='''
-        Select all tests that touch the selected resources,
-        even if they also depend on unselected resources
-        '''
-    )

     sub.set_defaults(cls=test_task.TestTask, which='test', rpc_method='test')
     return sub
@@ -857,9 +815,9 @@ def _build_list_subparser(subparsers, base_subparser):
         ''',
         aliases=['ls'],
     )
-    sub.set_defaults(cls=list_task.ListTask, which='list', rpc_method=None)
+    sub.set_defaults(cls=ListTask, which='list', rpc_method=None)
     resource_values: List[str] = [
-        str(s) for s in list_task.ListTask.ALL_RESOURCE_VALUES
+        str(s) for s in ListTask.ALL_RESOURCE_VALUES
     ] + ['default', 'all']
     sub.add_argument('--resource-type',
                      choices=resource_values,
@@ -894,14 +852,6 @@ def _build_list_subparser(subparsers, base_subparser):
         metavar='SELECTOR',
         required=False,
     )
-    sub.add_argument(
-        '--greedy',
-        action='store_true',
-        help='''
-        Select all tests that touch the selected resources,
-        even if they also depend on unselected resources
-        '''
-    )
     _add_common_selector_arguments(sub)

     return sub
@@ -1112,7 +1062,7 @@ def parse_args(args, cls=DBTArgumentParser):
     # --select, --exclude
     # list_sub sets up its own arguments.
     _add_selection_arguments(
-        run_sub, compile_sub, generate_sub, test_sub, snapshot_sub, seed_sub)
+        build_sub, run_sub, compile_sub, generate_sub, test_sub, snapshot_sub, seed_sub)
     # --defer
     _add_defer_argument(run_sub, test_sub, build_sub)
     # --full-refresh
@@ -72,13 +72,10 @@ class HookParser(SimpleParser[HookBlock, ParsedHookNode]):

     # Hooks are only in the dbt_project.yml file for the project
     def get_path(self) -> FilePath:
-        # There ought to be an existing file object for this, but
-        # until that is implemented use a dummy modification time
         path = FilePath(
             project_root=self.project.project_root,
             searched_path='.',
             relative_path='dbt_project.yml',
-            modification_time=0.0,
         )
         return path

@@ -203,11 +203,8 @@ class ManifestLoader:
         # used to get the SourceFiles from the manifest files.
         start_read_files = time.perf_counter()
         project_parser_files = {}
-        saved_files = {}
-        if self.saved_manifest:
-            saved_files = self.saved_manifest.files
         for project in self.all_projects.values():
-            read_files(project, self.manifest.files, project_parser_files, saved_files)
+            read_files(project, self.manifest.files, project_parser_files)
         self._perf_info.path_count = len(self.manifest.files)
         self._perf_info.read_files_elapsed = (time.perf_counter() - start_read_files)

@@ -426,7 +423,7 @@ class ManifestLoader:
         if not self.partially_parsing and HookParser in parser_types:
             hook_parser = HookParser(project, self.manifest, self.root_project)
             path = hook_parser.get_path()
-            file = load_source_file(path, ParseFileType.Hook, project.project_name, {})
+            file = load_source_file(path, ParseFileType.Hook, project.project_name)
             if file:
                 file_block = FileBlock(file)
                 hook_parser.parse_file(file_block)
@@ -651,7 +648,7 @@ class ManifestLoader:
         macro_parser = MacroParser(project, self.manifest)
         for path in macro_parser.get_paths():
             source_file = load_source_file(
-                path, ParseFileType.Macro, project.project_name, {})
+                path, ParseFileType.Macro, project.project_name)
             block = FileBlock(source_file)
             # This does not add the file to the manifest.files,
             # but that shouldn't be necessary here.
@@ -1,17 +1,15 @@
|
|||||||
from dbt.context.context_config import ContextConfig
|
from dbt.context.context_config import ContextConfig
|
||||||
from dbt.contracts.graph.parsed import ParsedModelNode
|
from dbt.contracts.graph.parsed import ParsedModelNode
|
||||||
import dbt.flags as flags
|
import dbt.flags as flags
|
||||||
from dbt.logger import GLOBAL_LOGGER as logger
|
import dbt.tracking
|
||||||
from dbt.node_types import NodeType
|
from dbt.node_types import NodeType
|
||||||
from dbt.parser.base import SimpleSQLParser
|
from dbt.parser.base import SimpleSQLParser
|
||||||
from dbt.parser.search import FileBlock
|
from dbt.parser.search import FileBlock
|
||||||
import dbt.tracking as tracking
|
import dbt.tracking as tracking
|
||||||
from dbt import utils
|
from dbt import utils
|
||||||
from dbt_extractor import ExtractionError, py_extract_from_source # type: ignore
|
from dbt_extractor import ExtractionError, py_extract_from_source # type: ignore
|
||||||
from functools import reduce
|
|
||||||
from itertools import chain
|
|
||||||
import random
|
import random
|
||||||
from typing import Any, Dict, Iterator, List, Optional, Union
|
from typing import Any, Dict, List
|
||||||
|
|
||||||
|
|
||||||
class ModelParser(SimpleSQLParser[ParsedModelNode]):
|
class ModelParser(SimpleSQLParser[ParsedModelNode]):
|
||||||
@@ -28,52 +26,32 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
|
|||||||
def get_compiled_path(cls, block: FileBlock):
|
def get_compiled_path(cls, block: FileBlock):
|
||||||
return block.path.relative_path
|
return block.path.relative_path
|
||||||
|
|
||||||
# TODO when this is turned on by default, simplify the nasty if/else tree inside this method.
|
|
||||||
def render_update(
|
def render_update(
|
||||||
self, node: ParsedModelNode, config: ContextConfig
|
self, node: ParsedModelNode, config: ContextConfig
|
||||||
) -> None:
|
) -> None:
|
||||||
# TODO go back to 1/100 when this is turned on by default.
|
self.manifest._parsing_info.static_analysis_path_count += 1
|
||||||
# `True` roughly 1/50 times this function is called
|
|
||||||
sample: bool = random.randint(1, 51) == 50
|
|
||||||
|
|
||||||
# top-level declaration of variables
|
# `True` roughly 1/100 times this function is called
|
||||||
experimentally_parsed: Optional[Union[str, Dict[str, List[Any]]]] = None
|
sample: bool = random.randint(1, 101) == 100
|
||||||
config_call_dict: Dict[str, Any] = {}
|
|
||||||
source_calls: List[List[str]] = []
|
|
||||||
|
|
||||||
# run the experimental parser if the flag is on or if we're sampling
|
# run the experimental parser if the flag is on or if we're sampling
|
||||||
if flags.USE_EXPERIMENTAL_PARSER or sample:
|
if flags.USE_EXPERIMENTAL_PARSER or sample:
|
||||||
if self._has_banned_macro(node):
|
try:
|
||||||
# this log line is used for integration testing. If you change
|
experimentally_parsed: Dict[str, List[Any]] = py_extract_from_source(node.raw_sql)
|
||||||
# the code at the beginning of the line change the tests in
|
|
||||||
# test/integration/072_experimental_parser_tests/test_all_experimental_parser.py
|
|
||||||
logger.debug(
|
|
||||||
f"1601: parser fallback to jinja because of macro override for {node.path}"
|
|
||||||
)
|
|
||||||
experimentally_parsed = "has_banned_macro"
|
|
||||||
else:
|
|
||||||
# run the experimental parser and return the results
|
|
||||||
try:
|
|
||||||
experimentally_parsed = py_extract_from_source(
|
|
||||||
node.raw_sql
|
|
||||||
)
|
|
||||||
logger.debug(f"1699: statically parsed {node.path}")
|
|
||||||
# if we want information on what features are barring the experimental
|
|
||||||
# parser from reading model files, this is where we would add that
|
|
||||||
# since that information is stored in the `ExtractionError`.
|
|
||||||
except ExtractionError:
|
|
||||||
experimentally_parsed = "cannot_parse"
|
|
||||||
|
|
||||||
# if the parser succeeded, extract some data in easy-to-compare formats
|
# second config format
|
||||||
if isinstance(experimentally_parsed, dict):
|
config_call_dict: Dict[str, Any] = {}
|
||||||
# create second config format
|
for c in experimentally_parsed['configs']:
|
||||||
for c in experimentally_parsed['configs']:
|
ContextConfig._add_config_call(config_call_dict, {c[0]: c[1]})
|
||||||
ContextConfig._add_config_call(config_call_dict, {c[0]: c[1]})
|
|
||||||
|
|
||||||
# format sources TODO change extractor to match this type
|
# format sources TODO change extractor to match this type
|
||||||
for s in experimentally_parsed['sources']:
|
source_calls: List[List[str]] = []
|
||||||
source_calls.append([s[0], s[1]])
|
for s in experimentally_parsed['sources']:
|
||||||
experimentally_parsed['sources'] = source_calls
|
source_calls.append([s[0], s[1]])
|
||||||
|
experimentally_parsed['sources'] = source_calls
|
||||||
|
|
||||||
|
except ExtractionError as e:
|
||||||
|
experimentally_parsed = e
|
||||||
|
|
||||||
# normal dbt run
|
# normal dbt run
|
||||||
if not flags.USE_EXPERIMENTAL_PARSER:
|
if not flags.USE_EXPERIMENTAL_PARSER:
|
||||||
@@ -81,19 +59,57 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
|
|||||||
super().render_update(node, config)
|
super().render_update(node, config)
|
||||||
# if we're sampling, compare for correctness
|
# if we're sampling, compare for correctness
|
||||||
if sample:
|
if sample:
|
||||||
result = _get_sample_result(
|
result: List[str] = []
|
||||||
experimentally_parsed,
|
# experimental parser couldn't parse
|
||||||
config_call_dict,
|
if isinstance(experimentally_parsed, Exception):
|
||||||
source_calls,
|
result += ["01_experimental_parser_cannot_parse"]
|
||||||
node,
|
else:
|
||||||
config
|
# look for false positive configs
|
||||||
)
|
for k in config_call_dict.keys():
|
||||||
|
if k not in config._config_call_dict:
|
||||||
|
result += ["02_false_positive_config_value"]
|
||||||
|
break
|
||||||
|
|
||||||
|
# look for missed configs
|
||||||
|
for k in config._config_call_dict.keys():
|
||||||
|
if k not in config_call_dict:
|
||||||
|
result += ["03_missed_config_value"]
|
||||||
|
break
|
||||||
|
|
||||||
|
# look for false positive sources
|
||||||
|
for s in experimentally_parsed['sources']:
|
||||||
|
if s not in node.sources:
|
||||||
|
result += ["04_false_positive_source_value"]
|
||||||
|
break
|
||||||
|
|
||||||
|
# look for missed sources
|
||||||
|
for s in node.sources:
|
||||||
|
if s not in experimentally_parsed['sources']:
|
||||||
|
result += ["05_missed_source_value"]
|
||||||
|
break
|
||||||
|
|
||||||
|
# look for false positive refs
|
||||||
|
for r in experimentally_parsed['refs']:
|
||||||
|
if r not in node.refs:
|
||||||
|
result += ["06_false_positive_ref_value"]
|
||||||
|
break
|
||||||
|
|
||||||
|
# look for missed refs
|
||||||
|
for r in node.refs:
|
||||||
|
if r not in experimentally_parsed['refs']:
|
||||||
|
result += ["07_missed_ref_value"]
|
||||||
|
break
|
||||||
|
|
||||||
|
# if there are no errors, return a success value
|
||||||
|
if not result:
|
||||||
|
result = ["00_exact_match"]
|
||||||
|
|
||||||
             # fire a tracking event. this fires one event for every sample
             # so that we have data on a per file basis. Not only can we expect
             # no false positives or misses, we can expect the number model
             # files parseable by the experimental parser to match our internal
             # testing.
-            if tracking.active_user is not None:  # None in some tests
+            if dbt.tracking.active_user is not None:  # None in some tests
                 tracking.track_experimental_parser_sample({
                     "project_id": self.root_project.hashed_name(),
                     "file_id": utils.get_hash(node),
@@ -101,7 +117,7 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
                 })

         # if the --use-experimental-parser flag was set, and the experimental parser succeeded
-        elif isinstance(experimentally_parsed, Dict):
+        elif not isinstance(experimentally_parsed, Exception):
             # since it doesn't need python jinja, fit the refs, sources, and configs
             # into the node. Down the line the rest of the node will be updated with
             # this information. (e.g. depends_on etc.)
@@ -125,102 +141,7 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):

             self.manifest._parsing_info.static_analysis_parsed_path_count += 1

-        # the experimental parser didn't run on this model.
+        # the experimental parser tried and failed on this model.
-        # fall back to python jinja rendering.
-        elif experimentally_parsed in ["has_banned_macro"]:
-            # not logging here since the reason should have been logged above
-            super().render_update(node, config)
-
-        # the experimental parser ran on this model and failed.
         # fall back to python jinja rendering.
         else:
-            logger.debug(
-                f"1602: parser fallback to jinja because of extractor failure for {node.path}"
-            )
             super().render_update(node, config)

-    # checks for banned macros
-    def _has_banned_macro(
-        self, node: ParsedModelNode
-    ) -> bool:
-        # first check if there is a banned macro defined in scope for this model file
-        root_project_name = self.root_project.project_name
-        project_name = node.package_name
-        banned_macros = ['ref', 'source', 'config']
-
-        all_banned_macro_keys: Iterator[str] = chain.from_iterable(
-            map(
-                lambda name: [
-                    f"macro.{project_name}.{name}",
-                    f"macro.{root_project_name}.{name}"
-                ],
-                banned_macros
-            )
-        )
-
-        return reduce(
-            lambda z, key: z or (key in self.manifest.macros),
-            all_banned_macro_keys,
-            False
-        )
-
-
-# returns a list of string codes to be sent as a tracking event
-def _get_sample_result(
-    sample_output: Optional[Union[str, Dict[str, Any]]],
-    config_call_dict: Dict[str, Any],
-    source_calls: List[List[str]],
-    node: ParsedModelNode,
-    config: ContextConfig
-) -> List[str]:
-    result: List[str] = []
-    # experimental parser didn't run
-    if sample_output is None:
-        result += ["09_experimental_parser_skipped"]
-    # experimental parser couldn't parse
-    elif (isinstance(sample_output, str)):
-        if sample_output == "cannot_parse":
-            result += ["01_experimental_parser_cannot_parse"]
-        elif sample_output == "has_banned_macro":
-            result += ["08_has_banned_macro"]
-    else:
-        # look for false positive configs
-        for k in config_call_dict.keys():
-            if k not in config._config_call_dict:
-                result += ["02_false_positive_config_value"]
-                break
-
-        # look for missed configs
-        for k in config._config_call_dict.keys():
-            if k not in config_call_dict:
-                result += ["03_missed_config_value"]
-                break
-
-        # look for false positive sources
-        for s in sample_output['sources']:
-            if s not in node.sources:
-                result += ["04_false_positive_source_value"]
-                break
-
-        # look for missed sources
-        for s in node.sources:
-            if s not in sample_output['sources']:
-                result += ["05_missed_source_value"]
-                break
-
-        # look for false positive refs
-        for r in sample_output['refs']:
-            if r not in node.refs:
-                result += ["06_false_positive_ref_value"]
-                break
-
-        # look for missed refs
-        for r in node.refs:
-            if r not in sample_output['refs']:
-                result += ["07_missed_ref_value"]
-                break
-
-        # if there are no errors, return a success value
-        if not result:
-            result = ["00_exact_match"]
-
-    return result
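The removed `_has_banned_macro` check above can be exercised in isolation; this sketch keeps the same `chain`/`reduce` shape but takes the macro lookup as a plain dict (an assumed simplification of dbt's manifest):

```python
from functools import reduce
from itertools import chain
from typing import Dict, Iterator


def has_banned_macro(
    macros: Dict[str, object], project_name: str, root_project_name: str
) -> bool:
    # A model cannot be statically parsed if its own project or the root
    # project overrides the ref, source, or config builtins.
    banned_macros = ["ref", "source", "config"]
    all_banned_macro_keys: Iterator[str] = chain.from_iterable(
        (f"macro.{project_name}.{name}", f"macro.{root_project_name}.{name}")
        for name in banned_macros
    )
    return reduce(lambda z, key: z or (key in macros), all_banned_macro_keys, False)
```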
@@ -12,27 +12,13 @@ from typing import Optional
 # This loads the files contents and creates the SourceFile object
 def load_source_file(
         path: FilePath, parse_file_type: ParseFileType,
-        project_name: str, saved_files,) -> Optional[AnySourceFile]:
+        project_name: str) -> Optional[AnySourceFile]:
+    file_contents = load_file_contents(path.absolute_path, strip=False)
+    checksum = FileHash.from_contents(file_contents)
     sf_cls = SchemaSourceFile if parse_file_type == ParseFileType.Schema else SourceFile
-    source_file = sf_cls(path=path, checksum=FileHash.empty(),
+    source_file = sf_cls(path=path, checksum=checksum,
                          parse_file_type=parse_file_type, project_name=project_name)
+    source_file.contents = file_contents.strip()
-    skip_loading_schema_file = False
-    if (parse_file_type == ParseFileType.Schema and
-            saved_files and source_file.file_id in saved_files):
-        old_source_file = saved_files[source_file.file_id]
-        if (source_file.path.modification_time != 0.0 and
-                old_source_file.path.modification_time == source_file.path.modification_time):
-            source_file.checksum = old_source_file.checksum
-            source_file.dfy = old_source_file.dfy
-            skip_loading_schema_file = True
-
-    if not skip_loading_schema_file:
-        file_contents = load_file_contents(path.absolute_path, strip=False)
-        source_file.checksum = FileHash.from_contents(file_contents)
-        source_file.contents = file_contents.strip()
-
     if parse_file_type == ParseFileType.Schema and source_file.contents:
         dfy = yaml_from_file(source_file)
         if dfy:
@@ -83,7 +69,7 @@ def load_seed_source_file(match: FilePath, project_name) -> SourceFile:

 # Use the FilesystemSearcher to get a bunch of FilePaths, then turn
 # them into a bunch of FileSource objects
-def get_source_files(project, paths, extension, parse_file_type, saved_files):
+def get_source_files(project, paths, extension, parse_file_type):
     # file path list
     fp_list = list(FilesystemSearcher(
         project, paths, extension
@@ -94,17 +80,17 @@ def get_source_files(project, paths, extension, parse_file_type, saved_files
         if parse_file_type == ParseFileType.Seed:
             fb_list.append(load_seed_source_file(fp, project.project_name))
         else:
-            file = load_source_file(fp, parse_file_type, project.project_name, saved_files)
+            file = load_source_file(fp, parse_file_type, project.project_name)
             # only append the list if it has contents. added to fix #3568
             if file:
                 fb_list.append(file)
     return fb_list


-def read_files_for_parser(project, files, dirs, extension, parse_ft, saved_files):
+def read_files_for_parser(project, files, dirs, extension, parse_ft):
     parser_files = []
     source_files = get_source_files(
-        project, dirs, extension, parse_ft, saved_files
+        project, dirs, extension, parse_ft
     )
     for sf in source_files:
         files[sf.file_id] = sf
@@ -116,46 +102,46 @@ def read_files_for_parser(project, files, dirs, extension, parse_ft, saved_files
 # dictionary needs to be passed in. What determines the order of
 # the various projects? Is the root project always last? Do the
 # non-root projects need to be done separately in order?
-def read_files(project, files, parser_files, saved_files):
+def read_files(project, files, parser_files):

     project_files = {}

     project_files['MacroParser'] = read_files_for_parser(
-        project, files, project.macro_paths, '.sql', ParseFileType.Macro, saved_files
+        project, files, project.macro_paths, '.sql', ParseFileType.Macro,
     )

     project_files['ModelParser'] = read_files_for_parser(
-        project, files, project.source_paths, '.sql', ParseFileType.Model, saved_files
+        project, files, project.source_paths, '.sql', ParseFileType.Model,
     )

     project_files['SnapshotParser'] = read_files_for_parser(
-        project, files, project.snapshot_paths, '.sql', ParseFileType.Snapshot, saved_files
+        project, files, project.snapshot_paths, '.sql', ParseFileType.Snapshot,
     )

     project_files['AnalysisParser'] = read_files_for_parser(
-        project, files, project.analysis_paths, '.sql', ParseFileType.Analysis, saved_files
+        project, files, project.analysis_paths, '.sql', ParseFileType.Analysis,
     )

     project_files['DataTestParser'] = read_files_for_parser(
-        project, files, project.test_paths, '.sql', ParseFileType.Test, saved_files
+        project, files, project.test_paths, '.sql', ParseFileType.Test,
     )

     project_files['SeedParser'] = read_files_for_parser(
-        project, files, project.data_paths, '.csv', ParseFileType.Seed, saved_files
+        project, files, project.data_paths, '.csv', ParseFileType.Seed,
     )

     project_files['DocumentationParser'] = read_files_for_parser(
-        project, files, project.docs_paths, '.md', ParseFileType.Documentation, saved_files
+        project, files, project.docs_paths, '.md', ParseFileType.Documentation,
     )

     project_files['SchemaParser'] = read_files_for_parser(
-        project, files, project.all_source_paths, '.yml', ParseFileType.Schema, saved_files
+        project, files, project.all_source_paths, '.yml', ParseFileType.Schema,
     )

     # Also read .yaml files for schema files. Might be better to change
     # 'read_files_for_parser' accept an array in the future.
     yaml_files = read_files_for_parser(
-        project, files, project.all_source_paths, '.yaml', ParseFileType.Schema, saved_files
+        project, files, project.all_source_paths, '.yaml', ParseFileType.Schema,
     )
     project_files['SchemaParser'].extend(yaml_files)

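The removed `saved_files` branch implements a small mtime-based cache: when a schema file's modification time is unchanged since the last parse, the previous checksum is reused instead of re-reading and re-hashing the file. A minimal sketch of that idea, with a hypothetical SHA-256 stand-in for `FileHash.from_contents`:

```python
import hashlib
from typing import Dict, Optional, Tuple

# path -> (modification_time, checksum) remembered from the previous parse
saved_checksums: Dict[str, Tuple[float, str]] = {}


def checksum_for(path: str, mtime: float, read_file) -> str:
    # Reuse the cached checksum when the stored mtime matches (and is not
    # the 0.0 sentinel); otherwise read and hash the file contents.
    cached: Optional[Tuple[float, str]] = saved_checksums.get(path)
    if cached is not None and mtime != 0.0 and cached[0] == mtime:
        return cached[1]
    contents = read_file(path)
    digest = hashlib.sha256(contents.encode("utf-8")).hexdigest()
    saved_checksums[path] = (mtime, digest)
    return digest
```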
@@ -84,7 +84,6 @@ class FilesystemSearcher(Iterable[FilePath]):
             file_match = FilePath(
                 searched_path=result['searched_path'],
                 relative_path=result['relative_path'],
-                modification_time=result['modification_time'],
                 project_root=root,
             )
             yield file_match
@@ -3,22 +3,19 @@ from .snapshot import SnapshotRunner as snapshot_model_runner
 from .seed import SeedRunner as seed_runner
 from .test import TestRunner as test_runner

-from dbt.contracts.results import NodeStatus
-from dbt.exceptions import InternalException
 from dbt.graph import ResourceTypeSelector
+from dbt.exceptions import InternalException
 from dbt.node_types import NodeType
-from dbt.task.test import TestSelector


 class BuildTask(RunTask):
-    """The Build task processes all assets of a given process and attempts to
-    'build' them in an opinionated fashion. Every resource type outlined in
-    RUNNER_MAP will be processed by the mapped runner class.
+    """The Build task processes all assets of a given process and attempts to 'build'
+    them in an opinionated fashion. Every resource type outlined in RUNNER_MAP
+    will be processed by the mapped runner class.

-    I.E. a resource of type Model is handled by the ModelRunner which is
-    imported as run_model_runner. """
+    I.E. a resource of type Model is handled by the ModelRunner which is imported
+    as run_model_runner.
+    """
-    MARK_DEPENDENT_ERRORS_STATUSES = [NodeStatus.Error, NodeStatus.Fail]

     RUNNER_MAP = {
         NodeType.Model: run_model_runner,
@@ -26,20 +23,6 @@ class BuildTask(RunTask):
         NodeType.Seed: seed_runner,
         NodeType.Test: test_runner,
     }
-    ALL_RESOURCE_VALUES = frozenset({x for x in RUNNER_MAP.keys()})
-
-    @property
-    def resource_types(self):
-        if not self.args.resource_types:
-            return list(self.ALL_RESOURCE_VALUES)
-
-        values = set(self.args.resource_types)
-
-        if 'all' in values:
-            values.remove('all')
-            values.update(self.ALL_RESOURCE_VALUES)
-
-        return list(values)

     def get_node_selector(self) -> ResourceTypeSelector:
         if self.manifest is None or self.graph is None:
@@ -47,19 +30,11 @@ class BuildTask(RunTask):
                 'manifest and graph must be set to get node selection'
             )
-        resource_types = self.resource_types
-
-        if resource_types == [NodeType.Test]:
-            return TestSelector(
-                graph=self.graph,
-                manifest=self.manifest,
-                previous_state=self.previous_state,
-            )
         return ResourceTypeSelector(
             graph=self.graph,
             manifest=self.manifest,
             previous_state=self.previous_state,
-            resource_types=resource_types,
+            resource_types=[x for x in self.RUNNER_MAP.keys()],
         )

     def get_runner_type(self, node):
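The removed `resource_types` property on `BuildTask` normalizes the `--resource-type` argument: an empty selection means every runnable type, and the literal `'all'` expands to the full set. A standalone sketch, with string names standing in for `NodeType` values:

```python
from typing import List, Optional

# stand-in for BuildTask.ALL_RESOURCE_VALUES (the keys of RUNNER_MAP)
ALL_RESOURCE_VALUES = frozenset({"model", "snapshot", "seed", "test"})


def resolve_resource_types(requested: Optional[List[str]]) -> List[str]:
    # No explicit selection: run everything in RUNNER_MAP.
    if not requested:
        return sorted(ALL_RESOURCE_VALUES)
    values = set(requested)
    # 'all' is shorthand for the complete set of runnable types.
    if "all" in values:
        values.remove("all")
        values.update(ALL_RESOURCE_VALUES)
    return sorted(values)
```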
@@ -4,7 +4,7 @@ from .base import BaseRunner

 from dbt.contracts.results import RunStatus, RunResult
 from dbt.exceptions import InternalException
-from dbt.graph import ResourceTypeSelector
+from dbt.graph import ResourceTypeSelector, SelectionSpec, parse_difference
 from dbt.logger import print_timestamped_line
 from dbt.node_types import NodeType

@@ -37,6 +37,13 @@ class CompileTask(GraphRunnableTask):
     def raise_on_first_error(self):
         return True

+    def get_selection_spec(self) -> SelectionSpec:
+        if self.args.selector_name:
+            spec = self.config.get_selector(self.args.selector_name)
+        else:
+            spec = parse_difference(self.args.select, self.args.exclude)
+        return spec
+
     def get_node_selector(self) -> ResourceTypeSelector:
         if self.manifest is None or self.graph is None:
             raise InternalException(
@@ -19,7 +19,7 @@ from dbt.exceptions import RuntimeException, InternalException
 from dbt.logger import print_timestamped_line
 from dbt.node_types import NodeType

-from dbt.graph import ResourceTypeSelector
+from dbt.graph import ResourceTypeSelector, SelectionSpec, parse_difference
 from dbt.contracts.graph.parsed import ParsedSourceDefinition


@@ -136,6 +136,19 @@ class FreshnessTask(GraphRunnableTask):
     def raise_on_first_error(self):
         return False

+    def get_selection_spec(self) -> SelectionSpec:
+        """Generates a selection spec from task arguments to use when
+        processing graph. A SelectionSpec describes what nodes to select
+        when creating queue from graph of nodes.
+        """
+        if self.args.selector_name:
+            # use pre-defined selector (--selector) to create selection spec
+            spec = self.config.get_selector(self.args.selector_name)
+        else:
+            # use --select and --exclude args to create selection spec
+            spec = parse_difference(self.args.select, self.args.exclude)
+        return spec
+
     def get_node_selector(self):
         if self.manifest is None or self.graph is None:
             raise InternalException(
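The `get_selection_spec` bodies added above all follow the same precedence: a named `--selector` from the project config wins; otherwise `--select`/`--exclude` are combined into a difference spec via `parse_difference`. The decision can be sketched with plain data (the dict spec representation here is hypothetical):

```python
from typing import Dict, List, Optional


def build_selection_spec(
    selector_name: Optional[str],
    selectors: Dict[str, dict],
    select: Optional[List[str]],
    exclude: Optional[List[str]],
) -> dict:
    if selector_name:
        # a pre-defined selector (--selector) from the project config wins
        return selectors[selector_name]
    # otherwise combine --select and --exclude into a difference spec
    return {"include": select or [], "exclude": exclude or []}
```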
@@ -1,10 +1,15 @@
 import json
+from typing import Type

 from dbt.contracts.graph.parsed import (
     ParsedExposure,
     ParsedSourceDefinition
 )
-from dbt.graph import ResourceTypeSelector
+from dbt.graph import (
+    parse_difference,
+    ResourceTypeSelector,
+    SelectionSpec,
+)
 from dbt.task.runnable import GraphRunnableTask, ManifestTask
 from dbt.task.test import TestSelector
 from dbt.node_types import NodeType
@@ -160,19 +165,25 @@ class ListTask(GraphRunnableTask):
         return list(values)

     @property
-    def selection_arg(self):
+    def selector(self):
-        # for backwards compatibility, list accepts both --models and --select,
-        # with slightly different behavior: --models implies --resource-type model
         if self.args.models:
             return self.args.models
         else:
             return self.args.select

+    def get_selection_spec(self) -> SelectionSpec:
+        if self.args.selector_name:
+            spec = self.config.get_selector(self.args.selector_name)
+        else:
+            spec = parse_difference(self.selector, self.args.exclude)
+        return spec
+
     def get_node_selector(self):
         if self.manifest is None or self.graph is None:
             raise InternalException(
                 'manifest and graph must be set to get perform node selection'
             )
+        cls: Type[ResourceTypeSelector]
         if self.resource_types == [NodeType.Test]:
             return TestSelector(
                 graph=self.graph,
@@ -320,12 +320,13 @@ class RemoteListTask(


 class RemoteBuildProjectTask(RPCCommandTask[RPCBuildParameters], BuildTask):

     METHOD_NAME = 'build'

     def set_args(self, params: RPCBuildParameters) -> None:
-        self.args.resource_types = self._listify(params.resource_types)
-        self.args.select = self._listify(params.select)
+        if params.models:
+            self.args.select = self._listify(params.models)
+        else:
+            self.args.select = self._listify(params.select)
         self.args.exclude = self._listify(params.exclude)
         self.args.selector_name = params.selector
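The `set_args` change above makes the RPC build task honor `models` over `select` when both parameters are present. Reduced to its core, the fallback is just:

```python
from typing import List, Optional


def resolve_select_args(
    models: Optional[List[str]], select: Optional[List[str]]
) -> List[str]:
    # the models parameter takes precedence; fall back to select (possibly empty)
    if models:
        return list(models)
    return list(select or [])
```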
@@ -41,13 +41,7 @@ from dbt.exceptions import (
     FailFastException,
 )

-from dbt.graph import (
-    GraphQueue,
-    NodeSelector,
-    SelectionSpec,
-    parse_difference,
-    Graph
-)
+from dbt.graph import GraphQueue, NodeSelector, SelectionSpec, Graph
 from dbt.parser.manifest import ManifestLoader

 import dbt.exceptions
@@ -89,9 +83,6 @@ class ManifestTask(ConfiguredTask):


 class GraphRunnableTask(ManifestTask):

-    MARK_DEPENDENT_ERRORS_STATUSES = [NodeStatus.Error]
-
     def __init__(self, args, config):
         super().__init__(args, config)
         self.job_queue: Optional[GraphQueue] = None
@@ -112,27 +103,11 @@ class GraphRunnableTask(ManifestTask):
     def index_offset(self, value: int) -> int:
         return value

-    @property
+    @abstractmethod
-    def selection_arg(self):
-        return self.args.select
-
-    @property
-    def exclusion_arg(self):
-        return self.args.exclude
-
     def get_selection_spec(self) -> SelectionSpec:
-        default_selector_name = self.config.get_default_selector_name()
-        if self.args.selector_name:
-            # use pre-defined selector (--selector)
-            spec = self.config.get_selector(self.args.selector_name)
-        elif not (self.selection_arg or self.exclusion_arg) and default_selector_name:
-            # use pre-defined selector (--selector) with default: true
-            logger.info(f"Using default selector {default_selector_name}")
-            spec = self.config.get_selector(default_selector_name)
-        else:
-            # use --select and --exclude args
-            spec = parse_difference(self.selection_arg, self.exclusion_arg)
-        return spec
+        raise NotImplementedException(
+            f'get_selection_spec not implemented for task {type(self)}'
+        )

     @abstractmethod
     def get_node_selector(self) -> NodeSelector:
@@ -314,7 +289,7 @@ class GraphRunnableTask(ManifestTask):
         else:
             self.manifest.update_node(node)

-        if result.status in self.MARK_DEPENDENT_ERRORS_STATUSES:
+        if result.status == NodeStatus.Error:
             if is_ephemeral:
                 cause = result
             else:
@@ -438,7 +413,7 @@ class GraphRunnableTask(ManifestTask):
             )

         if len(self._flattened_nodes) == 0:
-            logger.warning("\nWARNING: Nothing to do. Try checking your model "
+            logger.warning("WARNING: Nothing to do. Try checking your model "
                           "configs and model specification args")
             result = self.get_result(
                 results=[],
@@ -96,5 +96,5 @@ def _get_dbt_plugins_info():
         yield plugin_name, mod.version


-__version__ = '0.21.0b2'
+__version__ = '0.21.0b1'
 installed = get_installed_version()
@@ -284,12 +284,12 @@ def parse_args(argv=None):
     parser.add_argument('adapter')
     parser.add_argument('--title-case', '-t', default=None)
     parser.add_argument('--dependency', action='append')
-    parser.add_argument('--dbt-core-version', default='0.21.0b2')
+    parser.add_argument('--dbt-core-version', default='0.21.0b1')
     parser.add_argument('--email')
     parser.add_argument('--author')
     parser.add_argument('--url')
     parser.add_argument('--sql', action='store_true')
-    parser.add_argument('--package-version', default='0.21.0b2')
+    parser.add_argument('--package-version', default='0.21.0b1')
     parser.add_argument('--project-version', default='1.0')
     parser.add_argument(
         '--no-dependency', action='store_false', dest='set_dependency'
@@ -24,7 +24,7 @@ def read(fname):


 package_name = "dbt-core"
-package_version = "0.21.0b2"
+package_version = "0.21.0b1"
 description = """dbt (data build tool) is a command line tool that helps \
 analysts and engineers transform data in their warehouse more effectively"""
@@ -1,75 +0,0 @@
-agate==1.6.1
-asn1crypto==1.4.0
-attrs==21.2.0
-azure-common==1.1.27
-azure-core==1.17.0
-azure-storage-blob==12.8.1
-Babel==2.9.1
-boto3==1.18.25
-botocore==1.21.25
-cachetools==4.2.2
-certifi==2021.5.30
-cffi==1.14.6
-chardet==4.0.0
-charset-normalizer==2.0.4
-colorama==0.4.4
-cryptography==3.4.7
-google-api-core==1.31.2
-google-auth==1.35.0
-google-cloud-bigquery==2.24.1
-google-cloud-core==1.7.2
-google-crc32c==1.1.2
-google-resumable-media==2.0.0
-googleapis-common-protos==1.53.0
-grpcio==1.39.0
-hologram==0.0.14
-idna==3.2
-importlib-metadata==4.6.4
-isodate==0.6.0
-jeepney==0.7.1
-Jinja2==2.11.3
-jmespath==0.10.0
-json-rpc==1.13.0
-jsonschema==3.1.1
-keyring==21.8.0
-leather==0.3.3
-Logbook==1.5.3
-MarkupSafe==2.0.1
-mashumaro==2.5
-minimal-snowplow-tracker==0.0.2
-msgpack==1.0.2
-msrest==0.6.21
-networkx==2.6.2
-oauthlib==3.1.1
-oscrypto==1.2.1
-packaging==20.9
-parsedatetime==2.6
-proto-plus==1.19.0
-protobuf==3.17.3
-psycopg2-binary==2.9.1
-pyasn1==0.4.8
-pyasn1-modules==0.2.8
-pycparser==2.20
-pycryptodomex==3.10.1
-PyJWT==2.1.0
-pyOpenSSL==20.0.1
-pyparsing==2.4.7
-pyrsistent==0.18.0
-python-dateutil==2.8.2
-python-slugify==5.0.2
-pytimeparse==1.1.8
-pytz==2021.1
-PyYAML==5.4.1
-requests==2.26.0
-requests-oauthlib==1.3.0
-rsa==4.7.2
-s3transfer==0.5.0
-SecretStorage==3.3.1
-six==1.16.0
-snowflake-connector-python==2.5.1
-sqlparse==0.3.1
-text-unidecode==1.3
-typing-extensions==3.10.0.0
-urllib3==1.26.6
-Werkzeug==2.0.1
-zipp==3.5.0
1 performance/runner/.gitignore vendored
@@ -1,3 +1,2 @@
 target/
 projects/*/logs
-plots/
performance/runner/Cargo.lock generated (662 changes)

@@ -2,12 +2,6 @@
 # It is not intended for manual editing.
 version = 3

-[[package]]
-name = "adler32"
-version = "1.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "aae1277d39aeec15cb388266ecc24b11c80469deae6067e17a1a7aa9e5c1f234"
-
 [[package]]
 name = "ansi_term"
 version = "0.11.0"
@@ -28,62 +22,12 @@ dependencies = [
  "winapi",
 ]

-[[package]]
-name = "autocfg"
-version = "1.0.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"
-
 [[package]]
 name = "bitflags"
 version = "1.2.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "cf1de2fe8c75bc145a2f577add951f8134889b4795d47466a54a5c846d691693"

-[[package]]
-name = "bumpalo"
-version = "3.7.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9c59e7af012c713f529e7a3ee57ce9b31ddd858d4b512923602f74608b009631"
-
-[[package]]
-name = "bytemuck"
-version = "1.7.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "72957246c41db82b8ef88a5486143830adeb8227ef9837740bdec67724cf2c5b"
-
-[[package]]
-name = "byteorder"
-version = "1.4.3"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610"
-
-[[package]]
-name = "cc"
-version = "1.0.70"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d26a6ce4b6a484fa3edb70f7efa6fc430fd2b87285fe8b84304fd0936faa0dc0"
-
-[[package]]
-name = "cfg-if"
-version = "1.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
-
-[[package]]
-name = "chrono"
-version = "0.4.19"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "670ad68c9088c2a963aaa298cb369688cf3f9465ce5e2d4ca10e6e0098a1ce73"
-dependencies = [
- "libc",
- "num-integer",
- "num-traits",
- "serde",
- "time",
- "winapi",
-]
-
 [[package]]
 name = "clap"
 version = "2.33.3"
@@ -99,230 +43,12 @@ dependencies = [
  "vec_map",
 ]

-[[package]]
-name = "cmake"
-version = "0.1.45"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "eb6210b637171dfba4cda12e579ac6dc73f5165ad56133e5d72ef3131f320855"
-dependencies = [
- "cc",
-]
-
-[[package]]
-name = "color_quant"
-version = "1.1.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3d7b894f5411737b7867f4827955924d7c254fc9f4d91a6aad6b097804b1018b"
-
-[[package]]
-name = "core-foundation"
-version = "0.9.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0a89e2ae426ea83155dccf10c0fa6b1463ef6d5fcb44cee0b224a408fa640a62"
-dependencies = [
- "core-foundation-sys",
- "libc",
-]
-
-[[package]]
-name = "core-foundation-sys"
-version = "0.8.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ea221b5284a47e40033bf9b66f35f984ec0ea2931eb03505246cd27a963f981b"
-
-[[package]]
-name = "core-graphics"
-version = "0.22.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "269f35f69b542b80e736a20a89a05215c0ce80c2c03c514abb2e318b78379d86"
-dependencies = [
- "bitflags",
- "core-foundation",
- "core-graphics-types",
- "foreign-types",
- "libc",
-]
-
-[[package]]
-name = "core-graphics-types"
-version = "0.1.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3a68b68b3446082644c91ac778bf50cd4104bfb002b5a6a7c44cca5a2c70788b"
-dependencies = [
- "bitflags",
- "core-foundation",
- "foreign-types",
- "libc",
-]
-
-[[package]]
-name = "core-text"
-version = "19.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "99d74ada66e07c1cefa18f8abfba765b486f250de2e4a999e5727fc0dd4b4a25"
-dependencies = [
- "core-foundation",
- "core-graphics",
- "foreign-types",
- "libc",
-]
-
-[[package]]
-name = "crc32fast"
-version = "1.2.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "81156fece84ab6a9f2afdb109ce3ae577e42b1228441eded99bd77f627953b1a"
-dependencies = [
- "cfg-if",
-]
-
-[[package]]
-name = "deflate"
-version = "0.8.6"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "73770f8e1fe7d64df17ca66ad28994a0a623ea497fa69486e14984e715c5d174"
-dependencies = [
- "adler32",
- "byteorder",
-]
-
-[[package]]
-name = "dirs-next"
-version = "2.0.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b98cf8ebf19c3d1b223e151f99a4f9f0690dca41414773390fc824184ac833e1"
-dependencies = [
- "cfg-if",
- "dirs-sys-next",
-]
-
-[[package]]
-name = "dirs-sys-next"
-version = "0.1.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4ebda144c4fe02d1f7ea1a7d9641b6fc6b580adcfa024ae48797ecdeb6825b4d"
-dependencies = [
- "libc",
- "redox_users",
- "winapi",
-]
-
-[[package]]
-name = "dwrote"
-version = "0.11.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "439a1c2ba5611ad3ed731280541d36d2e9c4ac5e7fb818a27b604bdc5a6aa65b"
-dependencies = [
- "lazy_static",
- "libc",
- "winapi",
- "wio",
-]
-
 [[package]]
 name = "either"
 version = "1.6.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "e78d4f1cc4ae33bbfc157ed5d5a5ef3bc29227303d595861deb238fcec4e9457"

-[[package]]
-name = "expat-sys"
-version = "2.1.6"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "658f19728920138342f68408b7cf7644d90d4784353d8ebc32e7e8663dbe45fa"
-dependencies = [
- "cmake",
- "pkg-config",
-]
-
-[[package]]
-name = "float-ord"
-version = "0.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7bad48618fdb549078c333a7a8528acb57af271d0433bdecd523eb620628364e"
-
-[[package]]
-name = "font-kit"
-version = "0.10.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "46c9a156ec38864999bc9c4156e5f3b50224d4a5578028a64e5a3875caa9ee28"
-dependencies = [
- "bitflags",
- "byteorder",
- "core-foundation",
- "core-graphics",
- "core-text",
- "dirs-next",
- "dwrote",
- "float-ord",
- "freetype",
- "lazy_static",
- "libc",
- "log",
- "pathfinder_geometry",
- "pathfinder_simd",
- "servo-fontconfig",
- "walkdir",
- "winapi",
-]
-
-[[package]]
-name = "foreign-types"
-version = "0.3.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f6f339eb8adc052cd2ca78910fda869aefa38d22d5cb648e6485e4d3fc06f3b1"
-dependencies = [
- "foreign-types-shared",
-]
-
-[[package]]
-name = "foreign-types-shared"
-version = "0.1.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "00b0228411908ca8685dba7fc2cdd70ec9990a6e753e89b6ac91a84c40fbaf4b"
-
-[[package]]
-name = "freetype"
-version = "0.7.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bee38378a9e3db1cc693b4f88d166ae375338a0ff75cb8263e1c601d51f35dc6"
-dependencies = [
- "freetype-sys",
- "libc",
-]
-
-[[package]]
-name = "freetype-sys"
-version = "0.13.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a37d4011c0cc628dfa766fcc195454f4b068d7afdc2adfd28861191d866e731a"
-dependencies = [
- "cmake",
- "libc",
- "pkg-config",
-]
-
-[[package]]
-name = "getrandom"
-version = "0.2.3"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7fcd999463524c52659517fe2cea98493cfe485d10565e7b0fb07dbba7ad2753"
-dependencies = [
- "cfg-if",
- "libc",
- "wasi",
-]
-
-[[package]]
-name = "gif"
-version = "0.11.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5a668f699973d0f573d15749b7002a9ac9e1f9c6b220e7b165601334c173d8de"
-dependencies = [
- "color_quant",
- "weezl",
-]
-
 [[package]]
 name = "heck"
 version = "0.3.3"
@@ -341,22 +67,6 @@ dependencies = [
  "libc",
 ]

-[[package]]
-name = "image"
-version = "0.23.14"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "24ffcb7e7244a9bf19d35bf2883b9c080c4ced3c07a9895572178cdb8f13f6a1"
-dependencies = [
- "bytemuck",
- "byteorder",
- "color_quant",
- "jpeg-decoder",
- "num-iter",
- "num-rational",
- "num-traits",
- "png",
-]
-
 [[package]]
 name = "itertools"
 version = "0.10.1"
@@ -372,21 +82,6 @@ version = "0.4.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "dd25036021b0de88a0aff6b850051563c6516d0bf53f8638938edbb9de732736"

-[[package]]
-name = "jpeg-decoder"
-version = "0.1.22"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "229d53d58899083193af11e15917b5640cd40b29ff475a1fe4ef725deb02d0f2"
-
-[[package]]
-name = "js-sys"
-version = "0.3.54"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1866b355d9c878e5e607473cbe3f63282c0b7aad2db1dbebf55076c686918254"
-dependencies = [
- "wasm-bindgen",
-]
-
 [[package]]
 name = "lazy_static"
 version = "1.4.0"
@@ -399,157 +94,6 @@ version = "0.2.98"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "320cfe77175da3a483efed4bc0adc1968ca050b098ce4f2f1c13a56626128790"

-[[package]]
-name = "log"
-version = "0.4.14"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "51b9bbe6c47d51fc3e1a9b945965946b4c44142ab8792c50835a980d362c2710"
-dependencies = [
- "cfg-if",
-]
-
-[[package]]
-name = "miniz_oxide"
-version = "0.3.7"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "791daaae1ed6889560f8c4359194f56648355540573244a5448a83ba1ecc7435"
-dependencies = [
- "adler32",
-]
-
-[[package]]
-name = "num-integer"
-version = "0.1.44"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d2cc698a63b549a70bc047073d2949cce27cd1c7b0a4a862d08a8031bc2801db"
-dependencies = [
- "autocfg",
- "num-traits",
-]
-
-[[package]]
-name = "num-iter"
-version = "0.1.42"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b2021c8337a54d21aca0d59a92577a029af9431cb59b909b03252b9c164fad59"
-dependencies = [
- "autocfg",
- "num-integer",
- "num-traits",
-]
-
-[[package]]
-name = "num-rational"
-version = "0.3.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "12ac428b1cb17fce6f731001d307d351ec70a6d202fc2e60f7d4c5e42d8f4f07"
-dependencies = [
- "autocfg",
- "num-integer",
- "num-traits",
-]
-
-[[package]]
-name = "num-traits"
-version = "0.2.14"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9a64b1ec5cda2586e284722486d802acf1f7dbdc623e2bfc57e65ca1cd099290"
-dependencies = [
- "autocfg",
-]
-
-[[package]]
-name = "pathfinder_geometry"
-version = "0.5.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0b7b7e7b4ea703700ce73ebf128e1450eb69c3a8329199ffbfb9b2a0418e5ad3"
-dependencies = [
- "log",
- "pathfinder_simd",
-]
-
-[[package]]
-name = "pathfinder_simd"
-version = "0.5.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "39fe46acc5503595e5949c17b818714d26fdf9b4920eacf3b2947f0199f4a6ff"
-dependencies = [
- "rustc_version",
-]
-
-[[package]]
-name = "pest"
-version = "2.1.3"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "10f4872ae94d7b90ae48754df22fd42ad52ce740b8f370b03da4835417403e53"
-dependencies = [
- "ucd-trie",
-]
-
-[[package]]
-name = "pkg-config"
-version = "0.3.19"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3831453b3449ceb48b6d9c7ad7c96d5ea673e9b470a1dc578c2ce6521230884c"
-
-[[package]]
-name = "plotters"
-version = "0.3.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "32a3fd9ec30b9749ce28cd91f255d569591cdf937fe280c312143e3c4bad6f2a"
-dependencies = [
- "chrono",
- "font-kit",
- "image",
- "lazy_static",
- "num-traits",
- "pathfinder_geometry",
- "plotters-backend",
- "plotters-bitmap",
- "plotters-svg",
- "ttf-parser",
- "wasm-bindgen",
- "web-sys",
-]
-
-[[package]]
-name = "plotters-backend"
-version = "0.3.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d88417318da0eaf0fdcdb51a0ee6c3bed624333bff8f946733049380be67ac1c"
-
-[[package]]
-name = "plotters-bitmap"
-version = "0.3.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "21362fa905695e5618aefd169358f52e0e8bc4a8e05333cf780fda8cddc00b54"
-dependencies = [
- "gif",
- "image",
- "plotters-backend",
-]
-
-[[package]]
-name = "plotters-svg"
-version = "0.3.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "521fa9638fa597e1dc53e9412a4f9cefb01187ee1f7413076f9e6749e2885ba9"
-dependencies = [
- "plotters-backend",
-]
-
-[[package]]
-name = "png"
-version = "0.16.8"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3c3287920cb847dee3de33d301c463fba14dda99db24214ddf93f83d3021f4c6"
-dependencies = [
- "bitflags",
- "crc32fast",
- "deflate",
- "miniz_oxide",
-]
-
 [[package]]
 name = "proc-macro-error"
 version = "1.0.4"
@@ -592,80 +136,23 @@ dependencies = [
  "proc-macro2",
 ]

-[[package]]
-name = "redox_syscall"
-version = "0.2.10"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8383f39639269cde97d255a32bdb68c047337295414940c68bdd30c2e13203ff"
-dependencies = [
- "bitflags",
-]
-
-[[package]]
-name = "redox_users"
-version = "0.4.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "528532f3d801c87aec9def2add9ca802fe569e44a544afe633765267840abe64"
-dependencies = [
- "getrandom",
- "redox_syscall",
-]
-
 [[package]]
 name = "runner"
 version = "0.1.0"
 dependencies = [
- "chrono",
  "itertools",
- "plotters",
  "serde",
  "serde_json",
  "structopt",
  "thiserror",
 ]

-[[package]]
-name = "rustc_version"
-version = "0.3.3"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f0dfe2087c51c460008730de8b57e6a320782fbfb312e1f4d520e6c6fae155ee"
-dependencies = [
- "semver",
-]
-
 [[package]]
 name = "ryu"
 version = "1.0.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "71d301d4193d031abdd79ff7e3dd721168a9572ef3fe51a1517aba235bd8f86e"

-[[package]]
-name = "same-file"
-version = "1.0.6"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502"
-dependencies = [
- "winapi-util",
-]
-
-[[package]]
-name = "semver"
-version = "0.11.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f301af10236f6df4160f7c3f04eec6dbc70ace82d23326abad5edee88801c6b6"
-dependencies = [
- "semver-parser",
-]
-
-[[package]]
-name = "semver-parser"
-version = "0.10.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "00b0bef5b7f9e0df16536d3961cfb6e84331c065b4066afb39768d0e319411f7"
-dependencies = [
- "pest",
-]
-
 [[package]]
 name = "serde"
 version = "1.0.127"
@@ -697,27 +184,6 @@ dependencies = [
  "serde",
 ]

-[[package]]
-name = "servo-fontconfig"
-version = "0.5.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c7e3e22fe5fd73d04ebf0daa049d3efe3eae55369ce38ab16d07ddd9ac5c217c"
-dependencies = [
- "libc",
- "servo-fontconfig-sys",
-]
-
-[[package]]
-name = "servo-fontconfig-sys"
-version = "5.1.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e36b879db9892dfa40f95da1c38a835d41634b825fbd8c4c418093d53c24b388"
-dependencies = [
- "expat-sys",
- "freetype-sys",
- "pkg-config",
-]
-
 [[package]]
 name = "strsim"
 version = "0.8.0"
@@ -788,29 +254,6 @@ dependencies = [
  "syn",
 ]

-[[package]]
-name = "time"
-version = "0.1.44"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6db9e6914ab8b1ae1c260a4ae7a49b6c5611b40328a735b21862567685e73255"
-dependencies = [
- "libc",
- "wasi",
- "winapi",
-]
-
-[[package]]
-name = "ttf-parser"
-version = "0.12.3"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7ae2f58a822f08abdaf668897e96a5656fe72f5a9ce66422423e8849384872e6"
-
-[[package]]
-name = "ucd-trie"
-version = "0.1.3"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "56dee185309b50d1f11bfedef0fe6d036842e3fb77413abef29f8f8d1c5d4c1c"
-
 [[package]]
 name = "unicode-segmentation"
 version = "1.8.0"
@@ -841,93 +284,6 @@ version = "0.9.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "5fecdca9a5291cc2b8dcf7dc02453fee791a280f3743cb0905f8822ae463b3fe"

-[[package]]
-name = "walkdir"
-version = "2.3.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "808cf2735cd4b6866113f648b791c6adc5714537bc222d9347bb203386ffda56"
-dependencies = [
- "same-file",
- "winapi",
- "winapi-util",
-]
-
-[[package]]
-name = "wasi"
-version = "0.10.0+wasi-snapshot-preview1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1a143597ca7c7793eff794def352d41792a93c481eb1042423ff7ff72ba2c31f"
-
-[[package]]
-name = "wasm-bindgen"
-version = "0.2.77"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5e68338db6becec24d3c7977b5bf8a48be992c934b5d07177e3931f5dc9b076c"
-dependencies = [
- "cfg-if",
- "wasm-bindgen-macro",
-]
-
-[[package]]
-name = "wasm-bindgen-backend"
-version = "0.2.77"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f34c405b4f0658583dba0c1c7c9b694f3cac32655db463b56c254a1c75269523"
-dependencies = [
- "bumpalo",
- "lazy_static",
- "log",
- "proc-macro2",
- "quote",
- "syn",
- "wasm-bindgen-shared",
-]
-
-[[package]]
-name = "wasm-bindgen-macro"
-version = "0.2.77"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b9d5a6580be83b19dc570a8f9c324251687ab2184e57086f71625feb57ec77c8"
-dependencies = [
- "quote",
- "wasm-bindgen-macro-support",
-]
-
-[[package]]
-name = "wasm-bindgen-macro-support"
-version = "0.2.77"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e3775a030dc6f5a0afd8a84981a21cc92a781eb429acef9ecce476d0c9113e92"
-dependencies = [
- "proc-macro2",
- "quote",
- "syn",
- "wasm-bindgen-backend",
- "wasm-bindgen-shared",
-]
-
-[[package]]
-name = "wasm-bindgen-shared"
-version = "0.2.77"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c279e376c7a8e8752a8f1eaa35b7b0bee6bb9fb0cdacfa97cc3f1f289c87e2b4"
-
-[[package]]
-name = "web-sys"
-version = "0.3.54"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0a84d70d1ec7d2da2d26a5bd78f4bca1b8c3254805363ce743b7a05bc30d195a"
-dependencies = [
- "js-sys",
- "wasm-bindgen",
-]
-
-[[package]]
-name = "weezl"
-version = "0.1.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d8b77fdfd5a253be4ab714e4ffa3c49caf146b4de743e97510c0656cf90f1e8e"
-
 [[package]]
 name = "winapi"
 version = "0.3.9"
@@ -944,26 +300,8 @@ version = "0.4.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"

-[[package]]
-name = "winapi-util"
-version = "0.1.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178"
-dependencies = [
- "winapi",
-]
-
 [[package]]
 name = "winapi-x86_64-pc-windows-gnu"
 version = "0.4.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
-
-[[package]]
-name = "wio"
-version = "0.2.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5d129932f4644ac2396cb456385cbf9e63b5b30c6e8dc4820bdca4eb082037a5"
-dependencies = [
- "winapi",
-]
performance/runner/Cargo.toml

@@ -4,9 +4,7 @@ version = "0.1.0"
 edition = "2018"

 [dependencies]
-chrono = { version = "0.4.19", features = ["serde"] }
 itertools = "0.10.1"
-plotters = "^0.3.1"
 serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
 structopt = "0.3"
performance/runner/src/calculate.rs

@@ -1,5 +1,4 @@
 use crate::exceptions::{CalculateError, IOError};
-use chrono::prelude::*;
 use itertools::Itertools;
 use serde::{Deserialize, Serialize};
 use std::fs;
@@ -46,7 +45,6 @@ pub struct Data {
 pub struct Calculation {
     pub metric: String,
     pub regression: bool,
-    pub ts: DateTime<Utc>,
     pub data: Data,
 }

@@ -62,11 +60,6 @@ pub struct MeasurementGroup {
 // Given two measurements, return all the calculations. Calculations are
 // flagged as regressions or not regressions.
 fn calculate(metric: &str, dev: &Measurement, baseline: &Measurement) -> Vec<Calculation> {
-    // choosing the current timestamp for all calculations to be the same.
-    // this timestamp is not from the time of measurement becuase hyperfine
-    // controls that. Since calculation is run directly after, this is fine.
-    let ts = Utc::now();
-
     let median_threshold = 1.05; // 5% regression threshold
     let median_difference = dev.median / baseline.median;

@@ -77,7 +70,6 @@ fn calculate(metric: &str, dev: &Measurement, baseline: &Measurement) -> Vec<Cal
         Calculation {
             metric: ["median", metric].join("_"),
             regression: median_difference > median_threshold,
-            ts: ts,
             data: Data {
                 threshold: median_threshold,
                 difference: median_difference,
@@ -88,7 +80,6 @@ fn calculate(metric: &str, dev: &Measurement, baseline: &Measurement) -> Vec<Cal
         Calculation {
             metric: ["stddev", metric].join("_"),
             regression: stddev_difference > stddev_threshold,
-            ts: ts,
             data: Data {
                 threshold: stddev_threshold,
                 difference: stddev_difference,
|
|||||||
@@ -42,28 +42,6 @@ pub enum CalculateError {
     BadBranchNameErr(String, String),
 }
 
-// Parent exception type for the different sub commands of the runner app.
-#[derive(Debug, Error)]
-pub enum PlotError {
-    #[error("{}", .0)]
-    PlotIOErr(IOError),
-    #[error("FilenameNotTimestampErr: {}", .0)]
-    FilenameNotTimestampErr(String),
-    #[error("BadJSONErr: JSON in file cannot be deserialized as expected.\nFilepath: {}\nOriginating Exception: {}", .0.to_string_lossy().into_owned(), .1.as_ref().map_or("None".to_owned(), |e| format!("{}", e)))]
-    BadJSONErr(PathBuf, Option<serde_json::Error>),
-    #[error("ChartErr: {}", .0)]
-    ChartErr(Box<dyn std::error::Error>),
-}
-
-// Parent exception type for the different sub commands of the runner app.
-#[derive(Debug, Error)]
-pub enum RunnerError {
-    #[error("CalculateErr: {}", .0)]
-    CalculateErr(CalculateError),
-    #[error("PlotErr: {}", .0)]
-    PlotErr(PlotError),
-}
-
 // Tests for exceptions
 #[cfg(test)]
 mod tests {
@@ -3,12 +3,10 @@ extern crate structopt;
 mod calculate;
 mod exceptions;
 mod measure;
-mod plot;
 
 use crate::calculate::Calculation;
-use crate::exceptions::{CalculateError, RunnerError};
-use chrono::offset::Utc;
-use std::fs::{metadata, File};
+use crate::exceptions::CalculateError;
+use std::fs::File;
 use std::io::Write;
 use std::path::PathBuf;
 use structopt::StructOpt;
@@ -31,11 +29,7 @@ enum Opt {
         #[structopt(parse(from_os_str))]
         #[structopt(short)]
         results_dir: PathBuf,
-        #[structopt(parse(from_os_str))]
-        #[structopt(short)]
-        out_dir: PathBuf,
     },
-    Plot,
 }
 
 // enables proper usage of exit() in main.
@@ -43,7 +37,7 @@ enum Opt {
 //
 // This is where all the printing should happen. Exiting happens
 // in main, and module functions should only return values.
-fn run_app() -> Result<i32, RunnerError> {
+fn run_app() -> Result<i32, CalculateError> {
     // match what the user inputs from the cli
     match Opt::from_args() {
         // measure subcommand
@@ -54,8 +48,7 @@ fn run_app() -> Result<i32, RunnerError> {
             // if there are any nonzero exit codes from the hyperfine runs,
             // return the first one. otherwise return zero.
             measure::measure(&projects_dir, &branch_name)
-                .or_else(|e| Err(CalculateError::CalculateIOError(e)))
-                .or_else(|e| Err(RunnerError::CalculateErr(e)))?
+                .or_else(|e| Err(CalculateError::CalculateIOError(e)))?
                 .iter()
                 .map(|status| status.code())
                 .flatten()
@@ -69,21 +62,9 @@ fn run_app() -> Result<i32, RunnerError> {
         }
 
         // calculate subcommand
-        Opt::Calculate {
-            results_dir,
-            out_dir,
-        } => {
-            // validate output directory and exit early if it won't work.
-            let md = metadata(&out_dir)
-                .expect("Main: Failed to read specified output directory metadata. Does it exist?");
-            if !md.is_dir() {
-                eprintln!("Main: Output directory is not a directory");
-                return Ok(1);
-            }
-
+        Opt::Calculate { results_dir } => {
             // get all the calculations or gracefully show the user an exception
-            let calculations = calculate::regressions(&results_dir)
-                .or_else(|e| Err(RunnerError::CalculateErr(e)))?;
+            let calculations = calculate::regressions(&results_dir)?;
 
             // print all calculations to stdout so they can be easily debugged
             // via CI.
@@ -96,18 +77,9 @@ fn run_app() -> Result<i32, RunnerError> {
             let json_calcs = serde_json::to_string_pretty(&calculations)
                 .expect("Main: Failed to serialize calculations to json");
 
-            // if there are any calculations, use the first timestamp, if there are none
-            // just use the current time.
-            let ts = calculations
-                .first()
-                .map_or_else(|| Utc::now(), |calc| calc.ts);
-
             // create the empty destination file, and write the json string
-            let outfile = &mut out_dir.into_os_string();
-            outfile.push("/final_calculations_");
-            outfile.push(ts.timestamp().to_string());
-            outfile.push(".json");
-
+            let outfile = &mut results_dir.into_os_string();
+            outfile.push("/final_calculations.json");
             let mut f = File::create(outfile).expect("Main: Unable to create file");
             f.write_all(json_calcs.as_bytes())
                 .expect("Main: Unable to write data");
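The hunk above drops the per-run timestamped output name in favor of a fixed `final_calculations.json`, matching the workflow change that now uploads a single file. A minimal Python sketch of the two naming schemes (function names are illustrative, not from the repo):

```python
def timestamped_name(out_dir: str, ts: int) -> str:
    # old scheme: one file per run, keyed by unix timestamp
    return f"{out_dir}/final_calculations_{ts}.json"

def fixed_name(results_dir: str) -> str:
    # new scheme: a single well-known filename the CI workflow can upload
    return f"{results_dir}/final_calculations.json"
```

The fixed name trades history-keeping for a stable artifact path that the `actions/upload-artifact` step can reference directly.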
@@ -133,13 +105,6 @@ fn run_app() -> Result<i32, RunnerError> {
 }
 }
 }
-
-        // plot subcommand
-        Opt::Plot => {
-            plot::draw_plot().or_else(|e| Err(RunnerError::PlotErr(e)))?;
-
-            Ok(0)
-        }
 }
 }
 
@@ -1,189 +0,0 @@
-use crate::calculate::Calculation;
-use crate::exceptions::{IOError, PlotError};
-use chrono::prelude::*;
-use itertools::Itertools;
-use plotters::prelude::*;
-use std::cmp::Ordering;
-use std::fs;
-use std::fs::DirEntry;
-use std::path::{Path, PathBuf};
-
-struct Graph {
-    title: String,
-    data: Vec<(f32, f32)>,
-}
-
-impl Graph {
-    const DEFAULT_MIN_Y: f32 = -15.0;
-    const DEFAULT_MAX_Y: f32 = 15.0;
-    const DEFAULT_X_PADDING: f32 = 86400.0;
-
-    fn min_x(&self) -> f32 {
-        self.data
-            .clone()
-            .into_iter()
-            .map(|(x, _)| x)
-            .reduce(f32::min)
-            .unwrap()
-            - Graph::DEFAULT_X_PADDING
-    }
-    fn min_y(&self) -> f32 {
-        let min_data_point = self
-            .data
-            .clone()
-            .into_iter()
-            .map(|(_, y)| y)
-            .reduce(f32::min)
-            .unwrap();
-        f32::min(Graph::DEFAULT_MIN_Y, min_data_point)
-    }
-    fn max_x(&self) -> f32 {
-        self.data
-            .clone()
-            .into_iter()
-            .map(|(x, _)| x)
-            .reduce(f32::max)
-            .unwrap()
-            + Graph::DEFAULT_X_PADDING
-    }
-    fn max_y(&self) -> f32 {
-        let max_data_point = self
-            .data
-            .clone()
-            .into_iter()
-            .map(|(_, y)| y)
-            .reduce(f32::max)
-            .unwrap();
-        f32::max(Graph::DEFAULT_MAX_Y, max_data_point)
-    }
-}
-
-pub fn draw_plot() -> Result<(), PlotError> {
-    // TODO `as` type coercion sucks. swap it out for something safer.
-    let mut sorted_data: Vec<(NaiveDateTime, Calculation)> =
-        read_data(Path::new("plots/raw_data/"))?;
-    sorted_data.sort_by(|(ts_x, x), (ts_y, y)| {
-        // sort by calculation type, then by timestamp
-        match (&x.metric).cmp(&y.metric) {
-            Ordering::Equal => (&ts_x).cmp(&ts_y),
-            x => x,
-        }
-    });
-
-    let data_lines: Vec<Graph> = sorted_data
-        .into_iter()
-        .group_by(|(_, calc)| calc.metric.clone())
-        .into_iter()
-        .map(|(title, line)| Graph {
-            title: title.to_owned(),
-            data: line
-                .map(|(ts, calc)| (ts.timestamp() as f32, calc.data.difference as f32))
-                .collect(),
-        })
-        .collect();
-
-    for graph in data_lines {
-        let title = format!("plots/{}.png", graph.title);
-        let root = BitMapBackend::new(&title, (1600, 1200)).into_drawing_area();
-        root.fill(&WHITE)
-            .or_else(|e| Err(PlotError::ChartErr(Box::new(e))))?;
-        let root = root.margin(10, 10, 10, 10);
-
-        // build chart foundation
-        let mut chart = ChartBuilder::on(&root)
-            .caption(&graph.title, ("sans-serif", 40).into_font())
-            .x_label_area_size(20)
-            .y_label_area_size(40)
-            .build_cartesian_2d(graph.min_x()..graph.max_x(), graph.min_y()..graph.max_y())
-            .or_else(|e| Err(PlotError::ChartErr(Box::new(e))))?;
-
-        // Draw Mesh
-        chart
-            .configure_mesh()
-            .x_labels(5)
-            .y_labels(5)
-            .y_label_formatter(&|x| format!("{:.3}", x))
-            .draw()
-            .or_else(|e| Err(PlotError::ChartErr(Box::new(e))))?;
-
-        // Draw Line
-        chart
-            .draw_series(LineSeries::new(graph.data.clone(), &RED))
-            .or_else(|e| Err(PlotError::ChartErr(Box::new(e))))?;
-
-        // Draw Points on Line
-        chart
-            .draw_series(PointSeries::of_element(
-                graph.data.clone(),
-                5,
-                &RED,
-                &|c, s, st| {
-                    return EmptyElement::at(c)
-                        + Circle::new((0, 0), s, st.filled())
-                        + Text::new(format!("{:?}", c), (10, 0), ("sans-serif", 20).into_font());
-                },
-            ))
-            .or_else(|e| Err(PlotError::ChartErr(Box::new(e))))?;
-    }
-
-    Ok(())
-}
-
-fn read_data(results_directory: &Path) -> Result<Vec<(NaiveDateTime, Calculation)>, PlotError> {
-    fs::read_dir(results_directory)
-        .or_else(|e| Err(IOError::ReadErr(results_directory.to_path_buf(), Some(e))))
-        .or_else(|e| Err(PlotError::PlotIOErr(e)))?
-        .into_iter()
-        .map(|entry| {
-            let ent: DirEntry = entry
-                .or_else(|e| Err(IOError::ReadErr(results_directory.to_path_buf(), Some(e))))
-                .or_else(|e| Err(PlotError::PlotIOErr(e)))?;
-
-            Ok(ent.path())
-        })
-        .collect::<Result<Vec<PathBuf>, PlotError>>()?
-        .iter()
-        .filter(|path| {
-            path.extension()
-                .and_then(|ext| ext.to_str())
-                .map_or(false, |ext| ext.ends_with("json"))
-        })
-        .map(|p| {
-            // TODO pull this filename nonsense out into a lib fn
-            let filename = p
-                .file_stem()
-                .ok_or_else(|| IOError::MissingFilenameErr(p.to_path_buf()))
-                .and_then(|name| {
-                    name.to_str()
-                        .ok_or_else(|| IOError::FilenameNotUnicodeErr(p.to_path_buf()))
-                })
-                .or_else(|e| Err(PlotError::PlotIOErr(e)));
-
-            let timestamp: Result<NaiveDateTime, PlotError> = filename.and_then(|fname| {
-                fname
-                    .parse::<i64>()
-                    // not a timestamp because it's not a number
-                    .or_else(|_| Err(PlotError::FilenameNotTimestampErr(fname.to_owned())))
-                    .and_then(|secs| {
-                        // not a timestamp because the number is out of range
-                        NaiveDateTime::from_timestamp_opt(secs, 0)
-                            .ok_or_else(|| PlotError::FilenameNotTimestampErr(fname.to_owned()))
-                    })
-            });
-
-            let x: Result<Vec<(NaiveDateTime, Calculation)>, PlotError> =
-                timestamp.and_then(|ts| {
-                    fs::read_to_string(p)
-                        .or_else(|e| Err(IOError::BadFileContentsErr(p.clone(), Some(e))))
-                        .or_else(|e| Err(PlotError::PlotIOErr(e)))
-                        .and_then(|contents| {
-                            serde_json::from_str::<Vec<Calculation>>(&contents)
-                                .or_else(|e| Err(PlotError::BadJSONErr(p.clone(), Some(e))))
-                                .map(|calcs| calcs.iter().map(|c| (ts, c.clone())).collect())
-                        })
-                });
-            x
-        })
-        .collect::<Result<Vec<Vec<(NaiveDateTime, Calculation)>>, PlotError>>()
-        .map(|x| x.concat())
-}
@@ -1 +1 @@
-version = '0.21.0b2'
+version = '0.21.0b1'
@@ -83,7 +83,6 @@ class BigQueryCredentials(Credentials):
     # BigQuery allows an empty database / project, where it defers to the
     # environment for the project
     database: Optional[str]
-    execution_project: Optional[str] = None
     timeout_seconds: Optional[int] = 300
     location: Optional[str] = None
     priority: Optional[Priority] = None
@@ -131,9 +130,6 @@ class BigQueryCredentials(Credentials):
         if 'database' not in d:
             _, database = get_bigquery_defaults()
             d['database'] = database
-        # `execution_project` default to dataset/project
-        if 'execution_project' not in d:
-            d['execution_project'] = d['database']
         return d
 
 
@@ -256,12 +252,12 @@ class BigQueryConnectionManager(BaseConnectionManager):
             cls.get_impersonated_bigquery_credentials(profile_credentials)
         else:
             creds = cls.get_bigquery_credentials(profile_credentials)
-        execution_project = profile_credentials.execution_project
+        database = profile_credentials.database
         location = getattr(profile_credentials, 'location', None)
 
         info = client_info.ClientInfo(user_agent=f'dbt-{dbt_version}')
         return google.cloud.bigquery.Client(
-            execution_project,
+            database,
             creds,
             location=location,
             client_info=info,
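The two hunks above revert the `execution_project` feature: the credential field disappears and the BigQuery client is constructed from `database` again. For reference, the defaulting logic being removed behaves like this minimal sketch (`translate` is an illustrative stand-in for the credentials post-processing hook, not dbt's actual method name):

```python
def translate(d: dict) -> dict:
    # before the revert: an omitted execution_project fell back to the
    # billing/database project from the same profile dict
    if 'execution_project' not in d:
        d['execution_project'] = d['database']
    return d
```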
@@ -20,7 +20,7 @@ except ImportError:
 
 
 package_name = "dbt-bigquery"
-package_version = "0.21.0b2"
+package_version = "0.21.0b1"
 description = """The bigquery adapter plugin for dbt (data build tool)"""
 
 this_directory = os.path.abspath(os.path.dirname(__file__))
@@ -1 +1 @@
-version = '0.21.0b2'
+version = '0.21.0b1'
@@ -41,7 +41,7 @@ def _dbt_psycopg2_name():
 
 
 package_name = "dbt-postgres"
-package_version = "0.21.0b2"
+package_version = "0.21.0b1"
 description = """The postgres adapter plugin for dbt (data build tool)"""
 
 this_directory = os.path.abspath(os.path.dirname(__file__))
@@ -1 +1 @@
-version = '0.21.0b2'
+version = '0.21.0b1'
@@ -20,7 +20,7 @@ except ImportError:
 
 
 package_name = "dbt-redshift"
-package_version = "0.21.0b2"
+package_version = "0.21.0b1"
 description = """The redshift adapter plugin for dbt (data build tool)"""
 
 this_directory = os.path.abspath(os.path.dirname(__file__))
@@ -1 +1 @@
-version = '0.21.0b2'
+version = '0.21.0b1'
@@ -20,7 +20,7 @@ except ImportError:
 
 
 package_name = "dbt-snowflake"
-package_version = "0.21.0b2"
+package_version = "0.21.0b1"
 description = """The snowflake adapter plugin for dbt (data build tool)"""
 
 this_directory = os.path.abspath(os.path.dirname(__file__))
setup.py
@@ -24,7 +24,7 @@ with open(os.path.join(this_directory, 'README.md')) as f:
 
 
 package_name = "dbt"
-package_version = "0.21.0b2"
+package_version = "0.21.0b1"
 description = """With dbt, data analysts and engineers can build analytics \
 the way engineers build applications."""
 
@@ -1 +0,0 @@
-select 1 as id
@@ -1,10 +0,0 @@
-version: 2
-models:
-  - name: model
-    description: |
-      I'm testing the profile execution_project
-    tests:
-      - project_for_job_id:
-          region: region-us
-          project_id: "{{ project_id}}"
-          unique_schema_id: "{{ unique_schema_id }}"
@@ -1,7 +0,0 @@
-{% test project_for_job_id(model, region, unique_schema_id, project_id) %}
-select 1
-from `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
-where date(creation_time) = current_date
-and job_project = {{project_id}}
-and destination_table.dataset_id = {{unique_schema_id}}
-{% endtest %}
@@ -1,23 +0,0 @@
-import os
-from test.integration.base import DBTIntegrationTest, use_profile
-
-
-class TestAlternateExecutionProjectBigQueryRun(DBTIntegrationTest):
-    @property
-    def schema(self):
-        return "bigquery_test_022"
-
-    @property
-    def models(self):
-        return "execution-project-models"
-
-    @use_profile('bigquery')
-    def test__bigquery_execute_project(self):
-        results = self.run_dbt(['run', '--models', 'model'])
-        self.assertEqual(len(results), 1)
-        execution_project = os.environ['BIGQUERY_TEST_ALT_DATABASE']
-        self.run_dbt(['test',
-                      '--target', 'alternate',
-                      '--vars', '{ project_id: %s, unique_schema_id: %s }'
-                      % (execution_project, self.unique_schema())],
-                     expect_pass=False)
@@ -272,21 +272,7 @@ class TestSourceFreshness(SuccessfulSourcesTest):
             'warn_after': {'count': 10, 'period': 'hour'},
             'error_after': {'count': 18, 'period': 'hour'},
         },
-        'adapter_response': {},
-        'thread_id': AnyStringWith('Thread-'),
-        'execution_time': AnyFloat(),
-        'timing': [
-            {
-                'name': 'compile',
-                'started_at': AnyStringWith(),
-                'completed_at': AnyStringWith(),
-            },
-            {
-                'name': 'execute',
-                'started_at': AnyStringWith(),
-                'completed_at': AnyStringWith(),
-            }
-        ]
+        'adapter_response': {}
     }
 ])
 
@@ -1,5 +1,4 @@
 from test.integration.base import DBTIntegrationTest, FakeArgs, use_profile
-import yaml
 
 from dbt.task.test import TestTask
 from dbt.task.list import ListTask
@@ -21,18 +20,12 @@ class TestSelectionExpansion(DBTIntegrationTest):
         "test-paths": ["tests"]
     }
 
-    def list_tests_and_assert(self, include, exclude, expected_tests, greedy=False, selector_name=None):
+    def list_tests_and_assert(self, include, exclude, expected_tests):
         list_args = [ 'ls', '--resource-type', 'test']
         if include:
             list_args.extend(('--select', include))
         if exclude:
             list_args.extend(('--exclude', exclude))
-        if exclude:
-            list_args.extend(('--exclude', exclude))
-        if greedy:
-            list_args.append('--greedy')
-        if selector_name:
-            list_args.extend(('--selector', selector_name))
 
         listed = self.run_dbt(list_args)
         assert len(listed) == len(expected_tests)
@@ -41,7 +34,7 @@ class TestSelectionExpansion(DBTIntegrationTest):
         assert sorted(test_names) == sorted(expected_tests)
 
     def run_tests_and_assert(
-        self, include, exclude, expected_tests, schema=False, data=False, greedy=False, selector_name=None
+        self, include, exclude, expected_tests, schema = False, data = False
     ):
         results = self.run_dbt(['run'])
         self.assertEqual(len(results), 2)
@@ -55,10 +48,6 @@ class TestSelectionExpansion(DBTIntegrationTest):
             test_args.append('--schema')
         if data:
             test_args.append('--data')
-        if greedy:
-            test_args.append('--greedy')
-        if selector_name:
-            test_args.extend(('--selector', selector_name))
 
         results = self.run_dbt(test_args)
         tests_run = [r.node.name for r in results]
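The helper changes above remove the `--greedy` and `--selector` flags from the argument builders. The pattern being deleted is ordinary conditional flag accumulation, sketched here with illustrative names:

```python
def build_args(include=None, exclude=None, greedy=False, selector_name=None):
    # accumulate dbt CLI flags only when the corresponding option is set
    args = ['ls', '--resource-type', 'test']
    if include:
        args.extend(('--select', include))
    if exclude:
        args.extend(('--exclude', exclude))
    if greedy:
        args.append('--greedy')
    if selector_name:
        args.extend(('--selector', selector_name))
    return args
```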
@@ -239,80 +228,3 @@ class TestSelectionExpansion(DBTIntegrationTest):
 
         self.list_tests_and_assert(select, exclude, expected)
         self.run_tests_and_assert(select, exclude, expected)
-
-    @use_profile('postgres')
-    def test__postgres__model_a_greedy(self):
-        select = 'model_a'
-        exclude = None
-        greedy = True
-        expected = [
-            'cf_a_b', 'cf_a_src', 'just_a',
-            'relationships_model_a_fun__fun__ref_model_b_',
-            'relationships_model_a_fun__fun__source_my_src_my_tbl_',
-            'unique_model_a_fun'
-        ]
-
-        self.list_tests_and_assert(select, exclude, expected, greedy)
-        self.run_tests_and_assert(select, exclude, expected, greedy=greedy)
-
-    @use_profile('postgres')
-    def test__postgres__model_a_greedy_exclude_unique_tests(self):
-        select = 'model_a'
-        exclude = 'test_name:unique'
-        greedy = True
-        expected = [
-            'cf_a_b', 'cf_a_src', 'just_a',
-            'relationships_model_a_fun__fun__ref_model_b_',
-            'relationships_model_a_fun__fun__source_my_src_my_tbl_',
-        ]
-
-        self.list_tests_and_assert(select, exclude, expected, greedy)
-        self.run_tests_and_assert(select, exclude, expected, greedy=greedy)
-
-
-class TestExpansionWithSelectors(TestSelectionExpansion):
-
-    @property
-    def selectors_config(self):
-        return yaml.safe_load('''
-            selectors:
-            - name: model_a_greedy_none
-              definition:
-                method: fqn
-                value: model_a
-            - name: model_a_greedy_false
-              definition:
-                method: fqn
-                value: model_a
-                greedy: false
-            - name: model_a_greedy_true
-              definition:
-                method: fqn
-                value: model_a
-                greedy: true
-        ''')
-
-    @use_profile('postgres')
-    def test__postgres__selector_model_a_not_greedy(self):
-        expected = ['just_a','unique_model_a_fun']
-
-        # when greedy is not specified, so implicitly False
-        self.list_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_none')
-        self.run_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_none')
-
-        # when greedy is explicitly False
-        self.list_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_false')
-        self.run_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_false')
-
-
-    @use_profile('postgres')
-    def test__postgres__selector_model_a_yes_greedy(self):
-        expected = [
-            'cf_a_b', 'cf_a_src', 'just_a',
-            'relationships_model_a_fun__fun__ref_model_b_',
-            'relationships_model_a_fun__fun__source_my_src_my_tbl_',
-            'unique_model_a_fun'
-        ]
-
-        # when greedy is explicitly False
-        self.list_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_true')
-        self.run_tests_and_assert(include=None, exclude=None, expected_tests=expected, selector_name='model_a_greedy_true')
@@ -1,3 +0,0 @@
-{{ config(materialized='table') }}
-
-select * from {{ ref('countries') }}
@@ -1,3 +0,0 @@
-{{ config(materialized='table') }}
-
-select * from {{ ref('model_0') }}
@@ -1,4 +0,0 @@
-{{ config(materialized='table') }}
-
-select '1' as "num"
-
@@ -1,18 +0,0 @@
-version: 2
-
-models:
-  - name: model_0
-    columns:
-      - name: iso3
-        tests:
-          - relationships:
-              to: ref('model_1')
-              field: iso3
-
-  - name: model_1
-    columns:
-      - name: iso3
-        tests:
-          - relationships:
-              to: ref('model_0')
-              field: iso3
@@ -1,3 +0,0 @@
-{{ config(materialized='table') }}
-
-select * from {{ ref('model_1') }}
@@ -2,11 +2,15 @@ from test.integration.base import DBTIntegrationTest, use_profile
 import yaml
 
 
-class TestBuildBase(DBTIntegrationTest):
+class TestBuild(DBTIntegrationTest):
     @property
     def schema(self):
         return "build_test_069"
 
+    @property
+    def models(self):
+        return "models"
+
     @property
     def project_config(self):
         return {
@@ -27,55 +31,24 @@ class TestBuildBase(DBTIntegrationTest):
|
|||||||
|
|
||||||
return self.run_dbt(args, expect_pass=expect_pass)
|
return self.run_dbt(args, expect_pass=expect_pass)
|
||||||
|
|
||||||
|
|
||||||
class TestPassingBuild(TestBuildBase):
|
|
||||||
@property
|
|
||||||
def models(self):
|
|
||||||
return "models"
|
|
||||||
|
|
||||||
@use_profile("postgres")
|
@use_profile("postgres")
|
||||||
def test__postgres_build_happy_path(self):
|
def test__postgres_build_happy_path(self):
|
||||||
self.build()
|
         self.build()
 
 
-class TestFailingBuild(TestBuildBase):
+class TestFailingBuild(TestBuild):
+    @property
+    def schema(self):
+        return "build_test_069"
 
     @property
     def models(self):
         return "models-failing"
 
     @use_profile("postgres")
     def test__postgres_build_happy_path(self):
         results = self.build(expect_pass=False)
-        self.assertEqual(len(results), 13)
+        self.assertEqual(len(results), 12)
         actual = [r.status for r in results]
-        expected = ['error']*1 + ['skipped']*5 + ['pass']*2 + ['success']*5
+        expected = ['error']*1 + ['skipped']*4 + ['pass']*2 + ['success']*5
-        self.assertEqual(sorted(actual), sorted(expected))
-
-
-class TestFailingTestsBuild(TestBuildBase):
-    @property
-    def models(self):
-        return "tests-failing"
-
-    @use_profile("postgres")
-    def test__postgres_failing_test_skips_downstream(self):
-        results = self.build(expect_pass=False)
-        self.assertEqual(len(results), 13)
-        actual = [str(r.status) for r in results]
-        expected = ['fail'] + ['skipped']*6 + ['pass']*2 + ['success']*4
-        self.assertEqual(sorted(actual), sorted(expected))
-
-
-class TestCircularRelationshipTestsBuild(TestBuildBase):
-    @property
-    def models(self):
-        return "models-circular-relationship"
-
-    @use_profile("postgres")
-    def test__postgres_circular_relationship_test_success(self):
-        """ Ensure that tests that refer to each other's model don't create
-        a circular dependency. """
-        results = self.build()
-        actual = [r.status for r in results]
-        expected = ['success']*7 + ['pass']*2
-        self.assertEqual(sorted(actual), sorted(expected))
         self.assertEqual(sorted(actual), sorted(expected))
@@ -1,3 +0,0 @@
-{{ config(materialized='table') }}
-
-select * from {{ ref('countries') }}
@@ -1,3 +0,0 @@
-{{ config(materialized='table') }}
-
-select * from {{ ref('snap_0') }}
@@ -1,3 +0,0 @@
-{{ config(materialized='table') }}
-
-select * from {{ ref('snap_1') }}
@@ -1,3 +0,0 @@
-{{ config(materialized='table') }}
-
-select '1' as "num"
@@ -1,18 +0,0 @@
-version: 2
-
-models:
-  - name: model_0
-    columns:
-      - name: iso3
-        tests:
-          - unique
-          - not_null
-      - name: historical_iso_numeric
-        tests:
-          - not_null
-  - name: model_2
-    columns:
-      - name: iso3
-        tests:
-          - unique
-          - not_null
@@ -1,3 +0,0 @@
-{% macro config() %}
-
-{% endmacro %}
@@ -1 +0,0 @@
-select 1 as id
@@ -1 +0,0 @@
-version: 2
@@ -1,3 +0,0 @@
-{% macro ref(model_name) %}
-
-{% endmacro %}
@@ -1 +0,0 @@
-select 1 as id
@@ -1 +0,0 @@
-version: 2
@@ -1,3 +0,0 @@
-{% macro source(source_name, table_name) %}
-
-{% endmacro %}
@@ -1 +0,0 @@
-select 1 as id
@@ -1 +0,0 @@
-version: 2
@@ -14,18 +14,17 @@ def get_manifest():
         return None
 
 
-class TestBasicExperimentalParser(DBTIntegrationTest):
+class TestAllExperimentalParser(DBTIntegrationTest):
     @property
     def schema(self):
-        return "072_basic"
+        return "072_experimental_parser"
 
     @property
     def models(self):
-        return "basic"
+        return "models"
 
-    # test that the experimental parser extracts some basic ref, source, and config calls.
     @use_profile('postgres')
-    def test_postgres_experimental_parser_basic(self):
+    def test_postgres_experimental_parser(self):
         results = self.run_dbt(['--use-experimental-parser', 'parse'])
         manifest = get_manifest()
         node = manifest.nodes['model.test.model_a']
@@ -33,93 +32,4 @@ class TestBasicExperimentalParser(DBTIntegrationTest):
         self.assertEqual(node.sources, [['my_src', 'my_tbl']])
         self.assertEqual(node.config._extra, {'x': True})
         self.assertEqual(node.config.tags, ['hello', 'world'])
 
-
-class TestRefOverrideExperimentalParser(DBTIntegrationTest):
-    @property
-    def schema(self):
-        return "072_ref_macro"
-
-    @property
-    def models(self):
-        return "ref_macro/models"
-
-    @property
-    def project_config(self):
-        return {
-            'config-version': 2,
-            'macro-paths': ['source_macro', 'macros'],
-        }
-
-    # test that the experimental parser doesn't run if the ref built-in is overriden with a macro
-    @use_profile('postgres')
-    def test_postgres_experimental_parser_ref_override(self):
-        _, log_output = self.run_dbt_and_capture(['--debug', '--use-experimental-parser', 'parse'])
-
-        print(log_output)
-
-        # successful static parsing
-        self.assertFalse("1699: " in log_output)
-        # ran static parser but failed
-        self.assertFalse("1602: " in log_output)
-        # didn't run static parser because dbt detected a built-in macro override
-        self.assertTrue("1601: " in log_output)
-
-class TestSourceOverrideExperimentalParser(DBTIntegrationTest):
-    @property
-    def schema(self):
-        return "072_source_macro"
-
-    @property
-    def models(self):
-        return "source_macro/models"
-
-    @property
-    def project_config(self):
-        return {
-            'config-version': 2,
-            'macro-paths': ['source_macro', 'macros'],
-        }
-
-    # test that the experimental parser doesn't run if the source built-in is overriden with a macro
-    @use_profile('postgres')
-    def test_postgres_experimental_parser_source_override(self):
-        _, log_output = self.run_dbt_and_capture(['--debug', '--use-experimental-parser', 'parse'])
-
-        print(log_output)
-
-        # successful static parsing
-        self.assertFalse("1699: " in log_output)
-        # ran static parser but failed
-        self.assertFalse("1602: " in log_output)
-        # didn't run static parser because dbt detected a built-in macro override
-        self.assertTrue("1601: " in log_output)
-
-class TestConfigOverrideExperimentalParser(DBTIntegrationTest):
-    @property
-    def schema(self):
-        return "072_config_macro"
-
-    @property
-    def models(self):
-        return "config_macro/models"
-
-    @property
-    def project_config(self):
-        return {
-            'config-version': 2,
-            'macro-paths': ['config_macro', 'macros'],
-        }
-
-    # test that the experimental parser doesn't run if the config built-in is overriden with a macro
-    @use_profile('postgres')
-    def test_postgres_experimental_parser_config_override(self):
-        _, log_output = self.run_dbt_and_capture(['--debug', '--use-experimental-parser', 'parse'])
-
-        print(log_output)
-
-        # successful static parsing
-        self.assertFalse("1699: " in log_output)
-        # ran static parser but failed
-        self.assertFalse("1602: " in log_output)
-        # didn't run static parser because dbt detected a built-in macro override
-        self.assertTrue("1601: " in log_output)
@@ -1,2 +0,0 @@
-fun,_loaded_at
-1,2021-04-19 01:00:00
@@ -1 +0,0 @@
-SELECT 1 AS fun
@@ -1 +0,0 @@
-SELECT 1 AS fun
@@ -1,35 +0,0 @@
-version: 2
-
-sources:
-  - name: src
-    schema: "{{ target.schema }}"
-    freshness:
-      warn_after: {count: 24, period: hour}
-    loaded_at_field: _loaded_at
-    tables:
-      - name: source_a
-        identifier: model_c
-        columns:
-          - name: fun
-          - name: _loaded_at
-  - name: src
-    schema: "{{ target.schema }}"
-    freshness:
-      warn_after: {count: 24, period: hour}
-    loaded_at_field: _loaded_at
-    tables:
-      - name: source_b
-        identifier: model_c
-        columns:
-          - name: fun
-          - name: _loaded_at
-
-models:
-  - name: model_a
-    columns:
-      - name: fun
-        tags: [marketing]
-  - name: model_b
-    columns:
-      - name: fun
-        tags: [finance]
@@ -1,77 +0,0 @@
-import yaml
-from test.integration.base import DBTIntegrationTest, use_profile
-
-
-class TestDefaultSelectors(DBTIntegrationTest):
-    '''Test the selectors default argument'''
-    @property
-    def schema(self):
-        return 'test_default_selectors_101'
-
-    @property
-    def models(self):
-        return 'models'
-
-    @property
-    def project_config(self):
-        return {
-            'config-version': 2,
-            'source-paths': ['models'],
-            'data-paths': ['data'],
-            'seeds': {
-                'quote_columns': False,
-            },
-        }
-
-    @property
-    def selectors_config(self):
-        return yaml.safe_load('''
-            selectors:
-            - name: default_selector
-              description: test default selector
-              definition:
-                union:
-                  - method: source
-                    value: "test.src.source_a"
-                  - method: fqn
-                    value: "model_a"
-              default: true
-        ''')
-
-    def list_and_assert(self, expected):
-        '''list resources in the project with the selectors default'''
-        listed = self.run_dbt(['ls', '--resource-type', 'model'])
-
-        assert len(listed) == len(expected)
-
-    def compile_and_assert(self, expected):
-        '''Compile project with the selectors default'''
-        compiled = self.run_dbt(['compile'])
-
-        assert len(compiled.results) == len(expected)
-        assert compiled.results[0].node.name == expected[0]
-
-    def run_and_assert(self, expected):
-        run = self.run_dbt(['run'])
-
-        assert len(run.results) == len(expected)
-        assert run.results[0].node.name == expected[0]
-
-    def freshness_and_assert(self, expected):
-        self.run_dbt(['seed', '-s', 'test.model_c'])
-        freshness = self.run_dbt(['source', 'freshness'])
-
-        assert len(freshness.results) == len(expected)
-        assert freshness.results[0].node.name == expected[0]
-
-    @use_profile('postgres')
-    def test__postgres__model_a_only(self):
-        expected_model = ['model_a']
-
-        self.list_and_assert(expected_model)
-        self.compile_and_assert(expected_model)
-
-    def test__postgres__source_a_only(self):
-        expected_source = ['source_a']
-
-        self.freshness_and_assert(expected_source)
@@ -263,15 +263,6 @@ class DBTIntegrationTest(unittest.TestCase):
                 'keyfile_json': credentials,
                 'schema': self.unique_schema(),
             },
-            'alternate': {
-                'type': 'bigquery',
-                'method': 'service-account-json',
-                'threads': 1,
-                'project': project_id,
-                'keyfile_json': credentials,
-                'schema': self.unique_schema(),
-                'execution_project': self.alternative_database,
-            },
         },
         'target': 'default2'
     }
@@ -26,7 +26,6 @@ snapshot_data = '''
 {% endsnapshot %}
 '''
 
-
 @pytest.mark.supported('postgres')
 def test_rpc_build_threads(
     project_root, profiles_root, dbt_profile, unique_schema
@@ -113,84 +112,25 @@ def test_rpc_build_state(
 
         get_write_manifest(querier, os.path.join(state_dir, 'manifest.json'))
 
-        project.models['my_model.sql'] =\
-            'select * from {{ ref("data" )}} where id = 2'
+        project.models['my_model.sql'] = 'select * from {{ ref("data" )}} where id = 2'
         project.write_models(project_root, remove=True)
         querier.sighup()
         assert querier.wait_for_status('ready') is True
 
         results = querier.async_wait_for_result(
-            querier.build(state='./state', select=['state:modified'])
+            querier.build(state='./state', models=['state:modified'])
         )
         assert len(results['results']) == 3
 
         get_write_manifest(querier, os.path.join(state_dir, 'manifest.json'))
 
         results = querier.async_wait_for_result(
-            querier.build(state='./state', select=['state:modified']),
+            querier.build(state='./state', models=['state:modified']),
         )
         assert len(results['results']) == 0
 
         # a better test of defer would require multiple targets
         results = querier.async_wait_for_result(
-            querier.build(
-                state='./state',
-                select=['state:modified'],
-                defer=True
-            )
+            querier.build(state='./state', models=['state:modified'], defer=True)
         )
         assert len(results['results']) == 0
-
-
-@pytest.mark.supported('postgres')
-def test_rpc_build_selectors(
-    project_root, profiles_root, dbt_profile, unique_schema
-):
-    schema_yaml = {
-        'version': 2,
-        'models': [{
-            'name': 'my_model',
-            'columns': [
-                {
-                    'name': 'id',
-                    'tests': ['not_null', 'unique'],
-                },
-            ],
-        }],
-    }
-    project = ProjectDefinition(
-        name='test',
-        project_data={
-            'seeds': {'+quote_columns': False},
-            'models': {'test': {'my_model': {'+tags': 'example_tag'}}}
-        },
-        models={
-            'my_model.sql': 'select * from {{ ref("data") }}',
-            'schema.yml': yaml.safe_dump(schema_yaml)
-        },
-        seeds={'data.csv': 'id,message\n1,hello\n2,goodbye'},
-        snapshots={'my_snapshots.sql': snapshot_data},
-    )
-    querier_ctx = get_querier(
-        project_def=project,
-        project_dir=project_root,
-        profiles_dir=profiles_root,
-        schema=unique_schema,
-        test_kwargs={},
-    )
-    with querier_ctx as querier:
-        # test simple resource_types param
-        results = querier.async_wait_for_result(
-            querier.build(resource_types=['seed'])
-        )
-        assert len(results['results']) == 1
-        assert results['results'][0]['node']['resource_type'] == 'seed'
-
-        # test simple select param (should select tagged model and its tests)
-        results = querier.async_wait_for_result(
-            querier.build(select=['tag:example_tag'])
-        )
-        assert len(results['results']) == 3
-        assert sorted(
-            [result['node']['resource_type'] for result in results['results']]
-        ) == ['model', 'test', 'test']
@@ -105,7 +105,7 @@ def test_rpc_test_state(
             querier.test(state='./state', models=['state:modified']),
         )
         assert len(results['results']) == 0
 
         # a better test of defer would require multiple targets
         results = querier.async_wait_for_result(
             querier.run(state='./state', models=['state:modified'], defer=True)
@@ -260,10 +260,10 @@ class Querier:
     def run_operation(
         self,
         macro: str,
-        args: Optional[Dict[str, Any]] = None,
+        args: Optional[Dict[str, Any]],
         request_id: int = 1,
     ):
-        params: Dict[str, Any] = {'macro': macro}
+        params = {'macro': macro}
         if args is not None:
             params['args'] = args
         return self.request(
@@ -277,7 +277,7 @@ class Querier:
         show: bool = None,
         threads: Optional[int] = None,
         request_id: int = 1,
-        state: Optional[str] = None,
+        state: Optional[bool] = None,
     ):
         params = {}
         if select is not None:
@@ -300,7 +300,7 @@ class Querier:
         exclude: Optional[Union[str, List[str]]] = None,
         threads: Optional[int] = None,
         request_id: int = 1,
-        state: Optional[str] = None,
+        state: Optional[bool] = None,
     ):
         params = {}
         if select is not None:
@@ -337,7 +337,7 @@ class Querier:
         exclude: Optional[Union[str, List[str]]] = None,
         threads: Optional[int] = None,
         request_id: int = 1,
-        state: Optional[str] = None,
+        state: Optional[bool] = None,
     ):
         params = {}
         if select is not None:
@@ -361,7 +361,7 @@ class Querier:
         schema: bool = None,
         request_id: int = 1,
         defer: Optional[bool] = None,
-        state: Optional[str] = None,
+        state: Optional[bool] = None,
     ):
         params = {}
         if models is not None:
@@ -384,21 +384,18 @@ class Querier:
 
     def build(
         self,
-        select: Optional[Union[str, List[str]]] = None,
+        models: Optional[Union[str, List[str]]] = None,
         exclude: Optional[Union[str, List[str]]] = None,
-        resource_types: Optional[Union[str, List[str]]] = None,
         threads: Optional[int] = None,
         request_id: int = 1,
         defer: Optional[bool] = None,
-        state: Optional[str] = None,
+        state: Optional[bool] = None,
    ):
         params = {}
-        if select is not None:
-            params['select'] = select
+        if models is not None:
+            params['models'] = models
         if exclude is not None:
             params['exclude'] = exclude
-        if resource_types is not None:
-            params['resource_types'] = resource_types
         if threads is not None:
             params['threads'] = threads
         if defer is not None:
@@ -12,23 +12,10 @@ PGPORT="${PGPORT:-5432}"
 export PGPORT
 PGHOST="${PGHOST:-localhost}"
 
-function connect_circle() {
-    # try to handle circleci/docker oddness
-    let rc=1
-    while [[ $rc -eq 1 ]]; do
-        nc -z ${PGHOST} ${PGPORT}
-        let rc=$?
-    done
-    if [[ $rc -ne 0 ]]; then
-        echo "Fatal: Could not connect to $PGHOST"
-        exit 1
-    fi
-}
-
-# appveyor doesn't have 'nc', but it also doesn't have these issues
-if [[ -n $CIRCLECI ]]; then
-    connect_circle
-fi
+until psql -h "$PGHOST" -c '\q'; do
+    >&2 echo "Postgres is unavailable - sleeping"
+    sleep 1
+done
 
 createdb dbt
 psql -c "CREATE ROLE root WITH PASSWORD 'password';"
@@ -136,7 +136,6 @@ class DocumentationParserTest(unittest.TestCase):
             relative_path=relative_path,
             project_root=self.root_path,
             searched_path=self.subdir_path,
-            modification_time=0.0,
         )
         source_file = SourceFile(path=match, checksum=FileHash.empty())
         source_file.contents = contents
@@ -122,7 +122,7 @@ class GraphTest(unittest.TestCase):
         # Create the source file patcher
         self.load_source_file_patcher = patch('dbt.parser.read_files.load_source_file')
         self.mock_source_file = self.load_source_file_patcher.start()
-        def mock_load_source_file(path, parse_file_type, project_name, saved_files):
+        def mock_load_source_file(path, parse_file_type, project_name):
             for sf in self.mock_models:
                 if sf.path == path:
                     source_file = sf
@@ -137,7 +137,6 @@ class GraphTest(unittest.TestCase):
             searched_path='.',
             project_root=os.path.normcase(os.getcwd()),
             relative_path='dbt_project.yml',
-            modification_time=0.0,
         )
         return path
 
@@ -166,7 +165,6 @@ class GraphTest(unittest.TestCase):
                 searched_path='models',
                 project_root=os.path.normcase(os.getcwd()),
                 relative_path='{}.sql'.format(k),
-                modification_time=0.0,
             )
             # FileHash can't be empty or 'search_key' will be None
             source_file = SourceFile(path=path, checksum=FileHash.from_contents('abc'))
@@ -120,7 +120,7 @@ def test_run_specs(include, exclude, expected):
     manifest = _get_manifest(graph)
     selector = graph_selector.NodeSelector(graph, manifest)
     spec = graph_cli.parse_difference(include, exclude)
-    selected, _ = selector.select_nodes(spec)
+    selected = selector.select_nodes(spec)
 
     assert selected == expected
 
@@ -115,7 +115,7 @@ def test_parse_simple():
         childrens_parents=False,
         children_depth=None,
         parents_depth=None,
-    ) == parsed['tagged_foo']["definition"]
+    ) == parsed['tagged_foo']
 
 
 def test_parse_simple_childrens_parents():
@@ -141,7 +141,7 @@ def test_parse_simple_childrens_parents():
         childrens_parents=True,
         children_depth=None,
         parents_depth=None,
-    ) == parsed['tagged_foo']["definition"]
+    ) == parsed['tagged_foo']
 
 
 def test_parse_simple_arguments_with_modifiers():
@@ -169,7 +169,7 @@ def test_parse_simple_arguments_with_modifiers():
         childrens_parents=False,
         children_depth=2,
         parents_depth=None,
-    ) == parsed['configured_view']["definition"]
+    ) == parsed['configured_view']
 
 
 def test_parse_union():
@@ -188,7 +188,7 @@ def test_parse_union():
     assert Union(
         Criteria(method=MethodName.Config, value='view', method_arguments=['materialized']),
         Criteria(method=MethodName.Tag, value='foo', method_arguments=[])
-    ) == parsed['views-or-foos']["definition"]
+    ) == parsed['views-or-foos']
 
 
 def test_parse_intersection():
@@ -208,7 +208,7 @@ def test_parse_intersection():
     assert Intersection(
         Criteria(method=MethodName.Config, value='view', method_arguments=['materialized']),
         Criteria(method=MethodName.Tag, value='foo', method_arguments=[]),
-    ) == parsed['views-and-foos']["definition"]
+    ) == parsed['views-and-foos']
 
 
 def test_parse_union_excluding():
@@ -232,7 +232,7 @@ def test_parse_union_excluding():
             Criteria(method=MethodName.Tag, value='foo', method_arguments=[])
         ),
         Criteria(method=MethodName.Tag, value='bar', method_arguments=[]),
-    ) == parsed['views-or-foos-not-bars']["definition"]
+    ) == parsed['views-or-foos-not-bars']
 
 
 def test_parse_yaml_complex():
@@ -272,7 +272,7 @@ def test_parse_yaml_complex():
     assert Union(
         Criteria(method=MethodName.Tag, value='nightly'),
         Criteria(method=MethodName.Tag, value='weeknights_only'),
-    ) == parsed['weeknights']["definition"]
+    ) == parsed['weeknights']
 
     assert Union(
         Intersection(
@@ -300,4 +300,4 @@ def test_parse_yaml_complex():
             ),
         ),
     ),
-    ) == parsed['test_name']["definition"]
+    ) == parsed['test_name']
@@ -32,7 +32,7 @@ from dbt.contracts.graph.parsed import (
     UnpatchedSourceDefinition
 )
 from dbt.contracts.graph.unparsed import Docs
-import itertools
 from .utils import config_from_parts_or_dicts, normalize, generate_name_macros, MockNode, MockSource, MockDocumentation
 
 
@@ -146,7 +146,6 @@ class BaseParserTest(unittest.TestCase):
             searched_path=searched,
             relative_path=filename,
             project_root=root_dir,
-            modification_time=0.0,
         )
         sf_cls = SchemaSourceFile if filename.endswith('.yml') else SourceFile
         source_file = sf_cls(
@@ -522,57 +521,6 @@ class ModelParserTest(BaseParserTest):
         self.parser.parse_file(block)
 
 
-class StaticModelParserTest(BaseParserTest):
-    def setUp(self):
-        super().setUp()
-        self.parser = ModelParser(
-            project=self.snowplow_project_config,
-            manifest=self.manifest,
-            root_project=self.root_project_config,
-        )
-
-    def file_block_for(self, data, filename):
-        return super().file_block_for(data, filename, 'models')
-
-    # tests that when the ref built-in is overriden with a macro definition
-    # that the ModelParser can detect it. This does not test that the static
-    # parser does not run in this case. That test is in integration test suite 072
-    def test_built_in_macro_override_detection(self):
-        macro_unique_id = 'macro.root.ref'
-        self.parser.manifest.macros[macro_unique_id] = ParsedMacro(
-            name='ref',
-            resource_type=NodeType.Macro,
-            unique_id=macro_unique_id,
-            package_name='root',
-            original_file_path=normalize('macros/macro.sql'),
-            root_path=get_abs_os_path('./dbt_modules/root'),
-            path=normalize('macros/macro.sql'),
-            macro_sql='{% macro ref(model_name) %}{% set x = raise("boom") %}{% endmacro %}',
-        )
-
-        raw_sql = '{{ config(materialized="table") }}select 1 as id'
-        block = self.file_block_for(raw_sql, 'nested/model_1.sql')
-        node = ParsedModelNode(
-            alias='model_1',
-            name='model_1',
-            database='test',
-            schema='analytics',
-            resource_type=NodeType.Model,
-            unique_id='model.snowplow.model_1',
-            fqn=['snowplow', 'nested', 'model_1'],
-            package_name='snowplow',
-            original_file_path=normalize('models/nested/model_1.sql'),
-            root_path=get_abs_os_path('./dbt_modules/snowplow'),
-            config=NodeConfig(materialized='table'),
-            path=normalize('nested/model_1.sql'),
-            raw_sql=raw_sql,
-            checksum=block.file.checksum,
-            unrendered_config={'materialized': 'table'},
-        )
-
-        assert(self.parser._has_banned_macro(node))
-
-
 class SnapshotParserTest(BaseParserTest):
     def setUp(self):
         super().setUp()
@@ -1,6 +1,5 @@
|
|||||||
import unittest
|
import unittest
|
||||||
from unittest import mock
|
from unittest import mock
|
||||||
import time
|
|
||||||
|
|
||||||
import dbt.exceptions
|
import dbt.exceptions
|
||||||
from dbt.parser.partial import PartialParsing
|
from dbt.parser.partial import PartialParsing
|
||||||
@@ -18,14 +17,14 @@ class TestPartialParsing(unittest.TestCase):
         project_name = 'my_test'
         project_root = '/users/root'
         model_file = SourceFile(
-            path=FilePath(project_root=project_root, searched_path='models', relative_path='my_model.sql', modification_time=time.time()),
+            path=FilePath(project_root=project_root, searched_path='models', relative_path='my_model.sql'),
             checksum=FileHash.from_contents('abcdef'),
             project_name=project_name,
             parse_file_type=ParseFileType.Model,
             nodes=['model.my_test.my_model'],
         )
         schema_file = SchemaSourceFile(
-            path=FilePath(project_root=project_root, searched_path='models', relative_path='schema.yml', modification_time=time.time()),
+            path=FilePath(project_root=project_root, searched_path='models', relative_path='schema.yml'),
             checksum=FileHash.from_contents('ghijkl'),
             project_name=project_name,
             parse_file_type=ParseFileType.Schema,
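The `SourceFile` fixtures above keep their content checksums (`FileHash.from_contents`) while dropping the `modification_time` field: partial parsing decides whether to re-parse a file from its checksum. A minimal sketch of that idea, assuming a plain SHA-256 over the file contents — an illustration, not dbt's actual `FileHash` class:

```python
import hashlib

# Illustrative content checksum (NOT dbt's FileHash): a file only needs
# re-parsing when its checksum differs from the one recorded last run.
def file_hash(contents: str) -> str:
    return hashlib.sha256(contents.encode('utf-8')).hexdigest()

old_checksum = file_hash('select 1 as id')

# unchanged contents -> same checksum -> the file can be skipped
assert file_hash('select 1 as id') == old_checksum
# edited contents -> different checksum -> the file must be re-parsed
assert file_hash('select 2 as id') != old_checksum
```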
@@ -177,26 +177,3 @@ class SelectorUnitTest(unittest.TestCase):
                 "not a valid method name"
         ):
             selector_config_from_data(dct)
-
-    def test_multiple_default_true(self):
-        """Test selector_config_from_data returns the correct error when multiple
-        default values are set
-        """
-        dct = get_selector_dict('''\
-            selectors:
-              - name: summa_nothing
-                definition:
-                  method: tag
-                  value: nightly
-                default: true
-              - name: summa_something
-                definition:
-                  method: tag
-                  value: daily
-                default: true
-        ''')
-        with self.assertRaisesRegex(
-            dbt.exceptions.DbtSelectorsError,
-            'Found multiple selectors with `default: true`:'
-        ):
-            selector_config_from_data(dct)
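The deleted `test_multiple_default_true` exercised the rule that at most one YAML selector may set `default: true`. A hedged sketch of that validation — the `check_single_default` helper below is hypothetical and stands in for dbt's `selector_config_from_data`, which raises `DbtSelectorsError` rather than `ValueError`:

```python
# Hypothetical validation sketch (not dbt's selector_config_from_data):
# reject selector definitions where more than one entry is the default.
def check_single_default(selectors):
    defaults = [s['name'] for s in selectors if s.get('default')]
    if len(defaults) > 1:
        raise ValueError(
            'Found multiple selectors with `default: true`: ' + ', '.join(defaults)
        )

# a single default is fine
check_single_default([{'name': 'summa_nothing', 'default': True}])

# two defaults must be rejected
raised = False
try:
    check_single_default([
        {'name': 'summa_nothing', 'default': True},
        {'name': 'summa_something', 'default': True},
    ])
except ValueError:
    raised = True
assert raised
```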
@@ -153,8 +153,7 @@ class TestFindMatching(unittest.TestCase):
         expected_output = [{
             'searched_path': relative_path,
             'absolute_path': named_file.name,
-            'relative_path': os.path.basename(named_file.name),
-            'modification_time': out[0]['modification_time'],
+            'relative_path': os.path.basename(named_file.name)
         }]
         self.assertEqual(out, expected_output)
 
@@ -168,8 +167,7 @@ class TestFindMatching(unittest.TestCase):
         expected_output = [{
             'searched_path': relative_path,
             'absolute_path': named_file.name,
-            'relative_path': os.path.basename(named_file.name),
-            'modification_time': out[0]['modification_time'],
+            'relative_path': os.path.basename(named_file.name)
         }]
         self.assertEqual(out, expected_output)
 
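Both `TestFindMatching` hunks drop the `'modification_time': out[0]['modification_time']` entry. That pattern existed because an mtime is stamped by the OS at write time and cannot be hard-coded, so the test copied the non-deterministic field from the actual output into the expected dict. A small stdlib-only sketch of the same comparison pattern (the file contents and keys below are made up for illustration):

```python
import os
import tempfile

# Create a throwaway file; the OS stamps its modification time at write time.
with tempfile.NamedTemporaryFile(suffix='.sql', delete=False) as f:
    f.write(b'select 1 as id')
    path = f.name

out = [{
    'absolute_path': path,
    'relative_path': os.path.basename(path),
    'modification_time': os.path.getmtime(path),
}]

expected = [{
    'absolute_path': path,
    'relative_path': os.path.basename(path),
    # non-deterministic field: copy it from the actual result
    'modification_time': out[0]['modification_time'],
}]

assert out == expected
os.unlink(path)
```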