forked from repo-mirrors/dbt-core

Compare commits: jerco/pyth... → v1.1.0rc3 (8 commits)
Commits:

- 5ae33b90bf
- 37d78338c2
- 698420b420
- e23f0e0747
- 106da05db7
- 09a396a731
- 9c233f27ab
- a09bc28768
```diff
@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 1.1.0b1
+current_version = 1.1.0rc3
 parse = (?P<major>\d+)
 	\.(?P<minor>\d+)
 	\.(?P<patch>\d+)
```
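The `parse` setting above is a multi-line regex with three named numeric groups (the full pattern is truncated in this view and presumably also handles prerelease suffixes like `b1`/`rc3`). As a standalone sanity check of the three groups shown, not part of the diff:

```python
import re

# The three capture groups from the bumpversion `parse` setting above,
# joined onto one line.
pattern = re.compile(r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)")

m = pattern.match("1.1.0")
print(m.groupdict())  # {'major': '1', 'minor': '1', 'patch': '0'}
```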
**.changes/1.1.0-rc1.md** (new file, 48 lines)

## dbt-core 1.1.0-rc1 - April 12, 2022

### Breaking Changes
- For adapter plugin maintainers only: Internal adapter methods `set_relations_cache` + `_relations_cache_for_schemas` each take an additional argument, for use with experimental `CACHE_SELECTED_ONLY` config ([#4688](https://github.com/dbt-labs/dbt-core/issues/4688), [#4860](https://github.com/dbt-labs/dbt-core/pull/4860))

### Features
- Add `--cache_selected_only` flag to cache schema object of selected models only. ([#4688](https://github.com/dbt-labs/dbt-core/issues/4688), [#4860](https://github.com/dbt-labs/dbt-core/pull/4860))
- Support custom names for generic tests ([#3348](https://github.com/dbt-labs/dbt-core/issues/3348), [#4898](https://github.com/dbt-labs/dbt-core/pull/4898))
- Enable dbt jobs to run downstream models based on fresher sources. Compare the source freshness results between previous and current state. If any source is fresher and/or new in current vs. previous state, dbt will run and test the downstream models in scope. Example command: `dbt build --select source_status:fresher+` ([#4050](https://github.com/dbt-labs/dbt-core/issues/4050), [#4256](https://github.com/dbt-labs/dbt-core/pull/4256))
- converting unique key as list tests to new pytest format ([#4882](https://github.com/dbt-labs/dbt-core/issues/4882), [#4958](https://github.com/dbt-labs/dbt-core/pull/4958))
- Add a variable called selected_resources in the Jinja context containing a list of all the resources matching the nodes for the --select, --exclude and/or --selector parameters. ([#3471](https://github.com/dbt-labs/dbt-core/issues/3471), [#5001](https://github.com/dbt-labs/dbt-core/pull/5001))
- Support the DO_NOT_TRACK environment variable from the consoledonottrack.com initiative ([#3540](https://github.com/dbt-labs/dbt-core/issues/3540), [#5000](https://github.com/dbt-labs/dbt-core/pull/5000))
- Add `--no-print` global flag ([#4710](https://github.com/dbt-labs/dbt-core/issues/4710), [#4854](https://github.com/dbt-labs/dbt-core/pull/4854))
- add enabled as a source config ([#3662](https://github.com/dbt-labs/dbt-core/issues/3662), [#5008](https://github.com/dbt-labs/dbt-core/pull/5008))

### Fixes
- Inconsistent timestamps between inserted/updated and deleted rows in snapshots ([#4347](https://github.com/dbt-labs/dbt-core/issues/4347), [#4513](https://github.com/dbt-labs/dbt-core/pull/4513))
- Catch more cases to retry package retrieval for deps pointing to the hub. Also start to cache the package requests. ([#4849](https://github.com/dbt-labs/dbt-core/issues/4849), [#4982](https://github.com/dbt-labs/dbt-core/pull/4982))
- Make the warning message for a full event deque more descriptive ([#4962](https://github.com/dbt-labs/dbt-core/issues/4962), [#5011](https://github.com/dbt-labs/dbt-core/pull/5011))
- Fix hard delete snapshot test ([#4916](https://github.com/dbt-labs/dbt-core/issues/4916), [#5020](https://github.com/dbt-labs/dbt-core/pull/5020))

### Docs
- Fixed capitalization in UI for exposures of `type: ml` ([#4984](https://github.com/dbt-labs/dbt-core/issues/4984), [#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
- List packages and tags in alphabetical order ([#4984](https://github.com/dbt-labs/dbt-core/issues/4984), [#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
- Bump jekyll from 3.8.7 to 3.9.0 ([#4984](https://github.com/dbt-labs/dbt-core/issues/4984), [#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
- Updated docker README to reflect necessity of using BuildKit ([#4990](https://github.com/dbt-labs/dbt-core/issues/4990), [#5018](https://github.com/dbt-labs/dbt-core/pull/5018))

### Under the Hood
- add performance regression testing runner without orchestration ([#4021](https://github.com/dbt-labs/dbt-core/issues/4021), [#4602](https://github.com/dbt-labs/dbt-core/pull/4602))
- Add Graph Compilation and Adapter Cache tracking ([#4625](https://github.com/dbt-labs/dbt-core/issues/4625), [#4912](https://github.com/dbt-labs/dbt-core/pull/4912))
- Create a dbt.tests.adapter release when releasing dbt and postgres ([#4812](https://github.com/dbt-labs/dbt-core/issues/4812), [#4948](https://github.com/dbt-labs/dbt-core/pull/4948))
- update docker image to use python 3.10.3 ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904), [#4963](https://github.com/dbt-labs/dbt-core/pull/4963))
- updates black to 22.3.0 which fixes dependency incompatibility when running with precommit. ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904), [#4972](https://github.com/dbt-labs/dbt-core/pull/4972))
- Adds config util for ad-hoc creation of project objs or dicts ([#4808](https://github.com/dbt-labs/dbt-core/issues/4808), [#4981](https://github.com/dbt-labs/dbt-core/pull/4981))
- Remove TableComparison and convert existing calls to use dbt.tests.util ([#4778](https://github.com/dbt-labs/dbt-core/issues/4778), [#4986](https://github.com/dbt-labs/dbt-core/pull/4986))
- Remove unneeded create_schema in snapshot materialization ([#4742](https://github.com/dbt-labs/dbt-core/issues/4742), [#4993](https://github.com/dbt-labs/dbt-core/pull/4993))
- Added .git-blame-ignore-revs file to mask re-formmating commits from git blame ([#5004](https://github.com/dbt-labs/dbt-core/issues/5004), [#5019](https://github.com/dbt-labs/dbt-core/pull/5019))
- Convert version tests to pytest ([#5024](https://github.com/dbt-labs/dbt-core/issues/5024), [#5026](https://github.com/dbt-labs/dbt-core/pull/5026))
- Updating tests and docs to show that we now support Python 3.10 ([#4974](https://github.com/dbt-labs/dbt-core/issues/4974), [#5025](https://github.com/dbt-labs/dbt-core/pull/5025))
- Update --version output and logic ([#4724](https://github.com/dbt-labs/dbt-core/issues/4724), [#5029](https://github.com/dbt-labs/dbt-core/pull/5029))
- ([#5033](https://github.com/dbt-labs/dbt-core/issues/5033), [#5032](https://github.com/dbt-labs/dbt-core/pull/5032))

### Contributors
- [@agoblet](https://github.com/agoblet) ([#5000](https://github.com/dbt-labs/dbt-core/pull/5000))
- [@anaisvaillant](https://github.com/anaisvaillant) ([#4256](https://github.com/dbt-labs/dbt-core/pull/4256))
- [@b-per](https://github.com/b-per) ([#5001](https://github.com/dbt-labs/dbt-core/pull/5001))
- [@jonstacks](https://github.com/jonstacks) ([#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
- [@kadero](https://github.com/kadero) ([#4513](https://github.com/dbt-labs/dbt-core/pull/4513))
- [@karunpoudel](https://github.com/karunpoudel) ([#4860](https://github.com/dbt-labs/dbt-core/pull/4860), [#4860](https://github.com/dbt-labs/dbt-core/pull/4860))
- [@matt-winkler](https://github.com/matt-winkler) ([#4256](https://github.com/dbt-labs/dbt-core/pull/4256))
- [@pgoslatara](https://github.com/pgoslatara) ([#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
- [@poloaraujo](https://github.com/poloaraujo) ([#4854](https://github.com/dbt-labs/dbt-core/pull/4854))
- [@sungchun12](https://github.com/sungchun12) ([#4256](https://github.com/dbt-labs/dbt-core/pull/4256))
**.changes/1.1.0-rc2.md** (new file, 6 lines)

## dbt-core 1.1.0-rc2 - April 21, 2022

### Fixes
- Restore ability to utilize `updated_at` for check_cols snapshots ([#5076](https://github.com/dbt-labs/dbt-core/issues/5076), [#5077](https://github.com/dbt-labs/dbt-core/pull/5077))

### Contributors
- [@dbeatty10](https://github.com/dbeatty10) ([#5077](https://github.com/dbt-labs/dbt-core/pull/5077))
**.changes/1.1.0-rc3.md** (new file, 5 lines)

## dbt-core 1.1.0-rc3 - April 26, 2022

### Fixes
- Use yaml renderer (with target context) for rendering selectors ([#5131](https://github.com/dbt-labs/dbt-core/issues/5131), [#5136](https://github.com/dbt-labs/dbt-core/pull/5136))
- Fix retry logic to return values after initial try ([#5023](https://github.com/dbt-labs/dbt-core/issues/5023), [#5137](https://github.com/dbt-labs/dbt-core/pull/5137))
- Scrub secret env vars from CommandError in exception stacktrace ([#5151](https://github.com/dbt-labs/dbt-core/issues/5151), [#5152](https://github.com/dbt-labs/dbt-core/pull/5152))
```diff
@@ -5,6 +5,6 @@ body: 'Enable dbt jobs to run downstream models based on fresher sources. Compar
   models in scope. Example command: `dbt build --select source_status:fresher+` '
 time: 2022-03-28T13:47:43.750709-05:00
 custom:
-  Author: sungchun12, matt-winkler, anaisvaillant
+  Author: sungchun12 matt-winkler anaisvaillant
   Issue: "4050"
   PR: "4256"
```
**.changes/1.1.0/Fixes-20220415-112927.yaml** (new file, 7 lines)

```yaml
kind: Fixes
body: Restore ability to utilize `updated_at` for check_cols snapshots
time: 2022-04-15T11:29:27.063462-06:00
custom:
  Author: dbeatty10
  Issue: "5076"
  PR: "5077"
```
**.changes/1.1.0/Fixes-20220422-131227.yaml** (new file, 7 lines)

```yaml
kind: Fixes
body: Fix retry logic to return values after initial try
time: 2022-04-22T13:12:27.239055-05:00
custom:
  Author: emmyoop
  Issue: "5023"
  PR: "5137"
```
**.changes/1.1.0/Fixes-20220422-135645.yaml** (new file, 7 lines)

```yaml
kind: Fixes
body: Use yaml renderer (with target context) for rendering selectors
time: 2022-04-22T13:56:45.147893-04:00
custom:
  Author: gshank
  Issue: "5131"
  PR: "5136"
```
**.changes/1.1.0/Fixes-20220425-203924.yaml** (new file, 7 lines)

```yaml
kind: Fixes
body: Scrub secret env vars from CommandError in exception stacktrace
time: 2022-04-25T20:39:24.365495+02:00
custom:
  Author: jtcohen6
  Issue: "5151"
  PR: "5152"
```
**CHANGELOG.md** (66 additions)

```diff
@@ -6,6 +6,72 @@
 - Do not edit this file directly. This file is auto-generated using [changie](https://github.com/miniscruff/changie). For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry)

+## dbt-core 1.1.0-rc3 - April 26, 2022
+### Fixes
+- Use yaml renderer (with target context) for rendering selectors ([#5131](https://github.com/dbt-labs/dbt-core/issues/5131), [#5136](https://github.com/dbt-labs/dbt-core/pull/5136))
+- Fix retry logic to return values after initial try ([#5023](https://github.com/dbt-labs/dbt-core/issues/5023), [#5137](https://github.com/dbt-labs/dbt-core/pull/5137))
+- Scrub secret env vars from CommandError in exception stacktrace ([#5151](https://github.com/dbt-labs/dbt-core/issues/5151), [#5152](https://github.com/dbt-labs/dbt-core/pull/5152))
+
+## dbt-core 1.1.0-rc2 - April 21, 2022
+### Fixes
+- Restore ability to utilize `updated_at` for check_cols snapshots ([#5076](https://github.com/dbt-labs/dbt-core/issues/5076), [#5077](https://github.com/dbt-labs/dbt-core/pull/5077))
+
+### Contributors
+- [@dbeatty10](https://github.com/dbeatty10) ([#5077](https://github.com/dbt-labs/dbt-core/pull/5077))
+
+## dbt-core 1.1.0-rc1 - April 12, 2022
+### Breaking Changes
+- For adapter plugin maintainers only: Internal adapter methods `set_relations_cache` + `_relations_cache_for_schemas` each take an additional argument, for use with experimental `CACHE_SELECTED_ONLY` config ([#4688](https://github.com/dbt-labs/dbt-core/issues/4688), [#4860](https://github.com/dbt-labs/dbt-core/pull/4860))
+### Features
+- Add `--cache_selected_only` flag to cache schema object of selected models only. ([#4688](https://github.com/dbt-labs/dbt-core/issues/4688), [#4860](https://github.com/dbt-labs/dbt-core/pull/4860))
+- Support custom names for generic tests ([#3348](https://github.com/dbt-labs/dbt-core/issues/3348), [#4898](https://github.com/dbt-labs/dbt-core/pull/4898))
+- Enable dbt jobs to run downstream models based on fresher sources. Compare the source freshness results between previous and current state. If any source is fresher and/or new in current vs. previous state, dbt will run and test the downstream models in scope. Example command: `dbt build --select source_status:fresher+` ([#4050](https://github.com/dbt-labs/dbt-core/issues/4050), [#4256](https://github.com/dbt-labs/dbt-core/pull/4256))
+- converting unique key as list tests to new pytest format ([#4882](https://github.com/dbt-labs/dbt-core/issues/4882), [#4958](https://github.com/dbt-labs/dbt-core/pull/4958))
+- Add a variable called selected_resources in the Jinja context containing a list of all the resources matching the nodes for the --select, --exclude and/or --selector parameters. ([#3471](https://github.com/dbt-labs/dbt-core/issues/3471), [#5001](https://github.com/dbt-labs/dbt-core/pull/5001))
+- Support the DO_NOT_TRACK environment variable from the consoledonottrack.com initiative ([#3540](https://github.com/dbt-labs/dbt-core/issues/3540), [#5000](https://github.com/dbt-labs/dbt-core/pull/5000))
+- Add `--no-print` global flag ([#4710](https://github.com/dbt-labs/dbt-core/issues/4710), [#4854](https://github.com/dbt-labs/dbt-core/pull/4854))
+- add enabled as a source config ([#3662](https://github.com/dbt-labs/dbt-core/issues/3662), [#5008](https://github.com/dbt-labs/dbt-core/pull/5008))
+### Fixes
+- Inconsistent timestamps between inserted/updated and deleted rows in snapshots ([#4347](https://github.com/dbt-labs/dbt-core/issues/4347), [#4513](https://github.com/dbt-labs/dbt-core/pull/4513))
+- Catch more cases to retry package retrieval for deps pointing to the hub. Also start to cache the package requests. ([#4849](https://github.com/dbt-labs/dbt-core/issues/4849), [#4982](https://github.com/dbt-labs/dbt-core/pull/4982))
+- Make the warning message for a full event deque more descriptive ([#4962](https://github.com/dbt-labs/dbt-core/issues/4962), [#5011](https://github.com/dbt-labs/dbt-core/pull/5011))
+- Fix hard delete snapshot test ([#4916](https://github.com/dbt-labs/dbt-core/issues/4916), [#5020](https://github.com/dbt-labs/dbt-core/pull/5020))
+### Docs
+- Fixed capitalization in UI for exposures of `type: ml` ([#4984](https://github.com/dbt-labs/dbt-core/issues/4984), [#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
+- List packages and tags in alphabetical order ([#4984](https://github.com/dbt-labs/dbt-core/issues/4984), [#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
+- Bump jekyll from 3.8.7 to 3.9.0 ([#4984](https://github.com/dbt-labs/dbt-core/issues/4984), [#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
+- Updated docker README to reflect necessity of using BuildKit ([#4990](https://github.com/dbt-labs/dbt-core/issues/4990), [#5018](https://github.com/dbt-labs/dbt-core/pull/5018))
+### Under the Hood
+- add performance regression testing runner without orchestration ([#4021](https://github.com/dbt-labs/dbt-core/issues/4021), [#4602](https://github.com/dbt-labs/dbt-core/pull/4602))
+- Add Graph Compilation and Adapter Cache tracking ([#4625](https://github.com/dbt-labs/dbt-core/issues/4625), [#4912](https://github.com/dbt-labs/dbt-core/pull/4912))
+- Create a dbt.tests.adapter release when releasing dbt and postgres ([#4812](https://github.com/dbt-labs/dbt-core/issues/4812), [#4948](https://github.com/dbt-labs/dbt-core/pull/4948))
+- update docker image to use python 3.10.3 ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904), [#4963](https://github.com/dbt-labs/dbt-core/pull/4963))
+- updates black to 22.3.0 which fixes dependency incompatibility when running with precommit. ([#4904](https://github.com/dbt-labs/dbt-core/issues/4904), [#4972](https://github.com/dbt-labs/dbt-core/pull/4972))
+- Adds config util for ad-hoc creation of project objs or dicts ([#4808](https://github.com/dbt-labs/dbt-core/issues/4808), [#4981](https://github.com/dbt-labs/dbt-core/pull/4981))
+- Remove TableComparison and convert existing calls to use dbt.tests.util ([#4778](https://github.com/dbt-labs/dbt-core/issues/4778), [#4986](https://github.com/dbt-labs/dbt-core/pull/4986))
+- Remove unneeded create_schema in snapshot materialization ([#4742](https://github.com/dbt-labs/dbt-core/issues/4742), [#4993](https://github.com/dbt-labs/dbt-core/pull/4993))
+- Added .git-blame-ignore-revs file to mask re-formmating commits from git blame ([#5004](https://github.com/dbt-labs/dbt-core/issues/5004), [#5019](https://github.com/dbt-labs/dbt-core/pull/5019))
+- Convert version tests to pytest ([#5024](https://github.com/dbt-labs/dbt-core/issues/5024), [#5026](https://github.com/dbt-labs/dbt-core/pull/5026))
+- Updating tests and docs to show that we now support Python 3.10 ([#4974](https://github.com/dbt-labs/dbt-core/issues/4974), [#5025](https://github.com/dbt-labs/dbt-core/pull/5025))
+- Update --version output and logic ([#4724](https://github.com/dbt-labs/dbt-core/issues/4724), [#5029](https://github.com/dbt-labs/dbt-core/pull/5029))
+- ([#5033](https://github.com/dbt-labs/dbt-core/issues/5033), [#5032](https://github.com/dbt-labs/dbt-core/pull/5032))
+
+### Contributors
+- [@agoblet](https://github.com/agoblet) ([#5000](https://github.com/dbt-labs/dbt-core/pull/5000))
+- [@anaisvaillant](https://github.com/anaisvaillant) ([#4256](https://github.com/dbt-labs/dbt-core/pull/4256))
+- [@b-per](https://github.com/b-per) ([#5001](https://github.com/dbt-labs/dbt-core/pull/5001))
+- [@jonstacks](https://github.com/jonstacks) ([#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
+- [@kadero](https://github.com/kadero) ([#4513](https://github.com/dbt-labs/dbt-core/pull/4513))
+- [@karunpoudel](https://github.com/karunpoudel) ([#4860](https://github.com/dbt-labs/dbt-core/pull/4860), [#4860](https://github.com/dbt-labs/dbt-core/pull/4860))
+- [@matt-winkler](https://github.com/matt-winkler) ([#4256](https://github.com/dbt-labs/dbt-core/pull/4256))
+- [@pgoslatara](https://github.com/pgoslatara) ([#4995](https://github.com/dbt-labs/dbt-core/pull/4995))
+- [@poloaraujo](https://github.com/poloaraujo) ([#4854](https://github.com/dbt-labs/dbt-core/pull/4854))
+- [@sungchun12](https://github.com/sungchun12) ([#4256](https://github.com/dbt-labs/dbt-core/pull/4256))
+
 ## dbt-core 1.1.0-b1 - March 22, 2022
 ### Breaking Changes
 - **Relevant to maintainers of adapter plugins _only_:** The abstractmethods `get_response` and `execute` now only return a `connection.AdapterReponse` in type hints. (Previously, they could return a string.) We encourage you to update your methods to return an object of class `AdapterResponse`, or implement a subclass specific to your adapter ([#4499](https://github.com/dbt-labs/dbt-core/issues/4499), [#4869](https://github.com/dbt-labs/dbt-core/pull/4869))
```
```diff
@@ -28,7 +28,7 @@ def _is_commit(revision: str) -> bool:


 def _raise_git_cloning_error(repo, revision, error):
-    stderr = error.stderr.decode("utf-8").strip()
+    stderr = error.stderr.strip()
     if "usage: git" in stderr:
         stderr = stderr.split("\nusage: git")[0]
     if re.match("fatal: destination path '(.+)' already exists", stderr):
@@ -115,8 +115,8 @@ def checkout(cwd, repo, revision=None):
     try:
         return _checkout(cwd, repo, revision)
     except CommandResultError as exc:
-        stderr = exc.stderr.decode("utf-8").strip()
-        bad_package_spec(repo, revision, stderr)
+        stderr = exc.stderr.strip()
+        bad_package_spec(repo, revision, stderr)
@@ -142,7 +142,7 @@ def clone_and_checkout(
             subdirectory=subdirectory,
         )
     except CommandResultError as exc:
-        err = exc.stderr.decode("utf-8")
+        err = exc.stderr
         exists = re.match("fatal: destination path '(.+)' already exists", err)
         if not exists:
             raise_git_cloning_problem(repo)
```
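All three call sites above drop `.decode("utf-8")` because, elsewhere in this diff, `CommandResultError` starts storing `stderr` as already-decoded (and secret-scrubbed) text. A self-contained sketch of that refactoring pattern, using hypothetical class names rather than dbt's:

```python
class OldError(Exception):
    def __init__(self, stderr: bytes):
        # Raw bytes: every caller must remember to decode before using
        # string methods.
        self.stderr = stderr


class NewError(Exception):
    def __init__(self, stderr: bytes):
        # Decoded once, centrally; callers work with str directly.
        self.stderr = stderr.decode("utf-8")


err = NewError(b"fatal: repository not found\n")
print(err.stderr.strip())  # fatal: repository not found
```

Centralizing the decode also removes the risk of one call site decoding and another comparing `str` patterns against `bytes`.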
```diff
@@ -114,12 +114,10 @@ class DbtProjectYamlRenderer(BaseRenderer):
     def name(self):
         "Project config"

     # Uses SecretRenderer
     def get_package_renderer(self) -> BaseRenderer:
         return PackageRenderer(self.ctx_obj.cli_vars)

-    def get_selector_renderer(self) -> BaseRenderer:
-        return SelectorRenderer(self.ctx_obj.cli_vars)
-
     def render_project(
         self,
         project: Dict[str, Any],
@@ -136,8 +134,7 @@ class DbtProjectYamlRenderer(BaseRenderer):
         return package_renderer.render_data(packages)

     def render_selectors(self, selectors: Dict[str, Any]):
-        selector_renderer = self.get_selector_renderer()
-        return selector_renderer.render_data(selectors)
+        return self.render_data(selectors)

     def render_entry(self, value: Any, keypath: Keypath) -> Any:
         result = super().render_entry(value, keypath)
@@ -165,18 +162,10 @@ class DbtProjectYamlRenderer(BaseRenderer):
         return True


-class SelectorRenderer(BaseRenderer):
-    @property
-    def name(self):
-        return "Selector config"
-
-
 class SecretRenderer(BaseRenderer):
-    def __init__(self, cli_vars: Optional[Dict[str, Any]] = None) -> None:
+    def __init__(self, cli_vars: Dict[str, Any] = {}) -> None:
         # Generate contexts here because we want to save the context
         # object in order to retrieve the env_vars.
-        if cli_vars is None:
-            cli_vars = {}
         self.ctx_obj = SecretContext(cli_vars)
         context = self.ctx_obj.to_dict()
         super().__init__(context)
```
```diff
@@ -3,7 +3,7 @@ from typing import Dict, Any, Union
 from dbt.clients.yaml_helper import yaml, Loader, Dumper, load_yaml_text  # noqa: F401
 from dbt.dataclass_schema import ValidationError

-from .renderer import SelectorRenderer
+from .renderer import BaseRenderer

 from dbt.clients.system import (
     load_file_contents,
@@ -57,7 +57,7 @@ class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):
     def render_from_dict(
         cls,
         data: Dict[str, Any],
-        renderer: SelectorRenderer,
+        renderer: BaseRenderer,
     ) -> "SelectorConfig":
         try:
             rendered = renderer.render_data(data)
@@ -72,7 +72,7 @@ class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):
     def from_path(
         cls,
         path: Path,
-        renderer: SelectorRenderer,
+        renderer: BaseRenderer,
     ) -> "SelectorConfig":
         try:
             data = load_yaml_text(load_file_contents(str(path)))
```
```diff
@@ -62,6 +62,8 @@ def get_project_config(
     user_config = read_user_config(flags.PROFILES_DIR)
+    # Update flags
+    flags.set_from_args(args, user_config)
     if cli_vars is None:
         cli_vars = {}
     profile = Profile.render_from_args(args, ProfileRenderer(cli_vars), profile_name)
     # Generate a project
     project = Project.from_project_root(
```
```diff
@@ -383,10 +383,11 @@ class FailedToConnectException(DatabaseException):

 class CommandError(RuntimeException):
     def __init__(self, cwd, cmd, message="Error running command"):
+        cmd_scrubbed = list(scrub_secrets(cmd_txt, env_secrets()) for cmd_txt in cmd)
         super().__init__(message)
         self.cwd = cwd
-        self.cmd = cmd
-        self.args = (cwd, cmd, message)
+        self.cmd = cmd_scrubbed
+        self.args = (cwd, cmd_scrubbed, message)

     def __str__(self):
         if len(self.cmd) == 0:
@@ -411,9 +412,9 @@ class CommandResultError(CommandError):
     def __init__(self, cwd, cmd, returncode, stdout, stderr, message="Got a non-zero returncode"):
         super().__init__(cwd, cmd, message)
         self.returncode = returncode
-        self.stdout = stdout
-        self.stderr = stderr
-        self.args = (cwd, cmd, returncode, stdout, stderr, message)
+        self.stdout = scrub_secrets(stdout.decode("utf-8"), env_secrets())
+        self.stderr = scrub_secrets(stderr.decode("utf-8"), env_secrets())
+        self.args = (cwd, self.cmd, returncode, self.stdout, self.stderr, message)

     def __str__(self):
         return "{} running: {}".format(self.msg, self.cmd)
@@ -704,7 +705,6 @@ def missing_materialization(model, adapter_type):

 def bad_package_spec(repo, spec, error_message):
     msg = "Error checking out spec='{}' for repo {}\n{}".format(spec, repo, error_message)
-
     raise InternalException(scrub_secrets(msg, env_secrets()))
```
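The implementations of `scrub_secrets` and `env_secrets` are not shown in this diff. A minimal stand-in illustrating the behavior the exception changes above rely on (mask every `DBT_ENV_SECRET_*` value in a string; the `"*****"` placeholder and helper bodies here are assumptions, not dbt's verbatim code):

```python
import os

SECRET_ENV_PREFIX = "DBT_ENV_SECRET_"


def env_secrets():
    # Values of any DBT_ENV_SECRET_* environment variables currently set.
    return [v for k, v in os.environ.items() if k.startswith(SECRET_ENV_PREFIX) and v]


def scrub_secrets(txt: str, secrets) -> str:
    # Replace each secret value wherever it appears in the text.
    for secret in secrets:
        txt = txt.replace(secret, "*****")
    return txt


os.environ[SECRET_ENV_PREFIX + "GIT_TOKEN"] = "abc123"
msg = "fatal: could not clone https://user:abc123@github.com/x/y.git"
print(scrub_secrets(msg, env_secrets()))
# fatal: could not clone https://user:*****@github.com/x/y.git
```

Applying this in `CommandError.__init__` (rather than only at print time) means the secret never reaches `self.cmd`, `self.args`, or a traceback.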
```diff
@@ -132,7 +132,7 @@
     {% set check_cols_config = config['check_cols'] %}
     {% set primary_key = config['unique_key'] %}
     {% set invalidate_hard_deletes = config.get('invalidate_hard_deletes', false) %}
-    {% set updated_at = snapshot_get_time() %}
+    {% set updated_at = config.get('updated_at', snapshot_get_time()) %}

     {% set column_added = false %}
```
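The macro change above restores `updated_at` for check_cols snapshots by preferring a user-supplied config value and falling back to the snapshot-time helper. The same lookup semantics in a plain Python sketch (`resolve_updated_at` and the stubbed helper are illustrative names, not dbt code):

```python
def resolve_updated_at(config: dict, snapshot_get_time=lambda: "now()") -> str:
    # Mirrors `config.get('updated_at', snapshot_get_time())`:
    # use the configured column if present, else the current-time expression.
    return config.get("updated_at", snapshot_get_time())


print(resolve_updated_at({"check_cols": ["name"]}))         # now()
print(resolve_updated_at({"updated_at": "last_modified"}))  # last_modified
```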
**core/dbt/tests/fixtures/project.py** (vendored, 3 additions)

```diff
@@ -107,6 +107,9 @@ def test_data_dir(request):

 # This contains the profile target information, for simplicity in setting
 # up different profiles, particularly in the adapter repos.
+# Note: because we load the profile to create the adapter, this
+# fixture can't be used to test vars and env_vars or errors. The
+# profile must be written out after the test starts.
 @pytest.fixture(scope="class")
 def dbt_profile_target():
     return {
```
```diff
@@ -23,6 +23,7 @@ from dbt.events.test_types import IntegrationTestDebug
 # read_file
 # get_artifact
 # update_config_file
+# write_config_file
 # get_unique_ids_in_results
 # check_result_nodes_by_name
 # check_result_nodes_by_unique_id
@@ -143,6 +144,13 @@ def update_config_file(updates, *paths):
     write_file(new_yaml, *paths)


+# Write new config file
+def write_config_file(data, *paths):
+    if type(data) is dict:
+        data = yaml.safe_dump(data)
+    write_file(data, *paths)
+
+
 # Get the unique_ids in dbt command results
 def get_unique_ids_in_results(results):
     unique_ids = []
@@ -273,13 +281,15 @@ def check_relation_types(adapter, relation_to_type):
 # by doing a separate call for each set of tables/relations.
 # Wraps check_relations_equal_with_relations by creating relations
 # from the list of names passed in.
-def check_relations_equal(adapter, relation_names):
+def check_relations_equal(adapter, relation_names, compare_snapshot_cols=False):
     if len(relation_names) < 2:
         raise TestProcessingException(
             "Not enough relations to compare",
         )
     relations = [relation_from_name(adapter, name) for name in relation_names]
-    return check_relations_equal_with_relations(adapter, relations)
+    return check_relations_equal_with_relations(
+        adapter, relations, compare_snapshot_cols=compare_snapshot_cols
+    )
@@ -288,16 +298,17 @@ def check_relations_equal(adapter, relation_names):
 # adapter.get_columns_in_relation
 # adapter.get_rows_different_sql
 # adapter.execute
-def check_relations_equal_with_relations(adapter, relations):
+def check_relations_equal_with_relations(adapter, relations, compare_snapshot_cols=False):

     with get_connection(adapter):
         basis, compares = relations[0], relations[1:]
         # Skip columns starting with "dbt_" because we don't want to
         # compare those, since they are time sensitive
+        # (unless comparing "dbt_" snapshot columns is explicitly enabled)
         column_names = [
             c.name
             for c in adapter.get_columns_in_relation(basis)
-            if not c.name.lower().startswith("dbt_")
+            if not c.name.lower().startswith("dbt_") or compare_snapshot_cols
         ]

         for relation in compares:
@@ -615,7 +615,7 @@ def _connection_exception_retry(fn, max_attempts: int, attempt: int = 0):
         fire_event(RecordRetryException(exc=exc))
         fire_event(RetryExternalCall(attempt=attempt, max=max_attempts))
         time.sleep(1)
-        _connection_exception_retry(fn, max_attempts, attempt + 1)
+        return _connection_exception_retry(fn, max_attempts, attempt + 1)
     else:
         raise ConnectionException("External connection exception occurred: " + str(exc))
```
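The one-word change in the last hunk fixes a classic recursion bug: without `return`, each retried call computes a result that the outer call silently discards, so the original caller gets `None`. A minimal reproduction of the pattern (simplified, not dbt's actual implementation):

```python
def retry_broken(fn, max_attempts, attempt=0):
    try:
        return fn(attempt)
    except ConnectionError:
        if attempt <= max_attempts:
            # Bug: recursive result is computed but discarded,
            # so the outermost call falls through and returns None.
            retry_broken(fn, max_attempts, attempt + 1)


def retry_fixed(fn, max_attempts, attempt=0):
    try:
        return fn(attempt)
    except ConnectionError:
        if attempt <= max_attempts:
            # Fix: propagate the eventual result back up the call chain.
            return retry_fixed(fn, max_attempts, attempt + 1)


def flaky(attempt):
    # Fails on the first two attempts, then succeeds.
    if attempt < 2:
        raise ConnectionError("transient")
    return "payload"


print(retry_broken(flaky, 5))  # None
print(retry_fixed(flaky, 5))   # payload
```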
```diff
@@ -234,5 +234,5 @@ def _get_adapter_plugin_names() -> Iterator[str]:
         yield plugin_name


-__version__ = "1.1.0b1"
+__version__ = "1.1.0rc3"
 installed = get_installed_version()
```
```diff
@@ -273,12 +273,12 @@ def parse_args(argv=None):
     parser.add_argument("adapter")
    parser.add_argument("--title-case", "-t", default=None)
     parser.add_argument("--dependency", action="append")
-    parser.add_argument("--dbt-core-version", default="1.1.0b1")
+    parser.add_argument("--dbt-core-version", default="1.1.0rc3")
     parser.add_argument("--email")
     parser.add_argument("--author")
     parser.add_argument("--url")
     parser.add_argument("--sql", action="store_true")
-    parser.add_argument("--package-version", default="1.1.0b1")
+    parser.add_argument("--package-version", default="1.1.0rc3")
     parser.add_argument("--project-version", default="1.0")
     parser.add_argument("--no-dependency", action="store_false", dest="set_dependency")
     parsed = parser.parse_args()
```
```diff
@@ -25,7 +25,7 @@ with open(os.path.join(this_directory, "README.md")) as f:


 package_name = "dbt-core"
-package_version = "1.1.0b1"
+package_version = "1.1.0rc3"
 description = """With dbt, data analysts and engineers can build analytics \
 the way engineers build applications."""
```
```diff
@@ -14,12 +14,12 @@ FROM --platform=$build_for python:3.10.3-slim-bullseye as base
 # N.B. The refs updated automagically every release via bumpversion
 # N.B. dbt-postgres is currently found in the core codebase so a value of dbt-core@<some_version> is correct

-ARG dbt_core_ref=dbt-core@v1.1.0b1
-ARG dbt_postgres_ref=dbt-core@v1.1.0b1
-ARG dbt_redshift_ref=dbt-redshift@v1.0.0
-ARG dbt_bigquery_ref=dbt-bigquery@v1.0.0
-ARG dbt_snowflake_ref=dbt-snowflake@v1.0.0
-ARG dbt_spark_ref=dbt-spark@v1.0.0
+ARG dbt_core_ref=dbt-core@v1.1.0rc3
+ARG dbt_postgres_ref=dbt-core@v1.1.0rc3
+ARG dbt_redshift_ref=dbt-redshift@v1.1.0rc3
+ARG dbt_bigquery_ref=dbt-bigquery@v1.1.0rc3
+ARG dbt_snowflake_ref=dbt-snowflake@v1.1.0rc3
+ARG dbt_spark_ref=dbt-spark@v1.1.0rc3
 # special case args
 ARG dbt_spark_version=all
 ARG dbt_third_party
```
@@ -1 +1 @@
-version = "1.1.0b1"
+version = "1.1.0rc3"
@@ -41,7 +41,7 @@ def _dbt_psycopg2_name():


 package_name = "dbt-postgres"
-package_version = "1.1.0b1"
+package_version = "1.1.0rc3"
 description = """The postgres adapter plugin for dbt (data build tool)"""

 this_directory = os.path.abspath(os.path.dirname(__file__))
@@ -221,6 +221,36 @@ class TestAllowSecretProfilePackage(DBTIntegrationTest):
         self.assertFalse("first_dependency" in log_output)


+class TestCloneFailSecretScrubbed(DBTIntegrationTest):
+
+    def setUp(self):
+        os.environ[SECRET_ENV_PREFIX + "GIT_TOKEN"] = "abc123"
+        DBTIntegrationTest.setUp(self)
+
+    @property
+    def packages_config(self):
+        return {
+            "packages": [
+                {"git": "https://fakeuser:{{ env_var('DBT_ENV_SECRET_GIT_TOKEN') }}@github.com/dbt-labs/fake-repo.git"},
+            ]
+        }
+
+    @property
+    def schema(self):
+        return "context_vars_013"
+
+    @property
+    def models(self):
+        return "models"
+
+    @use_profile('postgres')
+    def test_postgres_fail_clone_with_scrubbing(self):
+        with self.assertRaises(dbt.exceptions.InternalException) as exc:
+            _, log_output = self.run_dbt_and_capture(['deps'])
+
+        assert "abc123" not in str(exc.exception)
+
+
 class TestEmitWarning(DBTIntegrationTest):
     @property
     def schema(self):
@@ -1,6 +0,0 @@
-
-select
-    '{{ var("variable_1") }}'::varchar as var_1,
-    '{{ var("variable_2")[0] }}'::varchar as var_2,
-    '{{ var("variable_3")["value"] }}'::varchar as var_3
-
@@ -1,19 +0,0 @@
-version: 2
-models:
-- name: complex_model
-  columns:
-  - name: var_1
-    tests:
-    - accepted_values:
-        values:
-        - abc
-  - name: var_2
-    tests:
-    - accepted_values:
-        values:
-        - def
-  - name: var_3
-    tests:
-    - accepted_values:
-        values:
-        - jkl
@@ -1,9 +0,0 @@
-version: 2
-models:
-- name: test_vars
-  columns:
-  - name: field
-    tests:
-    - accepted_values:
-        values:
-        - override
@@ -1,3 +0,0 @@
-
-
-select '{{ var("required") }}'::varchar as field
@@ -1,9 +0,0 @@
-version: 2
-models:
-- name: simple_model
-  columns:
-  - name: simple
-    tests:
-    - accepted_values:
-        values:
-        - abc
@@ -1,4 +0,0 @@
-
-select
-    '{{ var("simple") }}'::varchar as simple
-
@@ -1,57 +0,0 @@
-from test.integration.base import DBTIntegrationTest, use_profile
-
-
-class TestCLIVarOverride(DBTIntegrationTest):
-    @property
-    def schema(self):
-        return "cli_vars_028"
-
-    @property
-    def models(self):
-        return "models_override"
-
-    @property
-    def project_config(self):
-        return {
-            'config-version': 2,
-            'vars': {
-                'required': 'present',
-            },
-        }
-
-    @use_profile('postgres')
-    def test__postgres_overriden_vars_global(self):
-        self.use_default_project()
-        self.use_profile('postgres')
-
-        # This should be "override"
-        self.run_dbt(["run", "--vars", "{required: override}"])
-        self.run_dbt(["test"])
-
-
-class TestCLIVarOverridePorject(DBTIntegrationTest):
-    @property
-    def schema(self):
-        return "cli_vars_028"
-
-    @property
-    def models(self):
-        return "models_override"
-
-    @property
-    def project_config(self):
-        return {
-            'config-version': 2,
-            'vars': {
-                'test': {
-                    'required': 'present',
-                },
-            },
-        }
-
-    @use_profile('postgres')
-    def test__postgres_overriden_vars_project_level(self):
-
-        # This should be "override"
-        self.run_dbt(["run", "--vars", "{required: override}"])
-        self.run_dbt(["test"])
@@ -1,68 +0,0 @@
-from test.integration.base import DBTIntegrationTest, use_profile
-import yaml
-import json
-
-
-class TestCLIVars(DBTIntegrationTest):
-    @property
-    def schema(self):
-        return "cli_vars_028"
-
-    @property
-    def models(self):
-        return "models_complex"
-
-    @use_profile('postgres')
-    def test__postgres_cli_vars_longform(self):
-        self.use_profile('postgres')
-        self.use_default_project()
-
-        cli_vars = {
-            "variable_1": "abc",
-            "variable_2": ["def", "ghi"],
-            "variable_3": {
-                "value": "jkl"
-            }
-        }
-        results = self.run_dbt(["run", "--vars", yaml.dump(cli_vars)])
-        self.assertEqual(len(results), 1)
-        results = self.run_dbt(["test", "--vars", yaml.dump(cli_vars)])
-        self.assertEqual(len(results), 3)
-
-
-class TestCLIVarsSimple(DBTIntegrationTest):
-    @property
-    def schema(self):
-        return "cli_vars_028"
-
-    @property
-    def models(self):
-        return "models_simple"
-
-    @use_profile('postgres')
-    def test__postgres_cli_vars_shorthand(self):
-        self.use_profile('postgres')
-        self.use_default_project()
-
-        results = self.run_dbt(["run", "--vars", "simple: abc"])
-        self.assertEqual(len(results), 1)
-        results = self.run_dbt(["test", "--vars", "simple: abc"])
-        self.assertEqual(len(results), 1)
-
-    @use_profile('postgres')
-    def test__postgres_cli_vars_longer(self):
-        self.use_profile('postgres')
-        self.use_default_project()
-
-        results = self.run_dbt(["run", "--vars", "{simple: abc, unused: def}"])
-        self.assertEqual(len(results), 1)
-        results = self.run_dbt(["test", "--vars", "{simple: abc, unused: def}"])
-        self.assertEqual(len(results), 1)
-        run_results = _read_json('./target/run_results.json')
-        self.assertEqual(run_results['args']['vars'], "{simple: abc, unused: def}")
-
-
-def _read_json(path):
-    # read json generated by dbt.
-    with open(path) as fp:
-        return json.load(fp)
@@ -1 +1 @@
-version = "1.1.0b1"
+version = "1.1.0rc3"
@@ -20,7 +20,7 @@ except ImportError:


 package_name = "dbt-tests-adapter"
-package_version = "1.1.0b1"
+package_version = "1.1.0rc3"
 description = """The dbt adapter tests for adapter plugins"""

 this_directory = os.path.abspath(os.path.dirname(__file__))
68
tests/functional/cli_vars/test_cli_var_override.py
Normal file
@@ -0,0 +1,68 @@
+import pytest
+
+from dbt.tests.util import run_dbt
+
+
+models_override__schema_yml = """
+version: 2
+models:
+- name: test_vars
+  columns:
+  - name: field
+    tests:
+    - accepted_values:
+        values:
+        - override
+"""
+
+models_override__test_vars_sql = """
+select '{{ var("required") }}'::varchar as field
+"""
+
+
+# Tests that cli vars override vars set in the project config
+class TestCLIVarOverride:
+    @pytest.fixture(scope="class")
+    def models(self):
+        return {
+            "schema.yml": models_override__schema_yml,
+            "test_vars.sql": models_override__test_vars_sql,
+        }
+
+    @pytest.fixture(scope="class")
+    def project_config_update(self):
+        return {
+            "vars": {
+                "required": "present",
+            },
+        }
+
+    def test__override_vars_global(self, project):
+        run_dbt(["run", "--vars", "{required: override}"])
+        run_dbt(["test"])
+
+
+# This one switches to setting a var in 'test'
+class TestCLIVarOverrideProject:
+    @pytest.fixture(scope="class")
+    def models(self):
+        return {
+            "schema.yml": models_override__schema_yml,
+            "test_vars.sql": models_override__test_vars_sql,
+        }
+
+    @pytest.fixture(scope="class")
+    def project_config_update(self):
+        return {
+            "vars": {
+                "test": {
+                    "required": "present",
+                },
+            },
+        }
+
+    def test__override_vars_project_level(self, project):
+
+        # This should be "override"
+        run_dbt(["run", "--vars", "{required: override}"])
+        run_dbt(["test"])
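The override tests above pin down a simple precedence rule: a var passed on the command line via `--vars` wins over the same var set in `dbt_project.yml`. A minimal sketch of that precedence rule (illustrative only; the function name and dict-based lookup are assumptions, not dbt internals):

```python
# Hypothetical resolver illustrating the precedence these tests assert:
# CLI vars take priority over project-level vars.
def resolve_var(name, project_vars, cli_vars):
    if name in cli_vars:
        return cli_vars[name]      # --vars on the command line wins
    if name in project_vars:
        return project_vars[name]  # then vars from dbt_project.yml
    raise KeyError(f"Required var '{name}' is not defined")


# Mirrors TestCLIVarOverride: the project sets 'present', the CLI overrides it
print(resolve_var("required", {"required": "present"}, {"required": "override"}))
```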
212
tests/functional/cli_vars/test_cli_vars.py
Normal file
@@ -0,0 +1,212 @@
+import pytest
+import yaml
+
+from dbt.tests.util import run_dbt, get_artifact, write_config_file
+from dbt.exceptions import RuntimeException, CompilationException
+
+
+models_complex__schema_yml = """
+version: 2
+models:
+- name: complex_model
+  columns:
+  - name: var_1
+    tests:
+    - accepted_values:
+        values:
+        - abc
+  - name: var_2
+    tests:
+    - accepted_values:
+        values:
+        - def
+  - name: var_3
+    tests:
+    - accepted_values:
+        values:
+        - jkl
+"""
+
+models_complex__complex_model_sql = """
+select
+    '{{ var("variable_1") }}'::varchar as var_1,
+    '{{ var("variable_2")[0] }}'::varchar as var_2,
+    '{{ var("variable_3")["value"] }}'::varchar as var_3
+"""
+
+models_simple__schema_yml = """
+version: 2
+models:
+- name: simple_model
+  columns:
+  - name: simple
+    tests:
+    - accepted_values:
+        values:
+        - abc
+"""
+
+models_simple__simple_model_sql = """
+select
+    '{{ var("simple") }}'::varchar as simple
+"""
+
+really_simple_model_sql = """
+select 'abc' as simple
+"""
+
+
+class TestCLIVars:
+    @pytest.fixture(scope="class")
+    def models(self):
+        return {
+            "schema.yml": models_complex__schema_yml,
+            "complex_model.sql": models_complex__complex_model_sql,
+        }
+
+    def test__cli_vars_longform(self, project):
+        cli_vars = {
+            "variable_1": "abc",
+            "variable_2": ["def", "ghi"],
+            "variable_3": {"value": "jkl"},
+        }
+        results = run_dbt(["run", "--vars", yaml.dump(cli_vars)])
+        assert len(results) == 1
+        results = run_dbt(["test", "--vars", yaml.dump(cli_vars)])
+        assert len(results) == 3
+
+
+class TestCLIVarsSimple:
+    @pytest.fixture(scope="class")
+    def models(self):
+        return {
+            "schema.yml": models_simple__schema_yml,
+            "simple_model.sql": models_simple__simple_model_sql,
+        }
+
+    def test__cli_vars_shorthand(self, project):
+        results = run_dbt(["run", "--vars", "simple: abc"])
+        assert len(results) == 1
+        results = run_dbt(["test", "--vars", "simple: abc"])
+        assert len(results) == 1
+
+    def test__cli_vars_longer(self, project):
+        results = run_dbt(["run", "--vars", "{simple: abc, unused: def}"])
+        assert len(results) == 1
+        results = run_dbt(["test", "--vars", "{simple: abc, unused: def}"])
+        assert len(results) == 1
+        run_results = get_artifact(project.project_root, "target", "run_results.json")
+        assert run_results["args"]["vars"] == "{simple: abc, unused: def}"
+
+
+class TestCLIVarsProfile:
+    @pytest.fixture(scope="class")
+    def models(self):
+        return {
+            "schema.yml": models_simple__schema_yml,
+            "simple_model.sql": really_simple_model_sql,
+        }
+
+    def test_cli_vars_in_profile(self, project, dbt_profile_data):
+        profile = dbt_profile_data
+        profile["test"]["outputs"]["default"]["host"] = "{{ var('db_host') }}"
+        write_config_file(profile, project.profiles_dir, "profiles.yml")
+        with pytest.raises(RuntimeException):
+            results = run_dbt(["run"])
+        results = run_dbt(["run", "--vars", "db_host: localhost"])
+        assert len(results) == 1
+
+
+class TestCLIVarsPackages:
+    @pytest.fixture(scope="class")
+    def models(self):
+        return {
+            "schema.yml": models_simple__schema_yml,
+            "simple_model.sql": really_simple_model_sql,
+        }
+
+    @pytest.fixture(scope="class")
+    def packages_config(self):
+        return {
+            "packages": [
+                {
+                    "git": "https://github.com/dbt-labs/dbt-integration-project",
+                    "revision": "1.1",
+                }
+            ]
+        }
+
+    def test_cli_vars_in_packages(self, project, packages_config):
+        # Run working deps and run commands
+        run_dbt(["deps"])
+        results = run_dbt(["run"])
+        assert len(results) == 1
+
+        # Change packages.yml to contain a var
+        packages = packages_config
+        packages["packages"][0]["revision"] = "{{ var('dip_version') }}"
+        write_config_file(packages, project.project_root, "packages.yml")
+
+        # Without vars args deps fails
+        with pytest.raises(RuntimeException):
+            run_dbt(["deps"])
+
+        # With vars arg deps succeeds
+        results = run_dbt(["deps", "--vars", "dip_version: 1.1"])
+        assert results is None
+
+        # Do a dbt run command so adapter exists
+        results = run_dbt(["run", "--vars", "dip_version: 1.1"], expect_pass=False)
+        assert len(results) == 4
+
+
+initial_selectors_yml = """
+selectors:
+  - name: dev_defer_snapshots
+    default: "{{ target.name == 'dev' | as_bool }}"
+    definition:
+      method: fqn
+      value: '*'
+      exclude:
+        - method: config.materialized
+          value: snapshot
+"""
+
+var_selectors_yml = """
+selectors:
+  - name: dev_defer_snapshots
+    default: "{{ var('snapshot_target') == 'dev' | as_bool }}"
+    definition:
+      method: fqn
+      value: '*'
+      exclude:
+        - method: config.materialized
+          value: snapshot
+"""
+
+
+class TestCLIVarsSelectors:
+    @pytest.fixture(scope="class")
+    def models(self):
+        return {
+            "schema.yml": models_simple__schema_yml,
+            "simple_model.sql": really_simple_model_sql,
+        }
+
+    @pytest.fixture(scope="class")
+    def selectors(self):
+        return initial_selectors_yml
+
+    def test_vars_in_selectors(self, project):
+        # initially runs ok
+        results = run_dbt(["run"])
+        assert len(results) == 1
+
+        # Update the selectors.yml file to have a var
+        write_config_file(var_selectors_yml, project.project_root, "selectors.yml")
+        with pytest.raises(CompilationException):
+            run_dbt(["run"])
+
+        # Var in cli_vars works
+        results = run_dbt(["run", "--vars", "snapshot_target: dev"])
+        assert len(results) == 1
@@ -0,0 +1,113 @@
+import pytest
+from dbt.tests.util import run_dbt, check_relations_equal
+
+
+snapshot_sql = """
+{% snapshot snapshot_check_cols_updated_at_actual %}
+    {{
+        config(
+            target_database=database,
+            target_schema=schema,
+            unique_key='id',
+            strategy='check',
+            check_cols='all',
+            updated_at="'" ~ var("updated_at") ~ "'::timestamp",
+        )
+    }}
+
+    {% if var('version') == 1 %}
+
+    select 'a' as id, 10 as counter, '2016-01-01T00:00:00Z'::timestamp as timestamp_col union all
+    select 'b' as id, 20 as counter, '2016-01-01T00:00:00Z'::timestamp as timestamp_col
+
+    {% elif var('version') == 2 %}
+
+    select 'a' as id, 30 as counter, '2016-01-02T00:00:00Z'::timestamp as timestamp_col union all
+    select 'b' as id, 20 as counter, '2016-01-01T00:00:00Z'::timestamp as timestamp_col union all
+    select 'c' as id, 40 as counter, '2016-01-02T00:00:00Z'::timestamp as timestamp_col
+
+    {% else %}
+
+    select 'a' as id, 30 as counter, '2016-01-02T00:00:00Z'::timestamp as timestamp_col union all
+    select 'c' as id, 40 as counter, '2016-01-02T00:00:00Z'::timestamp as timestamp_col
+
+    {% endif %}
+
+{% endsnapshot %}
+"""
+
+expected_csv = """
+id,counter,timestamp_col,dbt_scd_id,dbt_updated_at,dbt_valid_from,dbt_valid_to
+a,10,2016-01-01 00:00:00.000,927354aa091feffd9437ead0bdae7ae1,2016-07-01 00:00:00.000,2016-07-01 00:00:00.000,2016-07-02 00:00:00.000
+b,20,2016-01-01 00:00:00.000,40ace4cbf8629f1720ec8a529ed76f8c,2016-07-01 00:00:00.000,2016-07-01 00:00:00.000,
+a,30,2016-01-02 00:00:00.000,e9133f2b302c50e36f43e770944cec9b,2016-07-02 00:00:00.000,2016-07-02 00:00:00.000,
+c,40,2016-01-02 00:00:00.000,09d33d35101e788c152f65d0530b6837,2016-07-02 00:00:00.000,2016-07-02 00:00:00.000,
+""".lstrip()
+
+
+@pytest.fixture(scope="class")
+def snapshots():
+    return {"snapshot_check_cols_updated_at_actual.sql": snapshot_sql}
+
+
+@pytest.fixture(scope="class")
+def seeds():
+    return {"snapshot_check_cols_updated_at_expected.csv": expected_csv}
+
+
+@pytest.fixture(scope="class")
+def project_config_update():
+    return {
+        "seeds": {
+            "quote_columns": False,
+            "test": {
+                "snapshot_check_cols_updated_at_expected": {
+                    "+column_types": {
+                        "timestamp_col": "timestamp without time zone",
+                        "dbt_updated_at": "timestamp without time zone",
+                        "dbt_valid_from": "timestamp without time zone",
+                        "dbt_valid_to": "timestamp without time zone",
+                    },
+                },
+            },
+        },
+    }
+
+
+def test_simple_snapshot(project):
+    """
+    Test that the `dbt_updated_at` column reflects the `updated_at` timestamp expression in the config.
+
+    Approach:
+    1. Create a table that represents the expected data after a series of snapshots
+        - Use dbt seed to create the expected relation (`snapshot_check_cols_updated_at_expected`)
+    2. Execute a series of snapshots to create the data
+        - Use a series of (3) dbt snapshot commands to create the actual relation (`snapshot_check_cols_updated_at_actual`)
+        - The logic can switch between 3 different versions of the data (depending on the `version` number)
+        - The `updated_at` value is passed in via `--vars` and cast to a timestamp in the snapshot config
+    3. Compare the two relations for equality
+    """
+
+    # 1. Create a table that represents the expected data after a series of snapshots
+    results = run_dbt(["seed", "--show", "--vars", "{version: 1, updated_at: 2016-07-01}"])
+    assert len(results) == 1
+
+    # 2. Execute a series of snapshots to create the data
+
+    # Snapshot day 1
+    results = run_dbt(["snapshot", "--vars", "{version: 1, updated_at: 2016-07-01}"])
+    assert len(results) == 1
+
+    # Snapshot day 2
+    results = run_dbt(["snapshot", "--vars", "{version: 2, updated_at: 2016-07-02}"])
+    assert len(results) == 1
+
+    # Snapshot day 3
+    results = run_dbt(["snapshot", "--vars", "{version: 3, updated_at: 2016-07-03}"])
+    assert len(results) == 1
+
+    # 3. Compare the two relations for equality
+    check_relations_equal(
+        project.adapter,
+        ["snapshot_check_cols_updated_at_actual", "snapshot_check_cols_updated_at_expected"],
+        compare_snapshot_cols=True,
+    )
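The "check" strategy exercised by this snapshot test can be pictured with a toy model (an illustrative sketch, not dbt's implementation; the function and field names are invented): on each run, compare the incoming rows against the currently open snapshot records, close any record whose tracked columns changed, and open new records stamped with the `updated_at` value.

```python
def snapshot_check_all(records, current_rows, updated_at):
    """Toy 'check' snapshot: records are dicts with id/data/valid_from/valid_to."""
    # Only records with valid_to=None are "open" (current) versions
    open_by_id = {r["id"]: r for r in records if r["valid_to"] is None}
    for row_id, data in current_rows.items():
        existing = open_by_id.get(row_id)
        if existing is None:
            # First time this id is seen: open a new record
            records.append({"id": row_id, "data": data,
                            "valid_from": updated_at, "valid_to": None})
        elif existing["data"] != data:
            # A tracked column changed: close the old record, open a new one
            existing["valid_to"] = updated_at
            records.append({"id": row_id, "data": data,
                            "valid_from": updated_at, "valid_to": None})
    return records


# Replay the first two "days" from the test data above
records = []
snapshot_check_all(records, {"a": 10, "b": 20}, "2016-07-01")
snapshot_check_all(records, {"a": 30, "b": 20, "c": 40}, "2016-07-02")
```

After these two runs the toy model holds four records (`a` at 10 closed on day 2, `b` still open, `a` at 30 open, `c` open), mirroring the shape of the rows in `expected_csv` above, minus the hash and audit columns.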
59
tests/unit/test_connection_retries.py
Normal file
@@ -0,0 +1,59 @@
+import functools
+import pytest
+from requests.exceptions import RequestException
+from dbt.exceptions import ConnectionException
+from dbt.utils import _connection_exception_retry
+
+
+def no_retry_fn():
+    return "success"
+
+
+class TestNoRetries:
+    def test_no_retry(self):
+        fn_to_retry = functools.partial(no_retry_fn)
+        result = _connection_exception_retry(fn_to_retry, 3)
+
+        expected = "success"
+
+        assert result == expected
+
+
+def no_success_fn():
+    raise RequestException("You'll never pass")
+    return "failure"
+
+
+class TestMaxRetries:
+    def test_no_retry(self):
+        fn_to_retry = functools.partial(no_success_fn)
+
+        with pytest.raises(ConnectionException):
+            _connection_exception_retry(fn_to_retry, 3)
+
+
+def single_retry_fn():
+    global counter
+    if counter == 0:
+        counter += 1
+        raise RequestException("You won't pass this one time")
+    elif counter == 1:
+        counter += 1
+        return "success on 2"
+
+    return "How did we get here?"
+
+
+class TestSingleRetry:
+    def test_no_retry(self):
+        global counter
+        counter = 0
+
+        fn_to_retry = functools.partial(single_retry_fn)
+        result = _connection_exception_retry(fn_to_retry, 3)
+        expected = "success on 2"
+
+        # We need to test the return value here, not just that it did not throw an error.
+        # If the value is not being passed it causes cryptic errors
+        assert result == expected
+        assert counter == 2
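These three unit tests pin down the contract of `_connection_exception_retry`: pass a successful return value through unchanged, retry on a transient request error, and raise `ConnectionException` once retries are exhausted. A self-contained sketch of that contract (the names `TransientError`, `GaveUpError`, and `retry_connection` are placeholders, not dbt's code):

```python
# Placeholder exception types standing in for requests.RequestException
# and dbt's ConnectionException, respectively.
class TransientError(Exception):
    pass


class GaveUpError(Exception):
    pass


def retry_connection(fn, max_retries, attempt=0):
    """Call fn(); retry on TransientError up to max_retries times."""
    try:
        return fn()  # a successful value is passed through unchanged
    except TransientError:
        if attempt >= max_retries:
            # Out of attempts: surface a terminal error, as dbt raises
            # ConnectionException after exhausting its retries
            raise GaveUpError("retries exhausted")
        # A real implementation would sleep / back off here
        return retry_connection(fn, max_retries, attempt + 1)
```

A function that fails once and then succeeds returns its value on the second call, which is exactly the behavior `TestSingleRetry` asserts via its module-level `counter`.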