Compare commits

...

144 Commits

Author SHA1 Message Date
Kyle Wigley
ed7dbcec21 test commit 2021-09-23 13:40:46 -04:00
Kyle Wigley
fb84abd28c run performance action when code changes in performance dir 2021-09-23 13:39:32 -04:00
dave-connors-3
f4f5d31959 Feature/catalog relational objects (#3922)
* filter to relational nodes

* cleanup

* flake formatting

* changelog
2021-09-23 08:54:05 -07:00
Jeremy Cohen
e7e12075b9 Fix batching for Snowflake seeds >10k rows (#3942)
* Call get_batch_size in snowflake__load_csv_rows

* Git ignore big csv. Update changelog
2021-09-23 08:49:52 -07:00
Emily Rockman
74dda5aa19 Merge pull request #3893 from dbt-labs/2798_enact_deprecations
removed deprecation for materialization-return and replaced with exception
2021-09-22 14:35:05 -05:00
Emily Rockman
092e96ce70 Merge branch 'develop' into 2798_enact_deprecations 2021-09-22 14:09:35 -05:00
Kyle Wigley
18102027ba Pull in changes for the 0.21.0rc1 release (#3935)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2021-09-22 13:53:43 -05:00
Emily Rockman
f80825d63e updated changelog 2021-09-22 12:55:49 -05:00
Kyle Wigley
9316e47b77 Pull in changes for the 0.21.0rc1 release (#3935)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2021-09-22 13:25:46 -04:00
Emily Rockman
f99cf1218a fixed conflict 2021-09-22 11:36:22 -05:00
Emily Rockman
5871915ce9 Merge branch '2798_enact_deprecations' of https://github.com/dbt-labs/dbt into 2798_enact_deprecations
# Conflicts:
#	test/integration/012_deprecation_tests/test_deprecations.py
2021-09-22 11:34:51 -05:00
Emily Rockman
5ce290043f more explicit error check 2021-09-22 11:16:59 -05:00
Emily Rockman
080d27321b removed deprecation for materialization-return and replaced it with an exception 2021-09-22 11:16:59 -05:00
Gerda Shank
1d0936bd14 Merge pull request #3889 from dbt-labs/3886_pp_log_levels
[#3886] Tweak partial parsing log messages
2021-09-22 10:48:21 -04:00
Gerda Shank
706b8ca9df Merge pull request #3839 from dbt-labs/2990_global_cli_flags
[#2990] Normalize global CLI args/flags
2021-09-22 10:47:54 -04:00
Nathaniel May
7dc491b7ba Merge pull request #3936 from dbt-labs/regression-test-tweaks
Performance Regression Testing: Add timestamps to results and make filenames unique.
2021-09-22 10:18:24 -04:00
Gerda Shank
779c789a64 [#2990] Normalize global CLI args/flags 2021-09-22 09:58:07 -04:00
Gerda Shank
409b4ba109 [#3886] Tweak partial parsing log messages 2021-09-22 09:20:24 -04:00
Nathaniel May
59d131d3ac add timestamps to results, and make filenames unique 2021-09-21 18:10:37 -04:00
Joel Labes
6563d09ba7 Add logging for skipped resources (#3833)
Add --greedy flag to subparser

Add greedy flag, override default behaviour when parsing union

Add greedy support to ls (I think?!)

That was suspiciously easy

Fix flake issues

Try adding tests with greedy support

Remove trailing whitespace

Fix return type for select_nodes

forgot to add greedy arg

Remove incorrectly expected test

Use named param for greedy

Pull alert_unused_nodes out into its own function

rename resources -> tests

Add expand_selection explanation of --greedy flag

Add greedy support to yaml selectors. Update warning, tests

Fix failing tests

Fix tests. Changelog

Fix flake8

Co-authored-by: Joel Labes c/o Jeremy Cohen <jeremy@dbtlabs.com>
2021-09-18 19:58:38 +02:00
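For context on the selector support added in the squashed commit above: a minimal selectors.yml sketch, assuming the property is spelled `greedy` on a selection criterion as the commit messages suggest; the selector name and criterion value are placeholders, not from this PR.

```yaml
selectors:
  - name: customers_and_tests
    definition:
      method: fqn
      value: staging.customers
      greedy: true   # assumption: also pull in tests that touch nodes outside the selection
```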
Kyle Wigley
05dea18b62 update develop (#3914)
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
Co-authored-by: sungchun12 <sungwonchung3@gmail.com>
Co-authored-by: Drew Banin <drew@fishtownanalytics.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2021-09-17 14:07:42 -04:00
Nathaniel May
d7177c7d89 Merge pull request #3910 from dbt-labs/sample-more-exp-parser
sample exp parser more often
2021-09-17 13:34:14 -04:00
Gerda Shank
35f0fea804 Merge pull request #3888 from dbt-labs/skip_reloading_files
[#3563] Don't reload and validate schema files if they haven't changed
2021-09-17 13:32:01 -04:00
Gerda Shank
8953c7c533 [#3563] Partial parsing: don't reload and validate schema files if they haven't changed 2021-09-17 13:27:25 -04:00
Jeremy Cohen
76c59a5545 Fix logging for default selector (#3892)
* Fix logging for default selector

* Fix flake8, mypy
2021-09-17 18:00:14 +02:00
Emily Rockman
237048c7ac more explicit error check 2021-09-17 10:51:53 -05:00
Emily Rockman
30ff395b7b removed deprecation for materialization-return and replaced it with an exception 2021-09-17 10:51:53 -05:00
Nathaniel May
5c0a31b829 sample exp parser more often 2021-09-17 11:42:35 -04:00
Nathaniel May
243bc3d41d Merge pull request #3877 from dbt-labs/exp-parser-detect-macros
Make ModelParser detect macro overrides for ref, source, and config.
2021-09-17 11:39:16 -04:00
Sam Debruyn
67b594a950 group by column_name in accepted_values (#3906)
* group by column_name in accepted_values
Group by index is not ANSI SQL and is not supported by every database engine (e.g. MS SQL). Use group by column_name in shared code.

* update changelog
2021-09-17 17:08:41 +02:00
Kyle Wigley
2493c21649 Cleanup CHANGELOG (#3902) 2021-09-17 09:22:37 -04:00
Kyle Wigley
d3826e670f Build task arg parity (#3884) 2021-09-16 16:21:37 -04:00
Drew Banin
4b5b1696b7 Merge pull request #3894 from dbt-labs/feature/source-freshness-timing-in-artifacts
Add timing and thread info to sources.json artifact
2021-09-16 14:59:18 -04:00
Drew Banin
abb59ef14f add changelog entry 2021-09-16 13:54:04 -04:00
Drew Banin
3b7c2816b9 (#3804) Add timing and thread info to sources.json artifact 2021-09-16 13:28:08 -04:00
Teddy
484517416f 3448 user defined default selection criteria (#3875)
* Added selector default

* Updated code from review feedback

* Fix schema.yml location

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-16 16:13:59 +02:00
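A minimal selectors.yml sketch of the default-selector behavior added above, assuming the `default: true` property marks the selector used when no --select/--selector flags are passed; the selector name and criterion are placeholders.

```yaml
selectors:
  - name: nightly
    default: true            # assumption: used when no --select / --selector flags are given
    definition:
      method: tag
      value: nightly
```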
Nathaniel May
39447055d3 add macro override detection for ref, source, and config for ModelParser 2021-09-15 12:09:47 -04:00
Kyle Wigley
95cca277c9 Handle failing tests for build command (#3792)
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-15 10:39:19 -04:00
Christophe Oudar
96083dcaf5 add support for execution project for BigQuery adapter (#3707)
* add support for execution project for BigQuery adapter

* Add integration test for BigQuery execution_project

* review changes

* Update changelog
2021-09-15 14:09:08 +02:00
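A minimal profiles.yml sketch of the feature above, assuming the profile key is `execution_project` as in the PR title; project IDs and other values are placeholders.

```yaml
my_bigquery_profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: data-storage-project              # where models are materialized
      execution_project: query-billing-project   # assumption: where query jobs run and are billed
      dataset: analytics
      threads: 4
```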
Emily Rockman
75b4cf691b Merge pull request #3879 from dbt-labs/emmyoop/update-contributing
Fixed typo and added some more setup details to CONTRIBUTING.md
2021-09-14 09:35:12 -05:00
Emily Rockman
7c9171b00b Fixed typo and added some more setup details 2021-09-13 14:03:02 -05:00
Kyle Wigley
3effade266 deepcopy args when passed down to rpc task (#3850) 2021-09-10 16:34:57 -04:00
Jeremy Cohen
44e7390526 Specify macro_namespace of global_project dispatch macros (#3851)
* Specify macro_namespace of global_project dispatch macros

* Dispatch get_custom_alias, too

* Add integration tests

* Add changelog entry
2021-09-09 12:59:39 +02:00
Jeremy Cohen
c141798abc Use a macro for where subquery in tests (#3859)
* Use a macro for where subquery in tests

* Fix existing tests

* Add test case reproducing #3857

* Add changelog entry
2021-09-09 12:38:09 +02:00
sungchun12
df7ec3fb37 Fix dbt deps sorting behavior for non-standard version semantics (#3856)
* robust sorting and default to original version str

* more unique version semantics

* add another non-standard version example

* fix mypy issue
2021-09-09 12:13:09 +02:00
AndreasTA-AW
90e5507d03 #3682 Changed how tables and views are generated to be able to use differen… (#3691)
* Changed how tables and views are generated to be able to use different options

* 3682 added unit tests

* 3682 had conflict in changelog and became a bit messy

* 3682 Tested to add default kms to dataset and accidentally pushed the changes
2021-09-09 12:01:02 +02:00
leahwicz
332d3494b3 Adding ADR directory, guidelines, first one (#3844)
* Adding ADR README

* Adding perf testing ADR

* fill in adr sections

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-09-07 14:05:21 -04:00
Anna Filippova
6393f5a5d7 Feature: Add support for Package name changes on the Hub (#3825)
* Add warning about new package name

* Update CHANGELOG.md

* make linter happy

* Add warning about new package name

* Update CHANGELOG.md

* make linter happy

* move warnings to deprecations

* Update core/dbt/clients/registry.py

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

* add comments for posterity

* Update core/dbt/deprecations.py

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* add deprecation test

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-07 11:53:02 -04:00
Kyle Wigley
ce97a9ca7a explicitly define __init__ for Profile class (#3855) 2021-09-07 10:38:51 -04:00
Jeremy Cohen
9af071bfe4 Add adapter_unique_id to invocation tracking (#3796)
* Add properties + methods for adapter_unique_id

* Turn on tracking
2021-09-03 16:38:20 +02:00
sungchun12
45a41202f3 fix prerelease imports with loose version semantics (#3852)
* fix prereleases imports with looseversion

* update for non-standard versions
2021-09-02 17:52:36 +02:00
Kyle Wigley
9768999ca1 Only set single_threaded for the rpc list task, not the cli list task (#3848) 2021-09-02 10:11:39 -04:00
juma-adoreme
fc0d11c0a5 Parametrize key selection for the list task (#3838)
* Parametrize key selection for list task

* Remove trailing whitespace

* Add output_keys to RPC List Parameters

* Move up changelog entry, add contributor note

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-02 14:58:00 +02:00
Joel Labes
e6344205bb Add colourful count of pass/fail tests in dbt debug (#3832)
* Add colourful count of pass/fail tests in dbt debug

* Remove number of checks, move error messages into shared list

* Fix flake issues

* Update CHANGELOG.md
2021-09-02 12:15:03 +02:00
Jason Gluck
9d7a6556ef configurable postgres connect timeout (#3582)
* configurable postgres connect timeout

* changelog for #3582

* test default and change connect_timeout

* Move up contributor note in changelog

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 19:45:31 +02:00
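A minimal profiles.yml sketch for the setting above, assuming the new key is `connect_timeout` (in seconds) on a postgres target; the other values are placeholders.

```yaml
default:
  target: dev
  outputs:
    dev:
      type: postgres
      host: localhost
      port: 5432
      user: dbt_user
      password: "{{ env_var('DBT_PASSWORD') }}"
      dbname: analytics
      schema: dbt_dev
      threads: 4
      connect_timeout: 30   # assumption: give up on an unreachable host after 30 seconds
```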
Daniel Bartley
15f4add0b8 Add target_project and target_dataset config aliases for snapshots on BigQuery (#3834)
* add bq alias for target_project and target_dataset

* Update CHANGELOG.md

add #3694 to changelog

* Update CHANGELOG.md

Be more specific about the change to bigquery synonym for schema only.

* Set integration test bigquery configs to use alias

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 16:08:20 +02:00
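A minimal dbt_project.yml sketch of the new aliases, assuming `target_project` / `target_dataset` are accepted anywhere `target_database` / `target_schema` are for snapshots on BigQuery; project and folder names are placeholders.

```yaml
snapshots:
  my_project:
    +target_project: analytics-prod   # assumption: BigQuery-friendly alias for target_database
    +target_dataset: snapshots        # assumption: alias for target_schema
```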
Anders
464becacd0 fewer adapters will need to re-implement basic_load_csv_rows (#3623)
* fewer adapters will need to re-implement basic_load_csv_rows

* hack version

* reordering per convention

* make redundant basic_load_csv_rows

* for next version

* Update core/dbt/include/global_project/macros/materializations/seed/seed.sql

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

* Move up changelog entry

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 15:47:36 +02:00
sungchun12
51a76d0d63 Better dbt packages version logging to aid in upgrading outdated packages (#3759)
* start blueprinting changes

* extend registry handler for latest package version

* conditional logging for latest version

* remove todo

* add conditional logging

* Upgrades is clearer

* update if elif conditions and log msg

* remove TODO

* fix flake8 errors

* blueprint unit tests

* conditions specific to hub registry

* 1 passing test for get latest version

* DRY method calls

* move version latest to hub only

* add a new line

* remove other draft tests

* update changelog

* update log language for clarity

* pass flake8

* fix changelog

* Update test/unit/test_deps.py

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* update changelog

* remove hub language

* sort for latest version and include prereleases

* fix flake8

* resolves another issue

* fix prerelease string formatting

* fix broken test

* update logging to past tense

* built-in version sorting

* handle prereleases for latest version checks

* get version latest unit test based on prerelease

* update unit test for sorting functionality

* consistent test names

* fix flake8

* clean up contributors list

* simplify if else logic

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 10:31:45 +02:00
Slava Kalashnikov
052e54d43a BigQuery copy materialization enhancement (#3606)
* Change BigQuery copy materialization

Change BigQuery copy materialization macros to copy data from several sources into single target

* Change BigQuery copy materialization

Change BigQuery connections.py to copy data from several sources into single target via copy materialization

* Change BigQuery copy materialization

Test to check default value of `copy_materialization` if it is absent in config

* Change BigQuery copy materialization

Update changelog

* Update changelog

* Var renaming + test addition

* Changelog updated

* Changelog updated

* Fix test for copy table

* Update test_bigquery_adapter.py

* Update test_bigquery_adapter.py

* Update impl.py

* Update connections.py

* Update test_bigquery_adapter.py

* Update test_bigquery_adapter.py

* Update connections.py

* Align calls from mock and from adapter

* Split long code lines

* Create additional.sql

* Update copy_as_several_tables.sql

* Update schema.yml

* Update copy.sql

* Update connections.py

* Update test_bigquery_copy_models.py

* Add contributor
2021-08-30 16:28:35 +02:00
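For context on the configs exercised by the PR above: a minimal dbt_project.yml sketch, assuming the BigQuery `copy` materialization and its `copy_materialization` setting ('table' to overwrite, 'incremental' to append) behave as the added tests suggest; project and folder names are placeholders.

```yaml
models:
  my_project:
    imports:
      +materialized: copy
      +copy_materialization: incremental   # assumption: append to the target table instead of overwriting it
```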
Kyle Wigley
9e796671dd Update workflow concurrency (#3824) 2021-08-26 17:13:54 -04:00
Kyle Wigley
a9a6254f52 Address unexpected cancelled CI workflows and stop blocking Postgres integration tests (#3813) 2021-08-26 10:37:50 -04:00
Kyle Wigley
8b3a09c7ae Run postgres integration tests when dev dependency changes (#3819) 2021-08-26 10:36:24 -04:00
Jeremy Cohen
6aa4d812d4 Rewrite generic tests to support column expressions (#3812)
* Rewrite generic tests to support column expressions, too

* Fix naming
2021-08-26 10:30:03 -04:00
Kyle Wigley
07fa719fb0 Revert "Bump freezegun from 0.3.12 to 1.1.0" (#3818)
This reverts commit 650b34ae24.
2021-08-26 10:11:56 -04:00
dependabot[bot]
650b34ae24 Bump freezegun from 0.3.12 to 1.1.0 (#3206)
Bumps [freezegun](https://github.com/spulec/freezegun) from 0.3.12 to 1.1.0.
- [Release notes](https://github.com/spulec/freezegun/releases)
- [Changelog](https://github.com/spulec/freezegun/blob/master/CHANGELOG)
- [Commits](https://github.com/spulec/freezegun/compare/0.3.12...1.1.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 18:18:37 -04:00
dependabot[bot]
0a935855f3 Update sqlparse requirement from <0.4,>=0.2.3 to >=0.2.3,<0.5 in /core (#3074)
Updates the requirements on [sqlparse](https://github.com/andialbrecht/sqlparse) to permit the latest version.
- [Release notes](https://github.com/andialbrecht/sqlparse/releases)
- [Changelog](https://github.com/andialbrecht/sqlparse/blob/master/CHANGELOG)
- [Commits](https://github.com/andialbrecht/sqlparse/compare/0.2.3...0.4.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 09:59:36 -04:00
dependabot[bot]
d500aae4dc Bump ubuntu from 18.04 to 20.04 (#3073)
Bumps ubuntu from 18.04 to 20.04.

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 09:35:24 -04:00
Kyle Wigley
370d3e746d Remove fishtown-analytics references 😢 (#3801) 2021-08-25 09:24:41 -04:00
Kyle Wigley
ab06149c81 Moving CI to GitHub actions (#3669)
* test

* test test

* try this again

* test actions in same repo

* nvm revert

* formatting

* fix sh script for building dists

* fix windows build

* add concurrency

* fix random 'Cannot track experimental parser info when active user is None' error

* fix build workflow

* test slim ci

* has changes

* set up postgres for other OS

* update descriptions

* turn off python3.9 unit tests

* add changelog

* clean up todo

* Update .github/workflows/main.yml

* create actions for common code

* temp commit to test

* cosmetic updates

* dev review feedback

* updates

* fix build checks

* rm auto formatting changes

* review feedback: update order of script for setting up postgres on macos runner

* review feedback: add reasoning for not using secrets in workflow

* review feedback: rm unnecessary changes

* more review feedback

* test pull_request_target action

* fix path to cli tool

* split up lint and unit workflows for clear responsibilities

* rm `branches-ignore` filter from pull request trigger

* testing push event

* test

* try this again

* test actions in same repo

* nvm revert

* formatting

* fix windows build

* add concurrency

* fix build workflow

* test slim ci

* has changes

* set up postgres for other OS

* update descriptions

* turn off python3.9 unit tests

* add changelog

* clean up todo

* Update .github/workflows/main.yml

* create actions for common code

* cosmetic updates

* dev review feedback

* updates

* fix build checks

* rm auto formatting changes

* review feedback: add reasoning for not using secrets in workflow

* review feedback: rm unnecessary changes

* more review feedback

* test pull_request_target action

* fix path to cli tool

* split up lint and unit workflows for clear responsibilities

* rm `branches-ignore` filter from pull request trigger

* test dynamic matrix generation

* update label logic

* finishing touches

* align naming

* pass opts to pytest

* slim down push matrix, there are a lot of jobs

* test bump num of proc

* update matrix for all event triggers

* handle case when no changes require integration tests

* dev review feedback

* clean up and add branch name for testing

* Add test results publishing as artifact (#3794)

* Test failures file

* Add testing branch

* Adding upload steps

* Adding date to name

* Adding to integration

* Always upload artifacts

* Adding adapter type

* Always publish unit test results

* Adding comments

* rm unnecessary env var

* fix changelog

* update job name

* clean up python deps

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-08-24 17:12:42 -04:00
Gerda Shank
e72895c7c9 Merge pull request #3791 from dbt-labs/3210_select_equals_model
[#3210] Make --models and --select synonyms, except for 'ls'
2021-08-24 14:44:24 -04:00
Gerda Shank
fe4a67daa4 Use 'select' instead of 'models' for internal args processing and RPC 2021-08-24 13:55:37 -04:00
leahwicz
09ea989d81 Retry GitHub download failures (#3729)
* Retry GitHub download failures

* Refactor and add tests

* Fixed linting and added comment

* Fixing unit test assertRaises

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>

* Fixing casing

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>

* Changing to use partial for function calls

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
2021-08-24 13:35:09 -04:00
leahwicz
7fa14b6948 Fixing changelog (#3776) 2021-08-23 13:19:58 -04:00
Gerda Shank
d4974cd35c [#3210] Make --models and --select synonyms, except for 'ls' 2021-08-23 11:40:59 -04:00
Kyle Wigley
459178811b rm git backports for previous debian release, use git package (#3785) 2021-08-22 21:20:34 -04:00
Snyk bot
b37f6a010e fix: docker/Dockerfile to reduce vulnerabilities (#3771)
The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-DEBIAN10-SQLITE3-537598
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345386
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345386
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345391
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345391
2021-08-19 09:26:53 -04:00
Gerda Shank
e817164d31 Merge pull request #3767 from dbt-labs/3764_analysis_descriptions
[#3764] Fix bug in analysis patch application
2021-08-18 09:05:10 -04:00
Gerda Shank
09ce43edbf [#3764] Fix bug in analysis patch application 2021-08-17 18:07:35 -04:00
sungchun12
2980cd17df Fix/bigquery job label length (#3703)
* add blueprints to resolve issue

* revert to previous version

* intentionally failing test

* add imports

* add validation in existing function

* add passing test for length validation

* add current sanitized label

* remove duplicate var

* Make logging output 2 lines

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* Raise RuntimeException to better handle error

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* update test

* fix flake8 errors

* update changelog

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-17 14:54:31 -04:00
Gerda Shank
8c804de643 Merge pull request #3758 from dbt-labs/3757_pp_version_mismatch
[#3757] Produce better information about partial parsing version mismatches
2021-08-17 13:23:23 -04:00
Gerda Shank
c8241b87e6 [#3757] Produce better information about partial parsing version
mismatches.
2021-08-17 12:48:20 -04:00
Gerda Shank
f204d24ed8 Merge pull request #3616 from dbt-labs/config_in_schema_files
Configs in schema files
2021-08-17 12:18:04 -04:00
Gerda Shank
d5461ccd8b [#2401] Configs in schema files 2021-08-17 11:50:08 -04:00
Gerda Shank
a20d2d93d3 Merge pull request #3750 from dbt-labs/fix_remove_tests
[#3711] Check that test unique_id exists in nodes when removing
2021-08-16 09:06:36 -04:00
Gerda Shank
57e1eec165 [#3711] Check that test unique_id exists in nodes when removing 2021-08-13 17:23:36 -04:00
Nathaniel May
d2dbe6afe4 Merge pull request #3739 from dbt-labs/perf-testing-tweak
Bump minimum performance runs to 20
2021-08-13 14:05:10 -04:00
Gerda Shank
72eb163223 Merge pull request #3733 from dbt-labs/pp_trap_errors
Trap partial parsing errors and switch to full reparse on exceptions
2021-08-13 13:54:40 -04:00
Gerda Shank
af16c74c3a [#3725] Switch to full reparse on partial parsing exceptions. Log and
report exception information.
2021-08-13 13:38:47 -04:00
Kyle Wigley
664f6584b9 add missing versions and format (#3738) 2021-08-13 13:31:38 -04:00
Nathaniel May
76fd3bdf8c minimum performance runs to 20 2021-08-13 13:20:30 -04:00
Jeremy Cohen
b633adb881 Use is_relational check for schema caching (#3716)
* Use is_relational check for schema caching

* Fix flake8

* Update changelog
2021-08-12 18:18:28 -04:00
Jeremy Cohen
b6e534cdd0 Feature: state:modified.macros (#3559)
* First cut at state:modified for macro changes

* First cut at state:modified subselectors

* Update test_graph_selector_methods

* Fix flake8

* Fix mypy

* Update 062_defer_state_test/test_modified_state

* PR feedback. Update changelog
2021-08-12 17:21:48 -04:00
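The `state:modified.macros` sub-selector introduced above can also be used from a yaml selector. A minimal sketch, assuming the method/value split shown here and that a --state artifact path is supplied at runtime; the selector name is a placeholder.

```yaml
selectors:
  - name: changed_macros
    description: "Nodes affected by macro changes, relative to the --state manifest"
    definition:
      method: state
      value: modified.macros
```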
Nathaniel May
1dc4adb86f Merge pull request #3732 from dbt-labs/perf-test-project-swap
Swap dummy perf testing projects for a real one
2021-08-12 10:27:20 -04:00
Nathaniel May
0a4d7c4831 Merge pull request #3731 from dbt-labs/perf-ci-quickfix
remove pr trigger for perf workflow
2021-08-12 10:25:13 -04:00
Nathaniel May
ad67e55d74 swapping dummy perf testing projects for real one 2021-08-11 14:24:26 -04:00
Nathaniel May
2fae64a488 remove pr trigger for perf workflow 2021-08-11 14:14:08 -04:00
Nathaniel May
1a984601ee Merge pull request #3602 from dbt-labs/performance-regression-testing
Add Performance Regression Testing [Rust]
2021-08-11 10:44:51 -04:00
Jeremy Cohen
454168204c Add build RPC method (#3674)
* Add build RPC method

* Add rpc test, some required flags

* Fix flake8

* PR feedback

* Update changelog [skip ci]

* Do not skip CI when rebasing
2021-08-10 10:51:43 -04:00
Drew Banin
43642956a2 Serialize Undefined values to JSON for rpc requests (#3687)
* (#3464) Serialize Undefined values to JSON for rpc requests

* Update changelog, fix typo
2021-08-09 21:26:09 -04:00
leahwicz
e7b8488be8 Remove converter.py since not used anymore (#3699) 2021-08-05 15:27:56 -04:00
Jeremy Cohen
0efaaf7daf Fix typo [skip ci] 2021-08-04 09:50:11 -04:00
Drew Banin
9ae7d68260 Merge pull request #3686 from dbt-labs/fix/cleanup-audit-integration-tests
Fix: Drop audit schema tests in tearDown for test suite
2021-08-03 19:54:36 -04:00
Github Build Bot
45fe76eef4 Merge remote-tracking branch 'origin/releases/0.21.0b1' into develop 2021-08-03 18:09:56 +00:00
Github Build Bot
ea772ae419 Release dbt v0.21.0b1 2021-08-03 17:30:32 +00:00
Drew Banin
c68fca7937 Fix: Drop audit schema tests in tearDown for test suite 2021-08-03 13:24:54 -04:00
Jeremy Cohen
159e79ee6b Update changelog in advance of v0.21.0b1 (#3678)
* Fixup Changelog

* More updates [skip ci]
2021-08-02 20:08:22 -04:00
leahwicz
57783bb5f6 Adding issue templates for different release types (#3644)
Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-02 12:50:49 -04:00
Nathaniel May
d73ee588e5 Merge pull request #3637 from dbt-labs/experimental-parser-fix
make experimental parser respect config merge behavior
2021-08-02 10:03:42 -04:00
Nathaniel May
40089d710b experimental parser respects config merge behavior 2021-08-02 09:38:30 -04:00
Jeremy Cohen
6ec61950eb Handle exception from tracker.flush() (#3661) 2021-08-02 08:25:41 -04:00
Gerda Shank
72c831a80a Merge pull request #3659 from dbt-labs/pp_internal_macro_processing
[#3636] Check for unique_ids when recursively removing macros
2021-07-30 15:34:14 -04:00
Gerda Shank
929931a26a Merge pull request #3654 from dbt-labs/change_config_call_handling
Switch from config_call list to config_call_dict dictionary
2021-07-30 14:08:30 -04:00
Gerda Shank
577e2438c1 [#3636] Check for unique_ids when recursively removing macros 2021-07-30 14:01:40 -04:00
Kyle Wigley
2679792199 Add tracking event for full re-parse reasoning (#3652)
* add tracking event for full reparse reason

* update changelog
2021-07-30 09:39:09 -04:00
Kyle Wigley
2adf982991 update links to dbt repo (#3521) 2021-07-30 08:46:58 -04:00
Gerda Shank
1fb4a7f428 Switch from config_call list to config_call_dict dictionary 2021-07-29 18:46:59 -04:00
Kyle Wigley
30e72bc5e2 Use SchemaParser render context to render test configs (#3646)
* use available context when rendering test configs

* add test

* update changelog
2021-07-29 12:59:48 -04:00
Jeremy Cohen
35645a7233 Include dbt-docs changes for 0.20.1-rc1 (#3643) 2021-07-29 09:56:04 -04:00
Gerda Shank
d583c8d737 Merge pull request #3632 from dbt-labs/pp_delete_schema_macro_patch
[#3627] Improve findability of macro_patches, schedule right macro file for processing
2021-07-28 17:49:27 -04:00
Gerda Shank
a83f00c594 [#3627] Improve findability of macro_patches, schedule right macro file
for processing
2021-07-28 17:27:42 -04:00
Daniele Frigo
c448702c1b Use old_relation for renaming in default materializations (#3547)
* table and view materializations should rename from old_relation to manage changes from view to table and vice versa

* edited changelog

* edited changelog

* Update CHANGELOG.md

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-07-28 06:59:27 -04:00
Niall Woodward
558a6a03ac Fix PR link in changelog (#3639)
Fix a typo introduced in https://github.com/dbt-labs/dbt/pull/3624
2021-07-28 06:51:45 -04:00
Niall Woodward
52ec7907d3 dbt deps prerelease install bugs + add install-prerelease parameter to packages.yml (#3624)
* Fix dbt deps prerelease install bugs

* Add install-prerelease parameter to hub packages in packages.yml
2021-07-27 21:59:46 -04:00
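A minimal packages.yml sketch of the parameter named in the commit above; the package name and version range are placeholders.

```yaml
packages:
  - package: dbt-labs/dbt_utils
    version: [">=0.7.0", "<0.8.0"]
    install-prerelease: true   # assumption: lets dbt deps resolve prerelease versions from the hub
```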
Jeremy Cohen
792f39a888 Snowflake: no transactions, except for DML (#3510)
* Rm Snowflake txnal logic. Explicit for DML

* Be less clever. Update create_or_replace_view()

* Seed DML as well

* Changelog entry

* Fix unit test

* One semicolon can change the world
2021-07-27 18:13:35 -04:00
Gerda Shank
16264f58c1 Merge pull request #3621 from dbt-labs/pp_macro_link_processing_error
[#3584] Partial parsing: handle source tests when changing test macro
2021-07-27 16:59:26 -04:00
Nathaniel May
2317c0c3c8 Merge pull request #3630 from dbt-labs/nate-3568
fix awkward exception being raised by a yml file with all comments
2021-07-27 16:50:56 -04:00
Gerda Shank
3c09ab9736 [#3584] Partial parsing: handle source tests when changing test macro 2021-07-27 16:34:23 -04:00
Gerda Shank
f10dc0e1b3 Merge pull request #3618 from dbt-labs/pp_yaml_version
[#3567] Fix partial parsing error with version key if previous file is empty
2021-07-27 16:30:06 -04:00
leahwicz
634bc41d8a Secret scrubbing for env variables (#3617)
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-07-27 16:06:10 -04:00
Gerda Shank
d7ea3648c6 [#3567] Fix partial parsing error with version key if previous file is
empty
2021-07-27 15:38:52 -04:00
Gerda Shank
e5c8e19ff2 Merge pull request #3619 from dbt-labs/model_config_iterator
[#3573] Put back config iterator for backwards compatibility
2021-07-27 15:34:51 -04:00
Nathaniel May
93cf1f085f handle None return value from yaml loading 2021-07-27 10:59:27 -04:00
Gerda Shank
a84f824a44 [#3573] Put back config iterator for backwards compatibility 2021-07-26 17:56:35 -04:00
Kyle Wigley
9c58f3465b Fix flaky test related to tracking events (#3604)
* skip all tracking event testing

* Turn off tracking in tests that hits model parsing code path
fix other random test that fails because global tracking.current_user exists but is null

* pytest did not respect skip mark

* fix gh actions
2021-07-26 16:55:16 -04:00
Gerda Shank
0e3778132b Merge pull request #3620 from dbt-labs/pp_already_removed_node
If SQL file already scheduled for parsing, don't reprocess
2021-07-26 15:49:10 -04:00
Jeremy Cohen
72722635f2 Fix error handling in dbt build (#3608)
* RunTask -> BuildTask

* Add test, changelog entry
2021-07-25 22:15:13 -04:00
Gerda Shank
a4c7c7fc55 If SQL file already scheduled for parsing, don't reprocess 2021-07-24 15:43:54 -04:00
Nathaniel May
2bad73eead Merge pull request #3610 from dbt-labs/derp-fix
fixing typo in test
2021-07-23 13:14:55 -04:00
Nathaniel May
67c194dcd1 fixing typo in test 2021-07-22 09:53:26 -04:00
matt-winkler
bd7010678a Feature: on_schema_change for incremental models (#3387)
* detect and act on schema changes

* update incremental helpers code

* update changelog

* fix error in diff_columns from testing

* abstract code a bit further

* address matching names vs. data types

* Update CHANGELOG.md

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* updates from Jeremy's feedback

* multi-column add / remove with full_refresh

* simple changes from JC's feedback

* updated for snowflake

* reorganize postgres code

* reorganize approach

* updated full refresh trigger logic

* fixed unintentional wipe behavior

* catch final else condition

* remove WHERE string replace

* touch ups

* port core to snowflake

* added bigquery code

* updated impacted unit tests

* updates from linting tests

* updates from linting again

* snowflake updates from further testing

* fix logging

* clean up incremental logic

* updated for bigquery

* update postgres with new strategy

* update nodeconfig

* starting integration tests

* integration test for ignore case

* add test for append_new_columns

* add integration test for sync

* remove extra tests

* add unique key and snowflake test

* move incremental integration test dir

* update integration tests

* update integration tests

* Suggestions for #3387 (#3558)

* PR feedback: rationalize macros + logging, fix + expand tests

* Rm alter_column_types, always true for sync_all_columns

* update logging and integration test on sync

* update integration tests

* test fix SF integration tests

Co-authored-by: Matt Winkler <matt.winkler@fishtownanalytics.com>

* rename integration test folder

* Update core/dbt/include/global_project/macros/materializations/incremental/incremental.sql

Accept Jeremy's suggested change

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* Update changelog [skip ci]

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-07-21 15:49:19 -04:00
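A minimal dbt_project.yml sketch of the feature above, assuming the config key and values shipped as described (`ignore` as the default, plus `append_new_columns`, `sync_all_columns`, and `fail`); project and folder names are placeholders.

```yaml
models:
  my_project:
    events:
      +materialized: incremental
      +on_schema_change: append_new_columns   # assumption: add new source columns to the target on incremental runs
```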
leahwicz
9f716b31b3 Moving unit tests into separate workflow (#3588)
* Moving unit tests into separate workflow

* Fixing CircleCI error
2021-07-21 12:35:04 -04:00
Kyle Wigley
3dd486d8fa Source freshness task node selection and cli command parity (#3554)
* cli: add selection args for source freshness command

* rename command to `source freshness` and maintain alias to old command

* update and add tests for source freshness command and node selection

* update changelog, add comments

* fix formatting

* update changelog
2021-07-21 10:31:40 -04:00
Jeremy Cohen
33217891ca Refactor relationships test to support where config (#3583)
* Rewrite relationships with CTEs

* Update changelog PR num [skip ci]
2021-07-20 19:28:09 -04:00
dependabot[bot]
1d37c4e555 Update snowflake-connector-python[secure-local-storage] requirement (#3594)
Updates the requirements on [snowflake-connector-python[secure-local-storage]](https://github.com/snowflakedb/snowflake-connector-python) to permit the latest version.
- [Release notes](https://github.com/snowflakedb/snowflake-connector-python/releases)
- [Commits](https://github.com/snowflakedb/snowflake-connector-python/commits)

---
updated-dependencies:
- dependency-name: snowflake-connector-python[secure-local-storage]
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-07-20 14:01:35 -04:00
4335 changed files with 60866 additions and 4743 deletions

View File

@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.21.0a1
current_version = 0.21.0rc1
parse = (?P<major>\d+)
\.(?P<minor>\d+)
\.(?P<patch>\d+)
@@ -47,3 +47,4 @@ first_value = 1
[bumpversion:file:plugins/snowflake/dbt/adapters/snowflake/__version__.py]
[bumpversion:file:plugins/bigquery/dbt/adapters/bigquery/__version__.py]

View File

@@ -1,123 +0,0 @@
version: 2.1
jobs:
unit:
docker: &test_only
- image: fishtownanalytics/test-container:12
environment:
DBT_INVOCATION_ENV: circle
DOCKER_TEST_DATABASE_HOST: "database"
TOX_PARALLEL_NO_SPINNER: 1
steps:
- checkout
- run: tox -p -e py36,py37,py38
lint:
docker: *test_only
steps:
- checkout
- run: tox -e mypy,flake8 -- -v
build-wheels:
docker: *test_only
steps:
- checkout
- run:
name: Build wheels
command: |
python3.8 -m venv "${PYTHON_ENV}"
export PYTHON_BIN="${PYTHON_ENV}/bin/python"
$PYTHON_BIN -m pip install -U pip setuptools
$PYTHON_BIN -m pip install -r requirements.txt
$PYTHON_BIN -m pip install -r dev-requirements.txt
/bin/bash ./scripts/build-wheels.sh
$PYTHON_BIN ./scripts/collect-dbt-contexts.py > ./dist/context_metadata.json
$PYTHON_BIN ./scripts/collect-artifact-schema.py > ./dist/artifact_schemas.json
environment:
PYTHON_ENV: /home/tox/build_venv/
- store_artifacts:
path: ./dist
destination: dist
integration-postgres:
docker:
- image: fishtownanalytics/test-container:12
environment:
DBT_INVOCATION_ENV: circle
DOCKER_TEST_DATABASE_HOST: "database"
TOX_PARALLEL_NO_SPINNER: 1
- image: postgres
name: database
environment:
POSTGRES_USER: "root"
POSTGRES_PASSWORD: "password"
POSTGRES_DB: "dbt"
steps:
- checkout
- run:
name: Setup postgres
command: bash test/setup_db.sh
environment:
PGHOST: database
PGUSER: root
PGPASSWORD: password
PGDATABASE: postgres
- run:
name: Postgres integration tests
command: tox -p -e py36-postgres,py38-postgres -- -v -n4
no_output_timeout: 30m
- store_artifacts:
path: ./logs
integration-snowflake:
docker: *test_only
steps:
- checkout
- run:
name: Snowflake integration tests
command: tox -p -e py36-snowflake,py38-snowflake -- -v -n4
no_output_timeout: 30m
- store_artifacts:
path: ./logs
integration-redshift:
docker: *test_only
steps:
- checkout
- run:
name: Redshift integration tests
command: tox -p -e py36-redshift,py38-redshift -- -v -n4
no_output_timeout: 30m
- store_artifacts:
path: ./logs
integration-bigquery:
docker: *test_only
steps:
- checkout
- run:
name: Bigquery integration test
command: tox -p -e py36-bigquery,py38-bigquery -- -v -n4
no_output_timeout: 30m
- store_artifacts:
path: ./logs
workflows:
version: 2
test-everything:
jobs:
- lint
- unit
- integration-postgres:
requires:
- unit
- integration-redshift:
requires:
- unit
- integration-bigquery:
requires:
- unit
- integration-snowflake:
requires:
- unit
- build-wheels:
requires:
- lint
- unit
- integration-postgres
- integration-redshift
- integration-bigquery
- integration-snowflake

View File

@@ -0,0 +1,27 @@
---
name: Beta minor version release
about: Creates a tracking checklist of items for a Beta minor version release
title: "[Tracking] v#.##.#B# release "
labels: 'release'
assignees: ''
---
### Release Core
- [ ] [Engineering] Follow [dbt-release workflow](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#03ff37da697d4d8ba63d24fae1bfa817)
- [ ] [Engineering] Verify new release branch is created in the repo
- [ ] [Product] Finalize migration guide (next.docs.getdbt.com)
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#f01854e8da3641179fbcbe505bdf515c) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] [Spark](https://github.com/dbt-labs/dbt-spark)
- [ ] [Presto](https://github.com/dbt-labs/dbt-presto)
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Create an epic for the RC release

View File

@@ -0,0 +1,28 @@
---
name: Final minor version release
about: Creates a tracking checklist of items for a final minor version release
title: "[Tracking] v#.##.# final release "
labels: 'release'
assignees: ''
---
### Release Core
- [ ] [Engineering] Verify all necessary changes exist on the release branch
- [ ] [Engineering] Follow [dbt-release workflow](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#03ff37da697d4d8ba63d24fae1bfa817)
- [ ] [Product] Merge `next` into `current` for docs.getdbt.com
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Update discourse
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#f01854e8da3641179fbcbe505bdf515c) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] [Spark](https://github.com/dbt-labs/dbt-spark)
- [ ] [Presto](https://github.com/dbt-labs/dbt-presto)
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Product] Release new version of dbt-utils with new dbt version compatibility. If there are breaking changes requiring a minor version, plan upgrades of other packages that depend on dbt-utils.

View File

@@ -1,29 +0,0 @@
---
name: Minor version release
about: Creates a tracking checklist of items for a minor version release
title: "[Tracking] v#.##.# release "
labels: ''
assignees: ''
---
### Release Core
- [ ] [Engineering] dbt-release workflow
- [ ] [Engineering] Create new protected `x.latest` branch
- [ ] [Product] Finalize migration guide (next.docs.getdbt.com)
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Publish discourse
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/fishtownanalytics/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#59571f5bc1a040d9a8fd096e23d2c7db) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] Spark
- [ ] Presto
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud
- [ ] [Product] Release new version of dbt-utils with new dbt version compatibility. If there are breaking changes requiring a minor version, plan upgrades of other packages that depend on dbt-utils.
- [ ] [Engineering] If this isn't a final release, create an epic for the next release

View File

@@ -0,0 +1,29 @@
---
name: RC minor version release
about: Creates a tracking checklist of items for a RC minor version release
title: "[Tracking] v#.##.#RC# release "
labels: 'release'
assignees: ''
---
### Release Core
- [ ] [Engineering] Verify all necessary changes exist on the release branch
- [ ] [Engineering] Follow [dbt-release workflow](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#03ff37da697d4d8ba63d24fae1bfa817)
- [ ] [Product] Update migration guide (next.docs.getdbt.com)
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Publish discourse
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#f01854e8da3641179fbcbe505bdf515c) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] [Spark](https://github.com/dbt-labs/dbt-spark)
- [ ] [Presto](https://github.com/dbt-labs/dbt-presto)
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Product] Release new version of dbt-utils with new dbt version compatibility. If there are breaking changes requiring a minor version, plan upgrades of other packages that depend on dbt-utils.
- [ ] [Engineering] Create an epic for the final release

View File

@@ -0,0 +1,10 @@
name: "Set up postgres (linux)"
description: "Set up postgres service on linux vm for dbt integration tests"
runs:
using: "composite"
steps:
- shell: bash
run: |
sudo systemctl start postgresql.service
pg_isready
sudo -u postgres bash ${{ github.action_path }}/setup_db.sh

View File

@@ -0,0 +1 @@
../../../test/setup_db.sh

View File

@@ -0,0 +1,24 @@
name: "Set up postgres (macos)"
description: "Set up postgres service on macos vm for dbt integration tests"
runs:
using: "composite"
steps:
- shell: bash
run: |
brew services start postgresql
echo "Check PostgreSQL service is running"
i=10
COMMAND='pg_isready'
while [ $i -gt -1 ]; do
if [ $i == 0 ]; then
echo "PostgreSQL service not ready, all attempts exhausted"
exit 1
fi
echo "Check PostgreSQL service status"
eval $COMMAND && break
echo "PostgreSQL service not ready, wait 10 more sec, attempts left: $i"
sleep 10
((i--))
done
createuser -s postgres
bash ${{ github.action_path }}/setup_db.sh

View File

@@ -0,0 +1 @@
../../../test/setup_db.sh

View File

@@ -0,0 +1,12 @@
name: "Set up postgres (windows)"
description: "Set up postgres service on windows vm for dbt integration tests"
runs:
using: "composite"
steps:
- shell: pwsh
run: |
$pgService = Get-Service -Name postgresql*
Set-Service -InputObject $pgService -Status running -StartupType automatic
Start-Process -FilePath "$env:PGBIN\pg_isready" -Wait -PassThru
$env:Path += ";$env:PGBIN"
bash ${{ github.action_path }}/setup_db.sh

View File

@@ -0,0 +1 @@
../../../test/setup_db.sh

View File

@@ -9,14 +9,13 @@ resolves #
resolves #1234
-->
### Description
<!--- Describe the Pull Request here -->
### Checklist
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have updated the `CHANGELOG.md` and added information about my change to the "dbt next" section.
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have updated the `CHANGELOG.md` and added information about my change to the "dbt next" section.

View File

@@ -0,0 +1,95 @@
module.exports = ({ context }) => {
const defaultPythonVersion = "3.8";
const supportedPythonVersions = ["3.6", "3.7", "3.8", "3.9"];
const supportedAdapters = ["snowflake", "postgres", "bigquery", "redshift"];
// if PR, generate matrix based on files changed and PR labels
if (context.eventName.includes("pull_request")) {
// `changes` is a list of adapter names that have related
// file changes in the PR
// ex: ['postgres', 'snowflake']
const changes = JSON.parse(process.env.CHANGES);
const labels = context.payload.pull_request.labels.map(({ name }) => name);
console.log("labels", labels);
console.log("changes", changes);
const testAllLabel = labels.includes("test all");
const include = [];
for (const adapter of supportedAdapters) {
if (
changes.includes(adapter) ||
testAllLabel ||
labels.includes(`test ${adapter}`)
) {
for (const pythonVersion of supportedPythonVersions) {
if (
pythonVersion === defaultPythonVersion ||
labels.includes(`test python${pythonVersion}`) ||
testAllLabel
) {
// always run tests on ubuntu by default
include.push({
os: "ubuntu-latest",
adapter,
"python-version": pythonVersion,
});
if (labels.includes("test windows") || testAllLabel) {
include.push({
os: "windows-latest",
adapter,
"python-version": pythonVersion,
});
}
if (labels.includes("test macos") || testAllLabel) {
include.push({
os: "macos-latest",
adapter,
"python-version": pythonVersion,
});
}
}
}
}
}
console.log("matrix", { include });
return {
include,
};
}
// if not PR, generate matrix of python version, adapter, and operating
// system to run integration tests on
const include = [];
// run for all adapters and python versions on ubuntu
for (const adapter of supportedAdapters) {
for (const pythonVersion of supportedPythonVersions) {
include.push({
os: 'ubuntu-latest',
adapter: adapter,
"python-version": pythonVersion,
});
}
}
// additionally include runs for all adapters, on macos and windows,
// but only for the default python version
for (const adapter of supportedAdapters) {
for (const operatingSystem of ["windows-latest", "macos-latest"]) {
include.push({
os: operatingSystem,
adapter: adapter,
"python-version": defaultPythonVersion,
});
}
}
console.log("matrix", { include });
return {
include,
};
};

.github/workflows/integration.yml (vendored new file, 266 lines)

View File

@@ -0,0 +1,266 @@
# **what?**
# This workflow runs all integration tests for supported OS
# and python versions and core adapters. If triggered by PR,
# the workflow will only run tests for adapters related
# to code changes. Use the `test all` and `test ${adapter}`
# label to run all or additional tests. Use `ok to test`
# label to mark PRs from forked repositories that are safe
# to run integration tests for. Requires secrets to run
# against different warehouses.
# **why?**
# This checks the functionality of dbt from a user's perspective
# and attempts to catch functional regressions.
# **when?**
# This workflow will run on every push to a protected branch
# and when manually triggered. It will also run for all PRs, including
# PRs from forks. The workflow will be skipped until there is a label
# to mark the PR as safe to run.
name: Adapter Integration Tests
on:
# pushes to release branches
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
# all PRs, important to note that `pull_request_target` workflows
# will run in the context of the target branch of a PR
pull_request_target:
# manual trigger
workflow_dispatch:
# explicitly turn off permissions for `GITHUB_TOKEN`
permissions: read-all
# will cancel previous workflows triggered by the same event and for the same ref for PRs or same SHA otherwise
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ contains(github.event_name, 'pull_request') && github.event.pull_request.head.ref || github.sha }}
cancel-in-progress: true
# sets default shell to bash, for all operating systems
defaults:
run:
shell: bash
jobs:
# generate test metadata about what files changed and the testing matrix to use
test-metadata:
# run if not a PR from a forked repository or has a label to mark as safe to test
if: >-
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.generate-matrix.outputs.result }}
steps:
- name: Check out the repository (non-PR)
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Check if relevant files changed
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: get-changes
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
postgres:
- 'core/**'
- 'plugins/postgres/**'
- 'dev-requirements.txt'
snowflake:
- 'core/**'
- 'plugins/snowflake/**'
bigquery:
- 'core/**'
- 'plugins/bigquery/**'
redshift:
- 'core/**'
- 'plugins/redshift/**'
- 'plugins/postgres/**'
- name: Generate integration test matrix
id: generate-matrix
uses: actions/github-script@v4
env:
CHANGES: ${{ steps.get-changes.outputs.changes }}
with:
script: |
const script = require('./.github/scripts/integration-test-matrix.js')
const matrix = script({ context })
console.log(matrix)
return matrix
test:
name: ${{ matrix.adapter }} / python ${{ matrix.python-version }} / ${{ matrix.os }}
# run if not a PR from a forked repository or has a label to mark as safe to test
# also checks that the matrix generated is not empty
if: >-
needs.test-metadata.outputs.matrix &&
fromJSON( needs.test-metadata.outputs.matrix ).include[0] &&
(
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
)
runs-on: ${{ matrix.os }}
needs: test-metadata
strategy:
fail-fast: false
matrix: ${{ fromJSON(needs.test-metadata.outputs.matrix) }}
env:
TOXENV: integration-${{ matrix.adapter }}
PYTEST_ADDOPTS: "-v --color=yes -n4 --csv integration_results.csv"
DBT_INVOCATION_ENV: github-actions
steps:
- name: Check out the repository
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
# explicitly check out the branch for the PR,
# this is necessary for the `pull_request_target` event
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Set up postgres (linux)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Linux'
uses: ./.github/actions/setup-postgres-linux
- name: Set up postgres (macos)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'macOS'
uses: ./.github/actions/setup-postgres-macos
- name: Set up postgres (windows)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Windows'
uses: ./.github/actions/setup-postgres-windows
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox (postgres)
if: matrix.adapter == 'postgres'
run: tox
- name: Run tox (redshift)
if: matrix.adapter == 'redshift'
env:
REDSHIFT_TEST_DBNAME: ${{ secrets.REDSHIFT_TEST_DBNAME }}
REDSHIFT_TEST_PASS: ${{ secrets.REDSHIFT_TEST_PASS }}
REDSHIFT_TEST_USER: ${{ secrets.REDSHIFT_TEST_USER }}
REDSHIFT_TEST_PORT: ${{ secrets.REDSHIFT_TEST_PORT }}
REDSHIFT_TEST_HOST: ${{ secrets.REDSHIFT_TEST_HOST }}
run: tox
- name: Run tox (snowflake)
if: matrix.adapter == 'snowflake'
env:
SNOWFLAKE_TEST_ACCOUNT: ${{ secrets.SNOWFLAKE_TEST_ACCOUNT }}
SNOWFLAKE_TEST_PASSWORD: ${{ secrets.SNOWFLAKE_TEST_PASSWORD }}
SNOWFLAKE_TEST_USER: ${{ secrets.SNOWFLAKE_TEST_USER }}
SNOWFLAKE_TEST_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_WAREHOUSE }}
SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN: ${{ secrets.SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN }}
SNOWFLAKE_TEST_OAUTH_CLIENT_ID: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_ID }}
SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET }}
SNOWFLAKE_TEST_ALT_DATABASE: ${{ secrets.SNOWFLAKE_TEST_ALT_DATABASE }}
SNOWFLAKE_TEST_ALT_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_ALT_WAREHOUSE }}
SNOWFLAKE_TEST_DATABASE: ${{ secrets.SNOWFLAKE_TEST_DATABASE }}
SNOWFLAKE_TEST_QUOTED_DATABASE: ${{ secrets.SNOWFLAKE_TEST_QUOTED_DATABASE }}
SNOWFLAKE_TEST_ROLE: ${{ secrets.SNOWFLAKE_TEST_ROLE }}
run: tox
- name: Run tox (bigquery)
if: matrix.adapter == 'bigquery'
env:
BIGQUERY_TEST_SERVICE_ACCOUNT_JSON: ${{ secrets.BIGQUERY_TEST_SERVICE_ACCOUNT_JSON }}
BIGQUERY_TEST_ALT_DATABASE: ${{ secrets.BIGQUERY_TEST_ALT_DATABASE }}
run: tox
- uses: actions/upload-artifact@v2
if: always()
with:
name: logs
path: ./logs
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y-%m-%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: integration_results_${{ matrix.python-version }}_${{ matrix.os }}_${{ matrix.adapter }}-${{ steps.date.outputs.date }}.csv
path: integration_results.csv
require-label-comment:
runs-on: ubuntu-latest
needs: test
permissions:
pull-requests: write
steps:
- name: Needs permission PR comment
if: >-
needs.test.result == 'skipped' &&
github.event_name == 'pull_request_target' &&
github.event.pull_request.head.repo.full_name != github.repository
uses: unsplash/comment-on-pr@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
msg: |
"You do not have permissions to run integration tests, @dbt-labs/core "\
"needs to label this PR with `ok to test` in order to run integration tests!"
check_for_duplicate_msg: true

.github/workflows/main.yml (vendored new file, 206 lines)

View File

@@ -0,0 +1,206 @@
# **what?**
# Runs code quality checks, unit tests, and verifies python build on
# all code committed to the repository. This workflow should not
# require any secrets since it runs for PRs from forked repos.
# By default, secrets are not passed to workflows running from
# a forked repo.
# **why?**
# Ensure code for dbt meets a certain quality standard.
# **when?**
# This will run for all PRs, when code is pushed to a release
# branch, and when manually triggered.
name: Tests and Code Checks
on:
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
pull_request:
workflow_dispatch:
permissions: read-all
# will cancel previous workflows triggered by the same event and for the same ref for PRs or same SHA otherwise
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ contains(github.event_name, 'pull_request') && github.event.pull_request.head.ref || github.sha }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
code-quality:
name: ${{ matrix.toxenv }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
toxenv: [flake8, mypy]
env:
TOXENV: ${{ matrix.toxenv }}
PYTEST_ADDOPTS: "-v --color=yes"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox
run: tox
unit:
name: unit test / python ${{ matrix.python-version }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: [3.6, 3.7, 3.8] # TODO: support unit testing for python 3.9 (https://github.com/dbt-labs/dbt/issues/3689)
env:
TOXENV: "unit"
PYTEST_ADDOPTS: "-v --color=yes --csv unit_results.csv"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox
run: tox
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y-%m-%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: unit_results_${{ matrix.python-version }}-${{ steps.date.outputs.date }}.csv
path: unit_results.csv
build:
name: build packages
runs-on: ubuntu-latest
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install --upgrade setuptools wheel twine check-wheel-contents
pip --version
- name: Build distributions
run: ./scripts/build-dist.sh
- name: Show distributions
run: ls -lh dist/
- name: Check distribution descriptions
run: |
twine check dist/*
- name: Check wheel contents
run: |
check-wheel-contents dist/*.whl --ignore W007,W008
- uses: actions/upload-artifact@v2
with:
name: dist
path: dist/
test-build:
name: verify packages / python ${{ matrix.python-version }} / ${{ matrix.os }}
needs: build
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.6, 3.7, 3.8, 3.9]
steps:
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install --upgrade wheel
pip --version
- uses: actions/download-artifact@v2
with:
name: dist
path: dist/
- name: Show distributions
run: ls -lh dist/
- name: Install wheel distributions
run: |
find ./dist/*.whl -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check wheel distributions
run: |
dbt --version
- name: Install source distributions
run: |
find ./dist/*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check source distributions
run: |
dbt --version


@@ -1,20 +1,16 @@
name: Performance Regression Testing
name: Performance Regression Tests
# Schedule triggers
on:
# TODO this is just while developing
pull_request:
branches:
- 'develop'
- 'performance-regression-testing'
# runs twice a day at 10:05am and 10:05pm
schedule:
# runs twice a day at 10:05am and 10:05pm
- cron: '5 10,22 * * *'
- cron: "5 10,22 * * *"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
push:
paths:
- performance/**
jobs:
# checks fmt of runner code
# purposefully not a dependency of any other job
# will block merging, but not prevent developing
@@ -88,7 +84,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.8'
python-version: "3.8"
- name: install dbt
run: pip install -r dev-requirements.txt -r editable-requirements.txt
- name: install hyperfine
@@ -121,11 +117,11 @@ jobs:
- name: checkout latest
uses: actions/checkout@v2
with:
ref: '0.20.latest'
ref: "0.20.latest"
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.8'
python-version: "3.8"
- name: move repo up a level
run: mkdir ${{ github.workspace }}/../baseline/ && cp -r ${{ github.workspace }} ${{ github.workspace }}/../baseline
- name: "[debug] ls new dbt location"
@@ -171,11 +167,13 @@ jobs:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: make results directory
run: mkdir ./final-output/
- name: run calculation
run: ./runner calculate -r ./
run: ./runner calculate -r ./ -o ./final-output/
# always attempt to upload the results even if there were regressions found
- uses: actions/upload-artifact@v2
if: ${{ always() }}
with:
name: final-calculations
path: ./final_calculations.json
path: ./final-output/*


@@ -1,178 +0,0 @@
# This is a workflow to run our unit and integration tests for windows and mac
name: dbt Tests
# Triggers
on:
# Triggers the workflow on push or pull request events and also adds a manual trigger
push:
branches:
- 'develop'
- '*.latest'
- 'releases/*'
pull_request_target:
branches:
- 'develop'
- '*.latest'
- 'pr/*'
- 'releases/*'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
Linting:
runs-on: ubuntu-latest #no need to run on every OS
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.8'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Linting'
run: tox -e mypy,flake8 -- -v
UnitTest:
strategy:
matrix:
os: [windows-latest, ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.8'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run unit tests'
run: python -m tox -e py -- -v
PostgresIntegrationTest:
runs-on: 'windows-latest' #TODO: Add Mac support
environment: 'Postgres'
needs: UnitTest
steps:
- uses: actions/checkout@v2
- name: 'Install postgresql and set up database'
shell: pwsh
run: |
$serviceName = Get-Service -Name postgresql*
Set-Service -InputObject $serviceName -StartupType Automatic
Start-Service -InputObject $serviceName
& $env:PGBIN\createdb.exe -U postgres dbt
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE root WITH PASSWORD '$env:ROOT_PASSWORD';"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE root WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CREATE, CONNECT ON DATABASE dbt TO root WITH GRANT OPTION;"
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE noaccess WITH PASSWORD '$env:NOACCESS_PASSWORD' NOSUPERUSER;"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE noaccess WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CONNECT ON DATABASE dbt TO noaccess;"
env:
ROOT_PASSWORD: ${{ secrets.ROOT_PASSWORD }}
NOACCESS_PASSWORD: ${{ secrets.NOACCESS_PASSWORD }}
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.7'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run integration tests'
run: python -m tox -e py-postgres -- -v -n4
# These three are all similar except secure environment variables, which MUST be passed along to their tasks,
# but there's probably a better way to do this!
SnowflakeIntegrationTest:
strategy:
matrix:
os: [windows-latest, macos-latest]
runs-on: ${{ matrix.os }}
environment: 'Snowflake'
needs: UnitTest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.7'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run integration tests'
run: python -m tox -e py-snowflake -- -v -n4
env:
SNOWFLAKE_TEST_ACCOUNT: ${{ secrets.SNOWFLAKE_TEST_ACCOUNT }}
SNOWFLAKE_TEST_PASSWORD: ${{ secrets.SNOWFLAKE_TEST_PASSWORD }}
SNOWFLAKE_TEST_USER: ${{ secrets.SNOWFLAKE_TEST_USER }}
SNOWFLAKE_TEST_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_WAREHOUSE }}
SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN: ${{ secrets.SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN }}
SNOWFLAKE_TEST_OAUTH_CLIENT_ID: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_ID }}
SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET }}
SNOWFLAKE_TEST_ALT_DATABASE: ${{ secrets.SNOWFLAKE_TEST_ALT_DATABASE }}
SNOWFLAKE_TEST_ALT_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_ALT_WAREHOUSE }}
SNOWFLAKE_TEST_DATABASE: ${{ secrets.SNOWFLAKE_TEST_DATABASE }}
SNOWFLAKE_TEST_QUOTED_DATABASE: ${{ secrets.SNOWFLAKE_TEST_QUOTED_DATABASE }}
SNOWFLAKE_TEST_ROLE: ${{ secrets.SNOWFLAKE_TEST_ROLE }}
BigQueryIntegrationTest:
strategy:
matrix:
os: [windows-latest, macos-latest]
runs-on: ${{ matrix.os }}
environment: 'Bigquery'
needs: UnitTest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.7'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run integration tests'
run: python -m tox -e py-bigquery -- -v -n4
env:
BIGQUERY_SERVICE_ACCOUNT_JSON: ${{ secrets.BIGQUERY_SERVICE_ACCOUNT_JSON }}
BIGQUERY_TEST_ALT_DATABASE: ${{ secrets.BIGQUERY_TEST_ALT_DATABASE }}
RedshiftIntegrationTest:
strategy:
matrix:
os: [windows-latest, macos-latest]
runs-on: ${{ matrix.os }}
environment: 'Redshift'
needs: UnitTest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.7'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run integration tests'
run: python -m tox -e py-redshift -- -v -n4
env:
REDSHIFT_TEST_DBNAME: ${{ secrets.REDSHIFT_TEST_DBNAME }}
REDSHIFT_TEST_PASS: ${{ secrets.REDSHIFT_TEST_PASS }}
REDSHIFT_TEST_USER: ${{ secrets.REDSHIFT_TEST_USER }}
REDSHIFT_TEST_PORT: ${{ secrets.REDSHIFT_TEST_PORT }}
REDSHIFT_TEST_HOST: ${{ secrets.REDSHIFT_TEST_HOST }}


@@ -26,7 +26,7 @@ This is the docs website code. It comes from the dbt-docs repository, and is gen
## Adapters
dbt uses an adapter-plugin pattern to extend support to different databases, warehouses, query engines, etc. The four core adapters that are in the main repository, contained within the [`plugins`](plugins) subdirectory, are: Postgres, Redshift, Snowflake, and BigQuery. Other warehouses use adapter plugins defined in separate repositories (e.g. [dbt-spark](https://github.com/fishtown-analytics/dbt-spark), [dbt-presto](https://github.com/fishtown-analytics/dbt-presto)).
dbt uses an adapter-plugin pattern to extend support to different databases, warehouses, query engines, etc. The four core adapters that are in the main repository, contained within the [`plugins`](plugins) subdirectory, are: Postgres, Redshift, Snowflake, and BigQuery. Other warehouses use adapter plugins defined in separate repositories (e.g. [dbt-spark](https://github.com/dbt-labs/dbt-spark), [dbt-presto](https://github.com/dbt-labs/dbt-presto)).
Each adapter is a mix of python, Jinja2, and SQL. The adapter code also makes heavy use of Jinja2 to wrap modular chunks of SQL functionality, define default implementations, and allow plugins to override it.
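To make the plugin pattern concrete, here is a rough, hypothetical sketch of how an adapter plugin wires itself into the core classes that appear later in this diff (`SQLAdapter`/`SQLConnectionManager`); the warehouse name and method bodies are invented for illustration:

```python
# Hypothetical adapter plugin sketch; module paths follow the usual
# dbt.adapters layout, but "mywarehouse" and all bodies are invented.
from dbt.adapters.sql import SQLAdapter, SQLConnectionManager


class MyWarehouseConnectionManager(SQLConnectionManager):
    TYPE = 'mywarehouse'
    # open(), cancel(), get_response(), exception_handler(), etc.
    # would be implemented here against the warehouse's DB-API driver.


class MyWarehouseAdapter(SQLAdapter):
    ConnectionManager = MyWarehouseConnectionManager

    @classmethod
    def date_function(cls) -> str:
        # Warehouse-specific SQL expression for "now".
        return 'current_timestamp()'
```

The SQL side of a plugin is supplied as Jinja macros: `default__`-prefixed implementations in core that an adapter can override with its own `<adapter>__` versions.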

File diff suppressed because it is too large.


@@ -24,7 +24,7 @@ Please note that all contributors to `dbt` must sign the [Contributor License Ag
### Defining the problem
If you have an idea for a new feature or if you've discovered a bug in `dbt`, the first step is to open an issue. Please check the list of [open issues](https://github.com/fishtown-analytics/dbt/issues) before creating a new one. If you find a relevant issue, please add a comment to the open issue instead of creating a new one. There are hundreds of open issues in this repository and it can be hard to know where to look for a relevant open issue. **The `dbt` maintainers are always happy to point contributors in the right direction**, so please err on the side of documenting your idea in a new issue if you are unsure where a problem statement belongs.
If you have an idea for a new feature or if you've discovered a bug in `dbt`, the first step is to open an issue. Please check the list of [open issues](https://github.com/dbt-labs/dbt/issues) before creating a new one. If you find a relevant issue, please add a comment to the open issue instead of creating a new one. There are hundreds of open issues in this repository and it can be hard to know where to look for a relevant open issue. **The `dbt` maintainers are always happy to point contributors in the right direction**, so please err on the side of documenting your idea in a new issue if you are unsure where a problem statement belongs.
> **Note:** All community-contributed Pull Requests _must_ be associated with an open issue. If you submit a Pull Request that does not pertain to an open issue, you will be asked to create an issue describing the problem before the Pull Request can be reviewed.
@@ -36,7 +36,7 @@ After you open an issue, a `dbt` maintainer will follow up by commenting on your
If an issue is appropriately well scoped and describes a beneficial change to the `dbt` codebase, then anyone may submit a Pull Request to implement the functionality described in the issue. See the sections below on how to do this.
The `dbt` maintainers will add a `good first issue` label if an issue is suitable for a first-time contributor. This label often means that the required code change is small, limited to one database adapter, or a net-new addition that does not impact existing functionality. You can see the list of currently open issues on the [Contribute](https://github.com/fishtown-analytics/dbt/contribute) page.
The `dbt` maintainers will add a `good first issue` label if an issue is suitable for a first-time contributor. This label often means that the required code change is small, limited to one database adapter, or a net-new addition that does not impact existing functionality. You can see the list of currently open issues on the [Contribute](https://github.com/dbt-labs/dbt/contribute) page.
Here's a good workflow:
- Comment on the open issue, expressing your interest in contributing the required code change
@@ -52,15 +52,15 @@ The `dbt` maintainers use labels to categorize open issues. Some labels indicate
| tag | description |
| --- | ----------- |
| [triage](https://github.com/fishtown-analytics/dbt/labels/triage) | This is a new issue which has not yet been reviewed by a `dbt` maintainer. This label is removed when a maintainer reviews and responds to the issue. |
| [bug](https://github.com/fishtown-analytics/dbt/labels/bug) | This issue represents a defect or regression in `dbt` |
| [enhancement](https://github.com/fishtown-analytics/dbt/labels/enhancement) | This issue represents net-new functionality in `dbt` |
| [good first issue](https://github.com/fishtown-analytics/dbt/labels/good%20first%20issue) | This issue does not require deep knowledge of the `dbt` codebase to implement. This issue is appropriate for a first-time contributor. |
| [help wanted](https://github.com/fishtown-analytics/`dbt`/labels/help%20wanted) / [discussion](https://github.com/fishtown-analytics/dbt/labels/discussion) | Conversation around this issue is ongoing, and there isn't yet a clear path forward. Input from community members is most welcome. |
| [duplicate](https://github.com/fishtown-analytics/dbt/issues/duplicate) | This issue is functionally identical to another open issue. The `dbt` maintainers will close this issue and encourage community members to focus conversation on the other one. |
| [snoozed](https://github.com/fishtown-analytics/dbt/labels/snoozed) | This issue describes a good idea, but one which will probably not be addressed in a six-month time horizon. The `dbt` maintainers will revisit these issues periodically and re-prioritize them accordingly. |
| [stale](https://github.com/fishtown-analytics/dbt/labels/stale) | This is an old issue which has not recently been updated. Stale issues will periodically be closed by `dbt` maintainers, but they can be re-opened if the discussion is restarted. |
| [wontfix](https://github.com/fishtown-analytics/dbt/labels/wontfix) | This issue does not require a code change in the `dbt` repository, or the maintainers are unwilling/unable to merge a Pull Request which implements the behavior described in the issue. |
| [triage](https://github.com/dbt-labs/dbt/labels/triage) | This is a new issue which has not yet been reviewed by a `dbt` maintainer. This label is removed when a maintainer reviews and responds to the issue. |
| [bug](https://github.com/dbt-labs/dbt/labels/bug) | This issue represents a defect or regression in `dbt` |
| [enhancement](https://github.com/dbt-labs/dbt/labels/enhancement) | This issue represents net-new functionality in `dbt` |
| [good first issue](https://github.com/dbt-labs/dbt/labels/good%20first%20issue) | This issue does not require deep knowledge of the `dbt` codebase to implement. This issue is appropriate for a first-time contributor. |
| [help wanted](https://github.com/dbt-labs/dbt/labels/help%20wanted) / [discussion](https://github.com/dbt-labs/dbt/labels/discussion) | Conversation around this issue is ongoing, and there isn't yet a clear path forward. Input from community members is most welcome. |
| [duplicate](https://github.com/dbt-labs/dbt/issues/duplicate) | This issue is functionally identical to another open issue. The `dbt` maintainers will close this issue and encourage community members to focus conversation on the other one. |
| [snoozed](https://github.com/dbt-labs/dbt/labels/snoozed) | This issue describes a good idea, but one which will probably not be addressed in a six-month time horizon. The `dbt` maintainers will revisit these issues periodically and re-prioritize them accordingly. |
| [stale](https://github.com/dbt-labs/dbt/labels/stale) | This is an old issue which has not recently been updated. Stale issues will periodically be closed by `dbt` maintainers, but they can be re-opened if the discussion is restarted. |
| [wontfix](https://github.com/dbt-labs/dbt/labels/wontfix) | This issue does not require a code change in the `dbt` repository, or the maintainers are unwilling/unable to merge a Pull Request which implements the behavior described in the issue. |
#### Branching Strategy
@@ -68,7 +68,7 @@ The `dbt` maintainers use labels to categorize open issues. Some labels indicate
- **Trunks** are where active development of the next release takes place. There is one trunk, named `develop` at the time of writing, and it is the default branch of the repository.
- **Release Branches** track a specific, not yet complete release of `dbt`. Each minor version release has a corresponding release branch. For example, the `0.11.x` series of releases has a branch called `0.11.latest`. This allows us to release new patch versions under `0.11` without necessarily needing to pull them into the latest version of `dbt`.
- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk brnach or a specific release branch.
- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk branch or a specific release branch.
## Getting the code
@@ -78,17 +78,17 @@ You will need `git` in order to download and modify the `dbt` source code. On ma
### External contributors
If you are not a member of the `fishtown-analytics` GitHub organization, you can contribute to `dbt` by forking the `dbt` repository. For a detailed overview on forking, check out the [GitHub docs on forking](https://help.github.com/en/articles/fork-a-repo). In short, you will need to:
If you are not a member of the `dbt-labs` GitHub organization, you can contribute to `dbt` by forking the `dbt` repository. For a detailed overview on forking, check out the [GitHub docs on forking](https://help.github.com/en/articles/fork-a-repo). In short, you will need to:
1. fork the `dbt` repository
2. clone your fork locally
3. check out a new branch for your proposed changes
4. push changes to your fork
5. open a pull request against `fishtown-analytics/dbt` from your forked repository
5. open a pull request against `dbt-labs/dbt` from your forked repository
### Core contributors
If you are a member of the `fishtown-analytics` GitHub organization, you will have push access to the `dbt` repo. Rather than forking `dbt` to make your changes, just clone the repository, check out a new branch, and push directly to that branch.
If you are a member of the `dbt-labs` GitHub organization, you will have push access to the `dbt` repo. Rather than forking `dbt` to make your changes, just clone the repository, check out a new branch, and push directly to that branch.
## Setting up an environment
@@ -135,7 +135,7 @@ brew install postgresql
### Installation
First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Next, install `dbt` (and its dependencies) with:
First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Also ensure you have the latest version of pip installed with `pip install --upgrade pip`. Next, install `dbt` (and its dependencies) with:
```sh
make dev
@@ -155,7 +155,7 @@ Configure your [profile](https://docs.getdbt.com/docs/configure-your-profile) as
Getting the `dbt` integration tests set up in your local environment will be very helpful as you start to make changes to your local version of `dbt`. The section that follows outlines some helpful tips for setting up the test environment.
Since `dbt` works with a number of different databases, you will need to supply credentials for one or more of these databases in your test environment. Most organizations don't have access to each of a BigQuery, Redshift, Snowflake, and Postgres database, so it's likely that you will be unable to run every integration test locally. Fortunately, Fishtown Analytics provides a CI environment with access to sandboxed Redshift, Snowflake, BigQuery, and Postgres databases. See the section on [_Submitting a Pull Request_](#submitting-a-pull-request) below for more information on this CI setup.
Since `dbt` works with a number of different databases, you will need to supply credentials for one or more of these databases in your test environment. Most organizations don't have access to each of a BigQuery, Redshift, Snowflake, and Postgres database, so it's likely that you will be unable to run every integration test locally. Fortunately, dbt Labs provides a CI environment with access to sandboxed Redshift, Snowflake, BigQuery, and Postgres databases. See the section on [_Submitting a Pull Request_](#submitting-a-pull-request) below for more information on this CI setup.
### Initial setup
@@ -170,6 +170,8 @@ docker-compose up -d database
PGHOST=localhost PGUSER=root PGPASSWORD=password PGDATABASE=postgres bash test/setup_db.sh
```
Note that you may need to run the previous command twice as it does not currently wait for the database to be running before attempting to run commands against it. This will be fixed with [#3876](https://github.com/dbt-labs/dbt/issues/3876).
`dbt` uses test credentials specified in a `test.env` file in the root of the repository for non-Postgres databases. This `test.env` file is git-ignored, but please be _extra_ careful to never check in credentials or other sensitive information when developing against `dbt`. To create your `test.env` file, copy the provided sample file, then supply your relevant credentials. This step is only required to use non-Postgres databases.
```
@@ -224,7 +226,7 @@ python -m pytest test/unit/test_graph.py::GraphTest::test__dependency_list
> is a list of useful command-line options for `pytest` to use while developing.
## Submitting a Pull Request
Fishtown Analytics provides a sandboxed Redshift, Snowflake, and BigQuery database for use in a CI environment. When pull requests are submitted to the `fishtown-analytics/dbt` repo, GitHub will trigger automated tests in CircleCI and Azure Pipelines.
dbt Labs provides a sandboxed Redshift, Snowflake, and BigQuery database for use in a CI environment. When pull requests are submitted to the `dbt-labs/dbt` repo, GitHub will trigger automated tests in CircleCI and Azure Pipelines.
A `dbt` maintainer will review your PR. They may suggest code revision for style or clarity, or request that you add unit or integration test(s). These are good things! We believe that, with a little bit of help, anyone can contribute high-quality code.


@@ -1,4 +1,4 @@
FROM ubuntu:18.04
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND noninteractive


@@ -2,20 +2,17 @@
<img src="https://raw.githubusercontent.com/dbt-labs/dbt/ec7dee39f793aa4f7dd3dae37282cc87664813e4/etc/dbt-logo-full.svg" alt="dbt logo" width="500"/>
</p>
<p align="center">
<a href="https://github.com/dbt-labs/dbt/actions/workflows/tests.yml?query=branch%3Adevelop">
<img src="https://github.com/dbt-labs/dbt/actions/workflows/tests.yml/badge.svg" alt="GitHub Actions"/>
<a href="https://github.com/dbt-labs/dbt/actions/workflows/main.yml">
<img src="https://github.com/dbt-labs/dbt/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
</a>
<a href="https://circleci.com/gh/dbt-labs/dbt/tree/develop">
<img src="https://circleci.com/gh/dbt-labs/dbt/tree/develop.svg?style=svg" alt="CircleCI" />
</a>
<a href="https://dev.azure.com/fishtown-analytics/dbt/_build?definitionId=1&_a=summary&repositoryFilter=1&branchFilter=789%2C789%2C789%2C789">
<img src="https://dev.azure.com/fishtown-analytics/dbt/_apis/build/status/fishtown-analytics.dbt?branchName=develop" alt="Azure Pipelines" />
<a href="https://github.com/dbt-labs/dbt/actions/workflows/integration.yml">
<img src="https://github.com/dbt-labs/dbt/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
</a>
</p>
**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
![dbt architecture](https://raw.githubusercontent.com/dbt-labs/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-arch.png)
![architecture](https://raw.githubusercontent.com/dbt-labs/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-arch.png)
## Understanding dbt


@@ -1,154 +0,0 @@
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/python
trigger:
branches:
include:
- develop
- '*.latest'
- pr/*
jobs:
- job: UnitTest
pool:
vmImage: 'vs2017-win2016'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py -- -v
displayName: Run unit tests
- job: PostgresIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: UnitTest
steps:
- pwsh: |
$serviceName = Get-Service -Name postgresql*
Set-Service -InputObject $serviceName -StartupType Automatic
Start-Service -InputObject $serviceName
& $env:PGBIN\createdb.exe -U postgres dbt
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE root WITH PASSWORD 'password';"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE root WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CREATE, CONNECT ON DATABASE dbt TO root WITH GRANT OPTION;"
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE noaccess WITH PASSWORD 'password' NOSUPERUSER;"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE noaccess WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CONNECT ON DATABASE dbt TO noaccess;"
displayName: Install postgresql and set up database
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py-postgres -- -v -n4
displayName: Run integration tests
# These three are all similar except secure environment variables, which MUST be passed along to their tasks,
# but there's probably a better way to do this!
- job: SnowflakeIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: UnitTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py-snowflake -- -v -n4
env:
SNOWFLAKE_TEST_ACCOUNT: $(SNOWFLAKE_TEST_ACCOUNT)
SNOWFLAKE_TEST_PASSWORD: $(SNOWFLAKE_TEST_PASSWORD)
SNOWFLAKE_TEST_USER: $(SNOWFLAKE_TEST_USER)
SNOWFLAKE_TEST_WAREHOUSE: $(SNOWFLAKE_TEST_WAREHOUSE)
SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN: $(SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN)
SNOWFLAKE_TEST_OAUTH_CLIENT_ID: $(SNOWFLAKE_TEST_OAUTH_CLIENT_ID)
SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET: $(SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET)
displayName: Run integration tests
- job: BigQueryIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: UnitTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py-bigquery -- -v -n4
env:
BIGQUERY_SERVICE_ACCOUNT_JSON: $(BIGQUERY_SERVICE_ACCOUNT_JSON)
displayName: Run integration tests
- job: RedshiftIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: UnitTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py-redshift -- -v -n4
env:
REDSHIFT_TEST_DBNAME: $(REDSHIFT_TEST_DBNAME)
REDSHIFT_TEST_PASS: $(REDSHIFT_TEST_PASS)
REDSHIFT_TEST_USER: $(REDSHIFT_TEST_USER)
REDSHIFT_TEST_PORT: $(REDSHIFT_TEST_PORT)
REDSHIFT_TEST_HOST: $(REDSHIFT_TEST_HOST)
displayName: Run integration tests
- job: BuildWheel
pool:
vmImage: 'vs2017-win2016'
dependsOn:
- UnitTest
- PostgresIntegrationTest
- RedshiftIntegrationTest
- SnowflakeIntegrationTest
- BigQueryIntegrationTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip setuptools && python -m pip install -r requirements.txt && python -m pip install -r dev-requirements.txt
displayName: Install dependencies
- task: ShellScript@2
inputs:
scriptPath: scripts/build-wheels.sh
- task: CopyFiles@2
inputs:
contents: 'dist\?(*.whl|*.tar.gz)'
TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
inputs:
pathtoPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: dists


@@ -1,73 +0,0 @@
#!/usr/bin/env python
import json
import yaml
import sys
import argparse
from datetime import datetime, timezone
import dbt.clients.registry as registry
def yaml_type(fname):
with open(fname) as f:
return yaml.load(f)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--project", type=yaml_type, default="dbt_project.yml")
parser.add_argument("--namespace", required=True)
return parser.parse_args()
def get_full_name(args):
return "{}/{}".format(args.namespace, args.project["name"])
def init_project_in_packages(args, packages):
full_name = get_full_name(args)
if full_name not in packages:
packages[full_name] = {
"name": args.project["name"],
"namespace": args.namespace,
"latest": args.project["version"],
"assets": {},
"versions": {},
}
return packages[full_name]
def add_version_to_package(args, project_json):
project_json["versions"][args.project["version"]] = {
"id": "{}/{}".format(get_full_name(args), args.project["version"]),
"name": args.project["name"],
"version": args.project["version"],
"description": "",
"published_at": datetime.now(timezone.utc).astimezone().isoformat(),
"packages": args.project.get("packages") or [],
"works_with": [],
"_source": {
"type": "github",
"url": "",
"readme": "",
},
"downloads": {
"tarball": "",
"format": "tgz",
"sha1": "",
},
}
def main():
args = parse_args()
packages = registry.packages()
project_json = init_project_in_packages(args, packages)
if args.project["version"] in project_json["versions"]:
raise Exception("Version {} already in packages JSON"
.format(args.project["version"]),
file=sys.stderr)
add_version_to_package(args, project_json)
print(json.dumps(packages, indent=2))
if __name__ == "__main__":
main()


@@ -238,12 +238,6 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
@classmethod
def _rollback(cls, connection: Connection) -> None:
"""Roll back the given connection."""
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f'In _rollback, got {connection} - not a Connection!'
)
if connection.transaction_open is False:
raise dbt.exceptions.InternalException(
f'Tried to rollback transaction on connection '
@@ -257,12 +251,6 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
@classmethod
def close(cls, connection: Connection) -> Connection:
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f'In close, got {connection} - not a Connection!'
)
# if the connection is in closed or init, there's nothing to do
if connection.state in {ConnectionState.CLOSED, ConnectionState.INIT}:
return connection


@@ -16,7 +16,6 @@ from dbt.exceptions import (
get_relation_returned_multiple_results,
InternalException, NotImplementedException, RuntimeException,
)
from dbt import flags
from dbt import deprecations
from dbt.adapters.protocol import (
@@ -31,7 +30,6 @@ from dbt.contracts.graph.compiled import (
from dbt.contracts.graph.manifest import Manifest, MacroManifest
from dbt.contracts.graph.parsed import ParsedSeedNode
from dbt.exceptions import warn_or_error
from dbt.node_types import NodeType
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.utils import filter_null_values, executor
@@ -290,9 +288,7 @@ class BaseAdapter(metaclass=AdapterMeta):
def _schema_is_cached(self, database: Optional[str], schema: str) -> bool:
"""Check if the schema is cached, and by default logs if it is not."""
if flags.USE_CACHE is False:
return False
elif (database, schema) not in self.cache:
if (database, schema) not in self.cache:
logger.debug(
'On "{}": cache miss for schema "{}.{}", this is inefficient'
.format(self.nice_connection_name(), database, schema)
@@ -310,8 +306,7 @@ class BaseAdapter(metaclass=AdapterMeta):
self.Relation.create_from(self.config, node).without_identifier()
for node in manifest.nodes.values()
if (
node.resource_type in NodeType.executable() and
not node.is_ephemeral_model
node.is_relational and not node.is_ephemeral_model
)
}
@@ -326,7 +321,9 @@ class BaseAdapter(metaclass=AdapterMeta):
"""
info_schema_name_map = SchemaSearchMap()
nodes: Iterator[CompileResultNode] = chain(
manifest.nodes.values(),
[node for node in manifest.nodes.values() if (
node.is_relational and not node.is_ephemeral_model
)],
manifest.sources.values(),
)
for node in nodes:
@@ -342,9 +339,6 @@ class BaseAdapter(metaclass=AdapterMeta):
"""Populate the relations cache for the given schemas. Returns an
iterable of the schemas populated, as strings.
"""
if not flags.USE_CACHE:
return
cache_schemas = self._get_cache_schemas(manifest)
with executor(self.config) as tpe:
futures: List[Future[List[BaseRelation]]] = []
@@ -377,9 +371,6 @@ class BaseAdapter(metaclass=AdapterMeta):
"""Run a query that gets a populated cache of the relations in the
database and set the cache on this adapter.
"""
if not flags.USE_CACHE:
return
with self.cache.lock:
if clear:
self.cache.clear()
@@ -393,8 +384,7 @@ class BaseAdapter(metaclass=AdapterMeta):
raise_compiler_error(
'Attempted to cache a null relation for {}'.format(name)
)
if flags.USE_CACHE:
self.cache.add(relation)
self.cache.add(relation)
# so jinja doesn't render things
return ''
@@ -408,8 +398,7 @@ class BaseAdapter(metaclass=AdapterMeta):
raise_compiler_error(
'Attempted to drop a null relation for {}'.format(name)
)
if flags.USE_CACHE:
self.cache.drop(relation)
self.cache.drop(relation)
return ''
@available
@@ -430,8 +419,7 @@ class BaseAdapter(metaclass=AdapterMeta):
.format(src_name, dst_name, name)
)
if flags.USE_CACHE:
self.cache.rename(from_relation, to_relation)
self.cache.rename(from_relation, to_relation)
return ''
###
@@ -513,7 +501,7 @@ class BaseAdapter(metaclass=AdapterMeta):
def get_columns_in_relation(
self, relation: BaseRelation
) -> List[BaseColumn]:
"""Get a list of the columns in the given Relation."""
"""Get a list of the columns in the given Relation. """
raise NotImplementedException(
'`get_columns_in_relation` is not implemented for this adapter!'
)


@@ -11,7 +11,6 @@ from dbt.contracts.connection import (
Connection, ConnectionState, AdapterResponse
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt import flags
class SQLConnectionManager(BaseConnectionManager):
@@ -144,13 +143,6 @@ class SQLConnectionManager(BaseConnectionManager):
def begin(self):
connection = self.get_thread_connection()
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f'In begin, got {connection} - not a Connection!'
)
if connection.transaction_open is True:
raise dbt.exceptions.InternalException(
'Tried to begin a new transaction on connection "{}", but '
@@ -163,12 +155,6 @@ class SQLConnectionManager(BaseConnectionManager):
def commit(self):
connection = self.get_thread_connection()
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f'In commit, got {connection} - not a Connection!'
)
if connection.transaction_open is False:
raise dbt.exceptions.InternalException(
'Tried to commit transaction on connection "{}", but '


@@ -153,7 +153,7 @@ def statically_parse_adapter_dispatch(func_call, ctx, db_wrapper):
package_name = packages_arg.node.node.name
macro_name = packages_arg.node.attr
if (macro_name.startswith('_get') and 'namespaces' in macro_name):
# noqa: https://github.com/fishtown-analytics/dbt-utils/blob/9e9407b/macros/cross_db_utils/_get_utils_namespaces.sql
# noqa: https://github.com/dbt-labs/dbt-utils/blob/9e9407b/macros/cross_db_utils/_get_utils_namespaces.sql
var_name = f'{package_name}_dispatch_list'
# hard code compatibility for fivetran_utils, just a teensy bit different
# noqa: https://github.com/fivetran/dbt_fivetran_utils/blob/0978ba2/macros/_get_utils_namespaces.sql


@@ -1,10 +1,9 @@
from functools import wraps
import functools
import requests
from dbt.exceptions import RegistryException
from dbt.utils import memoized
from dbt.utils import memoized, _connection_exception_retry as connection_exception_retry
from dbt.logger import GLOBAL_LOGGER as logger
from dbt import deprecations
import os
import time
if os.getenv('DBT_PACKAGE_HUB_URL'):
DEFAULT_REGISTRY_BASE_URL = os.getenv('DBT_PACKAGE_HUB_URL')
@@ -19,26 +18,11 @@ def _get_url(url, registry_base_url=None):
return '{}{}'.format(registry_base_url, url)
def _wrap_exceptions(fn):
@wraps(fn)
def wrapper(*args, **kwargs):
max_attempts = 5
attempt = 0
while True:
attempt += 1
try:
return fn(*args, **kwargs)
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout) as exc:
if attempt < max_attempts:
time.sleep(1)
continue
raise RegistryException(
'Unable to connect to registry hub'
) from exc
return wrapper
def _get_with_retries(path, registry_base_url=None):
get_fn = functools.partial(_get, path, registry_base_url)
return connection_exception_retry(get_fn, 5)
@_wrap_exceptions
def _get(path, registry_base_url=None):
url = _get_url(path, registry_base_url)
logger.debug('Making package registry request: GET {}'.format(url))
@@ -50,22 +34,44 @@ def _get(path, registry_base_url=None):
def index(registry_base_url=None):
return _get('api/v1/index.json', registry_base_url)
return _get_with_retries('api/v1/index.json', registry_base_url)
index_cached = memoized(index)
def packages(registry_base_url=None):
return _get('api/v1/packages.json', registry_base_url)
return _get_with_retries('api/v1/packages.json', registry_base_url)
def package(name, registry_base_url=None):
return _get('api/v1/{}.json'.format(name), registry_base_url)
response = _get_with_retries('api/v1/{}.json'.format(name), registry_base_url)
# Either redirectnamespace or redirectname in the JSON response indicate a redirect
# redirectnamespace redirects based on package ownership
# redirectname redirects based on package name
# Both can be present at the same time, or neither. Fails gracefully to old name
if ('redirectnamespace' in response) or ('redirectname' in response):
if ('redirectnamespace' in response) and response['redirectnamespace'] is not None:
use_namespace = response['redirectnamespace']
else:
use_namespace = response['namespace']
if ('redirectname' in response) and response['redirectname'] is not None:
use_name = response['redirectname']
else:
use_name = response['name']
new_nwo = use_namespace + "/" + use_name
deprecations.warn('package-redirect', old_name=name, new_name=new_nwo)
return response
def package_version(name, version, registry_base_url=None):
return _get('api/v1/{}/{}.json'.format(name, version), registry_base_url)
return _get_with_retries('api/v1/{}/{}.json'.format(name, version), registry_base_url)
def get_available_versions(name):
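The `_connection_exception_retry` helper imported from `dbt.utils` above is not shown in this diff; the following is a minimal sketch of what such a wrapper could look like, assuming it mirrors the retry loop it replaces (five attempts, one-second sleep, retry on connection and timeout errors). Names and exact behaviour are assumptions, not the actual implementation:

```python
import time

import requests


def connection_exception_retry(fn, max_attempts: int, attempt: int = 0):
    """Hypothetical sketch: call fn() and retry on connection problems."""
    try:
        return fn()
    except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
        if attempt + 1 >= max_attempts:
            raise
        time.sleep(1)
        return connection_exception_retry(fn, max_attempts, attempt + 1)


# Usage mirroring _get_with_retries above: bind the request arguments with
# functools.partial, then hand the zero-argument callable to the helper.
#   get_fn = functools.partial(_get, path, registry_base_url)
#   return connection_exception_retry(get_fn, 5)
```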


@@ -1,4 +1,5 @@
import errno
import functools
import fnmatch
import json
import os
@@ -15,9 +16,8 @@ from typing import (
)
import dbt.exceptions
import dbt.utils
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.utils import _connection_exception_retry as connection_exception_retry
if sys.platform == 'win32':
from ctypes import WinDLL, c_bool
@@ -30,7 +30,7 @@ def find_matching(
root_path: str,
relative_paths_to_search: List[str],
file_pattern: str,
) -> List[Dict[str, str]]:
) -> List[Dict[str, Any]]:
"""
Given an absolute `root_path`, a list of relative paths to that
absolute root path (`relative_paths_to_search`), and a `file_pattern`
@@ -61,11 +61,19 @@ def find_matching(
relative_path = os.path.relpath(
absolute_path, absolute_path_to_search
)
modification_time = 0.0
try:
modification_time = os.path.getmtime(absolute_path)
except OSError:
logger.exception(
f"Error retrieving modification time for file {absolute_path}"
)
if reobj.match(local_file):
matching.append({
'searched_path': relative_path_to_search,
'absolute_path': absolute_path,
'relative_path': relative_path,
'modification_time': modification_time,
})
return matching
@@ -441,6 +449,13 @@ def run_cmd(
return out, err
def download_with_retries(
url: str, path: str, timeout: Optional[Union[float, tuple]] = None
) -> None:
download_fn = functools.partial(download, url, path, timeout)
connection_exception_retry(download_fn, 5)
def download(
url: str, path: str, timeout: Optional[Union[float, tuple]] = None
) -> None:
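For reference, each entry returned by `find_matching` now also carries the file's modification time (falling back to 0.0 when `os.path.getmtime` raises `OSError`). An illustrative example of the resulting shape, with an invented project path:

```python
# Illustrative only: one entry from find_matching() after this change.
match = {
    'searched_path': 'models',
    'absolute_path': '/home/user/my_project/models/orders.sql',
    'relative_path': 'orders.sql',
    'modification_time': 1632412800.0,  # os.path.getmtime(); 0.0 on OSError
}
```

`download_with_retries` follows the same pattern as the registry change above: bind the arguments with `functools.partial` and pass the callable to `connection_exception_retry` with a budget of five attempts.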


@@ -1,5 +1,5 @@
import dbt.exceptions
from typing import Any, Dict, Optional
import yaml
import yaml.scanner
@@ -56,7 +56,7 @@ def contextualized_yaml_error(raw_contents, error):
raw_error=error)
def safe_load(contents):
def safe_load(contents) -> Optional[Dict[str, Any]]:
return yaml.load(contents, Loader=SafeLoader)


@@ -10,7 +10,7 @@ from dbt.adapters.factory import get_adapter
from dbt.clients import jinja
from dbt.clients.system import make_directory
from dbt.context.providers import generate_runtime_model
from dbt.contracts.graph.manifest import Manifest
from dbt.contracts.graph.manifest import Manifest, UniqueID
from dbt.contracts.graph.compiled import (
COMPILED_TYPES,
CompiledSchemaTestNode,
@@ -107,6 +107,18 @@ def _extend_prepended_ctes(prepended_ctes, new_prepended_ctes):
_add_prepended_cte(prepended_ctes, new_cte)
def _get_tests_for_node(manifest: Manifest, unique_id: UniqueID) -> List[UniqueID]:
""" Get a list of tests that depend on the node with the
provided unique id """
return [
node.unique_id
for _, node in manifest.nodes.items()
if node.resource_type == NodeType.Test and
unique_id in node.depends_on_nodes
]
class Linker:
def __init__(self, data=None):
if data is None:
@@ -142,7 +154,7 @@ class Linker:
include all nodes in their corresponding graph entries.
"""
out_graph = self.graph.copy()
for node_id in self.graph.nodes():
for node_id in self.graph:
data = manifest.expect(node_id).to_dict(omit_none=True)
out_graph.add_node(node_id, **data)
nx.write_gpickle(out_graph, outfile)
@@ -412,13 +424,80 @@ class Compiler:
self.link_node(linker, node, manifest)
for exposure in manifest.exposures.values():
self.link_node(linker, exposure, manifest)
# linker.add_node(exposure.unique_id)
cycle = linker.find_cycles()
if cycle:
raise RuntimeError("Found a cycle: {}".format(cycle))
self.resolve_graph(linker, manifest)
def resolve_graph(self, linker: Linker, manifest: Manifest) -> None:
""" This method adds additional edges to the DAG. For a given non-test
executable node, add an edge from an upstream test to the given node if
the set of nodes the test depends on is a proper/strict subset of the
upstream nodes for the given node. """
# Given a graph:
# model1 --> model2 --> model3
#   |          |
#   |          v
#   v        test2
# test1
#
# Produce the following graph:
# model1 --> model2 --> model3
#   |          |          ^ ^
#   |          v          | |
#   v        test2 -------+ |
# test1 --------------------+
for node_id in linker.graph:
# If node is executable (in manifest.nodes) and does _not_
# represent a test, continue.
if (
node_id in manifest.nodes and
manifest.nodes[node_id].resource_type != NodeType.Test
):
# Get *everything* upstream of the node
all_upstream_nodes = nx.traversal.bfs_tree(
linker.graph, node_id, reverse=True
)
# Get the set of upstream nodes not including the current node.
upstream_nodes = set([
n for n in all_upstream_nodes if n != node_id
])
# Get all tests that depend on any upstream nodes.
upstream_tests = []
for upstream_node in upstream_nodes:
upstream_tests += _get_tests_for_node(
manifest,
upstream_node
)
for upstream_test in upstream_tests:
# Get the set of all nodes that the test depends on
# including the upstream_node itself. This is necessary
# because tests can depend on multiple nodes (ex:
# relationship tests). Test nodes do not distinguish
# between what node the test is "testing" and what
# node(s) it depends on.
test_depends_on = set(
manifest.nodes[upstream_test].depends_on_nodes
)
# If the set of nodes that an upstream test depends on
# is a proper (or strict) subset of all upstream nodes of
# the current node, add an edge from the upstream test
# to the current node. Must be a proper/strict subset to
# avoid adding a circular dependency to the graph.
if (test_depends_on < upstream_nodes):
linker.graph.add_edge(
upstream_test,
node_id
)
def compile(self, manifest: Manifest, write=True) -> Graph:
self.initialize()
linker = Linker()
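A small worked version of the edge-addition rule above, expressed directly against networkx rather than the Linker/Manifest classes. It is a condensed sketch of the rule: node names follow the model1/model2/model3 diagram in the comment, and the test dependency sets are invented to match it.

```python
import networkx as nx

# Build the "given" graph from the comment: model1 -> model2 -> model3,
# with test1 depending on model1 and test2 depending on model2.
graph = nx.DiGraph()
graph.add_edges_from([
    ('model1', 'model2'),
    ('model2', 'model3'),
    ('model1', 'test1'),
    ('model2', 'test2'),
])
test_depends_on = {'test1': {'model1'}, 'test2': {'model2'}}

for node_id in list(graph):
    if node_id in test_depends_on:  # skip the test nodes themselves
        continue
    upstream = set(nx.traversal.bfs_tree(graph, node_id, reverse=True)) - {node_id}
    for test, deps in test_depends_on.items():
        # Add an edge only when the test's dependencies are a *proper*
        # subset of the node's upstream set, to avoid creating a cycle.
        if deps < upstream:
            graph.add_edge(test, node_id)

# Only model3 gains new parents: both test1 and test2 now point at it.
print(sorted(graph.predecessors('model3')))  # ['model2', 'test1', 'test2']
```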


@@ -1,4 +1,4 @@
# all these are just exports, they need "noqa" so flake8 will not complain.
from .profile import Profile, PROFILES_DIR, read_user_config # noqa
from .profile import Profile, read_user_config # noqa
from .project import Project, IsFQNResource # noqa
from .runtime import RuntimeConfig, UnsetProfileConfig # noqa


@@ -20,10 +20,8 @@ from dbt.utils import coerce_dict_str
from .renderer import ProfileRenderer
DEFAULT_THREADS = 1
DEFAULT_PROFILES_DIR = os.path.join(os.path.expanduser('~'), '.dbt')
PROFILES_DIR = os.path.expanduser(
os.getenv('DBT_PROFILES_DIR', DEFAULT_PROFILES_DIR)
)
INVALID_PROFILE_MESSAGE = """
dbt encountered an error while trying to read your profiles.yml file.
@@ -43,7 +41,7 @@ Here, [profile name] should be replaced with a profile name
defined in your profiles.yml file. You can find profiles.yml here:
{profiles_file}/profiles.yml
""".format(profiles_file=PROFILES_DIR)
""".format(profiles_file=DEFAULT_PROFILES_DIR)
def read_profile(profiles_dir: str) -> Dict[str, Any]:
@@ -73,10 +71,10 @@ def read_user_config(directory: str) -> UserConfig:
try:
profile = read_profile(directory)
if profile:
user_cfg = coerce_dict_str(profile.get('config', {}))
if user_cfg is not None:
UserConfig.validate(user_cfg)
return UserConfig.from_dict(user_cfg)
user_config = coerce_dict_str(profile.get('config', {}))
if user_config is not None:
UserConfig.validate(user_config)
return UserConfig.from_dict(user_config)
except (RuntimeException, ValidationError):
pass
return UserConfig()
@@ -84,14 +82,32 @@ def read_user_config(directory: str) -> UserConfig:
# The Profile class is included in RuntimeConfig, so any attribute
# additions must also be set where the RuntimeConfig class is created
@dataclass
# `init=False` is a workaround for https://bugs.python.org/issue45081
@dataclass(init=False)
class Profile(HasCredentials):
profile_name: str
target_name: str
config: UserConfig
user_config: UserConfig
threads: int
credentials: Credentials
def __init__(
self,
profile_name: str,
target_name: str,
user_config: UserConfig,
threads: int,
credentials: Credentials
):
"""Explicitly defining `__init__` to work around bug in Python 3.9.7
https://bugs.python.org/issue45081
"""
self.profile_name = profile_name
self.target_name = target_name
self.user_config = user_config
self.threads = threads
self.credentials = credentials
def to_profile_info(
self, serialize_credentials: bool = False
) -> Dict[str, Any]:
@@ -106,12 +122,12 @@ class Profile(HasCredentials):
result = {
'profile_name': self.profile_name,
'target_name': self.target_name,
'config': self.config,
'user_config': self.user_config,
'threads': self.threads,
'credentials': self.credentials,
}
if serialize_credentials:
result['config'] = self.config.to_dict(omit_none=True)
result['user_config'] = self.user_config.to_dict(omit_none=True)
result['credentials'] = self.credentials.to_dict(omit_none=True)
return result
@@ -125,7 +141,7 @@ class Profile(HasCredentials):
'name': self.target_name,
'target_name': self.target_name,
'profile_name': self.profile_name,
'config': self.config.to_dict(omit_none=True),
'config': self.user_config.to_dict(omit_none=True),
})
return target
@@ -220,7 +236,7 @@ class Profile(HasCredentials):
threads: int,
profile_name: str,
target_name: str,
user_cfg: Optional[Dict[str, Any]] = None
user_config: Optional[Dict[str, Any]] = None
) -> 'Profile':
"""Create a profile from an existing set of Credentials and the
remaining information.
@@ -229,20 +245,20 @@ class Profile(HasCredentials):
:param threads: The number of threads to use for connections.
:param profile_name: The profile name used for this profile.
:param target_name: The target name used for this profile.
:param user_cfg: The user-level config block from the
:param user_config: The user-level config block from the
raw profiles, if specified.
:raises DbtProfileError: If the profile is invalid.
:returns: The new Profile object.
"""
if user_cfg is None:
user_cfg = {}
UserConfig.validate(user_cfg)
config = UserConfig.from_dict(user_cfg)
if user_config is None:
user_config = {}
UserConfig.validate(user_config)
user_config_obj: UserConfig = UserConfig.from_dict(user_config)
profile = cls(
profile_name=profile_name,
target_name=target_name,
config=config,
user_config=user_config_obj,
threads=threads,
credentials=credentials
)
@@ -295,7 +311,7 @@ class Profile(HasCredentials):
raw_profile: Dict[str, Any],
profile_name: str,
renderer: ProfileRenderer,
user_cfg: Optional[Dict[str, Any]] = None,
user_config: Optional[Dict[str, Any]] = None,
target_override: Optional[str] = None,
threads_override: Optional[int] = None,
) -> 'Profile':
@@ -307,7 +323,7 @@ class Profile(HasCredentials):
disk as yaml and its values rendered with jinja.
:param profile_name: The profile name used.
:param renderer: The config renderer.
:param user_cfg: The global config for the user, if it
:param user_config: The global config for the user, if it
was present.
:param target_override: The target to use, if provided on
the command line.
@@ -317,9 +333,9 @@ class Profile(HasCredentials):
target could not be found
:returns: The new Profile object.
"""
# user_cfg is not rendered.
if user_cfg is None:
user_cfg = raw_profile.get('config')
# user_config is not rendered.
if user_config is None:
user_config = raw_profile.get('config')
# TODO: should it be, and the values coerced to bool?
target_name, profile_data = cls.render_profile(
raw_profile, profile_name, target_override, renderer
@@ -340,7 +356,7 @@ class Profile(HasCredentials):
profile_name=profile_name,
target_name=target_name,
threads=threads,
user_cfg=user_cfg
user_config=user_config
)
@classmethod
@@ -383,13 +399,13 @@ class Profile(HasCredentials):
error_string=msg
)
)
user_cfg = raw_profiles.get('config')
user_config = raw_profiles.get('config')
return cls.from_raw_profile_info(
raw_profile=raw_profile,
profile_name=profile_name,
renderer=renderer,
user_cfg=user_cfg,
user_config=user_config,
target_override=target_override,
threads_override=threads_override,
)


@@ -645,13 +645,24 @@ class Project:
def hashed_name(self):
return hashlib.md5(self.project_name.encode('utf-8')).hexdigest()
def get_selector(self, name: str) -> SelectionSpec:
def get_selector(self, name: str) -> Union[SelectionSpec, bool]:
if name not in self.selectors:
raise RuntimeException(
f'Could not find selector named {name}, expected one of '
f'{list(self.selectors)}'
)
return self.selectors[name]
return self.selectors[name]["definition"]
def get_default_selector_name(self) -> Union[str, None]:
"""This function fetch the default selector to use on `dbt run` (if any)
:return: either a selector if default is set or None
:rtype: Union[SelectionSpec, None]
"""
for selector_name, selector in self.selectors.items():
if selector["default"] is True:
return selector_name
return None
def get_macro_search_order(self, macro_namespace: str):
for dispatch_entry in self.dispatch:


@@ -12,6 +12,7 @@ from .profile import Profile
from .project import Project
from .renderer import DbtProjectYamlRenderer, ProfileRenderer
from .utils import parse_cli_vars
from dbt import flags
from dbt import tracking
from dbt.adapters.factory import get_relation_class_by_name, get_include_paths
from dbt.helper_types import FQNPath, PathSet
@@ -117,7 +118,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
unrendered=project.unrendered,
profile_name=profile.profile_name,
target_name=profile.target_name,
config=profile.config,
user_config=profile.user_config,
threads=profile.threads,
credentials=profile.credentials,
args=args,
@@ -144,7 +145,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
project = Project.from_project_root(
project_root,
renderer,
verify_version=getattr(self.args, 'version_check', False),
verify_version=bool(flags.VERSION_CHECK),
)
cfg = self.from_parts(
@@ -197,7 +198,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
) -> Tuple[Project, Profile]:
# profile_name from the project
project_root = args.project_dir if args.project_dir else os.getcwd()
version_check = getattr(args, 'version_check', False)
version_check = bool(flags.VERSION_CHECK)
partial = Project.partial_load(
project_root,
verify_version=version_check
@@ -391,6 +392,10 @@ class UnsetCredentials(Credentials):
def type(self):
return None
@property
def unique_field(self):
return None
def connection_info(self, *args, **kwargs):
return {}
@@ -412,7 +417,7 @@ class UnsetConfig(UserConfig):
class UnsetProfile(Profile):
def __init__(self):
self.credentials = UnsetCredentials()
self.config = UnsetConfig()
self.user_config = UnsetConfig()
self.profile_name = ''
self.target_name = ''
self.threads = -1
@@ -509,7 +514,7 @@ class UnsetProfileConfig(RuntimeConfig):
unrendered=project.unrendered,
profile_name='',
target_name='',
config=UnsetConfig(),
user_config=UnsetConfig(),
threads=getattr(args, 'threads', 1),
credentials=UnsetCredentials(),
args=args,


@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Dict, Any
from typing import Dict, Any, Union
from dbt.clients.yaml_helper import ( # noqa: F401
yaml, Loader, Dumper, load_yaml_text
)
@@ -29,13 +29,14 @@ Validator Error:
"""
class SelectorConfig(Dict[str, SelectionSpec]):
class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):
@classmethod
def selectors_from_dict(cls, data: Dict[str, Any]) -> 'SelectorConfig':
try:
SelectorFile.validate(data)
selector_file = SelectorFile.from_dict(data)
validate_selector_default(selector_file)
selectors = parse_from_selectors_definition(selector_file)
except ValidationError as exc:
yaml_sel_cfg = yaml.dump(exc.instance)
@@ -118,6 +119,24 @@ def selector_config_from_data(
return selectors
def validate_selector_default(selector_file: SelectorFile) -> None:
"""Check if a selector.yml file has more than 1 default key set to true"""
default_set: bool = False
default_selector_name: Union[str, None] = None
for selector in selector_file.selectors:
if selector.default is True and default_set is False:
default_set = True
default_selector_name = selector.name
continue
if selector.default is True and default_set is True:
raise DbtSelectorsError(
"Error when parsing the selector file. "
"Found multiple selectors with `default: true`:"
f"{default_selector_name} and {selector.name}"
)
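A hedged, standalone sketch of the at-most-one-default rule enforced above; SelectorDef here is a stand-in dataclass, not the real SelectorFile contract.

# Illustrative stand-in for the validation in validate_selector_default.
from dataclasses import dataclass
from typing import List

@dataclass
class SelectorDef:
    name: str
    default: bool = False

def validate_default(selectors: List[SelectorDef]) -> None:
    defaults = [s.name for s in selectors if s.default]
    if len(defaults) > 1:
        raise ValueError(
            "Found multiple selectors with `default: true`: " + " and ".join(defaults)
        )

validate_default([SelectorDef("nightly", default=True), SelectorDef("adhoc")])  # passes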
# These are utilities to clean up the dictionary created from
# selectors.yml by turning the cli-string format entries into
# normalized dictionary entries. It parallels the flow in

View File

@@ -526,8 +526,6 @@ class BaseContext(metaclass=ContextMeta):
The list of valid flags are:
- `flags.STRICT_MODE`: True if `--strict` (or `-S`) was provided on the
command line
- `flags.FULL_REFRESH`: True if `--full-refresh` was provided on the
command line
- `flags.NON_DESTRUCTIVE`: True if `--non-destructive` was provided on

View File

@@ -120,11 +120,12 @@ class BaseContextConfigGenerator(Generic[T]):
def calculate_node_config(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: Dict[str, Any] = None
) -> BaseConfig:
own_config = self.get_node_project(project_name)
@@ -134,8 +135,15 @@ class BaseContextConfigGenerator(Generic[T]):
for fqn_config in project_configs:
result = self._update_from_config(result, fqn_config)
for config_call in config_calls:
result = self._update_from_config(result, config_call)
# Config from schema file patches has lower precedence than
# config in the models (config_call_dict), so we apply the patch_config_dict
# before the config_call_dict
if patch_config_dict:
result = self._update_from_config(result, patch_config_dict)
# config_calls are created in the 'experimental' model parser and
# the ParseConfigObject (via add_config_call)
result = self._update_from_config(result, config_call_dict)
if own_config.project_name != self._active_project.project_name:
for fqn_config in self._active_project_configs(fqn, resource_type):
@@ -147,11 +155,12 @@ class BaseContextConfigGenerator(Generic[T]):
@abstractmethod
def calculate_node_config_dict(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: Dict[str, Any],
) -> Dict[str, Any]:
...
@@ -186,18 +195,20 @@ class ContextConfigGenerator(BaseContextConfigGenerator[C]):
def calculate_node_config_dict(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: dict = None
) -> Dict[str, Any]:
config = self.calculate_node_config(
config_calls=config_calls,
config_call_dict=config_call_dict,
fqn=fqn,
resource_type=resource_type,
project_name=project_name,
base=base,
patch_config_dict=patch_config_dict
)
finalized = config.finalize_and_validate()
return finalized.to_dict(omit_none=True)
@@ -209,18 +220,20 @@ class UnrenderedConfigGenerator(BaseContextConfigGenerator[Dict[str, Any]]):
def calculate_node_config_dict(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: dict = None
) -> Dict[str, Any]:
return self.calculate_node_config(
config_calls=config_calls,
config_call_dict=config_call_dict,
fqn=fqn,
resource_type=resource_type,
project_name=project_name,
base=base,
patch_config_dict=patch_config_dict
)
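To illustrate the precedence noted in calculate_node_config above (project config, then schema-file patch config, then in-model config() calls), here is a simplified sketch that uses plain dict updates and ignores the append/update merge behaviors handled by _update_from_config.

# Simplified layering sketch; later layers win on conflicting keys.
def layer_configs(project_cfg, patch_cfg, config_call_dict):
    result = {}
    for layer in (project_cfg, patch_cfg, config_call_dict):
        result.update(layer or {})
    return result

print(layer_configs(
    {"materialized": "view", "tags": ["base"]},   # project-level config
    {"materialized": "table"},                    # schema file patch
    {"materialized": "incremental"},              # config() call in the model wins
))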
def initial_result(
@@ -251,20 +264,39 @@ class ContextConfig:
resource_type: NodeType,
project_name: str,
) -> None:
self._config_calls: List[Dict[str, Any]] = []
self._config_call_dict: Dict[str, Any] = {}
self._active_project = active_project
self._fqn = fqn
self._resource_type = resource_type
self._project_name = project_name
def update_in_model_config(self, opts: Dict[str, Any]) -> None:
self._config_calls.append(opts)
def add_config_call(self, opts: Dict[str, Any]) -> None:
dct = self._config_call_dict
self._add_config_call(dct, opts)
@classmethod
def _add_config_call(cls, config_call_dict, opts: Dict[str, Any]) -> None:
for k, v in opts.items():
# MergeBehavior for post-hook and pre-hook is to collect all
# values, instead of overwriting
if k in BaseConfig.mergebehavior['append']:
if not isinstance(v, list):
v = [v]
if k in BaseConfig.mergebehavior['update'] and not isinstance(v, dict):
raise InternalException(f'expected dict, got {v}')
if k in config_call_dict and isinstance(config_call_dict[k], list):
config_call_dict[k].extend(v)
elif k in config_call_dict and isinstance(config_call_dict[k], dict):
config_call_dict[k].update(v)
else:
config_call_dict[k] = v
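A self-contained sketch of the append/update/clobber merging performed by _add_config_call; the mergebehavior mapping mirrors BaseConfig.mergebehavior shown further down in this diff, and the error handling for malformed 'update' values is omitted.

# Illustrative re-implementation of the merge rules above.
mergebehavior = {
    "append": ["pre-hook", "pre_hook", "post-hook", "post_hook", "tags"],
    "update": ["quoting", "column_types", "meta"],
}

def add_config_call(config_call_dict, opts):
    for k, v in opts.items():
        if k in mergebehavior["append"] and not isinstance(v, list):
            v = [v]
        if k in config_call_dict and isinstance(config_call_dict[k], list):
            config_call_dict[k].extend(v)
        elif k in config_call_dict and isinstance(config_call_dict[k], dict):
            config_call_dict[k].update(v)
        else:
            config_call_dict[k] = v

acc = {}
add_config_call(acc, {"tags": "nightly", "quoting": {"identifier": True}})
add_config_call(acc, {"tags": ["finance"], "quoting": {"schema": False}})
print(acc)  # tags accumulate; quoting dicts are merged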
def build_config_dict(
self,
base: bool = False,
*,
rendered: bool = True,
patch_config_dict: dict = None
) -> Dict[str, Any]:
if rendered:
src = ContextConfigGenerator(self._active_project)
@@ -272,9 +304,10 @@ class ContextConfig:
src = UnrenderedConfigGenerator(self._active_project)
return src.calculate_node_config_dict(
config_calls=self._config_calls,
config_call_dict=self._config_call_dict,
fqn=self._fqn,
resource_type=self._resource_type,
project_name=self._project_name,
base=base,
patch_config_dict=patch_config_dict
)

View File

@@ -279,7 +279,7 @@ class Config(Protocol):
...
# `config` implementations
# Implementation of "config(..)" calls in models
class ParseConfigObject(Config):
def __init__(self, model, context_config: Optional[ContextConfig]):
self.model = model
@@ -316,7 +316,7 @@ class ParseConfigObject(Config):
raise RuntimeException(
'At parse time, did not receive a context config'
)
self.context_config.update_in_model_config(opts)
self.context_config.add_config_call(opts)
return ''
def set(self, name, value):
@@ -1243,7 +1243,7 @@ class ModelContext(ProviderContext):
@contextproperty
def pre_hooks(self) -> List[Dict[str, Any]]:
if isinstance(self.model, ParsedSourceDefinition):
if self.model.resource_type in [NodeType.Source, NodeType.Test]:
return []
return [
h.to_dict(omit_none=True) for h in self.model.config.pre_hook
@@ -1251,7 +1251,7 @@ class ModelContext(ProviderContext):
@contextproperty
def post_hooks(self) -> List[Dict[str, Any]]:
if isinstance(self.model, ParsedSourceDefinition):
if self.model.resource_type in [NodeType.Source, NodeType.Test]:
return []
return [
h.to_dict(omit_none=True) for h in self.model.config.post_hook

View File

@@ -1,5 +1,6 @@
import abc
import itertools
import hashlib
from dataclasses import dataclass, field
from typing import (
Any, ClassVar, Dict, Tuple, Iterable, Optional, List, Callable,
@@ -127,6 +128,15 @@ class Credentials(
'type not implemented for base credentials class'
)
@abc.abstractproperty
def unique_field(self) -> str:
raise NotImplementedError(
'type not implemented for base credentials class'
)
def hashed_unique_field(self) -> str:
return hashlib.md5(self.unique_field.encode('utf-8')).hexdigest()
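A minimal sketch of the new hashed_unique_field helper, so callers can work with an md5 digest of the connection-identifying value rather than the raw field; the example value is made up.

# Illustrative: hash a connection-identifying value the way hashed_unique_field does.
import hashlib

def hashed_unique_field(unique_field: str) -> str:
    return hashlib.md5(unique_field.encode("utf-8")).hexdigest()

print(hashed_unique_field("my_warehouse_account"))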
def connection_info(
self, *, with_aliases: bool = False
) -> Iterable[Tuple[str, Any]]:
@@ -176,14 +186,11 @@ class UserConfigContract(Protocol):
partial_parse: Optional[bool] = None
printer_width: Optional[int] = None
def set_values(self, cookie_dir: str) -> None:
...
class HasCredentials(Protocol):
credentials: Credentials
profile_name: str
config: UserConfigContract
user_config: UserConfigContract
target_name: str
threads: int

View File

@@ -42,6 +42,7 @@ parse_file_type_to_parser = {
class FilePath(dbtClassMixin):
searched_path: str
relative_path: str
modification_time: float
project_root: str
@property
@@ -132,6 +133,10 @@ class RemoteFile(dbtClassMixin):
def original_file_path(self):
return 'from remote system'
@property
def modification_time(self):
return 'from remote system'
@dataclass
class BaseSourceFile(dbtClassMixin, SerializableType):
@@ -150,8 +155,6 @@ class BaseSourceFile(dbtClassMixin, SerializableType):
def file_id(self):
if isinstance(self.path, RemoteFile):
return None
if self.checksum.name == 'none':
return None
return f'{self.project_name}://{self.path.original_file_path}'
def _serialize(self):
@@ -220,7 +223,7 @@ class SchemaSourceFile(BaseSourceFile):
# node patches contain models, seeds, snapshots, analyses
ndp: List[str] = field(default_factory=list)
# any macro patches in this file by macro unique_id.
mcp: List[str] = field(default_factory=list)
mcp: Dict[str, str] = field(default_factory=dict)
# any source patches in this file. The entries are package, name pairs
# Patches are only against external sources. Sources can be
# created too, but those are in 'sources'

View File

@@ -109,7 +109,9 @@ class CompiledSnapshotNode(CompiledNode):
@dataclass
class CompiledDataTestNode(CompiledNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
config: TestConfig = field(default_factory=TestConfig)
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type:ignore
@dataclass
@@ -117,7 +119,9 @@ class CompiledSchemaTestNode(CompiledNode, HasTestMetadata):
# keep this in sync with ParsedSchemaTestNode!
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
column_name: Optional[str] = None
config: TestConfig = field(default_factory=TestConfig)
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type:ignore
def same_contents(self, other) -> bool:
if other is None:

View File

@@ -14,7 +14,7 @@ from dbt.contracts.graph.compiled import (
CompileResultNode, ManifestNode, NonSourceCompiledNode, GraphMemberNode
)
from dbt.contracts.graph.parsed import (
ParsedMacro, ParsedDocumentation, ParsedNodePatch, ParsedMacroPatch,
ParsedMacro, ParsedDocumentation,
ParsedSourceDefinition, ParsedExposure, HasUniqueID,
UnpatchedSourceDefinition, ManifestNodes
)
@@ -26,9 +26,7 @@ from dbt.contracts.util import (
from dbt.dataclass_schema import dbtClassMixin
from dbt.exceptions import (
CompilationException,
raise_duplicate_resource_name, raise_compiler_error, warn_or_error,
raise_duplicate_patch_name,
raise_duplicate_macro_patch_name, raise_duplicate_source_patch_name,
raise_duplicate_resource_name, raise_compiler_error,
)
from dbt.helper_types import PathSet
from dbt.logger import GLOBAL_LOGGER as logger
@@ -172,7 +170,7 @@ class RefableLookup(dbtClassMixin):
class AnalysisLookup(RefableLookup):
_lookup_types: ClassVar[set] = set(NodeType.Analysis)
_lookup_types: ClassVar[set] = set([NodeType.Analysis])
def _search_packages(
@@ -225,9 +223,7 @@ class ManifestMetadata(BaseArtifactMetadata):
self.user_id = tracking.active_user.id
if self.send_anonymous_usage_stats is None:
self.send_anonymous_usage_stats = (
not tracking.active_user.do_not_track
)
self.send_anonymous_usage_stats = flags.SEND_ANONYMOUS_USAGE_STATS
@classmethod
def default(cls):
@@ -718,60 +714,6 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
resource_fqns[resource_type_plural].add(tuple(resource.fqn))
return resource_fqns
# This is called by 'parse_patch' in the NodePatchParser
def add_patch(
self, source_file: SchemaSourceFile, patch: ParsedNodePatch,
) -> None:
if patch.yaml_key in ['models', 'seeds', 'snapshots']:
unique_id = self.ref_lookup.get_unique_id(patch.name, None)
elif patch.yaml_key == 'analyses':
unique_id = self.analysis_lookup.get_unique_id(patch.name, None)
else:
raise dbt.exceptions.InternalException(
f'Unexpected yaml_key {patch.yaml_key} for patch in '
f'file {source_file.path.original_file_path}'
)
if unique_id is None:
# This will usually happen when a node is disabled
return
# patches can't be overwritten
node = self.nodes.get(unique_id)
if node:
if node.patch_path:
package_name, existing_file_path = node.patch_path.split('://')
raise_duplicate_patch_name(patch, existing_file_path)
source_file.append_patch(patch.yaml_key, unique_id)
node.patch(patch)
def add_macro_patch(
self, source_file: SchemaSourceFile, patch: ParsedMacroPatch,
) -> None:
# macros are fully namespaced
unique_id = f'macro.{patch.package_name}.{patch.name}'
macro = self.macros.get(unique_id)
if not macro:
warn_or_error(
f'WARNING: Found documentation for macro "{patch.name}" '
f'which was not found'
)
return
if macro.patch_path:
package_name, existing_file_path = macro.patch_path.split('://')
raise_duplicate_macro_patch_name(patch, existing_file_path)
source_file.macro_patches.append(unique_id)
macro.patch(patch)
def add_source_patch(
self, source_file: SchemaSourceFile, patch: SourcePatch,
) -> None:
# source patches must be unique
key = (patch.overrides, patch.name)
if key in self.source_patches:
raise_duplicate_source_patch_name(patch, self.source_patches[key])
self.source_patches[key] = patch
source_file.source_patches.append(key)
def get_used_schemas(self, resource_types=None):
return frozenset({
(node.database, node.schema) for node in

View File

@@ -2,13 +2,13 @@ from dataclasses import field, Field, dataclass
from enum import Enum
from itertools import chain
from typing import (
Any, List, Optional, Dict, Union, Type, TypeVar
Any, List, Optional, Dict, Union, Type, TypeVar, Callable
)
from dbt.dataclass_schema import (
dbtClassMixin, ValidationError, register_pattern,
)
from dbt.contracts.graph.unparsed import AdditionalPropertiesAllowed
from dbt.exceptions import InternalException
from dbt.exceptions import InternalException, CompilationException
from dbt.contracts.util import Replaceable, list_str
from dbt import hooks
from dbt.node_types import NodeType
@@ -204,6 +204,34 @@ class BaseConfig(
else:
self._extra[key] = value
def __delitem__(self, key):
if hasattr(self, key):
msg = (
'Error, tried to delete config key "{}": Cannot delete '
'built-in keys'
).format(key)
raise CompilationException(msg)
else:
del self._extra[key]
def _content_iterator(self, include_condition: Callable[[Field], bool]):
seen = set()
for fld, _ in self._get_fields():
seen.add(fld.name)
if include_condition(fld):
yield fld.name
for key in self._extra:
if key not in seen:
seen.add(key)
yield key
def __iter__(self):
yield from self._content_iterator(include_condition=lambda f: True)
def __len__(self):
return len(self._get_fields()) + len(self._extra)
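A minimal sketch of the dict-like iteration added here: declared fields are yielded first, then any extra adapter-specific keys. TinyConfig is illustrative and skips the seen-set bookkeeping of _content_iterator.

# Illustrative stand-in for BaseConfig's __iter__/__len__ behavior.
from dataclasses import dataclass, field, fields

@dataclass
class TinyConfig:
    enabled: bool = True
    materialized: str = "view"
    _extra: dict = field(default_factory=dict)

    def __iter__(self):
        for f in fields(self):
            if f.name != "_extra":
                yield f.name
        yield from self._extra

cfg = TinyConfig()
cfg._extra["partition_by"] = "event_date"
print(list(cfg))  # ['enabled', 'materialized', 'partition_by']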
@staticmethod
def compare_key(
unrendered: Dict[str, Any],
@@ -239,8 +267,15 @@ class BaseConfig(
return False
return True
# This is used in 'add_config_call' to create the combined config_call_dict.
# 'meta' moved here from node
mergebehavior = {
"append": ['pre-hook', 'pre_hook', 'post-hook', 'post_hook', 'tags'],
"update": ['quoting', 'column_types', 'meta'],
}
@classmethod
def _extract_dict(
def _merge_dicts(
cls, src: Dict[str, Any], data: Dict[str, Any]
) -> Dict[str, Any]:
"""Find all the items in data that match a target_field on this class,
@@ -286,10 +321,10 @@ class BaseConfig(
adapter_config_cls = get_config_class_by_name(adapter_type)
self_merged = self._extract_dict(dct, data)
self_merged = self._merge_dicts(dct, data)
dct.update(self_merged)
adapter_merged = adapter_config_cls._extract_dict(dct, data)
adapter_merged = adapter_config_cls._merge_dicts(dct, data)
dct.update(adapter_merged)
# any remaining fields must be "clobber"
@@ -321,33 +356,8 @@ class SourceConfig(BaseConfig):
@dataclass
class NodeConfig(BaseConfig):
class NodeAndTestConfig(BaseConfig):
enabled: bool = True
materialized: str = 'view'
persist_docs: Dict[str, Any] = field(default_factory=dict)
post_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
pre_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
# this only applies for config v1, so it doesn't participate in comparison
vars: Dict[str, Any] = field(
default_factory=dict,
metadata=metas(CompareBehavior.Exclude, MergeBehavior.Update),
)
quoting: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
# This is actually only used by seeds. Should it be available to others?
# That would be a breaking change!
column_types: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
# these fields are included in serialized output, but are not part of
# config comparison (they are part of database_representation)
alias: Optional[str] = field(
@@ -368,7 +378,38 @@ class NodeConfig(BaseConfig):
MergeBehavior.Append,
CompareBehavior.Exclude),
)
meta: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
@dataclass
class NodeConfig(NodeAndTestConfig):
# Note: if any new fields are added with MergeBehavior, also update the
# 'mergebehavior' dictionary
materialized: str = 'view'
persist_docs: Dict[str, Any] = field(default_factory=dict)
post_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
pre_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
quoting: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
# This is actually only used by seeds. Should it be available to others?
# That would be a breaking change!
column_types: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
full_refresh: Optional[bool] = None
on_schema_change: Optional[str] = 'ignore'
@classmethod
def __pre_deserialize__(cls, data):
@@ -410,7 +451,8 @@ class SeedConfig(NodeConfig):
@dataclass
class TestConfig(NodeConfig):
class TestConfig(NodeAndTestConfig):
# this is repeated because of a different default
schema: Optional[str] = field(
default='dbt_test__audit',
metadata=CompareBehavior.Exclude.meta(),

View File

@@ -148,6 +148,7 @@ class ParsedNodeMixins(dbtClassMixin):
"""Given a ParsedNodePatch, add the new information to the node."""
# explicitly pick out the parts to update so we don't inadvertently
# step on the model name or anything
# Note: config should already be updated
self.patch_path: Optional[str] = patch.file_id
# update created_at so process_docs will run in partial parsing
self.created_at = int(time.time())
@@ -155,20 +156,10 @@ class ParsedNodeMixins(dbtClassMixin):
self.columns = patch.columns
self.meta = patch.meta
self.docs = patch.docs
if flags.STRICT_MODE:
# It seems odd that an instance can be invalid
# Maybe there should be validation or restrictions
# elsewhere?
assert isinstance(self, dbtClassMixin)
dct = self.to_dict(omit_none=False)
self.validate(dct)
def get_materialization(self):
return self.config.materialized
def local_vars(self):
return self.config.vars
@dataclass
class ParsedNodeMandatory(
@@ -203,6 +194,7 @@ class ParsedNodeDefaults(ParsedNodeMandatory):
deferred: bool = False
unrendered_config: Dict[str, Any] = field(default_factory=dict)
created_at: int = field(default_factory=lambda: int(time.time()))
config_call_dict: Dict[str, Any] = field(default_factory=dict)
def write_node(self, target_path: str, subdirectory: str, payload: str):
if (os.path.basename(self.path) ==
@@ -229,6 +221,11 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins, SerializableType):
def _serialize(self):
return self.to_dict()
def __post_serialize__(self, dct):
if 'config_call_dict' in dct:
del dct['config_call_dict']
return dct
@classmethod
def _deserialize(cls, dct: Dict[str, int]):
# The serialized ParsedNodes do not differ from each other
@@ -258,10 +255,16 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins, SerializableType):
return cls.from_dict(dct)
def _persist_column_docs(self) -> bool:
return bool(self.config.persist_docs.get('columns'))
if hasattr(self.config, 'persist_docs'):
assert isinstance(self.config, NodeConfig)
return bool(self.config.persist_docs.get('columns'))
return False
def _persist_relation_docs(self) -> bool:
return bool(self.config.persist_docs.get('relation'))
if hasattr(self.config, 'persist_docs'):
assert isinstance(self.config, NodeConfig)
return bool(self.config.persist_docs.get('relation'))
return False
def same_body(self: T, other: T) -> bool:
return self.raw_sql == other.raw_sql
@@ -411,7 +414,9 @@ class HasTestMetadata(dbtClassMixin):
@dataclass
class ParsedDataTestNode(ParsedNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
config: TestConfig = field(default_factory=TestConfig)
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type: ignore
@dataclass
@@ -419,7 +424,9 @@ class ParsedSchemaTestNode(ParsedNode, HasTestMetadata):
# keep this in sync with CompiledSchemaTestNode!
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
column_name: Optional[str] = None
config: TestConfig = field(default_factory=TestConfig)
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type: ignore
def same_contents(self, other) -> bool:
if other is None:
@@ -456,6 +463,7 @@ class ParsedPatch(HasYamlMetadata, Replaceable):
description: str
meta: Dict[str, Any]
docs: Docs
config: Dict[str, Any]
# The parsed node update is only the 'patch', not the test. The test became a
@@ -487,9 +495,6 @@ class ParsedMacro(UnparsedBaseNode, HasUniqueID):
arguments: List[MacroArgument] = field(default_factory=list)
created_at: int = field(default_factory=lambda: int(time.time()))
def local_vars(self):
return {}
def patch(self, patch: ParsedMacroPatch):
self.patch_path: Optional[str] = patch.file_id
self.description = patch.description
@@ -497,11 +502,6 @@ class ParsedMacro(UnparsedBaseNode, HasUniqueID):
self.meta = patch.meta
self.docs = patch.docs
self.arguments = patch.arguments
if flags.STRICT_MODE:
# What does this actually validate?
assert isinstance(self, dbtClassMixin)
dct = self.to_dict(omit_none=False)
self.validate(dct)
def same_contents(self, other: Optional['ParsedMacro']) -> bool:
if other is None:
@@ -692,7 +692,7 @@ class ParsedSourceDefinition(
@property
def depends_on(self):
return {'nodes': []}
return DependsOn(macros=[], nodes=[])
@property
def refs(self):

View File

@@ -126,12 +126,17 @@ class HasYamlMetadata(dbtClassMixin):
@dataclass
class UnparsedAnalysisUpdate(HasColumnDocs, HasDocs, HasYamlMetadata):
class HasConfig():
config: Dict[str, Any] = field(default_factory=dict)
@dataclass
class UnparsedAnalysisUpdate(HasConfig, HasColumnDocs, HasDocs, HasYamlMetadata):
pass
@dataclass
class UnparsedNodeUpdate(HasColumnTests, HasTests, HasYamlMetadata):
class UnparsedNodeUpdate(HasConfig, HasColumnTests, HasTests, HasYamlMetadata):
quote_columns: Optional[bool] = None
@@ -143,7 +148,7 @@ class MacroArgument(dbtClassMixin):
@dataclass
class UnparsedMacroUpdate(HasDocs, HasYamlMetadata):
class UnparsedMacroUpdate(HasConfig, HasDocs, HasYamlMetadata):
arguments: List[MacroArgument] = field(default_factory=list)
@@ -261,6 +266,7 @@ class UnparsedSourceDefinition(dbtClassMixin, Replaceable):
loaded_at_field: Optional[str] = None
tables: List[UnparsedSourceTableDefinition] = field(default_factory=list)
tags: List[str] = field(default_factory=list)
config: Dict[str, Any] = field(default_factory=dict)
@property
def yaml_key(self) -> 'str':

View File

@@ -1,9 +1,7 @@
from dbt.contracts.util import Replaceable, Mergeable, list_str
from dbt.contracts.connection import UserConfigContract, QueryComment
from dbt.contracts.connection import QueryComment, UserConfigContract
from dbt.helper_types import NoValue
from dbt.logger import GLOBAL_LOGGER as logger # noqa
from dbt import tracking
from dbt import ui
from dbt.dataclass_schema import (
dbtClassMixin, ValidationError,
HyphenatedDbtClassMixin,
@@ -83,6 +81,7 @@ class GitPackage(Package):
class RegistryPackage(Package):
package: str
version: Union[RawVersion, List[RawVersion]]
install_prerelease: Optional[bool] = False
def get_versions(self) -> List[str]:
if isinstance(self.version, list):
@@ -229,25 +228,20 @@ class UserConfig(ExtensibleDbtClassMixin, Replaceable, UserConfigContract):
use_colors: Optional[bool] = None
partial_parse: Optional[bool] = None
printer_width: Optional[int] = None
def set_values(self, cookie_dir):
if self.send_anonymous_usage_stats:
tracking.initialize_tracking(cookie_dir)
else:
tracking.do_not_track()
if self.use_colors is not None:
ui.use_colors(self.use_colors)
if self.printer_width:
ui.printer_width(self.printer_width)
write_json: Optional[bool] = None
warn_error: Optional[bool] = None
log_format: Optional[bool] = None
debug: Optional[bool] = None
version_check: Optional[bool] = None
fail_fast: Optional[bool] = None
use_experimental_parser: Optional[bool] = None
@dataclass
class ProfileConfig(HyphenatedDbtClassMixin, Replaceable):
profile_name: str = field(metadata={'preserve_underscore': True})
target_name: str = field(metadata={'preserve_underscore': True})
config: UserConfig
user_config: UserConfig = field(metadata={'preserve_underscore': True})
threads: int
# TODO: make this a dynamic union of some kind?
credentials: Optional[Dict[str, Any]]

View File

@@ -285,6 +285,9 @@ class SourceFreshnessOutput(dbtClassMixin):
status: FreshnessStatus
criteria: FreshnessThreshold
adapter_response: Dict[str, Any]
timing: List[TimingInfo]
thread_id: str
execution_time: float
@dataclass
@@ -333,7 +336,10 @@ def process_freshness_result(
max_loaded_at_time_ago_in_s=result.age,
status=result.status,
criteria=criteria,
adapter_response=result.adapter_response
adapter_response=result.adapter_response,
timing=result.timing,
thread_id=result.thread_id,
execution_time=result.execution_time,
)

View File

@@ -58,6 +58,7 @@ class RPCExecParameters(RPCParameters):
class RPCCompileParameters(RPCParameters):
threads: Optional[int] = None
models: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
@@ -71,12 +72,14 @@ class RPCListParameters(RPCParameters):
select: Union[None, str, List[str]] = None
selector: Optional[str] = None
output: Optional[str] = 'json'
output_keys: Optional[List[str]] = None
@dataclass
class RPCRunParameters(RPCParameters):
threads: Optional[int] = None
models: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
@@ -116,6 +119,17 @@ class RPCDocsGenerateParameters(RPCParameters):
state: Optional[str] = None
@dataclass
class RPCBuildParameters(RPCParameters):
resource_types: Optional[List[str]] = None
select: Union[None, str, List[str]] = None
threads: Optional[int] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
defer: Optional[bool] = None
@dataclass
class RPCCliParameters(RPCParameters):
cli: str
@@ -186,6 +200,8 @@ class RPCRunOperationParameters(RPCParameters):
class RPCSourceFreshnessParameters(RPCParameters):
threads: Optional[int] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
@dataclass

View File

@@ -9,6 +9,7 @@ class SelectorDefinition(dbtClassMixin):
name: str
definition: Union[str, Dict[str, Any]]
description: str = ''
default: bool = False
@dataclass

View File

@@ -57,22 +57,6 @@ class DispatchPackagesDeprecation(DBTDeprecation):
'''
class MaterializationReturnDeprecation(DBTDeprecation):
_name = 'materialization-return'
_description = '''\
The materialization ("{materialization}") did not explicitly return a list
of relations to add to the cache. By default the target relation will be
added, but this behavior will be removed in a future version of dbt.
For more information, see:
https://docs.getdbt.com/v0.15/docs/creating-new-materializations#section-6-returning-relations
'''
class NotADictionaryDeprecation(DBTDeprecation):
_name = 'not-a-dictionary'
@@ -131,6 +115,14 @@ class AdapterMacroDeprecation(DBTDeprecation):
'''
class PackageRedirectDeprecation(DBTDeprecation):
_name = 'package-redirect'
_description = '''\
The `{old_name}` package is deprecated in favor of `{new_name}`. Please update
your `packages.yml` configuration to use `{new_name}` instead.
'''
_adapter_renamed_description = """\
The adapter function `adapter.{old_name}` is deprecated and will be removed in
a future release of dbt. Please use `adapter.{new_name}` instead.
@@ -170,12 +162,12 @@ active_deprecations: Set[str] = set()
deprecations_list: List[DBTDeprecation] = [
DispatchPackagesDeprecation(),
MaterializationReturnDeprecation(),
NotADictionaryDeprecation(),
ColumnQuotingDeprecation(),
ModelsKeyNonModelDeprecation(),
ExecuteMacrosReleaseDeprecation(),
AdapterMacroDeprecation(),
PackageRedirectDeprecation()
]
deprecations: Dict[str, DBTDeprecation] = {

View File

@@ -30,9 +30,13 @@ class RegistryPackageMixin:
class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
def __init__(self, package: str, version: str) -> None:
def __init__(self,
package: str,
version: str,
version_latest: str) -> None:
super().__init__(package)
self.version = version
self.version_latest = version_latest
@property
def name(self):
@@ -44,6 +48,9 @@ class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
def get_version(self):
return self.version
def get_version_latest(self):
return self.version_latest
def nice_version_name(self):
return 'version {}'.format(self.version)
@@ -61,7 +68,7 @@ class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
system.make_directory(os.path.dirname(tar_path))
download_url = metadata.downloads.tarball
system.download(download_url, tar_path)
system.download_with_retries(download_url, tar_path)
deps_path = project.modules_path
package_name = self.get_project_name(project, renderer)
system.untar_package(tar_path, deps_path, package_name)
@@ -71,10 +78,14 @@ class RegistryUnpinnedPackage(
RegistryPackageMixin, UnpinnedPackage[RegistryPinnedPackage]
):
def __init__(
self, package: str, versions: List[semver.VersionSpecifier]
self,
package: str,
versions: List[semver.VersionSpecifier],
install_prerelease: bool
) -> None:
super().__init__(package)
self.versions = versions
self.install_prerelease = install_prerelease
def _check_in_index(self):
index = registry.index_cached()
@@ -91,13 +102,18 @@ class RegistryUnpinnedPackage(
semver.VersionSpecifier.from_version_string(v)
for v in raw_version
]
return cls(package=contract.package, versions=versions)
return cls(
package=contract.package,
versions=versions,
install_prerelease=contract.install_prerelease
)
def incorporate(
self, other: 'RegistryUnpinnedPackage'
) -> 'RegistryUnpinnedPackage':
return RegistryUnpinnedPackage(
package=self.package,
install_prerelease=self.install_prerelease,
versions=self.versions + other.versions,
)
@@ -111,12 +127,18 @@ class RegistryUnpinnedPackage(
raise DependencyException(new_msg) from e
available = registry.get_available_versions(self.package)
installable = semver.filter_installable(
available,
self.install_prerelease
)
available_latest = installable[-1]
# for now, pick a version and then recurse. later on,
# we'll probably want to traverse multiple options
# so we can match packages. not going to make a difference
# right now.
target = semver.resolve_to_specific_version(range_, available)
target = semver.resolve_to_specific_version(range_, installable)
if not target:
package_version_not_found(self.package, range_, available)
return RegistryPinnedPackage(package=self.package, version=target)
package_version_not_found(self.package, range_, installable)
return RegistryPinnedPackage(package=self.package, version=target,
version_latest=available_latest)
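A conceptual sketch (not dbt's semver implementation) of how install_prerelease narrows the candidate versions before a specific version is resolved; the version strings are illustrative.

# Illustrative prerelease filtering: drop versions like "0.21.0-rc1" unless opted in.
def filter_installable(available, install_prerelease):
    if install_prerelease:
        return available
    return [v for v in available if "-" not in v]

available = ["0.20.0", "0.21.0-rc1", "0.21.0"]
installable = filter_installable(available, install_prerelease=False)
print(installable[-1])  # latest installable version: "0.21.0"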

View File

@@ -710,11 +710,11 @@ def system_error(operation_name):
raise_compiler_error(
"dbt encountered an error when attempting to {}. "
"If this error persists, please create an issue at: \n\n"
"https://github.com/fishtown-analytics/dbt"
"https://github.com/dbt-labs/dbt"
.format(operation_name))
class RegistryException(Exception):
class ConnectionException(Exception):
pass

View File

@@ -6,18 +6,47 @@ if os.name != 'nt':
from pathlib import Path
from typing import Optional
# initially all flags are set to None, the on-load call of reset() will set
# them for their first time.
STRICT_MODE = None
FULL_REFRESH = None
USE_CACHE = None
WARN_ERROR = None
TEST_NEW_PARSER = None
# PROFILES_DIR must be set before the other flags
DEFAULT_PROFILES_DIR = os.path.join(os.path.expanduser('~'), '.dbt')
PROFILES_DIR = os.path.expanduser(
os.getenv('DBT_PROFILES_DIR', DEFAULT_PROFILES_DIR)
)
STRICT_MODE = False # Only here for backwards compatibility
FULL_REFRESH = False # subcommand
STORE_FAILURES = False # subcommand
GREEDY = None # subcommand
# Global CLI commands
USE_EXPERIMENTAL_PARSER = None
WARN_ERROR = None
WRITE_JSON = None
PARTIAL_PARSE = None
USE_COLORS = None
STORE_FAILURES = None
DEBUG = None
LOG_FORMAT = None
VERSION_CHECK = None
FAIL_FAST = None
SEND_ANONYMOUS_USAGE_STATS = None
PRINTER_WIDTH = 80
# Global CLI defaults. These flags are set from three places:
# CLI args, environment variables, and user_config (profiles.yml).
# Environment variables use the pattern 'DBT_{flag name}', like DBT_PROFILES_DIR
flag_defaults = {
"USE_EXPERIMENTAL_PARSER": False,
"WARN_ERROR": False,
"WRITE_JSON": True,
"PARTIAL_PARSE": False,
"USE_COLORS": True,
"PROFILES_DIR": DEFAULT_PROFILES_DIR,
"DEBUG": False,
"LOG_FORMAT": None,
"VERSION_CHECK": True,
"FAIL_FAST": False,
"SEND_ANONYMOUS_USAGE_STATS": True,
"PRINTER_WIDTH": 80
}
def env_set_truthy(key: str) -> Optional[str]:
@@ -30,6 +59,12 @@ def env_set_truthy(key: str) -> Optional[str]:
return value
def env_set_bool(env_value):
if env_value in ('1', 't', 'true', 'y', 'yes'):
return True
return False
def env_set_path(key: str) -> Optional[Path]:
value = os.getenv(key)
if value is None:
@@ -50,56 +85,72 @@ def _get_context():
return multiprocessing.get_context('spawn')
# This is not a flag, it's a place to store the lock
MP_CONTEXT = _get_context()
def reset():
global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
STORE_FAILURES
STRICT_MODE = False
FULL_REFRESH = False
USE_CACHE = True
WARN_ERROR = False
TEST_NEW_PARSER = False
USE_EXPERIMENTAL_PARSER = False
WRITE_JSON = True
PARTIAL_PARSE = False
MP_CONTEXT = _get_context()
USE_COLORS = True
STORE_FAILURES = False
def set_from_args(args):
global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
STORE_FAILURES
USE_CACHE = getattr(args, 'use_cache', USE_CACHE)
def set_from_args(args, user_config):
global STRICT_MODE, FULL_REFRESH, WARN_ERROR, \
USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, USE_COLORS, \
STORE_FAILURES, PROFILES_DIR, DEBUG, LOG_FORMAT, GREEDY, \
VERSION_CHECK, FAIL_FAST, SEND_ANONYMOUS_USAGE_STATS, PRINTER_WIDTH
STRICT_MODE = False # backwards compatibility
# cli args without user_config or env var option
FULL_REFRESH = getattr(args, 'full_refresh', FULL_REFRESH)
STRICT_MODE = getattr(args, 'strict', STRICT_MODE)
WARN_ERROR = (
STRICT_MODE or
getattr(args, 'warn_error', STRICT_MODE or WARN_ERROR)
)
TEST_NEW_PARSER = getattr(args, 'test_new_parser', TEST_NEW_PARSER)
USE_EXPERIMENTAL_PARSER = getattr(args, 'use_experimental_parser', USE_EXPERIMENTAL_PARSER)
WRITE_JSON = getattr(args, 'write_json', WRITE_JSON)
PARTIAL_PARSE = getattr(args, 'partial_parse', None)
MP_CONTEXT = _get_context()
# The use_colors attribute will always have a value because it is assigned
# None by default from the add_mutually_exclusive_group function
use_colors_override = getattr(args, 'use_colors')
if use_colors_override is not None:
USE_COLORS = use_colors_override
STORE_FAILURES = getattr(args, 'store_failures', STORE_FAILURES)
GREEDY = getattr(args, 'greedy', GREEDY)
# global cli flags with env var and user_config alternatives
USE_EXPERIMENTAL_PARSER = get_flag_value('USE_EXPERIMENTAL_PARSER', args, user_config)
WARN_ERROR = get_flag_value('WARN_ERROR', args, user_config)
WRITE_JSON = get_flag_value('WRITE_JSON', args, user_config)
PARTIAL_PARSE = get_flag_value('PARTIAL_PARSE', args, user_config)
USE_COLORS = get_flag_value('USE_COLORS', args, user_config)
DEBUG = get_flag_value('DEBUG', args, user_config)
LOG_FORMAT = get_flag_value('LOG_FORMAT', args, user_config)
VERSION_CHECK = get_flag_value('VERSION_CHECK', args, user_config)
FAIL_FAST = get_flag_value('FAIL_FAST', args, user_config)
SEND_ANONYMOUS_USAGE_STATS = get_flag_value('SEND_ANONYMOUS_USAGE_STATS', args, user_config)
PRINTER_WIDTH = get_flag_value('PRINTER_WIDTH', args, user_config)
# initialize everything to the defaults on module load
reset()
def get_flag_value(flag, args, user_config):
lc_flag = flag.lower()
flag_value = getattr(args, lc_flag, None)
if flag_value is None:
# Environment variables use pattern 'DBT_{flag name}'
env_flag = f"DBT_{flag}"
env_value = os.getenv(env_flag)
if env_value is not None and env_value != '':
env_value = env_value.lower()
# non-Boolean values
if flag in ['LOG_FORMAT', 'PRINTER_WIDTH']:
flag_value = env_value
else:
flag_value = env_set_bool(env_value)
elif user_config is not None and getattr(user_config, lc_flag, None) is not None:
flag_value = getattr(user_config, lc_flag)
else:
flag_value = flag_defaults[flag]
if flag == 'PRINTER_WIDTH': # printer_width must be an int or it hangs
flag_value = int(flag_value)
return flag_value
def get_flag_dict():
return {
"use_experimental_parser": USE_EXPERIMENTAL_PARSER,
"warn_error": WARN_ERROR,
"write_json": WRITE_JSON,
"partial_parse": PARTIAL_PARSE,
"use_colors": USE_COLORS,
"profiles_dir": PROFILES_DIR,
"debug": DEBUG,
"log_format": LOG_FORMAT,
"version_check": VERSION_CHECK,
"fail_fast": FAIL_FAST,
"send_anonymous_usage_stats": SEND_ANONYMOUS_USAGE_STATS,
"printer_width": PRINTER_WIDTH,
}
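A standalone sketch of the precedence implemented by get_flag_value above: an explicit CLI argument wins, then a DBT_* environment variable, then user_config from profiles.yml, then the hard-coded default. The flag subset and values here are illustrative.

# Illustrative re-creation of the CLI > env var > user_config > default lookup.
import os
from types import SimpleNamespace

flag_defaults = {"FAIL_FAST": False, "PRINTER_WIDTH": 80}

def env_set_bool(value):
    return value in ("1", "t", "true", "y", "yes")

def get_flag_value(flag, args, user_config):
    lc_flag = flag.lower()
    value = getattr(args, lc_flag, None)
    if value is None:
        env_value = os.getenv(f"DBT_{flag}")
        if env_value:
            value = env_value if flag == "PRINTER_WIDTH" else env_set_bool(env_value.lower())
        elif user_config is not None and getattr(user_config, lc_flag, None) is not None:
            value = getattr(user_config, lc_flag)
        else:
            value = flag_defaults[flag]
    if flag == "PRINTER_WIDTH":
        value = int(value)
    return value

os.environ["DBT_FAIL_FAST"] = "true"
args = SimpleNamespace(fail_fast=None, printer_width=None)
user_config = SimpleNamespace(fail_fast=False, printer_width=120)
print(get_flag_value("FAIL_FAST", args, user_config))      # True: env var beats user_config
print(get_flag_value("PRINTER_WIDTH", args, user_config))  # 120: user_config beats the default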

View File

@@ -1,4 +1,5 @@
# special support for CLI argument parsing.
from dbt import flags
import itertools
from dbt.clients.yaml_helper import yaml, Loader, Dumper # noqa: F401
@@ -66,7 +67,7 @@ def parse_union_from_default(
def parse_difference(
include: Optional[List[str]], exclude: Optional[List[str]]
) -> SelectionDifference:
included = parse_union_from_default(include, DEFAULT_INCLUDES)
included = parse_union_from_default(include, DEFAULT_INCLUDES, greedy=bool(flags.GREEDY))
excluded = parse_union_from_default(exclude, DEFAULT_EXCLUDES, greedy=True)
return SelectionDifference(components=[included, excluded])
@@ -180,7 +181,7 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
union_def_parts = _get_list_dicts(definition, 'union')
include, exclude = _parse_include_exclude_subdefs(union_def_parts)
union = SelectionUnion(components=include)
union = SelectionUnion(components=include, greedy_warning=False)
if exclude is None:
union.raw = definition
@@ -188,7 +189,8 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
else:
return SelectionDifference(
components=[union, exclude],
raw=definition
raw=definition,
greedy_warning=False
)
@@ -197,7 +199,7 @@ def parse_intersection_definition(
) -> SelectionSpec:
intersection_def_parts = _get_list_dicts(definition, 'intersection')
include, exclude = _parse_include_exclude_subdefs(intersection_def_parts)
intersection = SelectionIntersection(components=include)
intersection = SelectionIntersection(components=include, greedy_warning=False)
if exclude is None:
intersection.raw = definition
@@ -205,7 +207,8 @@ def parse_intersection_definition(
else:
return SelectionDifference(
components=[intersection, exclude],
raw=definition
raw=definition,
greedy_warning=False
)
@@ -239,7 +242,7 @@ def parse_dict_definition(definition: Dict[str, Any]) -> SelectionSpec:
if diff_arg is None:
return base
else:
return SelectionDifference(components=[base, diff_arg])
return SelectionDifference(components=[base, diff_arg], greedy_warning=False)
def parse_from_definition(
@@ -271,10 +274,12 @@ def parse_from_definition(
def parse_from_selectors_definition(
source: SelectorFile
) -> Dict[str, SelectionSpec]:
result: Dict[str, SelectionSpec] = {}
) -> Dict[str, Dict[str, Union[SelectionSpec, bool]]]:
result: Dict[str, Dict[str, Union[SelectionSpec, bool]]] = {}
selector: SelectorDefinition
for selector in source.selectors:
result[selector.name] = parse_from_definition(selector.definition,
rootlevel=True)
result[selector.name] = {
"default": selector.default,
"definition": parse_from_definition(selector.definition, rootlevel=True)
}
return result

View File

@@ -1,4 +1,3 @@
from typing import Set, List, Optional, Tuple
from .graph import Graph, UniqueId
@@ -30,6 +29,24 @@ def alert_non_existence(raw_spec, nodes):
)
def alert_unused_nodes(raw_spec, node_names):
summary_nodes_str = ("\n - ").join(node_names[:3])
debug_nodes_str = ("\n - ").join(node_names)
and_more_str = f"\n - and {len(node_names) - 3} more" if len(node_names) > 4 else ""
summary_msg = (
f"\nSome tests were excluded because at least one parent is not selected. "
f"Use the --greedy flag to include them."
f"\n - {summary_nodes_str}{and_more_str}"
)
logger.info(summary_msg)
if len(node_names) > 4:
debug_msg = (
f"Full list of tests that were excluded:"
f"\n - {debug_nodes_str}"
)
logger.debug(debug_msg)
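A small runnable sketch of the message-building logic above: the info-level summary lists at most three excluded tests and appends an "and N more" suffix when more than four were excluded, while the full list is reserved for debug logging.

# Illustrative: mirrors how the summary string is assembled.
def summarize_unused(node_names):
    summary = "\n - ".join(node_names[:3])
    and_more = f"\n - and {len(node_names) - 3} more" if len(node_names) > 4 else ""
    return (
        "\nSome tests were excluded because at least one parent is not selected. "
        "Use the --greedy flag to include them."
        f"\n - {summary}{and_more}"
    )

print(summarize_unused(["test_a", "test_b", "test_c", "test_d", "test_e"]))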
def can_select_indirectly(node):
"""If a node is not selected itself, but its parent(s) are, it may qualify
for indirect selection.
@@ -151,16 +168,16 @@ class NodeSelector(MethodManager):
return direct_nodes, indirect_nodes
def select_nodes(self, spec: SelectionSpec) -> Set[UniqueId]:
def select_nodes(self, spec: SelectionSpec) -> Tuple[Set[UniqueId], Set[UniqueId]]:
"""Select the nodes in the graph according to the spec.
This is the main point of entry for turning a spec into a set of nodes:
- Recurse through spec, select by criteria, combine by set operation
- Return final (unfiltered) selection set
"""
direct_nodes, indirect_nodes = self.select_nodes_recursively(spec)
return direct_nodes
indirect_only = indirect_nodes.difference(direct_nodes)
return direct_nodes, indirect_only
def _is_graph_member(self, unique_id: UniqueId) -> bool:
if unique_id in self.manifest.sources:
@@ -213,6 +230,8 @@ class NodeSelector(MethodManager):
# - If ANY parent is missing, return it separately. We'll keep it around
# for later and see if its other parents show up.
# We use this for INCLUSION.
# Users can also opt in to inclusive GREEDY mode by passing --greedy flag,
# or by specifying `greedy: true` in a yaml selector
direct_nodes = set(selected)
indirect_nodes = set()
@@ -251,15 +270,24 @@ class NodeSelector(MethodManager):
- node selection. Based on the include/exclude sets, the set
of matched unique IDs is returned
- expand the graph at each leaf node, before combination
- selectors might override this. for example, this is where
tests are added
- includes direct + indirect selection (for tests)
- filtering:
- selectors can filter the nodes after all of them have been
selected
"""
selected_nodes = self.select_nodes(spec)
selected_nodes, indirect_only = self.select_nodes(spec)
filtered_nodes = self.filter_selection(selected_nodes)
if indirect_only:
filtered_unused_nodes = self.filter_selection(indirect_only)
if filtered_unused_nodes and spec.greedy_warning:
# log anything that didn't make the cut
unused_node_names = []
for unique_id in filtered_unused_nodes:
name = self.manifest.nodes[unique_id].name
unused_node_names.append(name)
alert_unused_nodes(spec, unused_node_names)
return filtered_nodes
def get_graph_queue(self, spec: SelectionSpec) -> GraphQueue:

View File

@@ -22,13 +22,11 @@ from dbt.contracts.graph.parsed import (
ParsedSourceDefinition,
)
from dbt.contracts.state import PreviousState
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.exceptions import (
InternalException,
RuntimeException,
)
from dbt.node_types import NodeType
from dbt.ui import warning_tag
SELECTOR_GLOB = '*'
@@ -381,7 +379,7 @@ class TestTypeSelectorMethod(SelectorMethod):
class StateSelectorMethod(SelectorMethod):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.macros_were_modified: Optional[List[str]] = None
self.modified_macros: Optional[List[str]] = None
def _macros_modified(self) -> List[str]:
# we checked in the caller!
@@ -394,44 +392,74 @@ class StateSelectorMethod(SelectorMethod):
modified = []
for uid, macro in new_macros.items():
name = f'{macro.package_name}.{macro.name}'
if uid in old_macros:
old_macro = old_macros[uid]
if macro.macro_sql != old_macro.macro_sql:
modified.append(f'{name} changed')
modified.append(uid)
else:
modified.append(f'{name} added')
modified.append(uid)
for uid, macro in old_macros.items():
if uid not in new_macros:
modified.append(f'{macro.package_name}.{macro.name} removed')
modified.append(uid)
return modified[:3]
return modified
def check_modified(
self,
old: Optional[SelectorTarget],
new: SelectorTarget,
def recursively_check_macros_modified(self, node):
# check if there are any changes in macros the first time
if self.modified_macros is None:
self.modified_macros = self._macros_modified()
# loop through all macros that this node depends on
for macro_uid in node.depends_on.macros:
# is this macro one of the modified macros?
if macro_uid in self.modified_macros:
return True
# if not, and this macro depends on other macros, keep looping
macro = self.manifest.macros[macro_uid]
if len(macro.depends_on.macros) > 0:
return self.recursively_check_macros_modified(macro)
else:
return False
return False
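A conceptual sketch of recursively_check_macros_modified: walk the node's macro dependencies, and their dependencies in turn, and report whether any of them is in the modified set. The dependency graph and uids are illustrative.

# Illustrative recursion over macro dependencies.
def depends_on_modified_macro(macro_uids, macro_graph, modified):
    for uid in macro_uids:
        if uid in modified:
            return True
        if macro_graph.get(uid):
            return depends_on_modified_macro(macro_graph[uid], macro_graph, modified)
    return False

macro_graph = {"macro.pkg.outer": ["macro.pkg.inner"], "macro.pkg.inner": []}
print(depends_on_modified_macro(["macro.pkg.outer"], macro_graph, {"macro.pkg.inner"}))  # True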
def check_modified(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
different_contents = not new.same_contents(old) # type: ignore
upstream_macro_change = self.recursively_check_macros_modified(new)
return different_contents or upstream_macro_change
def check_modified_body(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
if hasattr(new, "same_body"):
return not new.same_body(old) # type: ignore
else:
return False
def check_modified_configs(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
if hasattr(new, "same_config"):
return not new.same_config(old) # type: ignore
else:
return False
def check_modified_persisted_descriptions(
self, old: Optional[SelectorTarget], new: SelectorTarget
) -> bool:
# check if there are any changes in macros, if so, log a warning the
# first time
if self.macros_were_modified is None:
self.macros_were_modified = self._macros_modified()
if self.macros_were_modified:
log_str = ', '.join(self.macros_were_modified)
logger.warning(warning_tag(
f'During a state comparison, dbt detected a change in '
f'macros. This will not be marked as a modification. Some '
f'macros: {log_str}'
))
if hasattr(new, "same_persisted_description"):
return not new.same_persisted_description(old) # type: ignore
else:
return False
return not new.same_contents(old) # type: ignore
def check_new(
self,
old: Optional[SelectorTarget],
new: SelectorTarget,
def check_modified_relation(
self, old: Optional[SelectorTarget], new: SelectorTarget
) -> bool:
if hasattr(new, "same_database_representation"):
return not new.same_database_representation(old) # type: ignore
else:
return False
def check_modified_macros(self, _, new: SelectorTarget) -> bool:
return self.recursively_check_macros_modified(new)
def check_new(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
return old is None
def search(
@@ -443,8 +471,15 @@ class StateSelectorMethod(SelectorMethod):
)
state_checks = {
# it's new if there is no old version
'new': lambda old, _: old is None,
# use methods defined above to compare properties of old + new
'modified': self.check_modified,
'new': self.check_new,
'modified.body': self.check_modified_body,
'modified.configs': self.check_modified_configs,
'modified.persisted_descriptions': self.check_modified_persisted_descriptions,
'modified.relation': self.check_modified_relation,
'modified.macros': self.check_modified_macros,
}
if selector in state_checks:
checker = state_checks[selector]
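A hedged sketch of how the state_checks mapping above dispatches a state: sub-selector (for example state:modified.body) to a comparison callable; the old/new dicts stand in for real manifest nodes.

# Illustrative dispatch table for state comparisons.
def check_new(old, new):
    return old is None

def check_modified_body(old, new):
    return old is None or old.get("body") != new.get("body")

state_checks = {
    "new": check_new,
    "modified.body": check_modified_body,
}

old_node, new_node = {"body": "select 1"}, {"body": "select 2"}
print(state_checks["modified.body"](old_node, new_node))  # True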

View File

@@ -67,6 +67,7 @@ class SelectionCriteria:
children: bool
children_depth: Optional[int]
greedy: bool = False
greedy_warning: bool = False # do not raise warning for yaml selectors
def __post_init__(self):
if self.children and self.childrens_parents:
@@ -124,11 +125,11 @@ class SelectionCriteria:
parents_depth=parents_depth,
children=bool(dct.get('children')),
children_depth=children_depth,
greedy=greedy
greedy=(greedy or bool(dct.get('greedy'))),
)
@classmethod
def dict_from_single_spec(cls, raw: str, greedy: bool = False):
def dict_from_single_spec(cls, raw: str):
result = RAW_SELECTOR_PATTERN.match(raw)
if result is None:
return {'error': 'Invalid selector spec'}
@@ -145,6 +146,8 @@ class SelectionCriteria:
dct['parents'] = bool(dct.get('parents'))
if 'children' in dct:
dct['children'] = bool(dct.get('children'))
if 'greedy' in dct:
dct['greedy'] = bool(dct.get('greedy'))
return dct
@classmethod
@@ -162,10 +165,12 @@ class BaseSelectionGroup(Iterable[SelectionSpec], metaclass=ABCMeta):
self,
components: Iterable[SelectionSpec],
expect_exists: bool = False,
greedy_warning: bool = True,
raw: Any = None,
):
self.components: List[SelectionSpec] = list(components)
self.expect_exists = expect_exists
self.greedy_warning = greedy_warning
self.raw = raw
def __iter__(self) -> Iterator[SelectionSpec]:

View File

@@ -1,5 +1,5 @@
{% macro get_columns_in_query(select_sql) -%}
{{ return(adapter.dispatch('get_columns_in_query')(select_sql)) }}
{{ return(adapter.dispatch('get_columns_in_query', 'dbt')(select_sql)) }}
{% endmacro %}
{% macro default__get_columns_in_query(select_sql) %}
@@ -15,7 +15,7 @@
{% endmacro %}
{% macro create_schema(relation) -%}
{{ adapter.dispatch('create_schema')(relation) }}
{{ adapter.dispatch('create_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__create_schema(relation) -%}
@@ -25,7 +25,7 @@
{% endmacro %}
{% macro drop_schema(relation) -%}
{{ adapter.dispatch('drop_schema')(relation) }}
{{ adapter.dispatch('drop_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__drop_schema(relation) -%}
@@ -35,7 +35,7 @@
{% endmacro %}
{% macro create_table_as(temporary, relation, sql) -%}
{{ adapter.dispatch('create_table_as')(temporary, relation, sql) }}
{{ adapter.dispatch('create_table_as', 'dbt')(temporary, relation, sql) }}
{%- endmacro %}
{% macro default__create_table_as(temporary, relation, sql) -%}
@@ -52,7 +52,7 @@
{% endmacro %}
{% macro get_create_index_sql(relation, index_dict) -%}
{{ return(adapter.dispatch('get_create_index_sql')(relation, index_dict)) }}
{{ return(adapter.dispatch('get_create_index_sql', 'dbt')(relation, index_dict)) }}
{% endmacro %}
{% macro default__get_create_index_sql(relation, index_dict) -%}
@@ -60,7 +60,7 @@
{% endmacro %}
{% macro create_indexes(relation) -%}
{{ adapter.dispatch('create_indexes')(relation) }}
{{ adapter.dispatch('create_indexes', 'dbt')(relation) }}
{%- endmacro %}
{% macro default__create_indexes(relation) -%}
@@ -75,7 +75,7 @@
{% endmacro %}
{% macro create_view_as(relation, sql) -%}
{{ adapter.dispatch('create_view_as')(relation, sql) }}
{{ adapter.dispatch('create_view_as', 'dbt')(relation, sql) }}
{%- endmacro %}
{% macro default__create_view_as(relation, sql) -%}
@@ -89,7 +89,7 @@
{% macro get_catalog(information_schema, schemas) -%}
{{ return(adapter.dispatch('get_catalog')(information_schema, schemas)) }}
{{ return(adapter.dispatch('get_catalog', 'dbt')(information_schema, schemas)) }}
{%- endmacro %}
{% macro default__get_catalog(information_schema, schemas) -%}
@@ -104,7 +104,7 @@
{% macro get_columns_in_relation(relation) -%}
{{ return(adapter.dispatch('get_columns_in_relation')(relation)) }}
{{ return(adapter.dispatch('get_columns_in_relation', 'dbt')(relation)) }}
{% endmacro %}
{% macro sql_convert_columns_in_relation(table) -%}
@@ -121,13 +121,13 @@
{% endmacro %}
{% macro alter_column_type(relation, column_name, new_column_type) -%}
{{ return(adapter.dispatch('alter_column_type')(relation, column_name, new_column_type)) }}
{{ return(adapter.dispatch('alter_column_type', 'dbt')(relation, column_name, new_column_type)) }}
{% endmacro %}
{% macro alter_column_comment(relation, column_dict) -%}
{{ return(adapter.dispatch('alter_column_comment')(relation, column_dict)) }}
{{ return(adapter.dispatch('alter_column_comment', 'dbt')(relation, column_dict)) }}
{% endmacro %}
{% macro default__alter_column_comment(relation, column_dict) -%}
@@ -136,7 +136,7 @@
{% endmacro %}
{% macro alter_relation_comment(relation, relation_comment) -%}
{{ return(adapter.dispatch('alter_relation_comment')(relation, relation_comment)) }}
{{ return(adapter.dispatch('alter_relation_comment', 'dbt')(relation, relation_comment)) }}
{% endmacro %}
{% macro default__alter_relation_comment(relation, relation_comment) -%}
@@ -145,7 +145,7 @@
{% endmacro %}
{% macro persist_docs(relation, model, for_relation=true, for_columns=true) -%}
{{ return(adapter.dispatch('persist_docs')(relation, model, for_relation, for_columns)) }}
{{ return(adapter.dispatch('persist_docs', 'dbt')(relation, model, for_relation, for_columns)) }}
{% endmacro %}
{% macro default__persist_docs(relation, model, for_relation, for_columns) -%}
@@ -180,7 +180,7 @@
{% macro drop_relation(relation) -%}
{{ return(adapter.dispatch('drop_relation')(relation)) }}
{{ return(adapter.dispatch('drop_relation', 'dbt')(relation)) }}
{% endmacro %}
@@ -191,7 +191,7 @@
{% endmacro %}
{% macro truncate_relation(relation) -%}
{{ return(adapter.dispatch('truncate_relation')(relation)) }}
{{ return(adapter.dispatch('truncate_relation', 'dbt')(relation)) }}
{% endmacro %}
@@ -202,7 +202,7 @@
{% endmacro %}
{% macro rename_relation(from_relation, to_relation) -%}
{{ return(adapter.dispatch('rename_relation')(from_relation, to_relation)) }}
{{ return(adapter.dispatch('rename_relation', 'dbt')(from_relation, to_relation)) }}
{% endmacro %}
{% macro default__rename_relation(from_relation, to_relation) -%}
@@ -214,7 +214,7 @@
{% macro information_schema_name(database) %}
{{ return(adapter.dispatch('information_schema_name')(database)) }}
{{ return(adapter.dispatch('information_schema_name', 'dbt')(database)) }}
{% endmacro %}
{% macro default__information_schema_name(database) -%}
@@ -227,7 +227,7 @@
{% macro list_schemas(database) -%}
{{ return(adapter.dispatch('list_schemas')(database)) }}
{{ return(adapter.dispatch('list_schemas', 'dbt')(database)) }}
{% endmacro %}
{% macro default__list_schemas(database) -%}
@@ -241,7 +241,7 @@
{% macro check_schema_exists(information_schema, schema) -%}
{{ return(adapter.dispatch('check_schema_exists')(information_schema, schema)) }}
{{ return(adapter.dispatch('check_schema_exists', 'dbt')(information_schema, schema)) }}
{% endmacro %}
{% macro default__check_schema_exists(information_schema, schema) -%}
@@ -256,7 +256,7 @@
{% macro list_relations_without_caching(schema_relation) %}
{{ return(adapter.dispatch('list_relations_without_caching')(schema_relation)) }}
{{ return(adapter.dispatch('list_relations_without_caching', 'dbt')(schema_relation)) }}
{% endmacro %}
@@ -267,7 +267,7 @@
{% macro current_timestamp() -%}
{{ adapter.dispatch('current_timestamp')() }}
{{ adapter.dispatch('current_timestamp', 'dbt')() }}
{%- endmacro %}
@@ -278,7 +278,7 @@
{% macro collect_freshness(source, loaded_at_field, filter) %}
{{ return(adapter.dispatch('collect_freshness')(source, loaded_at_field, filter))}}
{{ return(adapter.dispatch('collect_freshness', 'dbt')(source, loaded_at_field, filter))}}
{% endmacro %}
@@ -296,7 +296,7 @@
{% endmacro %}
{% macro make_temp_relation(base_relation, suffix='__dbt_tmp') %}
{{ return(adapter.dispatch('make_temp_relation')(base_relation, suffix))}}
{{ return(adapter.dispatch('make_temp_relation', 'dbt')(base_relation, suffix))}}
{% endmacro %}
{% macro default__make_temp_relation(base_relation, suffix) %}
@@ -311,3 +311,34 @@
{{ config.set('sql_header', caller()) }}
{%- endmacro %}
{% macro alter_relation_add_remove_columns(relation, add_columns = none, remove_columns = none) -%}
{{ return(adapter.dispatch('alter_relation_add_remove_columns', 'dbt')(relation, add_columns, remove_columns)) }}
{% endmacro %}
{% macro default__alter_relation_add_remove_columns(relation, add_columns, remove_columns) %}
{% if add_columns is none %}
{% set add_columns = [] %}
{% endif %}
{% if remove_columns is none %}
{% set remove_columns = [] %}
{% endif %}
{% set sql -%}
alter {{ relation.type }} {{ relation }}
{% for column in add_columns %}
add column {{ column.name }} {{ column.data_type }}{{ ',' if not loop.last }}
{% endfor %}{{ ',' if remove_columns | length > 0 }}
{% for column in remove_columns %}
drop column {{ column.name }}{{ ',' if not loop.last }}
{% endfor %}
{%- endset -%}
{% do run_query(sql) %}
{% endmacro %}

View File

@@ -13,6 +13,10 @@
#}
{% macro generate_alias_name(custom_alias_name=none, node=none) -%}
{% do return(adapter.dispatch('generate_alias_name', 'dbt')(custom_alias_name, node)) %}
{%- endmacro %}
{% macro default__generate_alias_name(custom_alias_name=none, node=none) -%}
{%- if custom_alias_name is none -%}

View File

@@ -14,7 +14,7 @@
#}
{% macro generate_database_name(custom_database_name=none, node=none) -%}
{% do return(adapter.dispatch('generate_database_name')(custom_database_name, node)) %}
{% do return(adapter.dispatch('generate_database_name', 'dbt')(custom_database_name, node)) %}
{%- endmacro %}
{% macro default__generate_database_name(custom_database_name=none, node=none) -%}

View File

@@ -15,6 +15,10 @@
#}
{% macro generate_schema_name(custom_schema_name, node) -%}
{{ return(adapter.dispatch('generate_schema_name', 'dbt')(custom_schema_name, node)) }}
{% endmacro %}
{% macro default__generate_schema_name(custom_schema_name, node) -%}
{%- set default_schema = target.schema -%}
{%- if custom_schema_name is none -%}

View File

@@ -0,0 +1,15 @@
{% macro get_where_subquery(relation) -%}
{% do return(adapter.dispatch('get_where_subquery')(relation)) %}
{%- endmacro %}
{% macro default__get_where_subquery(relation) -%}
{% set where = config.get('where', '') %}
{% if where %}
{%- set filtered -%}
(select * from {{ relation }} where {{ where }}) dbt_subquery
{%- endset -%}
{% do return(filtered) %}
{%- else -%}
{% do return(relation) %}
{%- endif -%}
{%- endmacro %}
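
The new get_where_subquery macro is what powers the test-level `where` config: when a `where` value is set, the tested relation is swapped for a filtered subquery. A minimal Python sketch of the same behavior, for clarity only:

# Mirrors the macro above: wrap the relation in a filtered subquery aliased
# as dbt_subquery when a `where` config is present, otherwise pass it through.
from typing import Optional

def get_where_subquery(relation: str, where: Optional[str]) -> str:
    if where:
        return f"(select * from {relation} where {where}) dbt_subquery"
    return relation

assert get_where_subquery("analytics.orders", None) == "analytics.orders"
assert get_where_subquery("analytics.orders", "status != 'void'") == \
    "(select * from analytics.orders where status != 'void') dbt_subquery"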

View File

@@ -1,17 +1,17 @@
{% macro get_merge_sql(target, source, unique_key, dest_columns, predicates=none) -%}
{{ adapter.dispatch('get_merge_sql')(target, source, unique_key, dest_columns, predicates) }}
{{ adapter.dispatch('get_merge_sql', 'dbt')(target, source, unique_key, dest_columns, predicates) }}
{%- endmacro %}
{% macro get_delete_insert_merge_sql(target, source, unique_key, dest_columns) -%}
{{ adapter.dispatch('get_delete_insert_merge_sql')(target, source, unique_key, dest_columns) }}
{{ adapter.dispatch('get_delete_insert_merge_sql', 'dbt')(target, source, unique_key, dest_columns) }}
{%- endmacro %}
{% macro get_insert_overwrite_merge_sql(target, source, dest_columns, predicates, include_sql_header=false) -%}
{{ adapter.dispatch('get_insert_overwrite_merge_sql')(target, source, dest_columns, predicates, include_sql_header) }}
{{ adapter.dispatch('get_insert_overwrite_merge_sql', 'dbt')(target, source, dest_columns, predicates, include_sql_header) }}
{%- endmacro %}
@@ -79,7 +79,7 @@
(
select {{ dest_cols_csv }}
from {{ source }}
);
)
{%- endmacro %}

View File

@@ -1,5 +1,6 @@
{% macro incremental_upsert(tmp_relation, target_relation, unique_key=none, statement_name="main") %}
{%- set dest_columns = adapter.get_columns_in_relation(target_relation) -%}
{%- set dest_cols_csv = dest_columns | map(attribute='quoted') | join(', ') -%}

View File

@@ -5,6 +5,10 @@
{% set target_relation = this.incorporate(type='table') %}
{% set existing_relation = load_relation(this) %}
{% set tmp_relation = make_temp_relation(target_relation) %}
{%- set full_refresh_mode = (should_full_refresh()) -%}
{% set on_schema_change = incremental_validate_on_schema_change(config.get('on_schema_change'), default='ignore') %}
{% set tmp_identifier = model['name'] + '__dbt_tmp' %}
{% set backup_identifier = model['name'] + "__dbt_backup" %}
@@ -28,9 +32,16 @@
{{ run_hooks(pre_hooks, inside_transaction=True) }}
{% set to_drop = [] %}
{# -- first check whether we want to full refresh for source view or config reasons #}
{% set trigger_full_refresh = (full_refresh_mode or existing_relation.is_view) %}
{% if existing_relation is none %}
{% set build_sql = create_table_as(False, target_relation, sql) %}
{% elif existing_relation.is_view or should_full_refresh() %}
{% elif trigger_full_refresh %}
{#-- Make sure the backup doesn't exist so we don't encounter issues with the rename below #}
{% set tmp_identifier = model['name'] + '__dbt_tmp' %}
{% set backup_identifier = model['name'] + '__dbt_backup' %}
{% set intermediate_relation = existing_relation.incorporate(path={"identifier": tmp_identifier}) %}
{% set backup_relation = existing_relation.incorporate(path={"identifier": backup_identifier}) %}
@@ -38,12 +49,13 @@
{% set need_swap = true %}
{% do to_drop.append(backup_relation) %}
{% else %}
{% set tmp_relation = make_temp_relation(target_relation) %}
{% do run_query(create_table_as(True, tmp_relation, sql)) %}
{% do adapter.expand_target_column_types(
{% do run_query(create_table_as(True, tmp_relation, sql)) %}
{% do adapter.expand_target_column_types(
from_relation=tmp_relation,
to_relation=target_relation) %}
{% set build_sql = incremental_upsert(tmp_relation, target_relation, unique_key=unique_key) %}
{% do process_schema_changes(on_schema_change, tmp_relation, existing_relation) %}
{% set build_sql = incremental_upsert(tmp_relation, target_relation, unique_key=unique_key) %}
{% endif %}
{% call statement("main") %}
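
The incremental materialization now computes a single trigger_full_refresh flag (full-refresh mode or an existing view) and, on a normal incremental run, calls process_schema_changes before building the upsert. A simplified sketch of that branching, with illustrative names rather than dbt internals:

# Sketch of the decision tree above; the BuildPlan strings are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Relation:
    name: str
    is_view: bool = False

def plan_incremental_build(existing: Optional[Relation],
                           full_refresh: bool,
                           on_schema_change: str = "ignore") -> str:
    if existing is None:
        return "create table as select ..."          # first run: plain build
    if full_refresh or existing.is_view:
        return "rebuild via tmp + backup + rename"   # same path as a full refresh
    steps = ["create temp table from model sql",
             "expand target column types"]
    if on_schema_change != "ignore":
        steps.append(f"process schema changes ({on_schema_change})")
    steps.append("upsert temp table into target")
    return "; ".join(steps)

print(plan_incremental_build(Relation("orders"), full_refresh=False,
                             on_schema_change="append_new_columns"))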

View File

@@ -0,0 +1,164 @@
{% macro incremental_validate_on_schema_change(on_schema_change, default='ignore') %}
{% if on_schema_change not in ['sync_all_columns', 'append_new_columns', 'fail', 'ignore'] %}
{% set log_message = 'Invalid value for on_schema_change (%s) specified. Setting default value of %s.' % (on_schema_change, default) %}
{% do log(log_message) %}
{{ return(default) }}
{% else %}
{{ return(on_schema_change) }}
{% endif %}
{% endmacro %}
{% macro diff_columns(source_columns, target_columns) %}
{% set result = [] %}
{% set source_names = source_columns | map(attribute = 'column') | list %}
{% set target_names = target_columns | map(attribute = 'column') | list %}
{# --check whether the name attribute exists in the target - this does not perform a data type check #}
{% for sc in source_columns %}
{% if sc.name not in target_names %}
{{ result.append(sc) }}
{% endif %}
{% endfor %}
{{ return(result) }}
{% endmacro %}
{% macro diff_column_data_types(source_columns, target_columns) %}
{% set result = [] %}
{% for sc in source_columns %}
{% set tc = target_columns | selectattr("name", "equalto", sc.name) | list | first %}
{% if tc %}
{% if sc.data_type != tc.data_type %}
{{ result.append( { 'column_name': tc.name, 'new_type': sc.data_type } ) }}
{% endif %}
{% endif %}
{% endfor %}
{{ return(result) }}
{% endmacro %}
{% macro check_for_schema_changes(source_relation, target_relation) %}
{% set schema_changed = False %}
{%- set source_columns = adapter.get_columns_in_relation(source_relation) -%}
{%- set target_columns = adapter.get_columns_in_relation(target_relation) -%}
{%- set source_not_in_target = diff_columns(source_columns, target_columns) -%}
{%- set target_not_in_source = diff_columns(target_columns, source_columns) -%}
{% set new_target_types = diff_column_data_types(source_columns, target_columns) %}
{% if source_not_in_target != [] %}
{% set schema_changed = True %}
{% elif target_not_in_source != [] or new_target_types != [] %}
{% set schema_changed = True %}
{% elif new_target_types != [] %}
{% set schema_changed = True %}
{% endif %}
{% set changes_dict = {
'schema_changed': schema_changed,
'source_not_in_target': source_not_in_target,
'target_not_in_source': target_not_in_source,
'new_target_types': new_target_types
} %}
{% set msg %}
In {{ target_relation }}:
Schema changed: {{ schema_changed }}
Source columns not in target: {{ source_not_in_target }}
Target columns not in source: {{ target_not_in_source }}
New column types: {{ new_target_types }}
{% endset %}
{% do log(msg) %}
{{ return(changes_dict) }}
{% endmacro %}
{% macro sync_column_schemas(on_schema_change, target_relation, schema_changes_dict) %}
{%- set add_to_target_arr = schema_changes_dict['source_not_in_target'] -%}
{%- if on_schema_change == 'append_new_columns'-%}
{%- if add_to_target_arr | length > 0 -%}
{%- do alter_relation_add_remove_columns(target_relation, add_to_target_arr, none) -%}
{%- endif -%}
{% elif on_schema_change == 'sync_all_columns' %}
{%- set remove_from_target_arr = schema_changes_dict['target_not_in_source'] -%}
{%- set new_target_types = schema_changes_dict['new_target_types'] -%}
{% if add_to_target_arr | length > 0 or remove_from_target_arr | length > 0 %}
{%- do alter_relation_add_remove_columns(target_relation, add_to_target_arr, remove_from_target_arr) -%}
{% endif %}
{% if new_target_types != [] %}
{% for ntt in new_target_types %}
{% set column_name = ntt['column_name'] %}
{% set new_type = ntt['new_type'] %}
{% do alter_column_type(target_relation, column_name, new_type) %}
{% endfor %}
{% endif %}
{% endif %}
{% set schema_change_message %}
In {{ target_relation }}:
Schema change approach: {{ on_schema_change }}
Columns added: {{ add_to_target_arr }}
Columns removed: {{ remove_from_target_arr }}
Data types changed: {{ new_target_types }}
{% endset %}
{% do log(schema_change_message) %}
{% endmacro %}
{% macro process_schema_changes(on_schema_change, source_relation, target_relation) %}
{% if on_schema_change != 'ignore' %}
{% set schema_changes_dict = check_for_schema_changes(source_relation, target_relation) %}
{% if schema_changes_dict['schema_changed'] %}
{% if on_schema_change == 'fail' %}
{% set fail_msg %}
The source and target schemas on this incremental model are out of sync!
They can be reconciled in several ways:
- set the `on_schema_change` config to either append_new_columns or sync_all_columns, depending on your situation.
- Re-run the incremental model with `full_refresh: True` to update the target schema.
- update the schema manually and re-run the process.
{% endset %}
{% do exceptions.raise_compiler_error(fail_msg) %}
{# -- unless we ignore, run the sync operation per the config #}
{% else %}
{% do sync_column_schemas(on_schema_change, target_relation, schema_changes_dict) %}
{% endif %}
{% endif %}
{% endif %}
{% endmacro %}
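
For readers less fluent in Jinja, here is a rough Python equivalent of the column-diffing above; in dbt this runs against adapter Column objects, so the classes below are stand-ins:

# Stand-in rendering of diff_columns / diff_column_data_types for clarity only.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Column:
    name: str
    data_type: str

def diff_columns(source: List[Column], target: List[Column]) -> List[Column]:
    # columns present in source but missing (by name) from target
    target_names = {c.name for c in target}
    return [c for c in source if c.name not in target_names]

def diff_column_data_types(source: List[Column], target: List[Column]) -> List[dict]:
    # columns present in both whose data types differ
    by_name = {c.name: c for c in target}
    return [{"column_name": c.name, "new_type": c.data_type}
            for c in source
            if c.name in by_name and by_name[c.name].data_type != c.data_type]

source = [Column("id", "integer"), Column("amount", "numeric"), Column("note", "text")]
target = [Column("id", "integer"), Column("amount", "integer")]

print(diff_columns(source, target))            # note -> column to add
print(diff_columns(target, source))            # []   -> nothing to drop
print(diff_column_data_types(source, target))  # amount -> retype to numeric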

View File

@@ -1,14 +1,6 @@
{% macro create_csv_table(model, agate_table) -%}
{{ adapter.dispatch('create_csv_table')(model, agate_table) }}
{%- endmacro %}
{% macro reset_csv_table(model, full_refresh, old_relation, agate_table) -%}
{{ adapter.dispatch('reset_csv_table')(model, full_refresh, old_relation, agate_table) }}
{%- endmacro %}
{% macro load_csv_rows(model, agate_table) -%}
{{ adapter.dispatch('load_csv_rows')(model, agate_table) }}
{{ adapter.dispatch('create_csv_table', 'dbt')(model, agate_table) }}
{%- endmacro %}
{% macro default__create_csv_table(model, agate_table) %}
@@ -33,6 +25,9 @@
{{ return(sql) }}
{% endmacro %}
{% macro reset_csv_table(model, full_refresh, old_relation, agate_table) -%}
{{ adapter.dispatch('reset_csv_table', 'dbt')(model, full_refresh, old_relation, agate_table) }}
{%- endmacro %}
{% macro default__reset_csv_table(model, full_refresh, old_relation, agate_table) %}
{% set sql = "" %}
@@ -47,6 +42,21 @@
{{ return(sql) }}
{% endmacro %}
{% macro get_binding_char() -%}
{{ adapter.dispatch('get_binding_char', 'dbt')() }}
{%- endmacro %}
{% macro default__get_binding_char() %}
{{ return('%s') }}
{% endmacro %}
{% macro get_batch_size() -%}
{{ return(adapter.dispatch('get_batch_size', 'dbt')()) }}
{%- endmacro %}
{% macro default__get_batch_size() %}
{{ return(10000) }}
{% endmacro %}
{% macro get_seed_column_quoted_csv(model, column_names) %}
{%- set quote_seed_column = model['config'].get('quote_columns', None) -%}
@@ -59,47 +69,47 @@
{{ return(dest_cols_csv) }}
{% endmacro %}
{% macro basic_load_csv_rows(model, batch_size, agate_table) %}
{% set cols_sql = get_seed_column_quoted_csv(model, agate_table.column_names) %}
{% set bindings = [] %}
{% set statements = [] %}
{% for chunk in agate_table.rows | batch(batch_size) %}
{% set bindings = [] %}
{% for row in chunk %}
{% do bindings.extend(row) %}
{% endfor %}
{% set sql %}
insert into {{ this.render() }} ({{ cols_sql }}) values
{% for row in chunk -%}
({%- for column in agate_table.column_names -%}
%s
{%- if not loop.last%},{%- endif %}
{%- endfor -%})
{%- if not loop.last%},{%- endif %}
{%- endfor %}
{% endset %}
{% do adapter.add_query(sql, bindings=bindings, abridge_sql_log=True) %}
{% if loop.index0 == 0 %}
{% do statements.append(sql) %}
{% endif %}
{% endfor %}
{# Return SQL so we can render it out into the compiled files #}
{{ return(statements[0]) }}
{% endmacro %}
{% macro load_csv_rows(model, agate_table) -%}
{{ adapter.dispatch('load_csv_rows', 'dbt')(model, agate_table) }}
{%- endmacro %}
{% macro default__load_csv_rows(model, agate_table) %}
{{ return(basic_load_csv_rows(model, 10000, agate_table) )}}
{% endmacro %}
{% set batch_size = get_batch_size() %}
{% set cols_sql = get_seed_column_quoted_csv(model, agate_table.column_names) %}
{% set bindings = [] %}
{% set statements = [] %}
{% for chunk in agate_table.rows | batch(batch_size) %}
{% set bindings = [] %}
{% for row in chunk %}
{% do bindings.extend(row) %}
{% endfor %}
{% set sql %}
insert into {{ this.render() }} ({{ cols_sql }}) values
{% for row in chunk -%}
({%- for column in agate_table.column_names -%}
{{ get_binding_char() }}
{%- if not loop.last%},{%- endif %}
{%- endfor -%})
{%- if not loop.last%},{%- endif %}
{%- endfor %}
{% endset %}
{% do adapter.add_query(sql, bindings=bindings, abridge_sql_log=True) %}
{% if loop.index0 == 0 %}
{% do statements.append(sql) %}
{% endif %}
{% endfor %}
{# Return SQL so we can render it out into the compiled files #}
{{ return(statements[0]) }}
{% endmacro %}
{% materialization seed, default %}
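
Seed loading now batches rows via the adapter-dispatched get_batch_size() and get_binding_char() macros, so an adapter can override the default of 10,000 rows per INSERT and the `%s` placeholder. A self-contained sketch of the batching idea (assumed names, not dbt's implementation):

# Sketch only: chunk seed rows and build one parameterized INSERT per chunk.
from typing import Iterable, List, Sequence, Tuple

def batched(rows: Sequence[Sequence[object]], batch_size: int) -> Iterable[Sequence[Sequence[object]]]:
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

def build_insert(table: str, columns: List[str], chunk,
                 binding_char: str = "%s") -> Tuple[str, list]:
    placeholders = "(" + ", ".join([binding_char] * len(columns)) + ")"
    values = ",\n".join([placeholders] * len(chunk))
    bindings = [value for row in chunk for value in row]
    return f"insert into {table} ({', '.join(columns)}) values\n{values}", bindings

rows = [(i, f"name_{i}") for i in range(25)]
for chunk in batched(rows, batch_size=10):
    sql, bindings = build_insert("analytics.seed_users", ["id", "name"], chunk)
    print(len(chunk), "rows ->", len(bindings), "bindings")  # 10, 10, 5 row chunks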

View File

@@ -2,7 +2,7 @@
Add new columns to the table if applicable
#}
{% macro create_columns(relation, columns) %}
{{ adapter.dispatch('create_columns')(relation, columns) }}
{{ adapter.dispatch('create_columns', 'dbt')(relation, columns) }}
{% endmacro %}
{% macro default__create_columns(relation, columns) %}
@@ -15,7 +15,7 @@
{% macro post_snapshot(staging_relation) %}
{{ adapter.dispatch('post_snapshot')(staging_relation) }}
{{ adapter.dispatch('post_snapshot', 'dbt')(staging_relation) }}
{% endmacro %}
{% macro default__post_snapshot(staging_relation) %}

View File

@@ -1,6 +1,6 @@
{% macro snapshot_merge_sql(target, source, insert_cols) -%}
{{ adapter.dispatch('snapshot_merge_sql')(target, source, insert_cols) }}
{{ adapter.dispatch('snapshot_merge_sql', 'dbt')(target, source, insert_cols) }}
{%- endmacro %}
@@ -21,7 +21,6 @@
and DBT_INTERNAL_SOURCE.dbt_change_type = 'insert'
then insert ({{ insert_cols_csv }})
values ({{ insert_cols_csv }})
;
{% endmacro %}

View File

@@ -36,7 +36,7 @@
Create SCD Hash SQL fields cross-db
#}
{% macro snapshot_hash_arguments(args) -%}
{{ adapter.dispatch('snapshot_hash_arguments')(args) }}
{{ adapter.dispatch('snapshot_hash_arguments', 'dbt')(args) }}
{%- endmacro %}
@@ -52,7 +52,7 @@
Get the current time cross-db
#}
{% macro snapshot_get_time() -%}
{{ adapter.dispatch('snapshot_get_time')() }}
{{ adapter.dispatch('snapshot_get_time', 'dbt')() }}
{%- endmacro %}
{% macro default__snapshot_get_time() -%}
@@ -75,7 +75,7 @@
table instead of assuming that the user-supplied {{ updated_at }}
will be present in the historical data.
See https://github.com/fishtown-analytics/dbt/issues/2350
See https://github.com/dbt-labs/dbt/issues/2350
*/ #}
{% set row_changed_expr -%}
({{ snapshotted_rel }}.dbt_valid_from < {{ current_rel }}.{{ updated_at }})
@@ -94,7 +94,7 @@
{% macro snapshot_string_as_time(timestamp) -%}
{{ adapter.dispatch('snapshot_string_as_time')(timestamp) }}
{{ adapter.dispatch('snapshot_string_as_time', 'dbt')(timestamp) }}
{%- endmacro %}

View File

@@ -48,7 +48,7 @@
-- cleanup
{% if old_relation is not none %}
{{ adapter.rename_relation(target_relation, backup_relation) }}
{{ adapter.rename_relation(old_relation, backup_relation) }}
{% endif %}
{{ adapter.rename_relation(intermediate_relation, target_relation) }}

View File

@@ -1,5 +1,5 @@
{% macro get_test_sql(main_sql, fail_calc, warn_if, error_if, limit) -%}
{{ adapter.dispatch('get_test_sql')(main_sql, fail_calc, warn_if, error_if, limit) }}
{{ adapter.dispatch('get_test_sql', 'dbt')(main_sql, fail_calc, warn_if, error_if, limit) }}
{%- endmacro %}

View File

@@ -1,9 +1,10 @@
{% macro handle_existing_table(full_refresh, old_relation) %}
{{ adapter.dispatch('handle_existing_table', macro_namespace = 'dbt')(full_refresh, old_relation) }}
{{ adapter.dispatch('handle_existing_table', 'dbt')(full_refresh, old_relation) }}
{% endmacro %}
{% macro default__handle_existing_table(full_refresh, old_relation) %}
{{ log("Dropping relation " ~ old_relation ~ " because it is of type " ~ old_relation.type) }}
{{ adapter.drop_relation(old_relation) }}
{% endmacro %}
@@ -19,7 +20,7 @@
*/
#}
{% macro create_or_replace_view(run_outside_transaction_hooks=True) %}
{% macro create_or_replace_view() %}
{%- set identifier = model['alias'] -%}
{%- set old_relation = adapter.get_relation(database=database, schema=schema, identifier=identifier) -%}
@@ -30,13 +31,7 @@
identifier=identifier, schema=schema, database=database,
type='view') -%}
{% if run_outside_transaction_hooks %}
-- no transactions on BigQuery
{{ run_hooks(pre_hooks, inside_transaction=False) }}
{% endif %}
-- `BEGIN` happens here on Snowflake
{{ run_hooks(pre_hooks, inside_transaction=True) }}
{{ run_hooks(pre_hooks) }}
-- If there's a table with the same name and we weren't told to full refresh,
-- that's an error. If we were told to full refresh, drop it. This behavior differs
@@ -50,14 +45,7 @@
{{ create_view_as(target_relation, sql) }}
{%- endcall %}
{{ run_hooks(post_hooks, inside_transaction=True) }}
{{ adapter.commit() }}
{% if run_outside_transaction_hooks %}
-- No transactions on BigQuery
{{ run_hooks(post_hooks, inside_transaction=False) }}
{% endif %}
{{ run_hooks(post_hooks) }}
{{ return({'relations': [target_relation]}) }}

View File

@@ -54,7 +54,7 @@
-- cleanup
-- move the existing view out of the way
{% if old_relation is not none %}
{{ adapter.rename_relation(target_relation, backup_relation) }}
{{ adapter.rename_relation(old_relation, backup_relation) }}
{% endif %}
{{ adapter.rename_relation(intermediate_relation, target_relation) }}

View File

@@ -7,7 +7,7 @@ with all_values as (
count(*) as n_records
from {{ model }}
group by 1
group by {{ column_name }}
)
@@ -28,6 +28,6 @@ where value_field not in (
{% test accepted_values(model, column_name, values, quote=True) %}
{% set macro = adapter.dispatch('test_accepted_values') %}
{% set macro = adapter.dispatch('test_accepted_values', 'dbt') %}
{{ macro(model, column_name, values, quote) }}
{% endtest %}

View File

@@ -8,6 +8,6 @@ where {{ column_name }} is null
{% test not_null(model, column_name) %}
{% set macro = adapter.dispatch('test_not_null') %}
{% set macro = adapter.dispatch('test_not_null', 'dbt') %}
{{ macro(model, column_name) }}
{% endtest %}

View File

@@ -1,21 +1,30 @@
{% macro default__test_relationships(model, column_name, to, field) %}
with child as (
select {{ column_name }} as from_field
from {{ model }}
where {{ column_name }} is not null
),
parent as (
select {{ field }} as to_field
from {{ to }}
)
select
child.{{ column_name }}
from_field
from {{ model }} as child
from child
left join parent
on child.from_field = parent.to_field
left join {{ to }} as parent
on child.{{ column_name }} = parent.{{ field }}
where child.{{ column_name }} is not null
and parent.{{ field }} is null
where parent.to_field is null
{% endmacro %}
{% test relationships(model, column_name, to, field) %}
{% set macro = adapter.dispatch('test_relationships') %}
{% set macro = adapter.dispatch('test_relationships', 'dbt') %}
{{ macro(model, column_name, to, field) }}
{% endtest %}

View File

@@ -1,7 +1,7 @@
{% macro default__test_unique(model, column_name) %}
select
{{ column_name }},
{{ column_name }} as unique_field,
count(*) as n_records
from {{ model }}
@@ -13,6 +13,6 @@ having count(*) > 1
{% test unique(model, column_name) %}
{% set macro = adapter.dispatch('test_unique') %}
{% set macro = adapter.dispatch('test_unique', 'dbt') %}
{{ macro(model, column_name) }}
{% endtest %}

File diff suppressed because one or more lines are too long

View File

@@ -43,6 +43,15 @@ DEBUG_LOG_FORMAT = (
'{record.message}'
)
SECRET_ENV_PREFIX = 'DBT_ENV_SECRET_'
def get_secret_env() -> List[str]:
return [
v for k, v in os.environ.items()
if k.startswith(SECRET_ENV_PREFIX)
]
ExceptionInformation = str
@@ -333,6 +342,12 @@ class TimestampNamed(logbook.Processor):
record.extra[self.name] = datetime.utcnow().isoformat()
class ScrubSecrets(logbook.Processor):
def process(self, record):
for secret in get_secret_env():
record.message = record.message.replace(secret, "*****")
logger = logbook.Logger('dbt')
# provide this for the cache, disabled by default
CACHE_LOGGER = logbook.Logger('dbt.cache')
@@ -473,7 +488,8 @@ class LogManager(logbook.NestedSetup):
self._file_handler = DelayedFileHandler()
self._relevel_processor = Relevel(allowed=['dbt', 'werkzeug'])
self._state_processor = DbtProcessState('internal')
# keep track of wheter we've already entered to decide if we should
self._scrub_processor = ScrubSecrets()
# keep track of whether we've already entered to decide if we should
# be actually pushing. This allows us to log in main() and also
# support entering dbt execution via handle_and_check.
self._stack_depth = 0
@@ -483,6 +499,7 @@ class LogManager(logbook.NestedSetup):
self._file_handler,
self._relevel_processor,
self._state_processor,
self._scrub_processor
])
def push_application(self):
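
The logger now masks secrets: any environment variable whose name starts with DBT_ENV_SECRET_ has its value replaced with ***** before a record is emitted (in dbt, ScrubSecrets runs as a logbook processor inside the LogManager). A standalone sketch of the idea:

# Standalone sketch of the secret-scrubbing behavior added above.
import os
from typing import List

SECRET_ENV_PREFIX = "DBT_ENV_SECRET_"

def get_secret_env() -> List[str]:
    return [v for k, v in os.environ.items() if k.startswith(SECRET_ENV_PREFIX)]

def scrub(message: str) -> str:
    for secret in get_secret_env():
        message = message.replace(secret, "*****")
    return message

os.environ["DBT_ENV_SECRET_API_KEY"] = "s3cr3t-value"
print(scrub("connecting with token s3cr3t-value"))  # -> connecting with token *****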

View File

@@ -10,30 +10,30 @@ from pathlib import Path
import dbt.version
import dbt.flags as flags
import dbt.task.run as run_task
import dbt.task.build as build_task
import dbt.task.clean as clean_task
import dbt.task.compile as compile_task
import dbt.task.debug as debug_task
import dbt.task.clean as clean_task
import dbt.task.deps as deps_task
import dbt.task.init as init_task
import dbt.task.seed as seed_task
import dbt.task.test as test_task
import dbt.task.snapshot as snapshot_task
import dbt.task.generate as generate_task
import dbt.task.serve as serve_task
import dbt.task.freshness as freshness_task
import dbt.task.run_operation as run_operation_task
import dbt.task.generate as generate_task
import dbt.task.init as init_task
import dbt.task.list as list_task
import dbt.task.parse as parse_task
import dbt.task.run as run_task
import dbt.task.run_operation as run_operation_task
import dbt.task.seed as seed_task
import dbt.task.serve as serve_task
import dbt.task.snapshot as snapshot_task
import dbt.task.test as test_task
from dbt.profiler import profiler
from dbt.task.list import ListTask
from dbt.task.rpc.server import RPCServerTask
from dbt.adapters.factory import reset_adapters, cleanup_connections
import dbt.tracking
from dbt.utils import ExitCodes
from dbt.config import PROFILES_DIR, read_user_config
from dbt.config.profile import DEFAULT_PROFILES_DIR, read_user_config
from dbt.exceptions import RuntimeException, InternalException
@@ -41,6 +41,7 @@ class DBTVersion(argparse.Action):
"""This is very very similar to the builtin argparse._Version action,
except it just calls dbt.version.get_version_information().
"""
def __init__(self,
option_strings,
version=None,
@@ -159,17 +160,6 @@ def handle(args):
return res
def initialize_config_values(parsed):
"""Given the parsed args, initialize the dbt tracking code.
It would be nice to re-use this profile later on instead of parsing it
twice, but dbt's intialization is not structured in a way that makes that
easy.
"""
cfg = read_user_config(parsed.profiles_dir)
cfg.set_values(parsed.profiles_dir)
@contextmanager
def adapter_management():
reset_adapters()
@@ -183,8 +173,15 @@ def handle_and_check(args):
with log_manager.applicationbound():
parsed = parse_args(args)
# we've parsed the args - we can now decide if we're debug or not
if parsed.debug:
# Set flags from args, user config, and env vars
user_config = read_user_config(parsed.profiles_dir) # This is read again later
flags.set_from_args(parsed, user_config)
dbt.tracking.initialize_from_flags()
# Set log_format from flags
parsed.cls.set_log_format()
# we've parsed the args and set the flags - we can now decide if we're debug or not
if flags.DEBUG:
log_manager.set_debug()
profiler_enabled = False
@@ -197,8 +194,6 @@ def handle_and_check(args):
outfile=parsed.record_timing_info
):
initialize_config_values(parsed)
with adapter_management():
task, res = run_from_args(parsed)
@@ -232,15 +227,17 @@ def track_run(task):
def run_from_args(parsed):
log_cache_events(getattr(parsed, 'log_cache_events', False))
flags.set_from_args(parsed)
parsed.cls.pre_init_hook(parsed)
# we can now use the logger for stdout
# set log_format in the logger
parsed.cls.pre_init_hook(parsed)
logger.info("Running with dbt{}".format(dbt.version.installed))
# this will convert DbtConfigErrors into RuntimeExceptions
# task could be any one of the task objects
task = parsed.cls.from_args(args=parsed)
logger.debug("running dbt with arguments {parsed}", parsed=str(parsed))
log_path = None
@@ -274,11 +271,12 @@ def _build_base_subparser():
base_subparser.add_argument(
'--profiles-dir',
default=PROFILES_DIR,
default=None,
dest='sub_profiles_dir', # Main cli arg precedes subcommand
type=str,
help='''
Which directory to look in for the profiles.yml file. Default = {}
'''.format(PROFILES_DIR)
'''.format(DEFAULT_PROFILES_DIR)
)
base_subparser.add_argument(
@@ -318,15 +316,6 @@ def _build_base_subparser():
help=argparse.SUPPRESS,
)
base_subparser.add_argument(
'--bypass-cache',
action='store_false',
dest='use_cache',
help='''
If set, bypass the adapter-level cache of database state
''',
)
base_subparser.set_defaults(defer=None, state=None)
return base_subparser
@@ -393,11 +382,46 @@ def _build_build_subparser(subparsers, base_subparser):
sub.add_argument(
'-x',
'--fail-fast',
dest='sub_fail_fast',
action='store_true',
help='''
Stop execution upon a first failure.
'''
)
sub.add_argument(
'--store-failures',
action='store_true',
help='''
Store test results (failing rows) in the database
'''
)
sub.add_argument(
'--greedy',
action='store_true',
help='''
Select all tests that touch the selected resources,
even if they also depend on unselected resources
'''
)
resource_values: List[str] = [
str(s) for s in build_task.BuildTask.ALL_RESOURCE_VALUES
] + ['all']
sub.add_argument('--resource-type',
choices=resource_values,
action='append',
default=[],
dest='resource_types')
# explicitly don't support --models
sub.add_argument(
'-s',
'--select',
dest='select',
nargs='+',
help='''
Specify the nodes to include.
''',
)
_add_common_selector_arguments(sub)
return sub
@@ -496,6 +520,7 @@ def _build_run_subparser(subparsers, base_subparser):
run_sub.add_argument(
'-x',
'--fail-fast',
dest='sub_fail_fast',
action='store_true',
help='''
Stop execution upon a first failure.
@@ -553,39 +578,6 @@ def _build_docs_generate_subparser(subparsers, base_subparser):
return generate_sub
def _add_models_argument(sub, help_override=None, **kwargs):
help_str = '''
Specify the models to include.
'''
if help_override is not None:
help_str = help_override
sub.add_argument(
'-m',
'--models',
dest='models',
nargs='+',
help=help_str,
**kwargs
)
def _add_select_argument(sub, dest='models', help_override=None, **kwargs):
help_str = '''
Specify the nodes to include.
'''
if help_override is not None:
help_str = help_override
sub.add_argument(
'-s',
'--select',
dest=dest,
nargs='+',
help=help_str,
**kwargs
)
def _add_common_selector_arguments(sub):
sub.add_argument(
'--exclude',
@@ -614,17 +606,26 @@ def _add_common_selector_arguments(sub):
)
def _add_selection_arguments(*subparsers, **kwargs):
models_name = kwargs.get('models_name', 'models')
def _add_selection_arguments(*subparsers):
for sub in subparsers:
if models_name == 'models':
_add_models_argument(sub)
elif models_name == 'select':
# these still get stored in 'models', so they present the same
# interface to the task
_add_select_argument(sub)
else:
raise InternalException(f'Unknown models style {models_name}')
sub.add_argument(
'-m',
'--models',
dest='select',
nargs='+',
help='''
Specify the nodes to include.
''',
)
sub.add_argument(
'-s',
'--select',
dest='select',
nargs='+',
help='''
Specify the nodes to include.
''',
)
_add_common_selector_arguments(sub)
@@ -634,7 +635,7 @@ def _add_table_mutability_arguments(*subparsers):
'--full-refresh',
action='store_true',
help='''
If specified, DBT will drop incremental models and
If specified, dbt will drop incremental models and
fully-recalculate the incremental table from the model definition.
'''
)
@@ -643,8 +644,9 @@ def _add_table_mutability_arguments(*subparsers):
def _add_version_check(sub):
sub.add_argument(
'--no-version-check',
dest='version_check',
dest='sub_version_check', # main cli arg precedes subcommands
action='store_false',
default=None,
help='''
If set, skip ensuring dbt's version matches the one specified in
the dbt_project.yml file ('require-dbt-version')
@@ -738,6 +740,7 @@ def _build_test_subparser(subparsers, base_subparser):
sub.add_argument(
'-x',
'--fail-fast',
dest='sub_fail_fast',
action='store_true',
help='''
Stop execution upon a first test failure.
@@ -750,28 +753,27 @@ def _build_test_subparser(subparsers, base_subparser):
Store test results (failing rows) in the database
'''
)
sub.add_argument(
'--greedy',
action='store_true',
help='''
Select all tests that touch the selected resources,
even if they also depend on unselected resources
'''
)
sub.set_defaults(cls=test_task.TestTask, which='test', rpc_method='test')
return sub
def _build_source_snapshot_freshness_subparser(subparsers, base_subparser):
def _build_source_freshness_subparser(subparsers, base_subparser):
sub = subparsers.add_parser(
'snapshot-freshness',
'freshness',
parents=[base_subparser],
help='''
Snapshots the current freshness of the project's sources
''',
)
sub.add_argument(
'-s',
'--select',
required=False,
nargs='+',
help='''
Specify the sources to snapshot freshness
''',
dest='selected'
aliases=['snapshot-freshness'],
)
sub.add_argument(
'-o',
@@ -792,9 +794,19 @@ def _build_source_snapshot_freshness_subparser(subparsers, base_subparser):
)
sub.set_defaults(
cls=freshness_task.FreshnessTask,
which='snapshot-freshness',
rpc_method='snapshot-freshness',
which='source-freshness',
rpc_method='source-freshness',
)
sub.add_argument(
'-s',
'--select',
dest='select',
nargs='+',
help='''
Specify the nodes to include.
''',
)
_add_common_selector_arguments(sub)
return sub
@@ -837,9 +849,9 @@ def _build_list_subparser(subparsers, base_subparser):
''',
aliases=['ls'],
)
sub.set_defaults(cls=ListTask, which='list', rpc_method=None)
sub.set_defaults(cls=list_task.ListTask, which='list', rpc_method=None)
resource_values: List[str] = [
str(s) for s in ListTask.ALL_RESOURCE_VALUES
str(s) for s in list_task.ListTask.ALL_RESOURCE_VALUES
] + ['default', 'all']
sub.add_argument('--resource-type',
choices=resource_values,
@@ -849,22 +861,39 @@ def _build_list_subparser(subparsers, base_subparser):
sub.add_argument('--output',
choices=['json', 'name', 'path', 'selector'],
default='selector')
sub.add_argument('--output-keys')
_add_models_argument(
sub,
help_override='''
sub.add_argument(
'-m',
'--models',
dest='models',
nargs='+',
help='''
Specify the models to select and set the resource-type to 'model'.
Mutually exclusive with '--select' (or '-s') and '--resource-type'
''',
metavar='SELECTOR',
required=False
required=False,
)
_add_select_argument(
sub,
sub.add_argument(
'-s',
'--select',
dest='select',
nargs='+',
help='''
Specify the nodes to include.
''',
metavar='SELECTOR',
required=False,
)
sub.add_argument(
'--greedy',
action='store_true',
help='''
Select all tests that touch the selected resources,
even if they also depend on unselected resources
'''
)
_add_common_selector_arguments(sub)
return sub
@@ -935,6 +964,7 @@ def parse_args(args, cls=DBTArgumentParser):
'-d',
'--debug',
action='store_true',
default=None,
help='''
Display debug logging during dbt execution. Useful for debugging and
making bug reports.
@@ -944,13 +974,14 @@ def parse_args(args, cls=DBTArgumentParser):
p.add_argument(
'--log-format',
choices=['text', 'json', 'default'],
default='default',
default=None,
help='''Specify the log format, overriding the command's default.'''
)
p.add_argument(
'--no-write-json',
action='store_false',
default=None,
dest='write_json',
help='''
If set, skip writing the manifest and run_results.json files to disk
@@ -961,6 +992,7 @@ def parse_args(args, cls=DBTArgumentParser):
'--use-colors',
action='store_const',
const=True,
default=None,
dest='use_colors',
help='''
Colorize the output DBT prints to the terminal. Output is colorized by
@@ -982,18 +1014,17 @@ def parse_args(args, cls=DBTArgumentParser):
)
p.add_argument(
'-S',
'--strict',
action='store_true',
'--printer-width',
dest='printer_width',
help='''
Run schema validations at runtime. This will surface bugs in dbt, but
may incur a performance penalty.
Sets the width of terminal output
'''
)
p.add_argument(
'--warn-error',
action='store_true',
default=None,
help='''
If dbt would normally warn, instead raise an exception. Examples
include --models that selects nothing, deprecations, configurations
@@ -1002,13 +1033,22 @@ def parse_args(args, cls=DBTArgumentParser):
'''
)
p.add_argument(
'--no-version-check',
dest='version_check',
action='store_false',
default=None,
help='''
If set, skip ensuring dbt's version matches the one specified in
the dbt_project.yml file ('require-dbt-version')
'''
)
p.add_optional_argument_inverse(
'--partial-parse',
enable_help='''
Allow for partial parsing by looking for and writing to a pickle file
in the target directory. This overrides the user configuration file.
WARNING: This can result in unexpected behavior if you use env_var()!
''',
disable_help='''
Disallow partial parsing. This overrides the user configuration file.
@@ -1026,26 +1066,48 @@ def parse_args(args, cls=DBTArgumentParser):
help=argparse.SUPPRESS,
)
# if set, extract all models and blocks with the jinja block extractor, and
# verify that we don't fail anywhere the actual jinja parser passes. The
# reverse (passing files that ends up failing jinja) is fine.
# TODO remove?
p.add_argument(
'--test-new-parser',
action='store_true',
help=argparse.SUPPRESS
)
# if set, will use the tree-sitter-jinja2 parser and extractor instead of
# jinja rendering when possible.
p.add_argument(
'--use-experimental-parser',
action='store_true',
default=None,
help='''
Uses an experimental parser to extract jinja values.
'''
)
p.add_argument(
'--profiles-dir',
default=None,
dest='profiles_dir',
type=str,
help='''
Which directory to look in for the profiles.yml file. Default = {}
'''.format(DEFAULT_PROFILES_DIR)
)
p.add_argument(
'--no-anonymous-usage-stats',
action='store_false',
default=None,
dest='send_anonymous_usage_stats',
help='''
Do not send anonymous usage stat to dbt Labs
'''
)
p.add_argument(
'-x',
'--fail-fast',
dest='fail_fast',
action='store_true',
default=None,
help='''
Stop execution upon a first failure.
'''
)
subs = p.add_subparsers(title="Available sub-commands")
base_subparser = _build_base_subparser()
@@ -1073,18 +1135,18 @@ def parse_args(args, cls=DBTArgumentParser):
seed_sub = _build_seed_subparser(subs, base_subparser)
# --threads, --no-version-check
_add_common_arguments(run_sub, compile_sub, generate_sub, test_sub,
rpc_sub, seed_sub, parse_sub)
# --models, --exclude
rpc_sub, seed_sub, parse_sub, build_sub)
# --select, --exclude
# list_sub sets up its own arguments.
_add_selection_arguments(build_sub, run_sub, compile_sub, generate_sub, test_sub)
_add_selection_arguments(snapshot_sub, seed_sub, models_name='select')
_add_selection_arguments(
run_sub, compile_sub, generate_sub, test_sub, snapshot_sub, seed_sub)
# --defer
_add_defer_argument(run_sub, test_sub)
_add_defer_argument(run_sub, test_sub, build_sub)
# --full-refresh
_add_table_mutability_arguments(run_sub, compile_sub)
_add_table_mutability_arguments(run_sub, compile_sub, build_sub)
_build_docs_serve_subparser(docs_subs, base_subparser)
_build_source_snapshot_freshness_subparser(source_subs, base_subparser)
_build_source_freshness_subparser(source_subs, base_subparser)
_build_run_operation_subparser(subs, base_subparser)
if len(args) == 0:
@@ -1093,8 +1155,28 @@ def parse_args(args, cls=DBTArgumentParser):
parsed = p.parse_args(args)
if hasattr(parsed, 'profiles_dir'):
# profiles_dir is set before subcommands and after, so normalize
if hasattr(parsed, 'sub_profiles_dir'):
if parsed.sub_profiles_dir is not None:
parsed.profiles_dir = parsed.sub_profiles_dir
delattr(parsed, 'sub_profiles_dir')
if hasattr(parsed, 'profiles_dir') and parsed.profiles_dir is not None:
parsed.profiles_dir = os.path.abspath(parsed.profiles_dir)
# needs to be set before the other flags, because it's needed to
# read the profile that contains them
flags.PROFILES_DIR = parsed.profiles_dir
# version_check is set before subcommands and after, so normalize
if hasattr(parsed, 'sub_version_check'):
if parsed.sub_version_check is False:
parsed.version_check = False
delattr(parsed, 'sub_version_check')
# fail_fast is set before subcommands and after, so normalize
if hasattr(parsed, 'sub_fail_fast'):
if parsed.sub_fail_fast is True:
parsed.fail_fast = True
delattr(parsed, 'sub_fail_fast')
if getattr(parsed, 'project_dir', None) is not None:
expanded_user = os.path.expanduser(parsed.project_dir)
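
Several global flags (--fail-fast, --version-check, --profiles-dir) can now be passed either before or after the subcommand; the subcommand copy is parsed into a sub_* destination and folded into the top-level value after parsing, as shown above. A toy argparse sketch of that pattern (not dbt's full parser):

# Toy sketch of the "main cli arg precedes subcommand" normalization.
import argparse

parser = argparse.ArgumentParser(prog="toy")
parser.add_argument("-x", "--fail-fast", dest="fail_fast",
                    action="store_true", default=None)

subs = parser.add_subparsers(dest="command")
run = subs.add_parser("run")
run.add_argument("-x", "--fail-fast", dest="sub_fail_fast",
                 action="store_true", default=None)

parsed = parser.parse_args(["run", "-x"])
if getattr(parsed, "sub_fail_fast", None):   # subcommand flag wins only if set
    parsed.fail_fast = True
    delattr(parsed, "sub_fail_fast")

print(parsed.fail_fast)  # True whether -x came before or after "run"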

View File

@@ -256,9 +256,7 @@ class ConfiguredParser(
parsed_node, self.root_project, self.manifest, config
)
def render_with_context(
self, parsed_node: IntermediateNode, config: ContextConfig
) -> None:
def render_with_context(self, parsed_node: IntermediateNode, config: ContextConfig):
# Given the parsed node and a ContextConfig to use during parsing,
# render the node's sql with macro capture enabled.
# Note: this mutates the config object when config calls are rendered.
@@ -273,11 +271,12 @@ class ConfiguredParser(
get_rendered(
parsed_node.raw_sql, context, parsed_node, capture_macros=True
)
return context
# This is taking the original config for the node, converting it to a dict,
# updating the config with new config passed in, then re-creating the
# config from the dict in the node.
def update_parsed_node_config(
def update_parsed_node_config_dict(
self, parsed_node: IntermediateNode, config_dict: Dict[str, Any]
) -> None:
# Overwrite node config
@@ -294,28 +293,50 @@ class ConfiguredParser(
self._update_node_schema(parsed_node, config_dict)
self._update_node_alias(parsed_node, config_dict)
def update_parsed_node(
self, parsed_node: IntermediateNode, config: ContextConfig
def update_parsed_node_config(
self, parsed_node: IntermediateNode, config: ContextConfig,
context=None, patch_config_dict=None
) -> None:
"""Given the ContextConfig used for parsing and the parsed node,
generate and set the true values to use, overriding the temporary parse
values set in _build_intermediate_parsed_node.
"""
config_dict = config.build_config_dict()
# Set tags on node provided in config blocks
# build_config_dict takes the config_call_dict in the ContextConfig object
# and calls calculate_node_config to combine dbt_project configs and
# config calls from SQL files
config_dict = config.build_config_dict(patch_config_dict=patch_config_dict)
# Set tags on node provided in config blocks. Tags are additive, so even if
# config has been built before, we don't have to reset tags in the parsed_node.
model_tags = config_dict.get('tags', [])
parsed_node.tags.extend(model_tags)
for tag in model_tags:
if tag not in parsed_node.tags:
parsed_node.tags.append(tag)
# If we have meta in the config, copy to node level, for backwards
# compatibility with earlier node-only config.
if 'meta' in config_dict and config_dict['meta']:
parsed_node.meta = config_dict['meta']
# unrendered_config is used to compare the original database/schema/alias
# values and to handle 'same_config' and 'same_contents' calls
parsed_node.unrendered_config = config.build_config_dict(
rendered=False
)
parsed_node.config_call_dict = config._config_call_dict
# do this once before we parse the node database/schema/alias, so
# parsed_node.config is what it would be if they did nothing
self.update_parsed_node_config(parsed_node, config_dict)
self.update_parsed_node_config_dict(parsed_node, config_dict)
# This updates the node database/schema/alias
self.update_parsed_node_name(parsed_node, config_dict)
# tests don't have hooks
if parsed_node.resource_type == NodeType.Test:
return
# at this point, we've collected our hooks. Use the node context to
# render each hook and collect refs/sources
hooks = list(itertools.chain(parsed_node.config.pre_hook,
@@ -323,9 +344,8 @@ class ConfiguredParser(
# skip context rebuilding if there aren't any hooks
if not hooks:
return
# we could cache the original context from parsing this node. Is that
# worth the cost in memory/complexity?
context = self._context_for(parsed_node, config)
if not context:
context = self._context_for(parsed_node, config)
for hook in hooks:
get_rendered(hook.sql, context, parsed_node, capture_macros=True)
@@ -357,8 +377,8 @@ class ConfiguredParser(
self, node: IntermediateNode, config: ContextConfig
) -> None:
try:
self.render_with_context(node, config)
self.update_parsed_node(node, config)
context = self.render_with_context(node, config)
self.update_parsed_node_config(node, config, context=context)
except ValidationError as exc:
# we got a ValidationError - probably bad types in config()
msg = validator_error_message(exc)

View File

@@ -72,10 +72,13 @@ class HookParser(SimpleParser[HookBlock, ParsedHookNode]):
# Hooks are only in the dbt_project.yml file for the project
def get_path(self) -> FilePath:
# There ought to be an existing file object for this, but
# until that is implemented use a dummy modification time
path = FilePath(
project_root=self.project.project_root,
searched_path='.',
relative_path='dbt_project.yml',
modification_time=0.0,
)
return path

View File

@@ -1,8 +1,9 @@
from dataclasses import dataclass
from dataclasses import field
import os
import traceback
from typing import (
Dict, Optional, Mapping, Callable, Any, List, Type, Union
Dict, Optional, Mapping, Callable, Any, List, Type, Union, Tuple
)
import time
@@ -59,11 +60,21 @@ from dbt.parser.sources import SourcePatcher
from dbt.ui import warning_tag
from dbt.version import __version__
from dbt.dataclass_schema import dbtClassMixin
from dbt.dataclass_schema import StrEnum, dbtClassMixin
PARTIAL_PARSE_FILE_NAME = 'partial_parse.msgpack'
PARSING_STATE = DbtProcessState('parsing')
DEFAULT_PARTIAL_PARSE = False
class ReparseReason(StrEnum):
version_mismatch = '01_version_mismatch'
file_not_found = '02_file_not_found'
vars_changed = '03_vars_changed'
profile_changed = '04_profile_changed'
deps_changed = '05_deps_changed'
project_config_changed = '06_project_config_changed'
load_file_failure = '07_load_file_failure'
exception = '08_exception'
# Part of saved performance info
@@ -189,14 +200,13 @@ class ManifestLoader:
# Read files creates a dictionary of projects to a dictionary
# of parsers to lists of file strings. The file strings are
# used to get the SourceFiles from the manifest files.
# In the future the loaded files will be used to control
# partial parsing, but right now we're just moving the
# file loading out of the individual parsers and doing it
# all at once.
start_read_files = time.perf_counter()
project_parser_files = {}
saved_files = {}
if self.saved_manifest:
saved_files = self.saved_manifest.files
for project in self.all_projects.values():
read_files(project, self.manifest.files, project_parser_files)
read_files(project, self.manifest.files, project_parser_files, saved_files)
self._perf_info.path_count = len(self.manifest.files)
self._perf_info.read_files_elapsed = (time.perf_counter() - start_read_files)
@@ -204,21 +214,57 @@ class ManifestLoader:
if self.saved_manifest is not None:
partial_parsing = PartialParsing(self.saved_manifest, self.manifest.files)
skip_parsing = partial_parsing.skip_parsing()
if not skip_parsing:
if skip_parsing:
# nothing changed, so we don't need to generate project_parser_files
self.manifest = self.saved_manifest
else:
# create child_map and parent_map
self.saved_manifest.build_parent_and_child_maps()
# files are different, we need to create a new set of
# project_parser_files.
project_parser_files = partial_parsing.get_parsing_files()
self.partially_parsing = True
try:
project_parser_files = partial_parsing.get_parsing_files()
self.partially_parsing = True
self.manifest = self.saved_manifest
except Exception:
# pp_files should still be the full set and manifest is new manifest,
# since get_parsing_files failed
logger.info("Partial parsing enabled but an error occurred. "
"Switching to a full re-parse.")
self.manifest = self.saved_manifest
# Get traceback info
tb_info = traceback.format_exc()
formatted_lines = tb_info.splitlines()
(_, line, method) = formatted_lines[-3].split(', ')
exc_info = {
"traceback": tb_info,
"exception": formatted_lines[-1],
"code": formatted_lines[-2],
"location": f"{line} {method}",
}
# get file info for local logs
parse_file_type = None
file_id = partial_parsing.processing_file
if file_id and file_id in self.manifest.files:
old_file = self.manifest.files[file_id]
parse_file_type = old_file.parse_file_type
logger.debug(f"Partial parsing exception processing file {file_id}")
file_dict = old_file.to_dict()
logger.debug(f"PP file: {file_dict}")
exc_info['parse_file_type'] = parse_file_type
logger.debug(f"PP exception info: {exc_info}")
# Send event
if dbt.tracking.active_user is not None:
exc_info['full_reparse_reason'] = ReparseReason.exception
dbt.tracking.track_partial_parser(exc_info)
if self.manifest._parsing_info is None:
self.manifest._parsing_info = ParsingInfo()
if skip_parsing:
logger.info("Partial parsing enabled, no changes found, skipping parsing")
logger.debug("Partial parsing enabled, no changes found, skipping parsing")
else:
# Load Macros
# We need to parse the macros first, so they're resolvable when
@@ -379,10 +425,10 @@ class ManifestLoader:
if not self.partially_parsing and HookParser in parser_types:
hook_parser = HookParser(project, self.manifest, self.root_project)
path = hook_parser.get_path()
file_block = FileBlock(
load_source_file(path, ParseFileType.Hook, project.project_name)
)
hook_parser.parse_file(file_block)
file = load_source_file(path, ParseFileType.Hook, project.project_name, {})
if file:
file_block = FileBlock(file)
hook_parser.parse_file(file_block)
# Store the performance info
elapsed = time.perf_counter() - start_timer
@@ -434,6 +480,12 @@ class ManifestLoader:
path = os.path.join(self.root_project.target_path,
PARTIAL_PARSE_FILE_NAME)
try:
# This shouldn't be necessary, but we have gotten bug reports (#3757) of the
# saved manifest not matching the code version.
if self.manifest.metadata.dbt_version != __version__:
logger.debug("Manifest metadata did not contain correct version. "
f"Contained '{self.manifest.metadata.dbt_version}' instead.")
self.manifest.metadata.dbt_version = __version__
manifest_msgpack = self.manifest.to_msgpack()
make_directory(os.path.dirname(path))
with open(path, 'wb') as fp:
@@ -441,24 +493,31 @@ class ManifestLoader:
except Exception:
raise
def matching_parse_results(self, manifest: Manifest) -> bool:
def is_partial_parsable(self, manifest: Manifest) -> Tuple[bool, Optional[str]]:
"""Compare the global hashes of the read-in parse results' values to
the known ones, and return if it is ok to re-use the results.
"""
valid = True
reparse_reason = None
if manifest.metadata.dbt_version != __version__:
logger.info("Unable to do partial parsing because of a dbt version mismatch")
return False # If the version is wrong, the other checks might not work
# #3757 log both versions because of reports of invalid cases of mismatch.
logger.info("Unable to do partial parsing because of a dbt version mismatch. "
f"Saved manifest version: {manifest.metadata.dbt_version}. "
f"Current version: {__version__}.")
# If the version is wrong, the other checks might not work
return False, ReparseReason.version_mismatch
if self.manifest.state_check.vars_hash != manifest.state_check.vars_hash:
logger.info("Unable to do partial parsing because config vars, "
"config profile, or config target have changed")
valid = False
reparse_reason = ReparseReason.vars_changed
if self.manifest.state_check.profile_hash != manifest.state_check.profile_hash:
# Note: This should be made more granular. We shouldn't need to invalidate
# partial parsing if a non-used profile section has changed.
logger.info("Unable to do partial parsing because profile has changed")
valid = False
reparse_reason = ReparseReason.profile_changed
missing_keys = {
k for k in self.manifest.state_check.project_hashes
@@ -467,6 +526,7 @@ class ManifestLoader:
if missing_keys:
logger.info("Unable to do partial parsing because a project dependency has been added")
valid = False
reparse_reason = ReparseReason.deps_changed
for key, new_value in self.manifest.state_check.project_hashes.items():
if key in manifest.state_check.project_hashes:
@@ -475,25 +535,18 @@ class ManifestLoader:
logger.info("Unable to do partial parsing because "
"a project config has changed")
valid = False
return valid
def _partial_parse_enabled(self):
# if the CLI is set, follow that
if flags.PARTIAL_PARSE is not None:
return flags.PARTIAL_PARSE
# if the config is set, follow that
elif self.root_project.config.partial_parse is not None:
return self.root_project.config.partial_parse
else:
return DEFAULT_PARTIAL_PARSE
reparse_reason = ReparseReason.project_config_changed
return valid, reparse_reason
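
matching_parse_results is now is_partial_parsable and returns a reason code alongside the validity flag, which feeds the track_partial_parser event. A hypothetical condensed version of the check, using only a couple of the reason codes:

# Hypothetical condensed version of is_partial_parsable: compare saved state
# hashes against current ones and report validity plus a reparse reason.
from enum import Enum
from typing import Optional, Tuple

class ReparseReason(str, Enum):
    version_mismatch = "01_version_mismatch"
    vars_changed = "03_vars_changed"
    profile_changed = "04_profile_changed"

def is_partial_parsable(saved: dict, current: dict,
                        saved_version: str, current_version: str
                        ) -> Tuple[bool, Optional[ReparseReason]]:
    if saved_version != current_version:
        # version mismatch invalidates everything else
        return False, ReparseReason.version_mismatch
    valid, reason = True, None
    if saved["vars_hash"] != current["vars_hash"]:
        valid, reason = False, ReparseReason.vars_changed
    if saved["profile_hash"] != current["profile_hash"]:
        valid, reason = False, ReparseReason.profile_changed
    return valid, reason

print(is_partial_parsable({"vars_hash": "a", "profile_hash": "p"},
                          {"vars_hash": "a", "profile_hash": "p"},
                          "0.21.0rc1", "0.21.0rc1"))  # (True, None)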
def read_manifest_for_partial_parse(self) -> Optional[Manifest]:
if not self._partial_parse_enabled():
if not flags.PARTIAL_PARSE:
logger.debug('Partial parsing not enabled')
return None
path = os.path.join(self.root_project.target_path,
PARTIAL_PARSE_FILE_NAME)
reparse_reason = None
if os.path.exists(path):
try:
with open(path, 'rb') as fp:
@@ -502,7 +555,8 @@ class ManifestLoader:
# keep this check inside the try/except in case something about
# the file has changed in weird ways, perhaps due to being a
# different version of dbt
if self.matching_parse_results(manifest):
is_partial_parseable, reparse_reason = self.is_partial_parsable(manifest)
if is_partial_parseable:
return manifest
except Exception as exc:
logger.debug(
@@ -510,14 +564,19 @@ class ManifestLoader:
.format(path, exc),
exc_info=True
)
reparse_reason = ReparseReason.load_file_failure
else:
logger.info(f"Unable to do partial parsing because {path} not found")
logger.info("Partial parse save file not found. Starting full parse.")
reparse_reason = ReparseReason.file_not_found
# this event is only fired if a full reparse is needed
dbt.tracking.track_partial_parser({'full_reparse_reason': reparse_reason})
return None
def build_perf_info(self):
mli = ManifestLoaderInfo(
is_partial_parse_enabled=self._partial_parse_enabled(),
is_partial_parse_enabled=flags.PARTIAL_PARSE,
is_static_analysis_enabled=flags.USE_EXPERIMENTAL_PARSER
)
for project in self.all_projects.values():
@@ -581,7 +640,7 @@ class ManifestLoader:
macro_parser = MacroParser(project, self.manifest)
for path in macro_parser.get_paths():
source_file = load_source_file(
path, ParseFileType.Macro, project.project_name)
path, ParseFileType.Macro, project.project_name, {})
block = FileBlock(source_file)
# This does not add the file to the manifest.files,
# but that shouldn't be necessary here.

View File

@@ -1,15 +1,17 @@
from dbt.context.context_config import ContextConfig
from dbt.contracts.graph.parsed import ParsedModelNode
import dbt.flags as flags
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.node_types import NodeType
from dbt.parser.base import SimpleSQLParser
from dbt.parser.search import FileBlock
import dbt.tracking as tracking
from dbt import utils
from dbt_extractor import ExtractionError, py_extract_from_source # type: ignore
import itertools
from functools import reduce
from itertools import chain
import random
from typing import Any, Dict, List, Tuple
from typing import Any, Dict, Iterator, List, Optional, Union
class ModelParser(SimpleSQLParser[ParsedModelNode]):
@@ -26,32 +28,52 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
def get_compiled_path(cls, block: FileBlock):
return block.path.relative_path
# TODO when this is turned on by default, simplify the nasty if/else tree inside this method.
def render_update(
self, node: ParsedModelNode, config: ContextConfig
) -> None:
self.manifest._parsing_info.static_analysis_path_count += 1
# TODO go back to 1/100 when this is turned on by default.
# `True` roughly 1/50 times this function is called
sample: bool = random.randint(1, 51) == 50
# `True` roughly 1/100 times this function is called
sample: bool = random.randint(1, 101) == 100
# top-level declaration of variables
experimentally_parsed: Optional[Union[str, Dict[str, List[Any]]]] = None
config_call_dict: Dict[str, Any] = {}
source_calls: List[List[str]] = []
# run the experimental parser if the flag is on or if we're sampling
if flags.USE_EXPERIMENTAL_PARSER or sample:
try:
experimentally_parsed: Dict[str, List[Any]] = py_extract_from_source(node.raw_sql)
if self._has_banned_macro(node):
# this log line is used for integration testing. If you change
# the code at the beginning of the line change the tests in
# test/integration/072_experimental_parser_tests/test_all_experimental_parser.py
logger.debug(
f"1601: parser fallback to jinja because of macro override for {node.path}"
)
experimentally_parsed = "has_banned_macro"
else:
# run the experimental parser and return the results
try:
experimentally_parsed = py_extract_from_source(
node.raw_sql
)
logger.debug(f"1699: statically parsed {node.path}")
# if we want information on what features are barring the experimental
# parser from reading model files, this is where we would add that
# since that information is stored in the `ExtractionError`.
except ExtractionError:
experimentally_parsed = "cannot_parse"
# second config format
config_calls: List[Dict[str, str]] = []
for c in experimentally_parsed['configs']:
config_calls.append({c[0]: c[1]})
# if the parser succeeded, extract some data in easy-to-compare formats
if isinstance(experimentally_parsed, dict):
# create second config format
for c in experimentally_parsed['configs']:
ContextConfig._add_config_call(config_call_dict, {c[0]: c[1]})
# format sources TODO change extractor to match this type
source_calls: List[List[str]] = []
for s in experimentally_parsed['sources']:
source_calls.append([s[0], s[1]])
experimentally_parsed['sources'] = source_calls
except ExtractionError as e:
experimentally_parsed = e
# format sources TODO change extractor to match this type
for s in experimentally_parsed['sources']:
source_calls.append([s[0], s[1]])
experimentally_parsed['sources'] = source_calls
# normal dbt run
if not flags.USE_EXPERIMENTAL_PARSER:
@@ -59,94 +81,146 @@ class ModelParser(SimpleSQLParser[ParsedModelNode]):
super().render_update(node, config)
# if we're sampling, compare for correctness
if sample:
result: List[str] = []
# experimental parser couldn't parse
if isinstance(experimentally_parsed, Exception):
result += ["01_experimental_parser_cannot_parse"]
else:
# rearrange existing configs to match:
real_configs: List[Tuple[str, Any]] = list(
itertools.chain.from_iterable(
map(lambda x: x.items(), config._config_calls)
)
)
# look for false positive configs
for c in experimentally_parsed['configs']:
if c not in real_configs:
result += ["02_false_positive_config_value"]
break
# look for missed configs
for c in real_configs:
if c not in experimentally_parsed['configs']:
result += ["03_missed_config_value"]
break
# look for false positive sources
for s in experimentally_parsed['sources']:
if s not in node.sources:
result += ["04_false_positive_source_value"]
break
# look for missed sources
for s in node.sources:
if s not in experimentally_parsed['sources']:
result += ["05_missed_source_value"]
break
# look for false positive refs
for r in experimentally_parsed['refs']:
if r not in node.refs:
result += ["06_false_positive_ref_value"]
break
# look for missed refs
for r in node.refs:
if r not in experimentally_parsed['refs']:
result += ["07_missed_ref_value"]
break
# if there are no errors, return a success value
if not result:
result = ["00_exact_match"]
result = _get_sample_result(
experimentally_parsed,
config_call_dict,
source_calls,
node,
config
)
# fire a tracking event. this fires one event for every sample
# so that we have data on a per file basis. Not only can we expect
# no false positives or misses, we can expect the number of model
# files parseable by the experimental parser to match our internal
# testing.
tracking.track_experimental_parser_sample({
"project_id": self.root_project.hashed_name(),
"file_id": utils.get_hash(node),
"status": result
})
if tracking.active_user is not None: # None in some tests
tracking.track_experimental_parser_sample({
"project_id": self.root_project.hashed_name(),
"file_id": utils.get_hash(node),
"status": result
})
# if the --use-experimental-parser flag was set, and the experimental parser succeeded
elif not isinstance(experimentally_parsed, Exception):
elif isinstance(experimentally_parsed, Dict):
# since it doesn't need python jinja, fit the refs, sources, and configs
# into the node. Down the line the rest of the node will be updated with
# this information. (e.g. depends_on etc.)
config._config_calls = config_calls
config._config_call_dict = config_call_dict
# this uses the updated config to set all the right things in the node.
# if there are hooks present, it WILL render jinja. Will need to change
# when the experimental parser supports hooks
self.update_parsed_node(node, config)
self.update_parsed_node_config(node, config)
# update the unrendered config with values from the file.
# values from yaml files are in there already
node.unrendered_config.update(dict(experimentally_parsed['configs']))
# set refs, sources, and configs on the node object
# set refs and sources on the node object
node.refs += experimentally_parsed['refs']
node.sources += experimentally_parsed['sources']
for configv in experimentally_parsed['configs']:
node.config[configv[0]] = configv[1]
# configs don't need to be merged into the node
# setting them in config._config_call_dict is sufficient
self.manifest._parsing_info.static_analysis_parsed_path_count += 1
# the experimental parser tried and failed on this model.
# the experimental parser didn't run on this model.
# fall back to python jinja rendering.
elif experimentally_parsed in ["has_banned_macro"]:
# not logging here since the reason should have been logged above
super().render_update(node, config)
# the experimental parser ran on this model and failed.
# fall back to python jinja rendering.
else:
logger.debug(
f"1602: parser fallback to jinja because of extractor failure for {node.path}"
)
super().render_update(node, config)
# checks for banned macros
def _has_banned_macro(
self, node: ParsedModelNode
) -> bool:
# first check if there is a banned macro defined in scope for this model file
root_project_name = self.root_project.project_name
project_name = node.package_name
banned_macros = ['ref', 'source', 'config']
all_banned_macro_keys: Iterator[str] = chain.from_iterable(
map(
lambda name: [
f"macro.{project_name}.{name}",
f"macro.{root_project_name}.{name}"
],
banned_macros
)
)
return reduce(
lambda z, key: z or (key in self.manifest.macros),
all_banned_macro_keys,
False
)
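
Reviewer note: to make the check above concrete, here is a self-contained sketch of the same logic with a plain dict standing in for manifest.macros (the helper name and sample project are invented, not part of the diff):

from functools import reduce
from itertools import chain
from typing import Dict, Iterator

def has_banned_macro(project_name: str, root_project_name: str,
                     macros: Dict[str, object]) -> bool:
    # keys in manifest.macros are fully qualified: macro.<package>.<name>
    banned = ['ref', 'source', 'config']
    keys: Iterator[str] = chain.from_iterable(
        [f"macro.{project_name}.{name}", f"macro.{root_project_name}.{name}"]
        for name in banned
    )
    return reduce(lambda found, key: found or (key in macros), keys, False)

# a project that overrides ref() makes the experimental parser step aside:
assert has_banned_macro("my_project", "my_project",
                        {"macro.my_project.ref": object()}) is True
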
# returns a list of string codes to be sent as a tracking event
def _get_sample_result(
sample_output: Optional[Union[str, Dict[str, Any]]],
config_call_dict: Dict[str, Any],
source_calls: List[List[str]],
node: ParsedModelNode,
config: ContextConfig
) -> List[str]:
result: List[str] = []
# experimental parser didn't run
if sample_output is None:
result += ["09_experimental_parser_skipped"]
# experimental parser couldn't parse
elif (isinstance(sample_output, str)):
if sample_output == "cannot_parse":
result += ["01_experimental_parser_cannot_parse"]
elif sample_output == "has_banned_macro":
result += ["08_has_banned_macro"]
else:
# look for false positive configs
for k in config_call_dict.keys():
if k not in config._config_call_dict:
result += ["02_false_positive_config_value"]
break
# look for missed configs
for k in config._config_call_dict.keys():
if k not in config_call_dict:
result += ["03_missed_config_value"]
break
# look for false positive sources
for s in sample_output['sources']:
if s not in node.sources:
result += ["04_false_positive_source_value"]
break
# look for missed sources
for s in node.sources:
if s not in sample_output['sources']:
result += ["05_missed_source_value"]
break
# look for false positive refs
for r in sample_output['refs']:
if r not in node.refs:
result += ["06_false_positive_ref_value"]
break
# look for missed refs
for r in node.refs:
if r not in sample_output['refs']:
result += ["07_missed_ref_value"]
break
# if there are no errors, return a success value
if not result:
result = ["00_exact_match"]
return result
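
Reviewer note: for quick reference, the status codes emitted above collected in one place; the descriptions paraphrase the comments in _get_sample_result and render_update:

SAMPLE_STATUS_CODES = {
    "00_exact_match": "experimental parser agreed with the jinja render",
    "01_experimental_parser_cannot_parse": "extractor raised an ExtractionError",
    "02_false_positive_config_value": "config key seen only by the experimental parser",
    "03_missed_config_value": "config key seen only by the jinja render",
    "04_false_positive_source_value": "source seen only by the experimental parser",
    "05_missed_source_value": "source seen only by the jinja render",
    "06_false_positive_ref_value": "ref seen only by the experimental parser",
    "07_missed_ref_value": "ref seen only by the jinja render",
    "08_has_banned_macro": "ref/source/config is overridden, so the parser stepped aside",
    "09_experimental_parser_skipped": "the experimental parser did not run for this sample",
}
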


@@ -46,6 +46,7 @@ class PartialParsing:
self.deleted_manifest = Manifest()
self.macro_child_map: Dict[str, List[str]] = {}
self.build_file_diff()
self.processing_file = None
def skip_parsing(self):
return (
@@ -104,10 +105,10 @@ class PartialParsing:
}
if changed_or_deleted_macro_file:
self.macro_child_map = self.saved_manifest.build_macro_child_map()
logger.info(f"Partial parsing enabled: "
f"{len(deleted) + len(deleted_schema_files)} files deleted, "
f"{len(added)} files added, "
f"{len(changed) + len(changed_schema_files)} files changed.")
logger.debug(f"Partial parsing enabled: "
f"{len(deleted) + len(deleted_schema_files)} files deleted, "
f"{len(added)} files added, "
f"{len(changed) + len(changed_schema_files)} files changed.")
self.file_diff = file_diff
# generate the list of files that need parsing
@@ -118,16 +119,21 @@ class PartialParsing:
# Need to add new files first, because changes in schema files
# might refer to them
for file_id in self.file_diff['added']:
self.processing_file = file_id
self.add_to_saved(file_id)
# Need to process schema files next, because the dictionaries
# need to be in place for handling SQL file changes
for file_id in self.file_diff['changed_schema_files']:
self.processing_file = file_id
self.change_schema_file(file_id)
for file_id in self.file_diff['deleted_schema_files']:
self.processing_file = file_id
self.delete_schema_file(file_id)
for file_id in self.file_diff['deleted']:
self.processing_file = file_id
self.delete_from_saved(file_id)
for file_id in self.file_diff['changed']:
self.processing_file = file_id
self.update_in_saved(file_id)
return self.project_parser_files
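
Reviewer note: the ordering above is deliberate; a minimal sketch of the file_diff it walks (file ids and paths are invented):

file_diff = {
    'added': ['my_project://models/new_model.sql'],
    'changed_schema_files': ['my_project://models/schema.yml'],
    'deleted_schema_files': [],
    'deleted': ['my_project://models/old_model.sql'],
    'changed': ['my_project://models/existing_model.sql'],
}
# Added files go first so that changed schema files can refer to them;
# schema files go next so their dictionaries exist before SQL changes are handled.
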
@@ -147,6 +153,18 @@ class PartialParsing:
file_id not in self.file_diff['deleted']):
self.project_parser_files[project_name][parser_name].append(file_id)
def already_scheduled_for_parsing(self, source_file):
file_id = source_file.file_id
project_name = source_file.project_name
if project_name not in self.project_parser_files:
return False
parser_name = parse_file_type_to_parser[source_file.parse_file_type]
if parser_name not in self.project_parser_files[project_name]:
return False
if file_id not in self.project_parser_files[project_name][parser_name]:
return False
return True
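
Reviewer note: already_scheduled_for_parsing walks a nested mapping; a sketch of its expected shape (project, parser, and file ids invented):

project_parser_files = {
    'my_project': {
        'ModelParser': ['my_project://models/orders.sql'],
        'SchemaParser': ['my_project://models/schema.yml'],
    },
}
# A file is "already scheduled" only if its project, parser, and file id
# are all present in this structure.
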
# Add new files, including schema files
def add_to_saved(self, file_id):
# add file object to saved manifest.files
@@ -211,6 +229,9 @@ class PartialParsing:
# Updated schema files should have been processed already.
def update_mssat_in_saved(self, new_source_file, old_source_file):
if self.already_scheduled_for_parsing(old_source_file):
return
# These files only have one node.
unique_id = old_source_file.nodes[0]
@@ -251,12 +272,16 @@ class PartialParsing:
schema_file.node_patches.remove(unique_id)
def update_macro_in_saved(self, new_source_file, old_source_file):
if self.already_scheduled_for_parsing(old_source_file):
return
self.handle_macro_file_links(old_source_file, follow_references=True)
file_id = new_source_file.file_id
self.saved_files[file_id] = new_source_file
self.add_to_pp_files(new_source_file)
def update_doc_in_saved(self, new_source_file, old_source_file):
if self.already_scheduled_for_parsing(old_source_file):
return
self.delete_doc_node(old_source_file)
self.saved_files[new_source_file.file_id] = new_source_file
self.add_to_pp_files(new_source_file)
@@ -343,7 +368,8 @@ class PartialParsing:
for unique_id in macros:
if unique_id not in self.saved_manifest.macros:
# This happens when a macro has already been removed
source_file.macros.remove(unique_id)
if unique_id in source_file.macros:
source_file.macros.remove(unique_id)
continue
base_macro = self.saved_manifest.macros.pop(unique_id)
@@ -369,7 +395,9 @@ class PartialParsing:
macro_patch = self.get_schema_element(macro_patches, base_macro.name)
self.delete_schema_macro_patch(schema_file, macro_patch)
self.merge_patch(schema_file, 'macros', macro_patch)
source_file.macros.remove(unique_id)
# The macro may have already been removed by handling macro children
if unique_id in source_file.macros:
source_file.macros.remove(unique_id)
# similar to schedule_nodes_for_parsing but doesn't do sources and exposures
# and handles schema tests
@@ -385,12 +413,21 @@ class PartialParsing:
patch_list = []
if key in schema_file.dict_from_yaml:
patch_list = schema_file.dict_from_yaml[key]
node_patch = self.get_schema_element(patch_list, name)
if node_patch:
self.delete_schema_mssa_links(schema_file, key, node_patch)
self.merge_patch(schema_file, key, node_patch)
if unique_id in schema_file.node_patches:
schema_file.node_patches.remove(unique_id)
patch = self.get_schema_element(patch_list, name)
if patch:
if key in ['models', 'seeds', 'snapshots']:
self.delete_schema_mssa_links(schema_file, key, patch)
self.merge_patch(schema_file, key, patch)
if unique_id in schema_file.node_patches:
schema_file.node_patches.remove(unique_id)
elif key == 'sources':
# re-schedule source
if 'overrides' in patch:
# This is a source patch; need to re-parse orig source
self.remove_source_override_target(patch)
self.delete_schema_source(schema_file, patch)
self.remove_tests(schema_file, 'sources', patch['name'])
self.merge_patch(schema_file, 'sources', patch)
else:
file_id = node.file_id
if file_id in self.saved_files and file_id not in self.file_diff['deleted']:
@@ -426,7 +463,13 @@ class PartialParsing:
new_schema_file = self.new_files[file_id]
saved_yaml_dict = saved_schema_file.dict_from_yaml
new_yaml_dict = new_schema_file.dict_from_yaml
saved_schema_file.pp_dict = {"version": saved_yaml_dict['version']}
if 'version' in new_yaml_dict:
# despite the fact that this goes in the saved_schema_file, it
# should represent the new yaml dictionary, and should produce
# an error if the updated yaml file doesn't have a version
saved_schema_file.pp_dict = {"version": new_yaml_dict['version']}
else:
saved_schema_file.pp_dict = {}
self.handle_schema_file_changes(saved_schema_file, saved_yaml_dict, new_yaml_dict)
# copy from new schema_file to saved_schema_file to preserve references
@@ -611,8 +654,9 @@ class PartialParsing:
def remove_tests(self, schema_file, dict_key, name):
tests = schema_file.get_tests(dict_key, name)
for test_unique_id in tests:
node = self.saved_manifest.nodes.pop(test_unique_id)
self.deleted_manifest.nodes[test_unique_id] = node
if test_unique_id in self.saved_manifest.nodes:
node = self.saved_manifest.nodes.pop(test_unique_id)
self.deleted_manifest.nodes[test_unique_id] = node
schema_file.remove_tests(dict_key, name)
def delete_schema_source(self, schema_file, source_dict):
@@ -634,19 +678,17 @@ class PartialParsing:
def delete_schema_macro_patch(self, schema_file, macro):
# This is just macro patches that need to be reapplied
for unique_id in schema_file.macro_patches:
parts = unique_id.split('.')
macro_name = parts[-1]
if macro_name == macro['name']:
macro_unique_id = unique_id
break
macro_unique_id = None
if macro['name'] in schema_file.macro_patches:
macro_unique_id = schema_file.macro_patches[macro['name']]
del schema_file.macro_patches[macro['name']]
if macro_unique_id and macro_unique_id in self.saved_manifest.macros:
macro = self.saved_manifest.macros.pop(macro_unique_id)
self.deleted_manifest.macros[macro_unique_id] = macro
macro_file_id = macro.file_id
self.add_to_pp_files(self.saved_files[macro_file_id])
if macro_unique_id in schema_file.macro_patches:
schema_file.macro_patches.remove(macro_unique_id)
if macro_file_id in self.new_files:
self.saved_files[macro_file_id] = self.new_files[macro_file_id]
self.add_to_pp_files(self.saved_files[macro_file_id])
# exposures are created only from schema files, so just delete
# the exposure.


@@ -6,22 +6,40 @@ from dbt.contracts.files import (
from dbt.parser.schemas import yaml_from_file, schema_file_keys, check_format_version
from dbt.exceptions import CompilationException
from dbt.parser.search import FilesystemSearcher
from typing import Optional
# This loads the file contents and creates the SourceFile object
def load_source_file(
path: FilePath, parse_file_type: ParseFileType,
project_name: str) -> AnySourceFile:
file_contents = load_file_contents(path.absolute_path, strip=False)
checksum = FileHash.from_contents(file_contents)
project_name: str, saved_files,) -> Optional[AnySourceFile]:
sf_cls = SchemaSourceFile if parse_file_type == ParseFileType.Schema else SourceFile
source_file = sf_cls(path=path, checksum=checksum,
source_file = sf_cls(path=path, checksum=FileHash.empty(),
parse_file_type=parse_file_type, project_name=project_name)
source_file.contents = file_contents.strip()
skip_loading_schema_file = False
if (parse_file_type == ParseFileType.Schema and
saved_files and source_file.file_id in saved_files):
old_source_file = saved_files[source_file.file_id]
if (source_file.path.modification_time != 0.0 and
old_source_file.path.modification_time == source_file.path.modification_time):
source_file.checksum = old_source_file.checksum
source_file.dfy = old_source_file.dfy
skip_loading_schema_file = True
if not skip_loading_schema_file:
file_contents = load_file_contents(path.absolute_path, strip=False)
source_file.checksum = FileHash.from_contents(file_contents)
source_file.contents = file_contents.strip()
if parse_file_type == ParseFileType.Schema and source_file.contents:
dfy = yaml_from_file(source_file)
validate_yaml(source_file.path.original_file_path, dfy)
source_file.dfy = dfy
if dfy:
validate_yaml(source_file.path.original_file_path, dfy)
source_file.dfy = dfy
else:
source_file = None
return source_file
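
Reviewer note: the skip-loading branch above relies on FilePath.modification_time, which is populated by the FilesystemSearcher change further down in this diff; a hedged sketch of how such a value could be produced (helper name invented):

import os

def modification_time(absolute_path: str) -> float:
    # 0.0 disables the shortcut, forcing load_source_file to re-read the file
    try:
        return os.path.getmtime(absolute_path)
    except OSError:
        return 0.0
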
@@ -65,7 +83,7 @@ def load_seed_source_file(match: FilePath, project_name) -> SourceFile:
# Use the FilesystemSearcher to get a bunch of FilePaths, then turn
# them into a bunch of SourceFile objects
def get_source_files(project, paths, extension, parse_file_type):
def get_source_files(project, paths, extension, parse_file_type, saved_files):
# file path list
fp_list = list(FilesystemSearcher(
project, paths, extension
@@ -76,15 +94,17 @@ def get_source_files(project, paths, extension, parse_file_type):
if parse_file_type == ParseFileType.Seed:
fb_list.append(load_seed_source_file(fp, project.project_name))
else:
fb_list.append(load_source_file(
fp, parse_file_type, project.project_name))
file = load_source_file(fp, parse_file_type, project.project_name, saved_files)
# only append the file if it has contents. added to fix #3568
if file:
fb_list.append(file)
return fb_list
def read_files_for_parser(project, files, dirs, extension, parse_ft):
def read_files_for_parser(project, files, dirs, extension, parse_ft, saved_files):
parser_files = []
source_files = get_source_files(
project, dirs, extension, parse_ft
project, dirs, extension, parse_ft, saved_files
)
for sf in source_files:
files[sf.file_id] = sf
@@ -96,46 +116,46 @@ def read_files_for_parser(project, files, dirs, extension, parse_ft):
# dictionary needs to be passed in. What determines the order of
# the various projects? Is the root project always last? Do the
# non-root projects need to be done separately in order?
def read_files(project, files, parser_files):
def read_files(project, files, parser_files, saved_files):
project_files = {}
project_files['MacroParser'] = read_files_for_parser(
project, files, project.macro_paths, '.sql', ParseFileType.Macro,
project, files, project.macro_paths, '.sql', ParseFileType.Macro, saved_files
)
project_files['ModelParser'] = read_files_for_parser(
project, files, project.source_paths, '.sql', ParseFileType.Model,
project, files, project.source_paths, '.sql', ParseFileType.Model, saved_files
)
project_files['SnapshotParser'] = read_files_for_parser(
project, files, project.snapshot_paths, '.sql', ParseFileType.Snapshot,
project, files, project.snapshot_paths, '.sql', ParseFileType.Snapshot, saved_files
)
project_files['AnalysisParser'] = read_files_for_parser(
project, files, project.analysis_paths, '.sql', ParseFileType.Analysis,
project, files, project.analysis_paths, '.sql', ParseFileType.Analysis, saved_files
)
project_files['DataTestParser'] = read_files_for_parser(
project, files, project.test_paths, '.sql', ParseFileType.Test,
project, files, project.test_paths, '.sql', ParseFileType.Test, saved_files
)
project_files['SeedParser'] = read_files_for_parser(
project, files, project.data_paths, '.csv', ParseFileType.Seed,
project, files, project.data_paths, '.csv', ParseFileType.Seed, saved_files
)
project_files['DocumentationParser'] = read_files_for_parser(
project, files, project.docs_paths, '.md', ParseFileType.Documentation,
project, files, project.docs_paths, '.md', ParseFileType.Documentation, saved_files
)
project_files['SchemaParser'] = read_files_for_parser(
project, files, project.all_source_paths, '.yml', ParseFileType.Schema,
project, files, project.all_source_paths, '.yml', ParseFileType.Schema, saved_files
)
# Also read .yaml files for schema files. Might be better to change
# 'read_files_for_parser' to accept an array in the future.
yaml_files = read_files_for_parser(
project, files, project.all_source_paths, '.yaml', ParseFileType.Schema,
project, files, project.all_source_paths, '.yaml', ParseFileType.Schema, saved_files
)
project_files['SchemaParser'].extend(yaml_files)


@@ -190,9 +190,9 @@ class TestBuilder(Generic[Testable]):
r'(?P<test_name>([a-zA-Z_][0-9a-zA-Z_]*))'
)
# kwargs representing test configs
MODIFIER_ARGS = (
CONFIG_ARGS = (
'severity', 'tags', 'enabled', 'where', 'limit', 'warn_if', 'error_if',
'fail_calc', 'store_failures'
'fail_calc', 'store_failures', 'meta', 'database', 'schema', 'alias',
)
def __init__(
@@ -224,13 +224,24 @@ class TestBuilder(Generic[Testable]):
groups = match.groupdict()
self.name: str = groups['test_name']
self.namespace: str = groups['test_namespace']
self.modifiers: Dict[str, Any] = {}
for key in self.MODIFIER_ARGS:
self.config: Dict[str, Any] = {}
for key in self.CONFIG_ARGS:
value = self.args.pop(key, None)
# 'modifier' config could be either top level arg or in config
if value and 'config' in self.args and key in self.args['config']:
raise_compiler_error(
'Test cannot have the same key at the top-level and in config'
)
if not value and 'config' in self.args:
value = self.args['config'].pop(key, None)
if isinstance(value, str):
value = get_rendered(value, render_ctx, native=True)
if value is not None:
self.modifiers[key] = value
self.config[key] = value
if 'config' in self.args:
del self.args['config']
if self.namespace is not None:
self.package_name = self.namespace
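
Reviewer note: a small illustration of the rule enforced above, with invented test kwargs; a config key may live at the top level or under 'config', but not both:

# accepted: severity supplied once, inside the test's config dict
args_ok = {'config': {'severity': 'warn'}}

# also accepted: severity as a top-level test argument
args_also_ok = {'severity': 'warn'}

# rejected by the check above, since 'severity' appears in both places
args_bad = {'severity': 'warn', 'config': {'severity': 'error'}}
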
@@ -240,8 +251,8 @@ class TestBuilder(Generic[Testable]):
self.fqn_name: str = fqn_name
# use hashed name as alias if too long
if compiled_name != fqn_name:
self.modifiers['alias'] = compiled_name
if compiled_name != fqn_name and 'alias' not in self.config:
self.config['alias'] = compiled_name
def _bad_type(self) -> TypeError:
return TypeError('invalid target type "{}"'.format(type(self.target)))
@@ -282,15 +293,15 @@ class TestBuilder(Generic[Testable]):
@property
def enabled(self) -> Optional[bool]:
return self.modifiers.get('enabled')
return self.config.get('enabled')
@property
def alias(self) -> Optional[str]:
return self.modifiers.get('alias')
return self.config.get('alias')
@property
def severity(self) -> Optional[str]:
sev = self.modifiers.get('severity')
sev = self.config.get('severity')
if sev:
return sev.upper()
else:
@@ -298,30 +309,72 @@ class TestBuilder(Generic[Testable]):
@property
def store_failures(self) -> Optional[bool]:
return self.modifiers.get('store_failures')
return self.config.get('store_failures')
@property
def where(self) -> Optional[str]:
return self.modifiers.get('where')
return self.config.get('where')
@property
def limit(self) -> Optional[int]:
return self.modifiers.get('limit')
return self.config.get('limit')
@property
def warn_if(self) -> Optional[str]:
return self.modifiers.get('warn_if')
return self.config.get('warn_if')
@property
def error_if(self) -> Optional[str]:
return self.modifiers.get('error_if')
return self.config.get('error_if')
@property
def fail_calc(self) -> Optional[str]:
return self.modifiers.get('fail_calc')
return self.config.get('fail_calc')
@property
def meta(self) -> Optional[dict]:
return self.config.get('meta')
@property
def database(self) -> Optional[str]:
return self.config.get('database')
@property
def schema(self) -> Optional[str]:
return self.config.get('schema')
def get_static_config(self):
config = {}
if self.alias is not None:
config['alias'] = self.alias
if self.severity is not None:
config['severity'] = self.severity
if self.enabled is not None:
config['enabled'] = self.enabled
if self.where is not None:
config['where'] = self.where
if self.limit is not None:
config['limit'] = self.limit
if self.warn_if is not None:
config['warn_if'] = self.warn_if
if self.error_if is not None:
config['error_if'] = self.error_if
if self.fail_calc is not None:
config['fail_calc'] = self.fail_calc
if self.store_failures is not None:
config['store_failures'] = self.store_failures
if self.meta is not None:
config['meta'] = self.meta
if self.database is not None:
config['database'] = self.database
if self.schema is not None:
config['schema'] = self.schema
return config
def tags(self) -> List[str]:
tags = self.modifiers.get('tags', [])
tags = self.config.get('tags', [])
if isinstance(tags, str):
tags = [tags]
if not isinstance(tags, list):
@@ -360,7 +413,7 @@ class TestBuilder(Generic[Testable]):
else str(value)
)
for key, value
in self.modifiers.items()
in self.config.items()
])
if configs:
return f"{{{{ config({configs}) }}}}"
@@ -380,12 +433,8 @@ class TestBuilder(Generic[Testable]):
def build_model_str(self):
targ = self.target
cfg_where = "config.get('where')"
if isinstance(self.target, UnparsedNodeUpdate):
identifier = self.target.name
target_str = f"{{{{ ref('{targ.name}') }}}}"
target_str = f"ref('{targ.name}')"
elif isinstance(self.target, UnpatchedSourceDefinition):
identifier = self.target.table.name
target_str = f"{{{{ source('{targ.source.name}', '{targ.table.name}') }}}}"
filtered = f"(select * from {target_str} where {{{{{cfg_where}}}}}) {identifier}"
return f"{{% if {cfg_where} %}}{filtered}{{% else %}}{target_str}{{% endif %}}"
target_str = f"source('{targ.source.name}', '{targ.table.name}')"
return f"{{{{ get_where_subquery({target_str}) }}}}"


@@ -22,8 +22,7 @@ from dbt.context.providers import (
generate_parse_exposure, generate_test_context
)
from dbt.context.macro_resolver import MacroResolver
from dbt.contracts.files import FileHash
from dbt.contracts.graph.manifest import SchemaSourceFile
from dbt.contracts.files import FileHash, SchemaSourceFile
from dbt.contracts.graph.parsed import (
ParsedNodePatch,
ColumnInfo,
@@ -47,7 +46,10 @@ from dbt.contracts.graph.unparsed import (
from dbt.exceptions import (
validator_error_message, JSONValidationException,
raise_invalid_schema_yml_version, ValidationException,
CompilationException,
CompilationException, raise_duplicate_patch_name,
raise_duplicate_macro_patch_name, InternalException,
raise_duplicate_source_patch_name,
warn_or_error,
)
from dbt.node_types import NodeType
from dbt.parser.base import SimpleParser
@@ -171,15 +173,15 @@ class SchemaParser(SimpleParser[SchemaTestBlock, ParsedSchemaTestNode]):
self.project.config_version == 2
)
if all_v_2:
ctx = generate_schema_yml(
self.render_ctx = generate_schema_yml(
self.root_project, self.project.project_name
)
else:
ctx = generate_target_context(
self.render_ctx = generate_target_context(
self.root_project, self.root_project.cli_vars
)
self.raw_renderer = SchemaYamlRenderer(ctx)
self.raw_renderer = SchemaYamlRenderer(self.render_ctx)
internal_package_names = get_adapter_package_names(
self.root_project.credentials.type
@@ -287,17 +289,13 @@ class SchemaParser(SimpleParser[SchemaTestBlock, ParsedSchemaTestNode]):
tags: List[str],
column_name: Optional[str],
) -> ParsedSchemaTestNode:
render_ctx = generate_target_context(
self.root_project, self.root_project.cli_vars
)
try:
builder = TestBuilder(
test=test,
target=target,
column_name=column_name,
package_name=target.package_name,
render_ctx=render_ctx,
render_ctx=self.render_ctx,
)
except CompilationException as exc:
context = _trimmed(str(target))
@@ -318,8 +316,8 @@ class SchemaParser(SimpleParser[SchemaTestBlock, ParsedSchemaTestNode]):
# is not necessarily this package's name
fqn = self.get_fqn(fqn_path, builder.fqn_name)
# this is the config that is used in render_update
config = self.initial_config(fqn)
# this is the ContextConfig that is used in render_update
config: ContextConfig = self.initial_config(fqn)
metadata = {
'namespace': builder.namespace,
@@ -360,37 +358,10 @@ class SchemaParser(SimpleParser[SchemaTestBlock, ParsedSchemaTestNode]):
node.depends_on.add_macro(macro_unique_id)
if (macro_unique_id in
['macro.dbt.test_not_null', 'macro.dbt.test_unique']):
self.update_parsed_node(node, config)
# manually set configs
# note: this does not respect generate_alias_name() macro
if builder.alias is not None:
node.unrendered_config['alias'] = builder.alias
node.config['alias'] = builder.alias
node.alias = builder.alias
if builder.severity is not None:
node.unrendered_config['severity'] = builder.severity
node.config['severity'] = builder.severity
if builder.enabled is not None:
node.unrendered_config['enabled'] = builder.enabled
node.config['enabled'] = builder.enabled
if builder.where is not None:
node.unrendered_config['where'] = builder.where
node.config['where'] = builder.where
if builder.limit is not None:
node.unrendered_config['limit'] = builder.limit
node.config['limit'] = builder.limit
if builder.warn_if is not None:
node.unrendered_config['warn_if'] = builder.warn_if
node.config['warn_if'] = builder.warn_if
if builder.error_if is not None:
node.unrendered_config['error_if'] = builder.error_if
node.config['error_if'] = builder.error_if
if builder.fail_calc is not None:
node.unrendered_config['fail_calc'] = builder.fail_calc
node.config['fail_calc'] = builder.fail_calc
if builder.store_failures is not None:
node.unrendered_config['store_failures'] = builder.store_failures
node.config['store_failures'] = builder.store_failures
config_call_dict = builder.get_static_config()
config._config_call_dict = config_call_dict
# This sets the config from dbt_project
self.update_parsed_node_config(node, config)
# source node tests are processed at patch_source time
if isinstance(builder.target, UnpatchedSourceDefinition):
sources = [builder.target.fqn[-2], builder.target.fqn[-1]]
@@ -410,7 +381,7 @@ class SchemaParser(SimpleParser[SchemaTestBlock, ParsedSchemaTestNode]):
get_rendered(
node.raw_sql, context, node, capture_macros=True
)
self.update_parsed_node(node, config)
self.update_parsed_node_config(node, config)
except ValidationError as exc:
# we got a ValidationError - probably bad types in config()
msg = validator_error_message(exc)
@@ -678,7 +649,14 @@ class SourceParser(YamlDocsReader):
if is_override:
data['path'] = self.yaml.path.original_file_path
patch = self._target_from_dict(SourcePatch, data)
self.manifest.add_source_patch(self.yaml.file, patch)
assert isinstance(self.yaml.file, SchemaSourceFile)
source_file = self.yaml.file
# source patches must be unique
key = (patch.overrides, patch.name)
if key in self.manifest.source_patches:
raise_duplicate_source_patch_name(patch, self.manifest.source_patches[key])
self.manifest.source_patches[key] = patch
source_file.source_patches.append(key)
else:
source = self._target_from_dict(UnparsedSourceDefinition, data)
self.add_source_definitions(source)
@@ -775,6 +753,9 @@ class NonSourceParser(YamlDocsReader, Generic[NonSourceTarget, Parsed]):
# target_type: UnparsedNodeUpdate, UnparsedAnalysisUpdate,
# or UnparsedMacroUpdate
self._target_type().validate(data)
if self.key != 'macros':
# macros don't have the 'config' key support yet
self.normalize_meta_attribute(data, path)
node = self._target_type().from_dict(data)
except (ValidationError, JSONValidationException) as exc:
msg = error_context(path, self.key, data, exc)
@@ -782,6 +763,33 @@ class NonSourceParser(YamlDocsReader, Generic[NonSourceTarget, Parsed]):
else:
yield node
# We want to raise an error if 'meta' is in two places, and move 'meta'
# from toplevel to config if necessary
def normalize_meta_attribute(self, data, path):
if 'meta' in data:
if 'config' in data and 'meta' in data['config']:
raise CompilationException(f"""
In {path}: found meta dictionary in 'config' dictionary and as top-level key.
Remove the top-level key and define it under 'config' dictionary only.
""".strip())
else:
if 'config' not in data:
data['config'] = {}
data['config']['meta'] = data.pop('meta')
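
Reviewer note: a small before/after of normalize_meta_attribute on a yaml-derived dict (node name invented); supplying meta both at the top level and under config raises CompilationException instead:

before = {'name': 'orders', 'meta': {'owner': 'analytics'}}
after = {'name': 'orders', 'config': {'meta': {'owner': 'analytics'}}}
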
def patch_node_config(self, node, patch):
# Get the ContextConfig that's used in calculating the config
# This must match the model resource_type that's being patched
config = ContextConfig(
self.schema_parser.root_project,
node.fqn,
node.resource_type,
self.schema_parser.project.project_name,
)
# We need to re-apply the config_call_dict after the patch config
config._config_call_dict = node.config_call_dict
self.schema_parser.update_parsed_node_config(node, config, patch_config_dict=patch.config)
class NodePatchParser(
NonSourceParser[NodeTarget, ParsedNodePatch],
@@ -790,6 +798,9 @@ class NodePatchParser(
def parse_patch(
self, block: TargetBlock[NodeTarget], refs: ParserRef
) -> None:
# We're not passing the ParsedNodePatch around anymore, so we
# could possibly skip creating one. Leaving here for now for
# code consistency.
patch = ParsedNodePatch(
name=block.target.name,
original_file_path=block.target.original_file_path,
@@ -799,8 +810,35 @@ class NodePatchParser(
columns=refs.column_info,
meta=block.target.meta,
docs=block.target.docs,
config=block.target.config,
)
self.manifest.add_patch(self.yaml.file, patch)
assert isinstance(self.yaml.file, SchemaSourceFile)
source_file: SchemaSourceFile = self.yaml.file
if patch.yaml_key in ['models', 'seeds', 'snapshots']:
unique_id = self.manifest.ref_lookup.get_unique_id(patch.name, None)
elif patch.yaml_key == 'analyses':
unique_id = self.manifest.analysis_lookup.get_unique_id(patch.name, None)
else:
raise InternalException(
f'Unexpected yaml_key {patch.yaml_key} for patch in '
f'file {source_file.path.original_file_path}'
)
if unique_id is None:
# This will usually happen when a node is disabled
return
# patches can't be overwritten
node = self.manifest.nodes.get(unique_id)
if node:
if node.patch_path:
package_name, existing_file_path = node.patch_path.split('://')
raise_duplicate_patch_name(patch, existing_file_path)
source_file.append_patch(patch.yaml_key, unique_id)
# If this patch has config changes, re-calculate the node config
# with the patch config
if patch.config:
self.patch_node_config(node, patch)
node.patch(patch)
class TestablePatchParser(NodePatchParser[UnparsedNodeUpdate]):
@@ -838,8 +876,24 @@ class MacroPatchParser(NonSourceParser[UnparsedMacroUpdate, ParsedMacroPatch]):
description=block.target.description,
meta=block.target.meta,
docs=block.target.docs,
config=block.target.config,
)
self.manifest.add_macro_patch(self.yaml.file, patch)
assert isinstance(self.yaml.file, SchemaSourceFile)
source_file = self.yaml.file
# macros are fully namespaced
unique_id = f'macro.{patch.package_name}.{patch.name}'
macro = self.manifest.macros.get(unique_id)
if not macro:
warn_or_error(
f'WARNING: Found patch for macro "{patch.name}" '
f'which was not found'
)
return
if macro.patch_path:
package_name, existing_file_path = macro.patch_path.split('://')
raise_duplicate_macro_patch_name(patch, existing_file_path)
source_file.macro_patches[patch.name] = unique_id
macro.patch(patch)
class ExposureParser(YamlReader):


@@ -84,6 +84,7 @@ class FilesystemSearcher(Iterable[FilePath]):
file_match = FilePath(
searched_path=result['searched_path'],
relative_path=result['relative_path'],
modification_time=result['modification_time'],
project_root=root,
)
yield file_match


@@ -286,7 +286,7 @@ class SourcePatcher:
)
return generator.calculate_node_config(
config_calls=[],
config_call_dict={},
fqn=fqn,
resource_type=NodeType.Source,
project_name=project_name,


@@ -1,6 +1,5 @@
import inspect
from abc import abstractmethod
from copy import deepcopy
from typing import List, Optional, Type, TypeVar, Generic, Dict, Any
from dbt.dataclass_schema import dbtClassMixin, ValidationError
@@ -21,7 +20,7 @@ class RemoteMethod(Generic[Parameters, Result]):
METHOD_NAME: Optional[str] = None
def __init__(self, args, config):
self.args = deepcopy(args)
self.args = args
self.config = config
@classmethod


@@ -67,15 +67,16 @@ class BootstrapProcess(dbt.flags.MP_CONTEXT.Process):
keeps everything in memory.
"""
# reset flags
dbt.flags.set_from_args(self.task.args)
user_config = None
if self.task.config is not None:
user_config = self.task.config.user_config
dbt.flags.set_from_args(self.task.args, user_config)
dbt.tracking.initialize_from_flags()
# reload the active plugin
load_plugin(self.task.config.credentials.type)
# register it
register_adapter(self.task.config)
# reset tracking, etc
self.task.config.config.set_values(self.task.args.profiles_dir)
def task_exec(self) -> None:
"""task_exec runs first inside the child process"""
if type(self.task) != RemoteListTask:


@@ -1,3 +1,4 @@
from copy import deepcopy
import threading
import uuid
from datetime import datetime
@@ -155,7 +156,7 @@ class TaskManager:
f'Manifest should not be None if the last parse state is '
f'{state}'
)
return task(self.args, self.config, self.manifest)
return task(deepcopy(self.args), self.config, self.manifest)
def rpc_task(
self, method_name: str
@@ -167,7 +168,7 @@ class TaskManager:
elif issubclass(task, RemoteManifestMethod):
return self._get_manifest_callable(task)
elif issubclass(task, RemoteMethod):
return task(self.args, self.config)
return task(deepcopy(self.args), self.config)
else:
raise dbt.exceptions.InternalException(
f'Got a task with an invalid type! {task} with method '


@@ -1,5 +1,8 @@
from dataclasses import dataclass
import re
from typing import List
from packaging import version as packaging_version
from dbt.exceptions import VersionsNotCompatibleException
import dbt.utils
@@ -125,12 +128,26 @@ class VersionSpecifier(VersionSpecification):
if self.is_unbounded or other.is_unbounded:
return 0
for key in ['major', 'minor', 'patch']:
comparison = int(getattr(self, key)) - int(getattr(other, key))
if comparison > 0:
for key in ['major', 'minor', 'patch', 'prerelease']:
(a, b) = (getattr(self, key), getattr(other, key))
if key == 'prerelease':
if a is None and b is None:
continue
if a is None:
if self.matcher == Matchers.LESS_THAN:
# If 'a' is not a pre-release but 'b' is, and b must be
# less than a, return -1 to prevent installations of
# pre-releases with greater base version than a
# maximum specified non-pre-release version.
return -1
# Otherwise, stable releases are considered greater than
# pre-release
return 1
if b is None:
return -1
if packaging_version.parse(a) > packaging_version.parse(b):
return 1
elif comparison < 0:
elif packaging_version.parse(a) < packaging_version.parse(b):
return -1
equal = ((self.matcher == Matchers.GREATER_THAN_OR_EQUAL and
@@ -408,10 +425,27 @@ def resolve_to_specific_version(requested_range, available_versions):
version = VersionSpecifier.from_version_string(version_string)
if(versions_compatible(version,
requested_range.start,
requested_range.end) and
requested_range.start, requested_range.end) and
(max_version is None or max_version.compare(version) < 0)):
max_version = version
max_version_string = version_string
return max_version_string
def filter_installable(
versions: List[str],
install_prerelease: bool
) -> List[str]:
installable = []
installable_dict = {}
for version_string in versions:
version = VersionSpecifier.from_version_string(version_string)
if install_prerelease or not version.prerelease:
installable.append(version)
installable_dict[str(version)] = version_string
sorted_installable = sorted(installable)
sorted_installable_original_versions = [
str(installable_dict.get(str(version))) for version in sorted_installable
]
return sorted_installable_original_versions
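
Reviewer note: a hedged example of the intended behavior, assuming VersionSpecifier ordering follows the prerelease-aware compare method above (version strings invented):

versions = ['0.21.0rc1', '0.20.2', '0.21.0', '0.19.1']
# install_prerelease=False drops the rc and sorts ascending:
#   ['0.19.1', '0.20.2', '0.21.0']
# install_prerelease=True keeps it, with the rc ordered below its final release:
#   ['0.19.1', '0.20.2', '0.21.0rc1', '0.21.0']
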


@@ -7,6 +7,7 @@ from typing import Type, Union, Dict, Any, Optional
from dbt import tracking
from dbt import ui
from dbt import flags
from dbt.contracts.graph.manifest import Manifest
from dbt.contracts.results import (
NodeStatus, RunResult, collect_timing_info, RunStatus
@@ -21,7 +22,7 @@ from .printer import print_skip_caused_by_error, print_skip_line
from dbt.adapters.factory import register_adapter
from dbt.config import RuntimeConfig, Project
from dbt.config.profile import read_profile, PROFILES_DIR
from dbt.config.profile import read_profile
import dbt.exceptions
@@ -34,7 +35,7 @@ class NoneConfig:
def read_profiles(profiles_dir=None):
"""This is only used for some error handling"""
if profiles_dir is None:
profiles_dir = PROFILES_DIR
profiles_dir = flags.PROFILES_DIR
raw_profiles = read_profile(profiles_dir)
@@ -69,6 +70,13 @@ class BaseTask(metaclass=ABCMeta):
else:
log_manager.format_text()
@classmethod
def set_log_format(cls):
if flags.LOG_FORMAT == 'json':
log_manager.format_json()
else:
log_manager.format_text()
@classmethod
def from_args(cls, args):
try:
@@ -158,7 +166,7 @@ class ConfiguredTask(BaseTask):
INTERNAL_ERROR_STRING = """This is an error in dbt. Please try again. If \
the error persists, open an issue at https://github.com/fishtown-analytics/dbt
the error persists, open an issue at https://github.com/dbt-labs/dbt
""".strip()


@@ -1,23 +1,24 @@
from .compile import CompileTask
from .run import ModelRunner as run_model_runner
from .run import RunTask, ModelRunner as run_model_runner
from .snapshot import SnapshotRunner as snapshot_model_runner
from .seed import SeedRunner as seed_runner
from .test import TestRunner as test_runner
from dbt.graph import ResourceTypeSelector
from dbt.contracts.results import NodeStatus
from dbt.exceptions import InternalException
from dbt.graph import ResourceTypeSelector
from dbt.node_types import NodeType
from dbt.task.test import TestSelector
class BuildTask(CompileTask):
"""The Build task processes all assets of a given process and attempts to 'build'
them in an opinionated fashion. Every resource type outlined in RUNNER_MAP
will be processed by the mapped runner class.
class BuildTask(RunTask):
"""The Build task processes all assets of a given process and attempts to
'build' them in an opinionated fashion. Every resource type outlined in
RUNNER_MAP will be processed by the mapped runner class.
I.E. a resource of type Model is handled by the ModelRunner which is imported
as run_model_runner.
"""
I.E. a resource of type Model is handled by the ModelRunner which is
imported as run_model_runner. """
MARK_DEPENDENT_ERRORS_STATUSES = [NodeStatus.Error, NodeStatus.Fail]
RUNNER_MAP = {
NodeType.Model: run_model_runner,
@@ -25,6 +26,20 @@ class BuildTask(CompileTask):
NodeType.Seed: seed_runner,
NodeType.Test: test_runner,
}
ALL_RESOURCE_VALUES = frozenset({x for x in RUNNER_MAP.keys()})
@property
def resource_types(self):
if not self.args.resource_types:
return list(self.ALL_RESOURCE_VALUES)
values = set(self.args.resource_types)
if 'all' in values:
values.remove('all')
values.update(self.ALL_RESOURCE_VALUES)
return list(values)
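
Reviewer note: a standalone restatement of the expansion above (the function and literal type names are stand-ins for the resource_types property and the RUNNER_MAP keys; results are sorted here only to keep the example deterministic):

def expand_resource_types(requested, all_values):
    # no --resource-type flag: build everything
    if not requested:
        return sorted(all_values)
    values = set(requested)
    # 'all' expands to every runnable resource type
    if 'all' in values:
        values.remove('all')
        values.update(all_values)
    return sorted(values)

assert expand_resource_types([], {'model', 'snapshot', 'seed', 'test'}) == \
    ['model', 'seed', 'snapshot', 'test']
assert expand_resource_types(['all'], {'model', 'snapshot', 'seed', 'test'}) == \
    ['model', 'seed', 'snapshot', 'test']
assert expand_resource_types(['model', 'seed'], set()) == ['model', 'seed']
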
def get_node_selector(self) -> ResourceTypeSelector:
if self.manifest is None or self.graph is None:
@@ -32,11 +47,19 @@ class BuildTask(CompileTask):
'manifest and graph must be set to get node selection'
)
resource_types = self.resource_types
if resource_types == [NodeType.Test]:
return TestSelector(
graph=self.graph,
manifest=self.manifest,
previous_state=self.previous_state,
)
return ResourceTypeSelector(
graph=self.graph,
manifest=self.manifest,
previous_state=self.previous_state,
resource_types=[x for x in self.RUNNER_MAP.keys()],
resource_types=resource_types,
)
def get_runner_type(self, node):
