Compare commits


456 Commits

Author SHA1 Message Date
Kyle Wigley
03dfb11d2b wait for successful connection before setting up db 2021-09-13 11:43:17 -04:00
Kyle Wigley
3effade266 deepcopy args when passed down to rpc task (#3850) 2021-09-10 16:34:57 -04:00
Jeremy Cohen
44e7390526 Specify macro_namespace of global_project dispatch macros (#3851)
* Specify macro_namespace of global_project dispatch macros

* Dispatch get_custom_alias, too

* Add integration tests

* Add changelog entry
2021-09-09 12:59:39 +02:00
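For context, #3851 makes the built-in (global project) macros dispatch under the `dbt` namespace. A minimal sketch of the pattern, using a hypothetical macro name (`my_macro`) rather than the actual macros touched by the PR:

```sql
{% macro my_macro(relation) -%}
  {# Naming the namespace lets users override the built-in implementation
     via the `dispatch` config in dbt_project.yml #}
  {{ return(adapter.dispatch('my_macro', macro_namespace='dbt')(relation)) }}
{%- endmacro %}

{% macro default__my_macro(relation) -%}
    select * from {{ relation }}
{%- endmacro %}
```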
Jeremy Cohen
c141798abc Use a macro for where subquery in tests (#3859)
* Use a macro for where subquery in tests

* Fix existing tests

* Add test case reproducing #3857

* Add changelog entry
2021-09-09 12:38:09 +02:00
sungchun12
df7ec3fb37 Fix dbt deps sorting behavior for non-standard version semantics (#3856)
* robust sorting and default to original version str

* more unique version semantics

* add another non-standard version example

* fix mypy issue
2021-09-09 12:13:09 +02:00
AndreasTA-AW
90e5507d03 #3682 Changed how tables and views are generated to be able to use differen… (#3691)
* Changed how tables and views are generated to be able to use different options

* 3682 added unit tests

* 3682 had conflict in changelog and became a bit messy

* 3682 Tested to add default kms to dataset and accidentally pushed the changes
2021-09-09 12:01:02 +02:00
leahwicz
332d3494b3 Adding ADR directory, guidelines, first one (#3844)
* Adding ADR README

* Adding perf testing ADR

* fill in adr sections

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-09-07 14:05:21 -04:00
Anna Filippova
6393f5a5d7 Feature: Add support for Package name changes on the Hub (#3825)
* Add warning about new package name

* Update CHANGELOG.md

* make linter happy

* Add warning about new package name

* Update CHANGELOG.md

* make linter happy

* move warnings to deprecations

* Update core/dbt/clients/registry.py

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

* add comments for posterity

* Update core/dbt/deprecations.py

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* add deprecation test

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-07 11:53:02 -04:00
Kyle Wigley
ce97a9ca7a explicitly define __init__ for Profile class (#3855) 2021-09-07 10:38:51 -04:00
Jeremy Cohen
9af071bfe4 Add adapter_unique_id to invocation tracking (#3796)
* Add properties + methods for adapter_unique_id

* Turn on tracking
2021-09-03 16:38:20 +02:00
sungchun12
45a41202f3 fix prerelease imports with loose version semantics (#3852)
* fix prereleases imports with looseversion

* update for non-standard versions
2021-09-02 17:52:36 +02:00
Kyle Wigley
9768999ca1 Only set single_threaded for the rpc list task, not the cli list task (#3848) 2021-09-02 10:11:39 -04:00
juma-adoreme
fc0d11c0a5 Parametrize key selection for the list task (#3838)
* Parametrize key selection for list task

* Remove trailing whitespace

* Add output_keys to RPC List Parameters

* Move up changelog entry, add contributor note

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-02 14:58:00 +02:00
Joel Labes
e6344205bb Add colourful count of pass/fail tests in dbt debug (#3832)
* Add colourful count of pass/fail tests in dbt debug

* Remove number of checks, move error messages into shared list

* Fix flake issues

* Update CHANGELOG.md
2021-09-02 12:15:03 +02:00
Jason Gluck
9d7a6556ef configurable postgres connect timeout (#3582)
* configurable postgres connect timeout

* changelog for #3582

* test default and change connect_timeout

* Move up contributor note in changelog

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 19:45:31 +02:00
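For reference, #3582 adds a `connect_timeout` setting to the Postgres profile. A sketch of where it lands in profiles.yml, with placeholder values:

```yaml
my_profile:
  target: dev
  outputs:
    dev:
      type: postgres
      host: db.example.com
      user: dbt_user
      password: "{{ env_var('DBT_PASSWORD') }}"
      port: 5432
      dbname: analytics
      schema: dbt_dev
      threads: 4
      connect_timeout: 30   # seconds; the new setting from this PR
```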
Daniel Bartley
15f4add0b8 Add target_project and target_dataset config aliases for snapshots on BigQuery (#3834)
* add bq alias for target_project and target_dataset

* Update CHANGELOG.md

add #3694 to changelog

* Update CHANGELOG.md

Be more specific about the change to bigquery synonym for schema only.

* Set integration test bigquery configs to use alias

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 16:08:20 +02:00
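A sketch of the new aliases in a dbt_project.yml snapshot config (project, GCP project, and dataset names are placeholders):

```yaml
snapshots:
  my_project:
    +target_project: my-gcp-project   # BigQuery alias for target_database
    +target_dataset: snapshots        # BigQuery alias for target_schema
```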
Anders
464becacd0 fewer adapters will need to re-implement basic_load_csv_rows (#3623)
* fewer adapters will need to re-implement basic_load_csv_rows

* hack version

* reordering per convention

* make redundant basic_load_csv_rows

* for next version

* Update core/dbt/include/global_project/macros/materializations/seed/seed.sql

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

* Move up changelog entry

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 15:47:36 +02:00
sungchun12
51a76d0d63 Better dbt packages version logging to aid in upgrading outdated packages (#3759)
* start blueprinting changes

* extend registry handler for latest package version

* conditional logging for latest version

* remove todo

* add conditional logging

* Upgrades is clearer

* update if elif conditions and log msg

* remove TODO

* fix flake8 errors

* blueprint unit tests

* conditions specific to hub registry

* 1 passing test for get latest version

* DRY method calls

* move version latest to hub only

* add a new line

* remove other draft tests

* update changelog

* update log language for clarity

* pass flake8

* fix changelog

* Update test/unit/test_deps.py

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* update changelog

* remove hub language

* sort for latest version and include prereleases

* fix flake8

* resolves another issue

* fix prerelease string formatting

* fix broken test

* update logging to past tense

* built-in version sorting

* handle prereleases for latest version checks

* get version latest unit test based on prerelease

* update unit test for sorting functionality

* consistent test names

* fix flake8

* clean up contributors list

* simplify if else logic

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 10:31:45 +02:00
Slava Kalashnikov
052e54d43a BigQuery copy materialization enhancement (#3606)
* Change BigQuery copy materialization

Change BigQuery copy materialization macros to copy data from several sources into single target

* Change BigQuery copy materialization

Change BigQuery connections.py to copy data from several sources into single target via copy materialization

* Change BigQuery copy materialization

Test to check default value of `copy_materialization` if it is absent in config

* Change BigQuery copy materialization

Update changelog

* Update changelog

* Var renaming + test addition

* Changelog updated

* Changelog updated

* Fix test for copy table

* Update test_bigquery_adapter.py

* Update test_bigquery_adapter.py

* Update impl.py

* Update connections.py

* Update test_bigquery_adapter.py

* Update test_bigquery_adapter.py

* Update connections.py

* Align calls from mock and from adapter

* Split long code lines

* Create additional.sql

* Update copy_as_several_tables.sql

* Update schema.yml

* Update copy.sql

* Update connections.py

* Update test_bigquery_copy_models.py

* Add contributor
2021-08-30 16:28:35 +02:00
Kyle Wigley
9e796671dd Update workflow concurrency (#3824) 2021-08-26 17:13:54 -04:00
Kyle Wigley
a9a6254f52 Address unexpected cancelled CI workflows and stop blocking Postgres integration tests (#3813) 2021-08-26 10:37:50 -04:00
Kyle Wigley
8b3a09c7ae Run postgres integration tests when dev dependency changes (#3819) 2021-08-26 10:36:24 -04:00
Jeremy Cohen
6aa4d812d4 Rewrite generic tests to support column expressions (#3812)
* Rewrite generic tests to support column expressions, too

* Fix naming
2021-08-26 10:30:03 -04:00
Kyle Wigley
07fa719fb0 Revert "Bump freezegun from 0.3.12 to 1.1.0" (#3818)
This reverts commit 650b34ae24.
2021-08-26 10:11:56 -04:00
dependabot[bot]
650b34ae24 Bump freezegun from 0.3.12 to 1.1.0 (#3206)
Bumps [freezegun](https://github.com/spulec/freezegun) from 0.3.12 to 1.1.0.
- [Release notes](https://github.com/spulec/freezegun/releases)
- [Changelog](https://github.com/spulec/freezegun/blob/master/CHANGELOG)
- [Commits](https://github.com/spulec/freezegun/compare/0.3.12...1.1.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 18:18:37 -04:00
dependabot[bot]
0a935855f3 Update sqlparse requirement from <0.4,>=0.2.3 to >=0.2.3,<0.5 in /core (#3074)
Updates the requirements on [sqlparse](https://github.com/andialbrecht/sqlparse) to permit the latest version.
- [Release notes](https://github.com/andialbrecht/sqlparse/releases)
- [Changelog](https://github.com/andialbrecht/sqlparse/blob/master/CHANGELOG)
- [Commits](https://github.com/andialbrecht/sqlparse/compare/0.2.3...0.4.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 09:59:36 -04:00
dependabot[bot]
d500aae4dc Bump ubuntu from 18.04 to 20.04 (#3073)
Bumps ubuntu from 18.04 to 20.04.

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 09:35:24 -04:00
Kyle Wigley
370d3e746d Remove fishtown-analytics references 😢 (#3801) 2021-08-25 09:24:41 -04:00
Kyle Wigley
ab06149c81 Moving CI to GitHub actions (#3669)
* test

* test test

* try this again

* test actions in same repo

* nvm revert

* formatting

* fix sh script for building dists

* fix windows build

* add concurrency

* fix random 'Cannot track experimental parser info when active user is None' error

* fix build workflow

* test slim ci

* has changes

* set up postgres for other OS

* update descriptions

* turn off python3.9 unit tests

* add changelog

* clean up todo

* Update .github/workflows/main.yml

* create actions for common code

* temp commit to test

* cosmetic updates

* dev review feedback

* updates

* fix build checks

* rm auto formatting changes

* review feedback: update order of script for setting up postgres on macos runner

* review feedback: add reasoning for not using secrets in workflow

* review feedback: rm unnecessary changes

* more review feedback

* test pull_request_target action

* fix path to cli tool

* split up lint and unit workflows for clear responsibilities

* rm `branches-ignore` filter from pull request trigger

* testing push event

* test

* try this again

* test actions in same repo

* nvm revert

* formatting

* fix windows build

* add concurrency

* fix build workflow

* test slim ci

* has changes

* set up postgres for other OS

* update descriptions

* turn off python3.9 unit tests

* add changelog

* clean up todo

* Update .github/workflows/main.yml

* create actions for common code

* cosmetic updates

* dev review feedback

* updates

* fix build checks

* rm auto formatting changes

* review feedback: add reasoning for not using secrets in workflow

* review feedback: rm unnecessary changes

* more review feedback

* test pull_request_target action

* fix path to cli tool

* split up lint and unit workflows for clear responsibilities

* rm `branches-ignore` filter from pull request trigger

* test dynamic matrix generation

* update label logic

* finishing touches

* align naming

* pass opts to pytest

* slim down push matrix, there are a lot of jobs

* test bump num of proc

* update matrix for all event triggers

* handle case when no changes require integration tests

* dev review feedback

* clean up and add branch name for testing

* Add test results publishing as artifact (#3794)

* Test failures file

* Add testing branch

* Adding upload steps

* Adding date to name

* Adding to integration

* Always upload artifacts

* Adding adapter type

* Always publish unit test results

* Adding comments

* rm unnecessary env var

* fix changelog

* update job name

* clean up python deps

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-08-24 17:12:42 -04:00
Gerda Shank
e72895c7c9 Merge pull request #3791 from dbt-labs/3210_select_equals_model
[#3210] Make --models and --select synonyms, except for 'ls'
2021-08-24 14:44:24 -04:00
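Usage after this change, roughly (the model name is illustrative):

```shell
# --models and --select are now interchangeable for run, test, build, etc.
dbt run --models staging.customers
dbt run --select staging.customers   # equivalent

# 'dbt ls' is the exception: --models and --select keep distinct meanings there
```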
Gerda Shank
fe4a67daa4 Use 'select' instead of 'models' for internal args processing and RPC 2021-08-24 13:55:37 -04:00
leahwicz
09ea989d81 Retry GitHub download failures (#3729)
* Retry GitHub download failures

* Refactor and add tests

* Fixed linting and added comment

* Fixing unit test assertRaises

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>

* Fixing casing

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>

* Changing to use partial for function calls

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
2021-08-24 13:35:09 -04:00
leahwicz
7fa14b6948 Fixing changelog (#3776) 2021-08-23 13:19:58 -04:00
Gerda Shank
d4974cd35c [#3210] Make --models and --select synonyms, except for 'ls' 2021-08-23 11:40:59 -04:00
Kyle Wigley
459178811b rm git backports for previous debian release, use git package (#3785) 2021-08-22 21:20:34 -04:00
Snyk bot
b37f6a010e fix: docker/Dockerfile to reduce vulnerabilities (#3771)
The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-DEBIAN10-SQLITE3-537598
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345386
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345386
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345391
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345391
2021-08-19 09:26:53 -04:00
Gerda Shank
e817164d31 Merge pull request #3767 from dbt-labs/3764_analysis_descriptions
[#3764] Fix bug in analysis patch application
2021-08-18 09:05:10 -04:00
Gerda Shank
09ce43edbf [#3764] Fix bug in analysis patch application 2021-08-17 18:07:35 -04:00
sungchun12
2980cd17df Fix/bigquery job label length (#3703)
* add blueprints to resolve issue

* revert to previous version

* intentionally failing test

* add imports

* add validation in existing function

* add passing test for length validation

* add current sanitized label

* remove duplicate var

* Make logging output 2 lines

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* Raise RuntimeException to better handle error

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* update test

* fix flake8 errors

* update changelog

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-17 14:54:31 -04:00
Gerda Shank
8c804de643 Merge pull request #3758 from dbt-labs/3757_pp_version_mismatch
[#3757] Produce better information about partial parsing version mismatches
2021-08-17 13:23:23 -04:00
Gerda Shank
c8241b87e6 [#3757] Produce better information about partial parsing version
mismatches.
2021-08-17 12:48:20 -04:00
Gerda Shank
f204d24ed8 Merge pull request #3616 from dbt-labs/config_in_schema_files
Configs in schema files
2021-08-17 12:18:04 -04:00
Gerda Shank
d5461ccd8b [#2401] Configs in schema files 2021-08-17 11:50:08 -04:00
Gerda Shank
a20d2d93d3 Merge pull request #3750 from dbt-labs/fix_remove_tests
[#3711] Check that test unique_id exists in nodes when removing
2021-08-16 09:06:36 -04:00
Gerda Shank
57e1eec165 [#3711] Check that test unique_id exists in nodes when removing 2021-08-13 17:23:36 -04:00
Nathaniel May
d2dbe6afe4 Merge pull request #3739 from dbt-labs/perf-testing-tweak
Bump minimum performance runs to 20
2021-08-13 14:05:10 -04:00
Gerda Shank
72eb163223 Merge pull request #3733 from dbt-labs/pp_trap_errors
Trap partial parsing errors and switch to full reparse on exceptions
2021-08-13 13:54:40 -04:00
Gerda Shank
af16c74c3a [#3725] Switch to full reparse on partial parsing exceptions. Log and
report exception information.
2021-08-13 13:38:47 -04:00
Kyle Wigley
664f6584b9 add missing versions and format (#3738) 2021-08-13 13:31:38 -04:00
Nathaniel May
76fd3bdf8c minimum performance runs to 20 2021-08-13 13:20:30 -04:00
Jeremy Cohen
b633adb881 Use is_relational check for schema caching (#3716)
* Use is_relational check for schema caching

* Fix flake8

* Update changelog
2021-08-12 18:18:28 -04:00
Jeremy Cohen
b6e534cdd0 Feature: state:modified.macros (#3559)
* First cut at state:modified for macro changes

* First cut at state:modified subselectors

* Update test_graph_selector_methods

* Fix flake8

* Fix mypy

* Update 062_defer_state_test/test_modified_state

* PR feedback. Update changelog
2021-08-12 17:21:48 -04:00
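Rough usage of the new subselector (the artifact path is illustrative):

```shell
# Select nodes whose macro dependencies changed vs. a previous manifest
dbt ls --select state:modified.macros --state path/to/previous/artifacts
```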
Nathaniel May
1dc4adb86f Merge pull request #3732 from dbt-labs/perf-test-project-swap
Swap dummy perf testing projects for a real one
2021-08-12 10:27:20 -04:00
Nathaniel May
0a4d7c4831 Merge pull request #3731 from dbt-labs/perf-ci-quickfix
remove pr trigger for perf workflow
2021-08-12 10:25:13 -04:00
Nathaniel May
ad67e55d74 swapping dummy perf testing projects for real one 2021-08-11 14:24:26 -04:00
Nathaniel May
2fae64a488 remove pr trigger for perf workflow 2021-08-11 14:14:08 -04:00
Nathaniel May
1a984601ee Merge pull request #3602 from dbt-labs/performance-regression-testing
Add Performance Regression Testing [Rust]
2021-08-11 10:44:51 -04:00
Jeremy Cohen
454168204c Add build RPC method (#3674)
* Add build RPC method

* Add rpc test, some required flags

* Fix flake8

* PR feedback

* Update changelog [skip ci]

* Do not skip CI when rebasing
2021-08-10 10:51:43 -04:00
Drew Banin
43642956a2 Serialize Undefined values to JSON for rpc requests (#3687)
* (#3464) Serialize Undefined values to JSON for rpc requests

* Update changelog, fix typo
2021-08-09 21:26:09 -04:00
Nathaniel May
1fe53750fa add more future work 2021-08-09 12:55:42 -04:00
Nathaniel May
8609c02383 minor change 2021-08-09 12:53:53 -04:00
Nathaniel May
355b0c496e remove unnecessary newline printing 2021-08-09 12:53:07 -04:00
Nathaniel May
cd6894acf4 add to future work 2021-08-09 12:52:53 -04:00
Nathaniel May
b90b3a9c19 avoid abandoned destructors by refactoring usage of exit 2021-08-06 14:46:10 -04:00
leahwicz
e7b8488be8 Remove converter.py since not used anymore (#3699) 2021-08-05 15:27:56 -04:00
Nathaniel May
06cc0c57e8 changelog 2021-08-05 12:27:21 -04:00
Nathaniel May
87072707ed point to manifest 2021-08-05 11:40:48 -04:00
Nathaniel May
ef63319733 add comment 2021-08-05 11:39:19 -04:00
Nathaniel May
2068dd5510 fmt 2021-08-05 11:31:19 -04:00
Nathaniel May
3e1e171c66 add test for regression calculation 2021-08-05 11:31:00 -04:00
Nathaniel May
5f9ed1a83c run twice a day 2021-08-05 11:15:40 -04:00
Nathaniel May
3d9e54d970 add fmt check 2021-08-05 11:14:30 -04:00
Nathaniel May
52a0fdef6c fmt 2021-08-05 11:10:07 -04:00
Nathaniel May
d9b02fb0a0 add lots of comments 2021-08-05 11:03:13 -04:00
Nathaniel May
6c8de62b24 up stddev threshold 2021-08-05 09:48:33 -04:00
Nathaniel May
2d3d1b030a rename to dummy project 2021-08-05 09:47:34 -04:00
Nathaniel May
88acf0727b remove clone 2021-08-04 17:50:32 -04:00
Nathaniel May
02839ec779 iter instead of into_iter 2021-08-04 17:48:13 -04:00
Nathaniel May
44a8f6a3bf more reference improvements 2021-08-04 17:46:45 -04:00
Nathaniel May
751ea92576 make the happy path use references and clone in the exception paths 2021-08-04 17:33:29 -04:00
Nathaniel May
02007b3619 more reference handling 2021-08-04 17:05:23 -04:00
Nathaniel May
fe0b9e7ef5 refactor to use references better 2021-08-04 17:00:16 -04:00
Nathaniel May
4b1c6b51f9 fix spacing 2021-08-04 16:39:53 -04:00
Nathaniel May
0b4689f311 address PR feedback 2021-08-04 15:14:37 -04:00
Nathaniel May
b77eff8f6f errors to warnings in ci 2021-08-04 13:34:46 -04:00
Nathaniel May
2782a33ecf minor refactor 2021-08-04 13:31:48 -04:00
Nathaniel May
94c6cf1b3c remove some clones 2021-08-04 13:19:39 -04:00
Nathaniel May
3c8daacd3e refactor for simpler flow 2021-08-04 13:12:01 -04:00
Nathaniel May
2f9907b072 remove one more unwrap 2021-08-04 12:34:57 -04:00
Nathaniel May
287c4d2b03 revamp exception hierarchy 2021-08-04 12:15:29 -04:00
Nathaniel May
ba9d76b3f9 make final json human readable 2021-08-04 10:11:09 -04:00
Jeremy Cohen
0efaaf7daf Fix typo [skip ci] 2021-08-04 09:50:11 -04:00
Drew Banin
9ae7d68260 Merge pull request #3686 from dbt-labs/fix/cleanup-audit-integration-tests
Fix: Drop audit schema tests in tearDown for test suite
2021-08-03 19:54:36 -04:00
Nathaniel May
486afa9fcd upload results even when regressions are detected 2021-08-03 18:04:22 -04:00
Nathaniel May
1f189f5225 added small paragraph to readme 2021-08-03 17:59:53 -04:00
Nathaniel May
580b1fdd68 minor print change 2021-08-03 17:56:30 -04:00
Nathaniel May
bad0198a36 minor readme changes 2021-08-03 17:55:03 -04:00
Nathaniel May
252280b56e write out results as artifact 2021-08-03 17:34:10 -04:00
Nathaniel May
64bf9c8885 test fix 2021-08-03 17:04:13 -04:00
Nathaniel May
935c138736 type gymnastics 2021-08-03 17:03:37 -04:00
Nathaniel May
5891b59790 better variable names 2021-08-03 16:52:28 -04:00
Nathaniel May
4e020c3878 enforce branch names at calculation time 2021-08-03 16:50:08 -04:00
Nathaniel May
3004969a93 move measure io to main 2021-08-03 16:26:29 -04:00
Nathaniel May
873e9714f8 move io operations to main 2021-08-03 16:22:02 -04:00
Nathaniel May
fe24dd43d4 return all calculations not just regressions 2021-08-03 16:10:16 -04:00
Nathaniel May
ed91ded2c1 branches named dev and baseline 2021-08-03 15:04:05 -04:00
Nathaniel May
757614d57f add stddev regression 2021-08-03 14:59:21 -04:00
Nathaniel May
faff8c00b3 group by run only 2021-08-03 14:37:49 -04:00
Github Build Bot
45fe76eef4 Merge remote-tracking branch 'origin/releases/0.21.0b1' into develop 2021-08-03 18:09:56 +00:00
Nathaniel May
80244a09fe add error for groups that are not two elements large 2021-08-03 14:00:30 -04:00
Github Build Bot
ea772ae419 Release dbt v0.21.0b1 2021-08-03 17:30:32 +00:00
Drew Banin
c68fca7937 Fix: Drop audit schema tests in tearDown for test suite 2021-08-03 13:24:54 -04:00
Nathaniel May
37e86257f5 point to the downloaded artifacts correctly 2021-08-03 12:57:19 -04:00
Nathaniel May
c182c05c2f error when there are no results to process 2021-08-03 12:48:21 -04:00
Nathaniel May
b02875a12b add debug line 2021-08-03 12:35:32 -04:00
Nathaniel May
03332b2955 more gracefully exit than unwrapping 2021-08-03 12:32:56 -04:00
Nathaniel May
f1f99a2371 add name to action 2021-08-03 12:15:44 -04:00
Nathaniel May
95116dbb5b fix measure call 2021-08-03 12:12:25 -04:00
Nathaniel May
868fd64adf download correct artifact 2021-08-03 12:08:00 -04:00
Nathaniel May
2f7ab2d038 unify rust apps as subcommands 2021-08-03 12:04:45 -04:00
Jeremy Cohen
159e79ee6b Update changelog in advance of v0.21.0b1 (#3678)
* Fixup Changelog

* More updates [skip ci]
2021-08-02 20:08:22 -04:00
Nathaniel May
3d4a82cca2 add comments 2021-08-02 18:09:31 -04:00
Nathaniel May
6ba837d73d fix download artifact name 2021-08-02 18:03:04 -04:00
Nathaniel May
f4775d7673 change job name 2021-08-02 17:56:29 -04:00
Nathaniel May
429396aa02 move step dependencies around 2021-08-02 17:55:26 -04:00
Nathaniel May
8a5e9b71a5 add some output to comparator 2021-08-02 17:50:33 -04:00
Nathaniel May
fa78102eaf add comparison to workflow 2021-08-02 17:23:30 -04:00
Nathaniel May
5466d474c5 add test for exception display 2021-08-02 17:07:41 -04:00
Nathaniel May
80951ae973 fmt 2021-08-02 16:34:47 -04:00
Nathaniel May
d5662ef34c wrap up comparison logic 2021-08-02 16:33:43 -04:00
leahwicz
57783bb5f6 Adding issue templates for different release types (#3644)
Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-02 12:50:49 -04:00
Nathaniel May
d73ee588e5 Merge pull request #3637 from dbt-labs/experimental-parser-fix
make experimental parser respect config merge behavior
2021-08-02 10:03:42 -04:00
Nathaniel May
40089d710b experimental parser respects config merge behavior 2021-08-02 09:38:30 -04:00
Jeremy Cohen
6ec61950eb Handle exception from tracker.flush() (#3661) 2021-08-02 08:25:41 -04:00
Gerda Shank
72c831a80a Merge pull request #3659 from dbt-labs/pp_internal_macro_processing
[#3636] Check for unique_ids when recursively removing macros
2021-07-30 15:34:14 -04:00
Gerda Shank
929931a26a Merge pull request #3654 from dbt-labs/change_config_call_handling
Switch from config_call list to config_call_dict dictionary
2021-07-30 14:08:30 -04:00
Gerda Shank
577e2438c1 [#3636] Check for unique_ids when recursively removing macros 2021-07-30 14:01:40 -04:00
Kyle Wigley
2679792199 Add tracking event for full re-parse reasoning (#3652)
* add tracking event for full reparse reason

* update changelog
2021-07-30 09:39:09 -04:00
Kyle Wigley
2adf982991 update links to dbt repo (#3521) 2021-07-30 08:46:58 -04:00
Gerda Shank
1fb4a7f428 Switch from config_call list to config_call_dict dictionary 2021-07-29 18:46:59 -04:00
Kyle Wigley
30e72bc5e2 Use SchemaParser render context to render test configs (#3646)
* use available context when rendering test configs

* add test

* update changelog
2021-07-29 12:59:48 -04:00
Jeremy Cohen
35645a7233 Include dbt-docs changes for 0.20.1-rc1 (#3643) 2021-07-29 09:56:04 -04:00
Gerda Shank
d583c8d737 Merge pull request #3632 from dbt-labs/pp_delete_schema_macro_patch
[#3627] Improve findability of macro_patches, schedule right macro file for processing
2021-07-28 17:49:27 -04:00
Gerda Shank
a83f00c594 [#3627] Improve findability of macro_patches, schedule right macro file
for processing
2021-07-28 17:27:42 -04:00
Nathaniel May
45bb955b55 still wip but compiles 2021-07-28 15:54:04 -04:00
Daniele Frigo
c448702c1b Use old_relation for renaming in default materializations (#3547)
* table and view materializations should rename from old_relation to manage changes from view to table and reverse

* edited changelog

* edited changelog

* Update CHANGELOG.md

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-07-28 06:59:27 -04:00
Niall Woodward
558a6a03ac Fix PR link in changelog (#3639)
Fix a typo introduced in https://github.com/dbt-labs/dbt/pull/3624
2021-07-28 06:51:45 -04:00
Niall Woodward
52ec7907d3 dbt deps prerelease install bugs + add install-prerelease parameter to packages.yml (#3624)
* Fix dbt deps prerelease install bugs

* Add install-prerelease parameter to hub packages in packages.yml
2021-07-27 21:59:46 -04:00
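A sketch of the new packages.yml parameter, using the key name as given in the PR title (package name and version range are placeholders):

```yaml
packages:
  - package: dbt-labs/dbt_utils
    version: [">=0.7.0", "<0.8.0"]
    install-prerelease: true   # opt this package into prerelease versions
```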
Jeremy Cohen
792f39a888 Snowflake: no transactions, except for DML (#3510)
* Rm Snowflake txnal logic. Explicit for DML

* Be less clever. Update create_or_replace_view()

* Seed DML as well

* Changelog entry

* Fix unit test

* One semicolon can change the world
2021-07-27 18:13:35 -04:00
Gerda Shank
16264f58c1 Merge pull request #3621 from dbt-labs/pp_macro_link_processing_error
[#3584] Partial parsing: handle source tests when changing test macro
2021-07-27 16:59:26 -04:00
Nathaniel May
2317c0c3c8 Merge pull request #3630 from dbt-labs/nate-3568
fix awkward exception being raised by a yml file with all comments
2021-07-27 16:50:56 -04:00
Gerda Shank
3c09ab9736 [#3584] Partial parsing: handle source tests when changing test macro 2021-07-27 16:34:23 -04:00
Gerda Shank
f10dc0e1b3 Merge pull request #3618 from dbt-labs/pp_yaml_version
[#3567] Fix partial parsing error with version key if previous file is empty
2021-07-27 16:30:06 -04:00
leahwicz
634bc41d8a Secret scrubbing for env variables (#3617)
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-07-27 16:06:10 -04:00
Gerda Shank
d7ea3648c6 [#3567] Fix partial parsing error with version key if previous file is
empty
2021-07-27 15:38:52 -04:00
Gerda Shank
e5c8e19ff2 Merge pull request #3619 from dbt-labs/model_config_iterator
[#3573] Put back config iterator for backwards compatibility
2021-07-27 15:34:51 -04:00
Kyle Wigley
4ddba7e44c --wip-- 2021-07-27 12:48:33 -04:00
Kyle Wigley
37b31d10c8 add derived traits to structs 2021-07-27 11:28:32 -04:00
Nathaniel May
93cf1f085f handle None return value from yaml loading 2021-07-27 10:59:27 -04:00
Gerda Shank
a84f824a44 [#3573] Put back config iterator for backwards compatibility 2021-07-26 17:56:35 -04:00
Kyle Wigley
9c58f3465b Fix flaky test related to tracking events (#3604)
* skip all tracking event testing

* Turn off tracking in tests that hits model parsing code path
fix other random test that fails because global tracking.current_user exists but is null

* pytest did not respect skip mark

* fix gh actions
2021-07-26 16:55:16 -04:00
Gerda Shank
0e3778132b Merge pull request #3620 from dbt-labs/pp_already_removed_node
If SQL file already scheduled for parsing, don't reprocess
2021-07-26 15:49:10 -04:00
Jeremy Cohen
72722635f2 Fix error handling in dbt build (#3608)
* RunTask -> BuildTask

* Add test, changelog entry
2021-07-25 22:15:13 -04:00
Gerda Shank
a4c7c7fc55 If SQL file already scheduled for parsing, don't reprocess 2021-07-24 15:43:54 -04:00
Nathaniel May
2bad73eead Merge pull request #3610 from dbt-labs/derp-fix
fixing typo in test
2021-07-23 13:14:55 -04:00
Kyle Wigley
c8bc25d11a add struct 2021-07-22 17:05:08 -04:00
Kyle Wigley
4c06689ff5 create a tuple of measurements to group results by 2021-07-22 16:54:57 -04:00
Kyle Wigley
a45c9d0192 add new arg for runner for branch name 2021-07-22 16:24:46 -04:00
Kyle Wigley
34e2c4f90b fix results output dir 2021-07-22 16:16:21 -04:00
Kyle Wigley
c0e2023c81 update profiles dir path 2021-07-22 15:52:28 -04:00
Nathaniel May
108b55bdc3 add regression type 2021-07-22 13:50:17 -04:00
Nathaniel May
a29367b7fe read files and parse json 2021-07-22 12:30:35 -04:00
Nathaniel May
1d7e8349ed draft of comparison 2021-07-22 12:01:30 -04:00
Nathaniel May
67c194dcd1 fixing typo in test 2021-07-22 09:53:26 -04:00
Nathaniel May
75d3d87d64 fix warmup syntax 2021-07-21 18:00:07 -04:00
Nathaniel May
4ff3f6d4e8 moved config dir out of projects dir 2021-07-21 17:35:54 -04:00
Nathaniel May
d0773f3346 warmup fs caches 2021-07-21 17:34:22 -04:00
Nathaniel May
ee58d27d94 use a profiles.yml file 2021-07-21 17:28:28 -04:00
Nathaniel May
9e3da391a7 draft of regression testing workflow 2021-07-21 17:01:01 -04:00
matt-winkler
bd7010678a Feature: on_schema_change for incremental models (#3387)
* detect and act on schema changes

* update incremental helpers code

* update changelog

* fix error in diff_columns from testing

* abstract code a bit further

* address matching names vs. data types

* Update CHANGELOG.md

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* updates from Jeremy's feedback

* multi-column add / remove with full_refresh

* simple changes from JC's feedback

* updated for snowflake

* reorganize postgres code

* reorganize approach

* updated full refresh trigger logic

* fixed unintentional wipe behavior

* catch final else condition

* remove WHERE string replace

* touch ups

* port core to snowflake

* added bigquery code

* updated impacted unit tests

* updates from linting tests

* updates from linting again

* snowflake updates from further testing

* fix logging

* clean up incremental logic

* updated for bigquery

* update postgres with new strategy

* update nodeconfig

* starting integration tests

* integration test for ignore case

* add test for append_new_columns

* add integration test for sync

* remove extra tests

* add unique key and snowflake test

* move incremental integration test dir

* update integration tests

* update integration tests

* Suggestions for #3387 (#3558)

* PR feedback: rationalize macros + logging, fix + expand tests

* Rm alter_column_types, always true for sync_all_columns

* update logging and integration test on sync

* update integration tests

* test fix SF integration tests

Co-authored-by: Matt Winkler <matt.winkler@fishtownanalytics.com>

* rename integration test folder

* Update core/dbt/include/global_project/macros/materializations/incremental/incremental.sql

Accept Jeremy's suggested change

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* Update changelog [skip ci]

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-07-21 15:49:19 -04:00
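A sketch of the new incremental config, assuming hypothetical model and column names:

```sql
{# on_schema_change accepts 'ignore' (the default), 'append_new_columns',
   or 'sync_all_columns', per the tests added in this PR #}
{{ config(
    materialized='incremental',
    unique_key='id',
    on_schema_change='append_new_columns'
) }}

select * from {{ ref('stg_events') }}
{% if is_incremental() %}
where event_time > (select max(event_time) from {{ this }})
{% endif %}
```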
leahwicz
9f716b31b3 Moving unit tests into separate workflow (#3588)
* Moving unit tests into separate workflow

* Fixing CircleCI error
2021-07-21 12:35:04 -04:00
Kyle Wigley
3dd486d8fa Source freshness task node selection and cli command parity (#3554)
* cli: add selection args for source freshness command

* rename command to `source freshness` and maintain alias to old command

* update and add tests for source freshness command and node selection

* update changelog, add comments

* fix formatting

* update changelog
2021-07-21 10:31:40 -04:00
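Rough usage after the rename (the source name is illustrative):

```shell
# New spelling; the old `dbt source snapshot-freshness` remains as an alias
dbt source freshness

# Node selection now works like other tasks
dbt source freshness --select source:my_source
```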
Jeremy Cohen
33217891ca Refactor relationships test to support where config (#3583)
* Rewrite relationships with CTEs

* Update changelog PR num [skip ci]
2021-07-20 19:28:09 -04:00
dependabot[bot]
1d37c4e555 Update snowflake-connector-python[secure-local-storage] requirement (#3594)
Updates the requirements on [snowflake-connector-python[secure-local-storage]](https://github.com/snowflakedb/snowflake-connector-python) to permit the latest version.
- [Release notes](https://github.com/snowflakedb/snowflake-connector-python/releases)
- [Commits](https://github.com/snowflakedb/snowflake-connector-python/commits)

---
updated-dependencies:
- dependency-name: snowflake-connector-python[secure-local-storage]
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-07-20 14:01:35 -04:00
Nathaniel May
9f62ec2153 add rust performance runner 2021-07-20 09:42:12 -04:00
Jeremy Cohen
372eca76b8 Bump werkzeug lower bound, werkzeug-refresh-token script (#3590)
* Update werkzeug, refresh-token script

* Add changelog note
2021-07-20 08:45:56 -04:00
Ian Knox
e3cb050bbc Merge pull request #3490 from dbt-labs/feature/dbt-build
dbt `build`
2021-07-19 19:09:50 -05:00
Jeremy Cohen
0ae93c7f54 Fix store_failures modifier for unique, not_null (#3577)
* Fix store_failures modifier for unique, not_null

* Add test, changelog
2021-07-16 14:50:06 -04:00
leahwicz
1f6386d760 Adding missing v0.20.0 files back to develop (#3566) 2021-07-15 16:46:22 -04:00
Ian Knox
66eb3964e2 PR feedback, Changelog 2021-07-14 13:23:46 -05:00
Nathaniel May
f460d275ba add experimental parser tracking (#3553) 2021-07-09 17:13:34 -04:00
Jeremy Cohen
fb91bad800 Update create_adapter_plugins.py (#3509)
* Update create_adapter_plugins.py

* Bump, changelog [skip ci]
2021-07-09 14:53:01 -04:00
Jeremy Cohen
eaec22ae53 Reconcile changelogs bw 0.20.latest + develop (#3552) 2021-07-09 13:39:42 -04:00
Jeremy Cohen
b7c1768cca Update changelog (#3551) 2021-07-09 13:35:16 -04:00
Nathaniel May
387b26a202 Merge pull request #3549 from dbt-labs/test-tweak
Tweak function to return a value instead of asserting
2021-07-09 12:20:22 -04:00
Ian Knox
8a1e6438f1 Revert downstream blocking on test failure logic due to test issues 😧 2021-07-09 10:52:12 -05:00
Ian Knox
aaac5ff2e6 New node discovery logic for internal work queue 2021-07-09 10:52:12 -05:00
Ian Knox
4dc29630b5 Race condition fix + downstream test blocking 2021-07-09 10:52:12 -05:00
Ian Knox
f716631439 Grouped Topologic sorting logic 2021-07-09 10:52:12 -05:00
Ian Knox
648a780850 First pass dbt build, cli only 2021-07-09 10:51:53 -05:00
Jeremy Cohen
de0919ff88 Include dbt-docs changes for 0.20.0 final (#3544) 2021-07-09 11:33:13 -04:00
Nathaniel May
8b1ea5fb6c return value from test function 2021-07-08 18:14:38 -04:00
Jeremy Cohen
85627aafcd Speed up Snowflake column comments, while still avoiding errors (#3543)
* Have our cake and eat it quickly, too

* Update changelog
2021-07-07 18:18:26 -04:00
Jeremy Cohen
49065158f5 Add gitignore, ignore pycache (#3536) 2021-07-06 11:51:17 -04:00
Gerda Shank
bdb3049218 Merge pull request #3522 from dbt-labs/pp_fix_simul_model_patch_deletion
Partial parsing: check if a node has already been deleted [#3516]
2021-07-06 11:46:02 -04:00
jmriego
e10d1b0f86 '+' config prefix handling whitespace (#3526)
* '+' config prefix handling whitespace

* rerun ci
2021-07-02 12:25:18 -04:00
Drew Banin
83b98c8ebf Merge pull request #3499 from dbt-labs/fix/agate-undesirable-casting
Prevent Agate from coercing values in query result sets
2021-07-01 10:53:28 -04:00
Jeremy Cohen
b9d5123aa3 Update changelog for 0.20.0rc2 2021-06-30 16:24:25 -04:00
Gerda Shank
c09300bfd2 Partial parsing: check if a node has already been deleted [#3516] 2021-06-30 11:35:13 -04:00
leahwicz
fc490cee7b Adding link to plugin release instructions 2021-06-30 09:51:53 -04:00
Jeremy Cohen
3baa3d7fe8 Update badge links 2021-06-30 08:46:33 -04:00
Jeremy Cohen
764c7c0fdc Update logo, badges (#3518)
* Fix logo, badges

* Update commit hash

* A few more edits

* Point Circle badge to develop [skip ci]

* Final fixups [skip ci]
2021-06-30 08:43:05 -04:00
Drew Banin
c97ebbbf35 Update License.md (#3517) 2021-06-29 22:43:50 -04:00
Drew Banin
85fe32bd08 Move to 0.21 changelog section 2021-06-29 20:21:31 -04:00
Drew Banin
eba3fd2255 Merge branch 'develop' of github.com:fishtown-analytics/dbt into fix/agate-undesirable-casting 2021-06-29 20:20:18 -04:00
Jeremy Cohen
e2f2c07873 Include dbt-docs changes for 0.20.0rc2 (#3511) 2021-06-29 18:44:42 -04:00
Nathaniel May
70850cd362 Merge pull request #3497 from fishtown-analytics/experimental-parser-rust
Experimental Parser: Swap python extractor for rust dependency
2021-06-29 16:21:26 -04:00
Gerda Shank
16992e6391 Merge pull request #3505 from fishtown-analytics/pp_testing
Expand partial parsing tests; fix macro partial parsing [#3449]
2021-06-29 15:45:33 -04:00
Gerda Shank
fd0d95140e Expand partial parsing tests; fix macro partial parsing [#3449] 2021-06-29 11:53:38 -04:00
Nathaniel May
ac65fcd557 update experimental parser to use rust dependency 2021-06-28 17:30:07 -04:00
Kyle Wigley
4d246567b9 Update project load tracking to include experimental parser info (#3495)
* Fix docs generation for cross-db sources in REDSHIFT RA3 node (#3408)

* Fix docs generating for cross-db sources

* Code reorganization

* Code adjustments according to flake8

* Error message adjusted to be more precise

* CHANGELOG update

* add static analysis info to parsing data

* update changelog

* don't use `meta`! need better separation between dbt internal objects and external facing data. hacked an internal field on the manifest to save off this parsing info for the time being

* fix partial parsing case

Co-authored-by: kostek-pl <67253952+kostek-pl@users.noreply.github.com>
2021-06-28 10:09:50 -04:00
Drew Banin
1ad1c834f3 (#2984) Prevent Agate from coercing values in query result sets 2021-06-26 13:24:30 -04:00
Jeremy Cohen
41610b822c Touch up init, fix missing adapter errors (#3483)
* Some love to init

* Update changelog
2021-06-24 15:51:36 -04:00
leahwicz
c794600242 Move starter project into dbt repo (#3474)
Addresses issue #3005
2021-06-22 11:03:01 -04:00
Jessica Laughlin
9d414f6ec3 add optional SSL parameters to Postgres connector (#3473)
Allows users to optionally set values for sslcert, sslkey, and
sslrootcert in their Postgres profiles.

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-06-21 17:52:59 -04:00
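A sketch of the new profile keys (host name and file paths are placeholders):

```yaml
outputs:
  dev:
    type: postgres
    host: db.example.com
    user: dbt_user
    password: "{{ env_var('DBT_PASSWORD') }}"
    port: 5432
    dbname: analytics
    schema: dbt_dev
    sslmode: verify-full
    sslcert: /path/to/client-cert.pem    # new in this PR
    sslkey: /path/to/client-key.pem      # new in this PR
    sslrootcert: /path/to/ca-cert.pem    # new in this PR
```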
Ted Conbeer
552e831306 Feature/faster materializations with fewer transactions (#3468)
* drop if exists in view and table materializations

* Add integration test

* Add changelog entry

* PR Review 1
2021-06-21 14:58:32 -04:00
leahwicz
c712c96a0b Commenting our flaky integration test (#3477)
Test continually fails on time out so commenting out until we can fix it
2021-06-21 10:51:36 -04:00
Gerda Shank
eb46bfc3d6 Merge pull request #3460 from fishtown-analytics/minimal_pp_validation
Add minimal validation of schema file yaml prior to partial parsing
2021-06-18 15:08:59 -04:00
dependabot[bot]
f52537b606 Update typing-extensions requirement in /core (#3310)
Updates the requirements on [typing-extensions](https://github.com/python/typing) to permit the latest version.
- [Release notes](https://github.com/python/typing/releases)
- [Commits](https://github.com/python/typing/compare/3.7.4...3.10.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-06-18 13:05:55 -04:00
dependabot[bot]
762419d2fe Update werkzeug requirement from <2.0,>=0.15 to >=0.15,<3.0 in /core (#3390)
Updates the requirements on [werkzeug](https://github.com/pallets/werkzeug) to permit the latest version.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/0.15.0...2.0.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-06-18 10:16:28 -04:00
dependabot[bot]
4feb7cb15b Update idna requirement from <3,>=2.5 to >=2.5,<4 in /core (#3429)
Updates the requirements on [idna](https://github.com/kjd/idna) to permit the latest version.
- [Release notes](https://github.com/kjd/idna/releases)
- [Changelog](https://github.com/kjd/idna/blob/master/HISTORY.rst)
- [Commits](https://github.com/kjd/idna/compare/v2.5...v3.2)

---
updated-dependencies:
- dependency-name: idna
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-06-18 09:52:38 -04:00
Gerda Shank
eb47b85148 Add minimal validation of schema file yaml prior to partial parsing
[#3246]
2021-06-17 13:25:04 -04:00
Anders
9faa019a07 dispatch logic of new test materialization (#3461)
* dispatch logic of new test materialization

allow custom adapters to override the core test select statement functionality

* rename macro

* tell me a story

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-06-17 12:09:34 -04:00
Jeremy Cohen
9589dc91fa Fix quoting for stringy test configs (#3459)
* Fix quoting for stringy test configs

* Update changelog
2021-06-16 14:26:07 -04:00
Gerda Shank
14507a283e Merge pull request #3454 from fishtown-analytics/adapter_dispatch_infinite_recursion
Fix macro depends_on recursion error when macros call themselves (dbt…utils.datediff)
2021-06-11 11:22:05 -04:00
Gerda Shank
af0fe120ec Fix macro depends_on recursion error when macros call themselves (dbt_utils.datediff) 2021-06-11 10:28:16 -04:00
Gerda Shank
16501ec1c6 Merge pull request #3445 from fishtown-analytics/fix_serialization
create _lock when deserializing manifest, plus cleanup file serialization
2021-06-11 09:30:10 -04:00
leahwicz
bf867f6aff Add minor version release template (#3442)
Add minor version release template
2021-06-09 09:31:21 -04:00
kostek-pl
eb4ad4444f Fix docs generation for cross-db sources in REDSHIFT RA3 node (#3408)
* Fix docs generating for cross-db sources

* Code reorganization

* Code adjustments according to flake8

* Error message adjusted to be more precise

* CHANGELOG update
2021-06-09 09:08:52 -04:00
Gerda Shank
8fdba17ac6 create _lock when deserializing manifest, plus cleanup file
serialization
2021-06-08 16:08:12 -04:00
Github Build Bot
abe8e83945 Merge remote-tracking branch 'origin/releases/0.20.0rc1' into develop 2021-06-04 19:02:42 +00:00
Github Build Bot
02cbae1f9f Release dbt v0.20.0rc1 2021-06-04 18:31:00 +00:00
Gerda Shank
65908b395f Merge pull request #3432 from fishtown-analytics/docs_references
Save doc file node references and use in partial parsing
2021-06-04 13:50:52 -04:00
Gerda Shank
4971395d5d Save doc file node references and use in partial parsing 2021-06-04 13:26:19 -04:00
Kyle Wigley
eeec2038aa bump run results and manifest artifact schemas versions (#3421)
* bump run results and manifest artifact schemas versions

* update changelog
2021-06-04 12:32:24 -04:00
Kyle Wigley
4fac086556 Add deprecation warning for providing packages to adapter.dispatch (#3420)
* add deprecation warning

* update changelog

* update deprecation message

* fix flake8
2021-06-04 10:11:31 -04:00
Jeremy Cohen
8818061d59 One more dbt-docs change for v0.20.0rc1 (#3430)
* Include dbt-docs changes for 0.20.0rc1

* One more dbt-docs change for 0.20.0rc1
2021-06-04 08:54:45 -04:00
leahwicz
b195778eb9 Updating triggers to not have duplicate runs (#3417)
Updating triggers to not have duplicate runs
2021-06-04 08:38:46 -04:00
Jeremy Cohen
de1763618a Avoid repeating nesting in test query (#3427) 2021-06-03 16:57:17 -04:00
Jeremy Cohen
7485066ed4 Include dbt-docs changes for 0.20.0rc1 (#3424) 2021-06-03 16:43:16 -04:00
Jeremy Cohen
15ce956380 Call out breaking changes in changelog [skip ci] (#3418) 2021-06-03 13:50:34 -04:00
Gerda Shank
e5c63884e2 Merge pull request #3364 from fishtown-analytics/move_partial_parsing
Move partial parsing to end of parsing. Switch to using msgpack for saved manifest.
2021-06-02 16:02:10 -04:00
Kyle Wigley
9fef62d83e update test subquery alias (#3414)
* update subquery alias

* update changelog
2021-06-02 15:44:17 -04:00
Gerda Shank
7563b997c2 Move partial parsing to end of parsing and implement new partial parsing
method. Switch to using msgpack for saved manifest.
2021-06-02 15:32:22 -04:00
leahwicz
291ff3600b Creating test workflow in Actions (#3396)
Moving the Windows tests to Actions and adding Mac tests as well
2021-06-02 11:54:37 -04:00
Kyle Wigley
2c405304ee New test configs (#3392)
* New test configs: where, limit, warn_if, error_if

* update test task and tests

* fix flake8

* Update core/dbt/parser/schemas.py

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* Update core/dbt/parser/schema_test_builders.py

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* respond to some feedback

* add failures field

* add failures to results

* more feedback

* add some test coverage

* dev review feedback

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-06-02 10:28:55 -04:00
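A sketch of the new test configs in a schema.yml file (model, column, and threshold values are illustrative):

```yaml
version: 2

models:
  - name: orders
    columns:
      - name: status
        tests:
          - not_null:
              config:
                where: "order_date >= '2021-01-01'"
                limit: 100
                warn_if: ">10"
                error_if: ">100"
```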
Jeremy Cohen
1e5a7878e5 Hard code dispatch namespaces for fivetran_utils? (#3403)
* Hard code for fivetran_utils

* Add changelog entry [skip cii]
2021-06-01 10:58:18 -04:00
Stephen Bailey
d89e1d7f85 Add "tags" and "meta" properties to exposure schema (#3405)
* Add meta and tags to parsed and unparsed

* Fix tests

* Add meta and tags to unparsed schema

* Update Changelog

* Remove "optional" specifier and add default values

* Fix exposure schemas for PG Integration tests
2021-06-01 08:19:10 -04:00
Jeremy Cohen
98c015b775 Fix statically extracting macro calls for macro.depends_on.macros to be (#3363)
used in parsing schema tests by looking at the arguments to
adapter.dispatch. Includes providing an alternative way of specifying
macro search order in project config.
Collaboratively developed with Jeremy Cohen.

Co-authored-by: Gerda Shank <gerda@fishtownanalytics.com>
2021-05-27 17:13:47 -04:00
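A sketch of the alternative project-level config mentioned above (project and package names are placeholders):

```yaml
# dbt_project.yml
dispatch:
  - macro_namespace: dbt_utils
    search_order: ['my_project', 'dbt_utils']
```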
Ian Knox
a56502688f Merge pull request #3384 from fishtown-analytics/feature/ls_in_RPC
Add `ls` to RPC server
2021-05-27 14:42:49 -05:00
Jeremy Cohen
c0d757ab19 dbt test --store-failures (#3316) 2021-05-27 15:21:29 -04:00
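Rough usage (the same behavior can also be enabled per test via the `store_failures` config):

```shell
# Persist the failing rows of each test to a table in the warehouse
dbt test --store-failures
```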
Ian Knox
e68fd6eb7f PR Feedback 2021-05-27 12:25:18 -05:00
Ian Knox
90edc38859 fix ls test 2021-05-26 11:59:34 -05:00
matt-winkler
0f018ea5dd Bugfix/issue 3350 snowflake non json response (#3365)
* attempt at solving with while loop

* added comment on loop

* update changelog

* modified per drew's suggestions

* updates after linting
2021-05-26 12:18:16 -04:00
Ian Knox
1be6254363 updated changelog 2021-05-26 10:39:23 -05:00
Ian Knox
760af71ed2 Merge branch 'feature/ls_in_RPC' of github.com:fishtown-analytics/dbt into feature/ls_in_RPC 2021-05-26 10:21:30 -05:00
Ian Knox
82f5e9f5b2 added additional fields to list response (unique_id, original_file_path) 2021-05-26 10:21:12 -05:00
Ian Knox
988c187db3 Merge branch 'develop' into feature/ls_in_RPC 2021-05-26 09:40:17 -05:00
Ian Knox
b23129982c update default threading settings for all tasks 2021-05-26 09:37:49 -05:00
Daniel Mateus Pires
4d5d0e2150 🔨 Add ssh-client + update git using debian backports in Docker image (#3338)
* 🔨 Add ssh-client and update git version using debian backports in Docker image

* ✏️ Update CHANGELOG

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-05-24 18:39:32 -04:00
Jon Natkins
c0c487bf77 Running a snapshot with missing required configurations results in un... (#3385)
* Running a snapshot with missing required configurations results in uncaught Python exception (#3381)

* Running a snapshot with missing required configurations results in uncaught Python exception

* Add fix details to CHANGELOG

* Update CHANGELOG.md

* Update invalid snapshot test with new/improved error message

* Improve changelog message and contributors addition
2021-05-24 15:48:08 -04:00
Nathaniel May
835d805079 Merge pull request #3374 from fishtown-analytics/experiment/dbt_jinja
Add experimental parser behind flag
2021-05-21 18:25:08 -04:00
Nathaniel May
c2a767184c add changelog entry 2021-05-21 17:02:40 -04:00
Nathaniel May
1e7c8802eb add --use-experimental-parser flag, depend on tree-sitter-jinja2, add unit tests 2021-05-21 17:01:55 -04:00
Nathaniel May
a76ec42586 static parsing skateboard 2021-05-21 17:01:55 -04:00
PJGaetan
7418f36932 Allow using a custom field as check-cols updated_at (#3376)
* Allow using a custom field as check-cols updated_at

* Clarify changelog w/ jtcohen6
2021-05-21 16:15:56 -04:00
Ian Knox
f9ef5e7e8e changelog 2021-05-21 09:27:54 -05:00
Ian Knox
dbfa351395 tests and fixes 2021-05-21 09:22:18 -05:00
Jeremy Cohen
e775f2b38e Use shutil.which to find executable path (#3299)
* (explicitly) find the executable before running run_cmd

#3035

* fix undefined var

* use Executable to say exe not found and use full pth to exe

* changelog for #3035

* Nest shutil.which for better error msg

Co-authored-by: Majid alDosari <majidaldosari-github@yahoo.com>
Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
2021-05-20 17:30:56 -04:00
dependabot[bot]
6f27454be4 Bump jinja2 from 2.11.2 to 2.11.3 in /core (#3077)
Bumps [jinja2](https://github.com/pallets/jinja) from 2.11.2 to 2.11.3.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/master/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/2.11.2...2.11.3)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-20 10:54:07 -04:00
Ian Knox
201723d506 ls in RPC works 2021-05-20 08:27:16 -05:00
Josh Devlin
17555faaca Add a better error for undefined macros (#3343)
* Add a better error for undefined macros

* Add check/error when installed packages < specified packages

* fix integration tests

* Fix issue with null packages

* Don't call _get_project_directories() twice

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

* Fix some integration and unit tests

* Make mypy happy

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

* Fix docs and rpc integration tests

* Fix (almost) all the rpc tests

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-05-19 14:08:30 -04:00
Kyle Wigley
36e0ab9f42 Merge pull request #3339 from fishtown-analytics/fix/packaging-dep
add 'packaging' package to dbt-core
2021-05-19 11:24:30 -04:00
Kyle Wigley
6017bd6cba Merge branch 'develop' into fix/packaging-dep 2021-05-19 10:26:06 -04:00
Claire Carroll
30fed8d421 Refactor schema tests (#3367) 2021-05-18 18:15:13 -04:00
Kyle Wigley
8ac5cdd2e1 Merge pull request #3351 from fishtown-analytics/fix/bigquery-debug-task
Fix debug task for BigQuery connections
2021-05-14 14:22:23 -04:00
Kyle Wigley
114ac0793a update changelog 2021-05-14 11:02:04 -04:00
Kyle Wigley
d0b750461a fix debug task for bigquery connections 2021-05-14 11:01:23 -04:00
Jeremy Cohen
9693170eb9 Cleanup v0.20.0 changelog [skip ci] (#3323) 2021-05-13 18:44:41 -04:00
Jeremy Cohen
bbab6c2361 Separate compiled_path in manifest + printer (#3327) 2021-05-13 18:29:17 -04:00
Gerda Shank
cfe3636c78 Merge pull request #3342 from fishtown-analytics/split_out_schema_parsing
Do schema file parsing after parsing all other files
2021-05-13 16:11:05 -04:00
Gerda Shank
aadf3c702e Do schema file parsing after parsing all other files 2021-05-13 15:26:03 -04:00
Kyle Wigley
1eac726a07 fix expected redshift stats (I think the possible values for the svv_table_info.encoded column changed) 2021-05-13 13:49:02 -04:00
Gerda Shank
85e2c89794 Merge pull request #3345 from fishtown-analytics/package_name_macros
Handle macros with package names in schema test rendering
2021-05-12 21:53:46 -04:00
Eli Kastelein
fffcd3b404 Check if a snowflake column exists before altering its comment (#3149)
* Check if column exists when altering column comments in snowflake

* Add new test class for persist docs models with missing columns

* Parallel run all integration tests after unit (#3328)

* don't clobber default args

* update changelog

* Update changelog for PR #3149

* Pull in upstream changes

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
2021-05-12 21:33:01 -04:00
peiwangdb
fbfef4b1a3 fix integration test failure due to timezone (#3344)
* fix integration test failure due to timezone

* update changelog
2021-05-12 21:28:19 -04:00
Gerda Shank
526a6c0d0c Merge branch 'develop' into package_name_macros 2021-05-12 21:21:45 -04:00
Ian Knox
1f33b6a74a Merge pull request #3335 from fishtown-analytics/feature/schema_tests_are_more_unique
Feature/schema tests are more unique
2021-05-12 19:18:50 -05:00
Ian Knox
95fc6d43e7 Additional PR feedback 2021-05-12 13:29:24 -05:00
Kyle Wigley
d8c261ffcf Merge pull request #3340 from fishtown-analytics/fix/default-test-args
Stop clobbering default args for test definitions
2021-05-12 10:25:53 -04:00
Kyle Wigley
66ea0a9e0f update changelog 2021-05-12 09:54:20 -04:00
Gerda Shank
435b542e7b Add macros with project name to schema test context, and recursively get
macros
2021-05-12 09:51:44 -04:00
Ian Knox
10cd06f515 PR feedback 2021-05-12 08:38:50 -05:00
Jeremy Cohen
9da1868c3b Parallel run all integration tests after unit (#3328) 2021-05-11 13:23:38 -04:00
Kyle Wigley
2649fac4a4 don't clobber default args 2021-05-11 10:45:16 -04:00
Kyle Wigley
6e05226e3b update changelog 2021-05-11 07:18:15 -04:00
Kyle Wigley
c1c3397f66 add 'packaging' package to dbt-core 2021-05-11 07:07:12 -04:00
Ian Knox
2065db2383 Testing changes to support hashes 2021-05-10 15:49:50 -05:00
Ian Knox
08fb868b63 Adds hash to generic test unique_ids 2021-05-10 15:45:07 -05:00
Aram Panasenco
8d39ef16b6 Now generating a run_results.json even when no nodes are selected (#3315)
* Now generating a run_results.json even when no nodes are selected.

* Typo in changelog

* Modified changelog in accordance with @jtcohen6's suggestions. Also fixed @TeddyCr's entry.
2021-05-06 08:07:54 -04:00
Teddy
66c5082aa7 Feature/3117 dbt deps timeout (#3275)
* added logic to timeout request after 60 seconds + unit test

* fixed typo in comment

* Update test URL for azure CI failure

* Update changelog + change endpoint for timeout test + added retry logic to the timeout exception

* updated exception catching to a generic one + adjusted tests based on error catching + renamed test file accordingly

* updated comment in test_registry_get_request_exception.py to reflect new approach

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-05-05 23:40:21 -04:00
Kyle Wigley
26fb58bd1b Merge pull request #3318 from fishtown-analytics/fix/ephemeral-compilation
[moving fix to current release] Fix ephemeral model compilation
2021-05-04 14:54:19 -04:00
Kyle Wigley
fed8826043 update changelog 2021-05-04 10:02:56 -04:00
Kyle Wigley
9af78a3249 fix tests after merge 2021-05-04 08:57:31 -04:00
Kyle Wigley
bf1ad6cd17 Merge pull request #3139 from fishtown-analytics/fix/ephemeral-compile-sql
Fix compiled sql for ephemeral models
2021-05-04 08:39:39 -04:00
Github Build Bot
15e995f2f5 Merge remote-tracking branch 'origin/releases/0.20.0b1' into develop 2021-05-03 11:33:09 +00:00
Github Build Bot
b3e73b0de8 Release dbt v0.20.0b1 2021-05-03 11:08:53 +00:00
Jeremy Cohen
dd2633dfcb Include parent adapters in dispatch (#3296)
* Add test, expect fail

* Include parent adapters in dispatch

* Use adapter type, not credentials type

* Adjust adapter_macro deprecation test

* fix test/unit/test_context.py to use postgres profile

* Add changelog note

* Redshift default column encoding now AUTO

Co-authored-by: Gerda Shank <gerda@fishtownanalytics.com>
2021-05-02 18:01:57 -04:00
Karthik Ramanathan
29f0278451 prevent locks in incremental full refresh (#2998) 2021-05-02 17:19:25 -04:00
Angel Montana
f0f98be692 Support dots in model names: Make FQN model selectors work with them (#3247)
* Support dots in model names. They are useful as namespace separators

* Update CHANGELOG.md

* Update contributor list

* Extend integration test to support models whose name contains dots

* Cleanup fqn selection logic

* Explain condition check

* Support dots in model names: integration tests

* Make linter happy: remove trailing whitespaces

* Support dots in model names: integration test for seeds

* revert 66c26facbd. Integration tests to support dots in model names implemented in 007_graph_selection_tests

* Test model with dots only in postgres. It's meant to work in combination with custom macros for other databases

* Support dot as namespace separators: integration test

* Support dots in test names: integration test

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-04-28 17:32:14 -04:00
Kyle Wigley
5956a64b01 Merge pull request #3305 from fishtown-analytics/fix/bigquery-no-project
add unit test and move default logic to mashumaro hook
2021-04-28 14:52:53 -04:00
Daniel Mateus Pires
5fb36e3e2a Issue 275: Support dbt package dependencies in Git subdirectories (#3267)
* 🔨 Extend git package contract and signatures to pass `subdirectory`

* Add sparse checkout logic

*  Add test

* 🧹 Lint

* ✏️ Update CHANGELOG

* 🐛 Make os.path.join safe

* Use a test-container with an updated `git` version

* 🔨 Fix integration tests

* 📖 Update CHANGELOG contributors to include this PR

* 🧪 Parameterize the test

* Use new test-container published by @kwigley (contains more recent version of git)

* Use repositories managed by fishtown

* 🧘‍♂️ Merge the CHANGELOG

* 🤦‍♂️ Remove repetition of my contribution on the CHANGELOG

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-04-28 09:40:05 -04:00
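
For reference, the git-subdirectory support merged above (#3267) is driven from packages.yml. A minimal sketch, assuming the key is spelled `subdirectory` as in the PR's commit messages; the repository URL and paths are placeholders:

# packages.yml (hypothetical example; repo URL and paths are placeholders)
packages:
  - git: "https://github.com/example-org/analytics-monorepo.git"
    revision: main
    # install only the dbt project that lives in this folder of the repo
    subdirectory: "dbt/jaffle_shop"
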
Cor
9d295a1d91 Add TestCLIInvocationWithProfilesAndProjectDir (#3176)
* Add TestCLIInvocationWithProfilesAndProjectDir

* Add context manager for changing working directory

* Add fixture for changing working directory to temporary directory

* Add missing import

* Move custom profile into function

* Add function to create profiles directory

* Remove path and union

* Make temporary_working_directory a context manager

* Fix some issues, wrong naming and invocation

* Update test with profiles and project dir

* Use a fixture for the sub command

* Run test for debug and run sub command

* Resolve inheritance

* Remove profiles from project dir

* Use pytest mark directly instead of use_profile

* Use parameterize on class

* Remove parameterize

* Create separate test for each subcommand

* Set profiles_dir to false

* Add run test

* Use abspath

* Add entry to change log

* Add suggested changes

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

* Remove test_postgres_run_with_profiles_separate_from_project_dir

* Add JCZuurmond to change log

* Add test_postgres_run_with_profiles_separate_from_project_dir

* Remove HEAD

* Fix wrong merge

* Add deps before test

* Add run to test which runs test command

* Sort tests

* Force rerun

* Force rerun

* Force rerun

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-04-28 08:20:02 -04:00
Jeremy Cohen
39f350fe89 Be less greedy in test selection expansion (#3235)
* Expand test selection iff all first-order parents are selected

* Renumber new integration test

* PR feedback

* Fix flake
2021-04-27 15:52:16 -04:00
Kyle Wigley
8c55e744b8 Merge pull request #3286 from fishtown-analytics/feature/schema-test-materialization
use test materialization for schema/generic tests
2021-04-27 08:05:42 -04:00
Jeremy Cohen
a260d4e25b Merge pull request #3106 from arzavj/postgres_create_indexes
Postgres: ability to create indexes
2021-04-27 08:02:26 -04:00
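
The Postgres index support merged above (#3106) is configured per model. A hedged sketch using the `indexes` config, with project, folder, and column names as placeholders:

# dbt_project.yml (hypothetical example)
models:
  my_project:
    events:
      +indexes:
        - columns: ["event_id"]
          unique: true
        - columns: ["user_id", "created_at"]
          type: btree   # index method; btree is the Postgres default
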
Jeremy Cohen
509797588f Merge branch 'develop' into postgres_create_indexes 2021-04-27 07:28:38 -04:00
Kyle Wigley
2eed20f1f3 update changelog 2021-04-23 09:51:56 -04:00
Kyle Wigley
1d7b4c0db2 update integration tests, fix tests for bigquery 2021-04-22 16:24:13 -04:00
Kyle Wigley
ac8cd788cb use test materialization for schema/generic tests, update integration tests 2021-04-22 11:26:13 -04:00
Gerda Shank
33dc970859 Merge pull request #3272 from fishtown-analytics/test_context_regression
Add necessary macros to schema test context namespace
2021-04-20 12:36:40 -04:00
Kyle Wigley
f73202734c Merge pull request #3261 from fishtown-analytics/feature/test-jinja-block
Add `test` Jinja tag
2021-04-19 09:39:53 -04:00
Jeremy Cohen
32bacdab4b Merge pull request #3270 from dmateusp/dmateusp/3268/dbt_deps_support_commit_hashes
Issue 3268: Support commit hashes in dbt deps
2021-04-18 18:17:27 -04:00
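
A minimal sketch of the feature merged above (#3270, issue #3268): pinning a git package to an exact commit by passing the full commit SHA as the `revision`. The repository URL and SHA below are placeholders:

# packages.yml (hypothetical example)
packages:
  - git: "https://github.com/example-org/dbt-helpers.git"
    # a full 40-character commit hash can now be used in place of a branch or tag
    revision: 0123456789abcdef0123456789abcdef01234567
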
Daniel Mateus Pires
6113c3b533 📖 Add myself to contribs 2021-04-18 21:33:01 +01:00
Gerda Shank
1c634af489 Add necessary macros to schema test context namespace [#3229] [#3240] 2021-04-16 13:21:18 -04:00
Daniel Mateus Pires
428cdea2dc ✏️ Update CHANGELOG 2021-04-16 10:58:04 +01:00
Daniel Mateus Pires
f14b55f839 Add test 2021-04-16 10:54:35 +01:00
Daniel Mateus Pires
5934d263b8 Support git commit as revision 2021-04-16 10:21:49 +01:00
Kyle Wigley
3860d919e6 use ternary and f-strings 2021-04-15 14:49:07 -04:00
Kyle Wigley
fd0b9434ae update changelog 2021-04-15 14:49:07 -04:00
Kyle Wigley
efb30d0262 first pass at adding test jinja block 2021-04-15 14:48:12 -04:00
Jeremy Cohen
cee0bfbfa2 Merge pull request #3257 from fishtown-analytics/feature/test-config-parity
Feature: test config parity
2021-04-15 14:15:19 -04:00
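
The "test config parity" work merged above (see also "Configure tests from dbt_project.yml" and "Add support for disabling schema tests" below) lets tests be configured from the project file like other resources. A hedged sketch, with project and package names as placeholders:

# dbt_project.yml (hypothetical example)
tests:
  my_project:
    # treat failures in this project's tests as warnings
    +severity: warn
  some_package:
    # disable the schema tests that ship with an installed package
    +enabled: false
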
Jeremy Cohen
dc684d31d3 Add changelog entry 2021-04-15 14:04:34 -04:00
Gerda Shank
bfdf7f01b5 Merge pull request #3248 from fishtown-analytics/load_all_files
Preload all project files at start of parsing [#3244]
2021-04-14 13:33:32 -04:00
Gerda Shank
2cc0579b6e Preload all project files at start of parsing [#3244] 2021-04-14 09:57:26 -04:00
Jeremy Cohen
bfc472dc0f Cleanup + integration tests 2021-04-13 20:04:53 -04:00
Jeremy Cohen
ea4e3680ab Configure tests from dbt_project.yml 2021-04-13 20:04:42 -04:00
Jeremy Cohen
f02139956d Add support for disabling schema tests 2021-04-13 20:04:42 -04:00
Arzav Jain
cacbd1c212 Updated Changelog.md 2021-04-12 09:38:42 -07:00
Arzav Jain
3f78bb7819 more tests 2021-04-12 09:37:54 -07:00
Arzav Jain
aa65b01fe3 basic tests for table materialization 2021-04-12 09:37:54 -07:00
Arzav Jain
4f0968d678 fix style to get unit tests to pass 2021-04-12 09:37:54 -07:00
Arzav Jain
118973cf79 respond to pr comments 2021-04-12 09:37:54 -07:00
Arzav Jain
df7cc0521f remove no longer used truncate_string macro 2021-04-12 09:37:54 -07:00
Arzav Jain
40c02d2cc9 Respond to pr comments; approach inspired by persist_docs() 2021-04-12 09:37:54 -07:00
Arzav Jain
be70b1a0c1 first pass 2021-04-12 09:37:44 -07:00
Kyle Wigley
7ec5c122e1 Merge pull request #3228 from fishtown-analytics/cleanup/makefile
Better make targets
2021-04-12 10:49:59 -04:00
Jeremy Cohen
a10ab99efc Merge pull request #3243 from fuchsst/fix/3241-add-missing-exposures-property
Fix/3241 add missing exposures property
2021-04-12 09:27:40 -04:00
Fuchs, Stefan
9f4398c557 added contribution to changelog 2021-04-11 18:26:29 +02:00
Fuchs, Stefan
d60f6bc89b added exposures property to manifest 2021-04-11 18:26:05 +02:00
Kyle Wigley
617eeb4ff7 test code changes without reinstalling everything 2021-04-08 12:07:55 -04:00
Kyle Wigley
5b55825638 add flaky logic to bigquery 2021-04-07 09:02:44 -04:00
Kyle Wigley
103d524db5 update changelog 2021-04-06 14:33:05 -04:00
Kyle Wigley
babd084a9b better make targets, some descriptions 2021-04-06 14:33:05 -04:00
Gerda Shank
749f87397e Merge pull request #3219 from fishtown-analytics/partial_parsing
Use Manifest instead of ParseResult [#3163]
2021-04-06 14:05:17 -04:00
Gerda Shank
307d47ebaf Use Manifest instead of ParseResults [#3163] 2021-04-06 13:51:43 -04:00
Jeremy Cohen
6acd4b91c1 Merge pull request #3227 from fishtown-analytics/update/changelog-0191
Update changelog for 0.19.1, 0.18.2
2021-04-05 16:36:28 -04:00
Jeremy Cohen
f4a9530894 Update changelog per 0.18.2 2021-04-05 15:11:08 -04:00
Jeremy Cohen
ab65385a16 Update changelog per 0.19.1 2021-04-05 15:08:33 -04:00
Jeremy Cohen
ebd761e3dc Merge pull request #3156 from max-sixty/patch-1
Update google cloud dependencies
2021-04-02 15:13:24 -04:00
Maximilian Roos
3b942ec790 Merge branch 'develop' into patch-1 2021-04-02 10:09:26 -07:00
Maximilian Roos
b373486908 Update CHANGELOG.md 2021-04-02 10:08:51 -07:00
Maximilian Roos
c8cd5502f6 Update setup.py 2021-04-02 10:05:16 -07:00
Maximilian Roos
d6dd968c4f Pin to major versions 2021-03-31 18:51:53 -07:00
Jeremy Cohen
b8d73d2197 Merge pull request #3182 from cgopalan/app-name-for-postgres
Set application_name for Postgres connections
2021-03-31 15:56:59 -04:00
Kyle Wigley
17e57f1e0b Merge pull request #3181 from fishtown-analytics/feature/data-test-materialization
Adding test materialization, implement for data tests
2021-03-30 14:21:43 -04:00
Kyle Wigley
e21bf9fbc7 code comments and explicit function call 2021-03-30 12:47:40 -04:00
Kyle Wigley
12e281f076 update changelog 2021-03-29 09:50:37 -04:00
Kyle Wigley
a5ce658755 first pass using materialization for data tests 2021-03-29 09:49:02 -04:00
Kyle Wigley
ce30dfa82d Merge pull request #3204 from fishtown-analytics/updates/tox
dev env clean up and improvements
2021-03-29 09:47:49 -04:00
Kyle Wigley
c04d1e9d5c fix circleci tests (forcing azure pipeline tests) 2021-03-29 09:19:05 -04:00
Kyle Wigley
80031d122c fix windows rpc tests 2021-03-29 09:19:05 -04:00
Kyle Wigley
943b090c90 debug windows tests 2021-03-29 09:19:04 -04:00
Kyle Wigley
39fd53d1f9 fix typos, set max-line-length (force azure pipeline tests) 2021-03-29 09:19:04 -04:00
Kyle Wigley
777e7b3b6d update changelog 2021-03-29 09:19:04 -04:00
Kyle Wigley
2783fe2a9f fix last CI step 2021-03-29 09:11:06 -04:00
Kyle Wigley
f5880cb001 CI tweaks 2021-03-29 09:11:06 -04:00
Kyle Wigley
26e501008a use new docker image for tests 2021-03-29 09:11:06 -04:00
Kyle Wigley
2c67e3f5c7 update tox, update makefile, run tests natively by default, general dev workflow cleanup 2021-03-29 09:11:06 -04:00
Kyle Wigley
033596021d Merge pull request #3148 from fishtown-analytics/dependabot/pip/plugins/snowflake/snowflake-connector-python-secure-local-storage--2.4.1
Bump snowflake-connector-python[secure-local-storage] from 2.3.6 to 2.4.1 in /plugins/snowflake
2021-03-29 09:09:49 -04:00
Kyle Wigley
f36c72e085 update changelog 2021-03-29 08:36:33 -04:00
Kyle Wigley
fefaf7b4be update snowflake deps 2021-03-26 16:04:48 -04:00
Chandrakant Gopalan
91431401ad Updated changelog 2021-03-26 16:01:47 -04:00
Chandrakant Gopalan
59d96c08a1 Add tests for application_name 2021-03-26 15:25:38 -04:00
Chandrakant Gopalan
f10447395b Fix tests 2021-03-25 21:18:42 -04:00
Chandrakant Gopalan
c2b6222798 Merge branch 'develop' of https://github.com/fishtown-analytics/dbt into app-name-for-postgres 2021-03-25 21:03:40 -04:00
Chandrakant Gopalan
3a58c49184 Default application_name to dbt 2021-03-25 21:03:08 -04:00
Jeremy Cohen
440a5e49e2 Merge pull request #3041 from yu-iskw/issue-3040
Pass the default scopes to the default BigQuery credentials
2021-03-24 17:59:32 -04:00
Jeremy Cohen
77c10713a3 Merge pull request #3100 from prratek/specify-cols-to-update
Gets columns to update from config for BQ and Snowflake
2021-03-22 17:55:49 +01:00
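
The change merged above lets incremental models on BigQuery and Snowflake update only a subset of columns during the merge; the config is renamed to `merge_update_columns` in the commits below. A hedged sketch, with model and column names as placeholders:

# dbt_project.yml (hypothetical example)
models:
  my_project:
    incrementals:
      +materialized: incremental
      +unique_key: id
      # only these columns are updated when an existing row matches on the unique key
      +merge_update_columns: ["status", "updated_at"]
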
Jeremy Cohen
48e367ce2f Merge branch 'develop' into specify-cols-to-update 2021-03-22 13:56:11 +01:00
Jeremy Cohen
934c23bf39 Merge pull request #3145 from jmcarp/jmcarp/bigquery-job-labels
Parse query comment and use as bigquery job labels.
2021-03-22 13:30:08 +01:00
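
The query-comment work merged above attaches the parsed query comment to BigQuery jobs as labels. A hedged sketch of enabling it; `job-label` is the hyphenated field name referenced in the commits below ("Hyphenate query comment fields"):

# dbt_project.yml (hypothetical example)
query-comment:
  comment: "run by {{ target.user }}"
  # attach the parsed comment key/values as labels on BigQuery jobs
  job-label: true
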
Chandrakant Gopalan
e0febcb6c3 Set application_name for Postgres connections 2021-03-20 13:24:44 -04:00
Joshua Carp
044a6c6ea4 Cleanups from code review. 2021-03-20 00:59:55 -04:00
Prratek Ramchandani
8ebbc10572 Merge branch 'develop' into specify-cols-to-update 2021-03-18 09:55:12 -04:00
Jeremy Cohen
7435828082 Merge pull request #3165 from cgopalan/dup-macro-message
Raise proper error message if duplicate macros found
2021-03-17 14:53:50 +01:00
Jeremy Cohen
369b595e8a Merge branch 'develop' into dup-macro-message 2021-03-17 12:46:38 +01:00
Jeremy Cohen
9a6d30f03d Merge pull request #3158 from techytushar/fix#3147
Feature to add _n alias to same column names #3147
2021-03-17 12:15:35 +01:00
Prratek Ramchandani
6bdd01d52b Merge branch 'develop' into specify-cols-to-update 2021-03-16 16:13:30 -04:00
prratek
bae9767498 add my name to contributors 2021-03-16 16:07:50 -04:00
prratek
b0e50dedb8 update changelog 2021-03-16 15:59:34 -04:00
Prratek Ramchandani
96bfb3b259 Update core/dbt/include/global_project/macros/materializations/common/merge.sql
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-03-16 15:54:04 -04:00
Prratek Ramchandani
909068dfa8 leave quoting for merge_update_columns to the user
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-03-16 15:53:25 -04:00
Chandrakant Gopalan
f4c74968be Add to contributor list 2021-03-16 13:53:51 -04:00
Chandrakant Gopalan
0e958f3704 Add to changelog 2021-03-16 13:51:42 -04:00
Chandrakant Gopalan
a8b2942f93 Fix duplicate macro path message 2021-03-16 13:40:18 -04:00
Tushar Mittal
564fe62400 Add unit tests for SQL process_results 2021-03-16 21:42:36 +05:30
Chandrakant Gopalan
5c5013191b Fix failing test 2021-03-14 17:46:53 -04:00
Chandrakant Gopalan
31989b85d1 Fix flake8 errors 2021-03-14 15:46:47 -04:00
Chandrakant Gopalan
5ed4af2372 Raise proper error message if duplicate macros found 2021-03-14 15:33:15 -04:00
prratek
4d18e391aa correct the load date for updated entries in "update seed" 2021-03-13 12:59:41 -05:00
prratek
2feeb5b927 Merge remote-tracking branch 'origin/specify-cols-to-update' into specify-cols-to-update 2021-03-13 12:26:22 -05:00
Prratek Ramchandani
2853f07875 use correct config var name
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-03-13 12:26:12 -05:00
prratek
4e6adc07a1 switch to correct data dir for second run 2021-03-13 12:25:02 -05:00
Tushar Mittal
429dcc7000 Update changelog 2021-03-11 22:10:57 +05:30
Tushar Mittal
5f8235fcfc Feature to add _n alias to same column names #3147
Signed-off-by: Tushar Mittal <chiragmittal.mittal@gmail.com>
2021-03-11 19:44:58 +05:30
Maximilian Roos
8dc1f49ac7 Update google cloud dependencies 2021-03-10 19:57:57 -08:00
Joshua Carp
9fe2b651ed Merge branch 'develop' of github.com:fishtown-analytics/dbt into jmcarp/bigquery-job-labels 2021-03-09 23:32:11 -05:00
Joshua Carp
8566a46793 Add BigQuery job labels to changelog. 2021-03-08 15:50:00 -05:00
Joshua Carp
af3c3f4cbe Add tests for bigquery label sanitize helper. 2021-03-08 15:43:53 -05:00
prratek
8255c913a3 change test logic for new seed directories 2021-03-06 19:12:33 -05:00
prratek
4d4d17669b refactor seeds directory structure and names 2021-03-06 19:08:23 -05:00
prratek
540a0422f5 modify seeds to contain load date and some modified records 2021-03-06 19:06:35 -05:00
prratek
de4d7d6273 Revert "modify some records and the expected result"
This reverts commit 1345d955
2021-03-06 18:39:35 -05:00
prratek
1345d95589 modify some records and the expected result 2021-03-06 18:30:50 -05:00
prratek
a5bc19dd69 paste in some data for seeds 2021-03-06 18:27:02 -05:00
prratek
25b143c8cc WIP test case and empty seeds 2021-03-06 18:17:07 -05:00
Joshua Carp
82cca959e4 Merge branch 'develop' of github.com:fishtown-analytics/dbt into jmcarp/bigquery-job-labels 2021-03-05 09:39:48 -05:00
dependabot[bot]
d52374a0b6 Bump snowflake-connector-python[secure-local-storage]
Bumps [snowflake-connector-python[secure-local-storage]](https://github.com/snowflakedb/snowflake-connector-python) from 2.3.6 to 2.4.1.
- [Release notes](https://github.com/snowflakedb/snowflake-connector-python/releases)
- [Commits](https://github.com/snowflakedb/snowflake-connector-python/compare/v2.3.6...v2.4.1)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-05 06:15:17 +00:00
Joshua Carp
c71a18ca07 Hyphenate query comment fields and fix deserialization bug. 2021-03-05 00:09:17 -05:00
Joshua Carp
8d73ae2cc0 Address comments from code review. 2021-03-04 10:20:15 -05:00
Joshua Carp
7b0c74ca3e Fix lint. 2021-03-04 00:34:46 -05:00
Joshua Carp
62be9f9064 Sanitize bigquery labels. 2021-03-04 00:14:50 -05:00
Joshua Carp
2fdc113d93 Parse query comment and use as bigquery job labels. 2021-03-04 00:06:59 -05:00
Prratek Ramchandani
af3a818f12 loop over column_name instead of column.name
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-03-03 09:27:05 -05:00
prratek
a07532d4c7 revert changes to incremental materializations 2021-03-02 22:14:58 -05:00
prratek
fb449ca4bc rename new config var to merge_update_columns 2021-03-02 22:11:12 -05:00
prratek
4da65643c0 use merge_update_columns when getting merge sql 2021-03-02 22:10:09 -05:00
Prratek Ramchandani
bf64db474c fix typo
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-03-02 21:59:21 -05:00
prratek
808b980301 move "update cols" incremental test to snowflake models 2021-02-20 14:40:55 -05:00
prratek
3528480562 test incremental model w/ subset of cols to update 2021-02-19 23:17:00 -05:00
prratek
6bd263d23f add incremental_update_columns to Snowflake & BQ config schemas 2021-02-19 21:17:39 -05:00
prratek
2b9aa3864b rename config field to incremental_update_columns 2021-02-19 21:13:05 -05:00
Prratek Ramchandani
81155caf88 use get_columns_in_relation as default for snowflake
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-02-19 21:10:34 -05:00
Prratek Ramchandani
4f8c10c1aa default to get_columns_in_relation if not specified in config
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-02-15 20:58:39 -05:00
prratek
9086634c8f get columns to update from config for BQ and Snowflake 2021-02-13 11:09:25 -05:00
Yu ISHIKAWA
55fbaabfda Pass the default scopes to the default BigQuery credentials 2021-01-29 18:09:05 +09:00
4748 changed files with 91182 additions and 28402 deletions


@@ -1,23 +1,27 @@
[bumpversion]
current_version = 0.19.0
current_version = 0.21.0b1
parse = (?P<major>\d+)
\.(?P<minor>\d+)
\.(?P<patch>\d+)
((?P<prerelease>[a-z]+)(?P<num>\d+))?
serialize =
{major}.{minor}.{patch}{prerelease}{num}
((?P<prekind>a|b|rc)
(?P<pre>\d+) # pre-release version num
)?
serialize =
{major}.{minor}.{patch}{prekind}{pre}
{major}.{minor}.{patch}
commit = False
tag = False
[bumpversion:part:prerelease]
[bumpversion:part:prekind]
first_value = a
values =
optional_value = final
values =
a
b
rc
final
[bumpversion:part:num]
[bumpversion:part:pre]
first_value = 1
[bumpversion:file:setup.py]
@@ -26,6 +30,8 @@ first_value = 1
[bumpversion:file:core/dbt/version.py]
[bumpversion:file:core/scripts/create_adapter_plugins.py]
[bumpversion:file:plugins/postgres/setup.py]
[bumpversion:file:plugins/redshift/setup.py]
@@ -41,3 +47,4 @@ first_value = 1
[bumpversion:file:plugins/snowflake/dbt/adapters/snowflake/__version__.py]
[bumpversion:file:plugins/bigquery/dbt/adapters/bigquery/__version__.py]


@@ -1,218 +0,0 @@
version: 2.1
jobs:
unit:
docker: &test_only
- image: fishtownanalytics/test-container:9
environment:
DBT_INVOCATION_ENV: circle
steps:
- checkout
- run: tox -e flake8,mypy,unit-py36,unit-py38
build-wheels:
docker: *test_only
steps:
- checkout
- run:
name: Build wheels
command: |
python3.8 -m venv "${PYTHON_ENV}"
export PYTHON_BIN="${PYTHON_ENV}/bin/python"
$PYTHON_BIN -m pip install -U pip setuptools
$PYTHON_BIN -m pip install -r requirements.txt
$PYTHON_BIN -m pip install -r dev_requirements.txt
/bin/bash ./scripts/build-wheels.sh
$PYTHON_BIN ./scripts/collect-dbt-contexts.py > ./dist/context_metadata.json
$PYTHON_BIN ./scripts/collect-artifact-schema.py > ./dist/artifact_schemas.json
environment:
PYTHON_ENV: /home/tox/build_venv/
- store_artifacts:
path: ./dist
destination: dist
integration-postgres-py36:
docker: &test_and_postgres
- image: fishtownanalytics/test-container:9
environment:
DBT_INVOCATION_ENV: circle
- image: postgres
name: database
environment: &pgenv
POSTGRES_USER: "root"
POSTGRES_PASSWORD: "password"
POSTGRES_DB: "dbt"
steps:
- checkout
- run: &setupdb
name: Setup postgres
command: bash test/setup_db.sh
environment:
PGHOST: database
PGUSER: root
PGPASSWORD: password
PGDATABASE: postgres
- run:
name: Run tests
command: tox -e integration-postgres-py36
- store_artifacts:
path: ./logs
integration-snowflake-py36:
docker: *test_only
steps:
- checkout
- run:
name: Run tests
command: tox -e integration-snowflake-py36
no_output_timeout: 1h
- store_artifacts:
path: ./logs
integration-redshift-py36:
docker: *test_only
steps:
- checkout
- run:
name: Run tests
command: tox -e integration-redshift-py36
- store_artifacts:
path: ./logs
integration-bigquery-py36:
docker: *test_only
steps:
- checkout
- run:
name: Run tests
command: tox -e integration-bigquery-py36
- store_artifacts:
path: ./logs
integration-postgres-py38:
docker: *test_and_postgres
steps:
- checkout
- run: *setupdb
- run:
name: Run tests
command: tox -e integration-postgres-py38
- store_artifacts:
path: ./logs
integration-snowflake-py38:
docker: *test_only
steps:
- checkout
- run:
name: Run tests
command: tox -e integration-snowflake-py38
no_output_timeout: 1h
- store_artifacts:
path: ./logs
integration-redshift-py38:
docker: *test_only
steps:
- checkout
- run:
name: Run tests
command: tox -e integration-redshift-py38
- store_artifacts:
path: ./logs
integration-bigquery-py38:
docker: *test_only
steps:
- checkout
- run:
name: Run tests
command: tox -e integration-bigquery-py38
- store_artifacts:
path: ./logs
integration-postgres-py39:
docker: *test_and_postgres
steps:
- checkout
- run: *setupdb
- run:
name: Run tests
command: tox -e integration-postgres-py39
- store_artifacts:
path: ./logs
integration-snowflake-py39:
docker: *test_only
steps:
- checkout
- run:
name: Run tests
command: tox -e integration-snowflake-py39
no_output_timeout: 1h
- store_artifacts:
path: ./logs
integration-redshift-py39:
docker: *test_only
steps:
- checkout
- run:
name: Run tests
command: tox -e integration-redshift-py39
- store_artifacts:
path: ./logs
integration-bigquery-py39:
docker: *test_only
steps:
- checkout
- run:
name: Run tests
command: tox -e integration-bigquery-py39
- store_artifacts:
path: ./logs
workflows:
version: 2
test-everything:
jobs:
- unit
- integration-postgres-py36:
requires:
- unit
- integration-redshift-py36:
requires:
- integration-postgres-py36
- integration-bigquery-py36:
requires:
- integration-postgres-py36
- integration-snowflake-py36:
requires:
- integration-postgres-py36
- integration-postgres-py38:
requires:
- unit
- integration-redshift-py38:
requires:
- integration-postgres-py38
- integration-bigquery-py38:
requires:
- integration-postgres-py38
- integration-snowflake-py38:
requires:
- integration-postgres-py38
- integration-postgres-py39:
requires:
- unit
- integration-redshift-py39:
requires:
- integration-postgres-py39
- integration-bigquery-py39:
requires:
- integration-postgres-py39
# - integration-snowflake-py39:
# requires:
# - integration-postgres-py39
- build-wheels:
requires:
- unit
- integration-postgres-py36
- integration-redshift-py36
- integration-bigquery-py36
- integration-snowflake-py36
- integration-postgres-py38
- integration-redshift-py38
- integration-bigquery-py38
- integration-snowflake-py38
- integration-postgres-py39
- integration-redshift-py39
- integration-bigquery-py39
# - integration-snowflake-py39


@@ -0,0 +1,27 @@
---
name: Beta minor version release
about: Creates a tracking checklist of items for a Beta minor version release
title: "[Tracking] v#.##.#B# release "
labels: 'release'
assignees: ''
---
### Release Core
- [ ] [Engineering] Follow [dbt-release workflow](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#03ff37da697d4d8ba63d24fae1bfa817)
- [ ] [Engineering] Verify new release branch is created in the repo
- [ ] [Product] Finalize migration guide (next.docs.getdbt.com)
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#f01854e8da3641179fbcbe505bdf515c) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] [Spark](https://github.com/dbt-labs/dbt-spark)
- [ ] [Presto](https://github.com/dbt-labs/dbt-presto)
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Create an epic for the RC release


@@ -0,0 +1,28 @@
---
name: Final minor version release
about: Creates a tracking checklist of items for a final minor version release
title: "[Tracking] v#.##.# final release "
labels: 'release'
assignees: ''
---
### Release Core
- [ ] [Engineering] Verify all necessary changes exist on the release branch
- [ ] [Engineering] Follow [dbt-release workflow](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#03ff37da697d4d8ba63d24fae1bfa817)
- [ ] [Product] Merge `next` into `current` for docs.getdbt.com
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Update discourse
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#f01854e8da3641179fbcbe505bdf515c) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] [Spark](https://github.com/dbt-labs/dbt-spark)
- [ ] [Presto](https://github.com/dbt-labs/dbt-presto)
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Product] Release new version of dbt-utils with new dbt version compatibility. If there are breaking changes requiring a minor version, plan upgrades of other packages that depend on dbt-utils.


@@ -0,0 +1,29 @@
---
name: RC minor version release
about: Creates a tracking checklist of items for a RC minor version release
title: "[Tracking] v#.##.#RC# release "
labels: 'release'
assignees: ''
---
### Release Core
- [ ] [Engineering] Verify all necessary changes exist on the release branch
- [ ] [Engineering] Follow [dbt-release workflow](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#03ff37da697d4d8ba63d24fae1bfa817)
- [ ] [Product] Update migration guide (next.docs.getdbt.com)
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Publish discourse
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#f01854e8da3641179fbcbe505bdf515c) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] [Spark](https://github.com/dbt-labs/dbt-spark)
- [ ] [Presto](https://github.com/dbt-labs/dbt-presto)
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Product] Release new version of dbt-utils with new dbt version compatibility. If there are breaking changes requiring a minor version, plan upgrades of other packages that depend on dbt-utils.
- [ ] [Engineering] Create an epic for the final release


@@ -0,0 +1,10 @@
name: "Set up postgres (linux)"
description: "Set up postgres service on linux vm for dbt integration tests"
runs:
using: "composite"
steps:
- shell: bash
run: |
sudo systemctl start postgresql.service
pg_isready
sudo -u postgres bash ${{ github.action_path }}/setup_db.sh


@@ -0,0 +1 @@
../../../test/setup_db.sh


@@ -0,0 +1,24 @@
name: "Set up postgres (macos)"
description: "Set up postgres service on macos vm for dbt integration tests"
runs:
using: "composite"
steps:
- shell: bash
run: |
brew services start postgresql
echo "Check PostgreSQL service is running"
i=10
COMMAND='pg_isready'
while [ $i -gt -1 ]; do
if [ $i == 0 ]; then
echo "PostgreSQL service not ready, all attempts exhausted"
exit 1
fi
echo "Check PostgreSQL service status"
eval $COMMAND && break
echo "PostgreSQL service not ready, wait 10 more sec, attempts left: $i"
sleep 10
((i--))
done
createuser -s postgres
bash ${{ github.action_path }}/setup_db.sh


@@ -0,0 +1 @@
../../../test/setup_db.sh


@@ -0,0 +1,12 @@
name: "Set up postgres (windows)"
description: "Set up postgres service on windows vm for dbt integration tests"
runs:
using: "composite"
steps:
- shell: pwsh
run: |
$pgService = Get-Service -Name postgresql*
Set-Service -InputObject $pgService -Status running -StartupType automatic
Start-Process -FilePath "$env:PGBIN\pg_isready" -Wait -PassThru
$env:Path += ";$env:PGBIN"
bash ${{ github.action_path }}/setup_db.sh


@@ -0,0 +1 @@
../../../test/setup_db.sh


@@ -9,14 +9,13 @@ resolves #
resolves #1234
-->
### Description
<!--- Describe the Pull Request here -->
### Checklist
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have updated the `CHANGELOG.md` and added information about my change to the "dbt next" section.
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have updated the `CHANGELOG.md` and added information about my change to the "dbt next" section.


@@ -0,0 +1,95 @@
module.exports = ({ context }) => {
const defaultPythonVersion = "3.8";
const supportedPythonVersions = ["3.6", "3.7", "3.8", "3.9"];
const supportedAdapters = ["snowflake", "postgres", "bigquery", "redshift"];
// if PR, generate matrix based on files changed and PR labels
if (context.eventName.includes("pull_request")) {
// `changes` is a list of adapter names that have related
// file changes in the PR
// ex: ['postgres', 'snowflake']
const changes = JSON.parse(process.env.CHANGES);
const labels = context.payload.pull_request.labels.map(({ name }) => name);
console.log("labels", labels);
console.log("changes", changes);
const testAllLabel = labels.includes("test all");
const include = [];
for (const adapter of supportedAdapters) {
if (
changes.includes(adapter) ||
testAllLabel ||
labels.includes(`test ${adapter}`)
) {
for (const pythonVersion of supportedPythonVersions) {
if (
pythonVersion === defaultPythonVersion ||
labels.includes(`test python${pythonVersion}`) ||
testAllLabel
) {
// always run tests on ubuntu by default
include.push({
os: "ubuntu-latest",
adapter,
"python-version": pythonVersion,
});
if (labels.includes("test windows") || testAllLabel) {
include.push({
os: "windows-latest",
adapter,
"python-version": pythonVersion,
});
}
if (labels.includes("test macos") || testAllLabel) {
include.push({
os: "macos-latest",
adapter,
"python-version": pythonVersion,
});
}
}
}
}
}
console.log("matrix", { include });
return {
include,
};
}
// if not PR, generate matrix of python version, adapter, and operating
// system to run integration tests on
const include = [];
// run for all adapters and python versions on ubuntu
for (const adapter of supportedAdapters) {
for (const pythonVersion of supportedPythonVersions) {
include.push({
os: 'ubuntu-latest',
adapter: adapter,
"python-version": pythonVersion,
});
}
}
// additionally include runs for all adapters, on macos and windows,
// but only for the default python version
for (const adapter of supportedAdapters) {
for (const operatingSystem of ["windows-latest", "macos-latest"]) {
include.push({
os: operatingSystem,
adapter: adapter,
"python-version": defaultPythonVersion,
});
}
}
console.log("matrix", { include });
return {
include,
};
};

.github/workflows/integration.yml

@@ -0,0 +1,266 @@
# **what?**
# This workflow runs all integration tests for supported OS
# and python versions and core adapters. If triggered by PR,
# the workflow will only run tests for adapters related
# to code changes. Use the `test all` and `test ${adapter}`
# label to run all or additional tests. Use `ok to test`
# label to mark PRs from forked repositories that are safe
# to run integration tests for. Requires secrets to run
# against different warehouses.
# **why?**
# This checks the functionality of dbt from a user's perspective
# and attempts to catch functional regressions.
# **when?**
# This workflow will run on every push to a protected branch
# and when manually triggered. It will also run for all PRs, including
# PRs from forks. The workflow will be skipped until there is a label
# to mark the PR as safe to run.
name: Adapter Integration Tests
on:
# pushes to release branches
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
# all PRs, important to note that `pull_request_target` workflows
# will run in the context of the target branch of a PR
pull_request_target:
# manual trigger
workflow_dispatch:
# explicitly turn off permissions for `GITHUB_TOKEN`
permissions: read-all
# will cancel previous workflows triggered by the same event and for the same ref for PRs or same SHA otherwise
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ contains(github.event_name, 'pull_request') && github.event.pull_request.head.ref || github.sha }}
cancel-in-progress: true
# sets default shell to bash, for all operating systems
defaults:
run:
shell: bash
jobs:
# generate test metadata about what files changed and the testing matrix to use
test-metadata:
# run if not a PR from a forked repository or has a label to mark as safe to test
if: >-
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.generate-matrix.outputs.result }}
steps:
- name: Check out the repository (non-PR)
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Check if relevant files changed
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: get-changes
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
postgres:
- 'core/**'
- 'plugins/postgres/**'
- 'dev-requirements.txt'
snowflake:
- 'core/**'
- 'plugins/snowflake/**'
bigquery:
- 'core/**'
- 'plugins/bigquery/**'
redshift:
- 'core/**'
- 'plugins/redshift/**'
- 'plugins/postgres/**'
- name: Generate integration test matrix
id: generate-matrix
uses: actions/github-script@v4
env:
CHANGES: ${{ steps.get-changes.outputs.changes }}
with:
script: |
const script = require('./.github/scripts/integration-test-matrix.js')
const matrix = script({ context })
console.log(matrix)
return matrix
test:
name: ${{ matrix.adapter }} / python ${{ matrix.python-version }} / ${{ matrix.os }}
# run if not a PR from a forked repository or has a label to mark as safe to test
# also checks that the matrix generated is not empty
if: >-
needs.test-metadata.outputs.matrix &&
fromJSON( needs.test-metadata.outputs.matrix ).include[0] &&
(
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
)
runs-on: ${{ matrix.os }}
needs: test-metadata
strategy:
fail-fast: false
matrix: ${{ fromJSON(needs.test-metadata.outputs.matrix) }}
env:
TOXENV: integration-${{ matrix.adapter }}
PYTEST_ADDOPTS: "-v --color=yes -n4 --csv integration_results.csv"
DBT_INVOCATION_ENV: github-actions
steps:
- name: Check out the repository
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
# explicitly check out the branch for the PR,
# this is necessary for the `pull_request_target` event
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Set up postgres (linux)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Linux'
uses: ./.github/actions/setup-postgres-linux
- name: Set up postgres (macos)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'macOS'
uses: ./.github/actions/setup-postgres-macos
- name: Set up postgres (windows)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Windows'
uses: ./.github/actions/setup-postgres-windows
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox (postgres)
if: matrix.adapter == 'postgres'
run: tox
- name: Run tox (redshift)
if: matrix.adapter == 'redshift'
env:
REDSHIFT_TEST_DBNAME: ${{ secrets.REDSHIFT_TEST_DBNAME }}
REDSHIFT_TEST_PASS: ${{ secrets.REDSHIFT_TEST_PASS }}
REDSHIFT_TEST_USER: ${{ secrets.REDSHIFT_TEST_USER }}
REDSHIFT_TEST_PORT: ${{ secrets.REDSHIFT_TEST_PORT }}
REDSHIFT_TEST_HOST: ${{ secrets.REDSHIFT_TEST_HOST }}
run: tox
- name: Run tox (snowflake)
if: matrix.adapter == 'snowflake'
env:
SNOWFLAKE_TEST_ACCOUNT: ${{ secrets.SNOWFLAKE_TEST_ACCOUNT }}
SNOWFLAKE_TEST_PASSWORD: ${{ secrets.SNOWFLAKE_TEST_PASSWORD }}
SNOWFLAKE_TEST_USER: ${{ secrets.SNOWFLAKE_TEST_USER }}
SNOWFLAKE_TEST_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_WAREHOUSE }}
SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN: ${{ secrets.SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN }}
SNOWFLAKE_TEST_OAUTH_CLIENT_ID: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_ID }}
SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET }}
SNOWFLAKE_TEST_ALT_DATABASE: ${{ secrets.SNOWFLAKE_TEST_ALT_DATABASE }}
SNOWFLAKE_TEST_ALT_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_ALT_WAREHOUSE }}
SNOWFLAKE_TEST_DATABASE: ${{ secrets.SNOWFLAKE_TEST_DATABASE }}
SNOWFLAKE_TEST_QUOTED_DATABASE: ${{ secrets.SNOWFLAKE_TEST_QUOTED_DATABASE }}
SNOWFLAKE_TEST_ROLE: ${{ secrets.SNOWFLAKE_TEST_ROLE }}
run: tox
- name: Run tox (bigquery)
if: matrix.adapter == 'bigquery'
env:
BIGQUERY_TEST_SERVICE_ACCOUNT_JSON: ${{ secrets.BIGQUERY_TEST_SERVICE_ACCOUNT_JSON }}
BIGQUERY_TEST_ALT_DATABASE: ${{ secrets.BIGQUERY_TEST_ALT_DATABASE }}
run: tox
- uses: actions/upload-artifact@v2
if: always()
with:
name: logs
path: ./logs
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y-%m-%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: integration_results_${{ matrix.python-version }}_${{ matrix.os }}_${{ matrix.adapter }}-${{ steps.date.outputs.date }}.csv
path: integration_results.csv
require-label-comment:
runs-on: ubuntu-latest
needs: test
permissions:
pull-requests: write
steps:
- name: Needs permission PR comment
if: >-
needs.test.result == 'skipped' &&
github.event_name == 'pull_request_target' &&
github.event.pull_request.head.repo.full_name != github.repository
uses: unsplash/comment-on-pr@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
msg: |
"You do not have permissions to run integration tests, @dbt-labs/core "\
"needs to label this PR with `ok to test` in order to run integration tests!"
check_for_duplicate_msg: true

.github/workflows/main.yml

@@ -0,0 +1,206 @@
# **what?**
# Runs code quality checks, unit tests, and verifies python build on
# all code committed to the repository. This workflow should not
# require any secrets since it runs for PRs from forked repos.
# By default, secrets are not passed to workflows running from
# a forked repo.
# **why?**
# Ensure code for dbt meets a certain quality standard.
# **when?**
# This will run for all PRs, when code is pushed to a release
# branch, and when manually triggered.
name: Tests and Code Checks
on:
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
pull_request:
workflow_dispatch:
permissions: read-all
# will cancel previous workflows triggered by the same event and for the same ref for PRs or same SHA otherwise
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ contains(github.event_name, 'pull_request') && github.event.pull_request.head.ref || github.sha }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
code-quality:
name: ${{ matrix.toxenv }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
toxenv: [flake8, mypy]
env:
TOXENV: ${{ matrix.toxenv }}
PYTEST_ADDOPTS: "-v --color=yes"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox
run: tox
unit:
name: unit test / python ${{ matrix.python-version }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: [3.6, 3.7, 3.8] # TODO: support unit testing for python 3.9 (https://github.com/dbt-labs/dbt/issues/3689)
env:
TOXENV: "unit"
PYTEST_ADDOPTS: "-v --color=yes --csv unit_results.csv"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox
run: tox
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y-%m-%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: unit_results_${{ matrix.python-version }}-${{ steps.date.outputs.date }}.csv
path: unit_results.csv
build:
name: build packages
runs-on: ubuntu-latest
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install --upgrade setuptools wheel twine check-wheel-contents
pip --version
- name: Build distributions
run: ./scripts/build-dist.sh
- name: Show distributions
run: ls -lh dist/
- name: Check distribution descriptions
run: |
twine check dist/*
- name: Check wheel contents
run: |
check-wheel-contents dist/*.whl --ignore W007,W008
- uses: actions/upload-artifact@v2
with:
name: dist
path: dist/
test-build:
name: verify packages / python ${{ matrix.python-version }} / ${{ matrix.os }}
needs: build
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.6, 3.7, 3.8, 3.9]
steps:
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install --upgrade wheel
pip --version
- uses: actions/download-artifact@v2
with:
name: dist
path: dist/
- name: Show distributions
run: ls -lh dist/
- name: Install wheel distributions
run: |
find ./dist/*.whl -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check wheel distributions
run: |
dbt --version
- name: Install source distributions
run: |
find ./dist/*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check source distributions
run: |
dbt --version

.github/workflows/performance.yml

@@ -0,0 +1,174 @@
name: Performance Regression Tests
# Schedule triggers
on:
# runs twice a day at 10:05am and 10:05pm
schedule:
- cron: "5 10,22 * * *"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
# checks fmt of runner code
# purposefully not a dependency of any other job
# will block merging, but not prevent developing
fmt:
name: Cargo fmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- run: rustup component add rustfmt
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --manifest-path performance/runner/Cargo.toml --all -- --check
# runs any tests associated with the runner
# these tests make sure the runner logic is correct
test-runner:
name: Test Runner
runs-on: ubuntu-latest
env:
# treat all warnings as errors
RUSTFLAGS: "-D warnings"
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- uses: actions-rs/cargo@v1
with:
command: test
args: --manifest-path performance/runner/Cargo.toml
# build an optimized binary to be used as the runner in later steps
build-runner:
needs: [test-runner]
name: Build Runner
runs-on: ubuntu-latest
env:
RUSTFLAGS: "-D warnings"
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- uses: actions-rs/cargo@v1
with:
command: build
args: --release --manifest-path performance/runner/Cargo.toml
- uses: actions/upload-artifact@v2
with:
name: runner
path: performance/runner/target/release/runner
# run the performance measurements on the current or default branch
measure-dev:
needs: [build-runner]
name: Measure Dev Branch
runs-on: ubuntu-latest
steps:
- name: checkout dev
uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: "3.8"
- name: install dbt
run: pip install -r dev-requirements.txt -r editable-requirements.txt
- name: install hyperfine
run: wget https://github.com/sharkdp/hyperfine/releases/download/v1.11.0/hyperfine_1.11.0_amd64.deb && sudo dpkg -i hyperfine_1.11.0_amd64.deb
- uses: actions/download-artifact@v2
with:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: run
run: ./runner measure -b dev -p ${{ github.workspace }}/performance/projects/
- uses: actions/upload-artifact@v2
with:
name: dev-results
path: performance/results/
# run the performance measurements on the release branch which we use
# as a performance baseline. This part takes by far the longest, so
# we do everything we can first so the job fails fast.
# -----
# we need to checkout dbt twice in this job: once for the baseline dbt
# version, and once to get the latest regression testing projects,
# metrics, and runner code from the develop or current branch so that
# the calculations match for both versions of dbt we are comparing.
measure-baseline:
needs: [build-runner]
name: Measure Baseline Branch
runs-on: ubuntu-latest
steps:
- name: checkout latest
uses: actions/checkout@v2
with:
ref: "0.20.latest"
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: "3.8"
- name: move repo up a level
run: mkdir ${{ github.workspace }}/../baseline/ && cp -r ${{ github.workspace }} ${{ github.workspace }}/../baseline
- name: "[debug] ls new dbt location"
run: ls ${{ github.workspace }}/../baseline/dbt/
# installation creates egg-links so we have to preserve source
- name: install dbt from new location
run: cd ${{ github.workspace }}/../baseline/dbt/ && pip install -r dev-requirements.txt -r editable-requirements.txt
# checkout the current branch to get all the target projects
# this deletes the old checked out code which is why we had to copy before
- name: checkout dev
uses: actions/checkout@v2
- name: install hyperfine
run: wget https://github.com/sharkdp/hyperfine/releases/download/v1.11.0/hyperfine_1.11.0_amd64.deb && sudo dpkg -i hyperfine_1.11.0_amd64.deb
- uses: actions/download-artifact@v2
with:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: run runner
run: ./runner measure -b baseline -p ${{ github.workspace }}/performance/projects/
- uses: actions/upload-artifact@v2
with:
name: baseline-results
path: performance/results/
# detect regressions on the output generated from measuring
# the two branches. Exits with non-zero code if a regression is detected.
calculate-regressions:
needs: [measure-dev, measure-baseline]
name: Compare Results
runs-on: ubuntu-latest
steps:
- uses: actions/download-artifact@v2
with:
name: dev-results
- uses: actions/download-artifact@v2
with:
name: baseline-results
- name: "[debug] ls result files"
run: ls
- uses: actions/download-artifact@v2
with:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: run calculation
run: ./runner calculate -r ./
# always attempt to upload the results even if there were regressions found
- uses: actions/upload-artifact@v2
if: ${{ always() }}
with:
name: final-calculations
path: ./final_calculations.json

.gitignore

@@ -85,6 +85,7 @@ target/
# pycharm
.idea/
venv/
# AWS credentials
.aws/


@@ -1,20 +0,0 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
hooks:
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/psf/black
rev: 20.8b1
hooks:
- id: black
- repo: https://gitlab.com/PyCQA/flake8
rev: 3.9.0
hooks:
- id: flake8
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.812
hooks:
- id: mypy
files: ^core/dbt/


@@ -1,4 +1,4 @@
The core function of dbt is SQL compilation and execution. Users create projects of dbt resources (models, tests, seeds, snapshots, ...), defined in SQL and YAML files, and they invoke dbt to create, update, or query associated views and tables. Today, dbt makes heavy use of Jinja2 to enable the templating of SQL, and to construct a DAG (Directed Acyclic Graph) from all of the resources in a project. Users can also extend their projects by installing resources (including Jinja macros) from other projects, called "packages."
The core function of dbt is SQL compilation and execution. Users create projects of dbt resources (models, tests, seeds, snapshots, ...), defined in SQL and YAML files, and they invoke dbt to create, update, or query associated views and tables. Today, dbt makes heavy use of Jinja2 to enable the templating of SQL, and to construct a DAG (Directed Acyclic Graph) from all of the resources in a project. Users can also extend their projects by installing resources (including Jinja macros) from other projects, called "packages."
## dbt-core
@@ -26,9 +26,9 @@ This is the docs website code. It comes from the dbt-docs repository, and is gen
## Adapters
dbt uses an adapter-plugin pattern to extend support to different databases, warehouses, query engines, etc. The four core adapters that are in the main repository, contained within the [`plugins`](plugins) subdirectory, are: Postgres, Redshift, Snowflake, and BigQuery. Other warehouses use adapter plugins defined in separate repositories (e.g. [dbt-spark](https://github.com/fishtown-analytics/dbt-spark), [dbt-presto](https://github.com/fishtown-analytics/dbt-presto)).
dbt uses an adapter-plugin pattern to extend support to different databases, warehouses, query engines, etc. The four core adapters that are in the main repository, contained within the [`plugins`](plugins) subdirectory, are: Postgres, Redshift, Snowflake, and BigQuery. Other warehouses use adapter plugins defined in separate repositories (e.g. [dbt-spark](https://github.com/dbt-labs/dbt-spark), [dbt-presto](https://github.com/dbt-labs/dbt-presto)).
Each adapter is a mix of python, Jinja2, and SQL. The adapter code also makes heavy use of Jinja2 to wrap modular chunks of SQL functionality, define default implementations, and allow plugins to override it.
Each adapter is a mix of python, Jinja2, and SQL. The adapter code also makes heavy use of Jinja2 to wrap modular chunks of SQL functionality, define default implementations, and allow plugins to override it.
Each adapter plugin is a standalone python package that includes:

File diff suppressed because it is too large.


@@ -1,118 +1,117 @@
# Contributing to dbt
# Contributing to `dbt`
1. [About this document](#about-this-document)
2. [Proposing a change](#proposing-a-change)
3. [Getting the code](#getting-the-code)
4. [Setting up an environment](#setting-up-an-environment)
5. [Running dbt in development](#running-dbt-in-development)
5. [Running `dbt` in development](#running-dbt-in-development)
6. [Testing](#testing)
7. [Submitting a Pull Request](#submitting-a-pull-request)
## About this document
This document is a guide intended for folks interested in contributing to dbt. Below, we document the process by which members of the community should create issues and submit pull requests (PRs) in this repository. It is not intended as a guide for using dbt, and it assumes a certain level of familiarity with Python concepts such as virtualenvs, `pip`, python modules, filesystems, and so on. This guide assumes you are using macOS or Linux and are comfortable with the command line.
This document is a guide intended for folks interested in contributing to `dbt`. Below, we document the process by which members of the community should create issues and submit pull requests (PRs) in this repository. It is not intended as a guide for using `dbt`, and it assumes a certain level of familiarity with Python concepts such as virtualenvs, `pip`, python modules, filesystems, and so on. This guide assumes you are using macOS or Linux and are comfortable with the command line.
If you're new to python development or contributing to open-source software, we encourage you to read this document from start to finish. If you get stuck, drop us a line in the #development channel on [slack](community.getdbt.com).
If you're new to python development or contributing to open-source software, we encourage you to read this document from start to finish. If you get stuck, drop us a line in the `#dbt-core-development` channel on [slack](https://community.getdbt.com).
### Signing the CLA
Please note that all contributors to dbt must sign the [Contributor License Agreement](https://docs.getdbt.com/docs/contributor-license-agreements) to have their Pull Request merged into the dbt codebase. If you are unable to sign the CLA, then the dbt maintainers will unfortunately be unable to merge your Pull Request. You are, however, welcome to open issues and comment on existing ones.
Please note that all contributors to `dbt` must sign the [Contributor License Agreement](https://docs.getdbt.com/docs/contributor-license-agreements) to have their Pull Request merged into the `dbt` codebase. If you are unable to sign the CLA, then the `dbt` maintainers will unfortunately be unable to merge your Pull Request. You are, however, welcome to open issues and comment on existing ones.
## Proposing a change
dbt is Apache 2.0-licensed open source software. dbt is what it is today because community members like you have opened issues, provided feedback, and contributed to the knowledge loop for the entire community. Whether you are a seasoned open source contributor or a first-time committer, we welcome and encourage you to contribute code, documentation, ideas, or problem statements to this project.
`dbt` is Apache 2.0-licensed open source software. `dbt` is what it is today because community members like you have opened issues, provided feedback, and contributed to the knowledge loop for the entire community. Whether you are a seasoned open source contributor or a first-time committer, we welcome and encourage you to contribute code, documentation, ideas, or problem statements to this project.
### Defining the problem
If you have an idea for a new feature or if you've discovered a bug in dbt, the first step is to open an issue. Please check the list of [open issues](https://github.com/fishtown-analytics/dbt/issues) before creating a new one. If you find a relevant issue, please add a comment to the open issue instead of creating a new one. There are hundreds of open issues in this repository and it can be hard to know where to look for a relevant open issue. **The dbt maintainers are always happy to point contributors in the right direction**, so please err on the side of documenting your idea in a new issue if you are unsure where a problem statement belongs.
If you have an idea for a new feature or if you've discovered a bug in `dbt`, the first step is to open an issue. Please check the list of [open issues](https://github.com/dbt-labs/dbt/issues) before creating a new one. If you find a relevant issue, please add a comment to the open issue instead of creating a new one. There are hundreds of open issues in this repository and it can be hard to know where to look for a relevant open issue. **The `dbt` maintainers are always happy to point contributors in the right direction**, so please err on the side of documenting your idea in a new issue if you are unsure where a problem statement belongs.
**Note:** All community-contributed Pull Requests _must_ be associated with an open issue. If you submit a Pull Request that does not pertain to an open issue, you will be asked to create an issue describing the problem before the Pull Request can be reviewed.
> **Note:** All community-contributed Pull Requests _must_ be associated with an open issue. If you submit a Pull Request that does not pertain to an open issue, you will be asked to create an issue describing the problem before the Pull Request can be reviewed.
### Discussing the idea
After you open an issue, a dbt maintainer will follow up by commenting on your issue (usually within 1-3 days) to explore your idea further and advise on how to implement the suggested changes. In many cases, community members will chime in with their own thoughts on the problem statement. If you as the issue creator are interested in submitting a Pull Request to address the issue, you should indicate this in the body of the issue. The dbt maintainers are _always_ happy to help contributors with the implementation of fixes and features, so please also indicate if there's anything you're unsure about or could use guidance around in the issue.
After you open an issue, a `dbt` maintainer will follow up by commenting on your issue (usually within 1-3 days) to explore your idea further and advise on how to implement the suggested changes. In many cases, community members will chime in with their own thoughts on the problem statement. If you as the issue creator are interested in submitting a Pull Request to address the issue, you should indicate this in the body of the issue. The `dbt` maintainers are _always_ happy to help contributors with the implementation of fixes and features, so please also indicate if there's anything you're unsure about or could use guidance around in the issue.
### Submitting a change
If an issue is appropriately well scoped and describes a beneficial change to the dbt codebase, then anyone may submit a Pull Request to implement the functionality described in the issue. See the sections below on how to do this.
If an issue is appropriately well scoped and describes a beneficial change to the `dbt` codebase, then anyone may submit a Pull Request to implement the functionality described in the issue. See the sections below on how to do this.
The dbt maintainers will add a `good first issue` label if an issue is suitable for a first-time contributor. This label often means that the required code change is small, limited to one database adapter, or a net-new addition that does not impact existing functionality. You can see the list of currently open issues on the [Contribute](https://github.com/fishtown-analytics/dbt/contribute) page.
The `dbt` maintainers will add a `good first issue` label if an issue is suitable for a first-time contributor. This label often means that the required code change is small, limited to one database adapter, or a net-new addition that does not impact existing functionality. You can see the list of currently open issues on the [Contribute](https://github.com/dbt-labs/dbt/contribute) page.
Here's a good workflow:
- Comment on the open issue, expressing your interest in contributing the required code change
- Outline your planned implementation. If you want help getting started, ask!
- Follow the steps outlined below to develop locally. Once you have opened a PR, one of the dbt maintainers will work with you to review your code.
- Add a test! Tests are crucial for both fixes and new features alike. We want to make sure that code works as intended, and that it avoids any bugs previously encountered. Currently, the best resource for understanding dbt's [unit](test/unit) and [integration](test/integration) tests is the tests themselves. One of the maintainers can help by pointing out relevant examples.
- Follow the steps outlined below to develop locally. Once you have opened a PR, one of the `dbt` maintainers will work with you to review your code.
- Add a test! Tests are crucial for both fixes and new features alike. We want to make sure that code works as intended, and that it avoids any bugs previously encountered. Currently, the best resource for understanding `dbt`'s [unit](test/unit) and [integration](test/integration) tests is the tests themselves. One of the maintainers can help by pointing out relevant examples.
In some cases, the right resolution to an open issue might be tangential to the dbt codebase. The right path forward might be a documentation update or a change that can be made in user-space. In other cases, the issue might describe functionality that the dbt maintainers are unwilling or unable to incorporate into the dbt codebase. When it is determined that an open issue describes functionality that will not translate to a code change in the dbt repository, the issue will be tagged with the `wontfix` label (see below) and closed.
In some cases, the right resolution to an open issue might be tangential to the `dbt` codebase. The right path forward might be a documentation update or a change that can be made in user-space. In other cases, the issue might describe functionality that the `dbt` maintainers are unwilling or unable to incorporate into the `dbt` codebase. When it is determined that an open issue describes functionality that will not translate to a code change in the `dbt` repository, the issue will be tagged with the `wontfix` label (see below) and closed.
### Using issue labels
The dbt maintainers use labels to categorize open issues. Some labels indicate the databases impacted by the issue, while others describe the domain in the dbt codebase germane to the discussion. While most of these labels are self-explanatory (e.g. `snowflake` or `bigquery`), there are others that are worth describing.
The `dbt` maintainers use labels to categorize open issues. Some labels indicate the databases impacted by the issue, while others describe the domain in the `dbt` codebase germane to the discussion. While most of these labels are self-explanatory (e.g. `snowflake` or `bigquery`), there are others that are worth describing.
| tag | description |
| --- | ----------- |
| [triage](https://github.com/fishtown-analytics/dbt/labels/triage) | This is a new issue which has not yet been reviewed by a dbt maintainer. This label is removed when a maintainer reviews and responds to the issue. |
| [bug](https://github.com/fishtown-analytics/dbt/labels/bug) | This issue represents a defect or regression in dbt |
| [enhancement](https://github.com/fishtown-analytics/dbt/labels/enhancement) | This issue represents net-new functionality in dbt |
| [good first issue](https://github.com/fishtown-analytics/dbt/labels/good%20first%20issue) | This issue does not require deep knowledge of the dbt codebase to implement. This issue is appropriate for a first-time contributor. |
| [help wanted](https://github.com/fishtown-analytics/dbt/labels/help%20wanted) / [discussion](https://github.com/fishtown-analytics/dbt/labels/discussion) | Conversation around this issue is ongoing, and there isn't yet a clear path forward. Input from community members is most welcome. |
| [duplicate](https://github.com/fishtown-analytics/dbt/issues/duplicate) | This issue is functionally identical to another open issue. The dbt maintainers will close this issue and encourage community members to focus conversation on the other one. |
| [snoozed](https://github.com/fishtown-analytics/dbt/labels/snoozed) | This issue describes a good idea, but one which will probably not be addressed in a six-month time horizon. The dbt maintainers will revisit these issues periodically and re-prioritize them accordingly. |
| [stale](https://github.com/fishtown-analytics/dbt/labels/stale) | This is an old issue which has not recently been updated. Stale issues will periodically be closed by dbt maintainers, but they can be re-opened if the discussion is restarted. |
| [wontfix](https://github.com/fishtown-analytics/dbt/labels/wontfix) | This issue does not require a code change in the dbt repository, or the maintainers are unwilling/unable to merge a Pull Request which implements the behavior described in the issue. |
| [triage](https://github.com/dbt-labs/dbt/labels/triage) | This is a new issue which has not yet been reviewed by a `dbt` maintainer. This label is removed when a maintainer reviews and responds to the issue. |
| [bug](https://github.com/dbt-labs/dbt/labels/bug) | This issue represents a defect or regression in `dbt` |
| [enhancement](https://github.com/dbt-labs/dbt/labels/enhancement) | This issue represents net-new functionality in `dbt` |
| [good first issue](https://github.com/dbt-labs/dbt/labels/good%20first%20issue) | This issue does not require deep knowledge of the `dbt` codebase to implement. This issue is appropriate for a first-time contributor. |
| [help wanted](https://github.com/dbt-labs/dbt/labels/help%20wanted) / [discussion](https://github.com/dbt-labs/dbt/labels/discussion) | Conversation around this issue is ongoing, and there isn't yet a clear path forward. Input from community members is most welcome. |
| [duplicate](https://github.com/dbt-labs/dbt/issues/duplicate) | This issue is functionally identical to another open issue. The `dbt` maintainers will close this issue and encourage community members to focus conversation on the other one. |
| [snoozed](https://github.com/dbt-labs/dbt/labels/snoozed) | This issue describes a good idea, but one which will probably not be addressed in a six-month time horizon. The `dbt` maintainers will revisit these issues periodically and re-prioritize them accordingly. |
| [stale](https://github.com/dbt-labs/dbt/labels/stale) | This is an old issue which has not recently been updated. Stale issues will periodically be closed by `dbt` maintainers, but they can be re-opened if the discussion is restarted. |
| [wontfix](https://github.com/dbt-labs/dbt/labels/wontfix) | This issue does not require a code change in the `dbt` repository, or the maintainers are unwilling/unable to merge a Pull Request which implements the behavior described in the issue. |
#### Branching Strategy
dbt has three types of branches:
`dbt` has three types of branches:
- **Trunks** are where active development of the next release takes place. There is one trunk, named `develop` at the time of writing, and it is the default branch of the repository.
- **Release Branches** track a specific, not yet complete release of dbt. Each minor version release has a corresponding release branch. For example, the `0.11.x` series of releases has a branch called `0.11.latest`. This allows us to release new patch versions under `0.11` without necessarily needing to pull them into the latest version of dbt.
- **Release Branches** track a specific, not yet complete release of `dbt`. Each minor version release has a corresponding release branch. For example, the `0.11.x` series of releases has a branch called `0.11.latest`. This allows us to release new patch versions under `0.11` without necessarily needing to pull them into the latest version of `dbt`.
- **Feature Branches** track individual features and fixes. On completion, they should be merged into the trunk branch or a specific release branch (see the sketch below).
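As a rough sketch of how this branching scheme plays out with `git` (the branch names `develop` and `0.11.latest` are the examples used above; substitute whichever release series you are targeting):
```sh
# feature work branches off the trunk
git checkout develop
git pull
git checkout -b feature/my-change

# a fix targeting an existing release series branches off that release branch
git checkout 0.11.latest
git pull
git checkout -b fix/my-backport
```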
## Getting the code
### Installing git
You will need `git` in order to download and modify the dbt source code. On macOS, the best way to download git is to just install [Xcode](https://developer.apple.com/support/xcode/).
You will need `git` in order to download and modify the `dbt` source code. On macOS, the best way to download git is to just install [Xcode](https://developer.apple.com/support/xcode/).
### External contributors
If you are not a member of the `fishtown-analytics` GitHub organization, you can contribute to dbt by forking the dbt repository. For a detailed overview on forking, check out the [GitHub docs on forking](https://help.github.com/en/articles/fork-a-repo). In short, you will need to:
If you are not a member of the `dbt-labs` GitHub organization, you can contribute to `dbt` by forking the `dbt` repository. For a detailed overview on forking, check out the [GitHub docs on forking](https://help.github.com/en/articles/fork-a-repo). In short, you will need to:
1. fork the dbt repository
1. fork the `dbt` repository
2. clone your fork locally
3. check out a new branch for your proposed changes
4. push changes to your fork
5. open a pull request against `fishtown-analytics/dbt` from your forked repository
5. open a pull request against `dbt-labs/dbt` from your forked repository
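Putting those steps together, a minimal sketch of the fork workflow might look like the following (`<your-username>` and the branch name are placeholders):
```sh
# 1-2: fork dbt-labs/dbt on GitHub, then clone your fork
git clone git@github.com:<your-username>/dbt.git
cd dbt

# 3: check out a new branch for your proposed changes
git checkout -b my-proposed-change

# 4: commit your work, then push it to your fork
git push -u origin my-proposed-change

# 5: open a pull request against dbt-labs/dbt from your fork on GitHub
```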
### Core contributors
If you are a member of the `fishtown-analytics` GitHub organization, you will have push access to the dbt repo. Rather than
forking dbt to make your changes, just clone the repository, check out a new branch, and push directly to that branch.
If you are a member of the `dbt-labs` GitHub organization, you will have push access to the `dbt` repo. Rather than forking `dbt` to make your changes, just clone the repository, check out a new branch, and push directly to that branch.
## Setting up an environment
There are some tools that will be helpful to you in developing locally. While this is the list relevant for dbt development, many of these tools are used commonly across open-source python projects.
There are some tools that will be helpful to you in developing locally. While this is the list relevant for `dbt` development, many of these tools are used commonly across open-source python projects.
### Tools
A short list of tools used in dbt testing that will be helpful to your understanding:
A short list of tools used in `dbt` testing that will be helpful to your understanding:
- [virtualenv](https://virtualenv.pypa.io/en/stable/) to manage dependencies
- [tox](https://tox.readthedocs.io/en/latest/) to manage virtualenvs across python versions
- [pytest](https://docs.pytest.org/en/latest/) to discover/run tests
- [make](https://users.cs.duke.edu/~ola/courses/programming/Makefiles/Makefiles.html) - but don't worry too much, nobody _really_ understands how make works and our Makefile is super simple
- [flake8](https://gitlab.com/pycqa/flake8) for code linting
- [`tox`](https://tox.readthedocs.io/en/latest/) to manage virtualenvs across python versions. We currently target the latest patch releases for Python 3.6, Python 3.7, Python 3.8, and Python 3.9
- [`pytest`](https://docs.pytest.org/en/latest/) to discover/run tests
- [`make`](https://users.cs.duke.edu/~ola/courses/programming/Makefiles/Makefiles.html) - but don't worry too much, nobody _really_ understands how make works and our Makefile is super simple
- [`flake8`](https://flake8.pycqa.org/en/latest/) for code linting
- [`mypy`](https://mypy.readthedocs.io/en/stable/) for static type checking
- [CircleCI](https://circleci.com/product/) and [Azure Pipelines](https://azure.microsoft.com/en-us/services/devops/pipelines/)
A deep understanding of these tools is not required to effectively contribute to dbt, but we recommend checking out the attached documentation if you're interested in learning more about them.
A deep understanding of these tools is not required to effectively contribute to `dbt`, but we recommend checking out the attached documentation if you're interested in learning more about them.
#### virtual environments
We strongly recommend using virtual environments when developing code in dbt. We recommend creating this virtualenv
in the root of the dbt repository. To create a new virtualenv, run:
We strongly recommend using virtual environments when developing code in `dbt`. We recommend creating this virtualenv
in the root of the `dbt` repository. To create a new virtualenv, run:
```sh
python3 -m venv env
source env/bin/activate
```
@@ -128,23 +127,25 @@ Docker and docker-compose are both used in testing. Specific instructions for yo
For testing, and later in the examples in this document, you may want to have `psql` available so you can poke around in the database and see what happened. We recommend that you use [homebrew](https://brew.sh/) for that on macOS, and your package manager on Linux. You can install any version of the postgres client that you'd like. On macOS, with homebrew setup, you can run:
```sh
brew install postgresql
```
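As a sketch of what poking around might look like once the test database container is running (the connection values come from the `setup-db` step shown later in this diff; adjust them if your setup differs):
```sh
# list schemas in the dockerized Postgres test database
PGPASSWORD=password psql -h localhost -U root -d postgres -c '\dn'
```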
## Running dbt in development
## Running `dbt` in development
### Installation
First make sure that you set up your `virtualenv` as described in section _Setting up an environment_. Next, install dbt (and its dependencies) with:
First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Next, install `dbt` (and its dependencies) with:
```
pip install -r requirements-editable.txt
```
```sh
make dev
# or
pip install -r dev-requirements.txt -r editable-requirements.txt
```
When dbt is installed from source in this way, any changes you make to the dbt source code will be reflected immediately in your next `dbt` run.
When `dbt` is installed this way, any changes you make to the `dbt` source code will be reflected immediately in your next `dbt` run.
### Running dbt
### Running `dbt`
With your virtualenv activated, the `dbt` script should point back to the source code you've cloned on your machine. You can verify this by running `which dbt`. This command should show you a path to an executable in your virtualenv.
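For example, a quick sanity check might look like this (the exact path depends on where you created your virtualenv; `./env` is the location suggested above):
```sh
which dbt        # should print something like /path/to/dbt/env/bin/dbt
dbt --version    # should report the development version from your checkout
```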
@@ -152,76 +153,79 @@ Configure your [profile](https://docs.getdbt.com/docs/configure-your-profile) as
## Testing
Getting the dbt integration tests set up in your local environment will be very helpful as you start to make changes to your local version of dbt. The section that follows outlines some helpful tips for setting up the test environment.
Getting the `dbt` integration tests set up in your local environment will be very helpful as you start to make changes to your local version of `dbt`. The section that follows outlines some helpful tips for setting up the test environment.
### Running tests via Docker
Since `dbt` works with a number of different databases, you will need to supply credentials for one or more of these databases in your test environment. Most organizations don't have access to each of a BigQuery, Redshift, Snowflake, and Postgres database, so it's likely that you will be unable to run every integration test locally. Fortunately, dbt Labs provides a CI environment with access to sandboxed Redshift, Snowflake, BigQuery, and Postgres databases. See the section on [_Submitting a Pull Request_](#submitting-a-pull-request) below for more information on this CI setup.
dbt's unit and integration tests run in Docker. Because dbt works with a number of different databases, you will need to supply credentials for one or more of these databases in your test environment. Most organizations don't have access to each of a BigQuery, Redshift, Snowflake, and Postgres database, so it's likely that you will be unable to run every integration test locally. Fortunately, Fishtown Analytics provides a CI environment with access to sandboxed Redshift, Snowflake, BigQuery, and Postgres databases. See the section on [_Submitting a Pull Request_](#submitting-a-pull-request) below for more information on this CI setup.
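If you would rather not install the test tooling locally, the Makefile shown later in this diff accepts a `USE_DOCKER` flag that runs a target inside the docker-compose `test` container, for example:
```sh
# run unit tests and code checks inside the test container
make test USE_DOCKER=true
```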
### Initial setup
We recommend starting with `dbt`'s Postgres tests. These tests cover most of the functionality in `dbt`, are the fastest to run, and are the easiest to set up. To run the Postgres integration tests, you'll have to do one extra step of setting up the test database:
### Specifying your test credentials
dbt uses test credentials specified in a `test.env` file in the root of the repository. This `test.env` file is git-ignored, but please be _extra_ careful to never check in credentials or other sensitive information when developing against dbt. To create your `test.env` file, copy the provided sample file, then supply your relevant credentials:
```
cp test.env.sample test.env
```
We recommend starting with dbt's Postgres tests. These tests cover most of the functionality in dbt, are the fastest to run, and are the easiest to set up. dbt's test suite runs Postgres in a Docker container, so no setup should be required to run these tests.
If you additionally want to test Snowflake, Bigquery, or Redshift, locally you'll need to get credentials and add them to the `test.env` file. In general, it's most important to have successful unit and Postgres tests. Once you open a PR, dbt will automatically run integration tests for the other three core database adapters. Of course, if you are a BigQuery user, contributing a BigQuery-only feature, it's important to run BigQuery tests as well.
### Test commands
dbt's unit tests and Python linter can be run with:
```
make test-unit
```
To run the Postgres + Python 3.6 integration tests, you'll have to do one extra step of setting up the test database:
```sh
make setup-db
```
or, alternatively:
```sh
docker-compose up -d database
PGHOST=localhost PGUSER=root PGPASSWORD=password PGDATABASE=postgres bash test/setup_db.sh
```
To run a quick test for Python3 integration tests on Postgres, you can run:
```
make test-quick
```
`dbt` uses test credentials specified in a `test.env` file in the root of the repository for non-Postgres databases. This `test.env` file is git-ignored, but please be _extra_ careful to never check in credentials or other sensitive information when developing against `dbt`. To create your `test.env` file, copy the provided sample file, then supply your relevant credentials. This step is only required to use non-Postgres databases.
```
cp test.env.sample test.env
$EDITOR test.env
```
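For illustration only, a `test.env` might contain entries like the following. These variable names are taken from the CI configuration further down in this diff, the values are placeholders, and the exact set you need depends on which adapters you plan to test:
```sh
# test.env -- placeholder values, never commit real credentials
SNOWFLAKE_TEST_ACCOUNT=my_account
SNOWFLAKE_TEST_USER=my_user
SNOWFLAKE_TEST_PASSWORD=my_password
SNOWFLAKE_TEST_WAREHOUSE=my_warehouse
BIGQUERY_SERVICE_ACCOUNT_JSON='{ ...service account JSON... }'
REDSHIFT_TEST_HOST=my-cluster.example.us-east-1.redshift.amazonaws.com
REDSHIFT_TEST_PORT=5439
REDSHIFT_TEST_USER=my_user
REDSHIFT_TEST_PASS=my_password
REDSHIFT_TEST_DBNAME=dbt
```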
To run tests for a specific database, invoke `tox` directly with the required flags:
```
# Run Postgres py36 tests
docker-compose run test tox -e integration-postgres-py36 -- -x
# Run Snowflake py36 tests
docker-compose run test tox -e integration-snowflake-py36 -- -x
# Run BigQuery py36 tests
docker-compose run test tox -e integration-bigquery-py36 -- -x
# Run Redshift py36 tests
docker-compose run test tox -e integration-redshift-py36 -- -x
```
To run a specific test by itself:
```
docker-compose run test tox -e explicit-py36 -- -s -x -m profile_{adapter} {path_to_test_file_or_folder}
```
E.g.
```
docker-compose run test tox -e explicit-py36 -- -s -x -m profile_snowflake test/integration/001_simple_copy_test
```
See the `Makefile` contents for some other examples of ways to run `tox`.
> In general, it's most important to have successful unit and Postgres tests. Once you open a PR, `dbt` will automatically run integration tests for the other three core database adapters. Of course, if you are a BigQuery user, contributing a BigQuery-only feature, it's important to run BigQuery tests as well.
### Test commands
There are a few methods for running tests locally.
#### Makefile
There are multiple targets in the Makefile to run common test suites and code checks, most notably:
```sh
# Runs unit tests with py38 and code checks in parallel.
make test
# Runs postgres integration tests with py38 in "fail fast" mode.
make integration
```
> These make targets assume you have a recent version of [`tox`](https://tox.readthedocs.io/en/latest/) installed locally,
> unless you choose a Docker container to run tests. Run `make help` for more info.
Check out the other targets in the Makefile to see other commonly used test
suites.
#### `tox`
[`tox`](https://tox.readthedocs.io/en/latest/) takes care of managing virtualenvs and installing dependencies in order to run
tests. You can also run tests in parallel: for example, you can run unit tests
for Python 3.6, Python 3.7, Python 3.8, `flake8` checks, and `mypy` checks in
parallel with `tox -p`. Also, you can run unit tests for specific python versions
with `tox -e py36`. The configuration for these tests is located in `tox.ini`.
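For example (the environment names here mirror the ones referenced above and in the Makefile; the authoritative list lives in `tox.ini`):
```sh
# run unit tests for a single python version
tox -e py36

# run unit tests for several python versions plus linting and type checks in parallel
tox -p -e py36,py37,py38,flake8,mypy
```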
#### `pytest`
Finally, you can also run a specific test or group of tests using [`pytest`](https://docs.pytest.org/en/latest/) directly. With a virtualenv
active and dev dependencies installed you can do things like:
```sh
# run specific postgres integration tests
python -m pytest -m profile_postgres test/integration/001_simple_copy_test
# run all unit tests in a file
python -m pytest test/unit/test_graph.py
# run a specific unit test
python -m pytest test/unit/test_graph.py::GraphTest::test__dependency_list
```
> [Here](https://docs.pytest.org/en/reorganize-docs/new-docs/user/commandlineuseful.html)
> is a list of useful command-line options for `pytest` to use while developing.
## Submitting a Pull Request
Fishtown Analytics provides a sandboxed Redshift, Snowflake, and BigQuery database for use in a CI environment. When pull requests are submitted to the `fishtown-analytics/dbt` repo, GitHub will trigger automated tests in CircleCI and Azure Pipelines.
dbt Labs provides a sandboxed Redshift, Snowflake, and BigQuery database for use in a CI environment. When pull requests are submitted to the `dbt-labs/dbt` repo, GitHub will trigger automated tests in CircleCI and Azure Pipelines.
A dbt maintainer will review your PR. They may suggest code revision for style or clarity, or request that you add unit or integration test(s). These are good things! We believe that, with a little bit of help, anyone can contribute high-quality code.
A `dbt` maintainer will review your PR. They may suggest code revision for style or clarity, or request that you add unit or integration test(s). These are good things! We believe that, with a little bit of help, anyone can contribute high-quality code.
Once all tests are passing and your PR has been approved, a dbt maintainer will merge your changes into the active development branch. And that's it! Happy developing :tada:
Once all tests are passing and your PR has been approved, a `dbt` maintainer will merge your changes into the active development branch. And that's it! Happy developing :tada:
@@ -1,8 +1,11 @@
FROM ubuntu:18.04
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
software-properties-common \
&& add-apt-repository ppa:git-core/ppa -y \
&& apt-get dist-upgrade -y \
&& apt-get install -y --no-install-recommends \
netcat \
@@ -46,9 +49,7 @@ RUN curl -LO https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_V
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
RUN pip3 install -U "tox==3.14.4" wheel "six>=1.14.0,<1.15.0" "virtualenv==20.0.3" setuptools
# tox fails if the 'python' interpreter (python2) doesn't have `tox` installed
RUN pip install -U "tox==3.14.4" "six>=1.14.0,<1.15.0" "virtualenv==20.0.3" setuptools
RUN pip3 install -U tox wheel six setuptools
# These args are passed in via docker-compose, which reads then from the .env file.
# On Linux, run `make .env` to create the .env file for the current user.
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Copyright 2021 dbt Labs, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
Makefile
@@ -1,24 +1,81 @@
.PHONY: install test test-unit test-integration
.DEFAULT_GOAL:=help
test: .env
@echo "Full test run starting..."
@time docker-compose run --rm test tox
# Optional flag to run target in a docker container.
# (example `make test USE_DOCKER=true`)
ifeq ($(USE_DOCKER),true)
DOCKER_CMD := docker-compose run --rm test
endif
test-unit: .env
@echo "Unit test run starting..."
@time docker-compose run --rm test tox -e unit-py36,flake8
.PHONY: dev
dev: ## Installs dbt-* packages in develop mode along with development dependencies.
pip install -r dev-requirements.txt -r editable-requirements.txt
test-integration: .env
@echo "Integration test run starting..."
@time docker-compose run --rm test tox -e integration-postgres-py36,integration-redshift-py36,integration-snowflake-py36,integration-bigquery-py36
.PHONY: mypy
mypy: .env ## Runs mypy for static type checking.
$(DOCKER_CMD) tox -e mypy
test-quick: .env
@echo "Integration test run starting, will exit on first failure..."
@time docker-compose run --rm test tox -e integration-postgres-py36 -- -x
.PHONY: flake8
flake8: .env ## Runs flake8 to enforce style guide.
$(DOCKER_CMD) tox -e flake8
.PHONY: lint
lint: .env ## Runs all code checks in parallel.
$(DOCKER_CMD) tox -p -e flake8,mypy
.PHONY: unit
unit: .env ## Runs unit tests with py38.
$(DOCKER_CMD) tox -e py38
.PHONY: test
test: .env ## Runs unit tests with py38 and code checks in parallel.
$(DOCKER_CMD) tox -p -e py38,flake8,mypy
.PHONY: integration
integration: .env integration-postgres ## Alias for integration-postgres.
.PHONY: integration-fail-fast
integration-fail-fast: .env integration-postgres-fail-fast ## Alias for integration-postgres-fail-fast.
.PHONY: integration-postgres
integration-postgres: .env ## Runs postgres integration tests with py38.
$(DOCKER_CMD) tox -e py38-postgres -- -nauto
.PHONY: integration-postgres-fail-fast
integration-postgres-fail-fast: .env ## Runs postgres integration tests with py38 in "fail fast" mode.
$(DOCKER_CMD) tox -e py38-postgres -- -x -nauto
.PHONY: integration-redshift
integration-redshift: .env ## Runs redshift integration tests with py38.
$(DOCKER_CMD) tox -e py38-redshift -- -nauto
.PHONY: integration-redshift-fail-fast
integration-redshift-fail-fast: .env ## Runs redshift integration tests with py38 in "fail fast" mode.
$(DOCKER_CMD) tox -e py38-redshift -- -x -nauto
.PHONY: integration-snowflake
integration-snowflake: .env ## Runs snowflake integration tests with py38.
$(DOCKER_CMD) tox -e py38-snowflake -- -nauto
.PHONY: integration-snowflake-fail-fast
integration-snowflake-fail-fast: .env ## Runs snowflake integration tests with py38 in "fail fast" mode.
$(DOCKER_CMD) tox -e py38-snowflake -- -x -nauto
.PHONY: integration-bigquery
integration-bigquery: .env ## Runs bigquery integration tests with py38.
$(DOCKER_CMD) tox -e py38-bigquery -- -nauto
.PHONY: integration-bigquery-fail-fast
integration-bigquery-fail-fast: .env ## Runs bigquery integration tests with py38 in "fail fast" mode.
$(DOCKER_CMD) tox -e py38-bigquery -- -x -nauto
.PHONY: setup-db
setup-db: ## Setup Postgres database with docker-compose for system testing.
docker-compose up -d database
PGHOST=localhost PGUSER=root PGPASSWORD=password PGDATABASE=postgres bash test/setup_db.sh
# This rule creates a file named .env that is used by docker-compose for passing
# the USER_ID and GROUP_ID arguments to the Docker image.
.env:
.env: ## Setup step for using docker-compose with make targets.
@touch .env
ifneq ($(OS),Windows_NT)
ifneq ($(shell uname -s), Darwin)
@@ -26,9 +83,9 @@ ifneq ($(shell uname -s), Darwin)
@echo GROUP_ID=$(shell id -g) >> .env
endif
endif
@time docker-compose build
clean:
.PHONY: clean
clean: ## Resets development environment.
rm -f .coverage
rm -rf .eggs/
rm -f .env
@@ -42,3 +99,14 @@ clean:
rm -rf target/
find . -type f -name '*.pyc' -delete
find . -type d -name '__pycache__' -depth -delete
.PHONY: help
help: ## Show this help message.
@echo 'usage: make [target] [USE_DOCKER=true]'
@echo
@echo 'targets:'
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
@echo
@echo 'options:'
@echo 'use USE_DOCKER=true to run target in a docker container'
@@ -1,28 +1,18 @@
<p align="center">
<img src="https://raw.githubusercontent.com/fishtown-analytics/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-logo-full.svg" alt="dbt logo" width="500"/>
<img src="https://raw.githubusercontent.com/dbt-labs/dbt/ec7dee39f793aa4f7dd3dae37282cc87664813e4/etc/dbt-logo-full.svg" alt="dbt logo" width="500"/>
</p>
<p align="center">
<a href="https://codeclimate.com/github/fishtown-analytics/dbt">
<img src="https://codeclimate.com/github/fishtown-analytics/dbt/badges/gpa.svg" alt="Code Climate"/>
<a href="https://github.com/dbt-labs/dbt/actions/workflows/main.yml">
<img src="https://github.com/dbt-labs/dbt/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
</a>
<a href="https://circleci.com/gh/fishtown-analytics/dbt/tree/master">
<img src="https://circleci.com/gh/fishtown-analytics/dbt/tree/master.svg?style=svg" alt="CircleCI" />
</a>
<a href="https://ci.appveyor.com/project/DrewBanin/dbt/branch/development">
<img src="https://ci.appveyor.com/api/projects/status/v01rwd3q91jnwp9m/branch/development?svg=true" alt="AppVeyor" />
</a>
<a href="https://community.getdbt.com">
<img src="https://community.getdbt.com/badge.svg" alt="Slack" />
<a href="https://github.com/dbt-labs/dbt/actions/workflows/integration.yml">
<img src="https://github.com/dbt-labs/dbt/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
</a>
</p>
**[dbt](https://www.getdbt.com/)** (data build tool) enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.
![dbt architecture](https://raw.githubusercontent.com/fishtown-analytics/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-arch.png)
dbt can be used to [aggregate pageviews into sessions](https://github.com/fishtown-analytics/snowplow), calculate [ad spend ROI](https://github.com/fishtown-analytics/facebook-ads), or report on [email campaign performance](https://github.com/fishtown-analytics/mailchimp).
![architecture](https://raw.githubusercontent.com/dbt-labs/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-arch.png)
## Understanding dbt
@@ -30,28 +20,22 @@ Analysts using dbt can transform their data by simply writing select statements,
These select statements, or "models", form a dbt project. Models frequently build on top of one another. dbt makes it easy to [manage relationships](https://docs.getdbt.com/docs/ref) between models, and [visualize these relationships](https://docs.getdbt.com/docs/documentation), as well as assure the quality of your transformations through [testing](https://docs.getdbt.com/docs/testing).
![dbt dag](https://raw.githubusercontent.com/fishtown-analytics/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-dag.png)
![dbt dag](https://raw.githubusercontent.com/dbt-labs/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-dag.png)
## Getting started
- [Install dbt](https://docs.getdbt.com/docs/installation)
- Read the [documentation](https://docs.getdbt.com/).
- Productionize your dbt project with [dbt Cloud](https://www.getdbt.com)
- [Install dbt](https://docs.getdbt.com/docs/installation)
- Read the [introduction](https://docs.getdbt.com/docs/introduction/) and [viewpoint](https://docs.getdbt.com/docs/about/viewpoint/)
## Find out more
## Join the dbt Community
- Check out the [Introduction to dbt](https://docs.getdbt.com/docs/introduction/).
- Read the [dbt Viewpoint](https://docs.getdbt.com/docs/about/viewpoint/).
## Join thousands of analysts in the dbt community
- Join the [chat](http://community.getdbt.com/) on Slack.
- Find community posts on [dbt Discourse](https://discourse.getdbt.com).
- Be part of the conversation in the [dbt Community Slack](http://community.getdbt.com/)
- Read more on the [dbt Community Discourse](https://discourse.getdbt.com)
## Reporting bugs and contributing code
- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/fishtown-analytics/dbt/issues/new).
- Want to help us build dbt? Check out the [Contributing Getting Started Guide](https://github.com/fishtown-analytics/dbt/blob/HEAD/CONTRIBUTING.md)
- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/dbt-labs/dbt/issues/new)
- Want to help us build dbt? Check out the [Contributing Guide](https://github.com/dbt-labs/dbt/blob/HEAD/CONTRIBUTING.md)
## Code of Conduct
@@ -1,154 +0,0 @@
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/python
trigger:
branches:
include:
- master
- dev/*
- pr/*
jobs:
- job: UnitTest
pool:
vmImage: 'vs2017-win2016'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e pywin-unit
displayName: Run unit tests
- job: PostgresIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: UnitTest
steps:
- pwsh: |
$serviceName = Get-Service -Name postgresql*
Set-Service -InputObject $serviceName -StartupType Automatic
Start-Service -InputObject $serviceName
& $env:PGBIN\createdb.exe -U postgres dbt
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE root WITH PASSWORD 'password';"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE root WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CREATE, CONNECT ON DATABASE dbt TO root WITH GRANT OPTION;"
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE noaccess WITH PASSWORD 'password' NOSUPERUSER;"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE noaccess WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CONNECT ON DATABASE dbt TO noaccess;"
displayName: Install postgresql and set up database
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e pywin-postgres
displayName: Run integration tests
# These three are all similar except secure environment variables, which MUST be passed along to their tasks,
# but there's probably a better way to do this!
- job: SnowflakeIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: PostgresIntegrationTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e pywin-snowflake
env:
SNOWFLAKE_TEST_ACCOUNT: $(SNOWFLAKE_TEST_ACCOUNT)
SNOWFLAKE_TEST_PASSWORD: $(SNOWFLAKE_TEST_PASSWORD)
SNOWFLAKE_TEST_USER: $(SNOWFLAKE_TEST_USER)
SNOWFLAKE_TEST_WAREHOUSE: $(SNOWFLAKE_TEST_WAREHOUSE)
SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN: $(SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN)
SNOWFLAKE_TEST_OAUTH_CLIENT_ID: $(SNOWFLAKE_TEST_OAUTH_CLIENT_ID)
SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET: $(SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET)
displayName: Run integration tests
- job: BigQueryIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: PostgresIntegrationTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e pywin-bigquery
env:
BIGQUERY_SERVICE_ACCOUNT_JSON: $(BIGQUERY_SERVICE_ACCOUNT_JSON)
displayName: Run integration tests
- job: RedshiftIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: PostgresIntegrationTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e pywin-redshift
env:
REDSHIFT_TEST_DBNAME: $(REDSHIFT_TEST_DBNAME)
REDSHIFT_TEST_PASS: $(REDSHIFT_TEST_PASS)
REDSHIFT_TEST_USER: $(REDSHIFT_TEST_USER)
REDSHIFT_TEST_PORT: $(REDSHIFT_TEST_PORT)
REDSHIFT_TEST_HOST: $(REDSHIFT_TEST_HOST)
displayName: Run integration tests
- job: BuildWheel
pool:
vmImage: 'vs2017-win2016'
dependsOn:
- UnitTest
- PostgresIntegrationTest
- RedshiftIntegrationTest
- SnowflakeIntegrationTest
- BigQueryIntegrationTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip setuptools && python -m pip install -r requirements.txt && python -m pip install -r requirements-dev.txt
displayName: Install dependencies
- task: ShellScript@2
inputs:
scriptPath: scripts/build-wheels.sh
- task: CopyFiles@2
inputs:
contents: 'dist\?(*.whl|*.tar.gz)'
TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
inputs:
pathtoPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: dists
@@ -1,75 +0,0 @@
#!/usr/bin/env python
import json
import yaml
import sys
import argparse
from datetime import datetime, timezone
import dbt.clients.registry as registry
def yaml_type(fname):
with open(fname) as f:
return yaml.load(f)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--project", type=yaml_type, default="dbt_project.yml")
parser.add_argument("--namespace", required=True)
return parser.parse_args()
def get_full_name(args):
return "{}/{}".format(args.namespace, args.project["name"])
def init_project_in_packages(args, packages):
full_name = get_full_name(args)
if full_name not in packages:
packages[full_name] = {
"name": args.project["name"],
"namespace": args.namespace,
"latest": args.project["version"],
"assets": {},
"versions": {},
}
return packages[full_name]
def add_version_to_package(args, project_json):
project_json["versions"][args.project["version"]] = {
"id": "{}/{}".format(get_full_name(args), args.project["version"]),
"name": args.project["name"],
"version": args.project["version"],
"description": "",
"published_at": datetime.now(timezone.utc).astimezone().isoformat(),
"packages": args.project.get("packages") or [],
"works_with": [],
"_source": {
"type": "github",
"url": "",
"readme": "",
},
"downloads": {
"tarball": "",
"format": "tgz",
"sha1": "",
},
}
def main():
args = parse_args()
packages = registry.packages()
project_json = init_project_in_packages(args, packages)
if args.project["version"] in project_json["versions"]:
raise Exception(
"Version {} already in packages JSON".format(args.project["version"]),
file=sys.stderr,
)
add_version_to_package(args, project_json)
print(json.dumps(packages, indent=2))
if __name__ == "__main__":
main()
@@ -1 +1 @@
recursive-include dbt/include *.py *.sql *.yml *.html *.md
recursive-include dbt/include *.py *.sql *.yml *.html *.md .gitkeep .gitignore
@@ -8,10 +8,10 @@ from dbt.exceptions import RuntimeException
@dataclass
class Column:
TYPE_LABELS: ClassVar[Dict[str, str]] = {
"STRING": "TEXT",
"TIMESTAMP": "TIMESTAMP",
"FLOAT": "FLOAT",
"INTEGER": "INT",
'STRING': 'TEXT',
'TIMESTAMP': 'TIMESTAMP',
'FLOAT': 'FLOAT',
'INTEGER': 'INT'
}
column: str
dtype: str
@@ -24,7 +24,7 @@ class Column:
return cls.TYPE_LABELS.get(dtype.upper(), dtype)
@classmethod
def create(cls, name, label_or_dtype: str) -> "Column":
def create(cls, name, label_or_dtype: str) -> 'Column':
column_type = cls.translate_type(label_or_dtype)
return cls(name, column_type)
@@ -41,19 +41,14 @@ class Column:
if self.is_string():
return Column.string_type(self.string_size())
elif self.is_numeric():
return Column.numeric_type(
self.dtype, self.numeric_precision, self.numeric_scale
)
return Column.numeric_type(self.dtype, self.numeric_precision,
self.numeric_scale)
else:
return self.dtype
def is_string(self) -> bool:
return self.dtype.lower() in [
"text",
"character varying",
"character",
"varchar",
]
return self.dtype.lower() in ['text', 'character varying', 'character',
'varchar']
def is_number(self):
return any([self.is_integer(), self.is_numeric(), self.is_float()])
@@ -61,45 +56,33 @@ class Column:
def is_float(self):
return self.dtype.lower() in [
# floats
"real",
"float4",
"float",
"double precision",
"float8",
'real', 'float4', 'float', 'double precision', 'float8'
]
def is_integer(self) -> bool:
return self.dtype.lower() in [
# real types
"smallint",
"integer",
"bigint",
"smallserial",
"serial",
"bigserial",
'smallint', 'integer', 'bigint',
'smallserial', 'serial', 'bigserial',
# aliases
"int2",
"int4",
"int8",
"serial2",
"serial4",
"serial8",
'int2', 'int4', 'int8',
'serial2', 'serial4', 'serial8',
]
def is_numeric(self) -> bool:
return self.dtype.lower() in ["numeric", "decimal"]
return self.dtype.lower() in ['numeric', 'decimal']
def string_size(self) -> int:
if not self.is_string():
raise RuntimeException("Called string_size() on non-string field!")
if self.dtype == "text" or self.char_size is None:
if self.dtype == 'text' or self.char_size is None:
# char_size should never be None. Handle it reasonably just in case
return 256
else:
return int(self.char_size)
def can_expand_to(self, other_column: "Column") -> bool:
def can_expand_to(self, other_column: 'Column') -> bool:
"""returns True if this column can be expanded to the size of the
other column"""
if not self.is_string() or not other_column.is_string():
@@ -127,10 +110,12 @@ class Column:
return "<Column {} ({})>".format(self.name, self.data_type)
@classmethod
def from_description(cls, name: str, raw_data_type: str) -> "Column":
match = re.match(r"([^(]+)(\([^)]+\))?", raw_data_type)
def from_description(cls, name: str, raw_data_type: str) -> 'Column':
match = re.match(r'([^(]+)(\([^)]+\))?', raw_data_type)
if match is None:
raise RuntimeException(f'Could not interpret data type "{raw_data_type}"')
raise RuntimeException(
f'Could not interpret data type "{raw_data_type}"'
)
data_type, size_info = match.groups()
char_size = None
numeric_precision = None
@@ -138,7 +123,7 @@ class Column:
if size_info is not None:
# strip out the parentheses
size_info = size_info[1:-1]
parts = size_info.split(",")
parts = size_info.split(',')
if len(parts) == 1:
try:
char_size = int(parts[0])
@@ -163,4 +148,6 @@ class Column:
f'could not convert "{parts[1]}" to an integer'
)
return cls(name, data_type, char_size, numeric_precision, numeric_scale)
return cls(
name, data_type, char_size, numeric_precision, numeric_scale
)
@@ -1,21 +1,18 @@
import abc
import os
# multiprocessing.RLock is a function returning this type
from multiprocessing.synchronize import RLock
from threading import get_ident
from typing import Dict, Tuple, Hashable, Optional, ContextManager, List, Union
from typing import (
Dict, Tuple, Hashable, Optional, ContextManager, List, Union
)
import agate
import dbt.exceptions
from dbt.contracts.connection import (
Connection,
Identifier,
ConnectionState,
AdapterRequiredConfig,
LazyHandle,
AdapterResponse,
Connection, Identifier, ConnectionState,
AdapterRequiredConfig, LazyHandle, AdapterResponse
)
from dbt.contracts.graph.manifest import Manifest
from dbt.adapters.base.query_headers import (
@@ -38,7 +35,6 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
You must also set the 'TYPE' class attribute with a class-unique constant
string.
"""
TYPE: str = NotImplemented
def __init__(self, profile: AdapterRequiredConfig):
@@ -69,7 +65,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
key = self.get_thread_identifier()
if key in self.thread_connections:
raise dbt.exceptions.InternalException(
"In set_thread_connection, existing connection exists for {}"
'In set_thread_connection, existing connection exists for {}'
)
self.thread_connections[key] = conn
@@ -109,19 +105,18 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
underlying database.
"""
raise dbt.exceptions.NotImplementedException(
"`exception_handler` is not implemented for this adapter!"
)
'`exception_handler` is not implemented for this adapter!')
def set_connection_name(self, name: Optional[str] = None) -> Connection:
conn_name: str
if name is None:
# if a name isn't specified, we'll re-use a single handle
# named 'master'
conn_name = "master"
conn_name = 'master'
else:
if not isinstance(name, str):
raise dbt.exceptions.CompilerException(
f"For connection name, got {name} - not a string!"
f'For connection name, got {name} - not a string!'
)
assert isinstance(name, str)
conn_name = name
@@ -134,20 +129,20 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
state=ConnectionState.INIT,
transaction_open=False,
handle=None,
credentials=self.profile.credentials,
credentials=self.profile.credentials
)
self.set_thread_connection(conn)
if conn.name == conn_name and conn.state == "open":
if conn.name == conn_name and conn.state == 'open':
return conn
logger.debug('Acquiring new {} connection "{}".'.format(self.TYPE, conn_name))
logger.debug(
'Acquiring new {} connection "{}".'.format(self.TYPE, conn_name))
if conn.state == "open":
if conn.state == 'open':
logger.debug(
"Re-using an available connection from the pool (formerly {}).".format(
conn.name
)
'Re-using an available connection from the pool (formerly {}).'
.format(conn.name)
)
else:
conn.handle = LazyHandle(self.open)
@@ -159,7 +154,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
def cancel_open(self) -> Optional[List[str]]:
"""Cancel all open connections on the adapter. (passable)"""
raise dbt.exceptions.NotImplementedException(
"`cancel_open` is not implemented for this adapter!"
'`cancel_open` is not implemented for this adapter!'
)
@abc.abstractclassmethod
@@ -173,7 +168,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
connection should not be in either in_use or available.
"""
raise dbt.exceptions.NotImplementedException(
"`open` is not implemented for this adapter!"
'`open` is not implemented for this adapter!'
)
def release(self) -> None:
@@ -194,14 +189,12 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
def cleanup_all(self) -> None:
with self.lock:
for connection in self.thread_connections.values():
if connection.state not in {"closed", "init"}:
logger.debug(
"Connection '{}' was left open.".format(connection.name)
)
if connection.state not in {'closed', 'init'}:
logger.debug("Connection '{}' was left open."
.format(connection.name))
else:
logger.debug(
"Connection '{}' was properly closed.".format(connection.name)
)
logger.debug("Connection '{}' was properly closed."
.format(connection.name))
self.close(connection)
# garbage collect these connections
@@ -211,14 +204,14 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
def begin(self) -> None:
"""Begin a transaction. (passable)"""
raise dbt.exceptions.NotImplementedException(
"`begin` is not implemented for this adapter!"
'`begin` is not implemented for this adapter!'
)
@abc.abstractmethod
def commit(self) -> None:
"""Commit a transaction. (passable)"""
raise dbt.exceptions.NotImplementedException(
"`commit` is not implemented for this adapter!"
'`commit` is not implemented for this adapter!'
)
@classmethod
@@ -227,17 +220,20 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
try:
connection.handle.rollback()
except Exception:
logger.debug("Failed to rollback {}".format(connection.name), exc_info=True)
logger.debug(
'Failed to rollback {}'.format(connection.name),
exc_info=True
)
@classmethod
def _close_handle(cls, connection: Connection) -> None:
"""Perform the actual close operation."""
# On windows, sometimes connection handles don't have a close() attr.
if hasattr(connection.handle, "close"):
logger.debug(f"On {connection.name}: Close")
if hasattr(connection.handle, 'close'):
logger.debug(f'On {connection.name}: Close')
connection.handle.close()
else:
logger.debug(f"On {connection.name}: No close available on handle")
logger.debug(f'On {connection.name}: No close available on handle')
@classmethod
def _rollback(cls, connection: Connection) -> None:
@@ -245,16 +241,16 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f"In _rollback, got {connection} - not a Connection!"
f'In _rollback, got {connection} - not a Connection!'
)
if connection.transaction_open is False:
raise dbt.exceptions.InternalException(
f"Tried to rollback transaction on connection "
f'Tried to rollback transaction on connection '
f'"{connection.name}", but it does not have one open!'
)
logger.debug(f"On {connection.name}: ROLLBACK")
logger.debug(f'On {connection.name}: ROLLBACK')
cls._rollback_handle(connection)
connection.transaction_open = False
@@ -264,7 +260,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f"In close, got {connection} - not a Connection!"
f'In close, got {connection} - not a Connection!'
)
# if the connection is in closed or init, there's nothing to do
@@ -272,7 +268,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
return connection
if connection.transaction_open and connection.handle:
logger.debug("On {}: ROLLBACK".format(connection.name))
logger.debug('On {}: ROLLBACK'.format(connection.name))
cls._rollback_handle(connection)
connection.transaction_open = False
@@ -306,5 +302,5 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
:rtype: Tuple[Union[str, AdapterResponse], agate.Table]
"""
raise dbt.exceptions.NotImplementedException(
"`execute` is not implemented for this adapter!"
'`execute` is not implemented for this adapter!'
)
@@ -4,31 +4,17 @@ from contextlib import contextmanager
from datetime import datetime
from itertools import chain
from typing import (
Optional,
Tuple,
Callable,
Iterable,
Type,
Dict,
Any,
List,
Mapping,
Iterator,
Union,
Set,
Optional, Tuple, Callable, Iterable, Type, Dict, Any, List, Mapping,
Iterator, Union, Set
)
import agate
import pytz
from dbt.exceptions import (
raise_database_error,
raise_compiler_error,
invalid_type_error,
raise_database_error, raise_compiler_error, invalid_type_error,
get_relation_returned_multiple_results,
InternalException,
NotImplementedException,
RuntimeException,
InternalException, NotImplementedException, RuntimeException,
)
from dbt import flags
@@ -39,21 +25,19 @@ from dbt.adapters.protocol import (
)
from dbt.clients.agate_helper import empty_table, merge_tables, table_from_rows
from dbt.clients.jinja import MacroGenerator
from dbt.contracts.graph.compiled import CompileResultNode, CompiledSeedNode
from dbt.contracts.graph.compiled import (
CompileResultNode, CompiledSeedNode
)
from dbt.contracts.graph.manifest import Manifest, MacroManifest
from dbt.contracts.graph.parsed import ParsedSeedNode
from dbt.exceptions import warn_or_error
from dbt.node_types import NodeType
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.utils import filter_null_values, executor
from dbt.adapters.base.connections import Connection, AdapterResponse
from dbt.adapters.base.meta import AdapterMeta, available
from dbt.adapters.base.relation import (
ComponentName,
BaseRelation,
InformationSchema,
SchemaSearchMap,
ComponentName, BaseRelation, InformationSchema, SchemaSearchMap
)
from dbt.adapters.base import Column as BaseColumn
from dbt.adapters.cache import RelationsCache
@@ -62,14 +46,15 @@ from dbt.adapters.cache import RelationsCache
SeedModel = Union[ParsedSeedNode, CompiledSeedNode]
GET_CATALOG_MACRO_NAME = "get_catalog"
FRESHNESS_MACRO_NAME = "collect_freshness"
GET_CATALOG_MACRO_NAME = 'get_catalog'
FRESHNESS_MACRO_NAME = 'collect_freshness'
def _expect_row_value(key: str, row: agate.Row):
if key not in row.keys():
raise InternalException(
'Got a row without "{}" column, columns: {}'.format(key, row.keys())
'Got a row without "{}" column, columns: {}'
.format(key, row.keys())
)
return row[key]
@@ -78,37 +63,40 @@ def _catalog_filter_schemas(manifest: Manifest) -> Callable[[agate.Row], bool]:
"""Return a function that takes a row and decides if the row should be
included in the catalog output.
"""
schemas = frozenset((d.lower(), s.lower()) for d, s in manifest.get_used_schemas())
schemas = frozenset((d.lower(), s.lower())
for d, s in manifest.get_used_schemas())
def test(row: agate.Row) -> bool:
table_database = _expect_row_value("table_database", row)
table_schema = _expect_row_value("table_schema", row)
table_database = _expect_row_value('table_database', row)
table_schema = _expect_row_value('table_schema', row)
# the schema may be present but None, which is not an error and should
# be filtered out
if table_schema is None:
return False
return (table_database.lower(), table_schema.lower()) in schemas
return test
def _utc(dt: Optional[datetime], source: BaseRelation, field_name: str) -> datetime:
def _utc(
dt: Optional[datetime], source: BaseRelation, field_name: str
) -> datetime:
"""If dt has a timezone, return a new datetime that's in UTC. Otherwise,
assume the datetime is already for UTC and add the timezone.
"""
if dt is None:
raise raise_database_error(
"Expected a non-null value when querying field '{}' of table "
" {} but received value 'null' instead".format(field_name, source)
)
" {} but received value 'null' instead".format(
field_name,
source))
elif not hasattr(dt, "tzinfo"):
elif not hasattr(dt, 'tzinfo'):
raise raise_database_error(
"Expected a timestamp value when querying field '{}' of table "
"{} but received value of type '{}' instead".format(
field_name, source, type(dt).__name__
)
)
field_name,
source,
type(dt).__name__))
elif dt.tzinfo:
return dt.astimezone(pytz.UTC)
@@ -118,7 +106,7 @@ def _utc(dt: Optional[datetime], source: BaseRelation, field_name: str) -> datet
def _relation_name(rel: Optional[BaseRelation]) -> str:
if rel is None:
return "null relation"
return 'null relation'
else:
return str(rel)
@@ -159,7 +147,6 @@ class BaseAdapter(metaclass=AdapterMeta):
Macros:
- get_catalog
"""
Relation: Type[BaseRelation] = BaseRelation
Column: Type[BaseColumn] = BaseColumn
ConnectionManager: Type[ConnectionManagerProtocol]
@@ -193,12 +180,12 @@ class BaseAdapter(metaclass=AdapterMeta):
self.connections.commit_if_has_connection()
def debug_query(self) -> None:
self.execute("select 1 as id")
self.execute('select 1 as id')
def nice_connection_name(self) -> str:
conn = self.connections.get_if_exists()
if conn is None or conn.name is None:
return "<None>"
return '<None>'
return conn.name
@contextmanager
@@ -216,11 +203,13 @@ class BaseAdapter(metaclass=AdapterMeta):
self.connections.query_header.reset()
@contextmanager
def connection_for(self, node: CompileResultNode) -> Iterator[None]:
def connection_for(
self, node: CompileResultNode
) -> Iterator[None]:
with self.connection_named(node.unique_id, node):
yield
@available.parse(lambda *a, **k: ("", empty_table()))
@available.parse(lambda *a, **k: ('', empty_table()))
def execute(
self, sql: str, auto_begin: bool = False, fetch: bool = False
) -> Tuple[Union[str, AdapterResponse], agate.Table]:
@@ -234,10 +223,16 @@ class BaseAdapter(metaclass=AdapterMeta):
:return: A tuple of the status and the results (empty if fetch=False).
:rtype: Tuple[Union[str, AdapterResponse], agate.Table]
"""
return self.connections.execute(sql=sql, auto_begin=auto_begin, fetch=fetch)
return self.connections.execute(
sql=sql,
auto_begin=auto_begin,
fetch=fetch
)
@available.parse(lambda *a, **k: ("", empty_table()))
def get_partitions_metadata(self, table: str) -> Tuple[agate.Table]:
@available.parse(lambda *a, **k: ('', empty_table()))
def get_partitions_metadata(
self, table: str
) -> Tuple[agate.Table]:
"""Obtain partitions metadata for a BigQuery partitioned table.
:param str table_id: a partitioned table id, in standard SQL format.
@@ -245,7 +240,9 @@ class BaseAdapter(metaclass=AdapterMeta):
https://cloud.google.com/bigquery/docs/creating-partitioned-tables#getting_partition_metadata_using_meta_tables.
:rtype: agate.Table
"""
return self.connections.get_partitions_metadata(table=table)
return self.connections.get_partitions_metadata(
table=table
)
###
# Methods that should never be overridden
@@ -275,9 +272,8 @@ class BaseAdapter(metaclass=AdapterMeta):
def load_macro_manifest(self) -> MacroManifest:
if self._macro_manifest_lazy is None:
# avoid a circular import
from dbt.parser.manifest import load_macro_manifest
manifest = load_macro_manifest(
from dbt.parser.manifest import ManifestLoader
manifest = ManifestLoader.load_macros(
self.config, self.connections.set_query_header
)
self._macro_manifest_lazy = manifest
@@ -297,9 +293,8 @@ class BaseAdapter(metaclass=AdapterMeta):
return False
elif (database, schema) not in self.cache:
logger.debug(
'On "{}": cache miss for schema "{}.{}", this is inefficient'.format(
self.nice_connection_name(), database, schema
)
'On "{}": cache miss for schema "{}.{}", this is inefficient'
.format(self.nice_connection_name(), database, schema)
)
return False
else:
@@ -314,8 +309,7 @@ class BaseAdapter(metaclass=AdapterMeta):
self.Relation.create_from(self.config, node).without_identifier()
for node in manifest.nodes.values()
if (
node.resource_type in NodeType.executable()
and not node.is_ephemeral_model
node.is_relational and not node.is_ephemeral_model
)
}
@@ -355,9 +349,9 @@ class BaseAdapter(metaclass=AdapterMeta):
for cache_schema in cache_schemas:
fut = tpe.submit_connected(
self,
f"list_{cache_schema.database}_{cache_schema.schema}",
f'list_{cache_schema.database}_{cache_schema.schema}',
self.list_relations_without_caching,
cache_schema,
cache_schema
)
futures.append(fut)
@@ -375,7 +369,9 @@ class BaseAdapter(metaclass=AdapterMeta):
cache_update.add((relation.database, relation.schema))
self.cache.update_schemas(cache_update)
def set_relations_cache(self, manifest: Manifest, clear: bool = False) -> None:
def set_relations_cache(
self, manifest: Manifest, clear: bool = False
) -> None:
"""Run a query that gets a populated cache of the relations in the
database and set the cache on this adapter.
"""
@@ -393,12 +389,12 @@ class BaseAdapter(metaclass=AdapterMeta):
if relation is None:
name = self.nice_connection_name()
raise_compiler_error(
"Attempted to cache a null relation for {}".format(name)
'Attempted to cache a null relation for {}'.format(name)
)
if flags.USE_CACHE:
self.cache.add(relation)
# so jinja doesn't render things
return ""
return ''
@available
def cache_dropped(self, relation: Optional[BaseRelation]) -> str:
@@ -408,11 +404,11 @@ class BaseAdapter(metaclass=AdapterMeta):
if relation is None:
name = self.nice_connection_name()
raise_compiler_error(
"Attempted to drop a null relation for {}".format(name)
'Attempted to drop a null relation for {}'.format(name)
)
if flags.USE_CACHE:
self.cache.drop(relation)
return ""
return ''
@available
def cache_renamed(
@@ -428,12 +424,13 @@ class BaseAdapter(metaclass=AdapterMeta):
src_name = _relation_name(from_relation)
dst_name = _relation_name(to_relation)
raise_compiler_error(
"Attempted to rename {} to {} for {}".format(src_name, dst_name, name)
'Attempted to rename {} to {} for {}'
.format(src_name, dst_name, name)
)
if flags.USE_CACHE:
self.cache.rename(from_relation, to_relation)
return ""
return ''
###
# Abstract methods for database-specific values, attributes, and types
@@ -442,13 +439,12 @@ class BaseAdapter(metaclass=AdapterMeta):
def date_function(cls) -> str:
"""Get the date function used by this adapter's database."""
raise NotImplementedException(
"`date_function` is not implemented for this adapter!"
)
'`date_function` is not implemented for this adapter!')
@abc.abstractclassmethod
def is_cancelable(cls) -> bool:
raise NotImplementedException(
"`is_cancelable` is not implemented for this adapter!"
'`is_cancelable` is not implemented for this adapter!'
)
###
@@ -458,7 +454,7 @@ class BaseAdapter(metaclass=AdapterMeta):
def list_schemas(self, database: str) -> List[str]:
"""Get a list of existing schemas in database"""
raise NotImplementedException(
"`list_schemas` is not implemented for this adapter!"
'`list_schemas` is not implemented for this adapter!'
)
@available.parse(lambda *a, **k: False)
@@ -469,7 +465,10 @@ class BaseAdapter(metaclass=AdapterMeta):
and adapters should implement it if there is an optimized path (and
there probably is)
"""
search = (s.lower() for s in self.list_schemas(database=database))
search = (
s.lower() for s in
self.list_schemas(database=database)
)
return schema.lower() in search
###
@@ -483,7 +482,7 @@ class BaseAdapter(metaclass=AdapterMeta):
*Implementors must call self.cache.drop() to preserve cache state!*
"""
raise NotImplementedException(
"`drop_relation` is not implemented for this adapter!"
'`drop_relation` is not implemented for this adapter!'
)
@abc.abstractmethod
@@ -491,7 +490,7 @@ class BaseAdapter(metaclass=AdapterMeta):
def truncate_relation(self, relation: BaseRelation) -> None:
"""Truncate the given relation."""
raise NotImplementedException(
"`truncate_relation` is not implemented for this adapter!"
'`truncate_relation` is not implemented for this adapter!'
)
@abc.abstractmethod
@@ -504,30 +503,36 @@ class BaseAdapter(metaclass=AdapterMeta):
Implementors must call self.cache.rename() to preserve cache state.
"""
raise NotImplementedException(
"`rename_relation` is not implemented for this adapter!"
'`rename_relation` is not implemented for this adapter!'
)
@abc.abstractmethod
@available.parse_list
def get_columns_in_relation(self, relation: BaseRelation) -> List[BaseColumn]:
"""Get a list of the columns in the given Relation."""
def get_columns_in_relation(
self, relation: BaseRelation
) -> List[BaseColumn]:
"""Get a list of the columns in the given Relation. """
raise NotImplementedException(
"`get_columns_in_relation` is not implemented for this adapter!"
'`get_columns_in_relation` is not implemented for this adapter!'
)
@available.deprecated("get_columns_in_relation", lambda *a, **k: [])
def get_columns_in_table(self, schema: str, identifier: str) -> List[BaseColumn]:
@available.deprecated('get_columns_in_relation', lambda *a, **k: [])
def get_columns_in_table(
self, schema: str, identifier: str
) -> List[BaseColumn]:
"""DEPRECATED: Get a list of the columns in the given table."""
relation = self.Relation.create(
database=self.config.credentials.database,
schema=schema,
identifier=identifier,
quote_policy=self.config.quoting,
quote_policy=self.config.quoting
)
return self.get_columns_in_relation(relation)
@abc.abstractmethod
def expand_column_types(self, goal: BaseRelation, current: BaseRelation) -> None:
def expand_column_types(
self, goal: BaseRelation, current: BaseRelation
) -> None:
"""Expand the current table's types to match the goal table. (passable)
:param self.Relation goal: A relation that currently exists in the
@@ -536,7 +541,7 @@ class BaseAdapter(metaclass=AdapterMeta):
database with columns of unspecified types.
"""
raise NotImplementedException(
"`expand_target_column_types` is not implemented for this adapter!"
'`expand_target_column_types` is not implemented for this adapter!'
)
@abc.abstractmethod
@@ -553,7 +558,8 @@ class BaseAdapter(metaclass=AdapterMeta):
:rtype: List[self.Relation]
"""
raise NotImplementedException(
"`list_relations_without_caching` is not implemented for this " "adapter!"
'`list_relations_without_caching` is not implemented for this '
'adapter!'
)
###
@@ -568,33 +574,32 @@ class BaseAdapter(metaclass=AdapterMeta):
"""
if not isinstance(from_relation, self.Relation):
invalid_type_error(
method_name="get_missing_columns",
arg_name="from_relation",
method_name='get_missing_columns',
arg_name='from_relation',
got_value=from_relation,
expected_type=self.Relation,
)
expected_type=self.Relation)
if not isinstance(to_relation, self.Relation):
invalid_type_error(
method_name="get_missing_columns",
arg_name="to_relation",
method_name='get_missing_columns',
arg_name='to_relation',
got_value=to_relation,
expected_type=self.Relation,
)
expected_type=self.Relation)
from_columns = {
col.name: col for col in self.get_columns_in_relation(from_relation)
col.name: col for col in
self.get_columns_in_relation(from_relation)
}
to_columns = {
col.name: col for col in self.get_columns_in_relation(to_relation)
col.name: col for col in
self.get_columns_in_relation(to_relation)
}
missing_columns = set(from_columns.keys()) - set(to_columns.keys())
return [
col
for (col_name, col) in from_columns.items()
col for (col_name, col) in from_columns.items()
if col_name in missing_columns
]
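Both sides of the hunk above compute the same thing; only the line wrapping differs. As a reading aid, a minimal stand-alone restatement of `get_missing_columns` with plain dicts mapping column name to type (the real method works on Relation objects and returns Column instances):

def missing_columns(from_columns, to_columns):
    # names present on the source relation but absent from the target,
    # preserving the source relation's column order
    missing = set(from_columns) - set(to_columns)
    return [name for name in from_columns if name in missing]

print(missing_columns(
    {"id": "integer", "email": "text", "created_at": "timestamp"},
    {"id": "integer", "email": "text"},
))
# ['created_at']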
@@ -609,19 +614,18 @@ class BaseAdapter(metaclass=AdapterMeta):
"""
if not isinstance(relation, self.Relation):
invalid_type_error(
method_name="valid_snapshot_target",
arg_name="relation",
method_name='valid_snapshot_target',
arg_name='relation',
got_value=relation,
expected_type=self.Relation,
)
expected_type=self.Relation)
columns = self.get_columns_in_relation(relation)
names = set(c.name.lower() for c in columns)
expanded_keys = ("scd_id", "valid_from", "valid_to")
expanded_keys = ('scd_id', 'valid_from', 'valid_to')
extra = []
missing = []
for legacy in expanded_keys:
desired = "dbt_" + legacy
desired = 'dbt_' + legacy
if desired not in names:
missing.append(desired)
if legacy in names:
@@ -631,13 +635,13 @@ class BaseAdapter(metaclass=AdapterMeta):
if extra:
msg = (
'Snapshot target has ("{}") but not ("{}") - is it an '
"unmigrated previous version archive?".format(
'", "'.join(extra), '", "'.join(missing)
)
'unmigrated previous version archive?'
.format('", "'.join(extra), '", "'.join(missing))
)
else:
msg = 'Snapshot target is not a snapshot table (missing "{}")'.format(
'", "'.join(missing)
msg = (
'Snapshot target is not a snapshot table (missing "{}")'
.format('", "'.join(missing))
)
raise_compiler_error(msg)
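The `valid_snapshot_target` hunks above are a pure reflow; the check itself is unchanged on both sides. A self-contained sketch of that check, with a plain function and return values standing in for dbt's relation objects and `raise_compiler_error`:

def check_snapshot_columns(column_names):
    # the snapshot target must carry the dbt_-prefixed bookkeeping columns;
    # the bare legacy names suggest an older, unmigrated archive table
    names = {name.lower() for name in column_names}
    expanded_keys = ("scd_id", "valid_from", "valid_to")
    extra, missing = [], []
    for legacy in expanded_keys:
        desired = "dbt_" + legacy
        if desired not in names:
            missing.append(desired)
            if legacy in names:
                extra.append(legacy)
    if not missing:
        return "ok"
    if extra:
        return f"unmigrated archive? found {extra}, missing {missing}"
    return f"not a snapshot table, missing {missing}"

print(check_snapshot_columns(["id", "scd_id", "valid_from", "valid_to"]))
# unmigrated archive? found ['scd_id', 'valid_from', 'valid_to'], missing ['dbt_scd_id', 'dbt_valid_from', 'dbt_valid_to']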
@@ -647,19 +651,17 @@ class BaseAdapter(metaclass=AdapterMeta):
) -> None:
if not isinstance(from_relation, self.Relation):
invalid_type_error(
method_name="expand_target_column_types",
arg_name="from_relation",
method_name='expand_target_column_types',
arg_name='from_relation',
got_value=from_relation,
expected_type=self.Relation,
)
expected_type=self.Relation)
if not isinstance(to_relation, self.Relation):
invalid_type_error(
method_name="expand_target_column_types",
arg_name="to_relation",
method_name='expand_target_column_types',
arg_name='to_relation',
got_value=to_relation,
expected_type=self.Relation,
)
expected_type=self.Relation)
self.expand_column_types(from_relation, to_relation)
@@ -672,41 +674,38 @@ class BaseAdapter(metaclass=AdapterMeta):
schema_relation = self.Relation.create(
database=database,
schema=schema,
identifier="",
quote_policy=self.config.quoting,
identifier='',
quote_policy=self.config.quoting
).without_identifier()
# we can't build the relations cache because we don't have a
# manifest so we can't run any operations.
relations = self.list_relations_without_caching(schema_relation)
logger.debug(
"with database={}, schema={}, relations={}".format(
database, schema, relations
)
relations = self.list_relations_without_caching(
schema_relation
)
logger.debug('with database={}, schema={}, relations={}'
.format(database, schema, relations))
return relations
def _make_match_kwargs(
self, database: str, schema: str, identifier: str
) -> Dict[str, str]:
quoting = self.config.quoting
if identifier is not None and quoting["identifier"] is False:
if identifier is not None and quoting['identifier'] is False:
identifier = identifier.lower()
if schema is not None and quoting["schema"] is False:
if schema is not None and quoting['schema'] is False:
schema = schema.lower()
if database is not None and quoting["database"] is False:
if database is not None and quoting['database'] is False:
database = database.lower()
return filter_null_values(
{
"database": database,
"identifier": identifier,
"schema": schema,
}
)
return filter_null_values({
'database': database,
'identifier': identifier,
'schema': schema,
})
def _make_match(
self,
@@ -732,22 +731,25 @@ class BaseAdapter(metaclass=AdapterMeta):
) -> Optional[BaseRelation]:
relations_list = self.list_relations(database, schema)
matches = self._make_match(relations_list, database, schema, identifier)
matches = self._make_match(relations_list, database, schema,
identifier)
if len(matches) > 1:
kwargs = {
"identifier": identifier,
"schema": schema,
"database": database,
'identifier': identifier,
'schema': schema,
'database': database,
}
get_relation_returned_multiple_results(kwargs, matches)
get_relation_returned_multiple_results(
kwargs, matches
)
elif matches:
return matches[0]
return None
@available.deprecated("get_relation", lambda *a, **k: False)
@available.deprecated('get_relation', lambda *a, **k: False)
def already_exists(self, schema: str, name: str) -> bool:
"""DEPRECATED: Return if a model already exists in the database"""
database = self.config.credentials.database
@@ -763,7 +765,7 @@ class BaseAdapter(metaclass=AdapterMeta):
def create_schema(self, relation: BaseRelation):
"""Create the given schema if it does not exist."""
raise NotImplementedException(
"`create_schema` is not implemented for this adapter!"
'`create_schema` is not implemented for this adapter!'
)
@abc.abstractmethod
@@ -771,14 +773,16 @@ class BaseAdapter(metaclass=AdapterMeta):
def drop_schema(self, relation: BaseRelation):
"""Drop the given schema (and everything in it) if it exists."""
raise NotImplementedException(
"`drop_schema` is not implemented for this adapter!"
'`drop_schema` is not implemented for this adapter!'
)
@available
@abc.abstractclassmethod
def quote(cls, identifier: str) -> str:
"""Quote the given identifier, as appropriate for the database."""
raise NotImplementedException("`quote` is not implemented for this adapter!")
raise NotImplementedException(
'`quote` is not implemented for this adapter!'
)
@available
def quote_as_configured(self, identifier: str, quote_key: str) -> str:
@@ -800,17 +804,19 @@ class BaseAdapter(metaclass=AdapterMeta):
return identifier
@available
def quote_seed_column(self, column: str, quote_config: Optional[bool]) -> str:
def quote_seed_column(
self, column: str, quote_config: Optional[bool]
) -> str:
# this is the default for now
quote_columns: bool = False
if isinstance(quote_config, bool):
quote_columns = quote_config
elif quote_config is None:
deprecations.warn("column-quoting-unset")
deprecations.warn('column-quoting-unset')
else:
raise_compiler_error(
f'The seed configuration value of "quote_columns" has an '
f"invalid type {type(quote_config)}"
f'invalid type {type(quote_config)}'
)
if quote_columns:
@@ -823,7 +829,9 @@ class BaseAdapter(metaclass=AdapterMeta):
# converting agate types into their sql equivalents.
###
@abc.abstractclassmethod
def convert_text_type(cls, agate_table: agate.Table, col_idx: int) -> str:
def convert_text_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
"""Return the type in the database that best maps to the agate.Text
type for the given agate table and column index.
@@ -832,11 +840,12 @@ class BaseAdapter(metaclass=AdapterMeta):
:return: The name of the type in the database
"""
raise NotImplementedException(
"`convert_text_type` is not implemented for this adapter!"
)
'`convert_text_type` is not implemented for this adapter!')
@abc.abstractclassmethod
def convert_number_type(cls, agate_table: agate.Table, col_idx: int) -> str:
def convert_number_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
"""Return the type in the database that best maps to the agate.Number
type for the given agate table and column index.
@@ -845,11 +854,12 @@ class BaseAdapter(metaclass=AdapterMeta):
:return: The name of the type in the database
"""
raise NotImplementedException(
"`convert_number_type` is not implemented for this adapter!"
)
'`convert_number_type` is not implemented for this adapter!')
@abc.abstractclassmethod
def convert_boolean_type(cls, agate_table: agate.Table, col_idx: int) -> str:
def convert_boolean_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
"""Return the type in the database that best maps to the agate.Boolean
type for the given agate table and column index.
@@ -858,11 +868,12 @@ class BaseAdapter(metaclass=AdapterMeta):
:return: The name of the type in the database
"""
raise NotImplementedException(
"`convert_boolean_type` is not implemented for this adapter!"
)
'`convert_boolean_type` is not implemented for this adapter!')
@abc.abstractclassmethod
def convert_datetime_type(cls, agate_table: agate.Table, col_idx: int) -> str:
def convert_datetime_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
"""Return the type in the database that best maps to the agate.DateTime
type for the given agate table and column index.
@@ -871,8 +882,7 @@ class BaseAdapter(metaclass=AdapterMeta):
:return: The name of the type in the database
"""
raise NotImplementedException(
"`convert_datetime_type` is not implemented for this adapter!"
)
'`convert_datetime_type` is not implemented for this adapter!')
@abc.abstractclassmethod
def convert_date_type(cls, agate_table: agate.Table, col_idx: int) -> str:
@@ -884,8 +894,7 @@ class BaseAdapter(metaclass=AdapterMeta):
:return: The name of the type in the database
"""
raise NotImplementedException(
"`convert_date_type` is not implemented for this adapter!"
)
'`convert_date_type` is not implemented for this adapter!')
@abc.abstractclassmethod
def convert_time_type(cls, agate_table: agate.Table, col_idx: int) -> str:
@@ -897,12 +906,13 @@ class BaseAdapter(metaclass=AdapterMeta):
:return: The name of the type in the database
"""
raise NotImplementedException(
"`convert_time_type` is not implemented for this adapter!"
)
'`convert_time_type` is not implemented for this adapter!')
@available
@classmethod
def convert_type(cls, agate_table: agate.Table, col_idx: int) -> Optional[str]:
def convert_type(
cls, agate_table: agate.Table, col_idx: int
) -> Optional[str]:
return cls.convert_agate_type(agate_table, col_idx)
@classmethod
@@ -951,7 +961,7 @@ class BaseAdapter(metaclass=AdapterMeta):
:param release: Ignored.
"""
if release is not False:
deprecations.warn("execute-macro-release")
deprecations.warn('execute-macro-release')
if kwargs is None:
kwargs = {}
if context_override is None:
@@ -965,27 +975,28 @@ class BaseAdapter(metaclass=AdapterMeta):
)
if macro is None:
if project is None:
package_name = "any package"
package_name = 'any package'
else:
package_name = 'the "{}" package'.format(project)
raise RuntimeException(
'dbt could not find a macro with the name "{}" in {}'.format(
macro_name, package_name
)
'dbt could not find a macro with the name "{}" in {}'
.format(macro_name, package_name)
)
# This causes a reference cycle, as generate_runtime_macro()
# ends up calling get_adapter, so the import has to be here.
from dbt.context.providers import generate_runtime_macro
macro_context = generate_runtime_macro(
macro=macro, config=self.config, manifest=manifest, package_name=project
macro=macro,
config=self.config,
manifest=manifest,
package_name=project
)
macro_context.update(context_override)
macro_function = MacroGenerator(macro, macro_context)
with self.connections.exception_handler(f"macro {macro_name}"):
with self.connections.exception_handler(f'macro {macro_name}'):
result = macro_function(**kwargs)
return result
@@ -1000,7 +1011,7 @@ class BaseAdapter(metaclass=AdapterMeta):
table = table_from_rows(
table.rows,
table.column_names,
text_only_columns=["table_database", "table_schema", "table_name"],
text_only_columns=['table_database', 'table_schema', 'table_name']
)
return table.where(_catalog_filter_schemas(manifest))
@@ -1011,7 +1022,10 @@ class BaseAdapter(metaclass=AdapterMeta):
manifest: Manifest,
) -> agate.Table:
kwargs = {"information_schema": information_schema, "schemas": schemas}
kwargs = {
'information_schema': information_schema,
'schemas': schemas
}
table = self.execute_macro(
GET_CATALOG_MACRO_NAME,
kwargs=kwargs,
@@ -1023,7 +1037,9 @@ class BaseAdapter(metaclass=AdapterMeta):
results = self._catalog_filter_table(table, manifest)
return results
def get_catalog(self, manifest: Manifest) -> Tuple[agate.Table, List[Exception]]:
def get_catalog(
self, manifest: Manifest
) -> Tuple[agate.Table, List[Exception]]:
schema_map = self._get_catalog_schemas(manifest)
with executor(self.config) as tpe:
@@ -1031,10 +1047,14 @@ class BaseAdapter(metaclass=AdapterMeta):
for info, schemas in schema_map.items():
if len(schemas) == 0:
continue
name = ".".join([str(info.database), "information_schema"])
name = '.'.join([
str(info.database),
'information_schema'
])
fut = tpe.submit_connected(
self, name, self._get_one_catalog, info, schemas, manifest
self, name,
self._get_one_catalog, info, schemas, manifest
)
futures.append(fut)
@@ -1051,18 +1071,20 @@ class BaseAdapter(metaclass=AdapterMeta):
source: BaseRelation,
loaded_at_field: str,
filter: Optional[str],
manifest: Optional[Manifest] = None,
manifest: Optional[Manifest] = None
) -> Dict[str, Any]:
"""Calculate the freshness of sources in dbt, and return it"""
kwargs: Dict[str, Any] = {
"source": source,
"loaded_at_field": loaded_at_field,
"filter": filter,
'source': source,
'loaded_at_field': loaded_at_field,
'filter': filter,
}
# run the macro
table = self.execute_macro(
FRESHNESS_MACRO_NAME, kwargs=kwargs, manifest=manifest
FRESHNESS_MACRO_NAME,
kwargs=kwargs,
manifest=manifest
)
# now we have a 1-row table of the maximum `loaded_at_field` value and
# the current time according to the db.
@@ -1082,9 +1104,9 @@ class BaseAdapter(metaclass=AdapterMeta):
snapshotted_at = _utc(table[0][1], source, loaded_at_field)
age = (snapshotted_at - max_loaded_at).total_seconds()
return {
"max_loaded_at": max_loaded_at,
"snapshotted_at": snapshotted_at,
"age": age,
'max_loaded_at': max_loaded_at,
'snapshotted_at': snapshotted_at,
'age': age,
}
def pre_model_hook(self, config: Mapping[str, Any]) -> Any:
@@ -1114,7 +1136,6 @@ class BaseAdapter(metaclass=AdapterMeta):
def get_compiler(self):
from dbt.compilation import Compiler
return Compiler(self.config)
# Methods used in adapter tests
@@ -1125,13 +1146,13 @@ class BaseAdapter(metaclass=AdapterMeta):
clause: str,
where_clause: Optional[str] = None,
) -> str:
clause = f"update {dst_name} set {dst_column} = {clause}"
clause = f'update {dst_name} set {dst_column} = {clause}'
if where_clause is not None:
clause += f" where {where_clause}"
clause += f' where {where_clause}'
return clause
def timestamp_add_sql(
self, add_to: str, number: int = 1, interval: str = "hour"
self, add_to: str, number: int = 1, interval: str = 'hour'
) -> str:
# for backwards compatibility, we're compelled to set some sort of
# default. A lot of searching has lead me to believe that the
@@ -1140,24 +1161,23 @@ class BaseAdapter(metaclass=AdapterMeta):
return f"{add_to} + interval '{number} {interval}'"
def string_add_sql(
self,
add_to: str,
value: str,
location="append",
self, add_to: str, value: str, location='append',
) -> str:
if location == "append":
if location == 'append':
return f"{add_to} || '{value}'"
elif location == "prepend":
elif location == 'prepend':
return f"'{value}' || {add_to}"
else:
raise RuntimeException(f'Got an unexpected location value of "{location}"')
raise RuntimeException(
f'Got an unexpected location value of "{location}"'
)
def get_rows_different_sql(
self,
relation_a: BaseRelation,
relation_b: BaseRelation,
column_names: Optional[List[str]] = None,
except_operator: str = "EXCEPT",
except_operator: str = 'EXCEPT',
) -> str:
"""Generate SQL for a query that returns a single row with a two
columns: the number of rows that are different between the two
@@ -1170,7 +1190,7 @@ class BaseAdapter(metaclass=AdapterMeta):
names = sorted((self.quote(c.name) for c in columns))
else:
names = sorted((self.quote(n) for n in column_names))
columns_csv = ", ".join(names)
columns_csv = ', '.join(names)
sql = COLUMNS_EQUAL_SQL.format(
columns=columns_csv,
@@ -1182,7 +1202,7 @@ class BaseAdapter(metaclass=AdapterMeta):
return sql
COLUMNS_EQUAL_SQL = """
COLUMNS_EQUAL_SQL = '''
with diff_count as (
SELECT
1 as id,
@@ -1208,11 +1228,11 @@ select
diff_count.num_missing as num_mismatched
from row_count_diff
join diff_count using (id)
""".strip()
'''.strip()
def catch_as_completed(
futures, # typing: List[Future[agate.Table]]
futures # typing: List[Future[agate.Table]]
) -> Tuple[agate.Table, List[Exception]]:
# catalogs: agate.Table = agate.Table(rows=[])
@@ -1225,10 +1245,15 @@ def catch_as_completed(
if exc is None:
catalog = future.result()
tables.append(catalog)
elif isinstance(exc, KeyboardInterrupt) or not isinstance(exc, Exception):
elif (
isinstance(exc, KeyboardInterrupt) or
not isinstance(exc, Exception)
):
raise exc
else:
warn_or_error(f"Encountered an error while generating catalog: {str(exc)}")
warn_or_error(
f'Encountered an error while generating catalog: {str(exc)}'
)
# exc is not None, derives from Exception, and isn't ctrl+c
exceptions.append(exc)
return merge_tables(tables), exceptions
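For orientation only: `catch_as_completed` at the bottom of this file drains per-schema catalog futures, re-raising `KeyboardInterrupt` (or anything that is not an `Exception`) and merely warning about and collecting ordinary failures so the surviving catalogs still get merged. A minimal sketch of the same pattern with `concurrent.futures`, using plain values in place of agate tables:

from concurrent.futures import ThreadPoolExecutor, as_completed

def gather(futures):
    results, errors = [], []
    for future in as_completed(futures):
        exc = future.exception()
        if exc is None:
            results.append(future.result())
        elif isinstance(exc, KeyboardInterrupt) or not isinstance(exc, Exception):
            # ctrl-c and other BaseExceptions abort the whole run
            raise exc
        else:
            # ordinary failures are recorded without stopping the rest
            errors.append(exc)
    return results, errors

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(lambda n=n: 1 / n) for n in (1, 2, 0)]
    results, errors = gather(futures)
print(results, errors)  # order varies; the ZeroDivisionError ends up in errors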

View File

@@ -30,11 +30,9 @@ class _Available:
x.update(big_expensive_db_query())
return x
"""
def inner(func):
func._parse_replacement_ = parse_replacement
return self(func)
return inner
def deprecated(
@@ -59,14 +57,13 @@ class _Available:
The optional parse_replacement, if provided, will provide a parse-time
replacement for the actual method (see `available.parse`).
"""
def wrapper(func):
func_name = func.__name__
renamed_method(func_name, supported_name)
@wraps(func)
def inner(*args, **kwargs):
warn("adapter:{}".format(func_name))
warn('adapter:{}'.format(func_name))
return func(*args, **kwargs)
if parse_replacement:
@@ -74,7 +71,6 @@ class _Available:
else:
available_function = self
return available_function(inner)
return wrapper
def parse_none(self, func: Callable) -> Callable:
@@ -113,14 +109,14 @@ class AdapterMeta(abc.ABCMeta):
# collect base class data first
for base in bases:
available.update(getattr(base, "_available_", set()))
replacements.update(getattr(base, "_parse_replacements_", set()))
available.update(getattr(base, '_available_', set()))
replacements.update(getattr(base, '_parse_replacements_', set()))
# override with local data if it exists
for name, value in namespace.items():
if getattr(value, "_is_available_", False):
if getattr(value, '_is_available_', False):
available.add(name)
parse_replacement = getattr(value, "_parse_replacement_", None)
parse_replacement = getattr(value, '_parse_replacement_', None)
if parse_replacement is not None:
replacements[name] = parse_replacement
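The `AdapterMeta` hunk above only changes quoting; what the metaclass does is gather, across the class hierarchy, every method flagged for availability plus any parse-time replacement. A rough, self-contained sketch of that collection pattern (the attribute names follow the hunk, but the decorator here is a stand-in, not dbt's `available`):

import abc

def available(func):
    # stand-in decorator: just flags the function
    func._is_available_ = True
    return func

class CollectingMeta(abc.ABCMeta):
    def __new__(mcls, name, bases, namespace, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        collected = set()
        for base in bases:                      # inherit flags from base classes
            collected.update(getattr(base, "_available_", set()))
        for attr, value in namespace.items():   # then add locally flagged methods
            if getattr(value, "_is_available_", False):
                collected.add(attr)
        cls._available_ = frozenset(collected)
        return cls

class Base(metaclass=CollectingMeta):
    @available
    def quote(self): ...

class Child(Base):
    @available
    def execute(self): ...

print(sorted(Child._available_))  # ['execute', 'quote']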

View File

@@ -8,10 +8,11 @@ from dbt.adapters.protocol import AdapterProtocol
def project_name_from_path(include_path: str) -> str:
# avoid an import cycle
from dbt.config.project import Project
partial = Project.partial_load(include_path)
if partial.project_name is None:
raise CompilationException(f"Invalid project at {include_path}: name not set!")
raise CompilationException(
f'Invalid project at {include_path}: name not set!'
)
return partial.project_name
@@ -22,13 +23,12 @@ class AdapterPlugin:
:param dependencies: A list of adapter names that this adapter depends
upon.
"""
def __init__(
self,
adapter: Type[AdapterProtocol],
credentials: Type[Credentials],
include_path: str,
dependencies: Optional[List[str]] = None,
dependencies: Optional[List[str]] = None
):
self.adapter: Type[AdapterProtocol] = adapter

View File

@@ -15,7 +15,7 @@ class NodeWrapper:
self._inner_node = node
def __getattr__(self, name):
return getattr(self._inner_node, name, "")
return getattr(self._inner_node, name, '')
class _QueryComment(local):
@@ -24,7 +24,6 @@ class _QueryComment(local):
- the current thread's query comment.
- a source_name indicating what set the current thread's query comment
"""
def __init__(self, initial):
self.query_comment: Optional[str] = initial
self.append = False
@@ -36,16 +35,16 @@ class _QueryComment(local):
if self.append:
# replace last ';' with '<comment>;'
sql = sql.rstrip()
if sql[-1] == ";":
if sql[-1] == ';':
sql = sql[:-1]
return "{}\n/* {} */;".format(sql, self.query_comment.strip())
return '{}\n/* {} */;'.format(sql, self.query_comment.strip())
return "{}\n/* {} */".format(sql, self.query_comment.strip())
return '{}\n/* {} */'.format(sql, self.query_comment.strip())
return "/* {} */\n{}".format(self.query_comment.strip(), sql)
return '/* {} */\n{}'.format(self.query_comment.strip(), sql)
def set(self, comment: Optional[str], append: bool):
if isinstance(comment, str) and "*/" in comment:
if isinstance(comment, str) and '*/' in comment:
# tell the user "no" so they don't hurt themselves by writing
# garbage
raise RuntimeException(
@@ -64,17 +63,15 @@ class MacroQueryStringSetter:
self.config = config
comment_macro = self._get_comment_macro()
self.generator: QueryStringFunc = lambda name, model: ""
self.generator: QueryStringFunc = lambda name, model: ''
# if the comment value was None or the empty string, just skip it
if comment_macro:
assert isinstance(comment_macro, str)
macro = "\n".join(
(
"{%- macro query_comment_macro(connection_name, node) -%}",
comment_macro,
"{% endmacro %}",
)
)
macro = '\n'.join((
'{%- macro query_comment_macro(connection_name, node) -%}',
comment_macro,
'{% endmacro %}'
))
ctx = self._get_context()
self.generator = QueryStringGenerator(macro, ctx)
self.comment = _QueryComment(None)
@@ -90,7 +87,7 @@ class MacroQueryStringSetter:
return self.comment.add(sql)
def reset(self):
self.set("master", None)
self.set('master', None)
def set(self, name: str, node: Optional[CompileResultNode]):
wrapped: Optional[NodeWrapper] = None
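The `_QueryComment.add` hunk earlier in this file is again pure requoting; the behaviour it keeps: by default the rendered comment is prepended as a `/* ... */` block, and in append mode it is re-attached just before the trailing semicolon. A small sketch of that string handling as a plain function (the real code lives on a thread-local class and renders the comment from a macro):

def add_comment(sql, comment, append=False):
    if not comment:
        return sql
    if append:
        # replace a trailing ';' with '<comment>;'
        sql = sql.rstrip()
        if sql[-1] == ";":
            sql = sql[:-1]
            return "{}\n/* {} */;".format(sql, comment.strip())
        return "{}\n/* {} */".format(sql, comment.strip())
    return "/* {} */\n{}".format(comment.strip(), sql)

print(add_comment("select 1;", "run by dbt", append=True))
# select 1
# /* run by dbt */;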

View File

@@ -1,16 +1,13 @@
from collections.abc import Hashable
from dataclasses import dataclass
from typing import Optional, TypeVar, Any, Type, Dict, Union, Iterator, Tuple, Set
from typing import (
Optional, TypeVar, Any, Type, Dict, Union, Iterator, Tuple, Set
)
from dbt.contracts.graph.compiled import CompiledNode
from dbt.contracts.graph.parsed import ParsedSourceDefinition, ParsedNode
from dbt.contracts.relation import (
RelationType,
ComponentName,
HasQuoting,
FakeAPIObject,
Policy,
Path,
RelationType, ComponentName, HasQuoting, FakeAPIObject, Policy, Path
)
from dbt.exceptions import InternalException
from dbt.node_types import NodeType
@@ -19,7 +16,7 @@ from dbt.utils import filter_null_values, deep_merge, classproperty
import dbt.exceptions
Self = TypeVar("Self", bound="BaseRelation")
Self = TypeVar('Self', bound='BaseRelation')
@dataclass(frozen=True, eq=False, repr=False)
@@ -43,7 +40,7 @@ class BaseRelation(FakeAPIObject, Hashable):
if field.name == field_name:
return field
# this should be unreachable
raise ValueError(f"BaseRelation has no {field_name} field!")
raise ValueError(f'BaseRelation has no {field_name} field!')
def __eq__(self, other):
if not isinstance(other, self.__class__):
@@ -52,18 +49,20 @@ class BaseRelation(FakeAPIObject, Hashable):
@classmethod
def get_default_quote_policy(cls) -> Policy:
return cls._get_field_named("quote_policy").default
return cls._get_field_named('quote_policy').default
@classmethod
def get_default_include_policy(cls) -> Policy:
return cls._get_field_named("include_policy").default
return cls._get_field_named('include_policy').default
def get(self, key, default=None):
"""Override `.get` to return a metadata object so we don't break
dbt_utils.
"""
if key == "metadata":
return {"type": self.__class__.__name__}
if key == 'metadata':
return {
'type': self.__class__.__name__
}
return super().get(key, default)
def matches(
@@ -72,19 +71,16 @@ class BaseRelation(FakeAPIObject, Hashable):
schema: Optional[str] = None,
identifier: Optional[str] = None,
) -> bool:
search = filter_null_values(
{
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier,
}
)
search = filter_null_values({
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier
})
if not search:
# nothing was passed in
raise dbt.exceptions.RuntimeException(
"Tried to match relation, but no search path was passed!"
)
"Tried to match relation, but no search path was passed!")
exact_match = True
approximate_match = True
@@ -113,13 +109,11 @@ class BaseRelation(FakeAPIObject, Hashable):
schema: Optional[bool] = None,
identifier: Optional[bool] = None,
) -> Self:
policy = filter_null_values(
{
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier,
}
)
policy = filter_null_values({
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier
})
new_quote_policy = self.quote_policy.replace_dict(policy)
return self.replace(quote_policy=new_quote_policy)
@@ -130,18 +124,16 @@ class BaseRelation(FakeAPIObject, Hashable):
schema: Optional[bool] = None,
identifier: Optional[bool] = None,
) -> Self:
policy = filter_null_values(
{
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier,
}
)
policy = filter_null_values({
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier
})
new_include_policy = self.include_policy.replace_dict(policy)
return self.replace(include_policy=new_include_policy)
def information_schema(self, view_name=None) -> "InformationSchema":
def information_schema(self, view_name=None) -> 'InformationSchema':
# some of our data comes from jinja, where things can be `Undefined`.
if not isinstance(view_name, str):
view_name = None
@@ -151,10 +143,10 @@ class BaseRelation(FakeAPIObject, Hashable):
info_schema = InformationSchema.from_relation(self, view_name)
return info_schema.incorporate(path={"schema": None})
def information_schema_only(self) -> "InformationSchema":
def information_schema_only(self) -> 'InformationSchema':
return self.information_schema()
def without_identifier(self) -> "BaseRelation":
def without_identifier(self) -> 'BaseRelation':
"""Return a form of this relation that only has the database and schema
set to included. To get the appropriately-quoted form the schema out of
the result (for use as part of a query), use `.render()`. To get the
@@ -165,7 +157,7 @@ class BaseRelation(FakeAPIObject, Hashable):
return self.include(identifier=False).replace_path(identifier=None)
def _render_iterator(
self,
self
) -> Iterator[Tuple[Optional[ComponentName], Optional[str]]]:
for key in ComponentName:
@@ -178,10 +170,13 @@ class BaseRelation(FakeAPIObject, Hashable):
def render(self) -> str:
# if there is nothing set, this will return the empty string.
return ".".join(part for _, part in self._render_iterator() if part is not None)
return '.'.join(
part for _, part in self._render_iterator()
if part is not None
)
def quoted(self, identifier):
return "{quote_char}{identifier}{quote_char}".format(
return '{quote_char}{identifier}{quote_char}'.format(
quote_char=self.quote_character,
identifier=identifier,
)
@@ -191,11 +186,11 @@ class BaseRelation(FakeAPIObject, Hashable):
cls: Type[Self], source: ParsedSourceDefinition, **kwargs: Any
) -> Self:
source_quoting = source.quoting.to_dict(omit_none=True)
source_quoting.pop("column", None)
source_quoting.pop('column', None)
quote_policy = deep_merge(
cls.get_default_quote_policy().to_dict(omit_none=True),
source_quoting,
kwargs.get("quote_policy", {}),
kwargs.get('quote_policy', {}),
)
return cls.create(
@@ -203,12 +198,12 @@ class BaseRelation(FakeAPIObject, Hashable):
schema=source.schema,
identifier=source.identifier,
quote_policy=quote_policy,
**kwargs,
**kwargs
)
@staticmethod
def add_ephemeral_prefix(name: str):
return f"__dbt__cte__{name}"
return f'__dbt__cte__{name}'
@classmethod
def create_ephemeral_from_node(
@@ -241,8 +236,7 @@ class BaseRelation(FakeAPIObject, Hashable):
schema=node.schema,
identifier=node.alias,
quote_policy=quote_policy,
**kwargs,
)
**kwargs)
@classmethod
def create_from(
@@ -254,16 +248,15 @@ class BaseRelation(FakeAPIObject, Hashable):
if node.resource_type == NodeType.Source:
if not isinstance(node, ParsedSourceDefinition):
raise InternalException(
"type mismatch, expected ParsedSourceDefinition but got {}".format(
type(node)
)
'type mismatch, expected ParsedSourceDefinition but got {}'
.format(type(node))
)
return cls.create_from_source(node, **kwargs)
else:
if not isinstance(node, (ParsedNode, CompiledNode)):
raise InternalException(
"type mismatch, expected ParsedNode or CompiledNode but "
"got {}".format(type(node))
'type mismatch, expected ParsedNode or CompiledNode but '
'got {}'.format(type(node))
)
return cls.create_from_node(config, node, **kwargs)
@@ -276,16 +269,14 @@ class BaseRelation(FakeAPIObject, Hashable):
type: Optional[RelationType] = None,
**kwargs,
) -> Self:
kwargs.update(
{
"path": {
"database": database,
"schema": schema,
"identifier": identifier,
},
"type": type,
}
)
kwargs.update({
'path': {
'database': database,
'schema': schema,
'identifier': identifier,
},
'type': type,
})
return cls.from_dict(kwargs)
def __repr__(self) -> str:
@@ -351,7 +342,7 @@ class BaseRelation(FakeAPIObject, Hashable):
return RelationType
Info = TypeVar("Info", bound="InformationSchema")
Info = TypeVar('Info', bound='InformationSchema')
@dataclass(frozen=True, eq=False, repr=False)
@@ -361,7 +352,7 @@ class InformationSchema(BaseRelation):
def __post_init__(self):
if not isinstance(self.information_schema_view, (type(None), str)):
raise dbt.exceptions.CompilationException(
"Got an invalid name: {}".format(self.information_schema_view)
'Got an invalid name: {}'.format(self.information_schema_view)
)
@classmethod
@@ -371,7 +362,7 @@ class InformationSchema(BaseRelation):
return Path(
database=relation.database,
schema=relation.schema,
identifier="INFORMATION_SCHEMA",
identifier='INFORMATION_SCHEMA',
)
@classmethod
@@ -402,7 +393,9 @@ class InformationSchema(BaseRelation):
relation: BaseRelation,
information_schema_view: Optional[str],
) -> Info:
include_policy = cls.get_include_policy(relation, information_schema_view)
include_policy = cls.get_include_policy(
relation, information_schema_view
)
quote_policy = cls.get_quote_policy(relation, information_schema_view)
path = cls.get_path(relation, information_schema_view)
return cls(
@@ -424,7 +417,6 @@ class SchemaSearchMap(Dict[InformationSchema, Set[Optional[str]]]):
search for what schemas. The schema values are all lowercased to avoid
duplication.
"""
def add(self, relation: BaseRelation):
key = relation.information_schema_only()
if key not in self:
@@ -434,27 +426,31 @@ class SchemaSearchMap(Dict[InformationSchema, Set[Optional[str]]]):
schema = relation.schema.lower()
self[key].add(schema)
def search(self) -> Iterator[Tuple[InformationSchema, Optional[str]]]:
def search(
self
) -> Iterator[Tuple[InformationSchema, Optional[str]]]:
for information_schema_name, schemas in self.items():
for schema in schemas:
yield information_schema_name, schema
def flatten(self):
def flatten(self, allow_multiple_databases: bool = False):
new = self.__class__()
# make sure we don't have duplicates
seen = {r.database.lower() for r in self if r.database}
if len(seen) > 1:
dbt.exceptions.raise_compiler_error(str(seen))
# make sure we don't have multiple databases if allow_multiple_databases is set to False
if not allow_multiple_databases:
seen = {r.database.lower() for r in self if r.database}
if len(seen) > 1:
dbt.exceptions.raise_compiler_error(str(seen))
for information_schema_name, schema in self.search():
path = {"database": information_schema_name.database, "schema": schema}
new.add(
information_schema_name.incorporate(
path=path,
quote_policy={"database": False},
include_policy={"database": False},
)
)
path = {
'database': information_schema_name.database,
'schema': schema
}
new.add(information_schema_name.incorporate(
path=path,
quote_policy={'database': False},
include_policy={'database': False},
))
return new
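Unlike most hunks in this file, this last one differs in behaviour, not just quoting: one side gives `SchemaSearchMap.flatten` an `allow_multiple_databases` flag, so the single-database check only runs when the flag is left at its default `False`. A minimal sketch of just that guard, with a plain list of database names standing in for relations and `RuntimeError` standing in for `dbt.exceptions.raise_compiler_error`:

def check_single_database(databases, allow_multiple_databases=False):
    # with the flag unset, more than one distinct database is an error
    if not allow_multiple_databases:
        seen = {d.lower() for d in databases if d}
        if len(seen) > 1:
            raise RuntimeError(f"got multiple databases: {sorted(seen)}")

check_single_database(["analytics", "ANALYTICS"])  # fine: one database after lowercasing
check_single_database(["analytics", "raw"], allow_multiple_databases=True)  # now permitted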

View File

@@ -7,7 +7,7 @@ from dbt.logger import CACHE_LOGGER as logger
from dbt.utils import lowercase
import dbt.exceptions
_ReferenceKey = namedtuple("_ReferenceKey", "database schema identifier")
_ReferenceKey = namedtuple('_ReferenceKey', 'database schema identifier')
def _make_key(relation) -> _ReferenceKey:
@@ -15,11 +15,9 @@ def _make_key(relation) -> _ReferenceKey:
to keep track of quoting
"""
# databases and schemas can both be None
return _ReferenceKey(
lowercase(relation.database),
lowercase(relation.schema),
lowercase(relation.identifier),
)
return _ReferenceKey(lowercase(relation.database),
lowercase(relation.schema),
lowercase(relation.identifier))
def dot_separated(key: _ReferenceKey) -> str:
@@ -27,7 +25,7 @@ def dot_separated(key: _ReferenceKey) -> str:
:param _ReferenceKey key: The key to stringify.
"""
return ".".join(map(str, key))
return '.'.join(map(str, key))
class _CachedRelation:
@@ -39,14 +37,13 @@ class _CachedRelation:
that refer to this relation.
:attr BaseRelation inner: The underlying dbt relation.
"""
def __init__(self, inner):
self.referenced_by = {}
self.inner = inner
def __str__(self) -> str:
return (
"_CachedRelation(database={}, schema={}, identifier={}, inner={})"
'_CachedRelation(database={}, schema={}, identifier={}, inner={})'
).format(self.database, self.schema, self.identifier, self.inner)
@property
@@ -81,7 +78,7 @@ class _CachedRelation:
"""
return _make_key(self)
def add_reference(self, referrer: "_CachedRelation"):
def add_reference(self, referrer: '_CachedRelation'):
"""Add a reference from referrer to self, indicating that if this node
were drop...cascaded, the referrer would be dropped as well.
@@ -125,9 +122,9 @@ class _CachedRelation:
# table_name is ever anything but the identifier (via .create())
self.inner = self.inner.incorporate(
path={
"database": new_relation.inner.database,
"schema": new_relation.inner.schema,
"identifier": new_relation.inner.identifier,
'database': new_relation.inner.database,
'schema': new_relation.inner.schema,
'identifier': new_relation.inner.identifier
},
)
@@ -143,9 +140,8 @@ class _CachedRelation:
"""
if new_key in self.referenced_by:
dbt.exceptions.raise_cache_inconsistent(
'in rename of "{}" -> "{}", new name is in the cache already'.format(
old_key, new_key
)
'in rename of "{}" -> "{}", new name is in the cache already'
.format(old_key, new_key)
)
if old_key not in self.referenced_by:
@@ -176,16 +172,13 @@ class RelationsCache:
The adapters also hold this lock while filling the cache.
:attr Set[str] schemas: The set of known/cached schemas, all lowercased.
"""
def __init__(self) -> None:
self.relations: Dict[_ReferenceKey, _CachedRelation] = {}
self.lock = threading.RLock()
self.schemas: Set[Tuple[Optional[str], Optional[str]]] = set()
def add_schema(
self,
database: Optional[str],
schema: Optional[str],
self, database: Optional[str], schema: Optional[str],
) -> None:
"""Add a schema to the set of known schemas (case-insensitive)
@@ -195,9 +188,7 @@ class RelationsCache:
self.schemas.add((lowercase(database), lowercase(schema)))
def drop_schema(
self,
database: Optional[str],
schema: Optional[str],
self, database: Optional[str], schema: Optional[str],
) -> None:
"""Drop the given schema and remove it from the set of known schemas.
@@ -272,15 +263,15 @@ class RelationsCache:
return
if referenced is None:
dbt.exceptions.raise_cache_inconsistent(
"in add_link, referenced link key {} not in cache!".format(
referenced_key
)
'in add_link, referenced link key {} not in cache!'
.format(referenced_key)
)
dependent = self.relations.get(dependent_key)
if dependent is None:
dbt.exceptions.raise_cache_inconsistent(
"in add_link, dependent link key {} not in cache!".format(dependent_key)
'in add_link, dependent link key {} not in cache!'
.format(dependent_key)
)
assert dependent is not None # we just raised!
@@ -307,23 +298,28 @@ class RelationsCache:
# referring to a table outside our control. There's no need to make
# a link - we will never drop the referenced relation during a run.
logger.debug(
"{dep!s} references {ref!s} but {ref.database}.{ref.schema} "
"is not in the cache, skipping assumed external relation".format(
dep=dependent, ref=ref_key
)
'{dep!s} references {ref!s} but {ref.database}.{ref.schema} '
'is not in the cache, skipping assumed external relation'
.format(dep=dependent, ref=ref_key)
)
return
if ref_key not in self.relations:
# Insert a dummy "external" relation.
referenced = referenced.replace(type=referenced.External)
referenced = referenced.replace(
type=referenced.External
)
self.add(referenced)
dep_key = _make_key(dependent)
if dep_key not in self.relations:
# Insert a dummy "external" relation.
dependent = dependent.replace(type=referenced.External)
dependent = dependent.replace(
type=referenced.External
)
self.add(dependent)
logger.debug("adding link, {!s} references {!s}".format(dep_key, ref_key))
logger.debug(
'adding link, {!s} references {!s}'.format(dep_key, ref_key)
)
with self.lock:
self._add_link(ref_key, dep_key)
@@ -334,14 +330,14 @@ class RelationsCache:
:param BaseRelation relation: The underlying relation.
"""
cached = _CachedRelation(relation)
logger.debug("Adding relation: {!s}".format(cached))
logger.debug('Adding relation: {!s}'.format(cached))
lazy_log("before adding: {!s}", self.dump_graph)
lazy_log('before adding: {!s}', self.dump_graph)
with self.lock:
self._setdefault(cached)
lazy_log("after adding: {!s}", self.dump_graph)
lazy_log('after adding: {!s}', self.dump_graph)
def _remove_refs(self, keys):
"""Removes all references to all entries in keys. This does not
@@ -363,10 +359,13 @@ class RelationsCache:
:param _CachedRelation dropped: An existing _CachedRelation to drop.
"""
if dropped not in self.relations:
logger.debug("dropped a nonexistent relationship: {!s}".format(dropped))
logger.debug('dropped a nonexistent relationship: {!s}'
.format(dropped))
return
consequences = self.relations[dropped].collect_consequences()
logger.debug("drop {} is cascading to {}".format(dropped, consequences))
logger.debug(
'drop {} is cascading to {}'.format(dropped, consequences)
)
self._remove_refs(consequences)
def drop(self, relation):
@@ -381,7 +380,7 @@ class RelationsCache:
:param str identifier: The identifier of the relation to drop.
"""
dropped = _make_key(relation)
logger.debug("Dropping relation: {!s}".format(dropped))
logger.debug('Dropping relation: {!s}'.format(dropped))
with self.lock:
self._drop_cascade_relation(dropped)
@@ -405,9 +404,8 @@ class RelationsCache:
for cached in self.relations.values():
if cached.is_referenced_by(old_key):
logger.debug(
"updated reference from {0} -> {2} to {1} -> {2}".format(
old_key, new_key, cached.key()
)
'updated reference from {0} -> {2} to {1} -> {2}'
.format(old_key, new_key, cached.key())
)
cached.rename_key(old_key, new_key)
@@ -432,16 +430,14 @@ class RelationsCache:
"""
if new_key in self.relations:
dbt.exceptions.raise_cache_inconsistent(
"in rename, new key {} already in cache: {}".format(
new_key, list(self.relations.keys())
)
'in rename, new key {} already in cache: {}'
.format(new_key, list(self.relations.keys()))
)
if old_key not in self.relations:
logger.debug(
"old key {} not found in self.relations, assuming temporary".format(
old_key
)
'old key {} not found in self.relations, assuming temporary'
.format(old_key)
)
return False
return True
@@ -460,9 +456,11 @@ class RelationsCache:
"""
old_key = _make_key(old)
new_key = _make_key(new)
logger.debug("Renaming relation {!s} to {!s}".format(old_key, new_key))
logger.debug('Renaming relation {!s} to {!s}'.format(
old_key, new_key
))
lazy_log("before rename: {!s}", self.dump_graph)
lazy_log('before rename: {!s}', self.dump_graph)
with self.lock:
if self._check_rename_constraints(old_key, new_key):
@@ -470,7 +468,7 @@ class RelationsCache:
else:
self._setdefault(_CachedRelation(new))
lazy_log("after rename: {!s}", self.dump_graph)
lazy_log('after rename: {!s}', self.dump_graph)
def get_relations(
self, database: Optional[str], schema: Optional[str]
@@ -485,14 +483,14 @@ class RelationsCache:
schema = lowercase(schema)
with self.lock:
results = [
r.inner
for r in self.relations.values()
if (lowercase(r.schema) == schema and lowercase(r.database) == database)
r.inner for r in self.relations.values()
if (lowercase(r.schema) == schema and
lowercase(r.database) == database)
]
if None in results:
dbt.exceptions.raise_cache_inconsistent(
"in get_relations, a None relation was found in the cache!"
'in get_relations, a None relation was found in the cache!'
)
return results

View File

@@ -50,7 +50,9 @@ class AdapterContainer:
adapter = self.get_adapter_class_by_name(name)
return adapter.Relation
def get_config_class_by_name(self, name: str) -> Type[AdapterConfig]:
def get_config_class_by_name(
self, name: str
) -> Type[AdapterConfig]:
adapter = self.get_adapter_class_by_name(name)
return adapter.AdapterSpecificConfigs
@@ -60,24 +62,24 @@ class AdapterContainer:
# singletons
try:
# mypy doesn't think modules have any attributes.
mod: Any = import_module("." + name, "dbt.adapters")
mod: Any = import_module('.' + name, 'dbt.adapters')
except ModuleNotFoundError as exc:
# if we failed to import the target module in particular, inform
# the user about it via a runtime error
if exc.name == "dbt.adapters." + name:
raise RuntimeException(f"Could not find adapter type {name}!")
logger.info(f"Error importing adapter: {exc}")
if exc.name == 'dbt.adapters.' + name:
raise RuntimeException(f'Could not find adapter type {name}!')
logger.info(f'Error importing adapter: {exc}')
# otherwise, the error had to have come from some underlying
# library. Log the stack trace.
logger.debug("", exc_info=True)
logger.debug('', exc_info=True)
raise
plugin: AdapterPlugin = mod.Plugin
plugin_type = plugin.adapter.type()
if plugin_type != name:
raise RuntimeException(
f"Expected to find adapter with type named {name}, got "
f"adapter with type {plugin_type}"
f'Expected to find adapter with type named {name}, got '
f'adapter with type {plugin_type}'
)
with self.lock:
@@ -107,7 +109,8 @@ class AdapterContainer:
return self.adapters[adapter_name]
def reset_adapters(self):
"""Clear the adapters. This is useful for tests, which change configs."""
"""Clear the adapters. This is useful for tests, which change configs.
"""
with self.lock:
for adapter in self.adapters.values():
adapter.cleanup_connections()
@@ -137,7 +140,9 @@ class AdapterContainer:
try:
plugin = self.plugins[plugin_name]
except KeyError:
raise InternalException(f"No plugin found for {plugin_name}") from None
raise InternalException(
f'No plugin found for {plugin_name}'
) from None
plugins.append(plugin)
seen.add(plugin_name)
if plugin.dependencies is None:
@@ -161,7 +166,7 @@ class AdapterContainer:
path = self.packages[package_name]
except KeyError:
raise InternalException(
f"No internal package listing found for {package_name}"
f'No internal package listing found for {package_name}'
)
paths.append(path)
return paths
@@ -182,7 +187,8 @@ def get_adapter(config: AdapterRequiredConfig):
def reset_adapters():
"""Clear the adapters. This is useful for tests, which change configs."""
"""Clear the adapters. This is useful for tests, which change configs.
"""
FACTORY.reset_adapters()

View File

@@ -1,27 +1,17 @@
from dataclasses import dataclass
from typing import (
Type,
Hashable,
Optional,
ContextManager,
List,
Generic,
TypeVar,
ClassVar,
Tuple,
Union,
Dict,
Any,
Type, Hashable, Optional, ContextManager, List, Generic, TypeVar, ClassVar,
Tuple, Union, Dict, Any
)
from typing_extensions import Protocol
import agate
from dbt.contracts.connection import Connection, AdapterRequiredConfig, AdapterResponse
from dbt.contracts.connection import (
Connection, AdapterRequiredConfig, AdapterResponse
)
from dbt.contracts.graph.compiled import (
CompiledNode,
ManifestNode,
NonSourceCompiledNode,
CompiledNode, ManifestNode, NonSourceCompiledNode
)
from dbt.contracts.graph.parsed import ParsedNode, ParsedSourceDefinition
from dbt.contracts.graph.model_config import BaseConfig
@@ -44,7 +34,7 @@ class ColumnProtocol(Protocol):
pass
Self = TypeVar("Self", bound="RelationProtocol")
Self = TypeVar('Self', bound='RelationProtocol')
class RelationProtocol(Protocol):
@@ -74,11 +64,19 @@ class CompilerProtocol(Protocol):
...
AdapterConfig_T = TypeVar("AdapterConfig_T", bound=AdapterConfig)
ConnectionManager_T = TypeVar("ConnectionManager_T", bound=ConnectionManagerProtocol)
Relation_T = TypeVar("Relation_T", bound=RelationProtocol)
Column_T = TypeVar("Column_T", bound=ColumnProtocol)
Compiler_T = TypeVar("Compiler_T", bound=CompilerProtocol)
AdapterConfig_T = TypeVar(
'AdapterConfig_T', bound=AdapterConfig
)
ConnectionManager_T = TypeVar(
'ConnectionManager_T', bound=ConnectionManagerProtocol
)
Relation_T = TypeVar(
'Relation_T', bound=RelationProtocol
)
Column_T = TypeVar(
'Column_T', bound=ColumnProtocol
)
Compiler_T = TypeVar('Compiler_T', bound=CompilerProtocol)
class AdapterProtocol(
@@ -89,7 +87,7 @@ class AdapterProtocol(
Relation_T,
Column_T,
Compiler_T,
],
]
):
AdapterSpecificConfigs: ClassVar[Type[AdapterConfig_T]]
Column: ClassVar[Type[Column_T]]

View File

@@ -7,7 +7,9 @@ import agate
import dbt.clients.agate_helper
import dbt.exceptions
from dbt.adapters.base import BaseConnectionManager
from dbt.contracts.connection import Connection, ConnectionState, AdapterResponse
from dbt.contracts.connection import (
Connection, ConnectionState, AdapterResponse
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt import flags
@@ -21,12 +23,11 @@ class SQLConnectionManager(BaseConnectionManager):
- get_response
- open
"""
@abc.abstractmethod
def cancel(self, connection: Connection):
"""Cancel the given connection."""
raise dbt.exceptions.NotImplementedException(
"`cancel` is not implemented for this adapter!"
'`cancel` is not implemented for this adapter!'
)
def cancel_open(self) -> List[str]:
@@ -40,8 +41,8 @@ class SQLConnectionManager(BaseConnectionManager):
# if the connection failed, the handle will be None so we have
# nothing to cancel.
if (
connection.handle is not None
and connection.state == ConnectionState.OPEN
connection.handle is not None and
connection.state == ConnectionState.OPEN
):
self.cancel(connection)
if connection.name is not None:
@@ -53,22 +54,23 @@ class SQLConnectionManager(BaseConnectionManager):
sql: str,
auto_begin: bool = True,
bindings: Optional[Any] = None,
abridge_sql_log: bool = False,
abridge_sql_log: bool = False
) -> Tuple[Connection, Any]:
connection = self.get_thread_connection()
if auto_begin and connection.transaction_open is False:
self.begin()
logger.debug('Using {} connection "{}".'.format(self.TYPE, connection.name))
logger.debug('Using {} connection "{}".'
.format(self.TYPE, connection.name))
with self.exception_handler(sql):
if abridge_sql_log:
log_sql = "{}...".format(sql[:512])
log_sql = '{}...'.format(sql[:512])
else:
log_sql = sql
logger.debug(
"On {connection_name}: {sql}",
'On {connection_name}: {sql}',
connection_name=connection.name,
sql=log_sql,
)
@@ -79,7 +81,7 @@ class SQLConnectionManager(BaseConnectionManager):
logger.debug(
"SQL status: {status} in {elapsed:0.2f} seconds",
status=self.get_response(cursor),
elapsed=(time.time() - pre),
elapsed=(time.time() - pre)
)
return connection, cursor
@@ -88,14 +90,23 @@ class SQLConnectionManager(BaseConnectionManager):
def get_response(cls, cursor: Any) -> Union[AdapterResponse, str]:
"""Get the status of the cursor."""
raise dbt.exceptions.NotImplementedException(
"`get_response` is not implemented for this adapter!"
'`get_response` is not implemented for this adapter!'
)
@classmethod
def process_results(
cls, column_names: Iterable[str], rows: Iterable[Any]
cls,
column_names: Iterable[str],
rows: Iterable[Any]
) -> List[Dict[str, Any]]:
unique_col_names = dict()
for idx in range(len(column_names)):
col_name = column_names[idx]
if col_name in unique_col_names:
unique_col_names[col_name] += 1
column_names[idx] = f'{col_name}_{unique_col_names[col_name]}'
else:
unique_col_names[column_names[idx]] = 1
return [dict(zip(column_names, row)) for row in rows]
@classmethod
@@ -108,7 +119,10 @@ class SQLConnectionManager(BaseConnectionManager):
rows = cursor.fetchall()
data = cls.process_results(column_names, rows)
return dbt.clients.agate_helper.table_from_data_flat(data, column_names)
return dbt.clients.agate_helper.table_from_data_flat(
data,
column_names
)
def execute(
self, sql: str, auto_begin: bool = False, fetch: bool = False
@@ -123,10 +137,10 @@ class SQLConnectionManager(BaseConnectionManager):
return response, table
def add_begin_query(self):
return self.add_query("BEGIN", auto_begin=False)
return self.add_query('BEGIN', auto_begin=False)
def add_commit_query(self):
return self.add_query("COMMIT", auto_begin=False)
return self.add_query('COMMIT', auto_begin=False)
def begin(self):
connection = self.get_thread_connection()
@@ -134,14 +148,13 @@ class SQLConnectionManager(BaseConnectionManager):
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f"In begin, got {connection} - not a Connection!"
f'In begin, got {connection} - not a Connection!'
)
if connection.transaction_open is True:
raise dbt.exceptions.InternalException(
'Tried to begin a new transaction on connection "{}", but '
"it already had one open!".format(connection.name)
)
'it already had one open!'.format(connection.name))
self.add_begin_query()
@@ -153,16 +166,15 @@ class SQLConnectionManager(BaseConnectionManager):
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f"In commit, got {connection} - not a Connection!"
f'In commit, got {connection} - not a Connection!'
)
if connection.transaction_open is False:
raise dbt.exceptions.InternalException(
'Tried to commit transaction on connection "{}", but '
"it does not have one open!".format(connection.name)
)
'it does not have one open!'.format(connection.name))
logger.debug("On {}: COMMIT".format(connection.name))
logger.debug('On {}: COMMIT'.format(connection.name))
self.add_commit_query()
connection.transaction_open = False
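
The most substantive change in this file is that process_results now de-duplicates repeated column names before zipping rows into dictionaries. A minimal standalone sketch of that behavior, using made-up sample data (dedupe_column_names is a hypothetical helper written for illustration, not part of the diff):

# First occurrence keeps its name; later duplicates get a numeric
# suffix starting at _2, mirroring the loop added to process_results.
def dedupe_column_names(column_names):
    counts = {}
    result = []
    for name in column_names:
        if name in counts:
            counts[name] += 1
            result.append(f"{name}_{counts[name]}")
        else:
            counts[name] = 1
            result.append(name)
    return result

print(dedupe_column_names(["id", "name", "id"]))  # ['id', 'name', 'id_2']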


@@ -10,16 +10,16 @@ from dbt.logger import GLOBAL_LOGGER as logger
from dbt.adapters.base.relation import BaseRelation
LIST_RELATIONS_MACRO_NAME = "list_relations_without_caching"
GET_COLUMNS_IN_RELATION_MACRO_NAME = "get_columns_in_relation"
LIST_SCHEMAS_MACRO_NAME = "list_schemas"
CHECK_SCHEMA_EXISTS_MACRO_NAME = "check_schema_exists"
CREATE_SCHEMA_MACRO_NAME = "create_schema"
DROP_SCHEMA_MACRO_NAME = "drop_schema"
RENAME_RELATION_MACRO_NAME = "rename_relation"
TRUNCATE_RELATION_MACRO_NAME = "truncate_relation"
DROP_RELATION_MACRO_NAME = "drop_relation"
ALTER_COLUMN_TYPE_MACRO_NAME = "alter_column_type"
LIST_RELATIONS_MACRO_NAME = 'list_relations_without_caching'
GET_COLUMNS_IN_RELATION_MACRO_NAME = 'get_columns_in_relation'
LIST_SCHEMAS_MACRO_NAME = 'list_schemas'
CHECK_SCHEMA_EXISTS_MACRO_NAME = 'check_schema_exists'
CREATE_SCHEMA_MACRO_NAME = 'create_schema'
DROP_SCHEMA_MACRO_NAME = 'drop_schema'
RENAME_RELATION_MACRO_NAME = 'rename_relation'
TRUNCATE_RELATION_MACRO_NAME = 'truncate_relation'
DROP_RELATION_MACRO_NAME = 'drop_relation'
ALTER_COLUMN_TYPE_MACRO_NAME = 'alter_column_type'
class SQLAdapter(BaseAdapter):
@@ -60,23 +60,30 @@ class SQLAdapter(BaseAdapter):
:param abridge_sql_log: If set, limit the raw sql logged to 512
characters
"""
return self.connections.add_query(sql, auto_begin, bindings, abridge_sql_log)
return self.connections.add_query(sql, auto_begin, bindings,
abridge_sql_log)
@classmethod
def convert_text_type(cls, agate_table: agate.Table, col_idx: int) -> str:
return "text"
@classmethod
def convert_number_type(cls, agate_table: agate.Table, col_idx: int) -> str:
decimals = agate_table.aggregate(agate.MaxPrecision(col_idx)) # type: ignore
def convert_number_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
decimals = agate_table.aggregate(agate.MaxPrecision(col_idx))
return "float8" if decimals else "integer"
@classmethod
def convert_boolean_type(cls, agate_table: agate.Table, col_idx: int) -> str:
def convert_boolean_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
return "boolean"
@classmethod
def convert_datetime_type(cls, agate_table: agate.Table, col_idx: int) -> str:
def convert_datetime_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
return "timestamp without time zone"
@classmethod
@@ -92,28 +99,31 @@ class SQLAdapter(BaseAdapter):
return True
def expand_column_types(self, goal, current):
reference_columns = {c.name: c for c in self.get_columns_in_relation(goal)}
reference_columns = {
c.name: c for c in
self.get_columns_in_relation(goal)
}
target_columns = {c.name: c for c in self.get_columns_in_relation(current)}
target_columns = {
c.name: c for c
in self.get_columns_in_relation(current)
}
for column_name, reference_column in reference_columns.items():
target_column = target_columns.get(column_name)
if target_column is not None and target_column.can_expand_to(
reference_column
):
if target_column is not None and \
target_column.can_expand_to(reference_column):
col_string_size = reference_column.string_size()
new_type = self.Column.string_type(col_string_size)
logger.debug(
"Changing col type from {} to {} in table {}",
target_column.data_type,
new_type,
current,
)
logger.debug("Changing col type from {} to {} in table {}",
target_column.data_type, new_type, current)
self.alter_column_type(current, column_name, new_type)
def alter_column_type(self, relation, column_name, new_column_type) -> None:
def alter_column_type(
self, relation, column_name, new_column_type
) -> None:
"""
1. Create a new column (w/ temp name and correct type)
2. Copy data over to it
@@ -121,40 +131,53 @@ class SQLAdapter(BaseAdapter):
4. Rename the new column to existing column
"""
kwargs = {
"relation": relation,
"column_name": column_name,
"new_column_type": new_column_type,
'relation': relation,
'column_name': column_name,
'new_column_type': new_column_type,
}
self.execute_macro(ALTER_COLUMN_TYPE_MACRO_NAME, kwargs=kwargs)
self.execute_macro(
ALTER_COLUMN_TYPE_MACRO_NAME,
kwargs=kwargs
)
def drop_relation(self, relation):
if relation.type is None:
dbt.exceptions.raise_compiler_error(
"Tried to drop relation {}, but its type is null.".format(relation)
)
'Tried to drop relation {}, but its type is null.'
.format(relation))
self.cache_dropped(relation)
self.execute_macro(DROP_RELATION_MACRO_NAME, kwargs={"relation": relation})
self.execute_macro(
DROP_RELATION_MACRO_NAME,
kwargs={'relation': relation}
)
def truncate_relation(self, relation):
self.execute_macro(TRUNCATE_RELATION_MACRO_NAME, kwargs={"relation": relation})
self.execute_macro(
TRUNCATE_RELATION_MACRO_NAME,
kwargs={'relation': relation}
)
def rename_relation(self, from_relation, to_relation):
self.cache_renamed(from_relation, to_relation)
kwargs = {"from_relation": from_relation, "to_relation": to_relation}
self.execute_macro(RENAME_RELATION_MACRO_NAME, kwargs=kwargs)
kwargs = {'from_relation': from_relation, 'to_relation': to_relation}
self.execute_macro(
RENAME_RELATION_MACRO_NAME,
kwargs=kwargs
)
def get_columns_in_relation(self, relation):
return self.execute_macro(
GET_COLUMNS_IN_RELATION_MACRO_NAME, kwargs={"relation": relation}
GET_COLUMNS_IN_RELATION_MACRO_NAME,
kwargs={'relation': relation}
)
def create_schema(self, relation: BaseRelation) -> None:
relation = relation.without_identifier()
logger.debug('Creating schema "{}"', relation)
kwargs = {
"relation": relation,
'relation': relation,
}
self.execute_macro(CREATE_SCHEMA_MACRO_NAME, kwargs=kwargs)
self.commit_if_has_connection()
@@ -165,35 +188,39 @@ class SQLAdapter(BaseAdapter):
relation = relation.without_identifier()
logger.debug('Dropping schema "{}".', relation)
kwargs = {
"relation": relation,
'relation': relation,
}
self.execute_macro(DROP_SCHEMA_MACRO_NAME, kwargs=kwargs)
# we can update the cache here
self.cache.drop_schema(relation.database, relation.schema)
def list_relations_without_caching(
self,
schema_relation: BaseRelation,
self, schema_relation: BaseRelation,
) -> List[BaseRelation]:
kwargs = {"schema_relation": schema_relation}
results = self.execute_macro(LIST_RELATIONS_MACRO_NAME, kwargs=kwargs)
kwargs = {'schema_relation': schema_relation}
results = self.execute_macro(
LIST_RELATIONS_MACRO_NAME,
kwargs=kwargs
)
relations = []
quote_policy = {"database": True, "schema": True, "identifier": True}
quote_policy = {
'database': True,
'schema': True,
'identifier': True
}
for _database, name, _schema, _type in results:
try:
_type = self.Relation.get_relation_type(_type)
except ValueError:
_type = self.Relation.External
relations.append(
self.Relation.create(
database=_database,
schema=_schema,
identifier=name,
quote_policy=quote_policy,
type=_type,
)
)
relations.append(self.Relation.create(
database=_database,
schema=_schema,
identifier=name,
quote_policy=quote_policy,
type=_type
))
return relations
def quote(self, identifier):
@@ -201,7 +228,8 @@ class SQLAdapter(BaseAdapter):
def list_schemas(self, database: str) -> List[str]:
results = self.execute_macro(
LIST_SCHEMAS_MACRO_NAME, kwargs={"database": database}
LIST_SCHEMAS_MACRO_NAME,
kwargs={'database': database}
)
return [row[0] for row in results]
@@ -210,10 +238,13 @@ class SQLAdapter(BaseAdapter):
information_schema = self.Relation.create(
database=database,
schema=schema,
identifier="INFORMATION_SCHEMA",
quote_policy=self.config.quoting,
identifier='INFORMATION_SCHEMA',
quote_policy=self.config.quoting
).information_schema()
kwargs = {"information_schema": information_schema, "schema": schema}
results = self.execute_macro(CHECK_SCHEMA_EXISTS_MACRO_NAME, kwargs=kwargs)
kwargs = {'information_schema': information_schema, 'schema': schema}
results = self.execute_macro(
CHECK_SCHEMA_EXISTS_MACRO_NAME,
kwargs=kwargs
)
return results[0][0] > 0


@@ -10,89 +10,79 @@ def regex(pat):
class BlockData:
"""raw plaintext data from the top level of the file."""
def __init__(self, contents):
self.block_type_name = "__dbt__data"
self.block_type_name = '__dbt__data'
self.contents = contents
self.full_block = contents
class BlockTag:
def __init__(
self, block_type_name, block_name, contents=None, full_block=None, **kw
):
def __init__(self, block_type_name, block_name, contents=None,
full_block=None, **kw):
self.block_type_name = block_type_name
self.block_name = block_name
self.contents = contents
self.full_block = full_block
def __str__(self):
return "BlockTag({!r}, {!r})".format(self.block_type_name, self.block_name)
return 'BlockTag({!r}, {!r})'.format(self.block_type_name,
self.block_name)
def __repr__(self):
return str(self)
@property
def end_block_type_name(self):
return "end{}".format(self.block_type_name)
return 'end{}'.format(self.block_type_name)
def end_pat(self):
# we don't want to use string formatting here because jinja uses most
# of the string formatting operators in its syntax...
pattern = "".join(
(
r"(?P<endblock>((?:\s*\{\%\-|\{\%)\s*",
self.end_block_type_name,
r"\s*(?:\-\%\}\s*|\%\})))",
)
)
pattern = ''.join((
r'(?P<endblock>((?:\s*\{\%\-|\{\%)\s*',
self.end_block_type_name,
r'\s*(?:\-\%\}\s*|\%\})))',
))
return regex(pattern)
Tag = namedtuple("Tag", "block_type_name block_name start end")
Tag = namedtuple('Tag', 'block_type_name block_name start end')
_NAME_PATTERN = r"[A-Za-z_][A-Za-z_0-9]*"
_NAME_PATTERN = r'[A-Za-z_][A-Za-z_0-9]*'
COMMENT_START_PATTERN = regex(r"(?:(?P<comment_start>(\s*\{\#)))")
COMMENT_END_PATTERN = regex(r"(.*?)(\s*\#\})")
COMMENT_START_PATTERN = regex(r'(?:(?P<comment_start>(\s*\{\#)))')
COMMENT_END_PATTERN = regex(r'(.*?)(\s*\#\})')
RAW_START_PATTERN = regex(
r"(?:\s*\{\%\-|\{\%)\s*(?P<raw_start>(raw))\s*(?:\-\%\}\s*|\%\})"
r'(?:\s*\{\%\-|\{\%)\s*(?P<raw_start>(raw))\s*(?:\-\%\}\s*|\%\})'
)
EXPR_START_PATTERN = regex(r"(?P<expr_start>(\{\{\s*))")
EXPR_END_PATTERN = regex(r"(?P<expr_end>(\s*\}\}))")
EXPR_START_PATTERN = regex(r'(?P<expr_start>(\{\{\s*))')
EXPR_END_PATTERN = regex(r'(?P<expr_end>(\s*\}\}))')
BLOCK_START_PATTERN = regex(
"".join(
(
r"(?:\s*\{\%\-|\{\%)\s*",
r"(?P<block_type_name>({}))".format(_NAME_PATTERN),
# some blocks have a 'block name'.
r"(?:\s+(?P<block_name>({})))?".format(_NAME_PATTERN),
)
)
)
BLOCK_START_PATTERN = regex(''.join((
r'(?:\s*\{\%\-|\{\%)\s*',
r'(?P<block_type_name>({}))'.format(_NAME_PATTERN),
# some blocks have a 'block name'.
r'(?:\s+(?P<block_name>({})))?'.format(_NAME_PATTERN),
)))
RAW_BLOCK_PATTERN = regex(
"".join(
(
r"(?:\s*\{\%\-|\{\%)\s*raw\s*(?:\-\%\}\s*|\%\})",
r"(?:.*?)",
r"(?:\s*\{\%\-|\{\%)\s*endraw\s*(?:\-\%\}\s*|\%\})",
)
)
)
RAW_BLOCK_PATTERN = regex(''.join((
r'(?:\s*\{\%\-|\{\%)\s*raw\s*(?:\-\%\}\s*|\%\})',
r'(?:.*?)',
r'(?:\s*\{\%\-|\{\%)\s*endraw\s*(?:\-\%\}\s*|\%\})',
)))
TAG_CLOSE_PATTERN = regex(r"(?:(?P<tag_close>(\-\%\}\s*|\%\})))")
TAG_CLOSE_PATTERN = regex(r'(?:(?P<tag_close>(\-\%\}\s*|\%\})))')
# stolen from jinja's lexer. Note that we've consumed all prefix whitespace by
# the time we want to use this.
STRING_PATTERN = regex(
r"(?P<string>('([^'\\]*(?:\\.[^'\\]*)*)'|" r'"([^"\\]*(?:\\.[^"\\]*)*)"))'
r"(?P<string>('([^'\\]*(?:\\.[^'\\]*)*)'|"
r'"([^"\\]*(?:\\.[^"\\]*)*)"))'
)
QUOTE_START_PATTERN = regex(r"""(?P<quote>(['"]))""")
QUOTE_START_PATTERN = regex(r'''(?P<quote>(['"]))''')
class TagIterator:
@@ -109,10 +99,10 @@ class TagIterator:
end_val: int = self.pos if end is None else end
data = self.data[:end_val]
# if not found, rfind returns -1, and -1+1=0, which is perfect!
last_line_start = data.rfind("\n") + 1
last_line_start = data.rfind('\n') + 1
# it's easy to forget this, but line numbers are 1-indexed
line_number = data.count("\n") + 1
return f"{line_number}:{end_val - last_line_start}"
line_number = data.count('\n') + 1
return f'{line_number}:{end_val - last_line_start}'
def advance(self, new_position):
self.pos = new_position
@@ -130,7 +120,7 @@ class TagIterator:
matches = []
for pattern in patterns:
# default to 'search', but sometimes we want to 'match'.
if kwargs.get("method", "search") == "search":
if kwargs.get('method', 'search') == 'search':
match = self._search(pattern)
else:
match = self._match(pattern)
@@ -146,7 +136,7 @@ class TagIterator:
match = self._first_match(*patterns, **kwargs)
if match is None:
msg = 'unexpected EOF, expected {}, got "{}"'.format(
expected_name, self.data[self.pos :]
expected_name, self.data[self.pos:]
)
dbt.exceptions.raise_compiler_error(msg)
return match
@@ -166,20 +156,22 @@ class TagIterator:
"""
self.advance(match.end())
while True:
match = self._expect_match("}}", EXPR_END_PATTERN, QUOTE_START_PATTERN)
if match.groupdict().get("expr_end") is not None:
match = self._expect_match('}}',
EXPR_END_PATTERN,
QUOTE_START_PATTERN)
if match.groupdict().get('expr_end') is not None:
break
else:
# it's a quote. we haven't advanced for this match yet, so
# just slurp up the whole string, no need to rewind.
match = self._expect_match("string", STRING_PATTERN)
match = self._expect_match('string', STRING_PATTERN)
self.advance(match.end())
self.advance(match.end())
def handle_comment(self, match):
self.advance(match.end())
match = self._expect_match("#}", COMMENT_END_PATTERN)
match = self._expect_match('#}', COMMENT_END_PATTERN)
self.advance(match.end())
def _expect_block_close(self):
@@ -196,19 +188,22 @@ class TagIterator:
"""
while True:
end_match = self._expect_match(
'tag close ("%}")', QUOTE_START_PATTERN, TAG_CLOSE_PATTERN
'tag close ("%}")',
QUOTE_START_PATTERN,
TAG_CLOSE_PATTERN
)
self.advance(end_match.end())
if end_match.groupdict().get("tag_close") is not None:
if end_match.groupdict().get('tag_close') is not None:
return
# must be a string. Rewind to its start and advance past it.
self.rewind()
string_match = self._expect_match("string", STRING_PATTERN)
string_match = self._expect_match('string', STRING_PATTERN)
self.advance(string_match.end())
def handle_raw(self):
# raw blocks are super special, they are a single complete regex
match = self._expect_match("{% raw %}...{% endraw %}", RAW_BLOCK_PATTERN)
match = self._expect_match('{% raw %}...{% endraw %}',
RAW_BLOCK_PATTERN)
self.advance(match.end())
return match.end()
@@ -225,12 +220,13 @@ class TagIterator:
"""
groups = match.groupdict()
# always a value
block_type_name = groups["block_type_name"]
block_type_name = groups['block_type_name']
# might be None
block_name = groups.get("block_name")
block_name = groups.get('block_name')
start_pos = self.pos
if block_type_name == "raw":
match = self._expect_match("{% raw %}...{% endraw %}", RAW_BLOCK_PATTERN)
if block_type_name == 'raw':
match = self._expect_match('{% raw %}...{% endraw %}',
RAW_BLOCK_PATTERN)
self.advance(match.end())
else:
self.advance(match.end())
@@ -239,13 +235,15 @@ class TagIterator:
block_type_name=block_type_name,
block_name=block_name,
start=start_pos,
end=self.pos,
end=self.pos
)
def find_tags(self):
while True:
match = self._first_match(
BLOCK_START_PATTERN, COMMENT_START_PATTERN, EXPR_START_PATTERN
BLOCK_START_PATTERN,
COMMENT_START_PATTERN,
EXPR_START_PATTERN
)
if match is None:
break
@@ -254,9 +252,9 @@ class TagIterator:
# start = self.pos
groups = match.groupdict()
comment_start = groups.get("comment_start")
expr_start = groups.get("expr_start")
block_type_name = groups.get("block_type_name")
comment_start = groups.get('comment_start')
expr_start = groups.get('expr_start')
block_type_name = groups.get('block_type_name')
if comment_start is not None:
self.handle_comment(match)
@@ -266,8 +264,8 @@ class TagIterator:
yield self.handle_tag(match)
else:
raise dbt.exceptions.InternalException(
"Invalid regex match in next_block, expected block start, "
"expr start, or comment start"
'Invalid regex match in next_block, expected block start, '
'expr start, or comment start'
)
def __iter__(self):
@@ -275,18 +273,21 @@ class TagIterator:
duplicate_tags = (
"Got nested tags: {outer.block_type_name} (started at {outer.start}) did "
"not have a matching {{% end{outer.block_type_name} %}} before a "
"subsequent {inner.block_type_name} was found (started at {inner.start})"
'Got nested tags: {outer.block_type_name} (started at {outer.start}) did '
'not have a matching {{% end{outer.block_type_name} %}} before a '
'subsequent {inner.block_type_name} was found (started at {inner.start})'
)
_CONTROL_FLOW_TAGS = {
"if": "endif",
"for": "endfor",
'if': 'endif',
'for': 'endfor',
}
_CONTROL_FLOW_END_TAGS = {v: k for k, v in _CONTROL_FLOW_TAGS.items()}
_CONTROL_FLOW_END_TAGS = {
v: k
for k, v in _CONTROL_FLOW_TAGS.items()
}
class BlockIterator:
@@ -309,15 +310,15 @@ class BlockIterator:
def is_current_end(self, tag):
return (
tag.block_type_name.startswith("end")
and self.current is not None
and tag.block_type_name[3:] == self.current.block_type_name
tag.block_type_name.startswith('end') and
self.current is not None and
tag.block_type_name[3:] == self.current.block_type_name
)
def find_blocks(self, allowed_blocks=None, collect_raw_data=True):
"""Find all top-level blocks in the data."""
if allowed_blocks is None:
allowed_blocks = {"snapshot", "macro", "materialization", "docs"}
allowed_blocks = {'snapshot', 'macro', 'materialization', 'docs'}
for tag in self.tag_parser.find_tags():
if tag.block_type_name in _CONTROL_FLOW_TAGS:
@@ -328,43 +329,37 @@ class BlockIterator:
found = self.stack.pop()
else:
expected = _CONTROL_FLOW_END_TAGS[tag.block_type_name]
dbt.exceptions.raise_compiler_error(
(
"Got an unexpected control flow end tag, got {} but "
"never saw a preceeding {} (@ {})"
).format(
tag.block_type_name,
expected,
self.tag_parser.linepos(tag.start),
)
)
dbt.exceptions.raise_compiler_error((
'Got an unexpected control flow end tag, got {} but '
'never saw a preceding {} (@ {})'
).format(
tag.block_type_name,
expected,
self.tag_parser.linepos(tag.start)
))
expected = _CONTROL_FLOW_TAGS[found]
if expected != tag.block_type_name:
dbt.exceptions.raise_compiler_error(
(
"Got an unexpected control flow end tag, got {} but "
"expected {} next (@ {})"
).format(
tag.block_type_name,
expected,
self.tag_parser.linepos(tag.start),
)
)
dbt.exceptions.raise_compiler_error((
'Got an unexpected control flow end tag, got {} but '
'expected {} next (@ {})'
).format(
tag.block_type_name,
expected,
self.tag_parser.linepos(tag.start)
))
if tag.block_type_name in allowed_blocks:
if self.stack:
dbt.exceptions.raise_compiler_error(
(
"Got a block definition inside control flow at {}. "
"All dbt block definitions must be at the top level"
).format(self.tag_parser.linepos(tag.start))
)
dbt.exceptions.raise_compiler_error((
'Got a block definition inside control flow at {}. '
'All dbt block definitions must be at the top level'
).format(self.tag_parser.linepos(tag.start)))
if self.current is not None:
dbt.exceptions.raise_compiler_error(
duplicate_tags.format(outer=self.current, inner=tag)
)
if collect_raw_data:
raw_data = self.data[self.last_position : tag.start]
raw_data = self.data[self.last_position:tag.start]
self.last_position = tag.start
if raw_data:
yield BlockData(raw_data)
@@ -376,28 +371,23 @@ class BlockIterator:
yield BlockTag(
block_type_name=self.current.block_type_name,
block_name=self.current.block_name,
contents=self.data[self.current.end : tag.start],
full_block=self.data[self.current.start : tag.end],
contents=self.data[self.current.end:tag.start],
full_block=self.data[self.current.start:tag.end]
)
self.current = None
if self.current:
linecount = self.data[: self.current.end].count("\n") + 1
dbt.exceptions.raise_compiler_error(
(
"Reached EOF without finding a close tag for "
"{} (searched from line {})"
).format(self.current.block_type_name, linecount)
)
linecount = self.data[:self.current.end].count('\n') + 1
dbt.exceptions.raise_compiler_error((
'Reached EOF without finding a close tag for '
'{} (searched from line {})'
).format(self.current.block_type_name, linecount))
if collect_raw_data:
raw_data = self.data[self.last_position :]
raw_data = self.data[self.last_position:]
if raw_data:
yield BlockData(raw_data)
def lex_for_blocks(self, allowed_blocks=None, collect_raw_data=True):
return list(
self.find_blocks(
allowed_blocks=allowed_blocks, collect_raw_data=collect_raw_data
)
)
return list(self.find_blocks(allowed_blocks=allowed_blocks,
collect_raw_data=collect_raw_data))
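
For orientation, a hedged usage sketch of the block lexer refactored above. The macro snippet is invented, and the expected output assumes the BlockTag fields shown in this file:

# Illustrative only: lex a small template into top-level dbt blocks.
from dbt.clients._jinja_blocks import BlockIterator, BlockTag

source = "{% macro greet(name) %}select '{{ name }}'{% endmacro %}"
for block in BlockIterator(source).lex_for_blocks(allowed_blocks={'macro'}):
    if isinstance(block, BlockTag):
        print(block.block_type_name, block.block_name)  # expected: macro greet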


@@ -10,7 +10,7 @@ from typing import Iterable, List, Dict, Union, Optional, Any
from dbt.exceptions import RuntimeException
BOM = BOM_UTF8.decode("utf-8") # '\ufeff'
BOM = BOM_UTF8.decode('utf-8') # '\ufeff'
class ISODateTime(agate.data_types.DateTime):
@@ -30,23 +30,32 @@ class ISODateTime(agate.data_types.DateTime):
except: # noqa
pass
raise agate.exceptions.CastError('Can not parse value "%s" as datetime.' % d)
raise agate.exceptions.CastError(
'Can not parse value "%s" as datetime.' % d
)
def build_type_tester(text_columns: Iterable[str]) -> agate.TypeTester:
def build_type_tester(
text_columns: Iterable[str],
string_null_values: Optional[Iterable[str]] = ('null', '')
) -> agate.TypeTester:
types = [
agate.data_types.Number(null_values=("null", "")),
agate.data_types.Date(null_values=("null", ""), date_format="%Y-%m-%d"),
agate.data_types.DateTime(
null_values=("null", ""), datetime_format="%Y-%m-%d %H:%M:%S"
),
ISODateTime(null_values=("null", "")),
agate.data_types.Boolean(
true_values=("true",), false_values=("false",), null_values=("null", "")
),
agate.data_types.Text(null_values=("null", "")),
agate.data_types.Number(null_values=('null', '')),
agate.data_types.Date(null_values=('null', ''),
date_format='%Y-%m-%d'),
agate.data_types.DateTime(null_values=('null', ''),
datetime_format='%Y-%m-%d %H:%M:%S'),
ISODateTime(null_values=('null', '')),
agate.data_types.Boolean(true_values=('true',),
false_values=('false',),
null_values=('null', '')),
agate.data_types.Text(null_values=string_null_values)
]
force = {k: agate.data_types.Text(null_values=("null", "")) for k in text_columns}
force = {
k: agate.data_types.Text(null_values=string_null_values)
for k in text_columns
}
return agate.TypeTester(force=force, types=types)
@@ -61,7 +70,13 @@ def table_from_rows(
if text_only_columns is None:
column_types = DEFAULT_TYPE_TESTER
else:
column_types = build_type_tester(text_only_columns)
# If text_only_columns are present, prevent coercing empty string or
# literal 'null' strings to a None representation.
column_types = build_type_tester(
text_only_columns,
string_null_values=()
)
return agate.Table(rows, column_names, column_types=column_types)
@@ -81,19 +96,34 @@ def table_from_data(data, column_names: Iterable[str]) -> agate.Table:
def table_from_data_flat(data, column_names: Iterable[str]) -> agate.Table:
"Convert list of dictionaries into an Agate table"
"""
Convert a list of dictionaries into an Agate table. This method does not
coerce string values into more specific types (eg. '005' will not be
coerced to '5'). Additionally, this method does not coerce values to
None (eg. '' or 'null' will retain their string literal representations).
"""
rows = []
text_only_columns = set()
for _row in data:
row = []
for value in list(_row.values()):
for col_name in column_names:
value = _row[col_name]
if isinstance(value, (dict, list, tuple)):
row.append(json.dumps(value, cls=dbt.utils.JSONEncoder))
else:
row.append(value)
# Represent container types as json strings
value = json.dumps(value, cls=dbt.utils.JSONEncoder)
text_only_columns.add(col_name)
elif isinstance(value, str):
text_only_columns.add(col_name)
row.append(value)
rows.append(row)
return table_from_rows(rows=rows, column_names=column_names)
return table_from_rows(
rows=rows,
column_names=column_names,
text_only_columns=text_only_columns
)
def empty_table():
@@ -110,7 +140,7 @@ def as_matrix(table):
def from_csv(abspath, text_columns):
type_tester = build_type_tester(text_columns=text_columns)
with open(abspath, encoding="utf-8") as fp:
with open(abspath, encoding='utf-8') as fp:
if fp.read(1) != BOM:
fp.seek(0)
return agate.Table.from_csv(fp, column_types=type_tester)
@@ -142,8 +172,8 @@ class ColumnTypeBuilder(Dict[str, NullableAgateType]):
elif not isinstance(value, type(existing_type)):
# actual type mismatch!
raise RuntimeException(
f"Tables contain columns with the same names ({key}), "
f"but different types ({value} vs {existing_type})"
f'Tables contain columns with the same names ({key}), '
f'but different types ({value} vs {existing_type})'
)
def finalize(self) -> Dict[str, agate.data_types.DataType]:
@@ -158,7 +188,7 @@ class ColumnTypeBuilder(Dict[str, NullableAgateType]):
def _merged_column_types(
tables: List[agate.Table],
tables: List[agate.Table]
) -> Dict[str, agate.data_types.DataType]:
# this is a lot like agate.Table.merge, but with handling for all-null
# rows being "any type".
@@ -185,7 +215,10 @@ def merge_tables(tables: List[agate.Table]) -> agate.Table:
rows: List[agate.Row] = []
for table in tables:
if table.column_names == column_names and table.column_types == column_types:
if (
table.column_names == column_names and
table.column_types == column_types
):
rows.extend(table.rows)
else:
for row in table.rows:
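
The functional change in this file is that table_from_data_flat now records which columns held string values and routes them through build_type_tester as text-only with no string null values, so '005' and 'null' keep their literal form. A rough standalone sketch of the column-tracking idea, with made-up sample rows:

# Illustrative only: collect the columns that contain raw strings so they
# can be forced to text and exempted from 'null'/'' coercion.
data = [{"id": 1, "code": "005"}, {"id": 2, "code": "null"}]

text_only_columns = set()
for row in data:
    for col_name, value in row.items():
        if isinstance(value, str):
            text_only_columns.add(col_name)

print(sorted(text_only_columns))  # ['code']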


@@ -12,7 +12,7 @@ https://cloud.google.com/sdk/
def gcloud_installed():
try:
run_cmd(".", ["gcloud", "--version"])
run_cmd('.', ['gcloud', '--version'])
return True
except OSError as e:
logger.debug(e)
@@ -21,6 +21,6 @@ def gcloud_installed():
def setup_default_credentials():
if gcloud_installed():
run_cmd(".", ["gcloud", "auth", "application-default", "login"])
run_cmd('.', ["gcloud", "auth", "application-default", "login"])
else:
raise dbt.exceptions.RuntimeException(NOT_INSTALLED_MSG)


@@ -4,77 +4,112 @@ import os.path
from dbt.clients.system import run_cmd, rmdir
from dbt.logger import GLOBAL_LOGGER as logger
import dbt.exceptions
from packaging import version
def clone(repo, cwd, dirname=None, remove_git_dir=False, branch=None):
clone_cmd = ["git", "clone", "--depth", "1"]
def _is_commit(revision: str) -> bool:
# match SHA-1 git commit
return bool(re.match(r"\b[0-9a-f]{40}\b", revision))
if branch is not None:
clone_cmd.extend(["--branch", branch])
def clone(repo, cwd, dirname=None, remove_git_dir=False, revision=None, subdirectory=None):
has_revision = revision is not None
is_commit = _is_commit(revision or "")
clone_cmd = ['git', 'clone', '--depth', '1']
if subdirectory:
logger.debug(' Subdirectory specified: {}, using sparse checkout.'.format(subdirectory))
out, _ = run_cmd(cwd, ['git', '--version'], env={'LC_ALL': 'C'})
git_version = version.parse(re.search(r"\d+\.\d+\.\d+", out.decode("utf-8")).group(0))
if not git_version >= version.parse("2.25.0"):
# 2.25.0 introduces --sparse
raise RuntimeError(
"Please update your git version to pull a dbt package "
"from a subdirectory: your version is {}, >= 2.25.0 needed".format(git_version)
)
clone_cmd.extend(['--filter=blob:none', '--sparse'])
if has_revision and not is_commit:
clone_cmd.extend(['--branch', revision])
clone_cmd.append(repo)
if dirname is not None:
clone_cmd.append(dirname)
result = run_cmd(cwd, clone_cmd, env={'LC_ALL': 'C'})
result = run_cmd(cwd, clone_cmd, env={"LC_ALL": "C"})
if subdirectory:
run_cmd(os.path.join(cwd, dirname or ''), ['git', 'sparse-checkout', 'set', subdirectory])
if remove_git_dir:
rmdir(os.path.join(dirname, ".git"))
rmdir(os.path.join(dirname, '.git'))
return result
def list_tags(cwd):
out, err = run_cmd(cwd, ["git", "tag", "--list"], env={"LC_ALL": "C"})
tags = out.decode("utf-8").strip().split("\n")
out, err = run_cmd(cwd, ['git', 'tag', '--list'], env={'LC_ALL': 'C'})
tags = out.decode('utf-8').strip().split("\n")
return tags
def _checkout(cwd, repo, branch):
logger.debug(" Checking out branch {}.".format(branch))
def _checkout(cwd, repo, revision):
logger.debug(' Checking out revision {}.'.format(revision))
run_cmd(cwd, ["git", "remote", "set-branches", "origin", branch])
run_cmd(cwd, ["git", "fetch", "--tags", "--depth", "1", "origin", branch])
fetch_cmd = ["git", "fetch", "origin", "--depth", "1"]
tags = list_tags(cwd)
# Prefer tags to branches if one exists
if branch in tags:
spec = "tags/{}".format(branch)
if _is_commit(revision):
run_cmd(cwd, fetch_cmd + [revision])
else:
spec = "origin/{}".format(branch)
run_cmd(cwd, ['git', 'remote', 'set-branches', 'origin', revision])
run_cmd(cwd, fetch_cmd + ["--tags", revision])
out, err = run_cmd(cwd, ["git", "reset", "--hard", spec], env={"LC_ALL": "C"})
if _is_commit(revision):
spec = revision
# Prefer tags to branches if one exists
elif revision in list_tags(cwd):
spec = 'tags/{}'.format(revision)
else:
spec = 'origin/{}'.format(revision)
out, err = run_cmd(cwd, ['git', 'reset', '--hard', spec],
env={'LC_ALL': 'C'})
return out, err
def checkout(cwd, repo, branch=None):
if branch is None:
branch = "HEAD"
def checkout(cwd, repo, revision=None):
if revision is None:
revision = 'HEAD'
try:
return _checkout(cwd, repo, branch)
return _checkout(cwd, repo, revision)
except dbt.exceptions.CommandResultError as exc:
stderr = exc.stderr.decode("utf-8").strip()
dbt.exceptions.bad_package_spec(repo, branch, stderr)
stderr = exc.stderr.decode('utf-8').strip()
dbt.exceptions.bad_package_spec(repo, revision, stderr)
def get_current_sha(cwd):
out, err = run_cmd(cwd, ["git", "rev-parse", "HEAD"], env={"LC_ALL": "C"})
out, err = run_cmd(cwd, ['git', 'rev-parse', 'HEAD'], env={'LC_ALL': 'C'})
return out.decode("utf-8")
return out.decode('utf-8')
def remove_remote(cwd):
return run_cmd(cwd, ["git", "remote", "rm", "origin"], env={"LC_ALL": "C"})
return run_cmd(cwd, ['git', 'remote', 'rm', 'origin'], env={'LC_ALL': 'C'})
def clone_and_checkout(repo, cwd, dirname=None, remove_git_dir=False, branch=None):
def clone_and_checkout(repo, cwd, dirname=None, remove_git_dir=False,
revision=None, subdirectory=None):
exists = None
try:
_, err = clone(repo, cwd, dirname=dirname, remove_git_dir=remove_git_dir)
_, err = clone(
repo,
cwd,
dirname=dirname,
remove_git_dir=remove_git_dir,
subdirectory=subdirectory,
)
except dbt.exceptions.CommandResultError as exc:
err = exc.stderr.decode("utf-8")
err = exc.stderr.decode('utf-8')
exists = re.match("fatal: destination path '(.+)' already exists", err)
if not exists: # something else is wrong, raise it
raise
@@ -83,26 +118,25 @@ def clone_and_checkout(repo, cwd, dirname=None, remove_git_dir=False, branch=Non
start_sha = None
if exists:
directory = exists.group(1)
logger.debug("Updating existing dependency {}.", directory)
logger.debug('Updating existing dependency {}.', directory)
else:
matches = re.match("Cloning into '(.+)'", err.decode("utf-8"))
matches = re.match("Cloning into '(.+)'", err.decode('utf-8'))
if matches is None:
raise dbt.exceptions.RuntimeException(
f'Error cloning {repo} - never saw "Cloning into ..." from git'
)
directory = matches.group(1)
logger.debug("Pulling new dependency {}.", directory)
logger.debug('Pulling new dependency {}.', directory)
full_path = os.path.join(cwd, directory)
start_sha = get_current_sha(full_path)
checkout(full_path, repo, branch)
checkout(full_path, repo, revision)
end_sha = get_current_sha(full_path)
if exists:
if start_sha == end_sha:
logger.debug(" Already at {}, nothing to do.", start_sha[:7])
logger.debug(' Already at {}, nothing to do.', start_sha[:7])
else:
logger.debug(
" Updated checkout from {} to {}.", start_sha[:7], end_sha[:7]
)
logger.debug(' Updated checkout from {} to {}.',
start_sha[:7], end_sha[:7])
else:
logger.debug(" Checked out at {}.", end_sha[:7])
return directory
logger.debug(' Checked out at {}.', end_sha[:7])
return os.path.join(directory, subdirectory or '')
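
The thread running through this file is that git operations now take a generic revision (branch, tag, or full commit SHA) plus an optional subdirectory for sparse checkout. The helper below is copied from the diff to show how a full 40-character SHA-1 is told apart from a branch or tag name; the sample revisions are made up:

import re

def _is_commit(revision: str) -> bool:
    # match a full SHA-1 commit hash
    return bool(re.match(r"\b[0-9a-f]{40}\b", revision))

print(_is_commit("main"))                                      # False
print(_is_commit("0b4e9d9997e9c99e6f9d9d1f1a6c2b3c4d5e6f70"))  # True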


@@ -8,17 +8,8 @@ from ast import literal_eval
from contextlib import contextmanager
from itertools import chain, islice
from typing import (
List,
Union,
Set,
Optional,
Dict,
Any,
Iterator,
Type,
NoReturn,
Tuple,
Callable,
List, Union, Set, Optional, Dict, Any, Iterator, Type, NoReturn, Tuple,
Callable
)
import jinja2
@@ -29,22 +20,17 @@ import jinja2.parser
import jinja2.sandbox
from dbt.utils import (
get_dbt_macro_name,
get_docs_macro_name,
get_materialization_macro_name,
deep_map,
get_dbt_macro_name, get_docs_macro_name, get_materialization_macro_name,
get_test_macro_name, deep_map
)
from dbt.clients._jinja_blocks import BlockIterator, BlockData, BlockTag
from dbt.contracts.graph.compiled import CompiledSchemaTestNode
from dbt.contracts.graph.parsed import ParsedSchemaTestNode
from dbt.exceptions import (
InternalException,
raise_compiler_error,
CompilationException,
invalid_materialization_argument,
MacroReturn,
JinjaRenderingException,
InternalException, raise_compiler_error, CompilationException,
invalid_materialization_argument, MacroReturn, JinjaRenderingException,
UndefinedMacroException
)
from dbt import flags
from dbt.logger import GLOBAL_LOGGER as logger # noqa
@@ -55,26 +41,26 @@ def _linecache_inject(source, write):
# this is the only reliable way to accomplish this. Obviously, it's
# really darn noisy and will fill your temporary directory
tmp_file = tempfile.NamedTemporaryFile(
prefix="dbt-macro-compiled-",
suffix=".py",
prefix='dbt-macro-compiled-',
suffix='.py',
delete=False,
mode="w+",
encoding="utf-8",
mode='w+',
encoding='utf-8',
)
tmp_file.write(source)
filename = tmp_file.name
else:
# `codecs.encode` actually takes a `bytes` as the first argument if
# the second argument is 'hex' - mypy does not know this.
rnd = codecs.encode(os.urandom(12), "hex") # type: ignore
filename = rnd.decode("ascii")
rnd = codecs.encode(os.urandom(12), 'hex') # type: ignore
filename = rnd.decode('ascii')
# put ourselves in the cache
cache_entry = (
len(source),
None,
[line + "\n" for line in source.splitlines()],
filename,
[line + '\n' for line in source.splitlines()],
filename
)
# linecache does in fact have an attribute `cache`, thanks
linecache.cache[filename] = cache_entry # type: ignore
@@ -88,10 +74,12 @@ class MacroFuzzParser(jinja2.parser.Parser):
# modified to fuzz macros defined in the same file. this way
# dbt can understand the stack of macros being called.
# - @cmcarthur
node.name = get_dbt_macro_name(self.parse_assign_target(name_only=True).name)
node.name = get_dbt_macro_name(
self.parse_assign_target(name_only=True).name)
self.parse_signature(node)
node.body = self.parse_statements(("name:endmacro",), drop_needle=True)
node.body = self.parse_statements(('name:endmacro',),
drop_needle=True)
return node
@@ -107,8 +95,8 @@ class MacroFuzzEnvironment(jinja2.sandbox.SandboxedEnvironment):
If the value is 'write', also write the files to disk.
WARNING: This can write a ton of data if you aren't careful.
"""
if filename == "<template>" and flags.MACRO_DEBUGGING:
write = flags.MACRO_DEBUGGING == "write"
if filename == '<template>' and flags.MACRO_DEBUGGING:
write = flags.MACRO_DEBUGGING == 'write'
filename = _linecache_inject(source, write)
return super()._compile(source, filename) # type: ignore
@@ -151,7 +139,7 @@ def quoted_native_concat(nodes):
head = list(islice(nodes, 2))
if not head:
return ""
return ''
if len(head) == 1:
raw = head[0]
@@ -193,7 +181,9 @@ class NativeSandboxTemplate(jinja2.nativetypes.NativeTemplate): # mypy: ignore
vars = dict(*args, **kwargs)
try:
return quoted_native_concat(self.root_render_func(self.new_context(vars)))
return quoted_native_concat(
self.root_render_func(self.new_context(vars))
)
except Exception:
return self.environment.handle_exception()
@@ -232,10 +222,10 @@ class BaseMacroGenerator:
self.context: Optional[Dict[str, Any]] = context
def get_template(self):
raise NotImplementedError("get_template not implemented!")
raise NotImplementedError('get_template not implemented!')
def get_name(self) -> str:
raise NotImplementedError("get_name not implemented!")
raise NotImplementedError('get_name not implemented!')
def get_macro(self):
name = self.get_name()
@@ -258,7 +248,9 @@ class BaseMacroGenerator:
def call_macro(self, *args, **kwargs):
# called from __call__ methods
if self.context is None:
raise InternalException("Context is still None in call_macro!")
raise InternalException(
'Context is still None in call_macro!'
)
assert self.context is not None
macro = self.get_macro()
@@ -285,7 +277,7 @@ class MacroStack(threading.local):
def pop(self, name):
got = self.call_stack.pop()
if got != name:
raise InternalException(f"popped {got}, expected {name}")
raise InternalException(f'popped {got}, expected {name}')
class MacroGenerator(BaseMacroGenerator):
@@ -294,7 +286,7 @@ class MacroGenerator(BaseMacroGenerator):
macro,
context: Optional[Dict[str, Any]] = None,
node: Optional[Any] = None,
stack: Optional[MacroStack] = None,
stack: Optional[MacroStack] = None
) -> None:
super().__init__(context)
self.macro = macro
@@ -342,7 +334,9 @@ class MacroGenerator(BaseMacroGenerator):
class QueryStringGenerator(BaseMacroGenerator):
def __init__(self, template_str: str, context: Dict[str, Any]) -> None:
def __init__(
self, template_str: str, context: Dict[str, Any]
) -> None:
super().__init__(context)
self.template_str: str = template_str
env = get_environment()
@@ -352,7 +346,7 @@ class QueryStringGenerator(BaseMacroGenerator):
)
def get_name(self) -> str:
return "query_comment_macro"
return 'query_comment_macro'
def get_template(self):
"""Don't use the template cache, we don't have a node"""
@@ -363,41 +357,45 @@ class QueryStringGenerator(BaseMacroGenerator):
class MaterializationExtension(jinja2.ext.Extension):
tags = ["materialization"]
tags = ['materialization']
def parse(self, parser):
node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)
materialization_name = parser.parse_assign_target(name_only=True).name
materialization_name = \
parser.parse_assign_target(name_only=True).name
adapter_name = "default"
adapter_name = 'default'
node.args = []
node.defaults = []
while parser.stream.skip_if("comma"):
while parser.stream.skip_if('comma'):
target = parser.parse_assign_target(name_only=True)
if target.name == "default":
if target.name == 'default':
pass
elif target.name == "adapter":
parser.stream.expect("assign")
elif target.name == 'adapter':
parser.stream.expect('assign')
value = parser.parse_expression()
adapter_name = value.value
else:
invalid_materialization_argument(materialization_name, target.name)
invalid_materialization_argument(
materialization_name, target.name
)
node.name = get_materialization_macro_name(materialization_name, adapter_name)
node.body = parser.parse_statements(
("name:endmaterialization",), drop_needle=True
node.name = get_materialization_macro_name(
materialization_name, adapter_name
)
node.body = parser.parse_statements(('name:endmaterialization',),
drop_needle=True)
return node
class DocumentationExtension(jinja2.ext.Extension):
tags = ["docs"]
tags = ['docs']
def parse(self, parser):
node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)
@@ -406,12 +404,27 @@ class DocumentationExtension(jinja2.ext.Extension):
node.args = []
node.defaults = []
node.name = get_docs_macro_name(docs_name)
node.body = parser.parse_statements(("name:enddocs",), drop_needle=True)
node.body = parser.parse_statements(('name:enddocs',),
drop_needle=True)
return node
class TestExtension(jinja2.ext.Extension):
tags = ['test']
def parse(self, parser):
node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)
test_name = parser.parse_assign_target(name_only=True).name
parser.parse_signature(node)
node.name = get_test_macro_name(test_name)
node.body = parser.parse_statements(('name:endtest',),
drop_needle=True)
return node
def _is_dunder_name(name):
return name.startswith("__") and name.endswith("__")
return name.startswith('__') and name.endswith('__')
def create_undefined(node=None):
@@ -432,11 +445,10 @@ def create_undefined(node=None):
return self
def __getattr__(self, name):
if name == "name" or _is_dunder_name(name):
if name == 'name' or _is_dunder_name(name):
raise AttributeError(
"'{}' object has no attribute '{}'".format(
type(self).__name__, name
)
"'{}' object has no attribute '{}'"
.format(type(self).__name__, name)
)
self.name = name
@@ -447,24 +459,24 @@ def create_undefined(node=None):
return self
def __reduce__(self):
raise_compiler_error(f"{self.name} is undefined", node=node)
raise_compiler_error(f'{self.name} is undefined', node=node)
return Undefined
NATIVE_FILTERS: Dict[str, Callable[[Any], Any]] = {
"as_text": TextMarker,
"as_bool": BoolMarker,
"as_native": NativeMarker,
"as_number": NumberMarker,
'as_text': TextMarker,
'as_bool': BoolMarker,
'as_native': NativeMarker,
'as_number': NumberMarker,
}
TEXT_FILTERS: Dict[str, Callable[[Any], Any]] = {
"as_text": lambda x: x,
"as_bool": lambda x: x,
"as_native": lambda x: x,
"as_number": lambda x: x,
'as_text': lambda x: x,
'as_bool': lambda x: x,
'as_native': lambda x: x,
'as_number': lambda x: x,
}
@@ -474,14 +486,15 @@ def get_environment(
native: bool = False,
) -> jinja2.Environment:
args: Dict[str, List[Union[str, Type[jinja2.ext.Extension]]]] = {
"extensions": ["jinja2.ext.do"]
'extensions': ['jinja2.ext.do']
}
if capture_macros:
args["undefined"] = create_undefined(node)
args['undefined'] = create_undefined(node)
args["extensions"].append(MaterializationExtension)
args["extensions"].append(DocumentationExtension)
args['extensions'].append(MaterializationExtension)
args['extensions'].append(DocumentationExtension)
args['extensions'].append(TestExtension)
env_cls: Type[jinja2.Environment]
text_filter: Type
@@ -506,7 +519,7 @@ def catch_jinja(node=None) -> Iterator[None]:
e.translated = False
raise CompilationException(str(e), node) from e
except jinja2.exceptions.UndefinedError as e:
raise CompilationException(str(e), node) from e
raise UndefinedMacroException(str(e), node) from e
except CompilationException as exc:
exc.add_node(node)
raise
@@ -544,8 +557,8 @@ def _requote_result(raw_value: str, rendered: str) -> str:
elif single_quoted:
quote_char = "'"
else:
quote_char = ""
return f"{quote_char}{rendered}{quote_char}"
quote_char = ''
return f'{quote_char}{rendered}{quote_char}'
# performance note: Local benchmarking (so take it with a big grain of salt!)
@@ -553,7 +566,7 @@ def _requote_result(raw_value: str, rendered: str) -> str:
# checking two separate patterns, but the standard deviation is smaller with
# one pattern. The time difference between the two was ~2 std deviations, which
# is small enough that I've just chosen the more readable option.
_HAS_RENDER_CHARS_PAT = re.compile(r"({[{%#]|[#}%]})")
_HAS_RENDER_CHARS_PAT = re.compile(r'({[{%#]|[#}%]})')
def get_rendered(
@@ -570,9 +583,9 @@ def get_rendered(
# native=True case by passing the input string to ast.literal_eval, like
# the native renderer does.
if (
not native
and isinstance(string, str)
and _HAS_RENDER_CHARS_PAT.search(string) is None
not native and
isinstance(string, str) and
_HAS_RENDER_CHARS_PAT.search(string) is None
):
return string
template = get_template(
@@ -609,11 +622,12 @@ def extract_toplevel_blocks(
`collect_raw_data` is `True`) `BlockData` objects.
"""
return BlockIterator(data).lex_for_blocks(
allowed_blocks=allowed_blocks, collect_raw_data=collect_raw_data
allowed_blocks=allowed_blocks,
collect_raw_data=collect_raw_data
)
SCHEMA_TEST_KWARGS_NAME = "_dbt_schema_test_kwargs"
SCHEMA_TEST_KWARGS_NAME = '_dbt_schema_test_kwargs'
def add_rendered_test_kwargs(
@@ -625,21 +639,24 @@ def add_rendered_test_kwargs(
renderer, then insert that value into the given context as the special test
keyword arguments member.
"""
looks_like_func = r"^\s*(env_var|ref|var|source|doc)\s*\(.+\)\s*$"
looks_like_func = r'^\s*(env_var|ref|var|source|doc)\s*\(.+\)\s*$'
def _convert_function(value: Any, keypath: Tuple[Union[str, int], ...]) -> Any:
def _convert_function(
value: Any, keypath: Tuple[Union[str, int], ...]
) -> Any:
if isinstance(value, str):
if keypath == ("column_name",):
if keypath == ('column_name',):
# special case: Don't render column names as native, make them
# be strings
return value
if re.match(looks_like_func, value) is not None:
# curly braces to make rendering happy
value = f"{{{{ {value} }}}}"
value = f'{{{{ {value} }}}}'
value = get_rendered(
value, context, node, capture_macros=capture_macros, native=True
value, context, node, capture_macros=capture_macros,
native=True
)
return value
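
One functional addition in this file is the new TestExtension, registered alongside the materialization and docs extensions whenever capture_macros is set, so that {% test %} blocks parse into macros named via get_test_macro_name. A hedged sketch of exercising it; the test body below is illustrative only:

# Illustrative only: with capture_macros=True the environment includes
# TestExtension, so a {% test %} block parses without a template error.
from dbt.clients.jinja import get_environment

template = "{% test not_empty(model) %}select 1{% endtest %}"
env = get_environment(None, capture_macros=True)
parsed = env.parse(template)
print(type(parsed).__name__)  # expected: Template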


@@ -0,0 +1,225 @@
import jinja2
from dbt.clients.jinja import get_environment
from dbt.exceptions import raise_compiler_error
def statically_extract_macro_calls(string, ctx, db_wrapper=None):
# set 'capture_macros' to capture undefined
env = get_environment(None, capture_macros=True)
parsed = env.parse(string)
standard_calls = ['source', 'ref', 'config']
possible_macro_calls = []
for func_call in parsed.find_all(jinja2.nodes.Call):
func_name = None
if hasattr(func_call, 'node') and hasattr(func_call.node, 'name'):
func_name = func_call.node.name
else:
# func_call for dbt_utils.current_timestamp macro
# Call(
# node=Getattr(
# node=Name(
# name='dbt_utils',
# ctx='load'
# ),
# attr='current_timestamp',
# ctx='load
# ),
# args=[],
# kwargs=[],
# dyn_args=None,
# dyn_kwargs=None
# )
if (hasattr(func_call, 'node') and
hasattr(func_call.node, 'node') and
type(func_call.node.node).__name__ == 'Name' and
hasattr(func_call.node, 'attr')):
package_name = func_call.node.node.name
macro_name = func_call.node.attr
if package_name == 'adapter':
if macro_name == 'dispatch':
ad_macro_calls = statically_parse_adapter_dispatch(
func_call, ctx, db_wrapper)
possible_macro_calls.extend(ad_macro_calls)
else:
# This skips calls such as adapter.parse_index
continue
else:
func_name = f'{package_name}.{macro_name}'
else:
continue
if not func_name:
continue
if func_name in standard_calls:
continue
elif ctx.get(func_name):
continue
else:
if func_name not in possible_macro_calls:
possible_macro_calls.append(func_name)
return possible_macro_calls
# Call(
# node=Getattr(
# node=Name(
# name='adapter',
# ctx='load'
# ),
# attr='dispatch',
# ctx='load'
# ),
# args=[
# Const(value='test_pkg_and_dispatch')
# ],
# kwargs=[
# Keyword(
# key='packages',
# value=Call(node=Getattr(node=Name(name='local_utils', ctx='load'),
# attr='_get_utils_namespaces', ctx='load'), args=[], kwargs=[],
# dyn_args=None, dyn_kwargs=None)
# )
# ],
# dyn_args=None,
# dyn_kwargs=None
# )
def statically_parse_adapter_dispatch(func_call, ctx, db_wrapper):
possible_macro_calls = []
# This captures an adapter.dispatch('<macro_name>') call.
func_name = None
# macro_name positional argument
if len(func_call.args) > 0:
func_name = func_call.args[0].value
if func_name:
possible_macro_calls.append(func_name)
# packages positional argument
packages = None
macro_namespace = None
packages_arg = None
packages_arg_type = None
if len(func_call.args) > 1:
packages_arg = func_call.args[1]
# This can be a List or a Call
packages_arg_type = type(func_call.args[1]).__name__
# keyword arguments
if func_call.kwargs:
for kwarg in func_call.kwargs:
if kwarg.key == 'packages':
# The packages keyword will be deprecated and
# eventually removed
packages_arg = kwarg.value
# This can be a List or a Call
packages_arg_type = type(kwarg.value).__name__
elif kwarg.key == 'macro_name':
# This will remain to enable static resolution
if type(kwarg.value).__name__ == 'Const':
func_name = kwarg.value.value
possible_macro_calls.append(func_name)
else:
raise_compiler_error(f"The macro_name parameter ({kwarg.value.value}) "
"to adapter.dispatch was not a string")
elif kwarg.key == 'macro_namespace':
# This will remain to enable static resolution
kwarg_type = type(kwarg.value).__name__
if kwarg_type == 'Const':
macro_namespace = kwarg.value.value
else:
raise_compiler_error("The macro_namespace parameter to adapter.dispatch "
f"is a {kwarg_type}, not a string")
# positional arguments
if packages_arg:
if packages_arg_type == 'List':
# This will remain to enable static resolution
packages = []
for item in packages_arg.items:
packages.append(item.value)
elif packages_arg_type == 'Const':
# This will remain to enable static resolution
macro_namespace = packages_arg.value
elif packages_arg_type == 'Call':
# This is deprecated and should be removed eventually.
# It is here to support (hackily) common ways of providing
# a packages list to adapter.dispatch
if (hasattr(packages_arg, 'node') and
hasattr(packages_arg.node, 'node') and
hasattr(packages_arg.node.node, 'name') and
hasattr(packages_arg.node, 'attr')):
package_name = packages_arg.node.node.name
macro_name = packages_arg.node.attr
if (macro_name.startswith('_get') and 'namespaces' in macro_name):
# noqa: https://github.com/dbt-labs/dbt-utils/blob/9e9407b/macros/cross_db_utils/_get_utils_namespaces.sql
var_name = f'{package_name}_dispatch_list'
# hard code compatibility for fivetran_utils, just a teensy bit different
# noqa: https://github.com/fivetran/dbt_fivetran_utils/blob/0978ba2/macros/_get_utils_namespaces.sql
if package_name == 'fivetran_utils':
default_packages = ['dbt_utils', 'fivetran_utils']
else:
default_packages = [package_name]
namespace_names = get_dispatch_list(ctx, var_name, default_packages)
packages = []
if namespace_names:
packages.extend(namespace_names)
else:
msg = (
f"As of v0.19.2, custom macros, such as '{macro_name}', are no longer "
"supported in the 'packages' argument of 'adapter.dispatch()'.\n"
f"See https://docs.getdbt.com/reference/dbt-jinja-functions/dispatch "
"for details."
).strip()
raise_compiler_error(msg)
elif packages_arg_type == 'Add':
# This logic is for when there is a variable and an addition of a list,
# like: packages = (var('local_utils_dispatch_list', []) + ['local_utils2'])
# This is deprecated and should be removed eventually.
namespace_var = None
default_namespaces = []
# This might be a single call or it might be the 'left' piece in an addition
for var_call in packages_arg.find_all(jinja2.nodes.Call):
if (hasattr(var_call, 'node') and
var_call.node.name == 'var' and
hasattr(var_call, 'args')):
namespace_var = var_call.args[0].value
if hasattr(packages_arg, 'right'): # we have a default list of namespaces
for item in packages_arg.right.items:
default_namespaces.append(item.value)
if namespace_var:
namespace_names = get_dispatch_list(ctx, namespace_var, default_namespaces)
packages = []
if namespace_names:
packages.extend(namespace_names)
if db_wrapper:
macro = db_wrapper.dispatch(
func_name,
packages=packages,
macro_namespace=macro_namespace
).macro
func_name = f'{macro.package_name}.{macro.name}'
possible_macro_calls.append(func_name)
else: # this is only for test/unit/test_macro_calls.py
if macro_namespace:
packages = [macro_namespace]
if packages is None:
packages = []
for package_name in packages:
possible_macro_calls.append(f'{package_name}.{func_name}')
return possible_macro_calls
def get_dispatch_list(ctx, var_name, default_packages):
namespace_list = None
try:
# match the logic currently used in package _get_namespaces() macro
namespace_list = ctx['var'](var_name) + default_packages
except Exception:
pass
namespace_list = namespace_list if namespace_list else default_packages
return namespace_list
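
A hedged usage sketch of statically_extract_macro_calls defined above. The import path and the template string are assumptions for illustration; standard calls such as ref() are skipped, while package-qualified calls are reported:

# Illustrative only; assumes the new module is importable as
# dbt.clients.jinja_static and that an empty dict is an acceptable ctx.
from dbt.clients.jinja_static import statically_extract_macro_calls

sql = "select * from {{ ref('orders') }} where x = {{ my_pkg.my_macro() }}"
print(statically_extract_macro_calls(sql, ctx={}))  # expected: ['my_pkg.my_macro']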


@@ -1,72 +1,79 @@
from functools import wraps
import functools
import requests
from dbt.exceptions import RegistryException
from dbt.utils import memoized
from dbt.utils import memoized, _connection_exception_retry as connection_exception_retry
from dbt.logger import GLOBAL_LOGGER as logger
from dbt import deprecations
import os
import time
if os.getenv("DBT_PACKAGE_HUB_URL"):
DEFAULT_REGISTRY_BASE_URL = os.getenv("DBT_PACKAGE_HUB_URL")
if os.getenv('DBT_PACKAGE_HUB_URL'):
DEFAULT_REGISTRY_BASE_URL = os.getenv('DBT_PACKAGE_HUB_URL')
else:
DEFAULT_REGISTRY_BASE_URL = "https://hub.getdbt.com/"
DEFAULT_REGISTRY_BASE_URL = 'https://hub.getdbt.com/'
def _get_url(url, registry_base_url=None):
if registry_base_url is None:
registry_base_url = DEFAULT_REGISTRY_BASE_URL
return "{}{}".format(registry_base_url, url)
return '{}{}'.format(registry_base_url, url)
def _wrap_exceptions(fn):
@wraps(fn)
def wrapper(*args, **kwargs):
max_attempts = 5
attempt = 0
while True:
attempt += 1
try:
return fn(*args, **kwargs)
except requests.exceptions.ConnectionError as exc:
if attempt < max_attempts:
time.sleep(1)
continue
raise RegistryException("Unable to connect to registry hub") from exc
return wrapper
def _get_with_retries(path, registry_base_url=None):
get_fn = functools.partial(_get, path, registry_base_url)
return connection_exception_retry(get_fn, 5)
@_wrap_exceptions
def _get(path, registry_base_url=None):
url = _get_url(path, registry_base_url)
logger.debug("Making package registry request: GET {}".format(url))
resp = requests.get(url)
logger.debug("Response from registry: GET {} {}".format(url, resp.status_code))
logger.debug('Making package registry request: GET {}'.format(url))
resp = requests.get(url, timeout=30)
logger.debug('Response from registry: GET {} {}'.format(url,
resp.status_code))
resp.raise_for_status()
return resp.json()
def index(registry_base_url=None):
return _get("api/v1/index.json", registry_base_url)
return _get_with_retries('api/v1/index.json', registry_base_url)
index_cached = memoized(index)
def packages(registry_base_url=None):
return _get("api/v1/packages.json", registry_base_url)
return _get_with_retries('api/v1/packages.json', registry_base_url)
def package(name, registry_base_url=None):
return _get("api/v1/{}.json".format(name), registry_base_url)
response = _get_with_retries('api/v1/{}.json'.format(name), registry_base_url)
# Either redirectnamespace or redirectname in the JSON response indicates a redirect
# redirectnamespace redirects based on package ownership
# redirectname redirects based on package name
# Both can be present at the same time, or neither. Fails gracefully to old name
if ('redirectnamespace' in response) or ('redirectname' in response):
if ('redirectnamespace' in response) and response['redirectnamespace'] is not None:
use_namespace = response['redirectnamespace']
else:
use_namespace = response['namespace']
if ('redirectname' in response) and response['redirectname'] is not None:
use_name = response['redirectname']
else:
use_name = response['name']
new_nwo = use_namespace + "/" + use_name
deprecations.warn('package-redirect', old_name=name, new_name=new_nwo)
return response
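To make the redirect handling concrete, here is a hypothetical registry response and the name it resolves to. The package names and field values are invented; only the redirectnamespace/redirectname keys and the namespace/name fallback come from the code above (the explicit None checks are collapsed into 'or' here for brevity).

# Illustrative sketch (not part of the diff): hypothetical redirect resolution.
response = {
    'namespace': 'fishtown-analytics',
    'name': 'dbt_utils',
    'redirectnamespace': 'dbt-labs',  # redirect based on package ownership
    'redirectname': None,             # no rename in this invented example
}
use_namespace = response['redirectnamespace'] or response['namespace']
use_name = response['redirectname'] or response['name']
new_nwo = use_namespace + '/' + use_name
print(new_nwo)  # dbt-labs/dbt_utils
# dbt then emits deprecations.warn('package-redirect', old_name=name, new_name=new_nwo)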
def package_version(name, version, registry_base_url=None):
return _get("api/v1/{}/{}.json".format(name, version), registry_base_url)
return _get_with_retries('api/v1/{}/{}.json'.format(name, version), registry_base_url)
def get_available_versions(name):
response = package(name)
return list(response["versions"])
return list(response['versions'])

View File

@@ -1,4 +1,5 @@
import errno
import functools
import fnmatch
import json
import os
@@ -10,14 +11,15 @@ import sys
import tarfile
import requests
import stat
from typing import Type, NoReturn, List, Optional, Dict, Any, Tuple, Callable, Union
from typing import (
Type, NoReturn, List, Optional, Dict, Any, Tuple, Callable, Union
)
import dbt.exceptions
import dbt.utils
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.utils import _connection_exception_retry as connection_exception_retry
if sys.platform == "win32":
if sys.platform == 'win32':
from ctypes import WinDLL, c_bool
else:
WinDLL = None
@@ -49,29 +51,30 @@ def find_matching(
reobj = re.compile(regex, re.IGNORECASE)
for relative_path_to_search in relative_paths_to_search:
absolute_path_to_search = os.path.join(root_path, relative_path_to_search)
absolute_path_to_search = os.path.join(
root_path, relative_path_to_search)
walk_results = os.walk(absolute_path_to_search)
for current_path, subdirectories, local_files in walk_results:
for local_file in local_files:
absolute_path = os.path.join(current_path, local_file)
relative_path = os.path.relpath(absolute_path, absolute_path_to_search)
relative_path = os.path.relpath(
absolute_path, absolute_path_to_search
)
if reobj.match(local_file):
matching.append(
{
"searched_path": relative_path_to_search,
"absolute_path": absolute_path,
"relative_path": relative_path,
}
)
matching.append({
'searched_path': relative_path_to_search,
'absolute_path': absolute_path,
'relative_path': relative_path,
})
return matching
def load_file_contents(path: str, strip: bool = True) -> str:
path = convert_path(path)
with open(path, "rb") as handle:
to_return = handle.read().decode("utf-8")
with open(path, 'rb') as handle:
to_return = handle.read().decode('utf-8')
if strip:
to_return = to_return.strip()
@@ -98,14 +101,14 @@ def make_directory(path: str) -> None:
raise e
def make_file(path: str, contents: str = "", overwrite: bool = False) -> bool:
def make_file(path: str, contents: str = '', overwrite: bool = False) -> bool:
"""
Make a file at `path` assuming that the directory it resides in already
exists. The file is saved with contents `contents`
"""
if overwrite or not os.path.exists(path):
path = convert_path(path)
with open(path, "w") as fh:
with open(path, 'w') as fh:
fh.write(contents)
return True
@@ -117,7 +120,7 @@ def make_symlink(source: str, link_path: str) -> None:
Create a symlink at `link_path` referring to `source`.
"""
if not supports_symlinks():
dbt.exceptions.system_error("create a symbolic link")
dbt.exceptions.system_error('create a symbolic link')
os.symlink(source, link_path)
@@ -126,11 +129,11 @@ def supports_symlinks() -> bool:
return getattr(os, "symlink", None) is not None
def write_file(path: str, contents: str = "") -> bool:
def write_file(path: str, contents: str = '') -> bool:
path = convert_path(path)
try:
make_directory(os.path.dirname(path))
with open(path, "w", encoding="utf-8") as f:
with open(path, 'w', encoding='utf-8') as f:
f.write(str(contents))
except Exception as exc:
# note that you can't just catch FileNotFound, because sometimes
@@ -139,20 +142,20 @@ def write_file(path: str, contents: str = "") -> bool:
# sometimes windows fails to write paths that are less than the length
# limit. So on windows, suppress all errors that happen from writing
# to disk.
if os.name == "nt":
if os.name == 'nt':
# sometimes we get a winerror of 3 which means the path was
# definitely too long, but other times we don't and it means the
# path was just probably too long. This is probably based on the
# windows/python version.
if getattr(exc, "winerror", 0) == 3:
reason = "Path was too long"
if getattr(exc, 'winerror', 0) == 3:
reason = 'Path was too long'
else:
reason = "Path was possibly too long"
reason = 'Path was possibly too long'
# all our hard work and the path was still too long. Log and
# continue.
logger.debug(
f"Could not write to path {path}({len(path)} characters): "
f"{reason}\nexception: {exc}"
f'Could not write to path {path}({len(path)} characters): '
f'{reason}\nexception: {exc}'
)
else:
raise
@@ -186,7 +189,10 @@ def resolve_path_from_base(path_to_resolve: str, base_path: str) -> str:
If path_to_resolve is an absolute path or a user path (~), just
resolve it to an absolute path and return.
"""
return os.path.abspath(os.path.join(base_path, os.path.expanduser(path_to_resolve)))
return os.path.abspath(
os.path.join(
base_path,
os.path.expanduser(path_to_resolve)))
def rmdir(path: str) -> None:
@@ -196,7 +202,7 @@ def rmdir(path: str) -> None:
cloned via git) can cause rmtree to throw a PermissionError exception
"""
path = convert_path(path)
if sys.platform == "win32":
if sys.platform == 'win32':
onerror = _windows_rmdir_readonly
else:
onerror = None
@@ -215,7 +221,7 @@ def _win_prepare_path(path: str) -> str:
# letter back in.
# Unless it starts with '\\'. In that case, the path is a UNC mount point
# and splitdrive will be fine.
if not path.startswith("\\\\") and path.startswith("\\"):
if not path.startswith('\\\\') and path.startswith('\\'):
curdrive = os.path.splitdrive(os.getcwd())[0]
path = curdrive + path
@@ -230,7 +236,7 @@ def _win_prepare_path(path: str) -> str:
def _supports_long_paths() -> bool:
if sys.platform != "win32":
if sys.platform != 'win32':
return True
# Eryk Sun says to use `WinDLL('ntdll')` instead of `windll.ntdll` because
# of pointer caching in a comment here:
@@ -238,11 +244,11 @@ def _supports_long_paths() -> bool:
# I don't know exactly what he means, but I am inclined to believe him as
# he's pretty active on Python windows bugs!
try:
dll = WinDLL("ntdll")
dll = WinDLL('ntdll')
except OSError: # I don't think this happens? you need ntdll to run python
return False
# not all windows versions have it at all
if not hasattr(dll, "RtlAreLongPathsEnabled"):
if not hasattr(dll, 'RtlAreLongPathsEnabled'):
return False
# tell windows we want to get back a single unsigned byte (a bool).
dll.RtlAreLongPathsEnabled.restype = c_bool
@@ -262,7 +268,7 @@ def convert_path(path: str) -> str:
if _supports_long_paths():
return path
prefix = "\\\\?\\"
prefix = '\\\\?\\'
# Nothing to do
if path.startswith(prefix):
return path
@@ -293,35 +299,39 @@ def path_is_symlink(path: str) -> bool:
def open_dir_cmd() -> str:
# https://docs.python.org/2/library/sys.html#sys.platform
if sys.platform == "win32":
return "start"
if sys.platform == 'win32':
return 'start'
elif sys.platform == "darwin":
return "open"
elif sys.platform == 'darwin':
return 'open'
else:
return "xdg-open"
return 'xdg-open'
def _handle_posix_cwd_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
def _handle_posix_cwd_error(
exc: OSError, cwd: str, cmd: List[str]
) -> NoReturn:
if exc.errno == errno.ENOENT:
message = "Directory does not exist"
message = 'Directory does not exist'
elif exc.errno == errno.EACCES:
message = "Current user cannot access directory, check permissions"
message = 'Current user cannot access directory, check permissions'
elif exc.errno == errno.ENOTDIR:
message = "Not a directory"
message = 'Not a directory'
else:
message = "Unknown OSError: {} - cwd".format(str(exc))
message = 'Unknown OSError: {} - cwd'.format(str(exc))
raise dbt.exceptions.WorkingDirectoryError(cwd, cmd, message)
def _handle_posix_cmd_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
def _handle_posix_cmd_error(
exc: OSError, cwd: str, cmd: List[str]
) -> NoReturn:
if exc.errno == errno.ENOENT:
message = "Could not find command, ensure it is in the user's PATH"
elif exc.errno == errno.EACCES:
message = "User does not have permissions for this command"
message = 'User does not have permissions for this command'
else:
message = "Unknown OSError: {} - cmd".format(str(exc))
message = 'Unknown OSError: {} - cmd'.format(str(exc))
raise dbt.exceptions.ExecutableError(cwd, cmd, message)
@@ -346,7 +356,7 @@ def _handle_posix_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
- exc.errno == EACCES
- exc.filename == None(?)
"""
if getattr(exc, "filename", None) == cwd:
if getattr(exc, 'filename', None) == cwd:
_handle_posix_cwd_error(exc, cwd, cmd)
else:
_handle_posix_cmd_error(exc, cwd, cmd)
@@ -355,48 +365,46 @@ def _handle_posix_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
def _handle_windows_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
cls: Type[dbt.exceptions.Exception] = dbt.exceptions.CommandError
if exc.errno == errno.ENOENT:
message = (
"Could not find command, ensure it is in the user's PATH "
"and that the user has permissions to run it"
)
message = ("Could not find command, ensure it is in the user's PATH "
"and that the user has permissions to run it")
cls = dbt.exceptions.ExecutableError
elif exc.errno == errno.ENOEXEC:
message = "Command was not executable, ensure it is valid"
message = ('Command was not executable, ensure it is valid')
cls = dbt.exceptions.ExecutableError
elif exc.errno == errno.ENOTDIR:
message = (
"Unable to cd: path does not exist, user does not have"
" permissions, or not a directory"
)
message = ('Unable to cd: path does not exist, user does not have'
' permissions, or not a directory')
cls = dbt.exceptions.WorkingDirectoryError
else:
message = 'Unknown error: {} (errno={}: "{}")'.format(
str(exc), exc.errno, errno.errorcode.get(exc.errno, "<Unknown!>")
str(exc), exc.errno, errno.errorcode.get(exc.errno, '<Unknown!>')
)
raise cls(cwd, cmd, message)
def _interpret_oserror(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
"""Interpret an OSError exc and raise the appropriate dbt exception."""
"""Interpret an OSError exc and raise the appropriate dbt exception.
"""
if len(cmd) == 0:
raise dbt.exceptions.CommandError(cwd, cmd)
# all of these functions raise unconditionally
if os.name == "nt":
if os.name == 'nt':
_handle_windows_error(exc, cwd, cmd)
else:
_handle_posix_error(exc, cwd, cmd)
# this should not be reachable, raise _something_ at least!
raise dbt.exceptions.InternalException(
"Unhandled exception in _interpret_oserror: {}".format(exc)
'Unhandled exception in _interpret_oserror: {}'.format(exc)
)
def run_cmd(
cwd: str, cmd: List[str], env: Optional[Dict[str, Any]] = None
) -> Tuple[bytes, bytes]:
logger.debug('Executing "{}"'.format(" ".join(cmd)))
logger.debug('Executing "{}"'.format(' '.join(cmd)))
if len(cmd) == 0:
raise dbt.exceptions.CommandError(cwd, cmd)
@@ -408,9 +416,15 @@ def run_cmd(
full_env.update(env)
try:
exe_pth = shutil.which(cmd[0])
if exe_pth:
cmd = [os.path.abspath(exe_pth)] + list(cmd[1:])
proc = subprocess.Popen(
cmd, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=full_env
)
cmd,
cwd=cwd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=full_env)
out, err = proc.communicate()
except OSError as exc:
@@ -420,19 +434,27 @@ def run_cmd(
logger.debug('STDERR: "{!s}"'.format(err))
if proc.returncode != 0:
logger.debug("command return code={}".format(proc.returncode))
raise dbt.exceptions.CommandResultError(cwd, cmd, proc.returncode, out, err)
logger.debug('command return code={}'.format(proc.returncode))
raise dbt.exceptions.CommandResultError(cwd, cmd, proc.returncode,
out, err)
return out, err
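The lines added to run_cmd resolve the command's executable to an absolute path via shutil.which before handing it to Popen. In isolation, that resolution step looks like the following (the command is arbitrary and the printed path depends on the environment):

# Illustrative sketch (not part of the diff): resolving an executable as above.
import os
import shutil

cmd = ['git', 'status']
exe_pth = shutil.which(cmd[0])   # path found on PATH, or None if missing
if exe_pth:
    cmd = [os.path.abspath(exe_pth)] + list(cmd[1:])
print(cmd)  # e.g. ['/usr/bin/git', 'status'] on a typical Linux machine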
def download_with_retries(
url: str, path: str, timeout: Optional[Union[float, tuple]] = None
) -> None:
download_fn = functools.partial(download, url, path, timeout)
connection_exception_retry(download_fn, 5)
def download(
url: str, path: str, timeout: Optional[Union[float, tuple]] = None
) -> None:
path = convert_path(path)
connection_timeout = timeout or float(os.getenv("DBT_HTTP_TIMEOUT", 10))
connection_timeout = timeout or float(os.getenv('DBT_HTTP_TIMEOUT', 10))
response = requests.get(url, timeout=connection_timeout)
with open(path, "wb") as handle:
with open(path, 'wb') as handle:
for block in response.iter_content(1024 * 64):
handle.write(block)
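download() now takes an optional timeout that falls back to the DBT_HTTP_TIMEOUT environment variable and then to 10 seconds, and download_with_retries wraps it in the same connection_exception_retry helper with 5 attempts. The timeout precedence can be seen in isolation:

# Illustrative sketch (not part of the diff): the effective timeout used above.
import os

def effective_timeout(timeout=None):
    return timeout or float(os.getenv('DBT_HTTP_TIMEOUT', 10))

print(effective_timeout())      # 10.0 unless DBT_HTTP_TIMEOUT is set
print(effective_timeout(30))    # 30
os.environ['DBT_HTTP_TIMEOUT'] = '5'
print(effective_timeout())      # 5.0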
@@ -456,7 +478,7 @@ def untar_package(
) -> None:
tar_path = convert_path(tar_path)
tar_dir_name = None
with tarfile.open(tar_path, "r") as tarball:
with tarfile.open(tar_path, 'r') as tarball:
tarball.extractall(dest_dir)
tar_dir_name = os.path.commonprefix(tarball.getnames())
if rename_to:
@@ -472,7 +494,7 @@ def chmod_and_retry(func, path, exc_info):
We want to retry most operations here, but listdir is one that we know will
be useless.
"""
if func is os.listdir or os.name != "nt":
if func is os.listdir or os.name != 'nt':
raise
os.chmod(path, stat.S_IREAD | stat.S_IWRITE)
# on error, this will raise.
@@ -493,7 +515,7 @@ def move(src, dst):
"""
src = convert_path(src)
dst = convert_path(dst)
if os.name != "nt":
if os.name != 'nt':
return shutil.move(src, dst)
if os.path.isdir(dst):
@@ -501,7 +523,7 @@ def move(src, dst):
os.rename(src, dst)
return
dst = os.path.join(dst, os.path.basename(src.rstrip("/\\")))
dst = os.path.join(dst, os.path.basename(src.rstrip('/\\')))
if os.path.exists(dst):
raise EnvironmentError("Path '{}' already exists".format(dst))
@@ -510,10 +532,11 @@ def move(src, dst):
except OSError:
# probably different drives
if os.path.isdir(src):
if _absnorm(dst + "\\").startswith(_absnorm(src + "\\")):
if _absnorm(dst + '\\').startswith(_absnorm(src + '\\')):
# dst is inside src
raise EnvironmentError(
"Cannot move a directory '{}' into itself '{}'".format(src, dst)
"Cannot move a directory '{}' into itself '{}'"
.format(src, dst)
)
shutil.copytree(src, dst, symlinks=True)
rmtree(src)

View File

@@ -1,13 +1,19 @@
import dbt.exceptions
from typing import Any, Dict, Optional
import yaml
import yaml.scanner
# the C version is faster, but it doesn't always exist
try:
from yaml import CLoader as Loader, CSafeLoader as SafeLoader, CDumper as Dumper
from yaml import (
CLoader as Loader,
CSafeLoader as SafeLoader,
CDumper as Dumper
)
except ImportError:
from yaml import Loader, SafeLoader, Dumper # type: ignore # noqa: F401
from yaml import ( # type: ignore # noqa: F401
Loader, SafeLoader, Dumper
)
YAML_ERROR_MESSAGE = """
@@ -27,14 +33,14 @@ def line_no(i, line, width=3):
def prefix_with_line_numbers(string, no_start, no_end):
line_list = string.split("\n")
line_list = string.split('\n')
numbers = range(no_start, no_end)
relevant_lines = line_list[no_start:no_end]
return "\n".join(
[line_no(i + 1, line) for (i, line) in zip(numbers, relevant_lines)]
)
return "\n".join([
line_no(i + 1, line) for (i, line) in zip(numbers, relevant_lines)
])
def contextualized_yaml_error(raw_contents, error):
@@ -45,12 +51,12 @@ def contextualized_yaml_error(raw_contents, error):
nice_error = prefix_with_line_numbers(raw_contents, min_line, max_line)
return YAML_ERROR_MESSAGE.format(
line_number=mark.line + 1, nice_error=nice_error, raw_error=error
)
return YAML_ERROR_MESSAGE.format(line_number=mark.line + 1,
nice_error=nice_error,
raw_error=error)
def safe_load(contents):
def safe_load(contents) -> Optional[Dict[str, Any]]:
return yaml.load(contents, Loader=SafeLoader)
@@ -58,7 +64,7 @@ def load_yaml_text(contents):
try:
return safe_load(contents)
except (yaml.scanner.ScannerError, yaml.YAMLError) as e:
if hasattr(e, "problem_mark"):
if hasattr(e, 'problem_mark'):
error = contextualized_yaml_error(contents, e)
else:
error = str(e)

View File

@@ -12,9 +12,8 @@ from dbt.clients.system import make_directory
from dbt.context.providers import generate_runtime_model
from dbt.contracts.graph.manifest import Manifest
from dbt.contracts.graph.compiled import (
CompiledDataTestNode,
CompiledSchemaTestNode,
COMPILED_TYPES,
CompiledSchemaTestNode,
GraphMemberNode,
InjectedCTE,
ManifestNode,
@@ -32,28 +31,28 @@ from dbt.node_types import NodeType
from dbt.utils import pluralize
import dbt.tracking
graph_file_name = "graph.gpickle"
graph_file_name = 'graph.gpickle'
def _compiled_type_for(model: ParsedNode):
if type(model) not in COMPILED_TYPES:
raise InternalException(
f"Asked to compile {type(model)} node, but it has no compiled form"
f'Asked to compile {type(model)} node, but it has no compiled form'
)
return COMPILED_TYPES[type(model)]
def print_compile_stats(stats):
names = {
NodeType.Model: "model",
NodeType.Test: "test",
NodeType.Snapshot: "snapshot",
NodeType.Analysis: "analysis",
NodeType.Macro: "macro",
NodeType.Operation: "operation",
NodeType.Seed: "seed file",
NodeType.Source: "source",
NodeType.Exposure: "exposure",
NodeType.Model: 'model',
NodeType.Test: 'test',
NodeType.Snapshot: 'snapshot',
NodeType.Analysis: 'analysis',
NodeType.Macro: 'macro',
NodeType.Operation: 'operation',
NodeType.Seed: 'seed file',
NodeType.Source: 'source',
NodeType.Exposure: 'exposure',
}
results = {k: 0 for k in names.keys()}
@@ -64,9 +63,10 @@ def print_compile_stats(stats):
resource_counts = {k.pluralize(): v for k, v in results.items()}
dbt.tracking.track_resource_counts(resource_counts)
stat_line = ", ".join(
[pluralize(ct, names.get(t)) for t, ct in results.items() if t in names]
)
stat_line = ", ".join([
pluralize(ct, names.get(t)) for t, ct in results.items()
if t in names
])
logger.info("Found {}".format(stat_line))
@@ -165,7 +165,9 @@ class Compiler:
extra_context: Dict[str, Any],
) -> Dict[str, Any]:
context = generate_runtime_model(node, self.config, manifest)
context = generate_runtime_model(
node, self.config, manifest
)
context.update(extra_context)
if isinstance(node, CompiledSchemaTestNode):
# for test nodes, add a special keyword args value to the context
@@ -180,7 +182,7 @@ class Compiler:
def _get_relation_name(self, node: ParsedNode):
relation_name = None
if node.resource_type in NodeType.refable() and not node.is_ephemeral_model:
if node.is_relational and not node.is_ephemeral_model:
adapter = get_adapter(self.config)
relation_cls = adapter.Relation
relation_name = str(relation_cls.create_from(self.config, node))
@@ -223,45 +225,47 @@ class Compiler:
with_stmt = None
for token in parsed.tokens:
if token.is_keyword and token.normalized == "WITH":
if token.is_keyword and token.normalized == 'WITH':
with_stmt = token
break
if with_stmt is None:
# no with stmt, add one, and inject CTEs right at the beginning
first_token = parsed.token_first()
with_stmt = sqlparse.sql.Token(sqlparse.tokens.Keyword, "with")
with_stmt = sqlparse.sql.Token(sqlparse.tokens.Keyword, 'with')
parsed.insert_before(first_token, with_stmt)
else:
# stmt exists, add a comma (which will come after injected CTEs)
trailing_comma = sqlparse.sql.Token(sqlparse.tokens.Punctuation, ",")
trailing_comma = sqlparse.sql.Token(
sqlparse.tokens.Punctuation, ','
)
parsed.insert_after(with_stmt, trailing_comma)
token = sqlparse.sql.Token(
sqlparse.tokens.Keyword, ", ".join(c.sql for c in ctes)
sqlparse.tokens.Keyword,
", ".join(c.sql for c in ctes)
)
parsed.insert_after(with_stmt, token)
return str(parsed)
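The method above splices the accumulated CTEs into a model's compiled SQL with sqlparse: if there is no WITH keyword one is inserted before the first token, otherwise a comma is appended after the existing WITH, and the joined CTE text is then inserted right after it. A standalone illustration with an invented model and CTE follows (the real CTE name comes from add_ephemeral_prefix, not the literal used here):

# Illustrative sketch (not part of the diff): injecting a CTE with sqlparse.
import sqlparse
from collections import namedtuple

InjectedCTE = namedtuple('InjectedCTE', ['id', 'sql'])

model_sql = 'select * from __dbt__cte__my_ephemeral'
ctes = [InjectedCTE(
    id='model.my_project.my_ephemeral',
    sql=' __dbt__cte__my_ephemeral as (\n  select 1 as id\n)',
)]

parsed = sqlparse.parse(model_sql)[0]
with_stmt = None
for token in parsed.tokens:
    if token.is_keyword and token.normalized == 'WITH':
        with_stmt = token
        break
if with_stmt is None:
    # no WITH clause yet: create one before the first token
    with_stmt = sqlparse.sql.Token(sqlparse.tokens.Keyword, 'with')
    parsed.insert_before(parsed.token_first(), with_stmt)
else:
    # an existing WITH clause gets a comma after the injected CTEs
    parsed.insert_after(with_stmt, sqlparse.sql.Token(sqlparse.tokens.Punctuation, ','))
parsed.insert_after(
    with_stmt, sqlparse.sql.Token(sqlparse.tokens.Keyword, ', '.join(c.sql for c in ctes))
)
print(str(parsed))
# with __dbt__cte__my_ephemeral as (
#   select 1 as id
# )select * from __dbt__cte__my_ephemeral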
def _get_dbt_test_name(self) -> str:
return "dbt__cte__internal_test"
# This method is called by the 'compile_node' method. Starting
# from the node that it is passed in, it will recursively call
# itself using the 'extra_ctes'. The 'ephemeral' models do
# not produce SQL that is executed directly, instead they
# are rolled up into the models that refer to them by
# inserting CTEs into the SQL.
def _recursively_prepend_ctes(
self,
model: NonSourceCompiledNode,
manifest: Manifest,
extra_context: Optional[Dict[str, Any]],
) -> Tuple[NonSourceCompiledNode, List[InjectedCTE]]:
"""This method is called by the 'compile_node' method. Starting
from the node that it is passed in, it will recursively call
itself using the 'extra_ctes'. The 'ephemeral' models do
not produce SQL that is executed directly, instead they
are rolled up into the models that refer to them by
inserting CTEs into the SQL.
"""
if model.compiled_sql is None:
raise RuntimeException("Cannot inject ctes into an unparsed node", model)
raise RuntimeException(
'Cannot inject ctes into an unparsed node', model
)
if model.extra_ctes_injected:
return (model, model.extra_ctes)
@@ -275,62 +279,59 @@ class Compiler:
# gathered and then "injected" into the model.
prepended_ctes: List[InjectedCTE] = []
dbt_test_name = self._get_dbt_test_name()
# extra_ctes are added to the model by
# RuntimeRefResolver.create_relation, which adds an
# extra_cte for every model relation which is an
# ephemeral model.
for cte in model.extra_ctes:
if cte.id == dbt_test_name:
sql = cte.sql
if cte.id not in manifest.nodes:
raise InternalException(
f'During compilation, found a cte reference that '
f'could not be resolved: {cte.id}'
)
cte_model = manifest.nodes[cte.id]
if not cte_model.is_ephemeral_model:
raise InternalException(f'{cte.id} is not ephemeral')
# This model has already been compiled, so it's been
# through here before
if getattr(cte_model, 'compiled', False):
assert isinstance(cte_model, tuple(COMPILED_TYPES.values()))
cte_model = cast(NonSourceCompiledNode, cte_model)
new_prepended_ctes = cte_model.extra_ctes
# if the cte_model isn't compiled, i.e. first time here
else:
if cte.id not in manifest.nodes:
raise InternalException(
f"During compilation, found a cte reference that "
f"could not be resolved: {cte.id}"
)
cte_model = manifest.nodes[cte.id]
if not cte_model.is_ephemeral_model:
raise InternalException(f"{cte.id} is not ephemeral")
# This model has already been compiled, so it's been
# through here before
if getattr(cte_model, "compiled", False):
assert isinstance(cte_model, tuple(COMPILED_TYPES.values()))
cte_model = cast(NonSourceCompiledNode, cte_model)
new_prepended_ctes = cte_model.extra_ctes
# if the cte_model isn't compiled, i.e. first time here
else:
# This is an ephemeral parsed model that we can compile.
# Compile and update the node
cte_model = self._compile_node(cte_model, manifest, extra_context)
# recursively call this method
cte_model, new_prepended_ctes = self._recursively_prepend_ctes(
# This is an ephemeral parsed model that we can compile.
# Compile and update the node
cte_model = self._compile_node(
cte_model, manifest, extra_context)
# recursively call this method
cte_model, new_prepended_ctes = \
self._recursively_prepend_ctes(
cte_model, manifest, extra_context
)
# Save compiled SQL file and sync manifest
self._write_node(cte_model)
manifest.sync_update_node(cte_model)
# Save compiled SQL file and sync manifest
self._write_node(cte_model)
manifest.sync_update_node(cte_model)
_extend_prepended_ctes(prepended_ctes, new_prepended_ctes)
_extend_prepended_ctes(prepended_ctes, new_prepended_ctes)
new_cte_name = self.add_ephemeral_prefix(cte_model.name)
sql = f" {new_cte_name} as (\n{cte_model.compiled_sql}\n)"
new_cte_name = self.add_ephemeral_prefix(cte_model.name)
rendered_sql = (
cte_model._pre_injected_sql or cte_model.compiled_sql
)
sql = f' {new_cte_name} as (\n{rendered_sql}\n)'
_add_prepended_cte(prepended_ctes, InjectedCTE(id=cte.id, sql=sql))
# We don't save injected_sql into compiled sql for ephemeral models
# because it will cause problems with processing of subsequent models.
# Ephemeral models do not produce executable SQL of their own.
if not model.is_ephemeral_model:
injected_sql = self._inject_ctes_into_sql(
model.compiled_sql,
prepended_ctes,
)
model.compiled_sql = injected_sql
injected_sql = self._inject_ctes_into_sql(
model.compiled_sql,
prepended_ctes,
)
model._pre_injected_sql = model.compiled_sql
model.compiled_sql = injected_sql
model.extra_ctes_injected = True
model.extra_ctes = prepended_ctes
model.validate(model.to_dict(omit_none=True))
@@ -339,33 +340,6 @@ class Compiler:
return model, prepended_ctes
def _add_ctes(
self,
compiled_node: NonSourceCompiledNode,
manifest: Manifest,
extra_context: Dict[str, Any],
) -> NonSourceCompiledNode:
"""Wrap the data test SQL in a CTE."""
# for data tests, we need to insert a special CTE at the end of the
# list containing the test query, and then have the "real" query be a
# select count(*) from that model.
# the benefit of doing it this way is that _add_ctes() can be
# rewritten for different adapters to handle databases that don't
# support CTEs, or at least don't have full support.
if isinstance(compiled_node, CompiledDataTestNode):
# the last prepend (so last in order) should be the data test body.
# then we can add our select count(*) from _that_ cte as the "real"
# compiled_sql, and do the regular prepend logic from CTEs.
name = self._get_dbt_test_name()
cte = InjectedCTE(
id=name, sql=f" {name} as (\n{compiled_node.compiled_sql}\n)"
)
compiled_node.extra_ctes.append(cte)
compiled_node.compiled_sql = f"\nselect count(*) from {name}"
return compiled_node
# creates a compiled_node from the ManifestNode passed in,
# creates a "context" dictionary for jinja rendering,
# and then renders the "compiled_sql" using the node, the
@@ -382,17 +356,17 @@ class Compiler:
logger.debug("Compiling {}".format(node.unique_id))
data = node.to_dict(omit_none=True)
data.update(
{
"compiled": False,
"compiled_sql": None,
"extra_ctes_injected": False,
"extra_ctes": [],
}
)
data.update({
'compiled': False,
'compiled_sql': None,
'extra_ctes_injected': False,
'extra_ctes': [],
})
compiled_node = _compiled_type_for(node).from_dict(data)
context = self._create_node_context(compiled_node, manifest, extra_context)
context = self._create_node_context(
compiled_node, manifest, extra_context
)
compiled_node.compiled_sql = jinja.get_rendered(
node.raw_sql,
@@ -404,10 +378,6 @@ class Compiler:
compiled_node.compiled = True
# add ctes for specific test nodes, and also for
# possible future use in adapters
compiled_node = self._add_ctes(compiled_node, manifest, extra_context)
return compiled_node
def write_graph_file(self, linker: Linker, manifest: Manifest):
@@ -416,17 +386,21 @@ class Compiler:
if flags.WRITE_JSON:
linker.write_graph(graph_path, manifest)
def link_node(self, linker: Linker, node: GraphMemberNode, manifest: Manifest):
def link_node(
self, linker: Linker, node: GraphMemberNode, manifest: Manifest
):
linker.add_node(node.unique_id)
for dependency in node.depends_on_nodes:
if dependency in manifest.nodes:
linker.dependency(
node.unique_id, (manifest.nodes[dependency].unique_id)
node.unique_id,
(manifest.nodes[dependency].unique_id)
)
elif dependency in manifest.sources:
linker.dependency(
node.unique_id, (manifest.sources[dependency].unique_id)
node.unique_id,
(manifest.sources[dependency].unique_id)
)
else:
dependency_not_found(node, dependency)
@@ -461,21 +435,19 @@ class Compiler:
# writes the "compiled_sql" into the target/compiled directory
def _write_node(self, node: NonSourceCompiledNode) -> ManifestNode:
if not node.extra_ctes_injected or node.resource_type == NodeType.Snapshot:
if (not node.extra_ctes_injected or
node.resource_type == NodeType.Snapshot):
return node
logger.debug(f'Writing injected SQL for node "{node.unique_id}"')
if node.compiled_sql:
node.build_path = node.write_node(
self.config.target_path, "compiled", node.compiled_sql
node.compiled_path = node.write_node(
self.config.target_path,
'compiled',
node.compiled_sql
)
return node
# This is the main entry point into this code. It's called by
# CompileRunner.compile, GenericRPCRunner.compile, and
# RunTask.get_hook_sql. It calls '_compile_node' to convert
# the node into a compiled node, and then calls the
# recursive method to "prepend" the ctes.
def compile_node(
self,
node: ManifestNode,
@@ -483,9 +455,17 @@ class Compiler:
extra_context: Optional[Dict[str, Any]] = None,
write: bool = True,
) -> NonSourceCompiledNode:
"""This is the main entry point into this code. It's called by
CompileRunner.compile, GenericRPCRunner.compile, and
RunTask.get_hook_sql. It calls '_compile_node' to convert
the node into a compiled node, and then calls the
recursive method to "prepend" the ctes.
"""
node = self._compile_node(node, manifest, extra_context)
node, _ = self._recursively_prepend_ctes(node, manifest, extra_context)
node, _ = self._recursively_prepend_ctes(
node, manifest, extra_context
)
if write:
self._write_node(node)
return node

View File

@@ -20,8 +20,10 @@ from dbt.utils import coerce_dict_str
from .renderer import ProfileRenderer
DEFAULT_THREADS = 1
DEFAULT_PROFILES_DIR = os.path.join(os.path.expanduser("~"), ".dbt")
PROFILES_DIR = os.path.expanduser(os.getenv("DBT_PROFILES_DIR", DEFAULT_PROFILES_DIR))
DEFAULT_PROFILES_DIR = os.path.join(os.path.expanduser('~'), '.dbt')
PROFILES_DIR = os.path.expanduser(
os.getenv('DBT_PROFILES_DIR', DEFAULT_PROFILES_DIR)
)
INVALID_PROFILE_MESSAGE = """
dbt encountered an error while trying to read your profiles.yml file.
@@ -41,13 +43,11 @@ Here, [profile name] should be replaced with a profile name
defined in your profiles.yml file. You can find profiles.yml here:
{profiles_file}/profiles.yml
""".format(
profiles_file=PROFILES_DIR
)
""".format(profiles_file=PROFILES_DIR)
def read_profile(profiles_dir: str) -> Dict[str, Any]:
path = os.path.join(profiles_dir, "profiles.yml")
path = os.path.join(profiles_dir, 'profiles.yml')
contents = None
if os.path.isfile(path):
@@ -55,8 +55,12 @@ def read_profile(profiles_dir: str) -> Dict[str, Any]:
contents = load_file_contents(path, strip=False)
yaml_content = load_yaml_text(contents)
if not yaml_content:
msg = f"The profiles.yml file at {path} is empty"
raise DbtProfileError(INVALID_PROFILE_MESSAGE.format(error_string=msg))
msg = f'The profiles.yml file at {path} is empty'
raise DbtProfileError(
INVALID_PROFILE_MESSAGE.format(
error_string=msg
)
)
return yaml_content
except ValidationException as e:
msg = INVALID_PROFILE_MESSAGE.format(error_string=e)
@@ -69,7 +73,7 @@ def read_user_config(directory: str) -> UserConfig:
try:
profile = read_profile(directory)
if profile:
user_cfg = coerce_dict_str(profile.get("config", {}))
user_cfg = coerce_dict_str(profile.get('config', {}))
if user_cfg is not None:
UserConfig.validate(user_cfg)
return UserConfig.from_dict(user_cfg)
@@ -80,7 +84,8 @@ def read_user_config(directory: str) -> UserConfig:
# The Profile class is included in RuntimeConfig, so any attribute
# additions must also be set where the RuntimeConfig class is created
@dataclass
# `init=False` is a workaround for https://bugs.python.org/issue45081
@dataclass(init=False)
class Profile(HasCredentials):
profile_name: str
target_name: str
@@ -88,7 +93,26 @@ class Profile(HasCredentials):
threads: int
credentials: Credentials
def to_profile_info(self, serialize_credentials: bool = False) -> Dict[str, Any]:
def __init__(
self,
profile_name: str,
target_name: str,
config: UserConfig,
threads: int,
credentials: Credentials
):
"""Explicitly defining `__init__` to work around bug in Python 3.9.7
https://bugs.python.org/issue45081
"""
self.profile_name = profile_name
self.target_name = target_name
self.config = config
self.threads = threads
self.credentials = credentials
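The explicit __init__ above, paired with @dataclass(init=False), works around https://bugs.python.org/issue45081: roughly, Python 3.9.7 could generate a non-functional __init__ for dataclasses whose bases include a typing.Protocol. The sketch below shows the shape of the workaround with invented names, not dbt's actual class hierarchy.

# Illustrative sketch (not part of the diff): the @dataclass(init=False) pattern.
from dataclasses import dataclass
from typing import Protocol


class HasName(Protocol):          # hypothetical protocol base
    name: str


@dataclass(init=False)
class Widget(HasName):            # hypothetical dataclass
    name: str
    size: int

    def __init__(self, name: str, size: int):
        # hand-written __init__ sidesteps the generated one entirely
        self.name = name
        self.size = size


w = Widget(name='gear', size=3)
print(w)  # Widget(name='gear', size=3)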
def to_profile_info(
self, serialize_credentials: bool = False
) -> Dict[str, Any]:
"""Unlike to_project_config, this dict is not a mirror of any existing
on-disk data structure. It's used when creating a new profile from an
existing one.
@@ -98,35 +122,34 @@ class Profile(HasCredentials):
:returns dict: The serialized profile.
"""
result = {
"profile_name": self.profile_name,
"target_name": self.target_name,
"config": self.config,
"threads": self.threads,
"credentials": self.credentials,
'profile_name': self.profile_name,
'target_name': self.target_name,
'config': self.config,
'threads': self.threads,
'credentials': self.credentials,
}
if serialize_credentials:
result["config"] = self.config.to_dict(omit_none=True)
result["credentials"] = self.credentials.to_dict(omit_none=True)
result['config'] = self.config.to_dict(omit_none=True)
result['credentials'] = self.credentials.to_dict(omit_none=True)
return result
def to_target_dict(self) -> Dict[str, Any]:
target = dict(self.credentials.connection_info(with_aliases=True))
target.update(
{
"type": self.credentials.type,
"threads": self.threads,
"name": self.target_name,
"target_name": self.target_name,
"profile_name": self.profile_name,
"config": self.config.to_dict(omit_none=True),
}
target = dict(
self.credentials.connection_info(with_aliases=True)
)
target.update({
'type': self.credentials.type,
'threads': self.threads,
'name': self.target_name,
'target_name': self.target_name,
'profile_name': self.profile_name,
'config': self.config.to_dict(omit_none=True),
})
return target
def __eq__(self, other: object) -> bool:
if not (
isinstance(other, self.__class__) and isinstance(self, other.__class__)
):
if not (isinstance(other, self.__class__) and
isinstance(self, other.__class__)):
return NotImplemented
return self.to_profile_info() == other.to_profile_info()
@@ -146,17 +169,14 @@ class Profile(HasCredentials):
) -> Credentials:
# avoid an import cycle
from dbt.adapters.factory import load_plugin
# credentials carry their 'type' in their actual type, not their
# attributes. We do want this in order to pick our Credentials class.
if "type" not in profile:
if 'type' not in profile:
raise DbtProfileError(
'required field "type" not found in profile {} and target {}'.format(
profile_name, target_name
)
)
'required field "type" not found in profile {} and target {}'
.format(profile_name, target_name))
typename = profile.pop("type")
typename = profile.pop('type')
try:
cls = load_plugin(typename)
data = cls.translate_aliases(profile)
@@ -165,9 +185,8 @@ class Profile(HasCredentials):
except (RuntimeException, ValidationError) as e:
msg = str(e) if isinstance(e, RuntimeException) else e.message
raise DbtProfileError(
'Credentials in profile "{}", target "{}" invalid: {}'.format(
profile_name, target_name, msg
)
'Credentials in profile "{}", target "{}" invalid: {}'
.format(profile_name, target_name, msg)
) from e
return credentials
@@ -188,21 +207,19 @@ class Profile(HasCredentials):
def _get_profile_data(
profile: Dict[str, Any], profile_name: str, target_name: str
) -> Dict[str, Any]:
if "outputs" not in profile:
if 'outputs' not in profile:
raise DbtProfileError(
"outputs not specified in profile '{}'".format(profile_name)
)
outputs = profile["outputs"]
outputs = profile['outputs']
if target_name not in outputs:
outputs = "\n".join(" - {}".format(output) for output in outputs)
msg = (
"The profile '{}' does not have a target named '{}'. The "
"valid target names for this profile are:\n{}".format(
profile_name, target_name, outputs
)
)
raise DbtProfileError(msg, result_type="invalid_target")
outputs = '\n'.join(' - {}'.format(output)
for output in outputs)
msg = ("The profile '{}' does not have a target named '{}'. The "
"valid target names for this profile are:\n{}"
.format(profile_name, target_name, outputs))
raise DbtProfileError(msg, result_type='invalid_target')
profile_data = outputs[target_name]
if not isinstance(profile_data, dict):
@@ -210,7 +227,7 @@ class Profile(HasCredentials):
f"output '{target_name}' of profile '{profile_name}' is "
f"misconfigured in profiles.yml"
)
raise DbtProfileError(msg, result_type="invalid_target")
raise DbtProfileError(msg, result_type='invalid_target')
return profile_data
@@ -221,8 +238,8 @@ class Profile(HasCredentials):
threads: int,
profile_name: str,
target_name: str,
user_cfg: Optional[Dict[str, Any]] = None,
) -> "Profile":
user_cfg: Optional[Dict[str, Any]] = None
) -> 'Profile':
"""Create a profile from an existing set of Credentials and the
remaining information.
@@ -245,7 +262,7 @@ class Profile(HasCredentials):
target_name=target_name,
config=config,
threads=threads,
credentials=credentials,
credentials=credentials
)
profile.validate()
return profile
@@ -270,18 +287,19 @@ class Profile(HasCredentials):
# name to extract a profile that we can render.
if target_override is not None:
target_name = target_override
elif "target" in raw_profile:
elif 'target' in raw_profile:
# render the target if it was parsed from yaml
target_name = renderer.render_value(raw_profile["target"])
target_name = renderer.render_value(raw_profile['target'])
else:
target_name = "default"
target_name = 'default'
logger.debug(
"target not specified in profile '{}', using '{}'".format(
profile_name, target_name
)
"target not specified in profile '{}', using '{}'"
.format(profile_name, target_name)
)
raw_profile_data = cls._get_profile_data(raw_profile, profile_name, target_name)
raw_profile_data = cls._get_profile_data(
raw_profile, profile_name, target_name
)
try:
profile_data = renderer.render_data(raw_profile_data)
@@ -298,7 +316,7 @@ class Profile(HasCredentials):
user_cfg: Optional[Dict[str, Any]] = None,
target_override: Optional[str] = None,
threads_override: Optional[int] = None,
) -> "Profile":
) -> 'Profile':
"""Create a profile from its raw profile information.
(this is an intermediate step, mostly useful for unit testing)
@@ -319,7 +337,7 @@ class Profile(HasCredentials):
"""
# user_cfg is not rendered.
if user_cfg is None:
user_cfg = raw_profile.get("config")
user_cfg = raw_profile.get('config')
# TODO: should it be, and the values coerced to bool?
target_name, profile_data = cls.render_profile(
raw_profile, profile_name, target_override, renderer
@@ -327,7 +345,7 @@ class Profile(HasCredentials):
# valid connections never include the number of threads, but it's
# stored on a per-connection level in the raw configs
threads = profile_data.pop("threads", DEFAULT_THREADS)
threads = profile_data.pop('threads', DEFAULT_THREADS)
if threads_override is not None:
threads = threads_override
@@ -340,7 +358,7 @@ class Profile(HasCredentials):
profile_name=profile_name,
target_name=target_name,
threads=threads,
user_cfg=user_cfg,
user_cfg=user_cfg
)
@classmethod
@@ -351,7 +369,7 @@ class Profile(HasCredentials):
renderer: ProfileRenderer,
target_override: Optional[str] = None,
threads_override: Optional[int] = None,
) -> "Profile":
) -> 'Profile':
"""
:param raw_profiles: The profile data, from disk as yaml.
:param profile_name: The profile name to use.
@@ -375,9 +393,15 @@ class Profile(HasCredentials):
# don't render keys, so we can pluck that out
raw_profile = raw_profiles[profile_name]
if not raw_profile:
msg = f"Profile {profile_name} in profiles.yml is empty"
raise DbtProfileError(INVALID_PROFILE_MESSAGE.format(error_string=msg))
user_cfg = raw_profiles.get("config")
msg = (
f'Profile {profile_name} in profiles.yml is empty'
)
raise DbtProfileError(
INVALID_PROFILE_MESSAGE.format(
error_string=msg
)
)
user_cfg = raw_profiles.get('config')
return cls.from_raw_profile_info(
raw_profile=raw_profile,
@@ -394,7 +418,7 @@ class Profile(HasCredentials):
args: Any,
renderer: ProfileRenderer,
project_profile_name: Optional[str],
) -> "Profile":
) -> 'Profile':
"""Given the raw profiles as read from disk and the name of the desired
profile if specified, return the profile component of the runtime
config.
@@ -409,16 +433,15 @@ class Profile(HasCredentials):
target could not be found.
:returns Profile: The new Profile object.
"""
threads_override = getattr(args, "threads", None)
target_override = getattr(args, "target", None)
threads_override = getattr(args, 'threads', None)
target_override = getattr(args, 'target', None)
raw_profiles = read_profile(args.profiles_dir)
profile_name = cls.pick_profile_name(
getattr(args, "profile", None), project_profile_name
)
profile_name = cls.pick_profile_name(getattr(args, 'profile', None),
project_profile_name)
return cls.from_raw_profiles(
raw_profiles=raw_profiles,
profile_name=profile_name,
renderer=renderer,
target_override=target_override,
threads_override=threads_override,
threads_override=threads_override
)

View File

@@ -2,13 +2,7 @@ from copy import deepcopy
from dataclasses import dataclass, field
from itertools import chain
from typing import (
List,
Dict,
Any,
Optional,
TypeVar,
Union,
Mapping,
List, Dict, Any, Optional, TypeVar, Union, Mapping,
)
from typing_extensions import Protocol, runtime_checkable
@@ -88,7 +82,9 @@ def _load_yaml(path):
def package_data_from_root(project_root):
package_filepath = resolve_path_from_base("packages.yml", project_root)
package_filepath = resolve_path_from_base(
'packages.yml', project_root
)
if path_exists(package_filepath):
packages_dict = _load_yaml(package_filepath)
@@ -99,7 +95,7 @@ def package_data_from_root(project_root):
def package_config_from_data(packages_data: Dict[str, Any]):
if not packages_data:
packages_data = {"packages": []}
packages_data = {'packages': []}
try:
PackageConfig.validate(packages_data)
@@ -122,7 +118,7 @@ def _parse_versions(versions: Union[List[str], str]) -> List[VersionSpecifier]:
Regardless, this will return a list of VersionSpecifiers
"""
if isinstance(versions, str):
versions = versions.split(",")
versions = versions.split(',')
return [VersionSpecifier.from_version_string(v) for v in versions]
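_parse_versions accepts either a list of version specifier strings or one comma-separated string, such as a require-dbt-version value in dbt_project.yml, and turns each piece into a VersionSpecifier. The split step in isolation (bounds are arbitrary examples):

# Illustrative sketch (not part of the diff): the comma-split handled above.
versions = '>=0.20.0,<0.22.0'   # e.g. a require-dbt-version value
if isinstance(versions, str):
    versions = versions.split(',')
print(versions)  # ['>=0.20.0', '<0.22.0'] -- each entry becomes a VersionSpecifier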
@@ -133,12 +129,11 @@ def _all_source_paths(
analysis_paths: List[str],
macro_paths: List[str],
) -> List[str]:
return list(
chain(source_paths, data_paths, snapshot_paths, analysis_paths, macro_paths)
)
return list(chain(source_paths, data_paths, snapshot_paths, analysis_paths,
macro_paths))
T = TypeVar("T")
T = TypeVar('T')
def value_or(value: Optional[T], default: T) -> T:
@@ -151,27 +146,30 @@ def value_or(value: Optional[T], default: T) -> T:
def _raw_project_from(project_root: str) -> Dict[str, Any]:
project_root = os.path.normpath(project_root)
project_yaml_filepath = os.path.join(project_root, "dbt_project.yml")
project_yaml_filepath = os.path.join(project_root, 'dbt_project.yml')
# get the project.yml contents
if not path_exists(project_yaml_filepath):
raise DbtProjectError(
"no dbt_project.yml found at expected path {}".format(project_yaml_filepath)
'no dbt_project.yml found at expected path {}'
.format(project_yaml_filepath)
)
project_dict = _load_yaml(project_yaml_filepath)
if not isinstance(project_dict, dict):
raise DbtProjectError("dbt_project.yml does not parse to a dictionary")
raise DbtProjectError(
'dbt_project.yml does not parse to a dictionary'
)
return project_dict
def _query_comment_from_cfg(
cfg_query_comment: Union[QueryComment, NoValue, str, None]
cfg_query_comment: Union[QueryComment, NoValue, str, None]
) -> QueryComment:
if not cfg_query_comment:
return QueryComment(comment="")
return QueryComment(comment='')
if isinstance(cfg_query_comment, str):
return QueryComment(comment=cfg_query_comment)
@@ -188,7 +186,9 @@ def validate_version(dbt_version: List[VersionSpecifier], project_name: str):
if not versions_compatible(*dbt_version):
msg = IMPOSSIBLE_VERSION_ERROR.format(
package=project_name,
version_spec=[x.to_version_string() for x in dbt_version],
version_spec=[
x.to_version_string() for x in dbt_version
]
)
raise DbtProjectError(msg)
@@ -196,7 +196,9 @@ def validate_version(dbt_version: List[VersionSpecifier], project_name: str):
msg = INVALID_VERSION_ERROR.format(
package=project_name,
installed=installed.to_version_string(),
version_spec=[x.to_version_string() for x in dbt_version],
version_spec=[
x.to_version_string() for x in dbt_version
]
)
raise DbtProjectError(msg)
@@ -205,8 +207,8 @@ def _get_required_version(
project_dict: Dict[str, Any],
verify_version: bool,
) -> List[VersionSpecifier]:
dbt_raw_version: Union[List[str], str] = ">=0.0.0"
required = project_dict.get("require-dbt-version")
dbt_raw_version: Union[List[str], str] = '>=0.0.0'
required = project_dict.get('require-dbt-version')
if required is not None:
dbt_raw_version = required
@@ -217,11 +219,11 @@ def _get_required_version(
if verify_version:
# no name is also an error that we want to raise
if "name" not in project_dict:
if 'name' not in project_dict:
raise DbtProjectError(
'Required "name" field not present in project',
)
validate_version(dbt_version, project_dict["name"])
validate_version(dbt_version, project_dict['name'])
return dbt_version
@@ -229,36 +231,34 @@ def _get_required_version(
@dataclass
class RenderComponents:
project_dict: Dict[str, Any] = field(
metadata=dict(description="The project dictionary")
metadata=dict(description='The project dictionary')
)
packages_dict: Dict[str, Any] = field(
metadata=dict(description="The packages dictionary")
metadata=dict(description='The packages dictionary')
)
selectors_dict: Dict[str, Any] = field(
metadata=dict(description="The selectors dictionary")
metadata=dict(description='The selectors dictionary')
)
@dataclass
class PartialProject(RenderComponents):
profile_name: Optional[str] = field(
metadata=dict(description="The unrendered profile name in the project, if set")
)
project_name: Optional[str] = field(
metadata=dict(
description=(
"The name of the project. This should always be set and will not "
"be rendered"
)
profile_name: Optional[str] = field(metadata=dict(
description='The unrendered profile name in the project, if set'
))
project_name: Optional[str] = field(metadata=dict(
description=(
'The name of the project. This should always be set and will not '
'be rendered'
)
)
))
project_root: str = field(
metadata=dict(description="The root directory of the project"),
metadata=dict(description='The root directory of the project'),
)
verify_version: bool = field(
metadata=dict(
description=("If True, verify the dbt version matches the required version")
)
metadata=dict(description=(
'If True, verify the dbt version matches the required version'
))
)
def render_profile_name(self, renderer) -> Optional[str]:
@@ -271,7 +271,9 @@ class PartialProject(RenderComponents):
renderer: DbtProjectYamlRenderer,
) -> RenderComponents:
rendered_project = renderer.render_project(self.project_dict, self.project_root)
rendered_project = renderer.render_project(
self.project_dict, self.project_root
)
rendered_packages = renderer.render_packages(self.packages_dict)
rendered_selectors = renderer.render_selectors(self.selectors_dict)
@@ -281,16 +283,16 @@ class PartialProject(RenderComponents):
selectors_dict=rendered_selectors,
)
def render(self, renderer: DbtProjectYamlRenderer) -> "Project":
def render(self, renderer: DbtProjectYamlRenderer) -> 'Project':
try:
rendered = self.get_rendered(renderer)
return self.create_project(rendered)
except DbtProjectError as exc:
if exc.path is None:
exc.path = os.path.join(self.project_root, "dbt_project.yml")
exc.path = os.path.join(self.project_root, 'dbt_project.yml')
raise
def create_project(self, rendered: RenderComponents) -> "Project":
def create_project(self, rendered: RenderComponents) -> 'Project':
unrendered = RenderComponents(
project_dict=self.project_dict,
packages_dict=self.packages_dict,
@@ -303,7 +305,9 @@ class PartialProject(RenderComponents):
try:
ProjectContract.validate(rendered.project_dict)
cfg = ProjectContract.from_dict(rendered.project_dict)
cfg = ProjectContract.from_dict(
rendered.project_dict
)
except ValidationError as e:
raise DbtProjectError(validator_error_message(e)) from e
# name/version are required in the Project definition, so we can assume
@@ -313,30 +317,31 @@ class PartialProject(RenderComponents):
# this is added at project_dict parse time and should always be here
# once we see it.
if cfg.project_root is None:
raise DbtProjectError("cfg must have a project root!")
raise DbtProjectError('cfg must have a project root!')
else:
project_root = cfg.project_root
# this is only optional in the sense that if it's not present, it needs
# to have been a cli argument.
profile_name = cfg.profile
# these are all the defaults
source_paths: List[str] = value_or(cfg.source_paths, ["models"])
macro_paths: List[str] = value_or(cfg.macro_paths, ["macros"])
data_paths: List[str] = value_or(cfg.data_paths, ["data"])
test_paths: List[str] = value_or(cfg.test_paths, ["test"])
source_paths: List[str] = value_or(cfg.source_paths, ['models'])
macro_paths: List[str] = value_or(cfg.macro_paths, ['macros'])
data_paths: List[str] = value_or(cfg.data_paths, ['data'])
test_paths: List[str] = value_or(cfg.test_paths, ['test'])
analysis_paths: List[str] = value_or(cfg.analysis_paths, [])
snapshot_paths: List[str] = value_or(cfg.snapshot_paths, ["snapshots"])
snapshot_paths: List[str] = value_or(cfg.snapshot_paths, ['snapshots'])
all_source_paths: List[str] = _all_source_paths(
source_paths, data_paths, snapshot_paths, analysis_paths, macro_paths
source_paths, data_paths, snapshot_paths, analysis_paths,
macro_paths
)
docs_paths: List[str] = value_or(cfg.docs_paths, all_source_paths)
asset_paths: List[str] = value_or(cfg.asset_paths, [])
target_path: str = value_or(cfg.target_path, "target")
target_path: str = value_or(cfg.target_path, 'target')
clean_targets: List[str] = value_or(cfg.clean_targets, [target_path])
log_path: str = value_or(cfg.log_path, "logs")
modules_path: str = value_or(cfg.modules_path, "dbt_modules")
log_path: str = value_or(cfg.log_path, 'logs')
modules_path: str = value_or(cfg.modules_path, 'dbt_modules')
# in the default case we'll populate this once we know the adapter type
# It would be nice to just pass along a Quoting here, but that would
# break many things
@@ -344,16 +349,20 @@ class PartialProject(RenderComponents):
if cfg.quoting is not None:
quoting = cfg.quoting.to_dict(omit_none=True)
dispatch: List[Dict[str, Any]]
models: Dict[str, Any]
seeds: Dict[str, Any]
snapshots: Dict[str, Any]
sources: Dict[str, Any]
tests: Dict[str, Any]
vars_value: VarProvider
dispatch = cfg.dispatch
models = cfg.models
seeds = cfg.seeds
snapshots = cfg.snapshots
sources = cfg.sources
tests = cfg.tests
if cfg.vars is None:
vars_dict: Dict[str, Any] = {}
else:
@@ -368,12 +377,11 @@ class PartialProject(RenderComponents):
packages = package_config_from_data(rendered.packages_dict)
selectors = selector_config_from_data(rendered.selectors_dict)
manifest_selectors: Dict[str, Any] = {}
if rendered.selectors_dict and rendered.selectors_dict["selectors"]:
if rendered.selectors_dict and rendered.selectors_dict['selectors']:
# this is a dict with a single key 'selectors' pointing to a list
# of dicts.
manifest_selectors = SelectorDict.parse_from_selectors_list(
rendered.selectors_dict["selectors"]
)
rendered.selectors_dict['selectors'])
project = Project(
project_name=name,
@@ -396,6 +404,7 @@ class PartialProject(RenderComponents):
models=models,
on_run_start=on_run_start,
on_run_end=on_run_end,
dispatch=dispatch,
seeds=seeds,
snapshots=snapshots,
dbt_version=dbt_version,
@@ -404,6 +413,7 @@ class PartialProject(RenderComponents):
selectors=selectors,
query_comment=query_comment,
sources=sources,
tests=tests,
vars=vars_value,
config_version=cfg.config_version,
unrendered=unrendered,
@@ -422,9 +432,10 @@ class PartialProject(RenderComponents):
*,
verify_version: bool = False,
):
"""Construct a partial project from its constituent dicts."""
project_name = project_dict.get("name")
profile_name = project_dict.get("profile")
"""Construct a partial project from its constituent dicts.
"""
project_name = project_dict.get('name')
profile_name = project_dict.get('profile')
return cls(
profile_name=profile_name,
@@ -439,14 +450,14 @@ class PartialProject(RenderComponents):
@classmethod
def from_project_root(
cls, project_root: str, *, verify_version: bool = False
) -> "PartialProject":
) -> 'PartialProject':
project_root = os.path.normpath(project_root)
project_dict = _raw_project_from(project_root)
config_version = project_dict.get("config-version", 1)
config_version = project_dict.get('config-version', 1)
if config_version != 2:
raise DbtProjectError(
f"Invalid config version: {config_version}, expected 2",
path=os.path.join(project_root, "dbt_project.yml"),
f'Invalid config version: {config_version}, expected 2',
path=os.path.join(project_root, 'dbt_project.yml')
)
packages_dict = package_data_from_root(project_root)
@@ -463,10 +474,15 @@ class PartialProject(RenderComponents):
class VarProvider:
"""Var providers are tied to a particular Project."""
def __init__(self, vars: Dict[str, Dict[str, Any]]) -> None:
def __init__(
self,
vars: Dict[str, Dict[str, Any]]
) -> None:
self.vars = vars
def vars_for(self, node: IsFQNResource, adapter_type: str) -> Mapping[str, Any]:
def vars_for(
self, node: IsFQNResource, adapter_type: str
) -> Mapping[str, Any]:
# in v2, vars are only either project or globally scoped
merged = MultiDict([self.vars])
merged.add(self.vars.get(node.package_name, {}))
@@ -500,9 +516,11 @@ class Project:
models: Dict[str, Any]
on_run_start: List[str]
on_run_end: List[str]
dispatch: List[Dict[str, Any]]
seeds: Dict[str, Any]
snapshots: Dict[str, Any]
sources: Dict[str, Any]
tests: Dict[str, Any]
vars: VarProvider
dbt_version: List[VersionSpecifier]
packages: Dict[str, Any]
@@ -515,11 +533,8 @@ class Project:
@property
def all_source_paths(self) -> List[str]:
return _all_source_paths(
self.source_paths,
self.data_paths,
self.snapshot_paths,
self.analysis_paths,
self.macro_paths,
self.source_paths, self.data_paths, self.snapshot_paths,
self.analysis_paths, self.macro_paths
)
def __str__(self):
@@ -527,13 +542,11 @@ class Project:
return str(cfg)
def __eq__(self, other):
if not (
isinstance(other, self.__class__) and isinstance(self, other.__class__)
):
if not (isinstance(other, self.__class__) and
isinstance(self, other.__class__)):
return False
return self.to_project_config(with_packages=True) == other.to_project_config(
with_packages=True
)
return self.to_project_config(with_packages=True) == \
other.to_project_config(with_packages=True)
def to_project_config(self, with_packages=False):
"""Return a dict representation of the config that could be written to
@@ -543,39 +556,40 @@ class Project:
file in the root.
:returns dict: The serialized profile.
"""
result = deepcopy(
{
"name": self.project_name,
"version": self.version,
"project-root": self.project_root,
"profile": self.profile_name,
"source-paths": self.source_paths,
"macro-paths": self.macro_paths,
"data-paths": self.data_paths,
"test-paths": self.test_paths,
"analysis-paths": self.analysis_paths,
"docs-paths": self.docs_paths,
"asset-paths": self.asset_paths,
"target-path": self.target_path,
"snapshot-paths": self.snapshot_paths,
"clean-targets": self.clean_targets,
"log-path": self.log_path,
"quoting": self.quoting,
"models": self.models,
"on-run-start": self.on_run_start,
"on-run-end": self.on_run_end,
"seeds": self.seeds,
"snapshots": self.snapshots,
"sources": self.sources,
"vars": self.vars.to_dict(),
"require-dbt-version": [
v.to_version_string() for v in self.dbt_version
],
"config-version": self.config_version,
}
)
result = deepcopy({
'name': self.project_name,
'version': self.version,
'project-root': self.project_root,
'profile': self.profile_name,
'source-paths': self.source_paths,
'macro-paths': self.macro_paths,
'data-paths': self.data_paths,
'test-paths': self.test_paths,
'analysis-paths': self.analysis_paths,
'docs-paths': self.docs_paths,
'asset-paths': self.asset_paths,
'target-path': self.target_path,
'snapshot-paths': self.snapshot_paths,
'clean-targets': self.clean_targets,
'log-path': self.log_path,
'quoting': self.quoting,
'models': self.models,
'on-run-start': self.on_run_start,
'on-run-end': self.on_run_end,
'dispatch': self.dispatch,
'seeds': self.seeds,
'snapshots': self.snapshots,
'sources': self.sources,
'tests': self.tests,
'vars': self.vars.to_dict(),
'require-dbt-version': [
v.to_version_string() for v in self.dbt_version
],
'config-version': self.config_version,
})
if self.query_comment:
result["query-comment"] = self.query_comment.to_dict(omit_none=True)
result['query-comment'] = \
self.query_comment.to_dict(omit_none=True)
if with_packages:
result.update(self.packages.to_dict(omit_none=True))
@@ -606,8 +620,8 @@ class Project:
selectors_dict: Dict[str, Any],
renderer: DbtProjectYamlRenderer,
*,
verify_version: bool = False,
) -> "Project":
verify_version: bool = False
) -> 'Project':
partial = PartialProject.from_dicts(
project_root=project_root,
project_dict=project_dict,
@@ -624,17 +638,23 @@ class Project:
renderer: DbtProjectYamlRenderer,
*,
verify_version: bool = False,
) -> "Project":
) -> 'Project':
partial = cls.partial_load(project_root, verify_version=verify_version)
return partial.render(renderer)
def hashed_name(self):
return hashlib.md5(self.project_name.encode("utf-8")).hexdigest()
return hashlib.md5(self.project_name.encode('utf-8')).hexdigest()
def get_selector(self, name: str) -> SelectionSpec:
if name not in self.selectors:
raise RuntimeException(
f"Could not find selector named {name}, expected one of "
f"{list(self.selectors)}"
f'Could not find selector named {name}, expected one of '
f'{list(self.selectors)}'
)
return self.selectors[name]
def get_macro_search_order(self, macro_namespace: str):
for dispatch_entry in self.dispatch:
if dispatch_entry['macro_namespace'] == macro_namespace:
return dispatch_entry['search_order']
return None
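For illustration only (not part of the changeset): a minimal standalone sketch of the lookup the new get_macro_search_order method performs, assuming a dispatch config shaped like the dispatch: list in dbt_project.yml; the project and package names below are hypothetical.

from typing import Any, Dict, List, Optional

# Hypothetical parsed `dispatch:` config from dbt_project.yml.
dispatch_config: List[Dict[str, Any]] = [
    {"macro_namespace": "dbt_utils", "search_order": ["my_project", "dbt_utils"]},
]

def get_macro_search_order(dispatch: List[Dict[str, Any]], macro_namespace: str) -> Optional[List[str]]:
    # Same linear scan as Project.get_macro_search_order above.
    for entry in dispatch:
        if entry["macro_namespace"] == macro_namespace:
            return entry["search_order"]
    return None

print(get_macro_search_order(dispatch_config, "dbt_utils"))  # ['my_project', 'dbt_utils']
print(get_macro_search_order(dispatch_config, "other"))      # None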


@@ -2,7 +2,9 @@ from typing import Dict, Any, Tuple, Optional, Union, Callable
from dbt.clients.jinja import get_rendered, catch_jinja
from dbt.exceptions import DbtProjectError, CompilationException, RecursionException
from dbt.exceptions import (
DbtProjectError, CompilationException, RecursionException
)
from dbt.node_types import NodeType
from dbt.utils import deep_map
@@ -16,7 +18,7 @@ class BaseRenderer:
@property
def name(self):
return "Rendering"
return 'Rendering'
def should_render_keypath(self, keypath: Keypath) -> bool:
return True
@@ -27,7 +29,9 @@ class BaseRenderer:
return self.render_value(value, keypath)
def render_value(self, value: Any, keypath: Optional[Keypath] = None) -> Any:
def render_value(
self, value: Any, keypath: Optional[Keypath] = None
) -> Any:
# keypath is ignored.
# if it wasn't read as a string, ignore it
if not isinstance(value, str):
@@ -36,16 +40,18 @@ class BaseRenderer:
with catch_jinja():
return get_rendered(value, self.context, native=True)
except CompilationException as exc:
msg = f"Could not render {value}: {exc.msg}"
msg = f'Could not render {value}: {exc.msg}'
raise CompilationException(msg) from exc
def render_data(self, data: Dict[str, Any]) -> Dict[str, Any]:
def render_data(
self, data: Dict[str, Any]
) -> Dict[str, Any]:
try:
return deep_map(self.render_entry, data)
except RecursionException:
raise DbtProjectError(
f"Cycle detected: {self.name} input has a reference to itself",
project=data,
f'Cycle detected: {self.name} input has a reference to itself',
project=data
)
@@ -72,15 +78,15 @@ class ProjectPostprocessor(Dict[Keypath, Callable[[Any], Any]]):
def __init__(self):
super().__init__()
self[("on-run-start",)] = _list_if_none_or_string
self[("on-run-end",)] = _list_if_none_or_string
self[('on-run-start',)] = _list_if_none_or_string
self[('on-run-end',)] = _list_if_none_or_string
for k in ("models", "seeds", "snapshots"):
for k in ('models', 'seeds', 'snapshots'):
self[(k,)] = _dict_if_none
self[(k, "vars")] = _dict_if_none
self[(k, "pre-hook")] = _list_if_none_or_string
self[(k, "post-hook")] = _list_if_none_or_string
self[("seeds", "column_types")] = _dict_if_none
self[(k, 'vars')] = _dict_if_none
self[(k, 'pre-hook')] = _list_if_none_or_string
self[(k, 'post-hook')] = _list_if_none_or_string
self[('seeds', 'column_types')] = _dict_if_none
def postprocess(self, value: Any, key: Keypath) -> Any:
if key in self:
@@ -95,7 +101,7 @@ class DbtProjectYamlRenderer(BaseRenderer):
@property
def name(self):
"Project config"
'Project config'
def get_package_renderer(self) -> BaseRenderer:
return PackageRenderer(self.context)
@@ -110,7 +116,7 @@ class DbtProjectYamlRenderer(BaseRenderer):
) -> Dict[str, Any]:
"""Render the project and insert the project root after rendering."""
rendered_project = self.render_data(project)
rendered_project["project-root"] = project_root
rendered_project['project-root'] = project_root
return rendered_project
def render_packages(self, packages: Dict[str, Any]):
@@ -132,19 +138,20 @@ class DbtProjectYamlRenderer(BaseRenderer):
first = keypath[0]
# run hooks are not rendered
if first in {"on-run-start", "on-run-end", "query-comment"}:
if first in {'on-run-start', 'on-run-end', 'query-comment'}:
return False
# don't render vars blocks until runtime
if first == "vars":
if first == 'vars':
return False
if first in {"seeds", "models", "snapshots", "seeds"}:
if first in {'seeds', 'models', 'snapshots', 'tests'}:
keypath_parts = {
(k.lstrip("+") if isinstance(k, str) else k) for k in keypath
(k.lstrip('+ ') if isinstance(k, str) else k)
for k in keypath
}
# model-level hooks
if "pre-hook" in keypath_parts or "post-hook" in keypath_parts:
if 'pre-hook' in keypath_parts or 'post-hook' in keypath_parts:
return False
return True
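Illustrative aside, not from the changeset: a condensed sketch of the keypath checks in should_render_keypath above, showing which project-config entries are deliberately left unrendered at parse time; the example keypaths are invented.

def should_render_keypath(keypath):
    if not keypath:
        return True
    first = keypath[0]
    # run hooks and the query comment are not rendered
    if first in {"on-run-start", "on-run-end", "query-comment"}:
        return False
    # vars blocks are rendered at runtime, not at parse time
    if first == "vars":
        return False
    if first in {"seeds", "models", "snapshots", "tests"}:
        parts = {k.lstrip("+") if isinstance(k, str) else k for k in keypath}
        if "pre-hook" in parts or "post-hook" in parts:
            return False
    return True

print(should_render_keypath(("vars", "start_date")))                  # False - rendered later
print(should_render_keypath(("models", "my_project", "+post-hook")))  # False - hooks stay raw
print(should_render_keypath(("models", "my_project", "+schema")))     # True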
@@ -153,15 +160,17 @@ class DbtProjectYamlRenderer(BaseRenderer):
class ProfileRenderer(BaseRenderer):
@property
def name(self):
"Profile"
'Profile'
class SchemaYamlRenderer(BaseRenderer):
DOCUMENTABLE_NODES = frozenset(n.pluralize() for n in NodeType.documentable())
DOCUMENTABLE_NODES = frozenset(
n.pluralize() for n in NodeType.documentable()
)
@property
def name(self):
return "Rendering yaml"
return 'Rendering yaml'
def _is_norender_key(self, keypath: Keypath) -> bool:
"""
@@ -176,13 +185,13 @@ class SchemaYamlRenderer(BaseRenderer):
Return True if it's tests or description - those aren't rendered
"""
if len(keypath) >= 2 and keypath[1] in ("tests", "description"):
if len(keypath) >= 2 and keypath[1] in ('tests', 'description'):
return True
if (
len(keypath) >= 4
and keypath[1] == "columns"
and keypath[3] in ("tests", "description")
len(keypath) >= 4 and
keypath[1] == 'columns' and
keypath[3] in ('tests', 'description')
):
return True
@@ -200,13 +209,13 @@ class SchemaYamlRenderer(BaseRenderer):
return True
if keypath[0] == NodeType.Source.pluralize():
if keypath[2] == "description":
if keypath[2] == 'description':
return False
if keypath[2] == "tables":
if keypath[2] == 'tables':
if self._is_norender_key(keypath[3:]):
return False
elif keypath[0] == NodeType.Macro.pluralize():
if keypath[2] == "arguments":
if keypath[2] == 'arguments':
if self._is_norender_key(keypath[3:]):
return False
elif self._is_norender_key(keypath[1:]):
@@ -220,10 +229,10 @@ class SchemaYamlRenderer(BaseRenderer):
class PackageRenderer(BaseRenderer):
@property
def name(self):
return "Packages config"
return 'Packages config'
class SelectorRenderer(BaseRenderer):
@property
def name(self):
return "Selector config"
return 'Selector config'


@@ -4,16 +4,8 @@ from copy import deepcopy
from dataclasses import dataclass, fields
from pathlib import Path
from typing import (
Dict,
Any,
Optional,
Mapping,
Iterator,
Iterable,
Tuple,
List,
MutableSet,
Type,
Dict, Any, Optional, Mapping, Iterator, Iterable, Tuple, List, MutableSet,
Type
)
from .profile import Profile
@@ -23,7 +15,7 @@ from .utils import parse_cli_vars
from dbt import tracking
from dbt.adapters.factory import get_relation_class_by_name, get_include_paths
from dbt.helper_types import FQNPath, PathSet
from dbt.context import generate_base_context
from dbt.context.base import generate_base_context
from dbt.context.target import generate_target_context
from dbt.contracts.connection import AdapterRequiredConfig, Credentials
from dbt.contracts.graph.manifest import ManifestMetadata
@@ -38,13 +30,15 @@ from dbt.exceptions import (
DbtProjectError,
validator_error_message,
warn_or_error,
raise_compiler_error,
raise_compiler_error
)
from dbt.dataclass_schema import ValidationError
def _project_quoting_dict(proj: Project, profile: Profile) -> Dict[ComponentName, bool]:
def _project_quoting_dict(
proj: Project, profile: Profile
) -> Dict[ComponentName, bool]:
src: Dict[str, Any] = profile.credentials.translate_aliases(proj.quoting)
result: Dict[ComponentName, bool] = {}
for key in ComponentName:
@@ -60,7 +54,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
args: Any
profile_name: str
cli_vars: Dict[str, Any]
dependencies: Optional[Mapping[str, "RuntimeConfig"]] = None
dependencies: Optional[Mapping[str, 'RuntimeConfig']] = None
def __post_init__(self):
self.validate()
@@ -71,8 +65,8 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
project: Project,
profile: Profile,
args: Any,
dependencies: Optional[Mapping[str, "RuntimeConfig"]] = None,
) -> "RuntimeConfig":
dependencies: Optional[Mapping[str, 'RuntimeConfig']] = None,
) -> 'RuntimeConfig':
"""Instantiate a RuntimeConfig from its components.
:param profile: A parsed dbt Profile.
@@ -86,7 +80,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
.replace_dict(_project_quoting_dict(project, profile))
).to_dict(omit_none=True)
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, 'vars', '{}'))
return cls(
project_name=project.project_name,
@@ -108,6 +102,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
models=project.models,
on_run_start=project.on_run_start,
on_run_end=project.on_run_end,
dispatch=project.dispatch,
seeds=project.seeds,
snapshots=project.snapshots,
dbt_version=project.dbt_version,
@@ -116,6 +111,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
selectors=project.selectors,
query_comment=project.query_comment,
sources=project.sources,
tests=project.tests,
vars=project.vars,
config_version=project.config_version,
unrendered=project.unrendered,
@@ -129,7 +125,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
dependencies=dependencies,
)
def new_project(self, project_root: str) -> "RuntimeConfig":
def new_project(self, project_root: str) -> 'RuntimeConfig':
"""Given a new project root, read in its project dictionary, supply the
existing project's profile info, and create a new project file.
@@ -148,7 +144,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
project = Project.from_project_root(
project_root,
renderer,
verify_version=getattr(self.args, "version_check", False),
verify_version=getattr(self.args, 'version_check', False),
)
cfg = self.from_parts(
@@ -171,7 +167,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
"""
result = self.to_project_config(with_packages=True)
result.update(self.to_profile_info(serialize_credentials=True))
result["cli_vars"] = deepcopy(self.cli_vars)
result['cli_vars'] = deepcopy(self.cli_vars)
return result
def validate(self):
@@ -191,21 +187,30 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
profile_renderer: ProfileRenderer,
profile_name: Optional[str],
) -> Profile:
return Profile.render_from_args(args, profile_renderer, profile_name)
return Profile.render_from_args(
args, profile_renderer, profile_name
)
@classmethod
def collect_parts(cls: Type["RuntimeConfig"], args: Any) -> Tuple[Project, Profile]:
def collect_parts(
cls: Type['RuntimeConfig'], args: Any
) -> Tuple[Project, Profile]:
# profile_name from the project
project_root = args.project_dir if args.project_dir else os.getcwd()
version_check = getattr(args, "version_check", False)
partial = Project.partial_load(project_root, verify_version=version_check)
version_check = getattr(args, 'version_check', False)
partial = Project.partial_load(
project_root,
verify_version=version_check
)
# build the profile using the base renderer and the one fact we know
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, 'vars', '{}'))
profile_renderer = ProfileRenderer(generate_base_context(cli_vars))
profile_name = partial.render_profile_name(profile_renderer)
profile = cls._get_rendered_profile(args, profile_renderer, profile_name)
profile = cls._get_rendered_profile(
args, profile_renderer, profile_name
)
# get a new renderer using our target information and render the
# project
@@ -215,7 +220,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
return (project, profile)
@classmethod
def from_args(cls, args: Any) -> "RuntimeConfig":
def from_args(cls, args: Any) -> 'RuntimeConfig':
"""Given arguments, read in dbt_project.yml from the current directory,
read in packages.yml if it exists, and use them to find the profile to
load.
@@ -235,7 +240,8 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
def get_metadata(self) -> ManifestMetadata:
return ManifestMetadata(
project_id=self.hashed_name(), adapter_type=self.credentials.type
project_id=self.hashed_name(),
adapter_type=self.credentials.type
)
def _get_v2_config_paths(
@@ -245,7 +251,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
paths: MutableSet[FQNPath],
) -> PathSet:
for key, value in config.items():
if isinstance(value, dict) and not key.startswith("+"):
if isinstance(value, dict) and not key.startswith('+'):
self._get_v2_config_paths(value, path + (key,), paths)
else:
paths.add(path)
@@ -261,22 +267,23 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
paths = set()
for key, value in config.items():
if isinstance(value, dict) and not key.startswith("+"):
if isinstance(value, dict) and not key.startswith('+'):
self._get_v2_config_paths(value, path + (key,), paths)
else:
paths.add(path)
return frozenset(paths)
def get_resource_config_paths(self) -> Dict[str, PathSet]:
"""Return a dictionary with 'seeds' and 'models' keys whose values are
"""Return a dictionary with resource type keys whose values are
lists of lists of strings, where each inner list of strings represents
a configured path in the resource.
"""
return {
"models": self._get_config_paths(self.models),
"seeds": self._get_config_paths(self.seeds),
"snapshots": self._get_config_paths(self.snapshots),
"sources": self._get_config_paths(self.sources),
'models': self._get_config_paths(self.models),
'seeds': self._get_config_paths(self.seeds),
'snapshots': self._get_config_paths(self.snapshots),
'sources': self._get_config_paths(self.sources),
'tests': self._get_config_paths(self.tests),
}
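Illustrative aside, not from the changeset: a standalone version of the _get_config_paths / _get_v2_config_paths walk above, collecting the path to every nested, non '+'-prefixed key in a resource config block; the example config is hypothetical.

def get_config_paths(config, path=(), paths=None):
    # Recurse into nested dicts; keys starting with '+' are config values, not path parts.
    if paths is None:
        paths = set()
    for key, value in config.items():
        if isinstance(value, dict) and not key.startswith("+"):
            get_config_paths(value, path + (key,), paths)
        else:
            paths.add(path)
    return frozenset(paths)

models_config = {
    "my_project": {
        "staging": {"+materialized": "view"},
        "marts": {"finance": {"+materialized": "table"}},
    }
}
print(sorted(get_config_paths(models_config)))
# [('my_project', 'marts', 'finance'), ('my_project', 'staging')]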
def get_unused_resource_config_paths(
@@ -297,7 +304,9 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
for config_path in config_paths:
if not _is_config_used(config_path, fqns):
unused_resource_config_paths.append((resource_type,) + config_path)
unused_resource_config_paths.append(
(resource_type,) + config_path
)
return unused_resource_config_paths
def warn_for_unused_resource_config_paths(
@@ -310,25 +319,38 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
return
msg = UNUSED_RESOURCE_CONFIGURATION_PATH_MESSAGE.format(
len(unused), "\n".join("- {}".format(".".join(u)) for u in unused)
len(unused),
'\n'.join('- {}'.format('.'.join(u)) for u in unused)
)
warn_or_error(msg, log_fmt=warning_tag("{}"))
warn_or_error(msg, log_fmt=warning_tag('{}'))
def load_dependencies(self) -> Mapping[str, "RuntimeConfig"]:
def load_dependencies(self) -> Mapping[str, 'RuntimeConfig']:
if self.dependencies is None:
all_projects = {self.project_name: self}
internal_packages = get_include_paths(self.credentials.type)
# raise exception if fewer installed packages than in packages.yml
count_packages_specified = len(self.packages.packages) # type: ignore
count_packages_installed = len(tuple(self._get_project_directories()))
if count_packages_specified > count_packages_installed:
raise_compiler_error(
f'dbt found {count_packages_specified} package(s) '
f'specified in packages.yml, but only '
f'{count_packages_installed} package(s) installed '
f'in {self.modules_path}. Run "dbt deps" to '
f'install package dependencies.'
)
project_paths = itertools.chain(
internal_packages, self._get_project_directories()
internal_packages,
self._get_project_directories()
)
for project_name, project in self.load_projects(project_paths):
if project_name in all_projects:
raise_compiler_error(
f"dbt found more than one package with the name "
f'dbt found more than one package with the name '
f'"{project_name}" included in this project. Package '
f"names must be unique in a project. Please rename "
f"one of these packages."
f'names must be unique in a project. Please rename '
f'one of these packages.'
)
all_projects[project_name] = project
self.dependencies = all_projects
@@ -339,14 +361,14 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
def load_projects(
self, paths: Iterable[Path]
) -> Iterator[Tuple[str, "RuntimeConfig"]]:
) -> Iterator[Tuple[str, 'RuntimeConfig']]:
for path in paths:
try:
project = self.new_project(str(path))
except DbtProjectError as e:
raise DbtProjectError(
f"Failed to read package: {e}",
result_type="invalid_project",
f'Failed to read package: {e}',
result_type='invalid_project',
path=path,
) from e
else:
@@ -357,18 +379,22 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
if root.exists():
for path in root.iterdir():
if path.is_dir() and not path.name.startswith("__"):
if path.is_dir() and not path.name.startswith('__'):
yield path
class UnsetCredentials(Credentials):
def __init__(self):
super().__init__("", "")
super().__init__('', '')
@property
def type(self):
return None
@property
def unique_field(self):
return None
def connection_info(self, *args, **kwargs):
return {}
@@ -379,7 +405,9 @@ class UnsetCredentials(Credentials):
class UnsetConfig(UserConfig):
def __getattribute__(self, name):
if name in {f.name for f in fields(UserConfig)}:
raise AttributeError(f"'UnsetConfig' object has no attribute {name}")
raise AttributeError(
f"'UnsetConfig' object has no attribute {name}"
)
def __post_serialize__(self, dct):
return {}
@@ -389,15 +417,15 @@ class UnsetProfile(Profile):
def __init__(self):
self.credentials = UnsetCredentials()
self.config = UnsetConfig()
self.profile_name = ""
self.target_name = ""
self.profile_name = ''
self.target_name = ''
self.threads = -1
def to_target_dict(self):
return {}
def __getattribute__(self, name):
if name in {"profile_name", "target_name", "threads"}:
if name in {'profile_name', 'target_name', 'threads'}:
raise RuntimeException(
f'Error: disallowed attribute "{name}" - no profile!'
)
@@ -421,7 +449,7 @@ class UnsetProfileConfig(RuntimeConfig):
def __getattribute__(self, name):
# Override __getattribute__ to check that the attribute isn't 'banned'.
if name in {"profile_name", "target_name"}:
if name in {'profile_name', 'target_name'}:
raise RuntimeException(
f'Error: disallowed attribute "{name}" - no profile!'
)
@@ -439,8 +467,8 @@ class UnsetProfileConfig(RuntimeConfig):
project: Project,
profile: Profile,
args: Any,
dependencies: Optional[Mapping[str, "RuntimeConfig"]] = None,
) -> "RuntimeConfig":
dependencies: Optional[Mapping[str, 'RuntimeConfig']] = None,
) -> 'RuntimeConfig':
"""Instantiate a RuntimeConfig from its components.
:param profile: Ignored.
@@ -448,7 +476,7 @@ class UnsetProfileConfig(RuntimeConfig):
:param args: The parsed command-line arguments.
:returns RuntimeConfig: The new configuration.
"""
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, 'vars', '{}'))
return cls(
project_name=project.project_name,
@@ -470,6 +498,7 @@ class UnsetProfileConfig(RuntimeConfig):
models=project.models,
on_run_start=project.on_run_start,
on_run_end=project.on_run_end,
dispatch=project.dispatch,
seeds=project.seeds,
snapshots=project.snapshots,
dbt_version=project.dbt_version,
@@ -478,13 +507,14 @@ class UnsetProfileConfig(RuntimeConfig):
selectors=project.selectors,
query_comment=project.query_comment,
sources=project.sources,
tests=project.tests,
vars=project.vars,
config_version=project.config_version,
unrendered=project.unrendered,
profile_name="",
target_name="",
profile_name='',
target_name='',
config=UnsetConfig(),
threads=getattr(args, "threads", 1),
threads=getattr(args, 'threads', 1),
credentials=UnsetCredentials(),
args=args,
cli_vars=cli_vars,
@@ -499,11 +529,16 @@ class UnsetProfileConfig(RuntimeConfig):
profile_name: Optional[str],
) -> Profile:
try:
profile = Profile.render_from_args(args, profile_renderer, profile_name)
profile = Profile.render_from_args(
args, profile_renderer, profile_name
)
except (DbtProjectError, DbtProfileError) as exc:
logger.debug("Profile not loaded due to error: {}", exc, exc_info=True)
logger.debug(
'Profile not loaded due to error: {}', exc, exc_info=True
)
logger.info(
'No profile "{}" found, continuing with no target', profile_name
'No profile "{}" found, continuing with no target',
profile_name
)
# return the poisoned form
profile = UnsetProfile()
@@ -512,7 +547,7 @@ class UnsetProfileConfig(RuntimeConfig):
return profile
@classmethod
def from_args(cls: Type[RuntimeConfig], args: Any) -> "RuntimeConfig":
def from_args(cls: Type[RuntimeConfig], args: Any) -> 'RuntimeConfig':
"""Given arguments, read in dbt_project.yml from the current directory,
read in packages.yml if it exists, and use them to find the profile to
load.
@@ -527,7 +562,11 @@ class UnsetProfileConfig(RuntimeConfig):
# if it's a real profile, return a real config
cls = RuntimeConfig
return cls.from_parts(project=project, profile=profile, args=args)
return cls.from_parts(
project=project,
profile=profile,
args=args
)
UNUSED_RESOURCE_CONFIGURATION_PATH_MESSAGE = """\
@@ -541,6 +580,6 @@ There are {} unused configuration paths:
def _is_config_used(path, fqns):
if fqns:
for fqn in fqns:
if len(path) <= len(fqn) and fqn[: len(path)] == path:
if len(path) <= len(fqn) and fqn[:len(path)] == path:
return True
return False
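Illustrative aside, not from the changeset: the prefix test in _is_config_used above restated as a runnable example; a configured path counts as used when it is a prefix of at least one node's fully qualified name. The fqns below are invented.

fqns = {
    ("my_project", "staging", "stg_orders"),
    ("my_project", "marts", "finance", "fct_payments"),
}

def is_config_used(path, fqns):
    return any(len(path) <= len(fqn) and fqn[:len(path)] == path for fqn in fqns)

print(is_config_used(("my_project", "marts"), fqns))         # True
print(is_config_used(("my_project", "intermediate"), fqns))  # False -> triggers the unused-path warning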


@@ -1,6 +1,8 @@
from pathlib import Path
from typing import Dict, Any
from dbt.clients.yaml_helper import yaml, Loader, Dumper, load_yaml_text # noqa: F401
from dbt.clients.yaml_helper import ( # noqa: F401
yaml, Loader, Dumper, load_yaml_text
)
from dbt.dataclass_schema import ValidationError
from .renderer import SelectorRenderer
@@ -28,8 +30,9 @@ Validator Error:
class SelectorConfig(Dict[str, SelectionSpec]):
@classmethod
def selectors_from_dict(cls, data: Dict[str, Any]) -> "SelectorConfig":
def selectors_from_dict(cls, data: Dict[str, Any]) -> 'SelectorConfig':
try:
SelectorFile.validate(data)
selector_file = SelectorFile.from_dict(data)
@@ -42,12 +45,12 @@ class SelectorConfig(Dict[str, SelectionSpec]):
f"union, intersection, string, dictionary. No lists. "
f"\nhttps://docs.getdbt.com/reference/node-selection/"
f"yaml-selectors",
result_type="invalid_selector",
result_type='invalid_selector'
) from exc
except RuntimeException as exc:
raise DbtSelectorsError(
f"Could not read selector file data: {exc}",
result_type="invalid_selector",
f'Could not read selector file data: {exc}',
result_type='invalid_selector',
) from exc
return cls(selectors)
@@ -57,28 +60,26 @@ class SelectorConfig(Dict[str, SelectionSpec]):
cls,
data: Dict[str, Any],
renderer: SelectorRenderer,
) -> "SelectorConfig":
) -> 'SelectorConfig':
try:
rendered = renderer.render_data(data)
except (ValidationError, RuntimeException) as exc:
raise DbtSelectorsError(
f"Could not render selector data: {exc}",
result_type="invalid_selector",
f'Could not render selector data: {exc}',
result_type='invalid_selector',
) from exc
return cls.selectors_from_dict(rendered)
@classmethod
def from_path(
cls,
path: Path,
renderer: SelectorRenderer,
) -> "SelectorConfig":
cls, path: Path, renderer: SelectorRenderer,
) -> 'SelectorConfig':
try:
data = load_yaml_text(load_file_contents(str(path)))
except (ValidationError, RuntimeException) as exc:
raise DbtSelectorsError(
f"Could not read selector file: {exc}",
result_type="invalid_selector",
f'Could not read selector file: {exc}',
result_type='invalid_selector',
path=path,
) from exc
@@ -90,7 +91,9 @@ class SelectorConfig(Dict[str, SelectionSpec]):
def selector_data_from_root(project_root: str) -> Dict[str, Any]:
selector_filepath = resolve_path_from_base("selectors.yml", project_root)
selector_filepath = resolve_path_from_base(
'selectors.yml', project_root
)
if path_exists(selector_filepath):
selectors_dict = load_yaml_text(load_file_contents(selector_filepath))
@@ -99,16 +102,18 @@ def selector_data_from_root(project_root: str) -> Dict[str, Any]:
return selectors_dict
def selector_config_from_data(selectors_data: Dict[str, Any]) -> SelectorConfig:
def selector_config_from_data(
selectors_data: Dict[str, Any]
) -> SelectorConfig:
if not selectors_data:
selectors_data = {"selectors": []}
selectors_data = {'selectors': []}
try:
selectors = SelectorConfig.selectors_from_dict(selectors_data)
except ValidationError as e:
raise DbtSelectorsError(
MALFORMED_SELECTOR_ERROR.format(error=str(e.message)),
result_type="invalid_selector",
result_type='invalid_selector',
) from e
return selectors
@@ -120,6 +125,7 @@ def selector_config_from_data(selectors_data: Dict[str, Any]) -> SelectorConfig:
# be necessary to make changes here. Ideally it would be
# good to combine the two flows into one at some point.
class SelectorDict:
@classmethod
def parse_dict_definition(cls, definition):
key = list(definition)[0]
@@ -130,10 +136,10 @@ class SelectorDict:
new_value = cls.parse_from_definition(sel_def)
new_values.append(new_value)
value = new_values
if key == "exclude":
if key == 'exclude':
definition = {key: value}
elif len(definition) == 1:
definition = {"method": key, "value": value}
definition = {'method': key, 'value': value}
return definition
@classmethod
@@ -155,10 +161,10 @@ class SelectorDict:
def parse_from_definition(cls, definition):
if isinstance(definition, str):
definition = SelectionCriteria.dict_from_single_spec(definition)
elif "union" in definition:
definition = cls.parse_a_definition("union", definition)
elif "intersection" in definition:
definition = cls.parse_a_definition("intersection", definition)
elif 'union' in definition:
definition = cls.parse_a_definition('union', definition)
elif 'intersection' in definition:
definition = cls.parse_a_definition('intersection', definition)
elif isinstance(definition, dict):
definition = cls.parse_dict_definition(definition)
return definition
@@ -169,8 +175,8 @@ class SelectorDict:
def parse_from_selectors_list(cls, selectors):
selector_dict = {}
for selector in selectors:
sel_name = selector["name"]
sel_name = selector['name']
selector_dict[sel_name] = selector
definition = cls.parse_from_definition(selector["definition"])
selector_dict[sel_name]["definition"] = definition
definition = cls.parse_from_definition(selector['definition'])
selector_dict[sel_name]['definition'] = definition
return selector_dict
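Illustrative aside, not from the changeset: a simplified walk-through of the selector parsing above. A single-key dict such as {'tag': 'nightly'} becomes {'method': 'tag', 'value': 'nightly'}, and an 'exclude' key keeps its recursively parsed list; the bare-string branch is an assumption here (the real code goes through SelectionCriteria.dict_from_single_spec).

def parse_dict_definition(definition):
    key = list(definition)[0]
    value = definition[key]
    if key == "exclude":
        return {key: [parse_from_definition(v) for v in value]}
    if len(definition) == 1:
        return {"method": key, "value": value}
    return definition

def parse_from_definition(definition):
    if isinstance(definition, str):
        # assumption for this sketch: treat bare strings as fqn selections
        return {"method": "fqn", "value": definition}
    if isinstance(definition, dict):
        return parse_dict_definition(definition)
    return definition

print(parse_from_definition({"tag": "nightly"}))
print(parse_from_definition({"exclude": ["stale_model", {"tag": "deprecated"}]}))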


@@ -15,8 +15,9 @@ def parse_cli_vars(var_string: str) -> Dict[str, Any]:
type_name = var_type.__name__
raise_compiler_error(
"The --vars argument must be a YAML dictionary, but was "
"of type '{}'".format(type_name)
)
"of type '{}'".format(type_name))
except ValidationException:
logger.error("The YAML provided in the --vars argument is not valid.\n")
logger.error(
"The YAML provided in the --vars argument is not valid.\n"
)
raise


@@ -1,16 +1,14 @@
import json
import os
from typing import Any, Dict, NoReturn, Optional, Mapping
from typing import (
Any, Dict, NoReturn, Optional, Mapping
)
from dbt import flags
from dbt import tracking
from dbt.clients.jinja import undefined_error, get_rendered
from dbt.clients.yaml_helper import ( # noqa: F401
yaml,
safe_load,
SafeLoader,
Loader,
Dumper,
yaml, safe_load, SafeLoader, Loader, Dumper
)
from dbt.contracts.graph.compiled import CompiledResource
from dbt.exceptions import raise_compiler_error, MacroReturn
@@ -27,26 +25,38 @@ import re
def get_pytz_module_context() -> Dict[str, Any]:
context_exports = pytz.__all__ # type: ignore
return {name: getattr(pytz, name) for name in context_exports}
return {
name: getattr(pytz, name) for name in context_exports
}
def get_datetime_module_context() -> Dict[str, Any]:
context_exports = ["date", "datetime", "time", "timedelta", "tzinfo"]
context_exports = [
'date',
'datetime',
'time',
'timedelta',
'tzinfo'
]
return {name: getattr(datetime, name) for name in context_exports}
return {
name: getattr(datetime, name) for name in context_exports
}
def get_re_module_context() -> Dict[str, Any]:
context_exports = re.__all__
return {name: getattr(re, name) for name in context_exports}
return {
name: getattr(re, name) for name in context_exports
}
def get_context_modules() -> Dict[str, Dict[str, Any]]:
return {
"pytz": get_pytz_module_context(),
"datetime": get_datetime_module_context(),
"re": get_re_module_context(),
'pytz': get_pytz_module_context(),
'datetime': get_datetime_module_context(),
're': get_re_module_context(),
}
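Illustrative aside, not from the changeset: what the modules mapping above exposes to Jinja, restricted here to the stdlib datetime and re contexts so the sketch runs without pytz; in a template these surface as modules.datetime, modules.re, and so on.

import datetime
import re
from typing import Any, Dict

def get_datetime_module_context() -> Dict[str, Any]:
    exports = ["date", "datetime", "time", "timedelta", "tzinfo"]
    return {name: getattr(datetime, name) for name in exports}

def get_re_module_context() -> Dict[str, Any]:
    return {name: getattr(re, name) for name in re.__all__}

modules = {"datetime": get_datetime_module_context(), "re": get_re_module_context()}

# Roughly what {{ modules.datetime.date.today() }} resolves to in a template.
print(modules["datetime"]["date"].today())
print(modules["re"]["match"](r"\d+", "123abc").group())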
@@ -80,8 +90,8 @@ class ContextMeta(type):
new_dct = {}
for base in bases:
context_members.update(getattr(base, "_context_members_", {}))
context_attrs.update(getattr(base, "_context_attrs_", {}))
context_members.update(getattr(base, '_context_members_', {}))
context_attrs.update(getattr(base, '_context_attrs_', {}))
for key, value in dct.items():
if isinstance(value, ContextMember):
@@ -90,22 +100,21 @@ class ContextMeta(type):
context_attrs[context_key] = key
value = value.inner
new_dct[key] = value
new_dct["_context_members_"] = context_members
new_dct["_context_attrs_"] = context_attrs
new_dct['_context_members_'] = context_members
new_dct['_context_attrs_'] = context_attrs
return type.__new__(mcls, name, bases, new_dct)
class Var:
UndefinedVarError = (
"Required var '{}' not found in config:\nVars " "supplied to {} = {}"
)
UndefinedVarError = "Required var '{}' not found in config:\nVars "\
"supplied to {} = {}"
_VAR_NOTSET = object()
def __init__(
self,
context: Mapping[str, Any],
cli_vars: Mapping[str, Any],
node: Optional[CompiledResource] = None,
node: Optional[CompiledResource] = None
) -> None:
self._context: Mapping[str, Any] = context
self._cli_vars: Mapping[str, Any] = cli_vars
@@ -120,12 +129,14 @@ class Var:
if self._node is not None:
return self._node.name
else:
return "<Configuration>"
return '<Configuration>'
def get_missing_var(self, var_name):
dct = {k: self._merged[k] for k in self._merged}
pretty_vars = json.dumps(dct, sort_keys=True, indent=4)
msg = self.UndefinedVarError.format(var_name, self.node_name, pretty_vars)
msg = self.UndefinedVarError.format(
var_name, self.node_name, pretty_vars
)
raise_compiler_error(msg, self._node)
def has_var(self, var_name: str):
@@ -156,7 +167,7 @@ class BaseContext(metaclass=ContextMeta):
def generate_builtins(self):
builtins: Dict[str, Any] = {}
for key, value in self._context_members_.items():
if hasattr(value, "__get__"):
if hasattr(value, '__get__'):
# handle properties, bound methods, etc
value = value.__get__(self)
builtins[key] = value
@@ -164,9 +175,9 @@ class BaseContext(metaclass=ContextMeta):
# no dbtClassMixin so this is not an actual override
def to_dict(self):
self._ctx["context"] = self._ctx
self._ctx['context'] = self._ctx
builtins = self.generate_builtins()
self._ctx["builtins"] = builtins
self._ctx['builtins'] = builtins
self._ctx.update(builtins)
return self._ctx
@@ -275,20 +286,18 @@ class BaseContext(metaclass=ContextMeta):
msg = f"Env var required but not provided: '{var}'"
undefined_error(msg)
if os.environ.get("DBT_MACRO_DEBUGGING"):
if os.environ.get('DBT_MACRO_DEBUGGING'):
@contextmember
@staticmethod
def debug():
"""Enter a debugger at this line in the compiled jinja code."""
import sys
import ipdb # type: ignore
frame = sys._getframe(3)
ipdb.set_trace(frame)
return ""
return ''
@contextmember("return")
@contextmember('return')
@staticmethod
def _return(data: Any) -> NoReturn:
"""The `return` function can be used in macros to return data to the
@@ -339,7 +348,9 @@ class BaseContext(metaclass=ContextMeta):
@contextmember
@staticmethod
def tojson(value: Any, default: Any = None, sort_keys: bool = False) -> Any:
def tojson(
value: Any, default: Any = None, sort_keys: bool = False
) -> Any:
"""The `tojson` context method can be used to serialize a Python
object primitive, eg. a `dict` or `list` to a json string.
@@ -435,7 +446,7 @@ class BaseContext(metaclass=ContextMeta):
logger.info(msg)
else:
logger.debug(msg)
return ""
return ''
@contextproperty
def run_started_at(self) -> Optional[datetime.datetime]:


@@ -4,14 +4,16 @@ from dbt.contracts.connection import AdapterRequiredConfig
from dbt.node_types import NodeType
from dbt.utils import MultiDict
from dbt.context import contextproperty, Var
from dbt.context.base import contextproperty, Var
from dbt.context.target import TargetContext
class ConfiguredContext(TargetContext):
config: AdapterRequiredConfig
def __init__(self, config: AdapterRequiredConfig) -> None:
def __init__(
self, config: AdapterRequiredConfig
) -> None:
super().__init__(config, config.cli_vars)
@contextproperty
@@ -68,7 +70,20 @@ class SchemaYamlContext(ConfiguredContext):
@contextproperty
def var(self) -> ConfiguredVar:
return ConfiguredVar(self._ctx, self.config, self._project_name)
return ConfiguredVar(
self._ctx, self.config, self._project_name
)
class MacroResolvingContext(ConfiguredContext):
def __init__(self, config):
super().__init__(config)
@contextproperty
def var(self) -> ConfiguredVar:
return ConfiguredVar(
self._ctx, self.config, self.config.project_name
)
def generate_schema_yml(
@@ -76,3 +91,10 @@ def generate_schema_yml(
) -> Dict[str, Any]:
ctx = SchemaYamlContext(config, project_name)
return ctx.to_dict()
def generate_macro_context(
config: AdapterRequiredConfig,
) -> Dict[str, Any]:
ctx = MacroResolvingContext(config)
return ctx.to_dict()


@@ -17,8 +17,8 @@ class ModelParts(IsFQNResource):
package_name: str
T = TypeVar("T") # any old type
C = TypeVar("C", bound=BaseConfig)
T = TypeVar('T') # any old type
C = TypeVar('C', bound=BaseConfig)
class ConfigSource:
@@ -36,13 +36,15 @@ class UnrenderedConfig(ConfigSource):
def get_config_dict(self, resource_type: NodeType) -> Dict[str, Any]:
unrendered = self.project.unrendered.project_dict
if resource_type == NodeType.Seed:
model_configs = unrendered.get("seeds")
model_configs = unrendered.get('seeds')
elif resource_type == NodeType.Snapshot:
model_configs = unrendered.get("snapshots")
model_configs = unrendered.get('snapshots')
elif resource_type == NodeType.Source:
model_configs = unrendered.get("sources")
model_configs = unrendered.get('sources')
elif resource_type == NodeType.Test:
model_configs = unrendered.get('tests')
else:
model_configs = unrendered.get("models")
model_configs = unrendered.get('models')
if model_configs is None:
return {}
@@ -61,6 +63,8 @@ class RenderedConfig(ConfigSource):
model_configs = self.project.snapshots
elif resource_type == NodeType.Source:
model_configs = self.project.sources
elif resource_type == NodeType.Test:
model_configs = self.project.tests
else:
model_configs = self.project.models
return model_configs
@@ -79,8 +83,8 @@ class BaseContextConfigGenerator(Generic[T]):
dependencies = self._active_project.load_dependencies()
if project_name not in dependencies:
raise InternalException(
f"Project name {project_name} not found in dependencies "
f"(found {list(dependencies)})"
f'Project name {project_name} not found in dependencies '
f'(found {list(dependencies)})'
)
return dependencies[project_name]
@@ -92,8 +96,8 @@ class BaseContextConfigGenerator(Generic[T]):
for level_config in fqn_search(model_configs, fqn):
result = {}
for key, value in level_config.items():
if key.startswith("+"):
result[key[1:]] = deepcopy(value)
if key.startswith('+'):
result[key[1:].strip()] = deepcopy(value)
elif not isinstance(value, dict):
result[key] = deepcopy(value)
@@ -116,11 +120,12 @@ class BaseContextConfigGenerator(Generic[T]):
def calculate_node_config(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: Dict[str, Any] = None
) -> BaseConfig:
own_config = self.get_node_project(project_name)
@@ -130,8 +135,15 @@ class BaseContextConfigGenerator(Generic[T]):
for fqn_config in project_configs:
result = self._update_from_config(result, fqn_config)
for config_call in config_calls:
result = self._update_from_config(result, config_call)
# When schema files patch config, it has lower precedence than
# config in the models (config_call_dict), so we add the patch_config_dict
# before the config_call_dict
if patch_config_dict:
result = self._update_from_config(result, patch_config_dict)
# config_calls are created in the 'experimental' model parser and
# the ParseConfigObject (via add_config_call)
result = self._update_from_config(result, config_call_dict)
if own_config.project_name != self._active_project.project_name:
for fqn_config in self._active_project_configs(fqn, resource_type):
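Illustrative aside, not from the changeset: a toy version of the precedence described in the comments above, with plain dict.update standing in for _update_from_config. Project-level config is applied first, then the schema-file patch config, then in-model config() calls, so later layers win for scalar keys; all values are invented.

from typing import Any, Dict

def layer(*configs: Dict[str, Any]) -> Dict[str, Any]:
    result: Dict[str, Any] = {}
    for cfg in configs:
        result.update(cfg)  # later layers override earlier ones
    return result

project_cfg = {"materialized": "view", "tags": ["nightly"]}
patch_cfg = {"materialized": "table"}          # from a schema .yml config block
config_call = {"materialized": "incremental"}  # from {{ config(...) }} in the model file

print(layer(project_cfg, patch_cfg, config_call)["materialized"])  # 'incremental'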
@@ -143,11 +155,12 @@ class BaseContextConfigGenerator(Generic[T]):
@abstractmethod
def calculate_node_config_dict(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: Dict[str, Any],
) -> Dict[str, Any]:
...
@@ -171,25 +184,31 @@ class ContextConfigGenerator(BaseContextConfigGenerator[C]):
def _update_from_config(
self, result: C, partial: Dict[str, Any], validate: bool = False
) -> C:
translated = self._active_project.credentials.translate_aliases(partial)
translated = self._active_project.credentials.translate_aliases(
partial
)
return result.update_from(
translated, self._active_project.credentials.type, validate=validate
translated,
self._active_project.credentials.type,
validate=validate
)
def calculate_node_config_dict(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: dict = None
) -> Dict[str, Any]:
config = self.calculate_node_config(
config_calls=config_calls,
config_call_dict=config_call_dict,
fqn=fqn,
resource_type=resource_type,
project_name=project_name,
base=base,
patch_config_dict=patch_config_dict
)
finalized = config.finalize_and_validate()
return finalized.to_dict(omit_none=True)
@@ -201,21 +220,27 @@ class UnrenderedConfigGenerator(BaseContextConfigGenerator[Dict[str, Any]]):
def calculate_node_config_dict(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: dict = None
) -> Dict[str, Any]:
return self.calculate_node_config(
config_calls=config_calls,
config_call_dict=config_call_dict,
fqn=fqn,
resource_type=resource_type,
project_name=project_name,
base=base,
patch_config_dict=patch_config_dict
)
def initial_result(self, resource_type: NodeType, base: bool) -> Dict[str, Any]:
def initial_result(
self,
resource_type: NodeType,
base: bool
) -> Dict[str, Any]:
return {}
def _update_from_config(
@@ -224,7 +249,9 @@ class UnrenderedConfigGenerator(BaseContextConfigGenerator[Dict[str, Any]]):
partial: Dict[str, Any],
validate: bool = False,
) -> Dict[str, Any]:
translated = self._active_project.credentials.translate_aliases(partial)
translated = self._active_project.credentials.translate_aliases(
partial
)
result.update(translated)
return result
@@ -237,20 +264,39 @@ class ContextConfig:
resource_type: NodeType,
project_name: str,
) -> None:
self._config_calls: List[Dict[str, Any]] = []
self._config_call_dict: Dict[str, Any] = {}
self._active_project = active_project
self._fqn = fqn
self._resource_type = resource_type
self._project_name = project_name
def update_in_model_config(self, opts: Dict[str, Any]) -> None:
self._config_calls.append(opts)
def add_config_call(self, opts: Dict[str, Any]) -> None:
dct = self._config_call_dict
self._add_config_call(dct, opts)
@classmethod
def _add_config_call(cls, config_call_dict, opts: Dict[str, Any]) -> None:
for k, v in opts.items():
# MergeBehavior for post-hook and pre-hook is to collect all
# values, instead of overwriting
if k in BaseConfig.mergebehavior['append']:
if not isinstance(v, list):
v = [v]
if k in BaseConfig.mergebehavior['update'] and not isinstance(v, dict):
raise InternalException(f'expected dict, got {v}')
if k in config_call_dict and isinstance(config_call_dict[k], list):
config_call_dict[k].extend(v)
elif k in config_call_dict and isinstance(config_call_dict[k], dict):
config_call_dict[k].update(v)
else:
config_call_dict[k] = v
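Illustrative aside, not from the changeset: a standalone sketch of the _add_config_call merge rules above, with the append keys hard-coded instead of read from BaseConfig.mergebehavior; dict-valued keys are merged by the elif branch, and the hook strings are invented.

from typing import Any, Dict

APPEND_KEYS = {"pre-hook", "post-hook"}  # collected into lists rather than overwritten

def add_config_call(config_call_dict: Dict[str, Any], opts: Dict[str, Any]) -> None:
    for k, v in opts.items():
        if k in APPEND_KEYS and not isinstance(v, list):
            v = [v]
        if k in config_call_dict and isinstance(config_call_dict[k], list):
            config_call_dict[k].extend(v)
        elif k in config_call_dict and isinstance(config_call_dict[k], dict):
            config_call_dict[k].update(v)
        else:
            config_call_dict[k] = v

calls: Dict[str, Any] = {}
add_config_call(calls, {"materialized": "table", "post-hook": "analyze {{ this }}"})
add_config_call(calls, {"materialized": "view", "post-hook": "grant select on {{ this }} to reporter"})
print(calls["materialized"])  # 'view' - scalar keys are overwritten by the later call
print(calls["post-hook"])     # both hooks kept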
def build_config_dict(
self,
base: bool = False,
*,
rendered: bool = True,
patch_config_dict: dict = None
) -> Dict[str, Any]:
if rendered:
src = ContextConfigGenerator(self._active_project)
@@ -258,9 +304,10 @@ class ContextConfig:
src = UnrenderedConfigGenerator(self._active_project)
return src.calculate_node_config_dict(
config_calls=self._config_calls,
config_call_dict=self._config_call_dict,
fqn=self._fqn,
resource_type=self._resource_type,
project_name=self._project_name,
base=base,
patch_config_dict=patch_config_dict
)


@@ -1,4 +1,6 @@
from typing import Any, Dict, Union
from typing import (
Any, Dict, Union
)
from dbt.exceptions import (
doc_invalid_args,
@@ -9,7 +11,7 @@ from dbt.contracts.graph.compiled import CompileResultNode
from dbt.contracts.graph.manifest import Manifest
from dbt.contracts.graph.parsed import ParsedMacro
from dbt.context import contextmember
from dbt.context.base import contextmember
from dbt.context.configured import SchemaYamlContext
@@ -55,14 +57,19 @@ class DocsRuntimeContext(SchemaYamlContext):
else:
doc_invalid_args(self.node, args)
# ParsedDocumentation
target_doc = self.manifest.resolve_doc(
doc_name,
doc_package_name,
self._project_name,
self.node.package_name,
)
if target_doc is None:
if target_doc:
file_id = target_doc.file_id
if file_id in self.manifest.files:
source_file = self.manifest.files[file_id]
source_file.add_node(self.node.unique_id)
else:
doc_target_not_found(self.node, doc_name, doc_package_name)
return target_doc.block_contents


@@ -1,4 +1,6 @@
from typing import Dict, MutableMapping, Optional
from typing import (
Dict, MutableMapping, Optional
)
from dbt.contracts.graph.parsed import ParsedMacro
from dbt.exceptions import raise_duplicate_macro_name, raise_compiler_error
from dbt.include.global_project import PROJECT_NAME as GLOBAL_PROJECT_NAME
@@ -12,8 +14,12 @@ MacroNamespace = Dict[str, ParsedMacro]
# so that higher precedence macros are found first.
# This functionality is also provided by the MacroNamespace,
# but the intention is to eventually replace that class.
# This enables us to get the macor unique_id without
# This enables us to get the macro unique_id without
# processing every macro in the project.
# Note: the root project macros override everything in the
# dbt internal projects. External projects (dependencies) will
# use their own macros first, then pull from the root project
# followed by dbt internal projects.
class MacroResolver:
def __init__(
self,
@@ -43,20 +49,32 @@ class MacroResolver:
for pkg in reversed(self.internal_package_names):
if pkg in self.internal_packages:
# Turn the internal packages into a flat namespace
self.internal_packages_namespace.update(self.internal_packages[pkg])
self.internal_packages_namespace.update(
self.internal_packages[pkg])
# search order:
# local_namespace (package of particular node), not including
# the internal packages or the root package
# This means that within an extra package, it uses its own macros
# root package namespace
# non-internal packages (that aren't local or root)
# dbt internal packages
def _build_macros_by_name(self):
macros_by_name = {}
# search root package macros
for macro in self.root_package_macros.values():
# all internal packages (already in the right order)
for macro in self.internal_packages_namespace.values():
macros_by_name[macro.name] = macro
# search miscellaneous non-internal packages
# non-internal packages
for fnamespace in self.packages.values():
for macro in fnamespace.values():
macros_by_name[macro.name] = macro
# search all internal packages
for macro in self.internal_packages_namespace.values():
# root package macros
for macro in self.root_package_macros.values():
macros_by_name[macro.name] = macro
self.macros_by_name = macros_by_name
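Illustrative aside, not from the changeset: the precedence produced by the reordered _build_macros_by_name above, shown with plain dicts. Later updates overwrite earlier ones, so root-package macros beat non-internal packages, which beat dbt's internal packages; the macro and package names are hypothetical.

internal = {"generate_schema_name": "internal (dbt) definition"}
non_internal = {"generate_schema_name": "some_package override"}
root = {"generate_schema_name": "my_project override"}

macros_by_name = {}
macros_by_name.update(internal)      # lowest precedence: internal packages
macros_by_name.update(non_internal)  # then non-internal packages
macros_by_name.update(root)          # highest precedence: root package

print(macros_by_name["generate_schema_name"])  # 'my_project override'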
def _add_macro_to(
@@ -71,7 +89,9 @@ class MacroResolver:
package_namespaces[macro.package_name] = namespace
if macro.name in namespace:
raise_duplicate_macro_name(macro, macro, macro.package_name)
raise_duplicate_macro_name(
macro, macro, macro.package_name
)
package_namespaces[macro.package_name][macro.name] = macro
def add_macro(self, macro: ParsedMacro):
@@ -92,20 +112,26 @@ class MacroResolver:
for macro in self.macros.values():
self.add_macro(macro)
def get_macro_id(self, local_package, macro_name):
def get_macro(self, local_package, macro_name):
local_package_macros = {}
if (
local_package not in self.internal_package_names
and local_package in self.packages
):
if (local_package not in self.internal_package_names and
local_package in self.packages):
local_package_macros = self.packages[local_package]
# First: search the local packages for this macro
if macro_name in local_package_macros:
return local_package_macros[macro_name].unique_id
return local_package_macros[macro_name]
# Now look up in the standard search order
if macro_name in self.macros_by_name:
return self.macros_by_name[macro_name].unique_id
return self.macros_by_name[macro_name]
return None
def get_macro_id(self, local_package, macro_name):
macro = self.get_macro(local_package, macro_name)
if macro is None:
return None
else:
return macro.unique_id
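Illustrative aside, not from the changeset: a minimal lookup mirroring the new get_macro above; the node's own non-internal package is consulted first, then the flattened macros_by_name mapping. The package and macro names are invented.

def get_macro(local_package, macro_name, packages, macros_by_name, internal_names):
    local = {}
    if local_package not in internal_names and local_package in packages:
        local = packages[local_package]
    if macro_name in local:
        return local[macro_name]      # the node's own package wins
    return macros_by_name.get(macro_name)

packages = {"some_package": {"slugify": "some_package.slugify"}}
macros_by_name = {"slugify": "my_project.slugify"}
print(get_macro("some_package", "slugify", packages, macros_by_name, {"dbt"}))  # some_package.slugify
print(get_macro("my_project", "slugify", packages, macros_by_name, {"dbt"}))    # my_project.slugify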
# Currently this is just used by test processing in the schema
# parser (in connection with the MacroResolver). Future work
@@ -114,22 +140,42 @@ class MacroResolver:
# is that you can limit the number of macros provided to the
# context dictionary in the 'to_dict' manifest method.
class TestMacroNamespace:
def __init__(self, macro_resolver, ctx, node, thread_ctx, depends_on_macros):
def __init__(
self, macro_resolver, ctx, node, thread_ctx, depends_on_macros
):
self.macro_resolver = macro_resolver
self.ctx = ctx
self.node = node
self.node = node  # can be None
self.thread_ctx = thread_ctx
local_namespace = {}
self.local_namespace = {}
self.project_namespace = {}
if depends_on_macros:
for macro_unique_id in depends_on_macros:
macro = self.manifest.macros[macro_unique_id]
local_namespace[macro.name] = MacroGenerator(
macro,
self.ctx,
self.node,
self.thread_ctx,
)
self.local_namespace = local_namespace
dep_macros = []
self.recursively_get_depends_on_macros(depends_on_macros, dep_macros)
for macro_unique_id in dep_macros:
if macro_unique_id in self.macro_resolver.macros:
# Split up the macro unique_id to get the project_name
(_, project_name, macro_name) = macro_unique_id.split('.')
# Save the plain macro_name in the local_namespace
macro = self.macro_resolver.macros[macro_unique_id]
macro_gen = MacroGenerator(
macro, self.ctx, self.node, self.thread_ctx,
)
self.local_namespace[macro_name] = macro_gen
# We also need the two part macro name
if project_name not in self.project_namespace:
self.project_namespace[project_name] = {}
self.project_namespace[project_name][macro_name] = macro_gen
def recursively_get_depends_on_macros(self, depends_on_macros, dep_macros):
for macro_unique_id in depends_on_macros:
if macro_unique_id in dep_macros:
continue
dep_macros.append(macro_unique_id)
if macro_unique_id in self.macro_resolver.macros:
macro = self.macro_resolver.macros[macro_unique_id]
if macro.depends_on.macros:
self.recursively_get_depends_on_macros(macro.depends_on.macros, dep_macros)
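Illustrative aside, not from the changeset: recursively_get_depends_on_macros above restated against a plain dict of unique_id -> dependency list so it runs standalone; the unique_ids are invented.

macro_deps = {
    "macro.my_project.test_positive": ["macro.dbt.default__test_positive"],
    "macro.dbt.default__test_positive": ["macro.dbt.statement"],
    "macro.dbt.statement": [],
}

def collect(depends_on, acc):
    for uid in depends_on:
        if uid in acc:
            continue            # each macro is visited only once
        acc.append(uid)
        collect(macro_deps.get(uid, []), acc)

acc = []
collect(["macro.my_project.test_positive"], acc)
print(acc)  # all three unique_ids, in discovery order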
def get_from_package(
self, package_name: Optional[str], name: str
@@ -139,9 +185,15 @@ class TestMacroNamespace:
macro = self.macro_resolver.macros_by_name.get(name)
elif package_name == GLOBAL_PROJECT_NAME:
macro = self.macro_resolver.internal_packages_namespace.get(name)
elif package_name in self.resolver.packages:
elif package_name in self.macro_resolver.packages:
macro = self.macro_resolver.packages[package_name].get(name)
else:
raise_compiler_error(f"Could not find package '{package_name}'")
macro_func = MacroGenerator(macro, self.ctx, self.node, self.thread_ctx)
raise_compiler_error(
f"Could not find package '{package_name}'"
)
if not macro:
return None
macro_func = MacroGenerator(
macro, self.ctx, self.node, self.thread_ctx
)
return macro_func


@@ -1,9 +1,13 @@
from typing import Any, Dict, Iterable, Union, Optional, List, Iterator, Mapping, Set
from typing import (
Any, Dict, Iterable, Union, Optional, List, Iterator, Mapping, Set
)
from dbt.clients.jinja import MacroGenerator, MacroStack
from dbt.contracts.graph.parsed import ParsedMacro
from dbt.include.global_project import PROJECT_NAME as GLOBAL_PROJECT_NAME
from dbt.exceptions import raise_duplicate_macro_name, raise_compiler_error
from dbt.exceptions import (
raise_duplicate_macro_name, raise_compiler_error
)
FlatNamespace = Dict[str, MacroGenerator]
@@ -15,13 +19,17 @@ FullNamespace = Dict[str, NamespaceMember]
# and provide the ability to flatten them into the ManifestContexts
# that are created for jinja, so that macro calls can be resolved.
# Creates special iterators and _keys methods to flatten the lists.
# When this class is created it has a static 'local_namespace' which
# depends on the package of the node, so it only works for one
# particular local package at a time for "flattening" into a context.
# 'get_by_package' should work for any macro.
class MacroNamespace(Mapping):
def __init__(
self,
global_namespace: FlatNamespace,
local_namespace: FlatNamespace,
global_project_namespace: FlatNamespace,
packages: Dict[str, FlatNamespace],
global_namespace: FlatNamespace, # root package macros
local_namespace: FlatNamespace, # packages for *this* node
global_project_namespace: FlatNamespace, # internal packages
packages: Dict[str, FlatNamespace], # non-internal packages
):
self.global_namespace: FlatNamespace = global_namespace
self.local_namespace: FlatNamespace = local_namespace
@@ -29,13 +37,13 @@ class MacroNamespace(Mapping):
self.global_project_namespace: FlatNamespace = global_project_namespace
def _search_order(self) -> Iterable[Union[FullNamespace, FlatNamespace]]:
yield self.local_namespace
yield self.global_namespace
yield self.packages
yield self.local_namespace # local package
yield self.global_namespace # root package
yield self.packages # non-internal packages
yield {
GLOBAL_PROJECT_NAME: self.global_project_namespace,
GLOBAL_PROJECT_NAME: self.global_project_namespace, # dbt
}
yield self.global_project_namespace
yield self.global_project_namespace  # other internal projects besides dbt
# provides special keys method for MacroNamespace iterator
# returns keys from local_namespace, global_namespace, packages,
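Illustrative aside, not from the changeset: how the _search_order above behaves when a name is resolved; the first namespace in the yielded order that contains the name wins. The namespaces here are plain dicts with invented contents.

def resolve(name, search_order):
    for namespace in search_order:
        if name in namespace:
            return namespace[name]
    return None

local = {"my_macro": "local package definition"}
root = {"my_macro": "root package definition", "other_macro": "root package definition"}
internal = {"other_macro": "internal (dbt) definition"}

print(resolve("my_macro", [local, root, internal]))     # local package wins
print(resolve("other_macro", [local, root, internal]))  # root package beats internal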
@@ -71,7 +79,9 @@ class MacroNamespace(Mapping):
elif package_name in self.packages:
return self.packages[package_name].get(name)
else:
raise_compiler_error(f"Could not find package '{package_name}'")
raise_compiler_error(
f"Could not find package '{package_name}'"
)
# This class builds the MacroNamespace by adding macros to
@@ -92,7 +102,9 @@ class MacroNamespaceBuilder:
# internal packages comes from get_adapter_package_names
self.internal_package_names = set(internal_packages)
self.internal_package_names_order = internal_packages
# macro_func is added here if in root package
# macro_func is added here if in root package, since
# the root package acts as a "global" namespace, overriding
# everything else except local external package macro calls
self.globals: FlatNamespace = {}
# macro_func is added here if it's the package for this node
self.locals: FlatNamespace = {}
@@ -116,7 +128,9 @@ class MacroNamespaceBuilder:
hierarchy[macro.package_name] = namespace
if macro.name in namespace:
raise_duplicate_macro_name(macro_func.macro, macro, macro.package_name)
raise_duplicate_macro_name(
macro_func.macro, macro, macro.package_name
)
hierarchy[macro.package_name][macro.name] = macro_func
def add_macro(self, macro: ParsedMacro, ctx: Dict[str, Any]):
@@ -161,8 +175,8 @@ class MacroNamespaceBuilder:
global_project_namespace.update(self.internal_packages[pkg])
return MacroNamespace(
global_namespace=self.globals,
local_namespace=self.locals,
global_project_namespace=global_project_namespace,
packages=self.packages,
global_namespace=self.globals, # root package macros
local_namespace=self.locals, # packages for *this* node
global_project_namespace=global_project_namespace, # internal packages
packages=self.packages,  # non-internal packages
)


@@ -2,7 +2,7 @@ from typing import List
from dbt.clients.jinja import MacroStack
from dbt.contracts.connection import AdapterRequiredConfig
from dbt.contracts.graph.manifest import Manifest, AnyManifest
from dbt.contracts.graph.manifest import Manifest
from dbt.context.macro_resolver import TestMacroNamespace
@@ -17,11 +17,10 @@ class ManifestContext(ConfiguredContext):
The given macros can override any previous context values, which will be
available as if they were accessed relative to the package name.
"""
def __init__(
self,
config: AdapterRequiredConfig,
manifest: AnyManifest,
manifest: Manifest,
search_package: str,
) -> None:
super().__init__(config)
@@ -38,12 +37,13 @@ class ManifestContext(ConfiguredContext):
# this takes all the macros in the manifest and adds them
# to the MacroNamespaceBuilder stored in self.namespace
builder = self._get_namespace_builder()
return builder.build_namespace(self.manifest.macros.values(), self._ctx)
return builder.build_namespace(
self.manifest.macros.values(), self._ctx
)
def _get_namespace_builder(self) -> MacroNamespaceBuilder:
# avoid an import loop
from dbt.adapters.factory import get_adapter_package_names
internal_packages: List[str] = get_adapter_package_names(
self.config.credentials.type
)
@@ -62,16 +62,21 @@ class ManifestContext(ConfiguredContext):
# keys in the manifest dictionary
if isinstance(self.namespace, TestMacroNamespace):
dct.update(self.namespace.local_namespace)
dct.update(self.namespace.project_namespace)
else:
dct.update(self.namespace)
return dct
class QueryHeaderContext(ManifestContext):
def __init__(self, config: AdapterRequiredConfig, manifest: Manifest) -> None:
def __init__(
self, config: AdapterRequiredConfig, manifest: Manifest
) -> None:
super().__init__(config, manifest, config.project_name)
def generate_query_header_context(config: AdapterRequiredConfig, manifest: Manifest):
def generate_query_header_context(
config: AdapterRequiredConfig, manifest: Manifest
):
ctx = QueryHeaderContext(config, manifest)
return ctx.to_dict()


@@ -1,33 +1,29 @@
import abc
import os
from typing import (
Callable,
Any,
Dict,
Optional,
Union,
List,
TypeVar,
Type,
Iterable,
Callable, Any, Dict, Optional, Union, List, TypeVar, Type, Iterable,
Mapping,
)
from typing_extensions import Protocol
from dbt import deprecations
from dbt.adapters.base.column import Column
from dbt.adapters.factory import get_adapter, get_adapter_package_names
from dbt.adapters.factory import (
get_adapter, get_adapter_package_names, get_adapter_type_names
)
from dbt.clients import agate_helper
from dbt.clients.jinja import get_rendered, MacroGenerator, MacroStack
from dbt.config import RuntimeConfig, Project
from dbt.context import contextmember, contextproperty, Var
from .base import contextmember, contextproperty, Var
from .configured import FQNLookup
from .context_config import ContextConfig
from dbt.context.macro_resolver import MacroResolver, TestMacroNamespace
from .macros import MacroNamespaceBuilder, MacroNamespace
from .manifest import ManifestContext
from dbt.contracts.connection import AdapterResponse
from dbt.contracts.graph.manifest import Manifest, AnyManifest, Disabled, MacroManifest
from dbt.contracts.graph.manifest import (
Manifest, Disabled
)
from dbt.contracts.graph.compiled import (
CompiledResource,
CompiledSeedNode,
@@ -56,7 +52,9 @@ from dbt.config import IsFQNResource
from dbt.logger import GLOBAL_LOGGER as logger # noqa
from dbt.node_types import NodeType
from dbt.utils import merge, AttrDict, MultiDict
from dbt.utils import (
merge, AttrDict, MultiDict
)
import agate
@@ -79,8 +77,9 @@ class RelationProxy:
return self._relation_type.create_from_source(*args, **kwargs)
def create(self, *args, **kwargs):
kwargs["quote_policy"] = merge(
self._quoting_config, kwargs.pop("quote_policy", {})
kwargs['quote_policy'] = merge(
self._quoting_config,
kwargs.pop('quote_policy', {})
)
return self._relation_type.create(*args, **kwargs)
@@ -97,7 +96,7 @@ class BaseDatabaseWrapper:
self._namespace = namespace
def __getattr__(self, name):
raise NotImplementedError("subclasses need to implement this")
raise NotImplementedError('subclasses need to implement this')
@property
def config(self):
@@ -110,19 +109,23 @@ class BaseDatabaseWrapper:
return self._adapter.commit_if_has_connection()
def _get_adapter_macro_prefixes(self) -> List[str]:
# a future version of this could have plugins automatically call fall
# back to their dependencies' dependencies by using
# `get_adapter_type_names` instead of `[self.config.credentials.type]`
search_prefixes = [self._adapter.type(), "default"]
# order matters for dispatch:
# 1. current adapter
# 2. any parent adapters (dependencies)
# 3. 'default'
search_prefixes = get_adapter_type_names(self._adapter.type()) + ['default']
return search_prefixes
def dispatch(
self, macro_name: str, packages: Optional[List[str]] = None
self,
macro_name: str,
macro_namespace: Optional[str] = None,
packages: Optional[List[str]] = None,
) -> MacroGenerator:
search_packages: List[Optional[str]]
if "." in macro_name:
suggest_package, suggest_macro_name = macro_name.split(".", 1)
if '.' in macro_name:
suggest_package, suggest_macro_name = macro_name.split('.', 1)
msg = (
f'In adapter.dispatch, got a macro name of "{macro_name}", '
f'but "." is not a valid macro name component. Did you mean '
@@ -131,38 +134,50 @@ class BaseDatabaseWrapper:
)
raise CompilationException(msg)
if packages is None:
if packages is not None:
deprecations.warn('dispatch-packages', macro_name=macro_name)
namespace = packages if packages else macro_namespace
if namespace is None:
search_packages = [None]
elif isinstance(packages, str):
raise CompilationException(
f"In adapter.dispatch, got a string packages argument "
f'("{packages}"), but packages should be None or a list.'
)
elif isinstance(namespace, str):
search_packages = self._adapter.config.get_macro_search_order(namespace)
if not search_packages and namespace in self._adapter.config.dependencies:
search_packages = [namespace]
if not search_packages:
raise CompilationException(
f'In adapter.dispatch, got a string packages argument '
f'("{packages}"), but packages should be None or a list.'
)
else:
search_packages = packages
# Not a string and not None so must be a list
search_packages = namespace
attempts = []
for package_name in search_packages:
for prefix in self._get_adapter_macro_prefixes():
search_name = f"{prefix}__{macro_name}"
search_name = f'{prefix}__{macro_name}'
try:
# this uses the namespace from the context
macro = self._namespace.get_from_package(package_name, search_name)
macro = self._namespace.get_from_package(
package_name, search_name
)
except CompilationException as exc:
raise CompilationException(
f"In dispatch: {exc.msg}",
f'In dispatch: {exc.msg}',
) from exc
if package_name is None:
attempts.append(search_name)
else:
attempts.append(f"{package_name}.{search_name}")
attempts.append(f'{package_name}.{search_name}')
if macro is not None:
return macro
searched = ", ".join(repr(a) for a in attempts)
searched = ', '.join(repr(a) for a in attempts)
msg = (
f"In dispatch: No macro named '{macro_name}' found\n"
f" Searched for: {searched}"
@@ -192,10 +207,14 @@ class BaseResolver(metaclass=abc.ABCMeta):
class BaseRefResolver(BaseResolver):
@abc.abstractmethod
def resolve(self, name: str, package: Optional[str] = None) -> RelationProxy:
def resolve(
self, name: str, package: Optional[str] = None
) -> RelationProxy:
...
def _repack_args(self, name: str, package: Optional[str]) -> List[str]:
def _repack_args(
self, name: str, package: Optional[str]
) -> List[str]:
if package is None:
return [name]
else:
@@ -204,13 +223,14 @@ class BaseRefResolver(BaseResolver):
def validate_args(self, name: str, package: Optional[str]):
if not isinstance(name, str):
raise CompilationException(
f"The name argument to ref() must be a string, got " f"{type(name)}"
f'The name argument to ref() must be a string, got '
f'{type(name)}'
)
if package is not None and not isinstance(package, str):
raise CompilationException(
f"The package argument to ref() must be a string or None, got "
f"{type(package)}"
f'The package argument to ref() must be a string or None, got '
f'{type(package)}'
)
def __call__(self, *args: str) -> RelationProxy:
@@ -235,19 +255,20 @@ class BaseSourceResolver(BaseResolver):
def validate_args(self, source_name: str, table_name: str):
if not isinstance(source_name, str):
raise CompilationException(
f"The source name (first) argument to source() must be a "
f"string, got {type(source_name)}"
f'The source name (first) argument to source() must be a '
f'string, got {type(source_name)}'
)
if not isinstance(table_name, str):
raise CompilationException(
f"The table name (second) argument to source() must be a "
f"string, got {type(table_name)}"
f'The table name (second) argument to source() must be a '
f'string, got {type(table_name)}'
)
def __call__(self, *args: str) -> RelationProxy:
if len(args) != 2:
raise_compiler_error(
f"source() takes exactly two arguments ({len(args)} given)", self.model
f"source() takes exactly two arguments ({len(args)} given)",
self.model
)
self.validate_args(args[0], args[1])
return self.resolve(args[0], args[1])
@@ -258,22 +279,21 @@ class Config(Protocol):
...
# `config` implementations
# Implementation of "config(..)" calls in models
class ParseConfigObject(Config):
def __init__(self, model, context_config: Optional[ContextConfig]):
self.model = model
self.context_config = context_config
def _transform_config(self, config):
for oldkey in ("pre_hook", "post_hook"):
for oldkey in ('pre_hook', 'post_hook'):
if oldkey in config:
newkey = oldkey.replace("_", "-")
newkey = oldkey.replace('_', '-')
if newkey in config:
raise_compiler_error(
'Invalid config, has conflicting keys "{}" and "{}"'.format(
oldkey, newkey
),
self.model,
'Invalid config, has conflicting keys "{}" and "{}"'
.format(oldkey, newkey),
self.model
)
config[newkey] = config.pop(oldkey)
return config
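As an illustration only (a standalone sketch, not the class above), the pre_hook/post_hook key renaming can be exercised like this:

def transform_hook_keys(config: dict) -> dict:
    # Mirrors _transform_config above: 'pre_hook'/'post_hook' are rewritten to
    # their hyphenated forms, and supplying both spellings is treated as an error.
    for oldkey in ('pre_hook', 'post_hook'):
        if oldkey in config:
            newkey = oldkey.replace('_', '-')
            if newkey in config:
                raise ValueError(
                    f'Invalid config, has conflicting keys "{oldkey}" and "{newkey}"'
                )
            config[newkey] = config.pop(oldkey)
    return config

# transform_hook_keys({'pre_hook': 'grant select on ...'}) -> {'pre-hook': 'grant select on ...'}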
@@ -284,25 +304,29 @@ class ParseConfigObject(Config):
elif len(args) == 0 and len(kwargs) > 0:
opts = kwargs
else:
raise_compiler_error("Invalid inline model config", self.model)
raise_compiler_error(
"Invalid inline model config",
self.model)
opts = self._transform_config(opts)
# it's ok to have a parse context with no context config, but you must
# not call it!
if self.context_config is None:
raise RuntimeException("At parse time, did not receive a context config")
self.context_config.update_in_model_config(opts)
return ""
raise RuntimeException(
'At parse time, did not receive a context config'
)
self.context_config.add_config_call(opts)
return ''
def set(self, name, value):
return self.__call__({name: value})
def require(self, name, validator=None):
return ""
return ''
def get(self, name, validator=None, default=None):
return ""
return ''
def persist_relation_docs(self) -> bool:
return False
@@ -312,12 +336,14 @@ class ParseConfigObject(Config):
class RuntimeConfigObject(Config):
def __init__(self, model, context_config: Optional[ContextConfig] = None):
def __init__(
self, model, context_config: Optional[ContextConfig] = None
):
self.model = model
# we never use or get a config, only the parser cares
def __call__(self, *args, **kwargs):
return ""
return ''
def set(self, name, value):
return self.__call__({name: value})
@@ -327,7 +353,7 @@ class RuntimeConfigObject(Config):
def _lookup(self, name, default=_MISSING):
# if this is a macro, there might be no `model.config`.
if not hasattr(self.model, "config"):
if not hasattr(self.model, 'config'):
result = default
else:
result = self.model.config.get(name, default)
@@ -352,24 +378,22 @@ class RuntimeConfigObject(Config):
return to_return
def persist_relation_docs(self) -> bool:
persist_docs = self.get("persist_docs", default={})
persist_docs = self.get('persist_docs', default={})
if not isinstance(persist_docs, dict):
raise_compiler_error(
f"Invalid value provided for 'persist_docs'. Expected dict "
f"but received {type(persist_docs)}"
)
f"but received {type(persist_docs)}")
return persist_docs.get("relation", False)
return persist_docs.get('relation', False)
def persist_column_docs(self) -> bool:
persist_docs = self.get("persist_docs", default={})
persist_docs = self.get('persist_docs', default={})
if not isinstance(persist_docs, dict):
raise_compiler_error(
f"Invalid value provided for 'persist_docs'. Expected dict "
f"but received {type(persist_docs)}"
)
f"but received {type(persist_docs)}")
return persist_docs.get("columns", False)
return persist_docs.get('columns', False)
# `adapter` implementations
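A hedged standalone sketch of how the persist_docs flags above resolve from a model config dict (illustrative helper, not part of the diff):

def persist_docs_flags(config: dict) -> tuple:
    # persist_docs must be a dict; 'relation' and 'columns' both default to False.
    persist_docs = config.get('persist_docs', {})
    if not isinstance(persist_docs, dict):
        raise TypeError(
            f"Invalid value provided for 'persist_docs'. Expected dict "
            f"but received {type(persist_docs)}"
        )
    return persist_docs.get('relation', False), persist_docs.get('columns', False)

# persist_docs_flags({'persist_docs': {'relation': True}}) -> (True, False)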
@@ -379,10 +403,8 @@ class ParseDatabaseWrapper(BaseDatabaseWrapper):
"""
def __getattr__(self, name):
override = (
name in self._adapter._available_
and name in self._adapter._parse_replacements_
)
override = (name in self._adapter._available_ and
name in self._adapter._parse_replacements_)
if override:
return self._adapter._parse_replacements_[name]
@@ -414,7 +436,9 @@ class RuntimeDatabaseWrapper(BaseDatabaseWrapper):
# `ref` implementations
class ParseRefResolver(BaseRefResolver):
def resolve(self, name: str, package: Optional[str] = None) -> RelationProxy:
def resolve(
self, name: str, package: Optional[str] = None
) -> RelationProxy:
self.model.refs.append(self._repack_args(name, package))
return self.Relation.create_from(self.config, self.model)
@@ -444,15 +468,22 @@ class RuntimeRefResolver(BaseRefResolver):
self.validate(target_model, target_name, target_package)
return self.create_relation(target_model, target_name)
def create_relation(self, target_model: ManifestNode, name: str) -> RelationProxy:
def create_relation(
self, target_model: ManifestNode, name: str
) -> RelationProxy:
if target_model.is_ephemeral_model:
self.model.set_cte(target_model.unique_id, None)
return self.Relation.create_ephemeral_from_node(self.config, target_model)
return self.Relation.create_ephemeral_from_node(
self.config, target_model
)
else:
return self.Relation.create_from(self.config, target_model)
def validate(
self, resolved: ManifestNode, target_name: str, target_package: Optional[str]
self,
resolved: ManifestNode,
target_name: str,
target_package: Optional[str]
) -> None:
if resolved.unique_id not in self.model.depends_on.nodes:
args = self._repack_args(target_name, target_package)
@@ -468,15 +499,16 @@ class OperationRefResolver(RuntimeRefResolver):
) -> None:
pass
def create_relation(self, target_model: ManifestNode, name: str) -> RelationProxy:
def create_relation(
self, target_model: ManifestNode, name: str
) -> RelationProxy:
if target_model.is_ephemeral_model:
# In operations, we can't ref() ephemeral nodes, because
# ParsedMacros do not support set_cte
raise_compiler_error(
"Operations can not ref() ephemeral nodes, but {} is ephemeral".format(
target_model.name
),
self.model,
'Operations can not ref() ephemeral nodes, but {} is ephemeral'
.format(target_model.name),
self.model
)
else:
return super().create_relation(target_model, name)
@@ -528,7 +560,8 @@ class ModelConfiguredVar(Var):
if package_name not in dependencies:
# I don't think this is actually reachable
raise_compiler_error(
f"Node package named {package_name} not found!", self._node
f'Node package named {package_name} not found!',
self._node
)
yield dependencies[package_name]
yield self._config
@@ -600,7 +633,7 @@ class OperationProvider(RuntimeProvider):
ref = OperationRefResolver
T = TypeVar("T")
T = TypeVar('T')
# Base context collection, used for parsing configs.
@@ -614,7 +647,9 @@ class ProviderContext(ManifestContext):
context_config: Optional[ContextConfig],
) -> None:
if provider is None:
raise InternalException(f"Invalid provider given to context: {provider}")
raise InternalException(
f"Invalid provider given to context: {provider}"
)
# mypy appeasement - we know it'll be a RuntimeConfig
self.config: RuntimeConfig
self.model: Union[ParsedMacro, ManifestNode] = model
@@ -624,12 +659,16 @@ class ProviderContext(ManifestContext):
self.provider: Provider = provider
self.adapter = get_adapter(self.config)
# The macro namespace is used in creating the DatabaseWrapper
self.db_wrapper = self.provider.DatabaseWrapper(self.adapter, self.namespace)
self.db_wrapper = self.provider.DatabaseWrapper(
self.adapter, self.namespace
)
# This overrides the method in ManifestContext, and provides
# a model, which the ManifestContext builder does not
def _get_namespace_builder(self):
internal_packages = get_adapter_package_names(self.config.credentials.type)
internal_packages = get_adapter_package_names(
self.config.credentials.type
)
return MacroNamespaceBuilder(
self.config.project_name,
self.search_package,
@@ -648,19 +687,19 @@ class ProviderContext(ManifestContext):
@contextmember
def store_result(
self, name: str, response: Any, agate_table: Optional[agate.Table] = None
self, name: str,
response: Any,
agate_table: Optional[agate.Table] = None
) -> str:
if agate_table is None:
agate_table = agate_helper.empty_table()
self.sql_results[name] = AttrDict(
{
"response": response,
"data": agate_helper.as_matrix(agate_table),
"table": agate_table,
}
)
return ""
self.sql_results[name] = AttrDict({
'response': response,
'data': agate_helper.as_matrix(agate_table),
'table': agate_table
})
return ''
@contextmember
def store_raw_result(
@@ -669,11 +708,10 @@ class ProviderContext(ManifestContext):
message=Optional[str],
code=Optional[str],
rows_affected=Optional[str],
agate_table: Optional[agate.Table] = None,
agate_table: Optional[agate.Table] = None
) -> str:
response = AdapterResponse(
_message=message, code=code, rows_affected=rows_affected
)
_message=message, code=code, rows_affected=rows_affected)
return self.store_result(name, response, agate_table)
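Roughly, these two context members keep named query results in a per-context mapping; a minimal stand-in (plain lists instead of agate tables, hypothetical names) looks like:

from typing import Any, Dict, List, Optional

class ResultStore:
    # Simplified stand-in for the sql_results mapping used by store_result /
    # store_raw_result above; real dbt stores an agate.Table under 'table'.
    def __init__(self) -> None:
        self.sql_results: Dict[str, Dict[str, Any]] = {}

    def store_result(self, name: str, response: Any,
                     rows: Optional[List[list]] = None) -> str:
        self.sql_results[name] = {
            'response': response,
            'data': rows or [],
            'table': rows or [],
        }
        return ''  # context members return strings so Jinja renders nothing

    def store_raw_result(self, name: str, message=None, code=None,
                         rows_affected=None, rows=None) -> str:
        response = {'_message': message, 'code': code, 'rows_affected': rows_affected}
        return self.store_result(name, response, rows)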
@contextproperty
@@ -686,28 +724,25 @@ class ProviderContext(ManifestContext):
elif value == arg:
return
raise ValidationException(
'Expected value "{}" to be one of {}'.format(
value, ",".join(map(str, args))
)
)
'Expected value "{}" to be one of {}'
.format(value, ','.join(map(str, args))))
return inner
return AttrDict(
{
"any": validate_any,
}
)
return AttrDict({
'any': validate_any,
})
@contextmember
def write(self, payload: str) -> str:
# macros/source defs aren't 'writeable'.
if isinstance(self.model, (ParsedMacro, ParsedSourceDefinition)):
raise_compiler_error('cannot "write" macros or sources')
raise_compiler_error(
'cannot "write" macros or sources'
)
self.model.build_path = self.model.write_node(
self.config.target_path, "run", payload
self.config.target_path, 'run', payload
)
return ""
return ''
@contextmember
def render(self, string: str) -> str:
@@ -720,17 +755,20 @@ class ProviderContext(ManifestContext):
try:
return func(*args, **kwargs)
except Exception:
raise_compiler_error(message_if_exception, self.model)
raise_compiler_error(
message_if_exception, self.model
)
@contextmember
def load_agate_table(self) -> agate.Table:
if not isinstance(self.model, (ParsedSeedNode, CompiledSeedNode)):
raise_compiler_error(
"can only load_agate_table for seeds (got a {})".format(
self.model.resource_type
)
'can only load_agate_table for seeds (got a {})'
.format(self.model.resource_type)
)
path = os.path.join(self.model.root_path, self.model.original_file_path)
path = os.path.join(
self.model.root_path, self.model.original_file_path
)
column_types = self.model.config.column_types
try:
table = agate_helper.from_csv(path, text_columns=column_types)
@@ -788,7 +826,7 @@ class ProviderContext(ManifestContext):
self.db_wrapper, self.model, self.config, self.manifest
)
@contextproperty("config")
@contextproperty('config')
def ctx_config(self) -> Config:
"""The `config` variable exists to handle end-user configuration for
custom materializations. Configs like `unique_key` can be implemented
@@ -960,7 +998,7 @@ class ProviderContext(ManifestContext):
node=self.model,
)
@contextproperty("adapter")
@contextproperty('adapter')
def ctx_adapter(self) -> BaseDatabaseWrapper:
"""`adapter` is a wrapper around the internal database adapter used by
dbt. It allows users to make calls to the database in their dbt models.
@@ -972,8 +1010,8 @@ class ProviderContext(ManifestContext):
@contextproperty
def api(self) -> Dict[str, Any]:
return {
"Relation": self.db_wrapper.Relation,
"Column": self.adapter.Column,
'Relation': self.db_wrapper.Relation,
'Column': self.adapter.Column,
}
@contextproperty
@@ -1091,7 +1129,7 @@ class ProviderContext(ManifestContext):
""" # noqa
return self.manifest.flat_graph
@contextproperty("model")
@contextproperty('model')
def ctx_model(self) -> Dict[str, Any]:
return self.model.to_dict(omit_none=True)
@@ -1155,20 +1193,21 @@ class ProviderContext(ManifestContext):
...
{%- endmacro %}
"""
deprecations.warn("adapter-macro", macro_name=name)
deprecations.warn('adapter-macro', macro_name=name)
original_name = name
package_names: Optional[List[str]] = None
if "." in name:
package_name, name = name.split(".", 1)
package_names = [package_name]
package_name = None
if '.' in name:
package_name, name = name.split('.', 1)
try:
macro = self.db_wrapper.dispatch(macro_name=name, packages=package_names)
macro = self.db_wrapper.dispatch(
macro_name=name, macro_namespace=package_name
)
except CompilationException as exc:
raise CompilationException(
f"In adapter_macro: {exc.msg}\n"
f'In adapter_macro: {exc.msg}\n'
f" Original name: '{original_name}'",
node=self.model,
node=self.model
) from exc
return macro(*args, **kwargs)
@@ -1176,17 +1215,17 @@ class ProviderContext(ManifestContext):
class MacroContext(ProviderContext):
"""Internally, macros can be executed like nodes, with some restrictions:
- they don't have all values available that nodes do:
- 'this', 'pre_hooks', 'post_hooks', and 'sql' are missing
- 'schema' does not use any 'model' information
- they can't be configured with config() directives
- they don't have all values available that nodes do:
- 'this', 'pre_hooks', 'post_hooks', and 'sql' are missing
- 'schema' does not use any 'model' information
- they can't be configured with config() directives
"""
def __init__(
self,
model: ParsedMacro,
config: RuntimeConfig,
manifest: AnyManifest,
manifest: Manifest,
provider: Provider,
search_package: Optional[str],
) -> None:
@@ -1204,29 +1243,37 @@ class ModelContext(ProviderContext):
@contextproperty
def pre_hooks(self) -> List[Dict[str, Any]]:
if isinstance(self.model, ParsedSourceDefinition):
if self.model.resource_type in [NodeType.Source, NodeType.Test]:
return []
return [h.to_dict(omit_none=True) for h in self.model.config.pre_hook]
return [
h.to_dict(omit_none=True) for h in self.model.config.pre_hook
]
@contextproperty
def post_hooks(self) -> List[Dict[str, Any]]:
if isinstance(self.model, ParsedSourceDefinition):
if self.model.resource_type in [NodeType.Source, NodeType.Test]:
return []
return [h.to_dict(omit_none=True) for h in self.model.config.post_hook]
return [
h.to_dict(omit_none=True) for h in self.model.config.post_hook
]
@contextproperty
def sql(self) -> Optional[str]:
if getattr(self.model, "extra_ctes_injected", None):
if getattr(self.model, 'extra_ctes_injected', None):
return self.model.compiled_sql
return None
@contextproperty
def database(self) -> str:
return getattr(self.model, "database", self.config.credentials.database)
return getattr(
self.model, 'database', self.config.credentials.database
)
@contextproperty
def schema(self) -> str:
return getattr(self.model, "schema", self.config.credentials.schema)
return getattr(
self.model, 'schema', self.config.credentials.schema
)
@contextproperty
def this(self) -> Optional[RelationProxy]:
@@ -1268,13 +1315,15 @@ class ModelContext(ProviderContext):
def generate_parser_model(
model: ManifestNode,
config: RuntimeConfig,
manifest: MacroManifest,
manifest: Manifest,
context_config: ContextConfig,
) -> Dict[str, Any]:
# The __init__ method of ModelContext also initializes
# a ManifestContext object which creates a MacroNamespaceBuilder
# which adds every macro in the Manifest.
ctx = ModelContext(model, config, manifest, ParseProvider(), context_config)
ctx = ModelContext(
model, config, manifest, ParseProvider(), context_config
)
# The 'to_dict' method in ManifestContext moves all of the macro names
# in the macro 'namespace' up to top level keys
return ctx.to_dict()
@@ -1283,9 +1332,11 @@ def generate_parser_model(
def generate_generate_component_name_macro(
macro: ParsedMacro,
config: RuntimeConfig,
manifest: MacroManifest,
manifest: Manifest,
) -> Dict[str, Any]:
ctx = MacroContext(macro, config, manifest, GenerateNameProvider(), None)
ctx = MacroContext(
macro, config, manifest, GenerateNameProvider(), None
)
return ctx.to_dict()
@@ -1294,7 +1345,9 @@ def generate_runtime_model(
config: RuntimeConfig,
manifest: Manifest,
) -> Dict[str, Any]:
ctx = ModelContext(model, config, manifest, RuntimeProvider(), None)
ctx = ModelContext(
model, config, manifest, RuntimeProvider(), None
)
return ctx.to_dict()
@@ -1304,7 +1357,9 @@ def generate_runtime_macro(
manifest: Manifest,
package_name: Optional[str],
) -> Dict[str, Any]:
ctx = MacroContext(macro, config, manifest, OperationProvider(), package_name)
ctx = MacroContext(
macro, config, manifest, OperationProvider(), package_name
)
return ctx.to_dict()
@@ -1313,39 +1368,40 @@ class ExposureRefResolver(BaseResolver):
if len(args) not in (1, 2):
ref_invalid_args(self.model, args)
self.model.refs.append(list(args))
return ""
return ''
class ExposureSourceResolver(BaseResolver):
def __call__(self, *args) -> str:
if len(args) != 2:
raise_compiler_error(
f"source() takes exactly two arguments ({len(args)} given)", self.model
f"source() takes exactly two arguments ({len(args)} given)",
self.model
)
self.model.sources.append(list(args))
return ""
return ''
def generate_parse_exposure(
exposure: ParsedExposure,
config: RuntimeConfig,
manifest: MacroManifest,
manifest: Manifest,
package_name: str,
) -> Dict[str, Any]:
project = config.load_dependencies()[package_name]
return {
"ref": ExposureRefResolver(
'ref': ExposureRefResolver(
None,
exposure,
project,
manifest,
),
"source": ExposureSourceResolver(
'source': ExposureSourceResolver(
None,
exposure,
project,
manifest,
),
)
}
@@ -1367,7 +1423,12 @@ class TestContext(ProviderContext):
self.macro_resolver = macro_resolver
self.thread_ctx = MacroStack()
super().__init__(model, config, manifest, provider, context_config)
self._build_test_namespace
self._build_test_namespace()
# We need to rebuild this because it's already been built by
# the ProviderContext with the wrong namespace.
self.db_wrapper = self.provider.DatabaseWrapper(
self.adapter, self.namespace
)
def _build_namespace(self):
return {}
@@ -1380,10 +1441,17 @@ class TestContext(ProviderContext):
depends_on_macros = []
if self.model.depends_on and self.model.depends_on.macros:
depends_on_macros = self.model.depends_on.macros
lookup_macros = depends_on_macros.copy()
for macro_unique_id in lookup_macros:
lookup_macro = self.macro_resolver.macros.get(macro_unique_id)
if lookup_macro:
depends_on_macros.extend(lookup_macro.depends_on.macros)
macro_namespace = TestMacroNamespace(
self.macro_resolver, self.ctx, self.node, self.thread_ctx, depends_on_macros
self.macro_resolver, self._ctx, self.model, self.thread_ctx,
depends_on_macros
)
self._namespace = macro_namespace
self.namespace = macro_namespace
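The copy-then-extend loop above pulls in one extra level of macro dependencies; a small standalone sketch of that behaviour (hypothetical data, not the resolver itself):

from typing import Dict, List

def expand_macro_deps(depends_on_macros: List[str],
                      macro_deps: Dict[str, List[str]]) -> List[str]:
    # Iterate over a copy so the list being extended is not re-walked:
    # only the direct dependencies' own dependencies get appended.
    lookup_macros = depends_on_macros.copy()
    for macro_unique_id in lookup_macros:
        depends_on_macros.extend(macro_deps.get(macro_unique_id, []))
    return depends_on_macros

# expand_macro_deps(['macro.pkg.a'], {'macro.pkg.a': ['macro.pkg.b']})
# -> ['macro.pkg.a', 'macro.pkg.b']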
def generate_test_context(
@@ -1391,10 +1459,11 @@ def generate_test_context(
config: RuntimeConfig,
manifest: Manifest,
context_config: ContextConfig,
macro_resolver: MacroResolver,
macro_resolver: MacroResolver
) -> Dict[str, Any]:
ctx = TestContext(
model, config, manifest, ParseProvider(), context_config, macro_resolver
model, config, manifest, ParseProvider(), context_config,
macro_resolver
)
# The 'to_dict' method in ManifestContext moves all of the macro names
# in the macro 'namespace' up to top level keys


@@ -2,7 +2,9 @@ from typing import Any, Dict
from dbt.contracts.connection import HasCredentials
from dbt.context import BaseContext, contextproperty
from dbt.context.base import (
BaseContext, contextproperty
)
class TargetContext(BaseContext):


@@ -1,36 +1,27 @@
import abc
import itertools
import hashlib
from dataclasses import dataclass, field
from typing import (
Any,
ClassVar,
Dict,
Tuple,
Iterable,
Optional,
List,
Callable,
Any, ClassVar, Dict, Tuple, Iterable, Optional, List, Callable,
)
from dbt.exceptions import InternalException
from dbt.utils import translate_aliases
from dbt.logger import GLOBAL_LOGGER as logger
from typing_extensions import Protocol
from dbt.dataclass_schema import (
dbtClassMixin,
StrEnum,
ExtensibleDbtClassMixin,
ValidatedStringMixin,
register_pattern,
dbtClassMixin, StrEnum, ExtensibleDbtClassMixin, HyphenatedDbtClassMixin,
ValidatedStringMixin, register_pattern
)
from dbt.contracts.util import Replaceable
class Identifier(ValidatedStringMixin):
ValidationRegex = r"^[A-Za-z_][A-Za-z0-9_]+$"
ValidationRegex = r'^[A-Za-z_][A-Za-z0-9_]+$'
# we need register_pattern for jsonschema validation
register_pattern(Identifier, r"^[A-Za-z_][A-Za-z0-9_]+$")
register_pattern(Identifier, r'^[A-Za-z_][A-Za-z0-9_]+$')
@dataclass
@@ -44,10 +35,10 @@ class AdapterResponse(dbtClassMixin):
class ConnectionState(StrEnum):
INIT = "init"
OPEN = "open"
CLOSED = "closed"
FAIL = "fail"
INIT = 'init'
OPEN = 'open'
CLOSED = 'closed'
FAIL = 'fail'
@dataclass(init=False)
@@ -91,7 +82,8 @@ class Connection(ExtensibleDbtClassMixin, Replaceable):
self._handle.resolve(self)
except RecursionError as exc:
raise InternalException(
"A connection's open() method attempted to read the " "handle value"
"A connection's open() method attempted to read the "
"handle value"
) from exc
return self._handle
@@ -110,7 +102,8 @@ class LazyHandle:
def resolve(self, connection: Connection) -> Connection:
logger.debug(
"Opening a new connection, currently in state {}".format(connection.state)
'Opening a new connection, currently in state {}'
.format(connection.state)
)
return self.opener(connection)
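A minimal sketch of the lazy-handle pattern shown above (assumed names; the real class also logs the connection state before opening):

from typing import Callable

class LazyHandleSketch:
    # Wraps an 'opener' callable; nothing connects until resolve() is called,
    # which is what Connection.handle triggers on first real use.
    def __init__(self, opener: Callable) -> None:
        self.opener = opener

    def resolve(self, connection):
        return self.opener(connection)

# handle = LazyHandleSketch(open_connection)   # open_connection is hypothetical
# handle.resolve(connection)                   # opens the connection on demand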
@@ -120,24 +113,42 @@ class LazyHandle:
# for why we have type: ignore. Maybe someday dataclasses + abstract classes
# will work.
@dataclass # type: ignore
class Credentials(ExtensibleDbtClassMixin, Replaceable, metaclass=abc.ABCMeta):
class Credentials(
ExtensibleDbtClassMixin,
Replaceable,
metaclass=abc.ABCMeta
):
database: str
schema: str
_ALIASES: ClassVar[Dict[str, str]] = field(default={}, init=False)
@abc.abstractproperty
def type(self) -> str:
raise NotImplementedError("type not implemented for base credentials class")
raise NotImplementedError(
'type not implemented for base credentials class'
)
@abc.abstractproperty
def unique_field(self) -> str:
raise NotImplementedError(
'unique_field not implemented for base credentials class'
)
def hashed_unique_field(self) -> str:
return hashlib.md5(self.unique_field.encode('utf-8')).hexdigest()
def connection_info(
self, *, with_aliases: bool = False
) -> Iterable[Tuple[str, Any]]:
"""Return an ordered iterator of key/value pairs for pretty-printing."""
"""Return an ordered iterator of key/value pairs for pretty-printing.
"""
as_dict = self.to_dict(omit_none=False)
connection_keys = set(self._connection_keys())
aliases: List[str] = []
if with_aliases:
aliases = [k for k, v in self._ALIASES.items() if v in connection_keys]
aliases = [
k for k, v in self._ALIASES.items() if v in connection_keys
]
for key in itertools.chain(self._connection_keys(), aliases):
if key in as_dict:
yield key, as_dict[key]
@@ -161,13 +172,11 @@ class Credentials(ExtensibleDbtClassMixin, Replaceable, metaclass=abc.ABCMeta):
def __post_serialize__(self, dct):
# no super() -- do we need it?
if self._ALIASES:
dct.update(
{
new_name: dct[canonical_name]
for new_name, canonical_name in self._ALIASES.items()
if canonical_name in dct
}
)
dct.update({
new_name: dct[canonical_name]
for new_name, canonical_name in self._ALIASES.items()
if canonical_name in dct
})
return dct
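Two of the Credentials behaviours above are easy to show in isolation (standalone sketch, illustrative field names):

import hashlib

def hashed_unique_field(unique_field: str) -> str:
    # Matches hashed_unique_field above: md5 of the utf-8 encoded value,
    # used as a stable, non-reversible adapter_unique_id for tracking.
    return hashlib.md5(unique_field.encode('utf-8')).hexdigest()

def apply_aliases(dct: dict, aliases: dict) -> dict:
    # Matches __post_serialize__ above: every alias whose canonical name is
    # present gets copied into the serialized dict as well.
    dct.update({
        new_name: dct[canonical_name]
        for new_name, canonical_name in aliases.items()
        if canonical_name in dct
    })
    return dct

# apply_aliases({'database': 'analytics'}, {'dbname': 'database'})
# -> {'database': 'analytics', 'dbname': 'analytics'}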
@@ -189,10 +198,10 @@ class HasCredentials(Protocol):
threads: int
def to_target_dict(self):
raise NotImplementedError("to_target_dict not implemented")
raise NotImplementedError('to_target_dict not implemented')
DEFAULT_QUERY_COMMENT = """
DEFAULT_QUERY_COMMENT = '''
{%- set comment_dict = {} -%}
{%- do comment_dict.update(
app='dbt',
@@ -209,13 +218,14 @@ DEFAULT_QUERY_COMMENT = """
{%- do comment_dict.update(connection_name=connection_name) -%}
{%- endif -%}
{{ return(tojson(comment_dict)) }}
"""
'''
@dataclass
class QueryComment(dbtClassMixin):
class QueryComment(HyphenatedDbtClassMixin):
comment: str = DEFAULT_QUERY_COMMENT
append: bool = False
job_label: bool = False
class AdapterRequiredConfig(HasCredentials, Protocol):


@@ -1,17 +1,41 @@
import hashlib
import os
from dataclasses import dataclass, field
from typing import List, Optional, Union
from mashumaro.types import SerializableType
from typing import List, Optional, Union, Dict, Any
from dbt.dataclass_schema import dbtClassMixin
from dbt.dataclass_schema import dbtClassMixin, StrEnum
from dbt.exceptions import InternalException
from .util import MacroKey, SourceKey
from .util import SourceKey
MAXIMUM_SEED_SIZE = 1 * 1024 * 1024
MAXIMUM_SEED_SIZE_NAME = "1MB"
MAXIMUM_SEED_SIZE_NAME = '1MB'
class ParseFileType(StrEnum):
Macro = 'macro'
Model = 'model'
Snapshot = 'snapshot'
Analysis = 'analysis'
Test = 'test'
Seed = 'seed'
Documentation = 'docs'
Schema = 'schema'
Hook = 'hook' # not a real filetype, from dbt_project.yml
parse_file_type_to_parser = {
ParseFileType.Macro: 'MacroParser',
ParseFileType.Model: 'ModelParser',
ParseFileType.Snapshot: 'SnapshotParser',
ParseFileType.Analysis: 'AnalysisParser',
ParseFileType.Test: 'DataTestParser',
ParseFileType.Seed: 'SeedParser',
ParseFileType.Documentation: 'DocumentationParser',
ParseFileType.Schema: 'SchemaParser',
ParseFileType.Hook: 'HookParser',
}
@dataclass
@@ -28,7 +52,9 @@ class FilePath(dbtClassMixin):
@property
def full_path(self) -> str:
# useful for symlink preservation
return os.path.join(self.project_root, self.searched_path, self.relative_path)
return os.path.join(
self.project_root, self.searched_path, self.relative_path
)
@property
def absolute_path(self) -> str:
@@ -38,10 +64,13 @@ class FilePath(dbtClassMixin):
def original_file_path(self) -> str:
# this is mostly used for reporting errors. It doesn't show the project
# name, should it?
return os.path.join(self.searched_path, self.relative_path)
return os.path.join(
self.searched_path, self.relative_path
)
def seed_too_large(self) -> bool:
"""Return whether the file this represents is over the seed size limit"""
"""Return whether the file this represents is over the seed size limit
"""
return os.stat(self.full_path).st_size > MAXIMUM_SEED_SIZE
@@ -52,35 +81,35 @@ class FileHash(dbtClassMixin):
@classmethod
def empty(cls):
return FileHash(name="none", checksum="")
return FileHash(name='none', checksum='')
@classmethod
def path(cls, path: str):
return FileHash(name="path", checksum=path)
return FileHash(name='path', checksum=path)
def __eq__(self, other):
if not isinstance(other, FileHash):
return NotImplemented
if self.name == "none" or self.name != other.name:
if self.name == 'none' or self.name != other.name:
return False
return self.checksum == other.checksum
def compare(self, contents: str) -> bool:
"""Compare the file contents with the given hash"""
if self.name == "none":
if self.name == 'none':
return False
return self.from_contents(contents, name=self.name) == self.checksum
@classmethod
def from_contents(cls, contents: str, name="sha256") -> "FileHash":
def from_contents(cls, contents: str, name='sha256') -> 'FileHash':
"""Create a file hash from the given file contents. The hash is always
the utf-8 encoding of the contents given, because dbt only reads files
as utf-8.
"""
data = contents.encode("utf-8")
data = contents.encode('utf-8')
checksum = hashlib.new(name, data).hexdigest()
return cls(name=name, checksum=checksum)
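The FileHash helpers above can be mirrored in a few lines (standalone sketch using only the standard library; the comparison is done on checksums for clarity):

import hashlib
from dataclasses import dataclass

@dataclass
class FileHashSketch:
    name: str       # hash algorithm name, or the 'none'/'path' sentinels
    checksum: str

    @classmethod
    def from_contents(cls, contents: str, name: str = 'sha256') -> 'FileHashSketch':
        # dbt always hashes the utf-8 encoding of file contents
        data = contents.encode('utf-8')
        return cls(name=name, checksum=hashlib.new(name, data).hexdigest())

    def compare(self, contents: str) -> bool:
        # a 'none' hash never matches anything
        if self.name == 'none':
            return False
        return self.from_contents(contents, name=self.name).checksum == self.checksum

# FileHashSketch.from_contents('select 1').compare('select 1') -> True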
@@ -89,75 +118,181 @@ class FileHash(dbtClassMixin):
class RemoteFile(dbtClassMixin):
@property
def searched_path(self) -> str:
return "from remote system"
return 'from remote system'
@property
def relative_path(self) -> str:
return "from remote system"
return 'from remote system'
@property
def absolute_path(self) -> str:
return "from remote system"
return 'from remote system'
@property
def original_file_path(self):
return "from remote system"
return 'from remote system'
@dataclass
class SourceFile(dbtClassMixin):
class BaseSourceFile(dbtClassMixin, SerializableType):
"""Define a source file in dbt"""
path: Union[FilePath, RemoteFile] # the path information
checksum: FileHash
# Seems like knowing which project the file came from would be useful
project_name: Optional[str] = None
# Parse file type: i.e. which parser will process this file
parse_file_type: Optional[ParseFileType] = None
# we don't want to serialize this
_contents: Optional[str] = None
contents: Optional[str] = None
# the unique IDs contained in this file
@property
def file_id(self):
if isinstance(self.path, RemoteFile):
return None
if self.checksum.name == 'none':
return None
return f'{self.project_name}://{self.path.original_file_path}'
def _serialize(self):
dct = self.to_dict()
return dct
@classmethod
def _deserialize(cls, dct: Dict[str, int]):
if dct['parse_file_type'] == 'schema':
sf = SchemaSourceFile.from_dict(dct)
else:
sf = SourceFile.from_dict(dct)
return sf
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
# remove empty lists to save space
dct_keys = list(dct.keys())
for key in dct_keys:
if isinstance(dct[key], list) and not dct[key]:
del dct[key]
# remove contents. Schema files will still have 'dict_from_yaml'
# from the contents
if 'contents' in dct:
del dct['contents']
return dct
@dataclass
class SourceFile(BaseSourceFile):
nodes: List[str] = field(default_factory=list)
docs: List[str] = field(default_factory=list)
macros: List[str] = field(default_factory=list)
sources: List[str] = field(default_factory=list)
exposures: List[str] = field(default_factory=list)
# any node patches in this file. The entries are names, not unique ids!
patches: List[str] = field(default_factory=list)
# any macro patches in this file. The entries are package, name pairs.
macro_patches: List[MacroKey] = field(default_factory=list)
# any source patches in this file. The entries are package, name pairs
source_patches: List[SourceKey] = field(default_factory=list)
@property
def search_key(self) -> Optional[str]:
if isinstance(self.path, RemoteFile):
return None
if self.checksum.name == "none":
return None
return self.path.search_key
@property
def contents(self) -> str:
if self._contents is None:
raise InternalException("SourceFile has no contents!")
return self._contents
@contents.setter
def contents(self, value):
self._contents = value
@classmethod
def empty(cls, path: FilePath) -> "SourceFile":
self = cls(path=path, checksum=FileHash.empty())
self.contents = ""
return self
@classmethod
def big_seed(cls, path: FilePath) -> "SourceFile":
def big_seed(cls, path: FilePath) -> 'SourceFile':
"""Parse seeds over the size limit with just the path"""
self = cls(path=path, checksum=FileHash.path(path.original_file_path))
self.contents = ""
self.contents = ''
return self
def add_node(self, value):
if value not in self.nodes:
self.nodes.append(value)
# TODO: do this a different way. This remote file kludge isn't going
# to work long term
@classmethod
def remote(cls, contents: str) -> "SourceFile":
self = cls(path=RemoteFile(), checksum=FileHash.empty())
self.contents = contents
def remote(cls, contents: str, project_name: str) -> 'SourceFile':
self = cls(
path=RemoteFile(),
checksum=FileHash.from_contents(contents),
project_name=project_name,
contents=contents,
)
return self
@dataclass
class SchemaSourceFile(BaseSourceFile):
dfy: Dict[str, Any] = field(default_factory=dict)
# these are in the manifest.nodes dictionary
tests: Dict[str, Any] = field(default_factory=dict)
sources: List[str] = field(default_factory=list)
exposures: List[str] = field(default_factory=list)
# node patches contain models, seeds, snapshots, analyses
ndp: List[str] = field(default_factory=list)
# any macro patches in this file by macro unique_id.
mcp: Dict[str, str] = field(default_factory=dict)
# any source patches in this file. The entries are package, name pairs
# Patches are only against external sources. Sources can be
# created too, but those are in 'sources'
sop: List[SourceKey] = field(default_factory=list)
pp_dict: Optional[Dict[str, Any]] = None
pp_test_index: Optional[Dict[str, Any]] = None
@property
def dict_from_yaml(self):
return self.dfy
@property
def node_patches(self):
return self.ndp
@property
def macro_patches(self):
return self.mcp
@property
def source_patches(self):
return self.sop
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
# Remove partial parsing specific data
for key in ('pp_files', 'pp_test_index', 'pp_dict'):
if key in dct:
del dct[key]
return dct
def append_patch(self, yaml_key, unique_id):
self.node_patches.append(unique_id)
def add_test(self, node_unique_id, test_from):
name = test_from['name']
key = test_from['key']
if key not in self.tests:
self.tests[key] = {}
if name not in self.tests[key]:
self.tests[key][name] = []
self.tests[key][name].append(node_unique_id)
def remove_tests(self, yaml_key, name):
if yaml_key in self.tests:
if name in self.tests[yaml_key]:
del self.tests[yaml_key][name]
def get_tests(self, yaml_key, name):
if yaml_key in self.tests:
if name in self.tests[yaml_key]:
return self.tests[yaml_key][name]
return []
def get_key_and_name_for_test(self, test_unique_id):
yaml_key = None
block_name = None
for key in self.tests.keys():
for name in self.tests[key]:
for unique_id in self.tests[key][name]:
if unique_id == test_unique_id:
yaml_key = key
block_name = name
break
return (yaml_key, block_name)
def get_all_test_ids(self):
test_ids = []
for key in self.tests.keys():
for name in self.tests[key]:
test_ids.extend(self.tests[key][name])
return test_ids
AnySourceFile = Union[SchemaSourceFile, SourceFile]
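The tests index added above is a two-level dict keyed by yaml key and block name; a small standalone sketch of the add/get/remove operations:

from typing import Dict, List

class TestIndexSketch:
    # Mirrors SchemaSourceFile.tests above: {yaml_key: {block_name: [test unique_ids]}}
    def __init__(self) -> None:
        self.tests: Dict[str, Dict[str, List[str]]] = {}

    def add_test(self, node_unique_id: str, test_from: dict) -> None:
        key, name = test_from['key'], test_from['name']
        self.tests.setdefault(key, {}).setdefault(name, []).append(node_unique_id)

    def get_tests(self, yaml_key: str, name: str) -> List[str]:
        return self.tests.get(yaml_key, {}).get(name, [])

    def remove_tests(self, yaml_key: str, name: str) -> None:
        self.tests.get(yaml_key, {}).pop(name, None)

    def get_all_test_ids(self) -> List[str]:
        return [uid for names in self.tests.values()
                for ids in names.values() for uid in ids]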


@@ -43,6 +43,7 @@ class CompiledNode(ParsedNode, CompiledNodeMixin):
extra_ctes_injected: bool = False
extra_ctes: List[InjectedCTE] = field(default_factory=list)
relation_name: Optional[str] = None
_pre_injected_sql: Optional[str] = None
def set_cte(self, cte_id: str, sql: str):
"""This is the equivalent of what self.extra_ctes[cte_id] = sql would
@@ -55,32 +56,40 @@ class CompiledNode(ParsedNode, CompiledNodeMixin):
else:
self.extra_ctes.append(InjectedCTE(id=cte_id, sql=sql))
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
if '_pre_injected_sql' in dct:
del dct['_pre_injected_sql']
return dct
@dataclass
class CompiledAnalysisNode(CompiledNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Analysis]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Analysis]})
@dataclass
class CompiledHookNode(CompiledNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Operation]})
resource_type: NodeType = field(
metadata={'restrict': [NodeType.Operation]}
)
index: Optional[int] = None
@dataclass
class CompiledModelNode(CompiledNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Model]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Model]})
@dataclass
class CompiledRPCNode(CompiledNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.RPCCall]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.RPCCall]})
@dataclass
class CompiledSeedNode(CompiledNode):
# keep this in sync with ParsedSeedNode!
resource_type: NodeType = field(metadata={"restrict": [NodeType.Seed]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Seed]})
config: SeedConfig = field(default_factory=SeedConfig)
@property
@@ -94,35 +103,35 @@ class CompiledSeedNode(CompiledNode):
@dataclass
class CompiledSnapshotNode(CompiledNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Snapshot]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Snapshot]})
@dataclass
class CompiledDataTestNode(CompiledNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Test]})
config: TestConfig = field(default_factory=TestConfig)
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type:ignore
@dataclass
class CompiledSchemaTestNode(CompiledNode, HasTestMetadata):
# keep this in sync with ParsedSchemaTestNode!
resource_type: NodeType = field(metadata={"restrict": [NodeType.Test]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
column_name: Optional[str] = None
config: TestConfig = field(default_factory=TestConfig)
def same_config(self, other) -> bool:
return self.unrendered_config.get("severity") == other.unrendered_config.get(
"severity"
)
def same_column_name(self, other) -> bool:
return self.column_name == other.column_name
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type:ignore
def same_contents(self, other) -> bool:
if other is None:
return False
return self.same_config(other) and self.same_fqn(other) and True
return (
self.same_config(other) and
self.same_fqn(other) and
True
)
CompiledTestNode = Union[CompiledDataTestNode, CompiledSchemaTestNode]
@@ -168,7 +177,8 @@ def parsed_instance_for(compiled: CompiledNode) -> ParsedResource:
cls = PARSED_TYPES.get(type(compiled))
if cls is None:
# how???
raise ValueError("invalid resource_type: {}".format(compiled.resource_type))
raise ValueError('invalid resource_type: {}'
.format(compiled.resource_type))
return cls.from_dict(compiled.to_dict(omit_none=True))

File diff suppressed because it is too large


@@ -2,29 +2,19 @@ from dataclasses import field, Field, dataclass
from enum import Enum
from itertools import chain
from typing import (
Any,
List,
Optional,
Dict,
MutableMapping,
Union,
Type,
TypeVar,
Callable,
Any, List, Optional, Dict, Union, Type, TypeVar, Callable
)
from dbt.dataclass_schema import (
dbtClassMixin,
ValidationError,
register_pattern,
dbtClassMixin, ValidationError, register_pattern,
)
from dbt.contracts.graph.unparsed import AdditionalPropertiesAllowed
from dbt.exceptions import CompilationException, InternalException
from dbt.exceptions import InternalException, CompilationException
from dbt.contracts.util import Replaceable, list_str
from dbt import hooks
from dbt.node_types import NodeType
M = TypeVar("M", bound="Metadata")
M = TypeVar('M', bound='Metadata')
def _get_meta_value(cls: Type[M], fld: Field, key: str, default: Any) -> M:
@@ -39,7 +29,9 @@ def _get_meta_value(cls: Type[M], fld: Field, key: str, default: Any) -> M:
try:
return cls(value)
except ValueError as exc:
raise InternalException(f"Invalid {cls} value: {value}") from exc
raise InternalException(
f'Invalid {cls} value: {value}'
) from exc
def _set_meta_value(
@@ -61,17 +53,19 @@ class Metadata(Enum):
return _get_meta_value(cls, fld, key, default)
def meta(self, existing: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
def meta(
self, existing: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
key = self.metadata_key()
return _set_meta_value(self, key, existing)
@classmethod
def default_field(cls) -> "Metadata":
raise NotImplementedError("Not implemented")
def default_field(cls) -> 'Metadata':
raise NotImplementedError('Not implemented')
@classmethod
def metadata_key(cls) -> str:
raise NotImplementedError("Not implemented")
raise NotImplementedError('Not implemented')
class MergeBehavior(Metadata):
@@ -80,12 +74,12 @@ class MergeBehavior(Metadata):
Clobber = 3
@classmethod
def default_field(cls) -> "MergeBehavior":
def default_field(cls) -> 'MergeBehavior':
return cls.Clobber
@classmethod
def metadata_key(cls) -> str:
return "merge"
return 'merge'
class ShowBehavior(Metadata):
@@ -93,12 +87,12 @@ class ShowBehavior(Metadata):
Hide = 2
@classmethod
def default_field(cls) -> "ShowBehavior":
def default_field(cls) -> 'ShowBehavior':
return cls.Show
@classmethod
def metadata_key(cls) -> str:
return "show_hide"
return 'show_hide'
@classmethod
def should_show(cls, fld: Field) -> bool:
@@ -110,12 +104,12 @@ class CompareBehavior(Metadata):
Exclude = 2
@classmethod
def default_field(cls) -> "CompareBehavior":
def default_field(cls) -> 'CompareBehavior':
return cls.Include
@classmethod
def metadata_key(cls) -> str:
return "compare"
return 'compare'
@classmethod
def should_include(cls, fld: Field) -> bool:
@@ -147,30 +141,32 @@ def _merge_field_value(
return _listify(self_value) + _listify(other_value)
elif merge_behavior == MergeBehavior.Update:
if not isinstance(self_value, dict):
raise InternalException(f"expected dict, got {self_value}")
raise InternalException(f'expected dict, got {self_value}')
if not isinstance(other_value, dict):
raise InternalException(f"expected dict, got {other_value}")
raise InternalException(f'expected dict, got {other_value}')
value = self_value.copy()
value.update(other_value)
return value
else:
raise InternalException(f"Got an invalid merge_behavior: {merge_behavior}")
raise InternalException(
f'Got an invalid merge_behavior: {merge_behavior}'
)
def insensitive_patterns(*patterns: str):
lowercased = []
for pattern in patterns:
lowercased.append(
"".join("[{}{}]".format(s.upper(), s.lower()) for s in pattern)
''.join('[{}{}]'.format(s.upper(), s.lower()) for s in pattern)
)
return "^({})$".format("|".join(lowercased))
return '^({})$'.format('|'.join(lowercased))
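For instance, the helper above turns ('warn', 'error') into a case-insensitive alternation; a quick standalone check:

import re

def insensitive_patterns(*patterns: str) -> str:
    # Same construction as above: each character becomes an [Xx] class.
    lowercased = [
        ''.join('[{}{}]'.format(s.upper(), s.lower()) for s in pattern)
        for pattern in patterns
    ]
    return '^({})$'.format('|'.join(lowercased))

assert re.match(insensitive_patterns('warn', 'error'), 'WARN')
assert not re.match(insensitive_patterns('warn', 'error'), 'warning')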
class Severity(str):
pass
register_pattern(Severity, insensitive_patterns("warn", "error"))
register_pattern(Severity, insensitive_patterns('warn', 'error'))
@dataclass
@@ -180,22 +176,28 @@ class Hook(dbtClassMixin, Replaceable):
index: Optional[int] = None
T = TypeVar("T", bound="BaseConfig")
T = TypeVar('T', bound='BaseConfig')
@dataclass
class BaseConfig(AdditionalPropertiesAllowed, Replaceable, MutableMapping[str, Any]):
# Implement MutableMapping so this config will behave as some macros expect
# during parsing (notably, syntax like `{{ node.config['schema'] }}`)
class BaseConfig(
AdditionalPropertiesAllowed, Replaceable
):
# enable syntax like: config['key']
def __getitem__(self, key):
"""Handle parse-time use of `config` as a dictionary, making the extra
values available during parsing.
"""
return self.get(key)
# like doing 'get' on a dictionary
def get(self, key, default=None):
if hasattr(self, key):
return getattr(self, key)
else:
elif key in self._extra:
return self._extra[key]
else:
return default
# enable syntax like: config['key'] = value
def __setitem__(self, key, value):
if hasattr(self, key):
setattr(self, key, value)
@@ -205,7 +207,8 @@ class BaseConfig(AdditionalPropertiesAllowed, Replaceable, MutableMapping[str, A
def __delitem__(self, key):
if hasattr(self, key):
msg = (
'Error, tried to delete config key "{}": Cannot delete ' "built-in keys"
'Error, tried to delete config key "{}": Cannot delete '
'built-in keys'
).format(key)
raise CompilationException(msg)
else:
@@ -245,7 +248,9 @@ class BaseConfig(AdditionalPropertiesAllowed, Replaceable, MutableMapping[str, A
return unrendered[key] == other[key]
@classmethod
def same_contents(cls, unrendered: Dict[str, Any], other: Dict[str, Any]) -> bool:
def same_contents(
cls, unrendered: Dict[str, Any], other: Dict[str, Any]
) -> bool:
"""This is like __eq__, except it ignores some fields."""
seen = set()
for fld, target_name in cls._get_fields():
@@ -262,8 +267,17 @@ class BaseConfig(AdditionalPropertiesAllowed, Replaceable, MutableMapping[str, A
return False
return True
# This is used in 'add_config_call' to create the combined config_call_dict.
# 'meta' moved here from node
mergebehavior = {
"append": ['pre-hook', 'pre_hook', 'post-hook', 'post_hook', 'tags'],
"update": ['quoting', 'column_types', 'meta'],
}
@classmethod
def _extract_dict(cls, src: Dict[str, Any], data: Dict[str, Any]) -> Dict[str, Any]:
def _merge_dicts(
cls, src: Dict[str, Any], data: Dict[str, Any]
) -> Dict[str, Any]:
"""Find all the items in data that match a target_field on this class,
and merge them with the data found in `src` for target_field, using the
field's specified merge behavior. Matching items will be removed from
@@ -303,15 +317,14 @@ class BaseConfig(AdditionalPropertiesAllowed, Replaceable, MutableMapping[str, A
"""
# sadly, this is a circular import
from dbt.adapters.factory import get_config_class_by_name
dct = self.to_dict(omit_none=False)
adapter_config_cls = get_config_class_by_name(adapter_type)
self_merged = self._extract_dict(dct, data)
self_merged = self._merge_dicts(dct, data)
dct.update(self_merged)
adapter_merged = adapter_config_cls._extract_dict(dct, data)
adapter_merged = adapter_config_cls._merge_dicts(dct, data)
dct.update(adapter_merged)
# any remaining fields must be "clobber"
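A standalone sketch of the merge rules encoded in the mergebehavior dictionary above (append for hooks/tags, update for dict-like configs, clobber for everything else); the helper name is hypothetical:

def merge_config_calls(existing: dict, new: dict) -> dict:
    append_keys = {'pre-hook', 'pre_hook', 'post-hook', 'post_hook', 'tags'}
    update_keys = {'quoting', 'column_types', 'meta'}
    merged = dict(existing)
    for key, value in new.items():
        if key in append_keys:
            # append: listify both sides and concatenate
            old = merged.get(key, [])
            old = old if isinstance(old, list) else [old]
            add = value if isinstance(value, list) else [value]
            merged[key] = old + add
        elif key in update_keys:
            # update: shallow-merge the dicts
            combined = dict(merged.get(key, {}))
            combined.update(value)
            merged[key] = combined
        else:
            merged[key] = value  # clobber
    return merged

# merge_config_calls({'tags': ['nightly']}, {'tags': 'finance', 'materialized': 'table'})
# -> {'tags': ['nightly', 'finance'], 'materialized': 'table'}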
@@ -343,33 +356,8 @@ class SourceConfig(BaseConfig):
@dataclass
class NodeConfig(BaseConfig):
class NodeAndTestConfig(BaseConfig):
enabled: bool = True
materialized: str = "view"
persist_docs: Dict[str, Any] = field(default_factory=dict)
post_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
pre_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
# this only applies for config v1, so it doesn't participate in comparison
vars: Dict[str, Any] = field(
default_factory=dict,
metadata=metas(CompareBehavior.Exclude, MergeBehavior.Update),
)
quoting: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
# This is actually only used by seeds. Should it be available to others?
# That would be a breaking change!
column_types: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
# these fields are included in serialized output, but are not part of
# config comparison (they are part of database_representation)
alias: Optional[str] = field(
@@ -386,16 +374,47 @@ class NodeConfig(BaseConfig):
)
tags: Union[List[str], str] = field(
default_factory=list_str,
metadata=metas(
ShowBehavior.Hide, MergeBehavior.Append, CompareBehavior.Exclude
),
metadata=metas(ShowBehavior.Hide,
MergeBehavior.Append,
CompareBehavior.Exclude),
)
meta: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
@dataclass
class NodeConfig(NodeAndTestConfig):
# Note: if any new fields are added with MergeBehavior, also update the
# 'mergebehavior' dictionary
materialized: str = 'view'
persist_docs: Dict[str, Any] = field(default_factory=dict)
post_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
pre_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
quoting: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
# This is actually only used by seeds. Should it be available to others?
# That would be a breaking change!
column_types: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
full_refresh: Optional[bool] = None
on_schema_change: Optional[str] = 'ignore'
@classmethod
def __pre_deserialize__(cls, data):
data = super().__pre_deserialize__(data)
field_map = {"post-hook": "post_hook", "pre-hook": "pre_hook"}
field_map = {'post-hook': 'post_hook', 'pre-hook': 'pre_hook'}
# create a new dict because otherwise it gets overwritten in
# tests
new_dict = {}
@@ -413,7 +432,7 @@ class NodeConfig(BaseConfig):
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
field_map = {"post_hook": "post-hook", "pre_hook": "pre-hook"}
field_map = {'post_hook': 'post-hook', 'pre_hook': 'pre-hook'}
for field_name in field_map:
if field_name in dct:
dct[field_map[field_name]] = dct.pop(field_name)
@@ -422,24 +441,59 @@ class NodeConfig(BaseConfig):
# this is still used by jsonschema validation
@classmethod
def field_mapping(cls):
return {"post_hook": "post-hook", "pre_hook": "pre-hook"}
return {'post_hook': 'post-hook', 'pre_hook': 'pre-hook'}
@dataclass
class SeedConfig(NodeConfig):
materialized: str = "seed"
materialized: str = 'seed'
quote_columns: Optional[bool] = None
@dataclass
class TestConfig(NodeConfig):
materialized: str = "test"
severity: Severity = Severity("ERROR")
class TestConfig(NodeAndTestConfig):
# this is repeated because of a different default
schema: Optional[str] = field(
default='dbt_test__audit',
metadata=CompareBehavior.Exclude.meta(),
)
materialized: str = 'test'
severity: Severity = Severity('ERROR')
store_failures: Optional[bool] = None
where: Optional[str] = None
limit: Optional[int] = None
fail_calc: str = 'count(*)'
warn_if: str = '!= 0'
error_if: str = '!= 0'
@classmethod
def same_contents(
cls, unrendered: Dict[str, Any], other: Dict[str, Any]
) -> bool:
"""This is like __eq__, except it explicitly checks certain fields."""
modifiers = [
'severity',
'where',
'limit',
'fail_calc',
'warn_if',
'error_if',
'store_failures'
]
seen = set()
for _, target_name in cls._get_fields():
key = target_name
seen.add(key)
if key in modifiers:
if not cls.compare_key(unrendered, other, key):
return False
return True
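The comparison above only looks at the test modifier keys; a standalone sketch of that check over plain dicts (a simplification of compare_key):

TEST_MODIFIERS = ('severity', 'where', 'limit', 'fail_calc',
                  'warn_if', 'error_if', 'store_failures')

def same_test_contents(unrendered: dict, other: dict) -> bool:
    # Two test configs are "the same" if every modifier key matches;
    # all other config fields are ignored for comparison purposes.
    for key in TEST_MODIFIERS:
        if unrendered.get(key) != other.get(key):
            return False
    return True

# same_test_contents({'severity': 'warn'}, {'severity': 'warn', 'alias': 'x'}) -> True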
@dataclass
class EmptySnapshotConfig(NodeConfig):
materialized: str = "snapshot"
materialized: str = 'snapshot'
@dataclass
@@ -454,28 +508,30 @@ class SnapshotConfig(EmptySnapshotConfig):
@classmethod
def validate(cls, data):
super().validate(data)
if data.get("strategy") == "check":
if not data.get("check_cols"):
if not data.get('strategy') or not data.get('unique_key') or not \
data.get('target_schema'):
raise ValidationError(
"Snapshots must be configured with a 'strategy', 'unique_key', "
"and 'target_schema'.")
if data.get('strategy') == 'check':
if not data.get('check_cols'):
raise ValidationError(
"A snapshot configured with the check strategy must "
"specify a check_cols configuration."
)
if isinstance(data["check_cols"], str) and data["check_cols"] != "all":
"specify a check_cols configuration.")
if (isinstance(data['check_cols'], str) and
data['check_cols'] != 'all'):
raise ValidationError(
f"Invalid value for 'check_cols': {data['check_cols']}. "
"Expected 'all' or a list of strings."
)
"Expected 'all' or a list of strings.")
elif data.get("strategy") == "timestamp":
if not data.get("updated_at"):
elif data.get('strategy') == 'timestamp':
if not data.get('updated_at'):
raise ValidationError(
"A snapshot configured with the timestamp strategy "
"must specify an updated_at configuration."
)
if data.get("check_cols"):
"must specify an updated_at configuration.")
if data.get('check_cols'):
raise ValidationError(
"A 'timestamp' snapshot should not have 'check_cols'"
)
"A 'timestamp' snapshot should not have 'check_cols'")
# If the strategy is not 'check' or 'timestamp' it's a custom strategy,
# formerly supported with GenericSnapshotConfig
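A hedged standalone sketch of the validation rules added above (ValueError stands in for dbt's ValidationError):

def validate_snapshot_config(data: dict) -> None:
    # All snapshots need a strategy, a unique_key and a target_schema.
    if (not data.get('strategy') or not data.get('unique_key')
            or not data.get('target_schema')):
        raise ValueError(
            "Snapshots must be configured with a 'strategy', 'unique_key', "
            "and 'target_schema'.")
    if data.get('strategy') == 'check':
        # check snapshots need check_cols: either 'all' or a list of columns
        check_cols = data.get('check_cols')
        if not check_cols:
            raise ValueError("A snapshot configured with the check strategy "
                             "must specify a check_cols configuration.")
        if isinstance(check_cols, str) and check_cols != 'all':
            raise ValueError(f"Invalid value for 'check_cols': {check_cols}. "
                             "Expected 'all' or a list of strings.")
    elif data.get('strategy') == 'timestamp':
        # timestamp snapshots need updated_at and must not set check_cols
        if not data.get('updated_at'):
            raise ValueError("A snapshot configured with the timestamp strategy "
                             "must specify an updated_at configuration.")
        if data.get('check_cols'):
            raise ValueError("A 'timestamp' snapshot should not have 'check_cols'")
    # any other strategy is treated as a custom strategy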
@@ -497,7 +553,9 @@ RESOURCE_TYPES: Dict[NodeType, Type[BaseConfig]] = {
# base resource types are like resource types, except nothing has mandatory
# configs.
BASE_RESOURCE_TYPES: Dict[NodeType, Type[BaseConfig]] = RESOURCE_TYPES.copy()
BASE_RESOURCE_TYPES.update({NodeType.Snapshot: EmptySnapshotConfig})
BASE_RESOURCE_TYPES.update({
NodeType.Snapshot: EmptySnapshotConfig
})
def get_config_for(resource_type: NodeType, base=False) -> Type[BaseConfig]:


@@ -1,5 +1,7 @@
import os
import time
from dataclasses import dataclass, field
from mashumaro.types import SerializableType
from pathlib import Path
from typing import (
Optional,
@@ -13,27 +15,18 @@ from typing import (
TypeVar,
)
from dbt.dataclass_schema import dbtClassMixin, ExtensibleDbtClassMixin
from dbt.dataclass_schema import (
dbtClassMixin, ExtensibleDbtClassMixin
)
from dbt.clients.system import write_file
from dbt.contracts.files import FileHash, MAXIMUM_SEED_SIZE_NAME
from dbt.contracts.graph.unparsed import (
UnparsedNode,
UnparsedDocumentation,
Quoting,
Docs,
UnparsedBaseNode,
FreshnessThreshold,
ExternalTable,
HasYamlMetadata,
MacroArgument,
UnparsedSourceDefinition,
UnparsedSourceTableDefinition,
UnparsedColumn,
TestDef,
ExposureOwner,
ExposureType,
MaturityType,
UnparsedNode, UnparsedDocumentation, Quoting, Docs,
UnparsedBaseNode, FreshnessThreshold, ExternalTable,
HasYamlMetadata, MacroArgument, UnparsedSourceDefinition,
UnparsedSourceTableDefinition, UnparsedColumn, TestDef,
ExposureOwner, ExposureType, MaturityType
)
from dbt.contracts.util import Replaceable, AdditionalPropertiesMixin
from dbt.exceptions import warn_or_error
@@ -53,9 +46,13 @@ from .model_config import (
@dataclass
class ColumnInfo(AdditionalPropertiesMixin, ExtensibleDbtClassMixin, Replaceable):
class ColumnInfo(
AdditionalPropertiesMixin,
ExtensibleDbtClassMixin,
Replaceable
):
name: str
description: str = ""
description: str = ''
meta: Dict[str, Any] = field(default_factory=dict)
data_type: Optional[str] = None
quote: Optional[bool] = None
@@ -67,7 +64,7 @@ class ColumnInfo(AdditionalPropertiesMixin, ExtensibleDbtClassMixin, Replaceable
class HasFqn(dbtClassMixin, Replaceable):
fqn: List[str]
def same_fqn(self, other: "HasFqn") -> bool:
def same_fqn(self, other: 'HasFqn') -> bool:
return self.fqn == other.fqn
@@ -106,8 +103,8 @@ class HasRelationMetadata(dbtClassMixin, Replaceable):
@classmethod
def __pre_deserialize__(cls, data):
data = super().__pre_deserialize__(data)
if "database" not in data:
data["database"] = None
if 'database' not in data:
data['database'] = None
return data
@@ -120,9 +117,24 @@ class ParsedNodeMixins(dbtClassMixin):
def is_refable(self):
return self.resource_type in NodeType.refable()
@property
def should_store_failures(self):
return self.resource_type == NodeType.Test and (
self.config.store_failures if self.config.store_failures is not None
else flags.STORE_FAILURES
)
# will this node map to an object in the database?
@property
def is_relational(self):
return (
self.resource_type in NodeType.refable() or
self.should_store_failures
)
@property
def is_ephemeral(self):
return self.config.materialized == "ephemeral"
return self.config.materialized == 'ephemeral'
@property
def is_ephemeral_model(self):
@@ -132,11 +144,14 @@ class ParsedNodeMixins(dbtClassMixin):
def depends_on_nodes(self):
return self.depends_on.nodes
def patch(self, patch: "ParsedNodePatch"):
def patch(self, patch: 'ParsedNodePatch'):
"""Given a ParsedNodePatch, add the new information to the node."""
# explicitly pick out the parts to update so we don't inadvertently
# step on the model name or anything
self.patch_path: Optional[str] = patch.original_file_path
# Note: config should already be updated
self.patch_path: Optional[str] = patch.file_id
# update created_at so process_docs will run in partial parsing
self.created_at = int(time.time())
self.description = patch.description
self.columns = patch.columns
self.meta = patch.meta
@@ -152,13 +167,14 @@ class ParsedNodeMixins(dbtClassMixin):
def get_materialization(self):
return self.config.materialized
def local_vars(self):
return self.config.vars
@dataclass
class ParsedNodeMandatory(
UnparsedNode, HasUniqueID, HasFqn, HasRelationMetadata, Replaceable
UnparsedNode,
HasUniqueID,
HasFqn,
HasRelationMetadata,
Replaceable
):
alias: str
checksum: FileHash
@@ -175,38 +191,87 @@ class ParsedNodeDefaults(ParsedNodeMandatory):
refs: List[List[str]] = field(default_factory=list)
sources: List[List[Any]] = field(default_factory=list)
depends_on: DependsOn = field(default_factory=DependsOn)
description: str = field(default="")
description: str = field(default='')
columns: Dict[str, ColumnInfo] = field(default_factory=dict)
meta: Dict[str, Any] = field(default_factory=dict)
docs: Docs = field(default_factory=Docs)
patch_path: Optional[str] = None
compiled_path: Optional[str] = None
build_path: Optional[str] = None
deferred: bool = False
unrendered_config: Dict[str, Any] = field(default_factory=dict)
created_at: int = field(default_factory=lambda: int(time.time()))
config_call_dict: Dict[str, Any] = field(default_factory=dict)
def write_node(self, target_path: str, subdirectory: str, payload: str):
if os.path.basename(self.path) == os.path.basename(self.original_file_path):
if (os.path.basename(self.path) ==
os.path.basename(self.original_file_path)):
# One-to-one relationship of nodes to files.
path = self.original_file_path
else:
# Many-to-one relationship of nodes to files.
path = os.path.join(self.original_file_path, self.path)
full_path = os.path.join(target_path, subdirectory, self.package_name, path)
full_path = os.path.join(
target_path, subdirectory, self.package_name, path
)
write_file(full_path, payload)
return full_path
T = TypeVar("T", bound="ParsedNode")
T = TypeVar('T', bound='ParsedNode')
@dataclass
class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins):
class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins, SerializableType):
def _serialize(self):
return self.to_dict()
def __post_serialize__(self, dct):
if 'config_call_dict' in dct:
del dct['config_call_dict']
return dct
@classmethod
def _deserialize(cls, dct: Dict[str, int]):
# The serialized ParsedNodes do not differ from each other
# in fields that would allow 'from_dict' to distinguish
# between them.
resource_type = dct['resource_type']
if resource_type == 'model':
return ParsedModelNode.from_dict(dct)
elif resource_type == 'analysis':
return ParsedAnalysisNode.from_dict(dct)
elif resource_type == 'seed':
return ParsedSeedNode.from_dict(dct)
elif resource_type == 'rpc':
return ParsedRPCNode.from_dict(dct)
elif resource_type == 'test':
if 'test_metadata' in dct:
return ParsedSchemaTestNode.from_dict(dct)
else:
return ParsedDataTestNode.from_dict(dct)
elif resource_type == 'operation':
return ParsedHookNode.from_dict(dct)
elif resource_type == 'seed':
return ParsedSeedNode.from_dict(dct)
elif resource_type == 'snapshot':
return ParsedSnapshotNode.from_dict(dct)
else:
return cls.from_dict(dct)
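As the comment above notes, the serialized dicts do not identify their concrete class on their own, so _deserialize keys off resource_type, and for tests off the presence of test_metadata. A hedged restatement of that dispatch rule as a standalone helper, for illustration only; it mirrors the branches above rather than adding behaviour:

# Illustrative only: the same dispatch rule, written as a plain function.
def concrete_class_for(dct):
    rt = dct['resource_type']
    if rt == 'test':
        # schema tests carry test_metadata; data tests do not
        return ParsedSchemaTestNode if 'test_metadata' in dct else ParsedDataTestNode
    return {
        'model': ParsedModelNode,
        'analysis': ParsedAnalysisNode,
        'seed': ParsedSeedNode,
        'rpc': ParsedRPCNode,
        'operation': ParsedHookNode,
        'snapshot': ParsedSnapshotNode,
    }.get(rt, ParsedNode)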
def _persist_column_docs(self) -> bool:
return bool(self.config.persist_docs.get("columns"))
if hasattr(self.config, 'persist_docs'):
assert isinstance(self.config, NodeConfig)
return bool(self.config.persist_docs.get('columns'))
return False
def _persist_relation_docs(self) -> bool:
return bool(self.config.persist_docs.get("relation"))
if hasattr(self.config, 'persist_docs'):
assert isinstance(self.config, NodeConfig)
return bool(self.config.persist_docs.get('relation'))
return False
def same_body(self: T, other: T) -> bool:
return self.raw_sql == other.raw_sql
@@ -221,7 +286,9 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins):
if self._persist_column_docs():
# assert other._persist_column_docs()
column_descriptions = {k: v.description for k, v in self.columns.items()}
column_descriptions = {
k: v.description for k, v in self.columns.items()
}
other_column_descriptions = {
k: v.description for k, v in other.columns.items()
}
@@ -235,7 +302,7 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins):
# compares the configured value, rather than the ultimate value (so
# generate_*_name and unset values derived from the target are
# ignored)
keys = ("database", "schema", "alias")
keys = ('database', 'schema', 'alias')
for key in keys:
mine = self.unrendered_config.get(key)
others = other.unrendered_config.get(key)
@@ -254,34 +321,36 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins):
return False
return (
self.same_body(old)
and self.same_config(old)
and self.same_persisted_description(old)
and self.same_fqn(old)
and self.same_database_representation(old)
and True
self.same_body(old) and
self.same_config(old) and
self.same_persisted_description(old) and
self.same_fqn(old) and
self.same_database_representation(old) and
True
)
@dataclass
class ParsedAnalysisNode(ParsedNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Analysis]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Analysis]})
@dataclass
class ParsedHookNode(ParsedNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Operation]})
resource_type: NodeType = field(
metadata={'restrict': [NodeType.Operation]}
)
index: Optional[int] = None
@dataclass
class ParsedModelNode(ParsedNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Model]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Model]})
@dataclass
class ParsedRPCNode(ParsedNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.RPCCall]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.RPCCall]})
def same_seeds(first: ParsedNode, second: ParsedNode) -> bool:
@@ -291,31 +360,31 @@ def same_seeds(first: ParsedNode, second: ParsedNode) -> bool:
# if the current checksum is a path, we want to log a warning.
result = first.checksum == second.checksum
if first.checksum.name == "path":
if first.checksum.name == 'path':
msg: str
if second.checksum.name != "path":
if second.checksum.name != 'path':
msg = (
f"Found a seed ({first.package_name}.{first.name}) "
f">{MAXIMUM_SEED_SIZE_NAME} in size. The previous file was "
f"<={MAXIMUM_SEED_SIZE_NAME}, so it has changed"
f'Found a seed ({first.package_name}.{first.name}) '
f'>{MAXIMUM_SEED_SIZE_NAME} in size. The previous file was '
f'<={MAXIMUM_SEED_SIZE_NAME}, so it has changed'
)
elif result:
msg = (
f"Found a seed ({first.package_name}.{first.name}) "
f">{MAXIMUM_SEED_SIZE_NAME} in size at the same path, dbt "
f"cannot tell if it has changed: assuming they are the same"
f'Found a seed ({first.package_name}.{first.name}) '
f'>{MAXIMUM_SEED_SIZE_NAME} in size at the same path, dbt '
f'cannot tell if it has changed: assuming they are the same'
)
elif not result:
msg = (
f"Found a seed ({first.package_name}.{first.name}) "
f">{MAXIMUM_SEED_SIZE_NAME} in size. The previous file was in "
f"a different location, assuming it has changed"
f'Found a seed ({first.package_name}.{first.name}) '
f'>{MAXIMUM_SEED_SIZE_NAME} in size. The previous file was in '
f'a different location, assuming it has changed'
)
else:
msg = (
f"Found a seed ({first.package_name}.{first.name}) "
f">{MAXIMUM_SEED_SIZE_NAME} in size. The previous file had a "
f"checksum type of {second.checksum.name}, so it has changed"
f'Found a seed ({first.package_name}.{first.name}) '
f'>{MAXIMUM_SEED_SIZE_NAME} in size. The previous file had a '
f'checksum type of {second.checksum.name}, so it has changed'
)
warn_or_error(msg, node=first)
@@ -325,7 +394,7 @@ def same_seeds(first: ParsedNode, second: ParsedNode) -> bool:
@dataclass
class ParsedSeedNode(ParsedNode):
# keep this in sync with CompiledSeedNode!
resource_type: NodeType = field(metadata={"restrict": [NodeType.Seed]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Seed]})
config: SeedConfig = field(default_factory=SeedConfig)
@property
@@ -351,30 +420,30 @@ class HasTestMetadata(dbtClassMixin):
@dataclass
class ParsedDataTestNode(ParsedNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Test]})
config: TestConfig = field(default_factory=TestConfig)
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type: ignore
@dataclass
class ParsedSchemaTestNode(ParsedNode, HasTestMetadata):
# keep this in sync with CompiledSchemaTestNode!
resource_type: NodeType = field(metadata={"restrict": [NodeType.Test]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
column_name: Optional[str] = None
config: TestConfig = field(default_factory=TestConfig)
def same_config(self, other) -> bool:
return self.unrendered_config.get("severity") == other.unrendered_config.get(
"severity"
)
def same_column_name(self, other) -> bool:
return self.column_name == other.column_name
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type: ignore
def same_contents(self, other) -> bool:
if other is None:
return False
return self.same_config(other) and self.same_fqn(other) and True
return (
self.same_config(other) and
self.same_fqn(other) and
True
)
@dataclass
@@ -385,13 +454,13 @@ class IntermediateSnapshotNode(ParsedNode):
# defined in config blocks. To fix that, we have an intermediate type that
# uses a regular node config, which the snapshot parser will then convert
# into a full ParsedSnapshotNode after rendering.
resource_type: NodeType = field(metadata={"restrict": [NodeType.Snapshot]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Snapshot]})
config: EmptySnapshotConfig = field(default_factory=EmptySnapshotConfig)
@dataclass
class ParsedSnapshotNode(ParsedNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Snapshot]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Snapshot]})
config: SnapshotConfig
@@ -401,6 +470,7 @@ class ParsedPatch(HasYamlMetadata, Replaceable):
description: str
meta: Dict[str, Any]
docs: Docs
config: Dict[str, Any]
# The parsed node update is only the 'patch', not the test. The test became a
@@ -420,23 +490,22 @@ class ParsedMacroPatch(ParsedPatch):
class ParsedMacro(UnparsedBaseNode, HasUniqueID):
name: str
macro_sql: str
resource_type: NodeType = field(metadata={"restrict": [NodeType.Macro]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Macro]})
# TODO: can macros even have tags?
tags: List[str] = field(default_factory=list)
# TODO: is this ever populated?
depends_on: MacroDependsOn = field(default_factory=MacroDependsOn)
description: str = ""
description: str = ''
meta: Dict[str, Any] = field(default_factory=dict)
docs: Docs = field(default_factory=Docs)
patch_path: Optional[str] = None
arguments: List[MacroArgument] = field(default_factory=list)
def local_vars(self):
return {}
created_at: int = field(default_factory=lambda: int(time.time()))
def patch(self, patch: ParsedMacroPatch):
self.patch_path: Optional[str] = patch.original_file_path
self.patch_path: Optional[str] = patch.file_id
self.description = patch.description
self.created_at = int(time.time())
self.meta = patch.meta
self.docs = patch.docs
self.arguments = patch.arguments
@@ -446,7 +515,7 @@ class ParsedMacro(UnparsedBaseNode, HasUniqueID):
dct = self.to_dict(omit_none=False)
self.validate(dct)
def same_contents(self, other: Optional["ParsedMacro"]) -> bool:
def same_contents(self, other: Optional['ParsedMacro']) -> bool:
if other is None:
return False
# the only thing that makes one macro different from another with the
@@ -463,7 +532,7 @@ class ParsedDocumentation(UnparsedDocumentation, HasUniqueID):
def search_name(self):
return self.name
def same_contents(self, other: Optional["ParsedDocumentation"]) -> bool:
def same_contents(self, other: Optional['ParsedDocumentation']) -> bool:
if other is None:
return False
# the only thing that makes one doc different from another with the
@@ -482,11 +551,11 @@ def normalize_test(testdef: TestDef) -> Dict[str, Any]:
class UnpatchedSourceDefinition(UnparsedBaseNode, HasUniqueID, HasFqn):
source: UnparsedSourceDefinition
table: UnparsedSourceTableDefinition
resource_type: NodeType = field(metadata={"restrict": [NodeType.Source]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Source]})
patch_path: Optional[Path] = None
def get_full_source_name(self):
return f"{self.source.name}_{self.table.name}"
return f'{self.source.name}_{self.table.name}'
def get_source_representation(self):
return f'source("{self.source.name}", "{self.table.name}")'
@@ -511,7 +580,9 @@ class UnpatchedSourceDefinition(UnparsedBaseNode, HasUniqueID, HasFqn):
else:
return self.table.columns
def get_tests(self) -> Iterator[Tuple[Dict[str, Any], Optional[UnparsedColumn]]]:
def get_tests(
self
) -> Iterator[Tuple[Dict[str, Any], Optional[UnparsedColumn]]]:
for test in self.tests:
yield normalize_test(test), None
@@ -530,19 +601,23 @@ class UnpatchedSourceDefinition(UnparsedBaseNode, HasUniqueID, HasFqn):
@dataclass
class ParsedSourceDefinition(
UnparsedBaseNode, HasUniqueID, HasRelationMetadata, HasFqn
UnparsedBaseNode,
HasUniqueID,
HasRelationMetadata,
HasFqn,
):
name: str
source_name: str
source_description: str
loader: str
identifier: str
resource_type: NodeType = field(metadata={"restrict": [NodeType.Source]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Source]})
quoting: Quoting = field(default_factory=Quoting)
loaded_at_field: Optional[str] = None
freshness: Optional[FreshnessThreshold] = None
external: Optional[ExternalTable] = None
description: str = ""
description: str = ''
columns: Dict[str, ColumnInfo] = field(default_factory=dict)
meta: Dict[str, Any] = field(default_factory=dict)
source_meta: Dict[str, Any] = field(default_factory=dict)
@@ -551,35 +626,38 @@ class ParsedSourceDefinition(
patch_path: Optional[Path] = None
unrendered_config: Dict[str, Any] = field(default_factory=dict)
relation_name: Optional[str] = None
created_at: int = field(default_factory=lambda: int(time.time()))
def same_database_representation(self, other: "ParsedSourceDefinition") -> bool:
def same_database_representation(
self, other: 'ParsedSourceDefinition'
) -> bool:
return (
self.database == other.database
and self.schema == other.schema
and self.identifier == other.identifier
and True
self.database == other.database and
self.schema == other.schema and
self.identifier == other.identifier and
True
)
def same_quoting(self, other: "ParsedSourceDefinition") -> bool:
def same_quoting(self, other: 'ParsedSourceDefinition') -> bool:
return self.quoting == other.quoting
def same_freshness(self, other: "ParsedSourceDefinition") -> bool:
def same_freshness(self, other: 'ParsedSourceDefinition') -> bool:
return (
self.freshness == other.freshness
and self.loaded_at_field == other.loaded_at_field
and True
self.freshness == other.freshness and
self.loaded_at_field == other.loaded_at_field and
True
)
def same_external(self, other: "ParsedSourceDefinition") -> bool:
def same_external(self, other: 'ParsedSourceDefinition') -> bool:
return self.external == other.external
def same_config(self, old: "ParsedSourceDefinition") -> bool:
def same_config(self, old: 'ParsedSourceDefinition') -> bool:
return self.config.same_contents(
self.unrendered_config,
old.unrendered_config,
)
def same_contents(self, old: Optional["ParsedSourceDefinition"]) -> bool:
def same_contents(self, old: Optional['ParsedSourceDefinition']) -> bool:
# existing when it didn't before is a change!
if old is None:
return True
@@ -593,17 +671,17 @@ class ParsedSourceDefinition(
# metadata/tags changes are not "changes"
# patching/description changes are not "changes"
return (
self.same_database_representation(old)
and self.same_fqn(old)
and self.same_config(old)
and self.same_quoting(old)
and self.same_freshness(old)
and self.same_external(old)
and True
self.same_database_representation(old) and
self.same_fqn(old) and
self.same_config(old) and
self.same_quoting(old) and
self.same_freshness(old) and
self.same_external(old) and
True
)
def get_full_source_name(self):
return f"{self.source_name}_{self.name}"
return f'{self.source_name}_{self.name}'
def get_source_representation(self):
return f'source("{self.source.name}", "{self.table.name}")'
@@ -624,6 +702,10 @@ class ParsedSourceDefinition(
def depends_on_nodes(self):
return []
@property
def depends_on(self):
return DependsOn(macros=[], nodes=[])
@property
def refs(self):
return []
@@ -638,7 +720,7 @@ class ParsedSourceDefinition(
@property
def search_name(self):
return f"{self.source_name}.{self.name}"
return f'{self.source_name}.{self.name}'
@dataclass
@@ -647,12 +729,15 @@ class ParsedExposure(UnparsedBaseNode, HasUniqueID, HasFqn):
type: ExposureType
owner: ExposureOwner
resource_type: NodeType = NodeType.Exposure
description: str = ""
description: str = ''
maturity: Optional[MaturityType] = None
meta: Dict[str, Any] = field(default_factory=dict)
tags: List[str] = field(default_factory=list)
url: Optional[str] = None
depends_on: DependsOn = field(default_factory=DependsOn)
refs: List[List[str]] = field(default_factory=list)
sources: List[List[str]] = field(default_factory=list)
created_at: int = field(default_factory=lambda: int(time.time()))
@property
def depends_on_nodes(self):
@@ -662,46 +747,54 @@ class ParsedExposure(UnparsedBaseNode, HasUniqueID, HasFqn):
def search_name(self):
return self.name
# no tags for now, but we could definitely add them
@property
def tags(self):
return []
def same_depends_on(self, old: "ParsedExposure") -> bool:
def same_depends_on(self, old: 'ParsedExposure') -> bool:
return set(self.depends_on.nodes) == set(old.depends_on.nodes)
def same_description(self, old: "ParsedExposure") -> bool:
def same_description(self, old: 'ParsedExposure') -> bool:
return self.description == old.description
def same_maturity(self, old: "ParsedExposure") -> bool:
def same_maturity(self, old: 'ParsedExposure') -> bool:
return self.maturity == old.maturity
def same_owner(self, old: "ParsedExposure") -> bool:
def same_owner(self, old: 'ParsedExposure') -> bool:
return self.owner == old.owner
def same_exposure_type(self, old: "ParsedExposure") -> bool:
def same_exposure_type(self, old: 'ParsedExposure') -> bool:
return self.type == old.type
def same_url(self, old: "ParsedExposure") -> bool:
def same_url(self, old: 'ParsedExposure') -> bool:
return self.url == old.url
def same_contents(self, old: Optional["ParsedExposure"]) -> bool:
def same_contents(self, old: Optional['ParsedExposure']) -> bool:
# existing when it didn't before is a change!
# metadata/tags changes are not "changes"
if old is None:
return True
return (
self.same_fqn(old)
and self.same_exposure_type(old)
and self.same_owner(old)
and self.same_maturity(old)
and self.same_url(old)
and self.same_description(old)
and self.same_depends_on(old)
and True
self.same_fqn(old) and
self.same_exposure_type(old) and
self.same_owner(old) and
self.same_maturity(old) and
self.same_url(old) and
self.same_description(old) and
self.same_depends_on(old) and
True
)
ManifestNodes = Union[
ParsedAnalysisNode,
ParsedDataTestNode,
ParsedHookNode,
ParsedModelNode,
ParsedRPCNode,
ParsedSchemaTestNode,
ParsedSeedNode,
ParsedSnapshotNode,
]
ParsedResource = Union[
ParsedDocumentation,
ParsedMacro,

View File

@@ -4,12 +4,13 @@ from dbt.contracts.util import (
Mergeable,
Replaceable,
)
# trigger the PathEncoder
import dbt.helper_types # noqa:F401
from dbt.exceptions import CompilationException
from dbt.dataclass_schema import dbtClassMixin, StrEnum, ExtensibleDbtClassMixin
from dbt.dataclass_schema import (
dbtClassMixin, StrEnum, ExtensibleDbtClassMixin
)
from dataclasses import dataclass, field
from datetime import timedelta
@@ -24,6 +25,10 @@ class UnparsedBaseNode(dbtClassMixin, Replaceable):
path: str
original_file_path: str
@property
def file_id(self):
return f'{self.package_name}://{self.original_file_path}'
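The new file_id property gives each unparsed node a stable key of the form package://original_file_path, which partial parsing can use to tie patches back to the file they came from. A small example of the resulting string, with made-up package and path values:

# Illustrative only: shape of the file_id key (values are placeholders).
package_name = 'my_project'
original_file_path = 'models/staging/stg_orders.sql'
print(f'{package_name}://{original_file_path}')
# -> my_project://models/staging/stg_orders.sql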
@dataclass
class HasSQL:
@@ -36,25 +41,21 @@ class HasSQL:
@dataclass
class UnparsedMacro(UnparsedBaseNode, HasSQL):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Macro]})
resource_type: NodeType = field(metadata={'restrict': [NodeType.Macro]})
@dataclass
class UnparsedNode(UnparsedBaseNode, HasSQL):
name: str
resource_type: NodeType = field(
metadata={
"restrict": [
NodeType.Model,
NodeType.Analysis,
NodeType.Test,
NodeType.Snapshot,
NodeType.Operation,
NodeType.Seed,
NodeType.RPCCall,
]
}
)
resource_type: NodeType = field(metadata={'restrict': [
NodeType.Model,
NodeType.Analysis,
NodeType.Test,
NodeType.Snapshot,
NodeType.Operation,
NodeType.Seed,
NodeType.RPCCall,
]})
@property
def search_name(self):
@@ -63,7 +64,9 @@ class UnparsedNode(UnparsedBaseNode, HasSQL):
@dataclass
class UnparsedRunHook(UnparsedNode):
resource_type: NodeType = field(metadata={"restrict": [NodeType.Operation]})
resource_type: NodeType = field(
metadata={'restrict': [NodeType.Operation]}
)
index: Optional[int] = None
@@ -73,9 +76,10 @@ class Docs(dbtClassMixin, Replaceable):
@dataclass
class HasDocs(AdditionalPropertiesMixin, ExtensibleDbtClassMixin, Replaceable):
class HasDocs(AdditionalPropertiesMixin, ExtensibleDbtClassMixin,
Replaceable):
name: str
description: str = ""
description: str = ''
meta: Dict[str, Any] = field(default_factory=dict)
data_type: Optional[str] = None
docs: Docs = field(default_factory=Docs)
@@ -116,14 +120,23 @@ class HasYamlMetadata(dbtClassMixin):
yaml_key: str
package_name: str
@property
def file_id(self):
return f'{self.package_name}://{self.original_file_path}'
@dataclass
class UnparsedAnalysisUpdate(HasColumnDocs, HasDocs, HasYamlMetadata):
class HasConfig():
config: Dict[str, Any] = field(default_factory=dict)
@dataclass
class UnparsedAnalysisUpdate(HasConfig, HasColumnDocs, HasDocs, HasYamlMetadata):
pass
@dataclass
class UnparsedNodeUpdate(HasColumnTests, HasTests, HasYamlMetadata):
class UnparsedNodeUpdate(HasConfig, HasColumnTests, HasTests, HasYamlMetadata):
quote_columns: Optional[bool] = None
@@ -131,21 +144,21 @@ class UnparsedNodeUpdate(HasColumnTests, HasTests, HasYamlMetadata):
class MacroArgument(dbtClassMixin):
name: str
type: Optional[str] = None
description: str = ""
description: str = ''
@dataclass
class UnparsedMacroUpdate(HasDocs, HasYamlMetadata):
class UnparsedMacroUpdate(HasConfig, HasDocs, HasYamlMetadata):
arguments: List[MacroArgument] = field(default_factory=list)
class TimePeriod(StrEnum):
minute = "minute"
hour = "hour"
day = "day"
minute = 'minute'
hour = 'hour'
day = 'day'
def plural(self) -> str:
return str(self) + "s"
return str(self) + 's'
@dataclass
@@ -167,7 +180,6 @@ class FreshnessThreshold(dbtClassMixin, Mergeable):
def status(self, age: float) -> "dbt.contracts.results.FreshnessStatus":
from dbt.contracts.results import FreshnessStatus
if self.error_after and self.error_after.exceeded(age):
return FreshnessStatus.Error
elif self.warn_after and self.warn_after.exceeded(age):
@@ -180,21 +192,24 @@ class FreshnessThreshold(dbtClassMixin, Mergeable):
@dataclass
class AdditionalPropertiesAllowed(AdditionalPropertiesMixin, ExtensibleDbtClassMixin):
class AdditionalPropertiesAllowed(
AdditionalPropertiesMixin,
ExtensibleDbtClassMixin
):
_extra: Dict[str, Any] = field(default_factory=dict)
@dataclass
class ExternalPartition(AdditionalPropertiesAllowed, Replaceable):
name: str = ""
description: str = ""
data_type: str = ""
name: str = ''
description: str = ''
data_type: str = ''
meta: Dict[str, Any] = field(default_factory=dict)
def __post_init__(self):
if self.name == "" or self.data_type == "":
if self.name == '' or self.data_type == '':
raise CompilationException(
"External partition columns must have names and data types"
'External partition columns must have names and data types'
)
@@ -223,39 +238,44 @@ class UnparsedSourceTableDefinition(HasColumnTests, HasTests):
loaded_at_field: Optional[str] = None
identifier: Optional[str] = None
quoting: Quoting = field(default_factory=Quoting)
freshness: Optional[FreshnessThreshold] = field(default_factory=FreshnessThreshold)
freshness: Optional[FreshnessThreshold] = field(
default_factory=FreshnessThreshold
)
external: Optional[ExternalTable] = None
tags: List[str] = field(default_factory=list)
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
if "freshness" not in dct and self.freshness is None:
dct["freshness"] = None
if 'freshness' not in dct and self.freshness is None:
dct['freshness'] = None
return dct
@dataclass
class UnparsedSourceDefinition(dbtClassMixin, Replaceable):
name: str
description: str = ""
description: str = ''
meta: Dict[str, Any] = field(default_factory=dict)
database: Optional[str] = None
schema: Optional[str] = None
loader: str = ""
loader: str = ''
quoting: Quoting = field(default_factory=Quoting)
freshness: Optional[FreshnessThreshold] = field(default_factory=FreshnessThreshold)
freshness: Optional[FreshnessThreshold] = field(
default_factory=FreshnessThreshold
)
loaded_at_field: Optional[str] = None
tables: List[UnparsedSourceTableDefinition] = field(default_factory=list)
tags: List[str] = field(default_factory=list)
config: Dict[str, Any] = field(default_factory=dict)
@property
def yaml_key(self) -> "str":
return "sources"
def yaml_key(self) -> 'str':
return 'sources'
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
if "freshnewss" not in dct and self.freshness is None:
dct["freshness"] = None
if 'freshnewss' not in dct and self.freshness is None:
dct['freshness'] = None
return dct
@@ -269,7 +289,9 @@ class SourceTablePatch(dbtClassMixin):
loaded_at_field: Optional[str] = None
identifier: Optional[str] = None
quoting: Quoting = field(default_factory=Quoting)
freshness: Optional[FreshnessThreshold] = field(default_factory=FreshnessThreshold)
freshness: Optional[FreshnessThreshold] = field(
default_factory=FreshnessThreshold
)
external: Optional[ExternalTable] = None
tags: Optional[List[str]] = None
tests: Optional[List[TestDef]] = None
@@ -277,13 +299,13 @@ class SourceTablePatch(dbtClassMixin):
def to_patch_dict(self) -> Dict[str, Any]:
dct = self.to_dict(omit_none=True)
remove_keys = "name"
remove_keys = ('name')
for key in remove_keys:
if key in dct:
del dct[key]
if self.freshness is None:
dct["freshness"] = None
dct['freshness'] = None
return dct
@@ -291,13 +313,13 @@ class SourceTablePatch(dbtClassMixin):
@dataclass
class SourcePatch(dbtClassMixin, Replaceable):
name: str = field(
metadata=dict(description="The name of the source to override"),
metadata=dict(description='The name of the source to override'),
)
overrides: str = field(
metadata=dict(description="The package of the source to override"),
metadata=dict(description='The package of the source to override'),
)
path: Path = field(
metadata=dict(description="The path to the patch-defining yml file"),
metadata=dict(description='The path to the patch-defining yml file'),
)
description: Optional[str] = None
meta: Optional[Dict[str, Any]] = None
@@ -314,13 +336,13 @@ class SourcePatch(dbtClassMixin, Replaceable):
def to_patch_dict(self) -> Dict[str, Any]:
dct = self.to_dict(omit_none=True)
remove_keys = ("name", "overrides", "tables", "path")
remove_keys = ('name', 'overrides', 'tables', 'path')
for key in remove_keys:
if key in dct:
del dct[key]
if self.freshness is None:
dct["freshness"] = None
dct['freshness'] = None
return dct
@@ -339,6 +361,10 @@ class UnparsedDocumentation(dbtClassMixin, Replaceable):
path: str
original_file_path: str
@property
def file_id(self):
return f'{self.package_name}://{self.original_file_path}'
@property
def resource_type(self):
return NodeType.Documentation
@@ -352,9 +378,9 @@ class UnparsedDocumentationFile(UnparsedDocumentation):
# can't use total_ordering decorator here, as str provides an ordering already
# and it's not the one we want.
class Maturity(StrEnum):
low = "low"
medium = "medium"
high = "high"
low = 'low'
medium = 'medium'
high = 'high'
def __lt__(self, other):
if not isinstance(other, Maturity):
@@ -379,17 +405,17 @@ class Maturity(StrEnum):
class ExposureType(StrEnum):
Dashboard = "dashboard"
Notebook = "notebook"
Analysis = "analysis"
ML = "ml"
Application = "application"
Dashboard = 'dashboard'
Notebook = 'notebook'
Analysis = 'analysis'
ML = 'ml'
Application = 'application'
class MaturityType(StrEnum):
Low = "low"
Medium = "medium"
High = "high"
Low = 'low'
Medium = 'medium'
High = 'high'
@dataclass
@@ -403,7 +429,9 @@ class UnparsedExposure(dbtClassMixin, Replaceable):
name: str
type: ExposureType
owner: ExposureOwner
description: str = ""
description: str = ''
maturity: Optional[MaturityType] = None
meta: Dict[str, Any] = field(default_factory=dict)
tags: List[str] = field(default_factory=list)
url: Optional[str] = None
depends_on: List[str] = field(default_factory=list)

View File

@@ -5,26 +5,24 @@ from dbt.logger import GLOBAL_LOGGER as logger # noqa
from dbt import tracking
from dbt import ui
from dbt.dataclass_schema import (
dbtClassMixin,
ValidationError,
dbtClassMixin, ValidationError,
HyphenatedDbtClassMixin,
ExtensibleDbtClassMixin,
register_pattern,
ValidatedStringMixin,
register_pattern, ValidatedStringMixin
)
from dataclasses import dataclass, field
from typing import Optional, List, Dict, Union, Any
from mashumaro.types import SerializableType
PIN_PACKAGE_URL = "https://docs.getdbt.com/docs/package-management#section-specifying-package-versions" # noqa
PIN_PACKAGE_URL = 'https://docs.getdbt.com/docs/package-management#section-specifying-package-versions' # noqa
DEFAULT_SEND_ANONYMOUS_USAGE_STATS = True
class Name(ValidatedStringMixin):
ValidationRegex = r"^[^\d\W]\w*$"
ValidationRegex = r'^[^\d\W]\w*$'
register_pattern(Name, r"^[^\d\W]\w*$")
register_pattern(Name, r'^[^\d\W]\w*$')
class SemverString(str, SerializableType):
@@ -32,7 +30,7 @@ class SemverString(str, SerializableType):
return self
@classmethod
def _deserialize(cls, value: str) -> "SemverString":
def _deserialize(cls, value: str) -> 'SemverString':
return SemverString(value)
@@ -41,7 +39,7 @@ class SemverString(str, SerializableType):
# 'semver lite'.
register_pattern(
SemverString,
r"^(?:0|[1-9]\d*)\.(?:0|[1-9]\d*)(\.(?:0|[1-9]\d*))?$",
r'^(?:0|[1-9]\d*)\.(?:0|[1-9]\d*)(\.(?:0|[1-9]\d*))?$',
)
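The 'semver lite' pattern registered above accepts only two- or three-part numeric versions. A quick check of what it matches, applying the same regex directly with re; the candidate strings are made up for illustration:

import re

SEMVER_LITE = r'^(?:0|[1-9]\d*)\.(?:0|[1-9]\d*)(\.(?:0|[1-9]\d*))?$'
for candidate in ('0.20', '0.20.1', '1.0.0rc1', 'v1.0', '01.2'):
    print(candidate, bool(re.match(SEMVER_LITE, candidate)))
# Only '0.20' and '0.20.1' match; prerelease suffixes, 'v' prefixes and
# zero-padded parts are all rejected by this pattern.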
@@ -72,6 +70,7 @@ class GitPackage(Package):
git: str
revision: Optional[RawVersion] = None
warn_unpinned: Optional[bool] = None
subdirectory: Optional[str] = None
def get_revisions(self) -> List[str]:
if self.revision is None:
@@ -84,6 +83,7 @@ class GitPackage(Package):
class RegistryPackage(Package):
package: str
version: Union[RawVersion, List[RawVersion]]
install_prerelease: Optional[bool] = False
def get_versions(self) -> List[str]:
if isinstance(self.version, list):
@@ -107,7 +107,8 @@ class ProjectPackageMetadata:
@classmethod
def from_project(cls, project):
return cls(name=project.project_name, packages=project.packages.packages)
return cls(name=project.project_name,
packages=project.packages.packages)
@dataclass
@@ -125,46 +126,46 @@ class RegistryPackageMetadata(
# A list of all the reserved words that packages may not have as names.
BANNED_PROJECT_NAMES = {
"_sql_results",
"adapter",
"api",
"column",
"config",
"context",
"database",
"env",
"env_var",
"exceptions",
"execute",
"flags",
"fromjson",
"fromyaml",
"graph",
"invocation_id",
"load_agate_table",
"load_result",
"log",
"model",
"modules",
"post_hooks",
"pre_hooks",
"ref",
"render",
"return",
"run_started_at",
"schema",
"source",
"sql",
"sql_now",
"store_result",
"store_raw_result",
"target",
"this",
"tojson",
"toyaml",
"try_or_compiler_error",
"var",
"write",
'_sql_results',
'adapter',
'api',
'column',
'config',
'context',
'database',
'env',
'env_var',
'exceptions',
'execute',
'flags',
'fromjson',
'fromyaml',
'graph',
'invocation_id',
'load_agate_table',
'load_result',
'log',
'model',
'modules',
'post_hooks',
'pre_hooks',
'ref',
'render',
'return',
'run_started_at',
'schema',
'source',
'sql',
'sql_now',
'store_result',
'store_raw_result',
'target',
'this',
'tojson',
'toyaml',
'try_or_compiler_error',
'var',
'write',
}
@@ -191,15 +192,17 @@ class Project(HyphenatedDbtClassMixin, Replaceable):
on_run_start: Optional[List[str]] = field(default_factory=list_str)
on_run_end: Optional[List[str]] = field(default_factory=list_str)
require_dbt_version: Optional[Union[List[str], str]] = None
dispatch: List[Dict[str, Any]] = field(default_factory=list)
models: Dict[str, Any] = field(default_factory=dict)
seeds: Dict[str, Any] = field(default_factory=dict)
snapshots: Dict[str, Any] = field(default_factory=dict)
analyses: Dict[str, Any] = field(default_factory=dict)
sources: Dict[str, Any] = field(default_factory=dict)
tests: Dict[str, Any] = field(default_factory=dict)
vars: Optional[Dict[str, Any]] = field(
default=None,
metadata=dict(
description="map project names to their vars override dicts",
description='map project names to their vars override dicts',
),
)
packages: List[PackageSpec] = field(default_factory=list)
@@ -208,10 +211,17 @@ class Project(HyphenatedDbtClassMixin, Replaceable):
@classmethod
def validate(cls, data):
super().validate(data)
if data["name"] in BANNED_PROJECT_NAMES:
if data['name'] in BANNED_PROJECT_NAMES:
raise ValidationError(
f"Invalid project name: {data['name']} is a reserved word"
)
# validate dispatch config
if 'dispatch' in data and data['dispatch']:
entries = data['dispatch']
for entry in entries:
if ('macro_namespace' not in entry or 'search_order' not in entry or
not isinstance(entry['search_order'], list)):
raise ValidationError(f"Invalid project dispatch config: {entry}")
@dataclass
@@ -236,8 +246,8 @@ class UserConfig(ExtensibleDbtClassMixin, Replaceable, UserConfigContract):
@dataclass
class ProfileConfig(HyphenatedDbtClassMixin, Replaceable):
profile_name: str = field(metadata={"preserve_underscore": True})
target_name: str = field(metadata={"preserve_underscore": True})
profile_name: str = field(metadata={'preserve_underscore': True})
target_name: str = field(metadata={'preserve_underscore': True})
config: UserConfig
threads: int
# TODO: make this a dynamic union of some kind?
@@ -256,7 +266,7 @@ class ConfiguredQuoting(Quoting, Replaceable):
class Configuration(Project, ProfileConfig):
cli_vars: Dict[str, Any] = field(
default_factory=dict,
metadata={"preserve_underscore": True},
metadata={'preserve_underscore': True},
)
quoting: Optional[ConfiguredQuoting] = None

View File

@@ -1,8 +1,7 @@
from collections.abc import Mapping
from dataclasses import dataclass, fields
from typing import (
Optional,
Dict,
Optional, Dict,
)
from typing_extensions import Protocol
@@ -15,17 +14,17 @@ from dbt.utils import deep_merge
class RelationType(StrEnum):
Table = "table"
View = "view"
CTE = "cte"
MaterializedView = "materializedview"
External = "external"
Table = 'table'
View = 'view'
CTE = 'cte'
MaterializedView = 'materializedview'
External = 'external'
class ComponentName(StrEnum):
Database = "database"
Schema = "schema"
Identifier = "identifier"
Database = 'database'
Schema = 'schema'
Identifier = 'identifier'
class HasQuoting(Protocol):
@@ -44,12 +43,12 @@ class FakeAPIObject(dbtClassMixin, Replaceable, Mapping):
raise KeyError(key) from None
def __iter__(self):
deprecations.warn("not-a-dictionary", obj=self)
deprecations.warn('not-a-dictionary', obj=self)
for _, name in self._get_fields():
yield name
def __len__(self):
deprecations.warn("not-a-dictionary", obj=self)
deprecations.warn('not-a-dictionary', obj=self)
return len(fields(self.__class__))
def incorporate(self, **kwargs):
@@ -73,7 +72,8 @@ class Policy(FakeAPIObject):
return self.identifier
else:
raise ValueError(
"Got a key of {}, expected one of {}".format(key, list(ComponentName))
'Got a key of {}, expected one of {}'
.format(key, list(ComponentName))
)
def replace_dict(self, dct: Dict[ComponentName, bool]):
@@ -93,15 +93,15 @@ class Path(FakeAPIObject):
# handle pesky jinja2.Undefined sneaking in here and messing up render
if not isinstance(self.database, (type(None), str)):
raise CompilationException(
"Got an invalid path database: {}".format(self.database)
'Got an invalid path database: {}'.format(self.database)
)
if not isinstance(self.schema, (type(None), str)):
raise CompilationException(
"Got an invalid path schema: {}".format(self.schema)
'Got an invalid path schema: {}'.format(self.schema)
)
if not isinstance(self.identifier, (type(None), str)):
raise CompilationException(
"Got an invalid path identifier: {}".format(self.identifier)
'Got an invalid path identifier: {}'.format(self.identifier)
)
def get_lowered_part(self, key: ComponentName) -> Optional[str]:
@@ -119,7 +119,8 @@ class Path(FakeAPIObject):
return self.identifier
else:
raise ValueError(
"Got a key of {}, expected one of {}".format(key, list(ComponentName))
'Got a key of {}, expected one of {}'
.format(key, list(ComponentName))
)
def replace_dict(self, dct: Dict[ComponentName, str]):

View File

@@ -1,5 +1,7 @@
from dbt.contracts.graph.manifest import CompileResultNode
from dbt.contracts.graph.unparsed import FreshnessThreshold
from dbt.contracts.graph.unparsed import (
FreshnessThreshold
)
from dbt.contracts.graph.parsed import ParsedSourceDefinition
from dbt.contracts.util import (
BaseArtifactMetadata,
@@ -22,13 +24,7 @@ import agate
from dataclasses import dataclass, field
from datetime import datetime
from typing import (
Union,
Dict,
List,
Optional,
Any,
NamedTuple,
Sequence,
Union, Dict, List, Optional, Any, NamedTuple, Sequence,
)
from dbt.clients.system import write_json
@@ -58,7 +54,7 @@ class collect_timing_info:
def __exit__(self, exc_type, exc_value, traceback):
self.timing_info.end()
with JsonOnly(), TimingProcessor(self.timing_info):
logger.debug("finished collecting timing info")
logger.debug('finished collecting timing info')
class NodeStatus(StrEnum):
@@ -82,6 +78,7 @@ class TestStatus(StrEnum):
Error = NodeStatus.Error
Fail = NodeStatus.Fail
Warn = NodeStatus.Warn
Skipped = NodeStatus.Skipped
class FreshnessStatus(StrEnum):
@@ -98,13 +95,16 @@ class BaseResult(dbtClassMixin):
thread_id: str
execution_time: float
adapter_response: Dict[str, Any]
message: Optional[Union[str, int]]
message: Optional[str]
failures: Optional[int]
@classmethod
def __pre_deserialize__(cls, data):
data = super().__pre_deserialize__(data)
if "message" not in data:
data["message"] = None
if 'message' not in data:
data['message'] = None
if 'failures' not in data:
data['failures'] = None
return data
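Back-filling both message and failures in __pre_deserialize__ keeps older serialized results loadable after the new failures field was added. A hedged illustration, calling the hook directly on an empty dict just to show the back-fill; in normal use it runs inside from_dict:

# Illustrative only: missing keys are filled with None before deserialization.
data = BaseResult.__pre_deserialize__({})
print(data['message'], data['failures'])  # None None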
@@ -116,8 +116,9 @@ class NodeResult(BaseResult):
@dataclass
class RunResult(NodeResult):
agate_table: Optional[agate.Table] = field(
default=None,
metadata={"serialize": lambda x: None, "deserialize": lambda x: None},
default=None, metadata={
'serialize': lambda x: None, 'deserialize': lambda x: None
}
)
@property
@@ -161,6 +162,7 @@ def process_run_result(result: RunResult) -> RunResultOutput:
execution_time=result.execution_time,
message=result.message,
adapter_response=result.adapter_response,
failures=result.failures
)
@@ -183,7 +185,7 @@ class RunExecutionResult(
@dataclass
@schema_version("run-results", 1)
@schema_version('run-results', 2)
class RunResultsArtifact(ExecutionResult, ArtifactMixin):
results: Sequence[RunResultOutput]
args: Dict[str, Any] = field(default_factory=dict)
@@ -205,7 +207,7 @@ class RunResultsArtifact(ExecutionResult, ArtifactMixin):
metadata=meta,
results=processed_results,
elapsed_time=elapsed_time,
args=args,
args=args
)
def write(self, path: str):
@@ -219,14 +221,15 @@ class RunOperationResult(ExecutionResult):
@dataclass
class RunOperationResultMetadata(BaseArtifactMetadata):
dbt_schema_version: str = field(
default_factory=lambda: str(RunOperationResultsArtifact.dbt_schema_version)
)
dbt_schema_version: str = field(default_factory=lambda: str(
RunOperationResultsArtifact.dbt_schema_version
))
@dataclass
@schema_version("run-operation-result", 1)
@schema_version('run-operation-result', 1)
class RunOperationResultsArtifact(RunOperationResult, ArtifactMixin):
@classmethod
def from_success(
cls,
@@ -245,7 +248,6 @@ class RunOperationResultsArtifact(RunOperationResult, ArtifactMixin):
success=success,
)
# due to issues with typing.Union collapsing subclasses, this can't subclass
# PartialResult
@@ -264,7 +266,7 @@ class SourceFreshnessResult(NodeResult):
class FreshnessErrorEnum(StrEnum):
runtime_error = "runtime error"
runtime_error = 'runtime error'
@dataclass
@@ -294,11 +296,14 @@ class PartialSourceFreshnessResult(NodeResult):
return False
FreshnessNodeResult = Union[PartialSourceFreshnessResult, SourceFreshnessResult]
FreshnessNodeResult = Union[PartialSourceFreshnessResult,
SourceFreshnessResult]
FreshnessNodeOutput = Union[SourceFreshnessRuntimeError, SourceFreshnessOutput]
def process_freshness_result(result: FreshnessNodeResult) -> FreshnessNodeOutput:
def process_freshness_result(
result: FreshnessNodeResult
) -> FreshnessNodeOutput:
unique_id = result.node.unique_id
if result.status == FreshnessStatus.RuntimeErr:
return SourceFreshnessRuntimeError(
@@ -310,15 +315,16 @@ def process_freshness_result(result: FreshnessNodeResult) -> FreshnessNodeOutput
# we know that this must be a SourceFreshnessResult
if not isinstance(result, SourceFreshnessResult):
raise InternalException(
"Got {} instead of a SourceFreshnessResult for a "
"non-error result in freshness execution!".format(type(result))
'Got {} instead of a SourceFreshnessResult for a '
'non-error result in freshness execution!'
.format(type(result))
)
# if we're here, we must have a non-None freshness threshold
criteria = result.node.freshness
if criteria is None:
raise InternalException(
"Somehow evaluated a freshness result for a source "
"that has no freshness criteria!"
'Somehow evaluated a freshness result for a source '
'that has no freshness criteria!'
)
return SourceFreshnessOutput(
unique_id=unique_id,
@@ -327,14 +333,16 @@ def process_freshness_result(result: FreshnessNodeResult) -> FreshnessNodeOutput
max_loaded_at_time_ago_in_s=result.age,
status=result.status,
criteria=criteria,
adapter_response=result.adapter_response,
adapter_response=result.adapter_response
)
@dataclass
class FreshnessMetadata(BaseArtifactMetadata):
dbt_schema_version: str = field(
default_factory=lambda: str(FreshnessExecutionResultArtifact.dbt_schema_version)
default_factory=lambda: str(
FreshnessExecutionResultArtifact.dbt_schema_version
)
)
@@ -355,7 +363,7 @@ class FreshnessResult(ExecutionResult):
@dataclass
@schema_version("sources", 1)
@schema_version('sources', 1)
class FreshnessExecutionResultArtifact(
ArtifactMixin,
VersionedSchema,
@@ -375,9 +383,11 @@ class FreshnessExecutionResultArtifact(
Primitive = Union[bool, str, float, None]
PrimitiveDict = Dict[str, Primitive]
CatalogKey = NamedTuple(
"CatalogKey", [("database", Optional[str]), ("schema", str), ("name", str)]
'CatalogKey',
[('database', Optional[str]), ('schema', str), ('name', str)]
)
@@ -446,13 +456,13 @@ class CatalogResults(dbtClassMixin):
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
if "_compile_results" in dct:
del dct["_compile_results"]
if '_compile_results' in dct:
del dct['_compile_results']
return dct
@dataclass
@schema_version("catalog", 1)
@schema_version('catalog', 1)
class CatalogArtifact(CatalogResults, ArtifactMixin):
metadata: CatalogMetadata
@@ -463,8 +473,8 @@ class CatalogArtifact(CatalogResults, ArtifactMixin):
nodes: Dict[str, CatalogTable],
sources: Dict[str, CatalogTable],
compile_results: Optional[Any],
errors: Optional[List[str]],
) -> "CatalogArtifact":
errors: Optional[List[str]]
) -> 'CatalogArtifact':
meta = CatalogMetadata(generated_at=generated_at)
return cls(
metadata=meta,

View File

@@ -10,9 +10,7 @@ from dbt.dataclass_schema import dbtClassMixin, StrEnum
from dbt.contracts.graph.compiled import CompileResultNode
from dbt.contracts.graph.manifest import WritableManifest
from dbt.contracts.results import (
RunResult,
RunResultsArtifact,
TimingInfo,
RunResult, RunResultsArtifact, TimingInfo,
CatalogArtifact,
CatalogResults,
ExecutionResult,
@@ -42,10 +40,10 @@ class RPCParameters(dbtClassMixin):
@classmethod
def __pre_deserialize__(cls, data, omit_none=True):
data = super().__pre_deserialize__(data)
if "timeout" not in data:
data["timeout"] = None
if "task_tags" not in data:
data["task_tags"] = None
if 'timeout' not in data:
data['timeout'] = None
if 'task_tags' not in data:
data['task_tags'] = None
return data
@@ -60,15 +58,28 @@ class RPCExecParameters(RPCParameters):
class RPCCompileParameters(RPCParameters):
threads: Optional[int] = None
models: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
@dataclass
class RPCListParameters(RPCParameters):
resource_types: Optional[List[str]] = None
models: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
selector: Optional[str] = None
output: Optional[str] = 'json'
output_keys: Optional[List[str]] = None
@dataclass
class RPCRunParameters(RPCParameters):
threads: Optional[int] = None
models: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
@@ -108,6 +119,17 @@ class RPCDocsGenerateParameters(RPCParameters):
state: Optional[str] = None
@dataclass
class RPCBuildParameters(RPCParameters):
threads: Optional[int] = None
models: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
defer: Optional[bool] = None
@dataclass
class RPCCliParameters(RPCParameters):
cli: str
@@ -163,7 +185,6 @@ class GCParameters(RPCParameters):
will be applied to the task manager before GC starts. By default the
existing gc settings remain.
"""
task_ids: Optional[List[TaskID]] = None
before: Optional[datetime] = None
settings: Optional[GCSettings] = None
@@ -179,13 +200,14 @@ class RPCRunOperationParameters(RPCParameters):
class RPCSourceFreshnessParameters(RPCParameters):
threads: Optional[int] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
@dataclass
class GetManifestParameters(RPCParameters):
pass
# Outputs
@@ -195,13 +217,20 @@ class RemoteResult(VersionedSchema):
@dataclass
@schema_version("remote-deps-result", 1)
@schema_version('remote-list-results', 1)
class RemoteListResults(RemoteResult):
output: List[Any]
generated_at: datetime = field(default_factory=datetime.utcnow)
@dataclass
@schema_version('remote-deps-result', 1)
class RemoteDepsResult(RemoteResult):
generated_at: datetime = field(default_factory=datetime.utcnow)
@dataclass
@schema_version("remote-catalog-result", 1)
@schema_version('remote-catalog-result', 1)
class RemoteCatalogResults(CatalogResults, RemoteResult):
generated_at: datetime = field(default_factory=datetime.utcnow)
@@ -225,7 +254,7 @@ class RemoteCompileResultMixin(RemoteResult):
@dataclass
@schema_version("remote-compile-result", 1)
@schema_version('remote-compile-result', 1)
class RemoteCompileResult(RemoteCompileResultMixin):
generated_at: datetime = field(default_factory=datetime.utcnow)
@@ -235,7 +264,7 @@ class RemoteCompileResult(RemoteCompileResultMixin):
@dataclass
@schema_version("remote-execution-result", 1)
@schema_version('remote-execution-result', 1)
class RemoteExecutionResult(ExecutionResult, RemoteResult):
results: Sequence[RunResult]
args: Dict[str, Any] = field(default_factory=dict)
@@ -255,7 +284,7 @@ class RemoteExecutionResult(ExecutionResult, RemoteResult):
cls,
base: RunExecutionResult,
logs: List[LogMessage],
) -> "RemoteExecutionResult":
) -> 'RemoteExecutionResult':
return cls(
generated_at=base.generated_at,
results=base.results,
@@ -272,7 +301,7 @@ class ResultTable(dbtClassMixin):
@dataclass
@schema_version("remote-run-operation-result", 1)
@schema_version('remote-run-operation-result', 1)
class RemoteRunOperationResult(RunOperationResult, RemoteResult):
generated_at: datetime = field(default_factory=datetime.utcnow)
@@ -281,7 +310,7 @@ class RemoteRunOperationResult(RunOperationResult, RemoteResult):
cls,
base: RunOperationResultsArtifact,
logs: List[LogMessage],
) -> "RemoteRunOperationResult":
) -> 'RemoteRunOperationResult':
return cls(
generated_at=base.metadata.generated_at,
results=base.results,
@@ -300,14 +329,15 @@ class RemoteRunOperationResult(RunOperationResult, RemoteResult):
@dataclass
@schema_version("remote-freshness-result", 1)
@schema_version('remote-freshness-result', 1)
class RemoteFreshnessResult(FreshnessResult, RemoteResult):
@classmethod
def from_local_result(
cls,
base: FreshnessResult,
logs: List[LogMessage],
) -> "RemoteFreshnessResult":
) -> 'RemoteFreshnessResult':
return cls(
metadata=base.metadata,
results=base.results,
@@ -321,7 +351,7 @@ class RemoteFreshnessResult(FreshnessResult, RemoteResult):
@dataclass
@schema_version("remote-run-result", 1)
@schema_version('remote-run-result', 1)
class RemoteRunResult(RemoteCompileResultMixin):
table: ResultTable
generated_at: datetime = field(default_factory=datetime.utcnow)
@@ -339,15 +369,14 @@ RPCResult = Union[
# GC types
class GCResultState(StrEnum):
Deleted = "deleted" # successful GC
Missing = "missing" # nothing to GC
Running = "running" # can't GC
Deleted = 'deleted' # successful GC
Missing = 'missing' # nothing to GC
Running = 'running' # can't GC
@dataclass
@schema_version("remote-gc-result", 1)
@schema_version('remote-gc-result', 1)
class GCResult(RemoteResult):
logs: List[LogMessage] = field(default_factory=list)
deleted: List[TaskID] = field(default_factory=list)
@@ -362,20 +391,21 @@ class GCResult(RemoteResult):
elif state == GCResultState.Deleted:
self.deleted.append(task_id)
else:
raise InternalException(f"Got invalid state in add_result: {state}")
raise InternalException(
f'Got invalid state in add_result: {state}'
)
# Task management types
class TaskHandlerState(StrEnum):
NotStarted = "not started"
Initializing = "initializing"
Running = "running"
Success = "success"
Error = "error"
Killed = "killed"
Failed = "failed"
NotStarted = 'not started'
Initializing = 'initializing'
Running = 'running'
Success = 'success'
Error = 'error'
Killed = 'killed'
Failed = 'failed'
def __lt__(self, other) -> bool:
"""A logical ordering for TaskHandlerState:
@@ -383,7 +413,7 @@ class TaskHandlerState(StrEnum):
NotStarted < Initializing < Running < (Success, Error, Killed, Failed)
"""
if not isinstance(other, TaskHandlerState):
raise TypeError("cannot compare to non-TaskHandlerState")
raise TypeError('cannot compare to non-TaskHandlerState')
order = (self.NotStarted, self.Initializing, self.Running)
smaller = set()
for value in order:
@@ -395,11 +425,13 @@ class TaskHandlerState(StrEnum):
def __le__(self, other) -> bool:
# so that ((Success <= Error) is True)
return (self < other) or (self == other) or (self.finished and other.finished)
return ((self < other) or
(self == other) or
(self.finished and other.finished))
def __gt__(self, other) -> bool:
if not isinstance(other, TaskHandlerState):
raise TypeError("cannot compare to non-TaskHandlerState")
raise TypeError('cannot compare to non-TaskHandlerState')
order = (self.NotStarted, self.Initializing, self.Running)
smaller = set()
for value in order:
@@ -410,7 +442,9 @@ class TaskHandlerState(StrEnum):
def __ge__(self, other) -> bool:
# so that ((Success <= Error) is True)
return (self > other) or (self == other) or (self.finished and other.finished)
return ((self > other) or
(self == other) or
(self.finished and other.finished))
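The comparison overrides above implement the ordering spelled out in the __lt__ docstring, with every finished state (Success, Error, Killed, Failed) treated as interchangeable for <= and >=. A short demonstration of the intended behaviour, assuming the docstring is accurate:

# Illustrative only, per the docstring's ordering.
assert TaskHandlerState.NotStarted < TaskHandlerState.Initializing
assert TaskHandlerState.Running < TaskHandlerState.Success
assert TaskHandlerState.Success <= TaskHandlerState.Error       # both finished
assert not (TaskHandlerState.Success < TaskHandlerState.Error)  # strict < is False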
@property
def finished(self) -> bool:
@@ -429,7 +463,7 @@ class TaskTiming(dbtClassMixin):
@classmethod
def __pre_deserialize__(cls, data):
data = super().__pre_deserialize__(data)
for field_name in ("start", "end", "elapsed"):
for field_name in ('start', 'end', 'elapsed'):
if field_name not in data:
data[field_name] = None
return data
@@ -446,27 +480,27 @@ class TaskRow(TaskTiming):
@dataclass
@schema_version("remote-ps-result", 1)
@schema_version('remote-ps-result', 1)
class PSResult(RemoteResult):
rows: List[TaskRow]
class KillResultStatus(StrEnum):
Missing = "missing"
NotStarted = "not_started"
Killed = "killed"
Finished = "finished"
Missing = 'missing'
NotStarted = 'not_started'
Killed = 'killed'
Finished = 'finished'
@dataclass
@schema_version("remote-kill-result", 1)
@schema_version('remote-kill-result', 1)
class KillResult(RemoteResult):
state: KillResultStatus = KillResultStatus.Missing
logs: List[LogMessage] = field(default_factory=list)
@dataclass
@schema_version("remote-manifest-result", 1)
@schema_version('remote-manifest-result', 1)
class GetManifestResult(RemoteResult):
manifest: Optional[WritableManifest] = None
@@ -497,28 +531,29 @@ class PollResult(RemoteResult, TaskTiming):
@classmethod
def __pre_deserialize__(cls, data):
data = super().__pre_deserialize__(data)
for field_name in ("start", "end", "elapsed"):
for field_name in ('start', 'end', 'elapsed'):
if field_name not in data:
data[field_name] = None
return data
@dataclass
@schema_version("poll-remote-deps-result", 1)
@schema_version('poll-remote-deps-result', 1)
class PollRemoteEmptyCompleteResult(PollResult, RemoteResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success, TaskHandlerState.Failed),
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
generated_at: datetime = field(default_factory=datetime.utcnow)
@classmethod
def from_result(
cls: Type["PollRemoteEmptyCompleteResult"],
cls: Type['PollRemoteEmptyCompleteResult'],
base: RemoteDepsResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> "PollRemoteEmptyCompleteResult":
) -> 'PollRemoteEmptyCompleteResult':
return cls(
logs=logs,
tags=tags,
@@ -526,12 +561,12 @@ class PollRemoteEmptyCompleteResult(PollResult, RemoteResult):
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
generated_at=base.generated_at,
generated_at=base.generated_at
)
@dataclass
@schema_version("poll-remote-killed-result", 1)
@schema_version('poll-remote-killed-result', 1)
class PollKilledResult(PollResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Killed),
@@ -539,23 +574,24 @@ class PollKilledResult(PollResult):
@dataclass
@schema_version("poll-remote-execution-result", 1)
@schema_version('poll-remote-execution-result', 1)
class PollExecuteCompleteResult(
RemoteExecutionResult,
PollResult,
):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success, TaskHandlerState.Failed),
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type["PollExecuteCompleteResult"],
cls: Type['PollExecuteCompleteResult'],
base: RemoteExecutionResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> "PollExecuteCompleteResult":
) -> 'PollExecuteCompleteResult':
return cls(
results=base.results,
elapsed_time=base.elapsed_time,
@@ -570,23 +606,24 @@ class PollExecuteCompleteResult(
@dataclass
@schema_version("poll-remote-compile-result", 1)
@schema_version('poll-remote-compile-result', 1)
class PollCompileCompleteResult(
RemoteCompileResult,
PollResult,
):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success, TaskHandlerState.Failed),
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type["PollCompileCompleteResult"],
cls: Type['PollCompileCompleteResult'],
base: RemoteCompileResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> "PollCompileCompleteResult":
) -> 'PollCompileCompleteResult':
return cls(
raw_sql=base.raw_sql,
compiled_sql=base.compiled_sql,
@@ -598,28 +635,29 @@ class PollCompileCompleteResult(
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
generated_at=base.generated_at,
generated_at=base.generated_at
)
@dataclass
@schema_version("poll-remote-run-result", 1)
@schema_version('poll-remote-run-result', 1)
class PollRunCompleteResult(
RemoteRunResult,
PollResult,
):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success, TaskHandlerState.Failed),
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type["PollRunCompleteResult"],
cls: Type['PollRunCompleteResult'],
base: RemoteRunResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> "PollRunCompleteResult":
) -> 'PollRunCompleteResult':
return cls(
raw_sql=base.raw_sql,
compiled_sql=base.compiled_sql,
@@ -632,28 +670,29 @@ class PollRunCompleteResult(
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
generated_at=base.generated_at,
generated_at=base.generated_at
)
@dataclass
@schema_version("poll-remote-run-operation-result", 1)
@schema_version('poll-remote-run-operation-result', 1)
class PollRunOperationCompleteResult(
RemoteRunOperationResult,
PollResult,
):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success, TaskHandlerState.Failed),
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type["PollRunOperationCompleteResult"],
cls: Type['PollRunOperationCompleteResult'],
base: RemoteRunOperationResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> "PollRunOperationCompleteResult":
) -> 'PollRunOperationCompleteResult':
return cls(
success=base.success,
results=base.results,
@@ -669,20 +708,21 @@ class PollRunOperationCompleteResult(
@dataclass
@schema_version("poll-remote-catalog-result", 1)
@schema_version('poll-remote-catalog-result', 1)
class PollCatalogCompleteResult(RemoteCatalogResults, PollResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success, TaskHandlerState.Failed),
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type["PollCatalogCompleteResult"],
cls: Type['PollCatalogCompleteResult'],
base: RemoteCatalogResults,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> "PollCatalogCompleteResult":
) -> 'PollCatalogCompleteResult':
return cls(
nodes=base.nodes,
sources=base.sources,
@@ -699,26 +739,27 @@ class PollCatalogCompleteResult(RemoteCatalogResults, PollResult):
@dataclass
@schema_version("poll-remote-in-progress-result", 1)
@schema_version('poll-remote-in-progress-result', 1)
class PollInProgressResult(PollResult):
pass
@dataclass
@schema_version("poll-remote-get-manifest-result", 1)
@schema_version('poll-remote-get-manifest-result', 1)
class PollGetManifestResult(GetManifestResult, PollResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success, TaskHandlerState.Failed),
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type["PollGetManifestResult"],
cls: Type['PollGetManifestResult'],
base: GetManifestResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> "PollGetManifestResult":
) -> 'PollGetManifestResult':
return cls(
manifest=base.manifest,
logs=logs,
@@ -731,20 +772,21 @@ class PollGetManifestResult(GetManifestResult, PollResult):
@dataclass
@schema_version("poll-remote-freshness-result", 1)
@schema_version('poll-remote-freshness-result', 1)
class PollFreshnessResult(RemoteFreshnessResult, PollResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success, TaskHandlerState.Failed),
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type["PollFreshnessResult"],
cls: Type['PollFreshnessResult'],
base: RemoteFreshnessResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> "PollFreshnessResult":
) -> 'PollFreshnessResult':
return cls(
logs=logs,
tags=tags,
@@ -757,19 +799,18 @@ class PollFreshnessResult(RemoteFreshnessResult, PollResult):
elapsed_time=base.elapsed_time,
)
# Manifest parsing types
class ManifestStatus(StrEnum):
Init = "init"
Compiling = "compiling"
Ready = "ready"
Error = "error"
Init = 'init'
Compiling = 'compiling'
Ready = 'ready'
Error = 'error'
@dataclass
@schema_version("remote-status-result", 1)
@schema_version('remote-status-result', 1)
class LastParse(RemoteResult):
state: ManifestStatus = ManifestStatus.Init
logs: List[LogMessage] = field(default_factory=list)

View File

@@ -8,7 +8,7 @@ from typing import List, Dict, Any, Union
class SelectorDefinition(dbtClassMixin):
name: str
definition: Union[str, Dict[str, Any]]
description: str = ""
description: str = ''
@dataclass

View File

@@ -9,7 +9,7 @@ class PreviousState:
self.path: Path = path
self.manifest: Optional[WritableManifest] = None
manifest_path = self.path / "manifest.json"
manifest_path = self.path / 'manifest.json'
if manifest_path.exists() and manifest_path.is_file():
try:
self.manifest = WritableManifest.read(str(manifest_path))

View File

@@ -1,7 +1,9 @@
import dataclasses
import os
from datetime import datetime
from typing import List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional
from typing import (
List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional
)
from dbt.clients.system import write_json, read_json
from dbt.exceptions import (
@@ -12,7 +14,6 @@ from dbt.version import __version__
from dbt.tracking import get_invocation_id
from dbt.dataclass_schema import dbtClassMixin
MacroKey = Tuple[str, str]
SourceKey = Tuple[str, str]
@@ -55,7 +56,9 @@ class Mergeable(Replaceable):
class Writable:
def write(self, path: str):
write_json(path, self.to_dict(omit_none=False)) # type: ignore
write_json(
path, self.to_dict(omit_none=False) # type: ignore
)
class AdditionalPropertiesMixin:
@@ -64,7 +67,6 @@ class AdditionalPropertiesMixin:
The underlying class definition must include a type definition for a field
named '_extra' that is of type `Dict[str, Any]`.
"""
ADDITIONAL_PROPERTIES = True
# This takes attributes in the dictionary that are
@@ -83,10 +85,10 @@ class AdditionalPropertiesMixin:
cls_keys = cls._get_field_names()
new_dict = {}
for key, value in data.items():
if key not in cls_keys and key != "_extra":
if "_extra" not in new_dict:
new_dict["_extra"] = {}
new_dict["_extra"][key] = value
if key not in cls_keys and key != '_extra':
if '_extra' not in new_dict:
new_dict['_extra'] = {}
new_dict['_extra'][key] = value
else:
new_dict[key] = value
data = new_dict
@@ -96,8 +98,8 @@ class AdditionalPropertiesMixin:
def __post_serialize__(self, dct):
data = super().__post_serialize__(dct)
data.update(self.extra)
if "_extra" in data:
del data["_extra"]
if '_extra' in data:
del data['_extra']
return data
def replace(self, **kwargs):
@@ -123,8 +125,8 @@ class Readable:
return cls.from_dict(data) # type: ignore
BASE_SCHEMAS_URL = "https://schemas.getdbt.com/"
SCHEMA_PATH = "dbt/{name}/v{version}.json"
BASE_SCHEMAS_URL = 'https://schemas.getdbt.com/'
SCHEMA_PATH = 'dbt/{name}/v{version}.json'
@dataclasses.dataclass
@@ -134,22 +136,24 @@ class SchemaVersion:
@property
def path(self) -> str:
return SCHEMA_PATH.format(name=self.name, version=self.version)
return SCHEMA_PATH.format(
name=self.name,
version=self.version
)
def __str__(self) -> str:
return BASE_SCHEMAS_URL + self.path
SCHEMA_VERSION_KEY = "dbt_schema_version"
SCHEMA_VERSION_KEY = 'dbt_schema_version'
METADATA_ENV_PREFIX = "DBT_ENV_CUSTOM_ENV_"
METADATA_ENV_PREFIX = 'DBT_ENV_CUSTOM_ENV_'
def get_metadata_env() -> Dict[str, str]:
return {
k[len(METADATA_ENV_PREFIX) :]: v
for k, v in os.environ.items()
k[len(METADATA_ENV_PREFIX):]: v for k, v in os.environ.items()
if k.startswith(METADATA_ENV_PREFIX)
}
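A minimal standalone sketch of the metadata-environment convention used above: any variable prefixed with DBT_ENV_CUSTOM_ENV_ is collected with the prefix stripped from the key (variable names below are hypothetical, illustration only):

import os

METADATA_ENV_PREFIX = 'DBT_ENV_CUSTOM_ENV_'

def get_metadata_env():
    # keep only prefixed variables, dropping the prefix from the key
    return {
        k[len(METADATA_ENV_PREFIX):]: v for k, v in os.environ.items()
        if k.startswith(METADATA_ENV_PREFIX)
    }

os.environ['DBT_ENV_CUSTOM_ENV_RUN_ID'] = '42'        # hypothetical
os.environ['DBT_ENV_CUSTOM_ENV_TEAM'] = 'analytics'   # hypothetical
print(get_metadata_env())  # {'RUN_ID': '42', 'TEAM': 'analytics'}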
@@ -158,8 +162,12 @@ def get_metadata_env() -> Dict[str, str]:
class BaseArtifactMetadata(dbtClassMixin):
dbt_schema_version: str
dbt_version: str = __version__
generated_at: datetime = dataclasses.field(default_factory=datetime.utcnow)
invocation_id: Optional[str] = dataclasses.field(default_factory=get_invocation_id)
generated_at: datetime = dataclasses.field(
default_factory=datetime.utcnow
)
invocation_id: Optional[str] = dataclasses.field(
default_factory=get_invocation_id
)
env: Dict[str, str] = dataclasses.field(default_factory=get_metadata_env)
@@ -170,7 +178,6 @@ def schema_version(name: str, version: int):
version=version,
)
return cls
return inner
@@ -182,11 +189,11 @@ class VersionedSchema(dbtClassMixin):
def json_schema(cls, embeddable: bool = False) -> Dict[str, Any]:
result = super().json_schema(embeddable=embeddable)
if not embeddable:
result["$id"] = str(cls.dbt_schema_version)
result['$id'] = str(cls.dbt_schema_version)
return result
T = TypeVar("T", bound="ArtifactMixin")
T = TypeVar('T', bound='ArtifactMixin')
# metadata should really be a Generic[T_M] where T_M is a TypeVar bound to
@@ -200,4 +207,6 @@ class ArtifactMixin(VersionedSchema, Writable, Readable):
def validate(cls, data):
super().validate(data)
if cls.dbt_schema_version is None:
raise InternalException("Cannot call from_dict with no schema version!")
raise InternalException(
'Cannot call from_dict with no schema version!'
)
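A standalone sketch of how a schema_version decorator like the one above can attach a SchemaVersion and compose the public schema URL from BASE_SCHEMAS_URL and SCHEMA_PATH (the artifact name below is made up; the attribute assignment in inner() is an assumption based on the surrounding lines):

import dataclasses

BASE_SCHEMAS_URL = 'https://schemas.getdbt.com/'
SCHEMA_PATH = 'dbt/{name}/v{version}.json'

@dataclasses.dataclass
class SchemaVersion:
    name: str
    version: int

    @property
    def path(self) -> str:
        return SCHEMA_PATH.format(name=self.name, version=self.version)

    def __str__(self) -> str:
        return BASE_SCHEMAS_URL + self.path

def schema_version(name: str, version: int):
    # attach the schema identifier to the decorated class
    def inner(cls):
        cls.dbt_schema_version = SchemaVersion(name=name, version=version)
        return cls
    return inner

@schema_version('example-result', 1)  # hypothetical schema name
class ExampleResult:
    pass

print(str(ExampleResult.dbt_schema_version))
# https://schemas.getdbt.com/dbt/example-result/v1.json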

View File

@@ -1,7 +1,5 @@
from typing import (
Type,
ClassVar,
cast,
Type, ClassVar, cast,
)
import re
from dataclasses import fields
@@ -13,7 +11,9 @@ from hologram import JsonSchemaMixin, FieldEncoder, ValidationError
# type: ignore
from mashumaro import DataClassDictMixin
from mashumaro.config import TO_DICT_ADD_OMIT_NONE_FLAG, BaseConfig as MashBaseConfig
from mashumaro.config import (
TO_DICT_ADD_OMIT_NONE_FLAG, BaseConfig as MashBaseConfig
)
from mashumaro.types import SerializableType, SerializationStrategy
@@ -26,7 +26,9 @@ class DateTimeSerialization(SerializationStrategy):
return out
def deserialize(self, value):
return value if isinstance(value, datetime) else parse(cast(str, value))
return (
value if isinstance(value, datetime) else parse(cast(str, value))
)
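The deserialize branch above passes non-datetime values through a string parser; a small standalone sketch of that round-trip, assuming python-dateutil is installed:

from datetime import datetime
from dateutil.parser import parse

def deserialize(value):
    # already a datetime -> return as-is; otherwise parse the string form
    return value if isinstance(value, datetime) else parse(value)

print(deserialize('2021-09-13T11:43:17'))   # datetime(2021, 9, 13, 11, 43, 17)
print(deserialize(datetime(2021, 9, 13)))   # returned unchanged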
# This class pulls in both JsonSchemaMixin from Hologram and
@@ -36,8 +38,8 @@ class DateTimeSerialization(SerializationStrategy):
# come from Hologram.
class dbtClassMixin(DataClassDictMixin, JsonSchemaMixin):
"""Mixin which adds methods to generate a JSON schema and
convert to and from JSON encodable dicts with validation
against the schema
convert to and from JSON encodable dicts with validation
against the schema
"""
class Config(MashBaseConfig):
@@ -58,8 +60,8 @@ class dbtClassMixin(DataClassDictMixin, JsonSchemaMixin):
if self._hyphenated:
new_dict = {}
for key in dct:
if "_" in key:
new_key = key.replace("_", "-")
if '_' in key:
new_key = key.replace('_', '-')
new_dict[new_key] = dct[key]
else:
new_dict[key] = dct[key]
@@ -71,11 +73,13 @@ class dbtClassMixin(DataClassDictMixin, JsonSchemaMixin):
# performing the conversion to a dict
@classmethod
def __pre_deserialize__(cls, data):
if cls._hyphenated:
# `data` might not be a dict, e.g. for `query_comment`, which accepts
# a dict or a string; only snake-case for dict values.
if cls._hyphenated and isinstance(data, dict):
new_dict = {}
for key in data:
if "-" in key:
new_key = key.replace("-", "_")
if '-' in key:
new_key = key.replace('-', '_')
new_dict[new_key] = data[key]
else:
new_dict[key] = data[key]
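A standalone sketch of the hyphen/underscore key conversion handled above for _hyphenated classes: serialization turns snake_case keys into hyphenated ones, deserialization reverses it, and non-dict inputs are left alone (keys below are illustrative, stdlib only):

def to_wire(dct):
    # __post_serialize__ direction: snake_case keys become hyphenated
    return {k.replace('_', '-') if '_' in k else k: v for k, v in dct.items()}

def from_wire(data):
    # __pre_deserialize__ direction: only convert when given a dict, since
    # some fields (e.g. query_comment) also accept a plain string
    if not isinstance(data, dict):
        return data
    return {k.replace('-', '_') if '-' in k else k: v for k, v in data.items()}

assert to_wire({'post_hook': 'x'}) == {'post-hook': 'x'}
assert from_wire({'post-hook': 'x'}) == {'post_hook': 'x'}
assert from_wire('a plain string') == 'a plain string'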
@@ -87,16 +91,16 @@ class dbtClassMixin(DataClassDictMixin, JsonSchemaMixin):
# hologram and in mashumaro.
def _local_to_dict(self, **kwargs):
args = {}
if "omit_none" in kwargs:
args["omit_none"] = kwargs["omit_none"]
if 'omit_none' in kwargs:
args['omit_none'] = kwargs['omit_none']
return self.to_dict(**args)
class ValidatedStringMixin(str, SerializableType):
ValidationRegex = ""
ValidationRegex = ''
@classmethod
def _deserialize(cls, value: str) -> "ValidatedStringMixin":
def _deserialize(cls, value: str) -> 'ValidatedStringMixin':
cls.validate(value)
return ValidatedStringMixin(value)

View File

@@ -14,31 +14,53 @@ class DBTDeprecation:
def name(self) -> str:
if self._name is not None:
return self._name
raise NotImplementedError("name not implemented for {}".format(self))
raise NotImplementedError(
'name not implemented for {}'.format(self)
)
def track_deprecation_warn(self) -> None:
if dbt.tracking.active_user is not None:
dbt.tracking.track_deprecation_warn({"deprecation_name": self.name})
dbt.tracking.track_deprecation_warn({
"deprecation_name": self.name
})
@property
def description(self) -> str:
if self._description is not None:
return self._description
raise NotImplementedError("description not implemented for {}".format(self))
raise NotImplementedError(
'description not implemented for {}'.format(self)
)
def show(self, *args, **kwargs) -> None:
if self.name not in active_deprecations:
desc = self.description.format(**kwargs)
msg = ui.line_wrap_message(desc, prefix="* Deprecation Warning: ")
msg = ui.line_wrap_message(
desc, prefix='* Deprecation Warning: '
)
dbt.exceptions.warn_or_error(msg)
self.track_deprecation_warn()
active_deprecations.add(self.name)
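A standalone sketch of the show() flow above: each deprecation fires at most once per process, keyed by its name (simplified to a print, with no tracking or line wrapping; the class and argument names are illustrative):

active_deprecations = set()

class Deprecation:
    name = 'example-deprecation'          # hypothetical name
    description = 'Feature {feature} is deprecated.'

    def show(self, **kwargs):
        if self.name not in active_deprecations:
            print('* Deprecation Warning: ' + self.description.format(**kwargs))
            active_deprecations.add(self.name)

dep = Deprecation()
dep.show(feature='adapter_macro')  # prints the warning
dep.show(feature='adapter_macro')  # silent: already shown this run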
class MaterializationReturnDeprecation(DBTDeprecation):
_name = "materialization-return"
class DispatchPackagesDeprecation(DBTDeprecation):
_name = 'dispatch-packages'
_description = '''\
The "packages" argument of adapter.dispatch() has been deprecated.
Use the "macro_namespace" argument instead.
_description = """\
Raised during dispatch for: {macro_name}
For more information, see:
https://docs.getdbt.com/reference/dbt-jinja-functions/dispatch
'''
class MaterializationReturnDeprecation(DBTDeprecation):
_name = 'materialization-return'
_description = '''\
The materialization ("{materialization}") did not explicitly return a list
of relations to add to the cache. By default the target relation will be
added, but this behavior will be removed in a future version of dbt.
@@ -48,22 +70,22 @@ class MaterializationReturnDeprecation(DBTDeprecation):
For more information, see:
https://docs.getdbt.com/v0.15/docs/creating-new-materializations#section-6-returning-relations
"""
'''
class NotADictionaryDeprecation(DBTDeprecation):
_name = "not-a-dictionary"
_name = 'not-a-dictionary'
_description = """\
_description = '''\
The object ("{obj}") was used as a dictionary. In a future version of dbt
this capability will be removed from objects of this type.
"""
'''
class ColumnQuotingDeprecation(DBTDeprecation):
_name = "column-quoting-unset"
_name = 'column-quoting-unset'
_description = """\
_description = '''\
The quote_columns parameter was not set for seeds, so the default value of
False was chosen. The default will change to True in a future release.
@@ -72,13 +94,13 @@ class ColumnQuotingDeprecation(DBTDeprecation):
For more information, see:
https://docs.getdbt.com/v0.15/docs/seeds#section-specify-column-quoting
"""
'''
class ModelsKeyNonModelDeprecation(DBTDeprecation):
_name = "models-key-mismatch"
_name = 'models-key-mismatch'
_description = """\
_description = '''\
"{node.name}" is a {node.resource_type} node, but it is specified in
the {patch.yaml_key} section of {patch.original_file_path}.
@@ -88,25 +110,33 @@ class ModelsKeyNonModelDeprecation(DBTDeprecation):
the {expected_key} key instead.
This warning will become an error in a future release.
"""
'''
class ExecuteMacrosReleaseDeprecation(DBTDeprecation):
_name = "execute-macro-release"
_description = """\
_name = 'execute-macro-release'
_description = '''\
The "release" argument to execute_macro is now ignored, and will be removed
in a future release of dbt. At that time, providing a `release` argument
will result in an error.
"""
'''
class AdapterMacroDeprecation(DBTDeprecation):
_name = "adapter-macro"
_description = """\
_name = 'adapter-macro'
_description = '''\
The "adapter_macro" macro has been deprecated. Instead, use the
`adapter.dispatch` method to find a macro and call the result.
adapter_macro was called for: {macro_name}
"""
'''
class PackageRedirectDeprecation(DBTDeprecation):
_name = 'package-redirect'
_description = '''\
The `{old_name}` package is deprecated in favor of `{new_name}`. Please update
your `packages.yml` configuration to use `{new_name}` instead.
'''
_adapter_renamed_description = """\
@@ -120,11 +150,11 @@ Documentation for {new_name} can be found here:
def renamed_method(old_name: str, new_name: str):
class AdapterDeprecationWarning(DBTDeprecation):
_name = "adapter:{}".format(old_name)
_description = _adapter_renamed_description.format(
old_name=old_name, new_name=new_name
)
_name = 'adapter:{}'.format(old_name)
_description = _adapter_renamed_description.format(old_name=old_name,
new_name=new_name)
dep = AdapterDeprecationWarning()
deprecations_list.append(dep)
@@ -134,7 +164,9 @@ def renamed_method(old_name: str, new_name: str):
def warn(name, *args, **kwargs):
if name not in deprecations:
# this should (hopefully) never happen
raise RuntimeError("Error showing deprecation warning: {}".format(name))
raise RuntimeError(
"Error showing deprecation warning: {}".format(name)
)
deprecations[name].show(*args, **kwargs)
@@ -145,15 +177,19 @@ def warn(name, *args, **kwargs):
active_deprecations: Set[str] = set()
deprecations_list: List[DBTDeprecation] = [
DispatchPackagesDeprecation(),
MaterializationReturnDeprecation(),
NotADictionaryDeprecation(),
ColumnQuotingDeprecation(),
ModelsKeyNonModelDeprecation(),
ExecuteMacrosReleaseDeprecation(),
AdapterMacroDeprecation(),
PackageRedirectDeprecation()
]
deprecations: Dict[str, DBTDeprecation] = {d.name: d for d in deprecations_list}
deprecations: Dict[str, DBTDeprecation] = {
d.name: d for d in deprecations_list
}
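The registry built above maps each deprecation's name to its instance, so callers trigger one by name through warn(); a standalone sketch of that lookup (the registered name is real in this diff, the package names passed in are only illustrative):

class Deprecation:
    def __init__(self, name, description):
        self.name = name
        self.description = description

    def show(self, **kwargs):
        print(self.description.format(**kwargs))

deprecations_list = [
    Deprecation('package-redirect',
                'The `{old_name}` package is deprecated in favor of `{new_name}`.'),
]
deprecations = {d.name: d for d in deprecations_list}

def warn(name, *args, **kwargs):
    if name not in deprecations:
        # this should (hopefully) never happen
        raise RuntimeError('Error showing deprecation warning: {}'.format(name))
    deprecations[name].show(*args, **kwargs)

warn('package-redirect', old_name='fishtown-analytics/dbt_utils',
     new_name='dbt-labs/dbt_utils')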
def reset_deprecations():

View File

@@ -22,12 +22,12 @@ def downloads_directory():
# the user might have set an environment variable. Set it to that, and do
# not remove it when finished.
if DOWNLOADS_PATH is None:
DOWNLOADS_PATH = os.getenv("DBT_DOWNLOADS_DIR")
DOWNLOADS_PATH = os.getenv('DBT_DOWNLOADS_DIR')
remove_downloads = False
# if we are making a per-run temp directory, remove it at the end of
# successful runs
if DOWNLOADS_PATH is None:
DOWNLOADS_PATH = tempfile.mkdtemp(prefix="dbt-downloads-")
DOWNLOADS_PATH = tempfile.mkdtemp(prefix='dbt-downloads-')
remove_downloads = True
system.make_directory(DOWNLOADS_PATH)
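A standalone sketch of the download-path resolution above: prefer the DBT_DOWNLOADS_DIR environment variable, otherwise fall back to a per-run temp directory that is removed after a successful run (os.makedirs stands in for dbt's system.make_directory here):

import os
import tempfile

def resolve_downloads_path():
    path = os.getenv('DBT_DOWNLOADS_DIR')
    remove_after = False
    if path is None:
        # no override: use a throwaway per-run directory
        path = tempfile.mkdtemp(prefix='dbt-downloads-')
        remove_after = True
    os.makedirs(path, exist_ok=True)
    return path, remove_after

path, remove_after = resolve_downloads_path()
print(path, remove_after)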
@@ -62,7 +62,7 @@ class PinnedPackage(BasePackage):
if not version:
return self.name
return "{}@{}".format(self.name, version)
return '{}@{}'.format(self.name, version)
@abc.abstractmethod
def get_version(self) -> Optional[str]:
@@ -93,9 +93,12 @@ class PinnedPackage(BasePackage):
dest_dirname = self.get_project_name(project, renderer)
return os.path.join(project.modules_path, dest_dirname)
def get_subdirectory(self):
return None
SomePinned = TypeVar("SomePinned", bound=PinnedPackage)
SomeUnpinned = TypeVar("SomeUnpinned", bound="UnpinnedPackage")
SomePinned = TypeVar('SomePinned', bound=PinnedPackage)
SomeUnpinned = TypeVar('SomeUnpinned', bound='UnpinnedPackage')
class UnpinnedPackage(Generic[SomePinned], BasePackage):

View File

@@ -1,6 +1,6 @@
import os
import hashlib
from typing import List
from typing import List, Optional
from dbt.clients import git, system
from dbt.config import Project
@@ -8,16 +8,18 @@ from dbt.contracts.project import (
ProjectPackageMetadata,
GitPackage,
)
from dbt.deps import PinnedPackage, UnpinnedPackage, get_downloads_path
from dbt.exceptions import ExecutableError, warn_or_error, raise_dependency_error
from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path
from dbt.exceptions import (
ExecutableError, warn_or_error, raise_dependency_error
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt import ui
PIN_PACKAGE_URL = "https://docs.getdbt.com/docs/package-management#section-specifying-package-versions" # noqa
PIN_PACKAGE_URL = 'https://docs.getdbt.com/docs/package-management#section-specifying-package-versions' # noqa
def md5sum(s: str):
return hashlib.md5(s.encode("latin-1")).hexdigest()
return hashlib.md5(s.encode('latin-1')).hexdigest()
class GitPackageMixin:
@@ -30,29 +32,39 @@ class GitPackageMixin:
return self.git
def source_type(self) -> str:
return "git"
return 'git'
class GitPinnedPackage(GitPackageMixin, PinnedPackage):
def __init__(self, git: str, revision: str, warn_unpinned: bool = True) -> None:
def __init__(
self,
git: str,
revision: str,
warn_unpinned: bool = True,
subdirectory: Optional[str] = None,
) -> None:
super().__init__(git)
self.revision = revision
self.warn_unpinned = warn_unpinned
self.subdirectory = subdirectory
self._checkout_name = md5sum(self.git)
def get_version(self):
return self.revision
def get_subdirectory(self):
return self.subdirectory
def nice_version_name(self):
if self.revision == "HEAD":
return "HEAD (default branch)"
if self.revision == 'HEAD':
return 'HEAD (default revision)'
else:
return "revision {}".format(self.revision)
return 'revision {}'.format(self.revision)
def unpinned_msg(self):
if self.revision == "HEAD":
return "not pinned, using HEAD (default branch)"
elif self.revision in ("main", "master"):
if self.revision == 'HEAD':
return 'not pinned, using HEAD (default branch)'
elif self.revision in ('main', 'master'):
return f'pinned to the "{self.revision}" branch'
else:
return None
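A standalone sketch of the revision-message logic above: HEAD and branch-like revisions produce an "unpinned" style message, anything else is treated as pinned (class trimmed down for illustration; the repository URL is hypothetical):

class GitPin:
    def __init__(self, git, revision='HEAD'):
        self.git = git
        self.revision = revision

    def nice_version_name(self):
        if self.revision == 'HEAD':
            return 'HEAD (default revision)'
        return 'revision {}'.format(self.revision)

    def unpinned_msg(self):
        # None means "pinned well enough", anything else feeds the warning
        if self.revision == 'HEAD':
            return 'not pinned, using HEAD (default branch)'
        elif self.revision in ('main', 'master'):
            return f'pinned to the "{self.revision}" branch'
        return None

pin = GitPin('https://github.com/dbt-labs/dbt-utils.git', 'main')
print(pin.nice_version_name())   # revision main
print(pin.unpinned_msg())        # pinned to the "main" branch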
@@ -64,17 +76,15 @@ class GitPinnedPackage(GitPackageMixin, PinnedPackage):
the path to the checked out directory."""
try:
dir_ = git.clone_and_checkout(
self.git,
get_downloads_path(),
branch=self.revision,
dirname=self._checkout_name,
self.git, get_downloads_path(), revision=self.revision,
dirname=self._checkout_name, subdirectory=self.subdirectory
)
except ExecutableError as exc:
if exc.cmd and exc.cmd[0] == "git":
if exc.cmd and exc.cmd[0] == 'git':
logger.error(
"Make sure git is installed on your machine. More "
"information: "
"https://docs.getdbt.com/docs/package-management"
'Make sure git is installed on your machine. More '
'information: '
'https://docs.getdbt.com/docs/package-management'
)
raise
return os.path.join(get_downloads_path(), dir_)
@@ -85,10 +95,9 @@ class GitPinnedPackage(GitPackageMixin, PinnedPackage):
if self.unpinned_msg() and self.warn_unpinned:
warn_or_error(
'The git package "{}" \n\tis {}.\n\tThis can introduce '
"breaking changes into your project without warning!\n\nSee {}".format(
self.git, self.unpinned_msg(), PIN_PACKAGE_URL
),
log_fmt=ui.yellow("WARNING: {}"),
'breaking changes into your project without warning!\n\nSee {}'
.format(self.git, self.unpinned_msg(), PIN_PACKAGE_URL),
log_fmt=ui.yellow('WARNING: {}')
)
loaded = Project.from_project_root(path, renderer)
return ProjectPackageMetadata.from_project(loaded)
@@ -106,46 +115,57 @@ class GitPinnedPackage(GitPackageMixin, PinnedPackage):
class GitUnpinnedPackage(GitPackageMixin, UnpinnedPackage[GitPinnedPackage]):
def __init__(
self, git: str, revisions: List[str], warn_unpinned: bool = True
self,
git: str,
revisions: List[str],
warn_unpinned: bool = True,
subdirectory: Optional[str] = None,
) -> None:
super().__init__(git)
self.revisions = revisions
self.warn_unpinned = warn_unpinned
self.subdirectory = subdirectory
@classmethod
def from_contract(cls, contract: GitPackage) -> "GitUnpinnedPackage":
def from_contract(
cls, contract: GitPackage
) -> 'GitUnpinnedPackage':
revisions = contract.get_revisions()
# we want to map None -> True
warn_unpinned = contract.warn_unpinned is not False
return cls(git=contract.git, revisions=revisions, warn_unpinned=warn_unpinned)
return cls(git=contract.git, revisions=revisions,
warn_unpinned=warn_unpinned, subdirectory=contract.subdirectory)
def all_names(self) -> List[str]:
if self.git.endswith(".git"):
if self.git.endswith('.git'):
other = self.git[:-4]
else:
other = self.git + ".git"
other = self.git + '.git'
return [self.git, other]
def incorporate(self, other: "GitUnpinnedPackage") -> "GitUnpinnedPackage":
def incorporate(
self, other: 'GitUnpinnedPackage'
) -> 'GitUnpinnedPackage':
warn_unpinned = self.warn_unpinned and other.warn_unpinned
return GitUnpinnedPackage(
git=self.git,
revisions=self.revisions + other.revisions,
warn_unpinned=warn_unpinned,
subdirectory=self.subdirectory,
)
def resolved(self) -> GitPinnedPackage:
requested = set(self.revisions)
if len(requested) == 0:
requested = {"HEAD"}
requested = {'HEAD'}
elif len(requested) > 1:
raise_dependency_error(
"git dependencies should contain exactly one version. "
"{} contains: {}".format(self.git, requested)
)
'git dependencies should contain exactly one version. '
'{} contains: {}'.format(self.git, requested))
return GitPinnedPackage(
git=self.git, revision=requested.pop(), warn_unpinned=self.warn_unpinned
git=self.git, revision=requested.pop(),
warn_unpinned=self.warn_unpinned, subdirectory=self.subdirectory
)
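A standalone sketch of the resolved() behaviour above: duplicate git revisions collapse to one, an empty set defaults to HEAD, and more than one distinct revision is an error (simplified, no dbt imports; ValueError stands in for raise_dependency_error):

def resolve_revision(revisions):
    requested = set(revisions)
    if len(requested) == 0:
        requested = {'HEAD'}
    elif len(requested) > 1:
        raise ValueError(
            'git dependencies should contain exactly one version. '
            'found: {}'.format(requested))
    return requested.pop()

print(resolve_revision([]))                  # HEAD
print(resolve_revision(['0.7.3', '0.7.3']))  # 0.7.3
# resolve_revision(['0.7.3', 'main'])        -> ValueError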

View File

@@ -1,7 +1,7 @@
import shutil
from dbt.clients import system
from dbt.deps import PinnedPackage, UnpinnedPackage
from dbt.deps.base import PinnedPackage, UnpinnedPackage
from dbt.contracts.project import (
ProjectPackageMetadata,
LocalPackage,
@@ -19,7 +19,7 @@ class LocalPackageMixin:
return self.local
def source_type(self):
return "local"
return 'local'
class LocalPinnedPackage(LocalPackageMixin, PinnedPackage):
@@ -30,7 +30,7 @@ class LocalPinnedPackage(LocalPackageMixin, PinnedPackage):
return None
def nice_version_name(self):
return "<local @ {}>".format(self.local)
return '<local @ {}>'.format(self.local)
def resolve_path(self, project):
return system.resolve_path_from_base(
@@ -39,7 +39,9 @@ class LocalPinnedPackage(LocalPackageMixin, PinnedPackage):
)
def _fetch_metadata(self, project, renderer):
loaded = project.from_project_root(self.resolve_path(project), renderer)
loaded = project.from_project_root(
self.resolve_path(project), renderer
)
return ProjectPackageMetadata.from_project(loaded)
def install(self, project, renderer):
@@ -55,22 +57,27 @@ class LocalPinnedPackage(LocalPackageMixin, PinnedPackage):
system.remove_file(dest_path)
if can_create_symlink:
logger.debug(" Creating symlink to local dependency.")
logger.debug(' Creating symlink to local dependency.')
system.make_symlink(src_path, dest_path)
else:
logger.debug(
" Symlinks are not available on this " "OS, copying dependency."
)
logger.debug(' Symlinks are not available on this '
'OS, copying dependency.')
shutil.copytree(src_path, dest_path)
class LocalUnpinnedPackage(LocalPackageMixin, UnpinnedPackage[LocalPinnedPackage]):
class LocalUnpinnedPackage(
LocalPackageMixin, UnpinnedPackage[LocalPinnedPackage]
):
@classmethod
def from_contract(cls, contract: LocalPackage) -> "LocalUnpinnedPackage":
def from_contract(
cls, contract: LocalPackage
) -> 'LocalUnpinnedPackage':
return cls(local=contract.local)
def incorporate(self, other: "LocalUnpinnedPackage") -> "LocalUnpinnedPackage":
def incorporate(
self, other: 'LocalUnpinnedPackage'
) -> 'LocalUnpinnedPackage':
return LocalUnpinnedPackage(local=self.local)
def resolved(self) -> LocalPinnedPackage:

View File

@@ -7,7 +7,7 @@ from dbt.contracts.project import (
RegistryPackageMetadata,
RegistryPackage,
)
from dbt.deps import PinnedPackage, UnpinnedPackage, get_downloads_path
from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path
from dbt.exceptions import (
package_version_not_found,
VersionsNotCompatibleException,
@@ -26,26 +26,33 @@ class RegistryPackageMixin:
return self.package
def source_type(self) -> str:
return "hub"
return 'hub'
class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
def __init__(self, package: str, version: str) -> None:
def __init__(self,
package: str,
version: str,
version_latest: str) -> None:
super().__init__(package)
self.version = version
self.version_latest = version_latest
@property
def name(self):
return self.package
def source_type(self):
return "hub"
return 'hub'
def get_version(self):
return self.version
def get_version_latest(self):
return self.version_latest
def nice_version_name(self):
return "version {}".format(self.version)
return 'version {}'.format(self.version)
def _fetch_metadata(self, project, renderer) -> RegistryPackageMetadata:
dct = registry.package_version(self.package, self.version)
@@ -54,12 +61,14 @@ class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
def install(self, project, renderer):
metadata = self.fetch_metadata(project, renderer)
tar_name = "{}.{}.tar.gz".format(self.package, self.version)
tar_path = os.path.realpath(os.path.join(get_downloads_path(), tar_name))
tar_name = '{}.{}.tar.gz'.format(self.package, self.version)
tar_path = os.path.realpath(
os.path.join(get_downloads_path(), tar_name)
)
system.make_directory(os.path.dirname(tar_path))
download_url = metadata.downloads.tarball
system.download(download_url, tar_path)
system.download_with_retries(download_url, tar_path)
deps_path = project.modules_path
package_name = self.get_project_name(project, renderer)
system.untar_package(tar_path, deps_path, package_name)
@@ -68,9 +77,15 @@ class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
class RegistryUnpinnedPackage(
RegistryPackageMixin, UnpinnedPackage[RegistryPinnedPackage]
):
def __init__(self, package: str, versions: List[semver.VersionSpecifier]) -> None:
def __init__(
self,
package: str,
versions: List[semver.VersionSpecifier],
install_prerelease: bool
) -> None:
super().__init__(package)
self.versions = versions
self.install_prerelease = install_prerelease
def _check_in_index(self):
index = registry.index_cached()
@@ -78,17 +93,27 @@ class RegistryUnpinnedPackage(
package_not_found(self.package)
@classmethod
def from_contract(cls, contract: RegistryPackage) -> "RegistryUnpinnedPackage":
def from_contract(
cls, contract: RegistryPackage
) -> 'RegistryUnpinnedPackage':
raw_version = contract.get_versions()
versions = [semver.VersionSpecifier.from_version_string(v) for v in raw_version]
return cls(package=contract.package, versions=versions)
versions = [
semver.VersionSpecifier.from_version_string(v)
for v in raw_version
]
return cls(
package=contract.package,
versions=versions,
install_prerelease=contract.install_prerelease
)
def incorporate(
self, other: "RegistryUnpinnedPackage"
) -> "RegistryUnpinnedPackage":
self, other: 'RegistryUnpinnedPackage'
) -> 'RegistryUnpinnedPackage':
return RegistryUnpinnedPackage(
package=self.package,
install_prerelease=self.install_prerelease,
versions=self.versions + other.versions,
)
@@ -97,16 +122,23 @@ class RegistryUnpinnedPackage(
try:
range_ = semver.reduce_versions(*self.versions)
except VersionsNotCompatibleException as e:
new_msg = "Version error for package {}: {}".format(self.name, e)
new_msg = ('Version error for package {}: {}'
.format(self.name, e))
raise DependencyException(new_msg) from e
available = registry.get_available_versions(self.package)
installable = semver.filter_installable(
available,
self.install_prerelease
)
available_latest = installable[-1]
# for now, pick a version and then recurse. later on,
# we'll probably want to traverse multiple options
# so we can match packages. not going to make a difference
# right now.
target = semver.resolve_to_specific_version(range_, available)
target = semver.resolve_to_specific_version(range_, installable)
if not target:
package_version_not_found(self.package, range_, available)
return RegistryPinnedPackage(package=self.package, version=target)
package_version_not_found(self.package, range_, installable)
return RegistryPinnedPackage(package=self.package, version=target,
version_latest=available_latest)
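The resolution above now filters the registry's version list by install_prerelease before picking a target and records the latest installable version. A standalone sketch of that filtering idea using the third-party packaging library rather than dbt.semver (version strings are hypothetical):

from packaging.version import Version

def filter_installable(available, install_prerelease):
    versions = sorted(Version(v) for v in available)
    if not install_prerelease:
        # drop release candidates, betas, etc. unless explicitly requested
        versions = [v for v in versions if not v.is_prerelease]
    return [str(v) for v in versions]

available = ['0.7.0', '0.7.3', '0.20.0rc1']   # hypothetical hub listing
installable = filter_installable(available, install_prerelease=False)
print(installable)        # ['0.7.0', '0.7.3']
print(installable[-1])    # latest installable -> '0.7.3'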

View File

@@ -6,7 +6,7 @@ from dbt.exceptions import raise_dependency_error, InternalException
from dbt.context.target import generate_target_context
from dbt.config import Project, RuntimeConfig
from dbt.config.renderer import DbtProjectYamlRenderer
from dbt.deps import BasePackage, PinnedPackage, UnpinnedPackage
from dbt.deps.base import BasePackage, PinnedPackage, UnpinnedPackage
from dbt.deps.local import LocalUnpinnedPackage
from dbt.deps.git import GitUnpinnedPackage
from dbt.deps.registry import RegistryUnpinnedPackage
@@ -49,10 +49,12 @@ class PackageListing:
key_str: str = self._pick_key(key)
self.packages[key_str] = value
def _mismatched_types(self, old: UnpinnedPackage, new: UnpinnedPackage) -> NoReturn:
def _mismatched_types(
self, old: UnpinnedPackage, new: UnpinnedPackage
) -> NoReturn:
raise_dependency_error(
f"Cannot incorporate {new} ({new.__class__.__name__}) in {old} "
f"({old.__class__.__name__}): mismatched types"
f'Cannot incorporate {new} ({new.__class__.__name__}) in {old} '
f'({old.__class__.__name__}): mismatched types'
)
def incorporate(self, package: UnpinnedPackage):
@@ -76,14 +78,14 @@ class PackageListing:
pkg = RegistryUnpinnedPackage.from_contract(contract)
else:
raise InternalException(
"Invalid package type {}".format(type(contract))
'Invalid package type {}'.format(type(contract))
)
self.incorporate(pkg)
@classmethod
def from_contracts(
cls: Type["PackageListing"], src: List[PackageContract]
) -> "PackageListing":
cls: Type['PackageListing'], src: List[PackageContract]
) -> 'PackageListing':
self = cls({})
self.update_from(src)
return self
@@ -106,14 +108,14 @@ def _check_for_duplicate_project_names(
if project_name in seen:
raise_dependency_error(
f'Found duplicate project "{project_name}". This occurs when '
"a dependency has the same project name as some other "
"dependency."
'a dependency has the same project name as some other '
'dependency.'
)
elif project_name == config.project_name:
raise_dependency_error(
"Found a dependency with the same name as the root project "
'Found a dependency with the same name as the root project '
f'"{project_name}". Package names must be unique in a project.'
" Please rename one of these packages."
' Please rename one of these packages.'
)
seen.add(project_name)

File diff suppressed because it is too large

View File

@@ -1,7 +1,6 @@
import os
import multiprocessing
if os.name != "nt":
if os.name != 'nt':
# https://bugs.python.org/issue41567
import multiprocessing.popen_spawn_posix # type: ignore
from pathlib import Path
@@ -14,9 +13,11 @@ FULL_REFRESH = None
USE_CACHE = None
WARN_ERROR = None
TEST_NEW_PARSER = None
USE_EXPERIMENTAL_PARSER = None
WRITE_JSON = None
PARTIAL_PARSE = None
USE_COLORS = None
STORE_FAILURES = None
def env_set_truthy(key: str) -> Optional[str]:
@@ -24,7 +25,7 @@ def env_set_truthy(key: str) -> Optional[str]:
otherwise.
"""
value = os.getenv(key)
if not value or value.lower() in ("0", "false", "f"):
if not value or value.lower() in ('0', 'false', 'f'):
return None
return value
@@ -37,56 +38,68 @@ def env_set_path(key: str) -> Optional[Path]:
return Path(value)
SINGLE_THREADED_WEBSERVER = env_set_truthy("DBT_SINGLE_THREADED_WEBSERVER")
SINGLE_THREADED_HANDLER = env_set_truthy("DBT_SINGLE_THREADED_HANDLER")
MACRO_DEBUGGING = env_set_truthy("DBT_MACRO_DEBUGGING")
DEFER_MODE = env_set_truthy("DBT_DEFER_TO_STATE")
ARTIFACT_STATE_PATH = env_set_path("DBT_ARTIFACT_STATE_PATH")
SINGLE_THREADED_WEBSERVER = env_set_truthy('DBT_SINGLE_THREADED_WEBSERVER')
SINGLE_THREADED_HANDLER = env_set_truthy('DBT_SINGLE_THREADED_HANDLER')
MACRO_DEBUGGING = env_set_truthy('DBT_MACRO_DEBUGGING')
DEFER_MODE = env_set_truthy('DBT_DEFER_TO_STATE')
ARTIFACT_STATE_PATH = env_set_path('DBT_ARTIFACT_STATE_PATH')
def _get_context():
# TODO: change this back to use fork() on linux when we have made that safe
return multiprocessing.get_context("spawn")
return multiprocessing.get_context('spawn')
MP_CONTEXT = _get_context()
def reset():
global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS
global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
STORE_FAILURES
STRICT_MODE = False
FULL_REFRESH = False
USE_CACHE = True
WARN_ERROR = False
TEST_NEW_PARSER = False
USE_EXPERIMENTAL_PARSER = False
WRITE_JSON = True
PARTIAL_PARSE = False
MP_CONTEXT = _get_context()
USE_COLORS = True
STORE_FAILURES = False
def set_from_args(args):
global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS
global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
STORE_FAILURES
USE_CACHE = getattr(args, "use_cache", USE_CACHE)
USE_CACHE = getattr(args, 'use_cache', USE_CACHE)
FULL_REFRESH = getattr(args, "full_refresh", FULL_REFRESH)
STRICT_MODE = getattr(args, "strict", STRICT_MODE)
WARN_ERROR = STRICT_MODE or getattr(args, "warn_error", STRICT_MODE or WARN_ERROR)
FULL_REFRESH = getattr(args, 'full_refresh', FULL_REFRESH)
STRICT_MODE = getattr(args, 'strict', STRICT_MODE)
WARN_ERROR = (
STRICT_MODE or
getattr(args, 'warn_error', STRICT_MODE or WARN_ERROR)
)
TEST_NEW_PARSER = getattr(args, "test_new_parser", TEST_NEW_PARSER)
WRITE_JSON = getattr(args, "write_json", WRITE_JSON)
PARTIAL_PARSE = getattr(args, "partial_parse", None)
TEST_NEW_PARSER = getattr(args, 'test_new_parser', TEST_NEW_PARSER)
USE_EXPERIMENTAL_PARSER = getattr(args, 'use_experimental_parser', USE_EXPERIMENTAL_PARSER)
WRITE_JSON = getattr(args, 'write_json', WRITE_JSON)
PARTIAL_PARSE = getattr(args, 'partial_parse', None)
MP_CONTEXT = _get_context()
# The use_colors attribute will always have a value because it is assigned
# None by default from the add_mutually_exclusive_group function
use_colors_override = getattr(args, "use_colors")
use_colors_override = getattr(args, 'use_colors')
if use_colors_override is not None:
USE_COLORS = use_colors_override
STORE_FAILURES = getattr(args, 'store_failures', STORE_FAILURES)
# initialize everything to the defaults on module load
reset()
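A standalone sketch of env_set_truthy above: an unset variable, '0', 'false', or 'f' (any case) reads as disabled, while anything else comes back as the raw string value:

import os

def env_set_truthy(key):
    value = os.getenv(key)
    if not value or value.lower() in ('0', 'false', 'f'):
        return None
    return value

os.environ['DBT_MACRO_DEBUGGING'] = 'False'
print(env_set_truthy('DBT_MACRO_DEBUGGING'))   # None
os.environ['DBT_MACRO_DEBUGGING'] = '1'
print(env_set_truthy('DBT_MACRO_DEBUGGING'))   # '1'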

View File

@@ -2,7 +2,9 @@
import itertools
from dbt.clients.yaml_helper import yaml, Loader, Dumper # noqa: F401
from typing import Dict, List, Optional, Tuple, Any, Union
from typing import (
Dict, List, Optional, Tuple, Any, Union
)
from dbt.contracts.selection import SelectorDefinition, SelectorFile
from dbt.exceptions import InternalException, ValidationException
@@ -15,33 +17,34 @@ from .selector_spec import (
SelectionCriteria,
)
INTERSECTION_DELIMITER = ","
INTERSECTION_DELIMITER = ','
DEFAULT_INCLUDES: List[str] = ["fqn:*", "source:*", "exposure:*"]
DEFAULT_INCLUDES: List[str] = ['fqn:*', 'source:*', 'exposure:*']
DEFAULT_EXCLUDES: List[str] = []
DATA_TEST_SELECTOR: str = "test_type:data"
SCHEMA_TEST_SELECTOR: str = "test_type:schema"
DATA_TEST_SELECTOR: str = 'test_type:data'
SCHEMA_TEST_SELECTOR: str = 'test_type:schema'
def parse_union(components: List[str], expect_exists: bool) -> SelectionUnion:
def parse_union(
components: List[str], expect_exists: bool, greedy: bool = False
) -> SelectionUnion:
# turn ['a b', 'c'] -> ['a', 'b', 'c']
raw_specs = itertools.chain.from_iterable(r.split(" ") for r in components)
raw_specs = itertools.chain.from_iterable(
r.split(' ') for r in components
)
union_components: List[SelectionSpec] = []
# ['a', 'b', 'c,d'] -> union('a', 'b', intersection('c', 'd'))
for raw_spec in raw_specs:
intersection_components: List[SelectionSpec] = [
SelectionCriteria.from_single_spec(part)
SelectionCriteria.from_single_spec(part, greedy=greedy)
for part in raw_spec.split(INTERSECTION_DELIMITER)
]
union_components.append(
SelectionIntersection(
components=intersection_components,
expect_exists=expect_exists,
raw=raw_spec,
)
)
union_components.append(SelectionIntersection(
components=intersection_components,
expect_exists=expect_exists,
raw=raw_spec,
))
return SelectionUnion(
components=union_components,
expect_exists=False,
@@ -50,21 +53,21 @@ def parse_union(components: List[str], expect_exists: bool) -> SelectionUnion:
def parse_union_from_default(
raw: Optional[List[str]], default: List[str]
raw: Optional[List[str]], default: List[str], greedy: bool = False
) -> SelectionUnion:
components: List[str]
expect_exists: bool
if raw is None:
return parse_union(components=default, expect_exists=False)
return parse_union(components=default, expect_exists=False, greedy=greedy)
else:
return parse_union(components=raw, expect_exists=True)
return parse_union(components=raw, expect_exists=True, greedy=greedy)
def parse_difference(
include: Optional[List[str]], exclude: Optional[List[str]]
) -> SelectionDifference:
included = parse_union_from_default(include, DEFAULT_INCLUDES)
excluded = parse_union_from_default(exclude, DEFAULT_EXCLUDES)
excluded = parse_union_from_default(exclude, DEFAULT_EXCLUDES, greedy=True)
return SelectionDifference(components=[included, excluded])
@@ -74,7 +77,9 @@ def parse_test_selectors(
union_components = []
if data:
union_components.append(SelectionCriteria.from_single_spec(DATA_TEST_SELECTOR))
union_components.append(
SelectionCriteria.from_single_spec(DATA_TEST_SELECTOR)
)
if schema:
union_components.append(
SelectionCriteria.from_single_spec(SCHEMA_TEST_SELECTOR)
@@ -92,21 +97,27 @@ def parse_test_selectors(
raw=[DATA_TEST_SELECTOR, SCHEMA_TEST_SELECTOR],
)
return SelectionIntersection(components=[base, intersect_with], expect_exists=True)
return SelectionIntersection(
components=[base, intersect_with], expect_exists=True
)
RawDefinition = Union[str, Dict[str, Any]]
def _get_list_dicts(dct: Dict[str, Any], key: str) -> List[RawDefinition]:
def _get_list_dicts(
dct: Dict[str, Any], key: str
) -> List[RawDefinition]:
result: List[RawDefinition] = []
if key not in dct:
raise InternalException(
f"Expected to find key {key} in dict, only found {list(dct)}"
f'Expected to find key {key} in dict, only found {list(dct)}'
)
values = dct[key]
if not isinstance(values, list):
raise ValidationException(f'Invalid value for key "{key}". Expected a list.')
raise ValidationException(
f'Invalid value for key "{key}". Expected a list.'
)
for value in values:
if isinstance(value, dict):
for value_key in value:
@@ -121,31 +132,36 @@ def _get_list_dicts(dct: Dict[str, Any], key: str) -> List[RawDefinition]:
else:
raise ValidationException(
f'Invalid value type {type(value)} in key "{key}", expected '
f"dict or str (value: {value})."
f'dict or str (value: {value}).'
)
return result
def _parse_exclusions(definition) -> Optional[SelectionSpec]:
exclusions = _get_list_dicts(definition, "exclude")
parsed_exclusions = [parse_from_definition(excl) for excl in exclusions]
exclusions = _get_list_dicts(definition, 'exclude')
parsed_exclusions = [
parse_from_definition(excl) for excl in exclusions
]
if len(parsed_exclusions) == 1:
return parsed_exclusions[0]
elif len(parsed_exclusions) > 1:
return SelectionUnion(components=parsed_exclusions, raw=exclusions)
return SelectionUnion(
components=parsed_exclusions,
raw=exclusions
)
else:
return None
def _parse_include_exclude_subdefs(
definitions: List[RawDefinition],
definitions: List[RawDefinition]
) -> Tuple[List[SelectionSpec], Optional[SelectionSpec]]:
include_parts: List[SelectionSpec] = []
diff_arg: Optional[SelectionSpec] = None
for definition in definitions:
if isinstance(definition, dict) and "exclude" in definition:
if isinstance(definition, dict) and 'exclude' in definition:
# do not allow multiple exclude: defs at the same level
if diff_arg is not None:
yaml_sel_cfg = yaml.dump(definition)
@@ -161,7 +177,7 @@ def _parse_include_exclude_subdefs(
def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
union_def_parts = _get_list_dicts(definition, "union")
union_def_parts = _get_list_dicts(definition, 'union')
include, exclude = _parse_include_exclude_subdefs(union_def_parts)
union = SelectionUnion(components=include)
@@ -170,11 +186,16 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
union.raw = definition
return union
else:
return SelectionDifference(components=[union, exclude], raw=definition)
return SelectionDifference(
components=[union, exclude],
raw=definition
)
def parse_intersection_definition(definition: Dict[str, Any]) -> SelectionSpec:
intersection_def_parts = _get_list_dicts(definition, "intersection")
def parse_intersection_definition(
definition: Dict[str, Any]
) -> SelectionSpec:
intersection_def_parts = _get_list_dicts(definition, 'intersection')
include, exclude = _parse_include_exclude_subdefs(intersection_def_parts)
intersection = SelectionIntersection(components=include)
@@ -182,7 +203,10 @@ def parse_intersection_definition(definition: Dict[str, Any]) -> SelectionSpec:
intersection.raw = definition
return intersection
else:
return SelectionDifference(components=[intersection, exclude], raw=definition)
return SelectionDifference(
components=[intersection, exclude],
raw=definition
)
def parse_dict_definition(definition: Dict[str, Any]) -> SelectionSpec:
@@ -196,14 +220,14 @@ def parse_dict_definition(definition: Dict[str, Any]) -> SelectionSpec:
f'"{type(key)}" ({key})'
)
dct = {
"method": key,
"value": value,
'method': key,
'value': value,
}
elif "method" in definition and "value" in definition:
elif 'method' in definition and 'value' in definition:
dct = definition
if "exclude" in definition:
if 'exclude' in definition:
diff_arg = _parse_exclusions(definition)
dct = {k: v for k, v in dct.items() if k != "exclude"}
dct = {k: v for k, v in dct.items() if k != 'exclude'}
else:
raise ValidationException(
f'Expected either 1 key or else "method" '
@@ -218,14 +242,13 @@ def parse_dict_definition(definition: Dict[str, Any]) -> SelectionSpec:
return SelectionDifference(components=[base, diff_arg])
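The dict form handled above accepts either a single {method: value} shorthand or explicit 'method'/'value' keys, with an optional 'exclude' list split out separately. A hedged sketch of the shapes it normalizes; dbt's actual parser builds SelectionCriteria and SelectionDifference objects, shown here only as plain dicts:

def normalize_selector(definition):
    # single-key shorthand, e.g. {'tag': 'nightly'}
    if 'method' not in definition:
        (key, value), = [(k, v) for k, v in definition.items() if k != 'exclude']
        dct = {'method': key, 'value': value}
    else:
        dct = {k: v for k, v in definition.items() if k != 'exclude'}
    exclude = definition.get('exclude')
    return dct, exclude

print(normalize_selector({'tag': 'nightly'}))
# ({'method': 'tag', 'value': 'nightly'}, None)
print(normalize_selector({'method': 'path', 'value': 'models/staging',
                          'exclude': ['tag:deprecated']}))
# ({'method': 'path', 'value': 'models/staging'}, ['tag:deprecated'])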
def parse_from_definition(definition: RawDefinition, rootlevel=False) -> SelectionSpec:
def parse_from_definition(
definition: RawDefinition, rootlevel=False
) -> SelectionSpec:
if (
isinstance(definition, dict)
and ("union" in definition or "intersection" in definition)
and rootlevel
and len(definition) > 1
):
if (isinstance(definition, dict) and
('union' in definition or 'intersection' in definition) and
rootlevel and len(definition) > 1):
keys = ",".join(definition.keys())
raise ValidationException(
f"Only a single 'union' or 'intersection' key is allowed "
@@ -233,24 +256,25 @@ def parse_from_definition(definition: RawDefinition, rootlevel=False) -> Selecti
)
if isinstance(definition, str):
return SelectionCriteria.from_single_spec(definition)
elif "union" in definition:
elif 'union' in definition:
return parse_union_definition(definition)
elif "intersection" in definition:
elif 'intersection' in definition:
return parse_intersection_definition(definition)
elif isinstance(definition, dict):
return parse_dict_definition(definition)
else:
raise ValidationException(
f"Expected to find union, intersection, str or dict, instead "
f"found {type(definition)}: {definition}"
f'Expected to find union, intersection, str or dict, instead '
f'found {type(definition)}: {definition}'
)
def parse_from_selectors_definition(source: SelectorFile) -> Dict[str, SelectionSpec]:
def parse_from_selectors_definition(
source: SelectorFile
) -> Dict[str, SelectionSpec]:
result: Dict[str, SelectionSpec] = {}
selector: SelectorDefinition
for selector in source.selectors:
result[selector.name] = parse_from_definition(
selector.definition, rootlevel=True
)
result[selector.name] = parse_from_definition(selector.definition,
rootlevel=True)
return result

View File

@@ -1,16 +1,17 @@
from typing import Set, Iterable, Iterator, Optional, NewType
from typing import (
Set, Iterable, Iterator, Optional, NewType
)
import networkx as nx # type: ignore
from dbt.exceptions import InternalException
UniqueId = NewType("UniqueId", str)
UniqueId = NewType('UniqueId', str)
class Graph:
"""A wrapper around the networkx graph that understands SelectionCriteria
and how they interact with the graph.
"""
def __init__(self, graph):
self.graph = graph
@@ -28,11 +29,12 @@ class Graph:
) -> Set[UniqueId]:
"""Returns all nodes having a path to `node` in `graph`"""
if not self.graph.has_node(node):
raise InternalException(f"Node {node} not found in the graph!")
raise InternalException(f'Node {node} not found in the graph!')
with nx.utils.reversed(self.graph):
anc = nx.single_source_shortest_path_length(
G=self.graph, source=node, cutoff=max_depth
).keys()
anc = nx.single_source_shortest_path_length(G=self.graph,
source=node,
cutoff=max_depth)\
.keys()
return anc - {node}
def descendants(
@@ -40,13 +42,16 @@ class Graph:
) -> Set[UniqueId]:
"""Returns all nodes reachable from `node` in `graph`"""
if not self.graph.has_node(node):
raise InternalException(f"Node {node} not found in the graph!")
des = nx.single_source_shortest_path_length(
G=self.graph, source=node, cutoff=max_depth
).keys()
raise InternalException(f'Node {node} not found in the graph!')
des = nx.single_source_shortest_path_length(G=self.graph,
source=node,
cutoff=max_depth)\
.keys()
return des - {node}
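The two traversal methods above lean on networkx's single_source_shortest_path_length with an optional cutoff to walk the DAG; a standalone sketch with a three-node graph, assuming networkx is installed (node names are illustrative):

import networkx as nx

graph = nx.DiGraph()
graph.add_edges_from([('model_a', 'model_b'), ('model_b', 'model_c')])

def descendants(graph, node, max_depth=None):
    # every node reachable from `node` within `max_depth` hops, minus itself
    reached = nx.single_source_shortest_path_length(
        G=graph, source=node, cutoff=max_depth).keys()
    return set(reached) - {node}

print(descendants(graph, 'model_a'))               # {'model_b', 'model_c'}
print(descendants(graph, 'model_a', max_depth=1))  # {'model_b'}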
def select_childrens_parents(self, selected: Set[UniqueId]) -> Set[UniqueId]:
def select_childrens_parents(
self, selected: Set[UniqueId]
) -> Set[UniqueId]:
ancestors_for = self.select_children(selected) | selected
return self.select_parents(ancestors_for) | ancestors_for
@@ -72,7 +77,7 @@ class Graph:
successors.update(self.graph.successors(node))
return successors
def get_subset_graph(self, selected: Iterable[UniqueId]) -> "Graph":
def get_subset_graph(self, selected: Iterable[UniqueId]) -> 'Graph':
"""Create and return a new graph that is a shallow copy of the graph,
but with only the nodes in include_nodes. Transitive edges across
removed nodes are preserved as explicit new edges.
@@ -93,7 +98,7 @@ class Graph:
)
return Graph(new_graph)
def subgraph(self, nodes: Iterable[UniqueId]) -> "Graph":
def subgraph(self, nodes: Iterable[UniqueId]) -> 'Graph':
return Graph(self.graph.subgraph(nodes))
def get_dependent_nodes(self, node: UniqueId):

View File

@@ -1,8 +1,8 @@
import threading
from queue import PriorityQueue
from typing import Dict, Set, Optional
import networkx as nx # type: ignore
import threading
from queue import PriorityQueue
from typing import Dict, Set, List, Generator, Optional
from .graph import UniqueId
from dbt.contracts.graph.parsed import ParsedSourceDefinition, ParsedExposure
@@ -34,7 +34,7 @@ class GraphQueue:
# this lock controls most things
self.lock = threading.Lock()
# store the 'score' of each node as a number. Lower is higher priority.
self._scores = self._calculate_scores()
self._scores = self._get_scores(self.graph)
# populate the initial queue
self._find_new_additions()
# awaits after task end
@@ -53,33 +53,59 @@ class GraphQueue:
return False
return True
def _calculate_scores(self) -> Dict[UniqueId, int]:
"""Calculate the 'value' of each node in the graph based on how many
blocking descendants it has. We use this score for the internal
priority queue's ordering, so the quality of this metric is important.
@staticmethod
def _grouped_topological_sort(
graph: nx.DiGraph,
) -> Generator[List[str], None, None]:
"""Topological sort of given graph that groups ties.
The score is stored as a negative number because the internal
PriorityQueue picks lowest values first.
Adapted from `nx.topological_sort`. This function returns a topological sort of a graph,
but instead of arbitrarily ordering ties in the sort order, ties are grouped together in
lists.
We could do this in one pass over the graph instead of len(self.graph)
passes but this is easy. For large graphs this may hurt performance.
Args:
graph: The graph to be sorted.
This operates on the graph, so it would require a lock if called from
outside __init__.
:return Dict[str, int]: The score dict, mapping unique IDs to integer
scores. Lower scores are higher priority.
Returns:
A generator that yields lists of nodes, one list per graph depth level.
"""
indegree_map = {v: d for v, d in graph.in_degree() if d > 0}
zero_indegree = [v for v, d in graph.in_degree() if d == 0]
while zero_indegree:
yield zero_indegree
new_zero_indegree = []
for v in zero_indegree:
for _, child in graph.edges(v):
indegree_map[child] -= 1
if not indegree_map[child]:
new_zero_indegree.append(child)
zero_indegree = new_zero_indegree
def _get_scores(self, graph: nx.DiGraph) -> Dict[str, int]:
"""Scoring nodes for processing order.
Scores are calculated by the graph depth level. Lowest score (0) should be processed first.
Args:
graph: The graph to be scored.
Returns:
A dictionary consisting of `node name`:`score` pairs.
"""
# split graph by connected subgraphs
subgraphs = (
graph.subgraph(x) for x in nx.connected_components(nx.Graph(graph))
)
# score all nodes in all subgraphs
scores = {}
for node in self.graph.nodes():
score = -1 * len(
[
d
for d in nx.descendants(self.graph, node)
if self._include_in_cost(d)
]
)
scores[node] = score
for subgraph in subgraphs:
grouped_nodes = self._grouped_topological_sort(subgraph)
for level, group in enumerate(grouped_nodes):
for node in group:
scores[node] = level
return scores
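A standalone run of the level-based scoring described above: nodes at the same topological depth share a score, and lower scores dequeue first (assumes networkx; the tiny DAG and its node names are illustrative):

import networkx as nx

def grouped_topological_sort(graph):
    # yield one list of nodes per depth level instead of a flat ordering
    indegree_map = {v: d for v, d in graph.in_degree() if d > 0}
    zero_indegree = [v for v, d in graph.in_degree() if d == 0]
    while zero_indegree:
        yield zero_indegree
        new_zero_indegree = []
        for v in zero_indegree:
            for _, child in graph.edges(v):
                indegree_map[child] -= 1
                if not indegree_map[child]:
                    new_zero_indegree.append(child)
        zero_indegree = new_zero_indegree

graph = nx.DiGraph()
graph.add_edges_from([('a', 'c'), ('b', 'c'), ('c', 'd')])

scores = {}
for level, group in enumerate(grouped_topological_sort(graph)):
    for node in group:
        scores[node] = level

print(scores)  # {'a': 0, 'b': 0, 'c': 1, 'd': 2}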
def get(
@@ -133,8 +159,6 @@ class GraphQueue:
def _find_new_additions(self) -> None:
"""Find any nodes in the graph that need to be added to the internal
queue and add them.
Callers must hold the lock.
"""
for node, in_degree in self.graph.in_degree():
if not self._already_known(node) and in_degree == 0:

View File

@@ -1,4 +1,5 @@
from typing import Set, List, Optional
from typing import Set, List, Optional, Tuple
from .graph import Graph, UniqueId
from .queue import GraphQueue
@@ -24,13 +25,26 @@ def get_package_names(nodes):
def alert_non_existence(raw_spec, nodes):
if len(nodes) == 0:
warn_or_error(
f"The selection criterion '{str(raw_spec)}' does not match" f" any nodes"
f"The selection criterion '{str(raw_spec)}' does not match"
f" any nodes"
)
class NodeSelector(MethodManager):
"""The node selector is aware of the graph and manifest,"""
def can_select_indirectly(node):
"""If a node is not selected itself, but its parent(s) are, it may qualify
for indirect selection.
Today, only Test nodes can be indirectly selected. In the future,
other node types or invocation flags might qualify.
"""
if node.resource_type == NodeType.Test:
return True
else:
return False
class NodeSelector(MethodManager):
"""The node selector is aware of the graph and manifest,
"""
def __init__(
self,
graph: Graph,
@@ -43,16 +57,13 @@ class NodeSelector(MethodManager):
# build a subgraph containing only non-empty, enabled nodes and enabled
# sources.
graph_members = {
unique_id
for unique_id in self.full_graph.nodes()
unique_id for unique_id in self.full_graph.nodes()
if self._is_graph_member(unique_id)
}
self.graph = self.full_graph.subgraph(graph_members)
def select_included(
self,
included_nodes: Set[UniqueId],
spec: SelectionCriteria,
self, included_nodes: Set[UniqueId], spec: SelectionCriteria,
) -> Set[UniqueId]:
"""Select the explicitly included nodes, using the given spec. Return
the selected set of unique IDs.
@@ -62,8 +73,8 @@ class NodeSelector(MethodManager):
def get_nodes_from_criteria(
self,
spec: SelectionCriteria,
) -> Set[UniqueId]:
spec: SelectionCriteria
) -> Tuple[Set[UniqueId], Set[UniqueId]]:
"""Get all nodes specified by the single selection criteria.
- collect the directly included nodes
@@ -80,11 +91,14 @@ class NodeSelector(MethodManager):
f"The '{spec.method}' selector specified in {spec.raw} is "
f"invalid. Must be one of [{valid_selectors}]"
)
return set()
return set(), set()
extras = self.collect_specified_neighbors(spec, collected)
result = self.expand_selection(collected | extras)
return result
neighbors = self.collect_specified_neighbors(spec, collected)
direct_nodes, indirect_nodes = self.expand_selection(
selected=(collected | neighbors),
greedy=spec.greedy
)
return direct_nodes, indirect_nodes
def collect_specified_neighbors(
self, spec: SelectionCriteria, selected: Set[UniqueId]
@@ -107,21 +121,46 @@ class NodeSelector(MethodManager):
additional.update(self.graph.select_children(selected, depth))
return additional
def select_nodes(self, spec: SelectionSpec) -> Set[UniqueId]:
"""Select the nodes in the graph according to the spec.
If the spec is a composite spec (a union, difference, or intersection),
def select_nodes_recursively(self, spec: SelectionSpec) -> Tuple[Set[UniqueId], Set[UniqueId]]:
"""If the spec is a composite spec (a union, difference, or intersection),
recurse into its selections and combine them. If the spec is a concrete
selection criteria, resolve that using the given graph.
"""
if isinstance(spec, SelectionCriteria):
result = self.get_nodes_from_criteria(spec)
direct_nodes, indirect_nodes = self.get_nodes_from_criteria(spec)
else:
node_selections = [self.select_nodes(component) for component in spec]
result = spec.combined(node_selections)
bundles = [
self.select_nodes_recursively(component)
for component in spec
]
direct_sets = []
indirect_sets = []
for direct, indirect in bundles:
direct_sets.append(direct)
indirect_sets.append(direct | indirect)
initial_direct = spec.combined(direct_sets)
indirect_nodes = spec.combined(indirect_sets)
direct_nodes = self.incorporate_indirect_nodes(initial_direct, indirect_nodes)
if spec.expect_exists:
alert_non_existence(spec.raw, result)
return result
alert_non_existence(spec.raw, direct_nodes)
return direct_nodes, indirect_nodes
def select_nodes(self, spec: SelectionSpec) -> Set[UniqueId]:
"""Select the nodes in the graph according to the spec.
This is the main point of entry for turning a spec into a set of nodes:
- Recurse through spec, select by criteria, combine by set operation
- Return final (unfiltered) selection set
"""
direct_nodes, indirect_nodes = self.select_nodes_recursively(spec)
return direct_nodes
def _is_graph_member(self, unique_id: UniqueId) -> bool:
if unique_id in self.manifest.sources:
@@ -147,30 +186,77 @@ class NodeSelector(MethodManager):
elif unique_id in self.manifest.exposures:
node = self.manifest.exposures[unique_id]
else:
raise InternalException(f"Node {unique_id} not found in the manifest!")
raise InternalException(
f'Node {unique_id} not found in the manifest!'
)
return self.node_is_match(node)
def filter_selection(self, selected: Set[UniqueId]) -> Set[UniqueId]:
"""Return the subset of selected nodes that is a match for this
selector.
"""
return {unique_id for unique_id in selected if self._is_match(unique_id)}
return {
unique_id for unique_id in selected if self._is_match(unique_id)
}
def expand_selection(
self, selected: Set[UniqueId], greedy: bool = False
) -> Tuple[Set[UniqueId], Set[UniqueId]]:
# Test selection can expand to include an implicitly/indirectly selected test.
# In this way, `dbt test -m model_a` also includes tests that directly depend on `model_a`.
# Expansion has two modes, GREEDY and NOT GREEDY.
#
# GREEDY mode: If ANY parent is selected, select the test. We use this for EXCLUSION.
#
# NOT GREEDY mode:
# - If ALL parents are selected, select the test.
# - If ANY parent is missing, return it separately. We'll keep it around
# for later and see if its other parents show up.
# We use this for INCLUSION.
direct_nodes = set(selected)
indirect_nodes = set()
for unique_id in self.graph.select_successors(selected):
if unique_id in self.manifest.nodes:
node = self.manifest.nodes[unique_id]
if can_select_indirectly(node):
# should we add it in directly?
if greedy or set(node.depends_on.nodes) <= set(selected):
direct_nodes.add(unique_id)
# if not:
else:
indirect_nodes.add(unique_id)
return direct_nodes, indirect_nodes
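The greedy/not-greedy rule in the comments above can be illustrated with a toy dependency map in place of the manifest (a minimal sketch; the test and model names are made up):

test_parents = {'test_not_null_model_b_id': {'model_a', 'model_b'}}

def expand(selected, greedy=False):
    direct, indirect = set(selected), set()
    for test, parents in test_parents.items():
        if greedy or parents <= selected:
            direct.add(test)      # all parents selected (or greedy): take it now
        else:
            indirect.add(test)    # park it; its other parents may show up later
    return direct, indirect

expand({'model_a'})               # ({'model_a'}, {'test_not_null_model_b_id'})
expand({'model_a'}, greedy=True)  # ({'model_a', 'test_not_null_model_b_id'}, set())
expand({'model_a', 'model_b'})    # ({'model_a', 'model_b', 'test_not_null_model_b_id'}, set())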
def incorporate_indirect_nodes(
self, direct_nodes: Set[UniqueId], indirect_nodes: Set[UniqueId] = set()
) -> Set[UniqueId]:
# Check tests previously selected indirectly to see if ALL their
# parents are now present.
selected = set(direct_nodes)
for unique_id in indirect_nodes:
if unique_id in self.manifest.nodes:
node = self.manifest.nodes[unique_id]
if set(node.depends_on.nodes) <= set(selected):
selected.add(unique_id)
return selected
def expand_selection(self, selected: Set[UniqueId]) -> Set[UniqueId]:
"""Perform selector-specific expansion."""
return selected
def get_selected(self, spec: SelectionSpec) -> Set[UniqueId]:
"""get_selected runs trhough the node selection process:
"""get_selected runs through the node selection process:
- node selection. Based on the include/exclude sets, the set
of matched unique IDs is returned
- expand the graph at each leaf node, before combination
- selectors might override this. for example, this is where
tests are added
- filtering:
- selectors can filter the nodes after all of them have been
selected
- node selection. Based on the include/exclude sets, the set
of matched unique IDs is returned
- expand the graph at each leaf node, before combination
- selectors might override this. for example, this is where
tests are added
- filtering:
- selectors can filter the nodes after all of them have been
selected
"""
selected_nodes = self.select_nodes(spec)
filtered_nodes = self.filter_selection(selected_nodes)
return filtered_nodes
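Read together, the flow of get_selected is roughly the following (a simplified sketch of the methods above, ignoring error handling):

def get_selected_sketch(selector, spec):
    direct_nodes, indirect_nodes = selector.select_nodes_recursively(spec)
    selected_nodes = direct_nodes                       # select_nodes keeps only the direct set
    return selector.filter_selection(selected_nodes)    # selector-specific filtering happens last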


@@ -22,52 +22,48 @@ from dbt.contracts.graph.parsed import (
ParsedSourceDefinition,
)
from dbt.contracts.state import PreviousState
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.exceptions import (
InternalException,
RuntimeException,
)
from dbt.node_types import NodeType
from dbt.ui import warning_tag
SELECTOR_GLOB = "*"
SELECTOR_DELIMITER = ":"
SELECTOR_GLOB = '*'
SELECTOR_DELIMITER = ':'
class MethodName(StrEnum):
FQN = "fqn"
Tag = "tag"
Source = "source"
Path = "path"
Package = "package"
Config = "config"
TestName = "test_name"
TestType = "test_type"
ResourceType = "resource_type"
State = "state"
Exposure = "exposure"
FQN = 'fqn'
Tag = 'tag'
Source = 'source'
Path = 'path'
Package = 'package'
Config = 'config'
TestName = 'test_name'
TestType = 'test_type'
ResourceType = 'resource_type'
State = 'state'
Exposure = 'exposure'
def is_selected_node(real_node, node_selector):
for i, selector_part in enumerate(node_selector):
def is_selected_node(fqn: List[str], node_selector: str):
is_last = i == len(node_selector) - 1
# If qualified_name exactly matches model name (fqn's leaf), return True
if fqn[-1] == node_selector:
return True
# Flatten node parts. Dots in model names act as namespace separators
flat_fqn = [item for segment in fqn for item in segment.split('.')]
# Selector components cannot be more than fqn's
if len(flat_fqn) < len(node_selector.split('.')):
return False
for i, selector_part in enumerate(node_selector.split('.')):
# if we hit a GLOB, then this node is selected
if selector_part == SELECTOR_GLOB:
return True
# match package.node_name or package.dir.node_name
elif is_last and selector_part == real_node[-1]:
return True
elif len(real_node) <= i:
return False
elif real_node[i] == selector_part:
elif flat_fqn[i] == selector_part:
continue
else:
return False
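The new matching logic is easy to exercise in isolation. A self-contained copy (the final return True after the loop is assumed from the unchanged tail of this function, which is not visible in the hunk):

SELECTOR_GLOB = '*'

def is_selected_node(fqn, node_selector):
    # exact match on the node name (the fqn's leaf)
    if fqn[-1] == node_selector:
        return True
    # dots inside model names act as namespace separators
    flat_fqn = [item for segment in fqn for item in segment.split('.')]
    selector_parts = node_selector.split('.')
    if len(flat_fqn) < len(selector_parts):
        return False
    for i, selector_part in enumerate(selector_parts):
        if selector_part == SELECTOR_GLOB:
            return True
        elif flat_fqn[i] == selector_part:
            continue
        else:
            return False
    return True  # assumed: every selector component matched

is_selected_node(['my_pkg', 'staging', 'stg_orders'], 'stg_orders')           # True (leaf match)
is_selected_node(['my_pkg', 'staging', 'stg_orders'], 'my_pkg.staging.*')     # True (glob)
is_selected_node(['my_pkg', 'staging', 'stg_orders'], 'my_pkg.marts.orders')  # False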
@@ -83,14 +79,15 @@ class SelectorMethod(metaclass=abc.ABCMeta):
self,
manifest: Manifest,
previous_state: Optional[PreviousState],
arguments: List[str],
arguments: List[str]
):
self.manifest: Manifest = manifest
self.previous_state = previous_state
self.arguments: List[str] = arguments
def parsed_nodes(
self, included_nodes: Set[UniqueId]
self,
included_nodes: Set[UniqueId]
) -> Iterator[Tuple[UniqueId, ManifestNode]]:
for key, node in self.manifest.nodes.items():
@@ -100,7 +97,8 @@ class SelectorMethod(metaclass=abc.ABCMeta):
yield unique_id, node
def source_nodes(
self, included_nodes: Set[UniqueId]
self,
included_nodes: Set[UniqueId]
) -> Iterator[Tuple[UniqueId, ParsedSourceDefinition]]:
for key, source in self.manifest.sources.items():
@@ -110,7 +108,8 @@ class SelectorMethod(metaclass=abc.ABCMeta):
yield unique_id, source
def exposure_nodes(
self, included_nodes: Set[UniqueId]
self,
included_nodes: Set[UniqueId]
) -> Iterator[Tuple[UniqueId, ParsedExposure]]:
for key, exposure in self.manifest.exposures.items():
@@ -120,28 +119,26 @@ class SelectorMethod(metaclass=abc.ABCMeta):
yield unique_id, exposure
def all_nodes(
self, included_nodes: Set[UniqueId]
self,
included_nodes: Set[UniqueId]
) -> Iterator[Tuple[UniqueId, SelectorTarget]]:
yield from chain(
self.parsed_nodes(included_nodes),
self.source_nodes(included_nodes),
self.exposure_nodes(included_nodes),
)
yield from chain(self.parsed_nodes(included_nodes),
self.source_nodes(included_nodes),
self.exposure_nodes(included_nodes))
def configurable_nodes(
self, included_nodes: Set[UniqueId]
self,
included_nodes: Set[UniqueId]
) -> Iterator[Tuple[UniqueId, CompileResultNode]]:
yield from chain(
self.parsed_nodes(included_nodes), self.source_nodes(included_nodes)
)
yield from chain(self.parsed_nodes(included_nodes),
self.source_nodes(included_nodes))
def non_source_nodes(
self,
included_nodes: Set[UniqueId],
) -> Iterator[Tuple[UniqueId, Union[ParsedExposure, ManifestNode]]]:
yield from chain(
self.parsed_nodes(included_nodes), self.exposure_nodes(included_nodes)
)
yield from chain(self.parsed_nodes(included_nodes),
self.exposure_nodes(included_nodes))
@abc.abstractmethod
def search(
@@ -149,35 +146,24 @@ class SelectorMethod(metaclass=abc.ABCMeta):
included_nodes: Set[UniqueId],
selector: str,
) -> Iterator[UniqueId]:
raise NotImplementedError("subclasses should implement this")
raise NotImplementedError('subclasses should implement this')
class QualifiedNameSelectorMethod(SelectorMethod):
def node_is_match(
self,
qualified_name: List[str],
package_names: Set[str],
fqn: List[str],
) -> bool:
"""Determine if a qualfied name matches an fqn, given the set of package
def node_is_match(self, qualified_name: str, fqn: List[str]) -> bool:
"""Determine if a qualified name matches an fqn for all package
names in the graph.
:param List[str] qualified_name: The components of the selector or node
name, split on '.'.
:param Set[str] package_names: The set of pacakge names in the graph.
:param str qualified_name: The qualified name to match the nodes with
:param List[str] fqn: The node's fully qualified name in the graph.
"""
if len(qualified_name) == 1 and fqn[-1] == qualified_name[0]:
unscoped_fqn = fqn[1:]
if is_selected_node(fqn, qualified_name):
return True
# Match nodes across different packages
elif is_selected_node(unscoped_fqn, qualified_name):
return True
if qualified_name[0] in package_names:
if is_selected_node(fqn, qualified_name):
return True
for package_name in package_names:
local_qualified_node_name = [package_name] + qualified_name
if is_selected_node(fqn, local_qualified_node_name):
return True
return False
@@ -188,15 +174,9 @@ class QualifiedNameSelectorMethod(SelectorMethod):
:param str selector: The selector or node name
"""
qualified_name = selector.split(".")
parsed_nodes = list(self.parsed_nodes(included_nodes))
package_names = {n.package_name for _, n in parsed_nodes}
for node, real_node in parsed_nodes:
if self.node_is_match(
qualified_name,
package_names,
real_node.fqn,
):
if self.node_is_match(selector, real_node.fqn):
yield node
@@ -215,7 +195,7 @@ class SourceSelectorMethod(SelectorMethod):
self, included_nodes: Set[UniqueId], selector: str
) -> Iterator[UniqueId]:
"""yields nodes from included are the specified source."""
parts = selector.split(".")
parts = selector.split('.')
target_package = SELECTOR_GLOB
if len(parts) == 1:
target_source, target_table = parts[0], None
@@ -226,9 +206,9 @@ class SourceSelectorMethod(SelectorMethod):
else: # len(parts) > 3 or len(parts) == 0
msg = (
'Invalid source selector value "{}". Sources must be of the '
"form `${{source_name}}`, "
"`${{source_name}}.${{target_name}}`, or "
"`${{package_name}}.${{source_name}}.${{target_name}}"
'form `${{source_name}}`, '
'`${{source_name}}.${{target_name}}`, or '
'`${{package_name}}.${{source_name}}.${{target_name}}'
).format(selector)
raise RuntimeException(msg)
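A hedged sketch of how the source: selector value is split into package/source/table parts; the two- and three-part branches are not visible in this hunk, so they are reconstructed here from the error message above:

SELECTOR_GLOB = '*'

def parse_source_selector(selector):
    parts = selector.split('.')
    target_package = SELECTOR_GLOB
    if len(parts) == 1:
        target_source, target_table = parts[0], None
    elif len(parts) == 2:     # assumed: source_name.target_name
        target_source, target_table = parts
    elif len(parts) == 3:     # assumed: package_name.source_name.target_name
        target_package, target_source, target_table = parts
    else:
        raise ValueError(f'Invalid source selector value "{selector}"')
    return target_package, target_source, target_table

parse_source_selector('raw')                # ('*', 'raw', None)
parse_source_selector('raw.orders')         # ('*', 'raw', 'orders')
parse_source_selector('my_pkg.raw.orders')  # ('my_pkg', 'raw', 'orders')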
@@ -247,7 +227,7 @@ class ExposureSelectorMethod(SelectorMethod):
def search(
self, included_nodes: Set[UniqueId], selector: str
) -> Iterator[UniqueId]:
parts = selector.split(".")
parts = selector.split('.')
target_package = SELECTOR_GLOB
if len(parts) == 1:
target_name = parts[0]
@@ -256,8 +236,8 @@ class ExposureSelectorMethod(SelectorMethod):
else:
msg = (
'Invalid exposure selector value "{}". Exposures must be of '
"the form ${{exposure_name}} or "
"${{exposure_package.exposure_name}}"
'the form ${{exposure_name}} or '
'${{exposure_package.exposure_name}}'
).format(selector)
raise RuntimeException(msg)
@@ -274,7 +254,9 @@ class PathSelectorMethod(SelectorMethod):
def search(
self, included_nodes: Set[UniqueId], selector: str
) -> Iterator[UniqueId]:
"""Yields nodes from inclucded that match the given path."""
"""Yields nodes from inclucded that match the given path.
"""
# use '.' and not 'root' for easy comparison
root = Path.cwd()
paths = set(p.relative_to(root) for p in root.glob(selector))
@@ -333,7 +315,7 @@ class ConfigSelectorMethod(SelectorMethod):
parts = self.arguments
# special case: if the user wanted to compare test severity,
# make the comparison case-insensitive
if parts == ["severity"]:
if parts == ['severity']:
selector = CaseInsensitive(selector)
# search sources is kind of useless now source configs only have
@@ -379,13 +361,14 @@ class TestTypeSelectorMethod(SelectorMethod):
self, included_nodes: Set[UniqueId], selector: str
) -> Iterator[UniqueId]:
search_types: Tuple[Type, ...]
if selector == "schema":
if selector == 'schema':
search_types = (ParsedSchemaTestNode, CompiledSchemaTestNode)
elif selector == "data":
elif selector == 'data':
search_types = (ParsedDataTestNode, CompiledDataTestNode)
else:
raise RuntimeException(
f'Invalid test type selector {selector}: expected "data" or ' '"schema"'
f'Invalid test type selector {selector}: expected "data" or '
'"schema"'
)
for node, real_node in self.parsed_nodes(included_nodes):
@@ -396,57 +379,87 @@ class TestTypeSelectorMethod(SelectorMethod):
class StateSelectorMethod(SelectorMethod):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.macros_were_modified: Optional[List[str]] = None
self.modified_macros: Optional[List[str]] = None
def _macros_modified(self) -> List[str]:
# we checked in the caller!
if self.previous_state is None or self.previous_state.manifest is None:
raise InternalException("No comparison manifest in _macros_modified")
raise InternalException(
'No comparison manifest in _macros_modified'
)
old_macros = self.previous_state.manifest.macros
new_macros = self.manifest.macros
modified = []
for uid, macro in new_macros.items():
name = f"{macro.package_name}.{macro.name}"
if uid in old_macros:
old_macro = old_macros[uid]
if macro.macro_sql != old_macro.macro_sql:
modified.append(f"{name} changed")
modified.append(uid)
else:
modified.append(f"{name} added")
modified.append(uid)
for uid, macro in old_macros.items():
if uid not in new_macros:
modified.append(f"{macro.package_name}.{macro.name} removed")
modified.append(uid)
return modified[:3]
return modified
def check_modified(
self,
old: Optional[SelectorTarget],
new: SelectorTarget,
def recursively_check_macros_modified(self, node):
# check if there are any changes in macros the first time
if self.modified_macros is None:
self.modified_macros = self._macros_modified()
# loop through all macros that this node depends on
for macro_uid in node.depends_on.macros:
# is this macro one of the modified macros?
if macro_uid in self.modified_macros:
return True
# if not, and this macro depends on other macros, keep looping
macro = self.manifest.macros[macro_uid]
if len(macro.depends_on.macros) > 0:
return self.recursively_check_macros_modified(macro)
else:
return False
return False
def check_modified(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
different_contents = not new.same_contents(old) # type: ignore
upstream_macro_change = self.recursively_check_macros_modified(new)
return different_contents or upstream_macro_change
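A toy illustration of why a node counts as modified when any macro in its transitive macro dependencies changed (manifest entries are faked with SimpleNamespace here; nothing below is dbt API):

from types import SimpleNamespace

# model -> macro_a -> macro_b, and only macro_b was modified
macros = {
    'macro.pkg.macro_a': SimpleNamespace(depends_on=SimpleNamespace(macros=['macro.pkg.macro_b'])),
    'macro.pkg.macro_b': SimpleNamespace(depends_on=SimpleNamespace(macros=[])),
}
modified_macros = ['macro.pkg.macro_b']

def depends_on_modified_macro(node):
    for macro_uid in node.depends_on.macros:
        if macro_uid in modified_macros:
            return True
        if macros[macro_uid].depends_on.macros:
            return depends_on_modified_macro(macros[macro_uid])
    return False

model = SimpleNamespace(depends_on=SimpleNamespace(macros=['macro.pkg.macro_a']))
depends_on_modified_macro(model)   # True: macro_a is unchanged, but it calls the modified macro_b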
def check_modified_body(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
if hasattr(new, "same_body"):
return not new.same_body(old) # type: ignore
else:
return False
def check_modified_configs(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
if hasattr(new, "same_config"):
return not new.same_config(old) # type: ignore
else:
return False
def check_modified_persisted_descriptions(
self, old: Optional[SelectorTarget], new: SelectorTarget
) -> bool:
# check if there are any changes in macros, if so, log a warning the
# first time
if self.macros_were_modified is None:
self.macros_were_modified = self._macros_modified()
if self.macros_were_modified:
log_str = ", ".join(self.macros_were_modified)
logger.warning(
warning_tag(
f"During a state comparison, dbt detected a change in "
f"macros. This will not be marked as a modification. Some "
f"macros: {log_str}"
)
)
if hasattr(new, "same_persisted_description"):
return not new.same_persisted_description(old) # type: ignore
else:
return False
return not new.same_contents(old) # type: ignore
def check_new(
self,
old: Optional[SelectorTarget],
new: SelectorTarget,
def check_modified_relation(
self, old: Optional[SelectorTarget], new: SelectorTarget
) -> bool:
if hasattr(new, "same_database_representation"):
return not new.same_database_representation(old) # type: ignore
else:
return False
def check_modified_macros(self, _, new: SelectorTarget) -> bool:
return self.recursively_check_macros_modified(new)
def check_new(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
return old is None
def search(
@@ -454,12 +467,19 @@ class StateSelectorMethod(SelectorMethod):
) -> Iterator[UniqueId]:
if self.previous_state is None or self.previous_state.manifest is None:
raise RuntimeException(
"Got a state selector method, but no comparison manifest"
'Got a state selector method, but no comparison manifest'
)
state_checks = {
"modified": self.check_modified,
"new": self.check_new,
# it's new if there is no old version
'new': lambda old, _: old is None,
# use methods defined above to compare properties of old + new
'modified': self.check_modified,
'modified.body': self.check_modified_body,
'modified.configs': self.check_modified_configs,
'modified.persisted_descriptions': self.check_modified_persisted_descriptions,
'modified.relation': self.check_modified_relation,
'modified.macros': self.check_modified_macros,
}
if selector in state_checks:
checker = state_checks[selector]
@@ -513,7 +533,7 @@ class MethodManager:
if method not in self.SELECTOR_METHODS:
raise InternalException(
f'Method name "{method}" is a valid node selection '
f"method name, but it is not handled"
f'method name, but it is not handled'
)
cls: Type[SelectorMethod] = self.SELECTOR_METHODS[method]
return cls(self.manifest, self.previous_state, method_arguments)


@@ -3,21 +3,23 @@ import re
from abc import ABCMeta, abstractmethod
from dataclasses import dataclass
from typing import Set, Iterator, List, Optional, Dict, Union, Any, Iterable, Tuple
from typing import (
Set, Iterator, List, Optional, Dict, Union, Any, Iterable, Tuple
)
from .graph import UniqueId
from .selector_methods import MethodName
from dbt.exceptions import RuntimeException, InvalidSelectorException
RAW_SELECTOR_PATTERN = re.compile(
r"\A"
r"(?P<childrens_parents>(\@))?"
r"(?P<parents>((?P<parents_depth>(\d*))\+))?"
r"((?P<method>([\w.]+)):)?(?P<value>(.*?))"
r"(?P<children>(\+(?P<children_depth>(\d*))))?"
r"\Z"
r'\A'
r'(?P<childrens_parents>(\@))?'
r'(?P<parents>((?P<parents_depth>(\d*))\+))?'
r'((?P<method>([\w.]+)):)?(?P<value>(.*?))'
r'(?P<children>(\+(?P<children_depth>(\d*))))?'
r'\Z'
)
SELECTOR_METHOD_SEPARATOR = "."
SELECTOR_METHOD_SEPARATOR = '.'
def _probably_path(value: str):
@@ -41,15 +43,15 @@ def _match_to_int(match: Dict[str, str], key: str) -> Optional[int]:
return int(raw)
except ValueError as exc:
raise RuntimeException(
f"Invalid node spec - could not handle parent depth {raw}"
f'Invalid node spec - could not handle parent depth {raw}'
) from exc
SelectionSpec = Union[
"SelectionCriteria",
"SelectionIntersection",
"SelectionDifference",
"SelectionUnion",
'SelectionCriteria',
'SelectionIntersection',
'SelectionDifference',
'SelectionUnion',
]
@@ -64,12 +66,13 @@ class SelectionCriteria:
parents_depth: Optional[int]
children: bool
children_depth: Optional[int]
greedy: bool = False
def __post_init__(self):
if self.children and self.childrens_parents:
raise RuntimeException(
f'Invalid node spec {self.raw} - "@" prefix and "+" suffix '
"are incompatible"
'are incompatible'
)
@classmethod
@@ -80,10 +83,12 @@ class SelectionCriteria:
return MethodName.FQN
@classmethod
def parse_method(cls, groupdict: Dict[str, Any]) -> Tuple[MethodName, List[str]]:
raw_method = groupdict.get("method")
def parse_method(
cls, groupdict: Dict[str, Any]
) -> Tuple[MethodName, List[str]]:
raw_method = groupdict.get('method')
if raw_method is None:
return cls.default_method(groupdict["value"]), []
return cls.default_method(groupdict['value']), []
method_parts: List[str] = raw_method.split(SELECTOR_METHOD_SEPARATOR)
try:
@@ -99,54 +104,57 @@ class SelectionCriteria:
@classmethod
def selection_criteria_from_dict(
cls, raw: Any, dct: Dict[str, Any]
) -> "SelectionCriteria":
if "value" not in dct:
raise RuntimeException(f'Invalid node spec "{raw}" - no search value!')
cls, raw: Any, dct: Dict[str, Any], greedy: bool = False
) -> 'SelectionCriteria':
if 'value' not in dct:
raise RuntimeException(
f'Invalid node spec "{raw}" - no search value!'
)
method_name, method_arguments = cls.parse_method(dct)
parents_depth = _match_to_int(dct, "parents_depth")
children_depth = _match_to_int(dct, "children_depth")
parents_depth = _match_to_int(dct, 'parents_depth')
children_depth = _match_to_int(dct, 'children_depth')
return cls(
raw=raw,
method=method_name,
method_arguments=method_arguments,
value=dct["value"],
childrens_parents=bool(dct.get("childrens_parents")),
parents=bool(dct.get("parents")),
value=dct['value'],
childrens_parents=bool(dct.get('childrens_parents')),
parents=bool(dct.get('parents')),
parents_depth=parents_depth,
children=bool(dct.get("children")),
children=bool(dct.get('children')),
children_depth=children_depth,
greedy=greedy
)
@classmethod
def dict_from_single_spec(cls, raw: str):
def dict_from_single_spec(cls, raw: str, greedy: bool = False):
result = RAW_SELECTOR_PATTERN.match(raw)
if result is None:
return {"error": "Invalid selector spec"}
return {'error': 'Invalid selector spec'}
dct: Dict[str, Any] = result.groupdict()
method_name, method_arguments = cls.parse_method(dct)
meth_name = str(method_name)
if method_arguments:
meth_name = meth_name + "." + ".".join(method_arguments)
dct["method"] = meth_name
dct = {k: v for k, v in dct.items() if (v is not None and v != "")}
if "childrens_parents" in dct:
dct["childrens_parents"] = bool(dct.get("childrens_parents"))
if "parents" in dct:
dct["parents"] = bool(dct.get("parents"))
if "children" in dct:
dct["children"] = bool(dct.get("children"))
meth_name = meth_name + '.' + '.'.join(method_arguments)
dct['method'] = meth_name
dct = {k: v for k, v in dct.items() if (v is not None and v != '')}
if 'childrens_parents' in dct:
dct['childrens_parents'] = bool(dct.get('childrens_parents'))
if 'parents' in dct:
dct['parents'] = bool(dct.get('parents'))
if 'children' in dct:
dct['children'] = bool(dct.get('children'))
return dct
@classmethod
def from_single_spec(cls, raw: str) -> "SelectionCriteria":
def from_single_spec(cls, raw: str, greedy: bool = False) -> 'SelectionCriteria':
result = RAW_SELECTOR_PATTERN.match(raw)
if result is None:
# bad spec!
raise RuntimeException(f'Invalid selector spec "{raw}"')
return cls.selection_criteria_from_dict(raw, result.groupdict())
return cls.selection_criteria_from_dict(raw, result.groupdict(), greedy=greedy)
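The raw selector grammar above can be exercised directly with Python's re module (the pattern is copied verbatim from this file; the expected groupdict results are shown as comments):

import re

RAW_SELECTOR_PATTERN = re.compile(
    r'\A'
    r'(?P<childrens_parents>(\@))?'
    r'(?P<parents>((?P<parents_depth>(\d*))\+))?'
    r'((?P<method>([\w.]+)):)?(?P<value>(.*?))'
    r'(?P<children>(\+(?P<children_depth>(\d*))))?'
    r'\Z'
)

RAW_SELECTOR_PATTERN.match('2+my_model+').groupdict()
# {'childrens_parents': None, 'parents': '2+', 'parents_depth': '2',
#  'method': None, 'value': 'my_model', 'children': '+', 'children_depth': ''}

RAW_SELECTOR_PATTERN.match('tag:nightly').groupdict()
# {'childrens_parents': None, 'parents': None, 'parents_depth': None,
#  'method': 'tag', 'value': 'nightly', 'children': None, 'children_depth': None}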
class BaseSelectionGroup(Iterable[SelectionSpec], metaclass=ABCMeta):
@@ -169,7 +177,9 @@ class BaseSelectionGroup(Iterable[SelectionSpec], metaclass=ABCMeta):
self,
selections: List[Set[UniqueId]],
) -> Set[UniqueId]:
raise NotImplementedError("_combine_selections not implemented!")
raise NotImplementedError(
'_combine_selections not implemented!'
)
def combined(self, selections: List[Set[UniqueId]]) -> Set[UniqueId]:
if not selections:


@@ -5,9 +5,7 @@ from pathlib import Path
from typing import Tuple, AbstractSet, Union
from dbt.dataclass_schema import (
dbtClassMixin,
ValidationError,
StrEnum,
dbtClassMixin, ValidationError, StrEnum,
)
from hologram import FieldEncoder, JsonDict
from mashumaro.types import SerializableType
@@ -15,11 +13,11 @@ from mashumaro.types import SerializableType
class Port(int, SerializableType):
@classmethod
def _deserialize(cls, value: Union[int, str]) -> "Port":
def _deserialize(cls, value: Union[int, str]) -> 'Port':
try:
value = int(value)
except ValueError:
raise ValidationError(f"Cannot encode {value} into port number")
raise ValidationError(f'Cannot encode {value} into port number')
return Port(value)
@@ -30,7 +28,7 @@ class Port(int, SerializableType):
class PortEncoder(FieldEncoder):
@property
def json_schema(self):
return {"type": "integer", "minimum": 0, "maximum": 65535}
return {'type': 'integer', 'minimum': 0, 'maximum': 65535}
class TimeDeltaFieldEncoder(FieldEncoder[timedelta]):
@@ -46,12 +44,12 @@ class TimeDeltaFieldEncoder(FieldEncoder[timedelta]):
return timedelta(seconds=value)
except TypeError:
raise ValidationError(
"cannot encode {} into timedelta".format(value)
'cannot encode {} into timedelta'.format(value)
) from None
@property
def json_schema(self) -> JsonDict:
return {"type": "number"}
return {'type': 'number'}
class PathEncoder(FieldEncoder):
@@ -65,16 +63,16 @@ class PathEncoder(FieldEncoder):
return Path(value)
except TypeError:
raise ValidationError(
"cannot encode {} into timedelta".format(value)
'cannot encode {} into timedelta'.format(value)
) from None
@property
def json_schema(self) -> JsonDict:
return {"type": "string"}
return {'type': 'string'}
class NVEnum(StrEnum):
novalue = "novalue"
novalue = 'novalue'
def __eq__(self, other):
return isinstance(other, NVEnum)
@@ -83,17 +81,14 @@ class NVEnum(StrEnum):
@dataclass
class NoValue(dbtClassMixin):
"""Sometimes, you want a way to say none that isn't None"""
novalue: NVEnum = NVEnum.novalue
dbtClassMixin.register_field_encoders(
{
Port: PortEncoder(),
timedelta: TimeDeltaFieldEncoder(),
Path: PathEncoder(),
}
)
dbtClassMixin.register_field_encoders({
Port: PortEncoder(),
timedelta: TimeDeltaFieldEncoder(),
Path: PathEncoder(),
})
FQNPath = Tuple[str, ...]


@@ -5,8 +5,8 @@ from typing import Union, Dict, Any
class ModelHookType(StrEnum):
PreHook = "pre-hook"
PostHook = "post-hook"
PreHook = 'pre-hook'
PostHook = 'post-hook'
def get_hook_dict(source: Union[str, Dict[str, Any]]) -> Dict[str, Any]:
@@ -18,4 +18,4 @@ def get_hook_dict(source: Union[str, Dict[str, Any]]) -> Dict[str, Any]:
try:
return json.loads(source)
except ValueError:
return {"sql": source}
return {'sql': source}
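The string branch shown here is easy to try on its own (a minimal sketch; per the signature above, the full function also accepts an already-parsed dict, handled in the lines not shown):

import json

def hook_dict_from_string(source):
    # valid JSON wins; anything else is treated as a raw SQL string
    try:
        return json.loads(source)
    except ValueError:
        return {'sql': source}

hook_dict_from_string('{"sql": "grant select on {{ this }} to bi_user", "transaction": false}')
# -> {'sql': 'grant select on {{ this }} to bi_user', 'transaction': False}
hook_dict_from_string('grant select on {{ this }} to bi_user')
# -> {'sql': 'grant select on {{ this }} to bi_user'}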


@@ -1,6 +1,7 @@
import os
PACKAGE_PATH = os.path.dirname(__file__)
PROJECT_NAME = "dbt"
PROJECT_NAME = 'dbt'
DOCS_INDEX_FILE_PATH = os.path.normpath(os.path.join(PACKAGE_PATH, "..", "index.html"))
DOCS_INDEX_FILE_PATH = os.path.normpath(
os.path.join(PACKAGE_PATH, '..', "index.html"))


@@ -1,5 +1,5 @@
{% macro get_columns_in_query(select_sql) -%}
{{ return(adapter.dispatch('get_columns_in_query')(select_sql)) }}
{{ return(adapter.dispatch('get_columns_in_query', 'dbt')(select_sql)) }}
{% endmacro %}
{% macro default__get_columns_in_query(select_sql) %}
@@ -15,7 +15,7 @@
{% endmacro %}
{% macro create_schema(relation) -%}
{{ adapter.dispatch('create_schema')(relation) }}
{{ adapter.dispatch('create_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__create_schema(relation) -%}
@@ -25,7 +25,7 @@
{% endmacro %}
{% macro drop_schema(relation) -%}
{{ adapter.dispatch('drop_schema')(relation) }}
{{ adapter.dispatch('drop_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__drop_schema(relation) -%}
@@ -35,7 +35,7 @@
{% endmacro %}
{% macro create_table_as(temporary, relation, sql) -%}
{{ adapter.dispatch('create_table_as')(temporary, relation, sql) }}
{{ adapter.dispatch('create_table_as', 'dbt')(temporary, relation, sql) }}
{%- endmacro %}
{% macro default__create_table_as(temporary, relation, sql) -%}
@@ -51,8 +51,31 @@
{% endmacro %}
{% macro get_create_index_sql(relation, index_dict) -%}
{{ return(adapter.dispatch('get_create_index_sql', 'dbt')(relation, index_dict)) }}
{% endmacro %}
{% macro default__get_create_index_sql(relation, index_dict) -%}
{% do return(None) %}
{% endmacro %}
{% macro create_indexes(relation) -%}
{{ adapter.dispatch('create_indexes', 'dbt')(relation) }}
{%- endmacro %}
{% macro default__create_indexes(relation) -%}
{%- set _indexes = config.get('indexes', default=[]) -%}
{% for _index_dict in _indexes %}
{% set create_index_sql = get_create_index_sql(relation, _index_dict) %}
{% if create_index_sql %}
{% do run_query(create_index_sql) %}
{% endif %}
{% endfor %}
{% endmacro %}
{% macro create_view_as(relation, sql) -%}
{{ adapter.dispatch('create_view_as')(relation, sql) }}
{{ adapter.dispatch('create_view_as', 'dbt')(relation, sql) }}
{%- endmacro %}
{% macro default__create_view_as(relation, sql) -%}
@@ -66,7 +89,7 @@
{% macro get_catalog(information_schema, schemas) -%}
{{ return(adapter.dispatch('get_catalog')(information_schema, schemas)) }}
{{ return(adapter.dispatch('get_catalog', 'dbt')(information_schema, schemas)) }}
{%- endmacro %}
{% macro default__get_catalog(information_schema, schemas) -%}
@@ -81,7 +104,7 @@
{% macro get_columns_in_relation(relation) -%}
{{ return(adapter.dispatch('get_columns_in_relation')(relation)) }}
{{ return(adapter.dispatch('get_columns_in_relation', 'dbt')(relation)) }}
{% endmacro %}
{% macro sql_convert_columns_in_relation(table) -%}
@@ -98,13 +121,13 @@
{% endmacro %}
{% macro alter_column_type(relation, column_name, new_column_type) -%}
{{ return(adapter.dispatch('alter_column_type')(relation, column_name, new_column_type)) }}
{{ return(adapter.dispatch('alter_column_type', 'dbt')(relation, column_name, new_column_type)) }}
{% endmacro %}
{% macro alter_column_comment(relation, column_dict) -%}
{{ return(adapter.dispatch('alter_column_comment')(relation, column_dict)) }}
{{ return(adapter.dispatch('alter_column_comment', 'dbt')(relation, column_dict)) }}
{% endmacro %}
{% macro default__alter_column_comment(relation, column_dict) -%}
@@ -113,7 +136,7 @@
{% endmacro %}
{% macro alter_relation_comment(relation, relation_comment) -%}
{{ return(adapter.dispatch('alter_relation_comment')(relation, relation_comment)) }}
{{ return(adapter.dispatch('alter_relation_comment', 'dbt')(relation, relation_comment)) }}
{% endmacro %}
{% macro default__alter_relation_comment(relation, relation_comment) -%}
@@ -122,7 +145,7 @@
{% endmacro %}
{% macro persist_docs(relation, model, for_relation=true, for_columns=true) -%}
{{ return(adapter.dispatch('persist_docs')(relation, model, for_relation, for_columns)) }}
{{ return(adapter.dispatch('persist_docs', 'dbt')(relation, model, for_relation, for_columns)) }}
{% endmacro %}
{% macro default__persist_docs(relation, model, for_relation, for_columns) -%}
@@ -157,7 +180,7 @@
{% macro drop_relation(relation) -%}
{{ return(adapter.dispatch('drop_relation')(relation)) }}
{{ return(adapter.dispatch('drop_relation', 'dbt')(relation)) }}
{% endmacro %}
@@ -168,7 +191,7 @@
{% endmacro %}
{% macro truncate_relation(relation) -%}
{{ return(adapter.dispatch('truncate_relation')(relation)) }}
{{ return(adapter.dispatch('truncate_relation', 'dbt')(relation)) }}
{% endmacro %}
@@ -179,7 +202,7 @@
{% endmacro %}
{% macro rename_relation(from_relation, to_relation) -%}
{{ return(adapter.dispatch('rename_relation')(from_relation, to_relation)) }}
{{ return(adapter.dispatch('rename_relation', 'dbt')(from_relation, to_relation)) }}
{% endmacro %}
{% macro default__rename_relation(from_relation, to_relation) -%}
@@ -191,7 +214,7 @@
{% macro information_schema_name(database) %}
{{ return(adapter.dispatch('information_schema_name')(database)) }}
{{ return(adapter.dispatch('information_schema_name', 'dbt')(database)) }}
{% endmacro %}
{% macro default__information_schema_name(database) -%}
@@ -204,7 +227,7 @@
{% macro list_schemas(database) -%}
{{ return(adapter.dispatch('list_schemas')(database)) }}
{{ return(adapter.dispatch('list_schemas', 'dbt')(database)) }}
{% endmacro %}
{% macro default__list_schemas(database) -%}
@@ -218,7 +241,7 @@
{% macro check_schema_exists(information_schema, schema) -%}
{{ return(adapter.dispatch('check_schema_exists')(information_schema, schema)) }}
{{ return(adapter.dispatch('check_schema_exists', 'dbt')(information_schema, schema)) }}
{% endmacro %}
{% macro default__check_schema_exists(information_schema, schema) -%}
@@ -233,7 +256,7 @@
{% macro list_relations_without_caching(schema_relation) %}
{{ return(adapter.dispatch('list_relations_without_caching')(schema_relation)) }}
{{ return(adapter.dispatch('list_relations_without_caching', 'dbt')(schema_relation)) }}
{% endmacro %}
@@ -244,7 +267,7 @@
{% macro current_timestamp() -%}
{{ adapter.dispatch('current_timestamp')() }}
{{ adapter.dispatch('current_timestamp', 'dbt')() }}
{%- endmacro %}
@@ -255,7 +278,7 @@
{% macro collect_freshness(source, loaded_at_field, filter) %}
{{ return(adapter.dispatch('collect_freshness')(source, loaded_at_field, filter))}}
{{ return(adapter.dispatch('collect_freshness', 'dbt')(source, loaded_at_field, filter))}}
{% endmacro %}
@@ -273,7 +296,7 @@
{% endmacro %}
{% macro make_temp_relation(base_relation, suffix='__dbt_tmp') %}
{{ return(adapter.dispatch('make_temp_relation')(base_relation, suffix))}}
{{ return(adapter.dispatch('make_temp_relation', 'dbt')(base_relation, suffix))}}
{% endmacro %}
{% macro default__make_temp_relation(base_relation, suffix) %}
@@ -287,3 +310,35 @@
{% macro set_sql_header(config) -%}
{{ config.set('sql_header', caller()) }}
{%- endmacro %}
{% macro alter_relation_add_remove_columns(relation, add_columns = none, remove_columns = none) -%}
{{ return(adapter.dispatch('alter_relation_add_remove_columns', 'dbt')(relation, add_columns, remove_columns)) }}
{% endmacro %}
{% macro default__alter_relation_add_remove_columns(relation, add_columns, remove_columns) %}
{% if add_columns is none %}
{% set add_columns = [] %}
{% endif %}
{% if remove_columns is none %}
{% set remove_columns = [] %}
{% endif %}
{% set sql -%}
alter {{ relation.type }} {{ relation }}
{% for column in add_columns %}
add column {{ column.name }} {{ column.data_type }}{{ ',' if not loop.last }}
{% endfor %}{{ ',' if remove_columns | length > 0 }}
{% for column in remove_columns %}
drop column {{ column.name }}{{ ',' if not loop.last }}
{% endfor %}
{%- endset -%}
{% do run_query(sql) %}
{% endmacro %}


@@ -13,6 +13,10 @@
#}
{% macro generate_alias_name(custom_alias_name=none, node=none) -%}
{% do return(adapter.dispatch('generate_alias_name', 'dbt')(custom_alias_name, node)) %}
{%- endmacro %}
{% macro default__generate_alias_name(custom_alias_name=none, node=none) -%}
{%- if custom_alias_name is none -%}


@@ -14,7 +14,7 @@
#}
{% macro generate_database_name(custom_database_name=none, node=none) -%}
{% do return(adapter.dispatch('generate_database_name')(custom_database_name, node)) %}
{% do return(adapter.dispatch('generate_database_name', 'dbt')(custom_database_name, node)) %}
{%- endmacro %}
{% macro default__generate_database_name(custom_database_name=none, node=none) -%}

Some files were not shown because too many files have changed in this diff.