Compare commits

...

276 Commits

Author SHA1 Message Date
Emily Rockman
74dda5aa19 Merge pull request #3893 from dbt-labs/2798_enact_deprecations
removed deprecation for materialization-return and replaced with exception
2021-09-22 14:35:05 -05:00
Emily Rockman
092e96ce70 Merge branch 'develop' into 2798_enact_deprecations 2021-09-22 14:09:35 -05:00
Kyle Wigley
18102027ba Pull in changes for the 0.21.0rc1 release (#3935)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2021-09-22 13:53:43 -05:00
Emily Rockman
f80825d63e updated changelog 2021-09-22 12:55:49 -05:00
Kyle Wigley
9316e47b77 Pull in changes for the 0.21.0rc1 release (#3935)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2021-09-22 13:25:46 -04:00
Emily Rockman
f99cf1218a fixed conflict 2021-09-22 11:36:22 -05:00
Emily Rockman
5871915ce9 Merge branch '2798_enact_deprecations' of https://github.com/dbt-labs/dbt into 2798_enact_deprecations
# Conflicts:
#	test/integration/012_deprecation_tests/test_deprecations.py
2021-09-22 11:34:51 -05:00
Emily Rockman
5ce290043f more explicit error check 2021-09-22 11:16:59 -05:00
Emily Rockman
080d27321b removed deprecation for materialization-return and replaced it with an exception 2021-09-22 11:16:59 -05:00
Gerda Shank
1d0936bd14 Merge pull request #3889 from dbt-labs/3886_pp_log_levels
[#3886] Tweak partial parsing log messages
2021-09-22 10:48:21 -04:00
Gerda Shank
706b8ca9df Merge pull request #3839 from dbt-labs/2990_global_cli_flags
[#2990] Normalize global CLI args/flags
2021-09-22 10:47:54 -04:00
Nathaniel May
7dc491b7ba Merge pull request #3936 from dbt-labs/regression-test-tweaks
Performance Regression Testing: Add timestamps to results and make filenames unique.
2021-09-22 10:18:24 -04:00
Gerda Shank
779c789a64 [#2990] Normalize global CLI args/flags 2021-09-22 09:58:07 -04:00
Gerda Shank
409b4ba109 [#3886] Tweak partial parsing log messages 2021-09-22 09:20:24 -04:00
Nathaniel May
59d131d3ac add timestamps to results, and make filenames unique 2021-09-21 18:10:37 -04:00
Joel Labes
6563d09ba7 Add logging for skipped resources (#3833)
Add --greedy flag to subparser

Add greedy flag, override default behaviour when parsing union

Add greedy support to ls (I think?!)

That was suspiciously easy

Fix flake issues

Try adding tests with greedy support

Remove trailing whitespace

Fix return type for select_nodes

forgot to add greedy arg

Remove incorrectly expected test

Use named param for greedy

Pull alert_unused_nodes out into its own function

rename resources -> tests

Add expand_selection explanation of --greedy flag

Add greedy support to yaml selectors. Update warning, tests

Fix failing tests

Fix tests. Changelog

Fix flake8

Co-authored-by: Joel Labes c/o Jeremy Cohen <jeremy@dbtlabs.com>
2021-09-18 19:58:38 +02:00
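
A rough sketch of the selection rule the --greedy flag changes (Python, illustrative names only, not dbt's internals): by default a test is pulled into the selection only when all of the nodes it depends on are selected, while --greedy includes a test as soon as any of its parents is selected.

def expand_selection(selected, tests, greedy=False):
    # selected: set of selected node names
    # tests: mapping of test name -> set of parent node names
    expanded = set(selected)
    for test, parents in tests.items():
        include = bool(parents & selected) if greedy else parents <= selected
        if include:
            expanded.add(test)
    return expanded

tests = {"unique_orders_id": {"orders"},
         "relationships_orders_customers": {"orders", "customers"}}
print(expand_selection({"orders"}, tests))               # the two-parent test is skipped
print(expand_selection({"orders"}, tests, greedy=True))  # included with --greedy
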
Kyle Wigley
05dea18b62 update develop (#3914)
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
Co-authored-by: sungchun12 <sungwonchung3@gmail.com>
Co-authored-by: Drew Banin <drew@fishtownanalytics.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2021-09-17 14:07:42 -04:00
Nathaniel May
d7177c7d89 Merge pull request #3910 from dbt-labs/sample-more-exp-parser
sample exp parser more often
2021-09-17 13:34:14 -04:00
Gerda Shank
35f0fea804 Merge pull request #3888 from dbt-labs/skip_reloading_files
[#3563] Don't reload and validate schema files if they haven't changed
2021-09-17 13:32:01 -04:00
Gerda Shank
8953c7c533 [#3563] Partial parsing: don't reload and validate schema files if they haven't changed 2021-09-17 13:27:25 -04:00
Jeremy Cohen
76c59a5545 Fix logging for default selector (#3892)
* Fix logging for default selector

* Fix flake8, mypy
2021-09-17 18:00:14 +02:00
Emily Rockman
237048c7ac more explicit error check 2021-09-17 10:51:53 -05:00
Emily Rockman
30ff395b7b removed deprecation for materialization-return and replaced it with an exception 2021-09-17 10:51:53 -05:00
Nathaniel May
5c0a31b829 sample exp parser more often 2021-09-17 11:42:35 -04:00
Nathaniel May
243bc3d41d Merge pull request #3877 from dbt-labs/exp-parser-detect-macros
Make ModelParser detect macro overrides for ref, source, and config.
2021-09-17 11:39:16 -04:00
Sam Debruyn
67b594a950 group by column_name in accepted_values (#3906)
* group by column_name in accepted_values
Group by index is not ANSI SQL and not supported in every database engine (e.g. MS SQL). Use group by column_name in shared code.

* update changelog
2021-09-17 17:08:41 +02:00
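
For context on the change above, a small illustration (Python string-building only; the real test is a dbt macro): grouping by ordinal position is not ANSI SQL and engines such as MS SQL Server reject it, so the shared accepted_values query groups by the column name instead. The relation name below is a placeholder.

column_name = "status"
model = "analytics.orders"  # placeholder relation name

ordinal_group_by = f"select {column_name}, count(*) from {model} group by 1"
portable_group_by = f"select {column_name}, count(*) from {model} group by {column_name}"
# both return the same rows where "group by 1" is supported;
# only the second form runs on every engine
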
Kyle Wigley
2493c21649 Cleanup CHANGELOG (#3902) 2021-09-17 09:22:37 -04:00
Kyle Wigley
d3826e670f Build task arg parity (#3884) 2021-09-16 16:21:37 -04:00
Drew Banin
4b5b1696b7 Merge pull request #3894 from dbt-labs/feature/source-freshness-timing-in-artifacts
Add timing and thread info to sources.json artifact
2021-09-16 14:59:18 -04:00
Drew Banin
abb59ef14f add changelog entry 2021-09-16 13:54:04 -04:00
Drew Banin
3b7c2816b9 (#3804) Add timing and thread info to sources.json artifact 2021-09-16 13:28:08 -04:00
Teddy
484517416f 3448 user defined default selection criteria (#3875)
* Added selector default

* Updated code from review feedback

* Fix schema.yml location

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-16 16:13:59 +02:00
Nathaniel May
39447055d3 add macro override detection for ref, source, and config for ModelParser 2021-09-15 12:09:47 -04:00
Kyle Wigley
95cca277c9 Handle failing tests for build command (#3792)
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-15 10:39:19 -04:00
Christophe Oudar
96083dcaf5 add support for execution project for BigQuery adapter (#3707)
* add support for execution project for BigQuery adapter

* Add integration test for BigQuery execution_project

* review changes

* Update changelog
2021-09-15 14:09:08 +02:00
Emily Rockman
75b4cf691b Merge pull request #3879 from dbt-labs/emmyoop/update-contributing
Fixed typo and added some more setup details to CONTRIBUTING.md
2021-09-14 09:35:12 -05:00
Emily Rockman
7c9171b00b Fixed typo and added some more setup details 2021-09-13 14:03:02 -05:00
Kyle Wigley
3effade266 deepcopy args when passed down to rpc pask (#3850) 2021-09-10 16:34:57 -04:00
Jeremy Cohen
44e7390526 Specify macro_namespace of global_project dispatch macros (#3851)
* Specify macro_namespace of global_project dispatch macros

* Dispatch get_custom_alias, too

* Add integration tests

* Add changelog entry
2021-09-09 12:59:39 +02:00
Jeremy Cohen
c141798abc Use a macro for where subquery in tests (#3859)
* Use a macro for where subquery in tests

* Fix existing tests

* Add test case reproducing #3857

* Add changelog entry
2021-09-09 12:38:09 +02:00
sungchun12
df7ec3fb37 Fix dbt deps sorting behavior for non-standard version semantics (#3856)
* robust sorting and default to original version str

* more unique version semantics

* add another non-standard version example

* fix mypy issue
2021-09-09 12:13:09 +02:00
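
A sketch of the sorting problem this addresses, with a hypothetical key function (not dbt's implementation): plain string sorting misorders version numbers, so the fix parses the numeric parts and sorts prereleases before their final release.

import re

versions = ["0.7.0", "0.10.0", "0.10.0rc1", "0.9.5"]
print(sorted(versions))  # lexicographic: the 0.10.x entries sort before 0.7.0

def version_key(v):
    major, minor, patch, pre = re.match(r"(\d+)\.(\d+)\.(\d+)(.*)", v).groups()
    # final releases (empty prerelease tag) sort after their prereleases
    return (int(major), int(minor), int(patch), pre == "", pre)

print(sorted(versions, key=version_key))
# ['0.7.0', '0.9.5', '0.10.0rc1', '0.10.0']
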
AndreasTA-AW
90e5507d03 #3682 Changed how tables and views are generated to be able to use differen… (#3691)
* Changed how tables and views are generated to be able to use different options

* 3682 added unit tests

* 3682 had conflict in changelog and became a bit messy

* 3682 Tested to add default kms to dataset and accidentally pushed the changes
2021-09-09 12:01:02 +02:00
leahwicz
332d3494b3 Adding ADR directory, guidelines, first one (#3844)
* Adding ADR README

* Adding perf testing ADR

* fill in adr sections

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-09-07 14:05:21 -04:00
Anna Filippova
6393f5a5d7 Feature: Add support for Package name changes on the Hub (#3825)
* Add warning about new package name

* Update CHANGELOG.md

* make linter happy

* Add warning about new package name

* Update CHANGELOG.md

* make linter happy

* move warnings to deprecations

* Update core/dbt/clients/registry.py

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

* add comments for posterity

* Update core/dbt/deprecations.py

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* add deprecation test

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-07 11:53:02 -04:00
Kyle Wigley
ce97a9ca7a explicitly define __init__ for Profile class (#3855) 2021-09-07 10:38:51 -04:00
Jeremy Cohen
9af071bfe4 Add adapter_unique_id to invocation tracking (#3796)
* Add properties + methods for adapter_unique_id

* Turn on tracking
2021-09-03 16:38:20 +02:00
sungchun12
45a41202f3 fix prerelease imports with loose version semantics (#3852)
* fix prereleases imports with looseversion

* update for non-standard versions
2021-09-02 17:52:36 +02:00
Kyle Wigley
9768999ca1 Only set single_threaded for the rpc list task, not the cli list task (#3848) 2021-09-02 10:11:39 -04:00
juma-adoreme
fc0d11c0a5 Parametrize key selection for the list task (#3838)
* Parametrize key selection for list task

* Remove trailing whitespace

* Add output_keys to RPC List Parameters

* Move up changelog entry, add contributor note

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-09-02 14:58:00 +02:00
Joel Labes
e6344205bb Add colourful count of pass/fail tests in dbt debug (#3832)
* Add colourful count of pass/fail tests in dbt debug

* Remove number of checks, move error messages into shared list

* Fix flake issues

* Update CHANGELOG.md
2021-09-02 12:15:03 +02:00
Jason Gluck
9d7a6556ef configurable postgres connect timeout (#3582)
* configurable postgres connect timeout

* changelog for #3582

* test default and change connect_timeout

* Move up contributor note in changelog

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 19:45:31 +02:00
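
What the new profile setting maps to, sketched directly against psycopg2 (the driver dbt's Postgres adapter uses); the connection details below are placeholders.

import psycopg2

conn = psycopg2.connect(
    host="localhost",
    user="dbt",
    password="password",
    dbname="analytics",
    connect_timeout=10,  # libpq parameter: give up after 10 seconds instead of hanging
)
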
Daniel Bartley
15f4add0b8 Add target_project and target_dataset config aliases for snapshots on BigQuery (#3834)
* add bq alias for target_project and target_dataset

* Update CHANGELOG.md

add #3694 to changelog

* Update CHANGELOG.md

Be more specific about the change to bigquery synonym for schema only.

* Set integration test bigquery configs to use alias

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 16:08:20 +02:00
Anders
464becacd0 fewer adapters will need to re-implement basic_load_csv_rows (#3623)
* fewer adapters will need to re-implement basic_load_csv_rows

* hack version

* reordering per convention

* make redundant basic_load_csv_rows

* for next version

* Update core/dbt/include/global_project/macros/materializations/seed/seed.sql

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

* Move up changelog entry

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 15:47:36 +02:00
sungchun12
51a76d0d63 Better dbt packages version logging to aid in upgrading outdated packages (#3759)
* start blueprinting changes

* extend registry handler for latest package version

* conditional logging for latest version

* remove todo

* add conditional logging

* Upgrades is clearer

* update if elif conditions and log msg

* remove TODO

* fix flake8 errors

* blueprint unit tests

* conditions specific to hub registry

* 1 passing test for get latest version

* DRY method calls

* move version latest to hub only

* add a new line

* remove other draft tests

* update changelog

* update log language for clarity

* pass flake8

* fix changelog

* Update test/unit/test_deps.py

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* update changelog

* remove hub language

* sort for latest version and include prereleases

* fix flake8

* resolves another issue

* fix prerelease string formatting

* fix broken test

* update logging to past tense

* built-in version sorting

* handle prereleases for latest version checks

* get version latest unit test based on prerelease

* update unit test for sorting functionality

* consistent test names

* fix flake8

* clean up contributors list

* simplify if else logic

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-31 10:31:45 +02:00
Slava Kalashnikov
052e54d43a BigQuery copy materialization enhancement (#3606)
* Change BigQuery copy materialization

Change BigQuery copy materialization macros to copy data from several sources into single target

* Change BigQuery copy materialization

Change BigQuery connections.py to copy data from several sources into single target via copy materialization

* Change BigQuery copy materialization

Test to check default value of `copy_materialization` if it is absent in config

* Change BigQuery copy materialization

Update changelog

* Update changelog

* Var renaming + test addition

* Changelog updated

* Changelog updated

* Fix test for copy table

* Update test_bigquery_adapter.py

* Update test_bigquery_adapter.py

* Update impl.py

* Update connections.py

* Update test_bigquery_adapter.py

* Update test_bigquery_adapter.py

* Update connections.py

* Align calls from mock and from adapter

* Split long code lines

* Create additional.sql

* Update copy_as_several_tables.sql

* Update schema.yml

* Update copy.sql

* Update connections.py

* Update test_bigquery_copy_models.py

* Add contributor
2021-08-30 16:28:35 +02:00
Kyle Wigley
9e796671dd Update workflow concurrency (#3824) 2021-08-26 17:13:54 -04:00
Kyle Wigley
a9a6254f52 Address unexpected cancelled CI workflows and stop blocking Postgres integration tests (#3813) 2021-08-26 10:37:50 -04:00
Kyle Wigley
8b3a09c7ae Run postgres integration tests when dev dependency changes (#3819) 2021-08-26 10:36:24 -04:00
Jeremy Cohen
6aa4d812d4 Rewrite generic tests to support column expressions (#3812)
* Rewrite generic tests to support column expressions, too

* Fix naming
2021-08-26 10:30:03 -04:00
Kyle Wigley
07fa719fb0 Revert "Bump freezegun from 0.3.12 to 1.1.0" (#3818)
This reverts commit 650b34ae24.
2021-08-26 10:11:56 -04:00
dependabot[bot]
650b34ae24 Bump freezegun from 0.3.12 to 1.1.0 (#3206)
Bumps [freezegun](https://github.com/spulec/freezegun) from 0.3.12 to 1.1.0.
- [Release notes](https://github.com/spulec/freezegun/releases)
- [Changelog](https://github.com/spulec/freezegun/blob/master/CHANGELOG)
- [Commits](https://github.com/spulec/freezegun/compare/0.3.12...1.1.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 18:18:37 -04:00
dependabot[bot]
0a935855f3 Update sqlparse requirement from <0.4,>=0.2.3 to >=0.2.3,<0.5 in /core (#3074)
Updates the requirements on [sqlparse](https://github.com/andialbrecht/sqlparse) to permit the latest version.
- [Release notes](https://github.com/andialbrecht/sqlparse/releases)
- [Changelog](https://github.com/andialbrecht/sqlparse/blob/master/CHANGELOG)
- [Commits](https://github.com/andialbrecht/sqlparse/compare/0.2.3...0.4.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 09:59:36 -04:00
dependabot[bot]
d500aae4dc Bump ubuntu from 18.04 to 20.04 (#3073)
Bumps ubuntu from 18.04 to 20.04.

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 09:35:24 -04:00
Kyle Wigley
370d3e746d Remove fishtown-analytics references 😢 (#3801) 2021-08-25 09:24:41 -04:00
Kyle Wigley
ab06149c81 Moving CI to GitHub actions (#3669)
* test

* test test

* try this again

* test actions in same repo

* nvm revert

* formatting

* fix sh script for building dists

* fix windows build

* add concurrency

* fix random 'Cannot track experimental parser info when active user is None' error

* fix build workflow

* test slim ci

* has changes

* set up postgres for other OS

* update descriptions

* turn off python3.9 unit tests

* add changelog

* clean up todo

* Update .github/workflows/main.yml

* create actions for common code

* temp commit to test

* cosmetic updates

* dev review feedback

* updates

* fix build checks

* rm auto formatting changes

* review feedback: update order of script for setting up postgres on macos runner

* review feedback: add reasoning for not using secrets in workflow

* review feedback: rm unnecessary changes

* more review feedback

* test pull_request_target action

* fix path to cli tool

* split up lint and unit workflows for clear responsibilities

* rm `branches-ignore` filter from pull request trigger

* testing push event

* test

* try this again

* test actions in same repo

* nvm revert

* formatting

* fix windows build

* add concurrency

* fix build workflow

* test slim ci

* has changes

* set up postgres for other OS

* update descriptions

* turn off python3.9 unit tests

* add changelog

* clean up todo

* Update .github/workflows/main.yml

* create actions for common code

* cosmetic updates

* dev review feedback

* updates

* fix build checks

* rm auto formatting changes

* review feedback: add reasoning for not using secrets in workflow

* review feedback: rm unnecessary changes

* more review feedback

* test pull_request_target action

* fix path to cli tool

* split up lint and unit workflows for clear responsibilities

* rm `branches-ignore` filter from pull request trigger

* test dynamic matrix generation

* update label logic

* finishing touches

* align naming

* pass opts to pytest

* slim down push matrix, there are a lot of jobs

* test bump num of proc

* update matrix for all event triggers

* handle case when no changes require integration tests

* dev review feedback

* clean up and add branch name for testing

* Add test results publishing as artifact (#3794)

* Test failures file

* Add testing branch

* Adding upload steps

* Adding date to name

* Adding to integration

* Always upload artifacts

* Adding adapter type

* Always publish unit test results

* Adding comments

* rm unnecessary env var

* fix changelog

* update job name

* clean up python deps

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-08-24 17:12:42 -04:00
Gerda Shank
e72895c7c9 Merge pull request #3791 from dbt-labs/3210_select_equals_model
[#3210] Make --models and --select synonyms, except for 'ls'
2021-08-24 14:44:24 -04:00
Gerda Shank
fe4a67daa4 Use 'select' instead of 'models' for internal args processing and RPC 2021-08-24 13:55:37 -04:00
leahwicz
09ea989d81 Retry GitHub download failures (#3729)
* Retry GitHub download failures

* Refactor and add tests

* Fixed linting and added comment

* Fixing unit test assertRaises

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>

* Fixing casing

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>

* Changing to use partial for function calls

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
2021-08-24 13:35:09 -04:00
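
A rough sketch of the retry pattern the commits above describe ("use partial for function calls"): wrap the download call in functools.partial so a generic retry helper can re-invoke it on connection failures. The helper name and URL are illustrative, not the PR's exact code.

import functools
import time
import requests

def connection_exception_retry(fn, max_attempts=5, wait=1):
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except requests.exceptions.RequestException:
            if attempt == max_attempts:
                raise
            time.sleep(wait)

download = functools.partial(
    requests.get, "https://codeload.github.com/org/repo/tar.gz/HEAD", timeout=30
)
response = connection_exception_retry(download)
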
leahwicz
7fa14b6948 Fixing changelog (#3776) 2021-08-23 13:19:58 -04:00
Gerda Shank
d4974cd35c [#3210] Make --models and --select synonyms, except for 'ls' 2021-08-23 11:40:59 -04:00
Kyle Wigley
459178811b rm git backports for previous debian release, use git package (#3785) 2021-08-22 21:20:34 -04:00
Snyk bot
b37f6a010e fix: docker/Dockerfile to reduce vulnerabilities (#3771)
The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-DEBIAN10-SQLITE3-537598
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345386
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345386
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345391
- https://snyk.io/vuln/SNYK-DEBIAN10-SYSTEMD-345391
2021-08-19 09:26:53 -04:00
Gerda Shank
e817164d31 Merge pull request #3767 from dbt-labs/3764_analysis_descriptions
[#3764] Fix bug in analysis patch application
2021-08-18 09:05:10 -04:00
Gerda Shank
09ce43edbf [#3764] Fix bug in analysis patch application 2021-08-17 18:07:35 -04:00
sungchun12
2980cd17df Fix/bigquery job label length (#3703)
* add blueprints to resolve issue

* revert to previous version

* intentionally failing test

* add imports

* add validation in existing function

* add passing test for length validation

* add current sanitized label

* remove duplicate var

* Make logging output 2 lines

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* Raise RuntimeException to better handle error

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* update test

* fix flake8 errors

* update changelog

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-17 14:54:31 -04:00
Gerda Shank
8c804de643 Merge pull request #3758 from dbt-labs/3757_pp_version_mismatch
[#3757] Produce better information about partial parsing version mismatches
2021-08-17 13:23:23 -04:00
Gerda Shank
c8241b87e6 [#3757] Produce better information about partial parsing version
mismatches.
2021-08-17 12:48:20 -04:00
Gerda Shank
f204d24ed8 Merge pull request #3616 from dbt-labs/config_in_schema_files
Configs in schema files
2021-08-17 12:18:04 -04:00
Gerda Shank
d5461ccd8b [#2401] Configs in schema files 2021-08-17 11:50:08 -04:00
Gerda Shank
a20d2d93d3 Merge pull request #3750 from dbt-labs/fix_remove_tests
[#3711] Check that test unique_id exists in nodes when removing
2021-08-16 09:06:36 -04:00
Gerda Shank
57e1eec165 [#3711] Check that test unique_id exists in nodes when removing 2021-08-13 17:23:36 -04:00
Nathaniel May
d2dbe6afe4 Merge pull request #3739 from dbt-labs/perf-testing-tweak
Bump minimum performance runs to 20
2021-08-13 14:05:10 -04:00
Gerda Shank
72eb163223 Merge pull request #3733 from dbt-labs/pp_trap_errors
Trap partial parsing errors and switch to full reparse on exceptions
2021-08-13 13:54:40 -04:00
Gerda Shank
af16c74c3a [#3725] Switch to full reparse on partial parsing exceptions. Log and
report exception information.
2021-08-13 13:38:47 -04:00
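
The fallback described above, reduced to a sketch (function names are illustrative, not dbt's internals): try the partial parse, and on any exception report it and fall back to a full re-parse.

import logging

logger = logging.getLogger(__name__)

def load_manifest(saved_state, partial_parse, full_parse):
    if saved_state is not None:
        try:
            return partial_parse(saved_state)
        except Exception as exc:
            logger.warning("Partial parsing failed, falling back to a full parse: %s", exc)
    return full_parse()
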
Kyle Wigley
664f6584b9 add missing versions and format (#3738) 2021-08-13 13:31:38 -04:00
Nathaniel May
76fd3bdf8c minimum performance runs to 20 2021-08-13 13:20:30 -04:00
Jeremy Cohen
b633adb881 Use is_relational check for schema caching (#3716)
* Use is_relational check for schema caching

* Fix flake8

* Update changelog
2021-08-12 18:18:28 -04:00
Jeremy Cohen
b6e534cdd0 Feature: state:modified.macros (#3559)
* First cut at state:modified for macro changes

* First cut at state:modified subselectors

* Update test_graph_selector_methods

* Fix flake8

* Fix mypy

* Update 062_defer_state_test/test_modified_state

* PR feedback. Update changelog
2021-08-12 17:21:48 -04:00
Nathaniel May
1dc4adb86f Merge pull request #3732 from dbt-labs/perf-test-project-swap
Swap dummy perf testing projects for a real one
2021-08-12 10:27:20 -04:00
Nathaniel May
0a4d7c4831 Merge pull request #3731 from dbt-labs/perf-ci-quickfix
remove pr trigger for perf workflow
2021-08-12 10:25:13 -04:00
Nathaniel May
ad67e55d74 swapping dummy perf testing projects for real one 2021-08-11 14:24:26 -04:00
Nathaniel May
2fae64a488 remove pr trigger for perf workflow 2021-08-11 14:14:08 -04:00
Nathaniel May
1a984601ee Merge pull request #3602 from dbt-labs/performance-regression-testing
Add Performance Regression Testing [Rust]
2021-08-11 10:44:51 -04:00
Jeremy Cohen
454168204c Add build RPC method (#3674)
* Add build RPC method

* Add rpc test, some required flags

* Fix flake8

* PR feedback

* Update changelog [skip ci]

* Do not skip CI when rebasing
2021-08-10 10:51:43 -04:00
Drew Banin
43642956a2 Serialize Undefined values to JSON for rpc requests (#3687)
* (#3464) Serialize Undefined values to JSON for rpc requests

* Update changelog, fix typo
2021-08-09 21:26:09 -04:00
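
A minimal sketch of the serialization idea (illustrative, not the actual patch): jinja2's Undefined is not JSON-serializable, so json.dumps gets a default hook that maps it to null instead of raising.

import json
from jinja2 import Undefined

def json_default(value):
    if isinstance(value, Undefined):
        return None
    raise TypeError(f"Cannot serialize {type(value)!r}")

payload = {"ok": 1, "missing": Undefined(name="missing_var")}
print(json.dumps(payload, default=json_default))  # {"ok": 1, "missing": null}
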
Nathaniel May
1fe53750fa add more future work 2021-08-09 12:55:42 -04:00
Nathaniel May
8609c02383 minor change 2021-08-09 12:53:53 -04:00
Nathaniel May
355b0c496e remove unnecessary newline printing 2021-08-09 12:53:07 -04:00
Nathaniel May
cd6894acf4 add to future work 2021-08-09 12:52:53 -04:00
Nathaniel May
b90b3a9c19 avoid abandoned destructors by refactoring usage of exit 2021-08-06 14:46:10 -04:00
leahwicz
e7b8488be8 Remove converter.py since not used anymore (#3699) 2021-08-05 15:27:56 -04:00
Nathaniel May
06cc0c57e8 changelog 2021-08-05 12:27:21 -04:00
Nathaniel May
87072707ed point to manifest 2021-08-05 11:40:48 -04:00
Nathaniel May
ef63319733 add comment 2021-08-05 11:39:19 -04:00
Nathaniel May
2068dd5510 fmt 2021-08-05 11:31:19 -04:00
Nathaniel May
3e1e171c66 add test for regression calculation 2021-08-05 11:31:00 -04:00
Nathaniel May
5f9ed1a83c run twice a day 2021-08-05 11:15:40 -04:00
Nathaniel May
3d9e54d970 add fmt check 2021-08-05 11:14:30 -04:00
Nathaniel May
52a0fdef6c fmt 2021-08-05 11:10:07 -04:00
Nathaniel May
d9b02fb0a0 add lots of comments 2021-08-05 11:03:13 -04:00
Nathaniel May
6c8de62b24 up stddev threshold 2021-08-05 09:48:33 -04:00
Nathaniel May
2d3d1b030a rename to dummy project 2021-08-05 09:47:34 -04:00
Nathaniel May
88acf0727b remove clone 2021-08-04 17:50:32 -04:00
Nathaniel May
02839ec779 iter instead of into_iter 2021-08-04 17:48:13 -04:00
Nathaniel May
44a8f6a3bf more reference improvements 2021-08-04 17:46:45 -04:00
Nathaniel May
751ea92576 make the happy path use references and clone in the exception paths 2021-08-04 17:33:29 -04:00
Nathaniel May
02007b3619 more reference handling 2021-08-04 17:05:23 -04:00
Nathaniel May
fe0b9e7ef5 refactor to use references better 2021-08-04 17:00:16 -04:00
Nathaniel May
4b1c6b51f9 fix spacing 2021-08-04 16:39:53 -04:00
Nathaniel May
0b4689f311 address PR feedback 2021-08-04 15:14:37 -04:00
Nathaniel May
b77eff8f6f errors to warnings in ci 2021-08-04 13:34:46 -04:00
Nathaniel May
2782a33ecf minor refactor 2021-08-04 13:31:48 -04:00
Nathaniel May
94c6cf1b3c remove some clones 2021-08-04 13:19:39 -04:00
Nathaniel May
3c8daacd3e refactor for simpler flow 2021-08-04 13:12:01 -04:00
Nathaniel May
2f9907b072 remove one more unwrap 2021-08-04 12:34:57 -04:00
Nathaniel May
287c4d2b03 revamp exception hierarchy 2021-08-04 12:15:29 -04:00
Nathaniel May
ba9d76b3f9 make final json human readable 2021-08-04 10:11:09 -04:00
Jeremy Cohen
0efaaf7daf Fix typo [skip ci] 2021-08-04 09:50:11 -04:00
Drew Banin
9ae7d68260 Merge pull request #3686 from dbt-labs/fix/cleanup-audit-integration-tests
Fix: Drop audit schema tests in tearDown for test suite
2021-08-03 19:54:36 -04:00
Nathaniel May
486afa9fcd upload results even when regressions are detected 2021-08-03 18:04:22 -04:00
Nathaniel May
1f189f5225 added small paragraph to readme 2021-08-03 17:59:53 -04:00
Nathaniel May
580b1fdd68 minor print change 2021-08-03 17:56:30 -04:00
Nathaniel May
bad0198a36 minor readme changes 2021-08-03 17:55:03 -04:00
Nathaniel May
252280b56e write out results as artifact 2021-08-03 17:34:10 -04:00
Nathaniel May
64bf9c8885 test fix 2021-08-03 17:04:13 -04:00
Nathaniel May
935c138736 type gymnastics 2021-08-03 17:03:37 -04:00
Nathaniel May
5891b59790 better variable names 2021-08-03 16:52:28 -04:00
Nathaniel May
4e020c3878 enforce branch names at calculation time 2021-08-03 16:50:08 -04:00
Nathaniel May
3004969a93 move measure io to main 2021-08-03 16:26:29 -04:00
Nathaniel May
873e9714f8 move io operations to main 2021-08-03 16:22:02 -04:00
Nathaniel May
fe24dd43d4 return all calculations not just regressions 2021-08-03 16:10:16 -04:00
Nathaniel May
ed91ded2c1 branches named dev and baseline 2021-08-03 15:04:05 -04:00
Nathaniel May
757614d57f add stddev regression 2021-08-03 14:59:21 -04:00
Nathaniel May
faff8c00b3 group by run only 2021-08-03 14:37:49 -04:00
Github Build Bot
45fe76eef4 Merge remote-tracking branch 'origin/releases/0.21.0b1' into develop 2021-08-03 18:09:56 +00:00
Nathaniel May
80244a09fe add error for groups that are not two elements large 2021-08-03 14:00:30 -04:00
Github Build Bot
ea772ae419 Release dbt v0.21.0b1 2021-08-03 17:30:32 +00:00
Drew Banin
c68fca7937 Fix: Drop audit schema tests in tearDown for test suite 2021-08-03 13:24:54 -04:00
Nathaniel May
37e86257f5 point to the downloaded artifacts correctly 2021-08-03 12:57:19 -04:00
Nathaniel May
c182c05c2f error when there are no results to process 2021-08-03 12:48:21 -04:00
Nathaniel May
b02875a12b add debug line 2021-08-03 12:35:32 -04:00
Nathaniel May
03332b2955 more gracefully exit than unwrapping 2021-08-03 12:32:56 -04:00
Nathaniel May
f1f99a2371 add name to action 2021-08-03 12:15:44 -04:00
Nathaniel May
95116dbb5b fix measure call 2021-08-03 12:12:25 -04:00
Nathaniel May
868fd64adf download correct artifact 2021-08-03 12:08:00 -04:00
Nathaniel May
2f7ab2d038 unify rust apps as subcommands 2021-08-03 12:04:45 -04:00
Jeremy Cohen
159e79ee6b Update changelog in advance of v0.21.0b1 (#3678)
* Fixup Changelog

* More updates [skip ci]
2021-08-02 20:08:22 -04:00
Nathaniel May
3d4a82cca2 add comments 2021-08-02 18:09:31 -04:00
Nathaniel May
6ba837d73d fix download artifact name 2021-08-02 18:03:04 -04:00
Nathaniel May
f4775d7673 change job name 2021-08-02 17:56:29 -04:00
Nathaniel May
429396aa02 move step dependencies around 2021-08-02 17:55:26 -04:00
Nathaniel May
8a5e9b71a5 add some output to comparator 2021-08-02 17:50:33 -04:00
Nathaniel May
fa78102eaf add comparison to workflow 2021-08-02 17:23:30 -04:00
Nathaniel May
5466d474c5 add test for exception display 2021-08-02 17:07:41 -04:00
Nathaniel May
80951ae973 fmt 2021-08-02 16:34:47 -04:00
Nathaniel May
d5662ef34c wrap up comparison logic 2021-08-02 16:33:43 -04:00
leahwicz
57783bb5f6 Adding issue templates for different release types (#3644)
Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-08-02 12:50:49 -04:00
Nathaniel May
d73ee588e5 Merge pull request #3637 from dbt-labs/experimental-parser-fix
make experimental parser respect config merge behavior
2021-08-02 10:03:42 -04:00
Nathaniel May
40089d710b experimental parser respects config merge behavior 2021-08-02 09:38:30 -04:00
Jeremy Cohen
6ec61950eb Handle exception from tracker.flush() (#3661) 2021-08-02 08:25:41 -04:00
Gerda Shank
72c831a80a Merge pull request #3659 from dbt-labs/pp_internal_macro_processing
[#3636] Check for unique_ids when recursively removing macros
2021-07-30 15:34:14 -04:00
Gerda Shank
929931a26a Merge pull request #3654 from dbt-labs/change_config_call_handling
Switch from config_call list to config_call_dict dictionary
2021-07-30 14:08:30 -04:00
Gerda Shank
577e2438c1 [#3636] Check for unique_ids when recursively removing macros 2021-07-30 14:01:40 -04:00
Kyle Wigley
2679792199 Add tracking event for full re-parse reasoning (#3652)
* add tracking event for full reparse reason

* update changelog
2021-07-30 09:39:09 -04:00
Kyle Wigley
2adf982991 update links to dbt repo (#3521) 2021-07-30 08:46:58 -04:00
Gerda Shank
1fb4a7f428 Switch from config_call list to config_call_dict dictionary 2021-07-29 18:46:59 -04:00
Kyle Wigley
30e72bc5e2 Use SchemaParser render context to render test configs (#3646)
* use available context when rendering test configs

* add test

* update changelog
2021-07-29 12:59:48 -04:00
Jeremy Cohen
35645a7233 Include dbt-docs changes for 0.20.1-rc1 (#3643) 2021-07-29 09:56:04 -04:00
Gerda Shank
d583c8d737 Merge pull request #3632 from dbt-labs/pp_delete_schema_macro_patch
[#3627] Improve findability of macro_patches, schedule right macro file for processing
2021-07-28 17:49:27 -04:00
Gerda Shank
a83f00c594 [#3627] Improve findability of macro_patches, schedule right macro file
for processing
2021-07-28 17:27:42 -04:00
Nathaniel May
45bb955b55 still wip but compiles 2021-07-28 15:54:04 -04:00
Daniele Frigo
c448702c1b Use old_relation for renaming in default materializations (#3547)
* table and view materializations should rename from old_relation to manage changes from view to table and reverse

* edited changelog

* edited changelog

* Update CHANGELOG.md

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
2021-07-28 06:59:27 -04:00
Niall Woodward
558a6a03ac Fix PR link in changelog (#3639)
Fix a typo introduced in https://github.com/dbt-labs/dbt/pull/3624
2021-07-28 06:51:45 -04:00
Niall Woodward
52ec7907d3 dbt deps prerelease install bugs + add install-prerelease parameter to packages.yml (#3624)
* Fix dbt deps prerelease install bugs

* Add install-prerelease parameter to hub packages in packages.yml
2021-07-27 21:59:46 -04:00
Jeremy Cohen
792f39a888 Snowflake: no transactions, except for DML (#3510)
* Rm Snowflake txnal logic. Explicit for DML

* Be less clever. Update create_or_replace_view()

* Seed DML as well

* Changelog entry

* Fix unit test

* One semicolon can change the world
2021-07-27 18:13:35 -04:00
Gerda Shank
16264f58c1 Merge pull request #3621 from dbt-labs/pp_macro_link_processing_error
[#3584] Partial parsing: handle source tests when changing test macro
2021-07-27 16:59:26 -04:00
Nathaniel May
2317c0c3c8 Merge pull request #3630 from dbt-labs/nate-3568
fix awkward exception being raised by a yml file with all comments
2021-07-27 16:50:56 -04:00
Gerda Shank
3c09ab9736 [#3584] Partial parsing: handle source tests when changing test macro 2021-07-27 16:34:23 -04:00
Gerda Shank
f10dc0e1b3 Merge pull request #3618 from dbt-labs/pp_yaml_version
[#3567] Fix partial parsing error with version key if previous file is empty
2021-07-27 16:30:06 -04:00
leahwicz
634bc41d8a Secret scrubbing for env variables (#3617)
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-07-27 16:06:10 -04:00
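
A sketch of env-var secret scrubbing, assuming a dedicated prefix marks which variables are secret (the prefix and helper below are hypothetical, not necessarily what the PR implements):

import os

SECRET_PREFIX = "DBT_ENV_SECRET_"

def scrub_secrets(line):
    for name, value in os.environ.items():
        if name.startswith(SECRET_PREFIX) and value:
            line = line.replace(value, "*****")
    return line

os.environ["DBT_ENV_SECRET_GIT_TOKEN"] = "ghp_abc123"
print(scrub_secrets("cloning https://ghp_abc123@github.com/org/private.git"))
# cloning https://*****@github.com/org/private.git
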
Gerda Shank
d7ea3648c6 [#3567] Fix partial parsing error with version key if previous file is
empty
2021-07-27 15:38:52 -04:00
Gerda Shank
e5c8e19ff2 Merge pull request #3619 from dbt-labs/model_config_iterator
[#3573] Put back config iterator for backwards compatibility
2021-07-27 15:34:51 -04:00
Kyle Wigley
4ddba7e44c --wip-- 2021-07-27 12:48:33 -04:00
Kyle Wigley
37b31d10c8 add derived traits to structs 2021-07-27 11:28:32 -04:00
Nathaniel May
93cf1f085f handle None return value from yaml loading 2021-07-27 10:59:27 -04:00
Gerda Shank
a84f824a44 [#3573] Put back config iterator for backwards compatibility 2021-07-26 17:56:35 -04:00
Kyle Wigley
9c58f3465b Fix flaky test related to tracking events (#3604)
* skip all tracking event testing

* Turn off tracking in tests that hits model parsing code path
fix other random test that fails because global tracking.current_user exists but is null

* pytest did not respect skip mark

* fix gh actions
2021-07-26 16:55:16 -04:00
Gerda Shank
0e3778132b Merge pull request #3620 from dbt-labs/pp_already_removed_node
If SQL file already scheduled for parsing, don't reprocess
2021-07-26 15:49:10 -04:00
Jeremy Cohen
72722635f2 Fix error handling in dbt build (#3608)
* RunTask -> BuildTask

* Add test, changelog entry
2021-07-25 22:15:13 -04:00
Gerda Shank
a4c7c7fc55 If SQL file already scheduled for parsing, don't reprocess 2021-07-24 15:43:54 -04:00
Nathaniel May
2bad73eead Merge pull request #3610 from dbt-labs/derp-fix
fixing typo in test
2021-07-23 13:14:55 -04:00
Kyle Wigley
c8bc25d11a add struct 2021-07-22 17:05:08 -04:00
Kyle Wigley
4c06689ff5 create a tuple of measurements to group results by 2021-07-22 16:54:57 -04:00
Kyle Wigley
a45c9d0192 add new arg for runner for branch name 2021-07-22 16:24:46 -04:00
Kyle Wigley
34e2c4f90b fix results output dir 2021-07-22 16:16:21 -04:00
Kyle Wigley
c0e2023c81 update profiles dir path 2021-07-22 15:52:28 -04:00
Nathaniel May
108b55bdc3 add regression type 2021-07-22 13:50:17 -04:00
Nathaniel May
a29367b7fe read files and parse json 2021-07-22 12:30:35 -04:00
Nathaniel May
1d7e8349ed draft of comparison 2021-07-22 12:01:30 -04:00
Nathaniel May
67c194dcd1 fixing typo in test 2021-07-22 09:53:26 -04:00
Nathaniel May
75d3d87d64 fix warmup syntax 2021-07-21 18:00:07 -04:00
Nathaniel May
4ff3f6d4e8 moved config dir out of projects dir 2021-07-21 17:35:54 -04:00
Nathaniel May
d0773f3346 warmup fs caches 2021-07-21 17:34:22 -04:00
Nathaniel May
ee58d27d94 use a profiles.yml file 2021-07-21 17:28:28 -04:00
Nathaniel May
9e3da391a7 draft of regression testing workflow 2021-07-21 17:01:01 -04:00
matt-winkler
bd7010678a Feature: on_schema_change for incremental models (#3387)
* detect and act on schema changes

* update incremental helpers code

* update changelog

* fix error in diff_columns from testing

* abstract code a bit further

* address matching names vs. data types

* Update CHANGELOG.md

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* updates from Jeremy's feedback

* multi-column add / remove with full_refresh

* simple changes from JC's feedback

* updated for snowflake

* reorganize postgres code

* reorganize approach

* updated full refresh trigger logic

* fixed unintentional wipe behavior

* catch final else condition

* remove WHERE string replace

* touch ups

* port core to snowflake

* added bigquery code

* updated impacted unit tests

* updates from linting tests

* updates from linting again

* snowflake updates from further testing

* fix logging

* clean up incremental logic

* updated for bigquery

* update postgres with new strategy

* update nodeconfig

* starting integration tests

* integration test for ignore case

* add test for append_new_columns

* add integration test for sync

* remove extra tests

* add unique key and snowflake test

* move incremental integration test dir

* update integration tests

* update integration tests

* Suggestions for #3387 (#3558)

* PR feedback: rationalize macros + logging, fix + expand tests

* Rm alter_column_types, always true for sync_all_columns

* update logging and integration test on sync

* update integration tests

* test fix SF integration tests

Co-authored-by: Matt Winkler <matt.winkler@fishtownanalytics.com>

* rename integration test folder

* Update core/dbt/include/global_project/macros/materializations/incremental/incremental.sql

Accept Jeremy's suggested change

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>

* Update changelog [skip ci]

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-07-21 15:49:19 -04:00
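
A condensed sketch of the on_schema_change behaviours the commits above reference (ignore / append_new_columns / sync_all_columns, plus a fail option), working on plain column-name sets rather than adapter Column objects; not the shipped macro logic.

def process_schema_changes(on_schema_change, target_cols, source_cols):
    added = source_cols - target_cols     # columns newly produced by the model
    removed = target_cols - source_cols   # columns no longer produced
    if on_schema_change == "ignore" or not (added or removed):
        return target_cols
    if on_schema_change == "fail":
        raise RuntimeError(f"schema changed: +{sorted(added)} -{sorted(removed)}")
    if on_schema_change == "append_new_columns":
        return target_cols | added
    if on_schema_change == "sync_all_columns":
        return (target_cols | added) - removed
    raise ValueError(f"unknown on_schema_change: {on_schema_change}")

print(process_schema_changes("append_new_columns", {"id", "status"}, {"id", "amount"}))
# keeps 'status', adds 'amount'
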
leahwicz
9f716b31b3 Moving unit tests into separate workflow (#3588)
* Moving unit tests into separate workflow

* Fixing CircleCI error
2021-07-21 12:35:04 -04:00
Kyle Wigley
3dd486d8fa Source freshness task node selection and cli command parity (#3554)
* cli: add selection args for source freshness command

* rename command to `source freshness` and maintain alias to old command

* update and add tests for source freshness command and node selection

* update changelog, add comments

* fix formatting

* update changelog
2021-07-21 10:31:40 -04:00
Jeremy Cohen
33217891ca Refactor relationships test to support where config (#3583)
* Rewrite relationships with CTEs

* Update changelog PR num [skip ci]
2021-07-20 19:28:09 -04:00
dependabot[bot]
1d37c4e555 Update snowflake-connector-python[secure-local-storage] requirement (#3594)
Updates the requirements on [snowflake-connector-python[secure-local-storage]](https://github.com/snowflakedb/snowflake-connector-python) to permit the latest version.
- [Release notes](https://github.com/snowflakedb/snowflake-connector-python/releases)
- [Commits](https://github.com/snowflakedb/snowflake-connector-python/commits)

---
updated-dependencies:
- dependency-name: snowflake-connector-python[secure-local-storage]
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-07-20 14:01:35 -04:00
Nathaniel May
9f62ec2153 add rust performance runner 2021-07-20 09:42:12 -04:00
Jeremy Cohen
372eca76b8 Bump werkzeug lower bound, werkzeug-refresh-token script (#3590)
* Update werkzeug, refresh-token script

* Add changelog note
2021-07-20 08:45:56 -04:00
Ian Knox
e3cb050bbc Merge pull request #3490 from dbt-labs/feature/dbt-build
dbt `build`
2021-07-19 19:09:50 -05:00
Jeremy Cohen
0ae93c7f54 Fix store_failures modifier for unique, not_null (#3577)
* Fix store_failures modifier for unique, not_null

* Add test, changelog
2021-07-16 14:50:06 -04:00
leahwicz
1f6386d760 Adding missing v0.20.0 files back to develop (#3566) 2021-07-15 16:46:22 -04:00
Ian Knox
66eb3964e2 PR feedback, Changelog 2021-07-14 13:23:46 -05:00
Nathaniel May
f460d275ba add experimental parser tracking (#3553) 2021-07-09 17:13:34 -04:00
Jeremy Cohen
fb91bad800 Update create_adapter_plugins.py (#3509)
* Update create_adapter_plugins.py

* Bump, changelog [skip ci]
2021-07-09 14:53:01 -04:00
Jeremy Cohen
eaec22ae53 Reconcile changelogs bw 0.20.latest + develop (#3552) 2021-07-09 13:39:42 -04:00
Jeremy Cohen
b7c1768cca Update changelog (#3551) 2021-07-09 13:35:16 -04:00
Nathaniel May
387b26a202 Merge pull request #3549 from dbt-labs/test-tweak
Tweak function to return a value instead of asserting
2021-07-09 12:20:22 -04:00
Ian Knox
8a1e6438f1 Revert downstream blocking on test failure logic due to test issues 😧 2021-07-09 10:52:12 -05:00
Ian Knox
aaac5ff2e6 New node discovery logic for internal work queue 2021-07-09 10:52:12 -05:00
Ian Knox
4dc29630b5 Race condition fix + downstream test blocking 2021-07-09 10:52:12 -05:00
Ian Knox
f716631439 Grouped Topologic sorting logic 2021-07-09 10:52:12 -05:00
Ian Knox
648a780850 First pass dbt build, cli only 2021-07-09 10:51:53 -05:00
Jeremy Cohen
de0919ff88 Include dbt-docs changes for 0.20.0 final (#3544) 2021-07-09 11:33:13 -04:00
Nathaniel May
8b1ea5fb6c return value from test function 2021-07-08 18:14:38 -04:00
Jeremy Cohen
85627aafcd Speed up Snowflake column comments, while still avoiding errors (#3543)
* Have our cake and eat it quickly, too

* Update changelog
2021-07-07 18:18:26 -04:00
Jeremy Cohen
49065158f5 Add gitignore, ignore pycache (#3536) 2021-07-06 11:51:17 -04:00
Gerda Shank
bdb3049218 Merge pull request #3522 from dbt-labs/pp_fix_simul_model_patch_deletion
Partial parsing: check if a node has already been deleted [#3516]
2021-07-06 11:46:02 -04:00
jmriego
e10d1b0f86 '+' config prefix handling whitespace (#3526)
* '+' config prefix handling whitespace

* rerun ci
2021-07-02 12:25:18 -04:00
Drew Banin
83b98c8ebf Merge pull request #3499 from dbt-labs/fix/agate-undesirable-casting
Prevent Agate from coercing values in query result sets
2021-07-01 10:53:28 -04:00
Jeremy Cohen
b9d5123aa3 Update changelog for 0.20.0rc2 2021-06-30 16:24:25 -04:00
Gerda Shank
c09300bfd2 Partial parsing: check if a node has already been deleted [#3516] 2021-06-30 11:35:13 -04:00
leahwicz
fc490cee7b Adding link to plugin release instructions 2021-06-30 09:51:53 -04:00
Jeremy Cohen
3baa3d7fe8 Update badge links 2021-06-30 08:46:33 -04:00
Jeremy Cohen
764c7c0fdc Update logo, badges (#3518)
* Fix logo, badges

* Update commit hash

* A few more edits

* Point Circle badge to develop [skip ci]

* Final fixups [skip ci]
2021-06-30 08:43:05 -04:00
Drew Banin
c97ebbbf35 Update License.md (#3517) 2021-06-29 22:43:50 -04:00
Drew Banin
85fe32bd08 Move to 0.21 changelog section 2021-06-29 20:21:31 -04:00
Drew Banin
eba3fd2255 Merge branch 'develop' of github.com:fishtown-analytics/dbt into fix/agate-undesirable-casting 2021-06-29 20:20:18 -04:00
Jeremy Cohen
e2f2c07873 Include dbt-docs changes for 0.20.0rc2 (#3511) 2021-06-29 18:44:42 -04:00
Nathaniel May
70850cd362 Merge pull request #3497 from fishtown-analytics/experimental-parser-rust
Experimental Parser: Swap python extractor for rust dependency
2021-06-29 16:21:26 -04:00
Gerda Shank
16992e6391 Merge pull request #3505 from fishtown-analytics/pp_testing
Expand partial parsing tests; fix macro partial parsing [#3449]
2021-06-29 15:45:33 -04:00
Gerda Shank
fd0d95140e Expand partial parsing tests; fix macro partial parsing [#3449] 2021-06-29 11:53:38 -04:00
Nathaniel May
ac65fcd557 update experimental parser to use rust dependency 2021-06-28 17:30:07 -04:00
Kyle Wigley
4d246567b9 Update project load tracking to include experimental parser info (#3495)
* Fix docs generation for cross-db sources in REDSHIFT RA3 node (#3408)

* Fix docs generating for cross-db sources

* Code reorganization

* Code adjustments according to flake8

* Error message adjusted to be more precise

* CHANGELOG update

* add static analysis info to parsing data

* update changelog

* don't use `meta`! need better separation between dbt internal objects and external facing data. hacked an internal field on the manifest to save off this parsing info for the time being

* fix partial parsing case

Co-authored-by: kostek-pl <67253952+kostek-pl@users.noreply.github.com>
2021-06-28 10:09:50 -04:00
Drew Banin
1ad1c834f3 (#2984) Prevent Agate from coercing values in query result sets 2021-06-26 13:24:30 -04:00
Jeremy Cohen
41610b822c Touch up init, fix missing adapter errors (#3483)
* Some love to init

* Update changelog
2021-06-24 15:51:36 -04:00
leahwicz
c794600242 Move starter project into dbt repo (#3474)
Addresses issue #3005
2021-06-22 11:03:01 -04:00
Jessica Laughlin
9d414f6ec3 add optional SSL parameters to Postgres connector (#3473)
Allows users to optionally set values for sslcert, sslkey, and
sslrootcert in their Postgres profiles.

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-06-21 17:52:59 -04:00
Ted Conbeer
552e831306 Feature/faster materializations with fewer transactions (#3468)
* drop if exists in view and table materializations

* Add integration test

* Add changelog entry

* PR Review 1
2021-06-21 14:58:32 -04:00
leahwicz
c712c96a0b Commenting our flaky integration test (#3477)
Test continually fails on time out so commenting out until we can fix it
2021-06-21 10:51:36 -04:00
Gerda Shank
eb46bfc3d6 Merge pull request #3460 from fishtown-analytics/minimal_pp_validation
Add minimal validation of schema file yaml prior to partial parsing
2021-06-18 15:08:59 -04:00
dependabot[bot]
f52537b606 Update typing-extensions requirement in /core (#3310)
Updates the requirements on [typing-extensions](https://github.com/python/typing) to permit the latest version.
- [Release notes](https://github.com/python/typing/releases)
- [Commits](https://github.com/python/typing/compare/3.7.4...3.10.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-06-18 13:05:55 -04:00
dependabot[bot]
762419d2fe Update werkzeug requirement from <2.0,>=0.15 to >=0.15,<3.0 in /core (#3390)
Updates the requirements on [werkzeug](https://github.com/pallets/werkzeug) to permit the latest version.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/0.15.0...2.0.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-06-18 10:16:28 -04:00
dependabot[bot]
4feb7cb15b Update idna requirement from <3,>=2.5 to >=2.5,<4 in /core (#3429)
Updates the requirements on [idna](https://github.com/kjd/idna) to permit the latest version.
- [Release notes](https://github.com/kjd/idna/releases)
- [Changelog](https://github.com/kjd/idna/blob/master/HISTORY.rst)
- [Commits](https://github.com/kjd/idna/compare/v2.5...v3.2)

---
updated-dependencies:
- dependency-name: idna
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-06-18 09:52:38 -04:00
Gerda Shank
eb47b85148 Add minimal validation of schema file yaml prior to partial parsing
[#3246]
2021-06-17 13:25:04 -04:00
Anders
9faa019a07 dispatch logic of new test materialization (#3461)
* dispatch logic of new test materialization

allow custom adapters to override the core test select statement functionality

* rename macro

* tell me a story

Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
2021-06-17 12:09:34 -04:00
Jeremy Cohen
9589dc91fa Fix quoting for stringy test configs (#3459)
* Fix quoting for stringy test configs

* Update changelog
2021-06-16 14:26:07 -04:00
Gerda Shank
14507a283e Merge pull request #3454 from fishtown-analytics/adapter_dispatch_infinite_recursion
Fix macro depends_on recursion error when macros call themselves (dbt_utils.datediff)
2021-06-11 11:22:05 -04:00
Gerda Shank
af0fe120ec Fix macro depends_on recursion error when macros call themselves (dbt_utils.datediff) 2021-06-11 10:28:16 -04:00
Gerda Shank
16501ec1c6 Merge pull request #3445 from fishtown-analytics/fix_serialization
create _lock when deserializing manifest, plus cleanup file serialization
2021-06-11 09:30:10 -04:00
leahwicz
bf867f6aff Add minor version release template (#3442)
Add minor version release template
2021-06-09 09:31:21 -04:00
kostek-pl
eb4ad4444f Fix docs generation for cross-db sources in REDSHIFT RA3 node (#3408)
* Fix docs generating for cross-db sources

* Code reorganization

* Code adjustments according to flake8

* Error message adjusted to be more precise

* CHANGELOG update
2021-06-09 09:08:52 -04:00
Gerda Shank
8fdba17ac6 create _lock when deserializing manifest, plus cleanup file
serialization
2021-06-08 16:08:12 -04:00
4398 changed files with 63772 additions and 5512 deletions


@@ -1,23 +1,27 @@
[bumpversion]
current_version = 0.20.0rc1
current_version = 0.21.0rc1
parse = (?P<major>\d+)
\.(?P<minor>\d+)
\.(?P<patch>\d+)
((?P<prerelease>[a-z]+)(?P<num>\d+))?
((?P<prekind>a|b|rc)
(?P<pre>\d+) # pre-release version num
)?
serialize =
{major}.{minor}.{patch}{prerelease}{num}
{major}.{minor}.{patch}{prekind}{pre}
{major}.{minor}.{patch}
commit = False
tag = False
[bumpversion:part:prerelease]
[bumpversion:part:prekind]
first_value = a
optional_value = final
values =
a
b
rc
final
[bumpversion:part:num]
[bumpversion:part:pre]
first_value = 1
[bumpversion:file:setup.py]
@@ -26,6 +30,8 @@ first_value = 1
[bumpversion:file:core/dbt/version.py]
[bumpversion:file:core/scripts/create_adapter_plugins.py]
[bumpversion:file:plugins/postgres/setup.py]
[bumpversion:file:plugins/redshift/setup.py]
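
The new parse scheme from the hunk above, checked with Python's re module against representative version strings:

import re

PARSE = re.compile(
    r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"
    r"((?P<prekind>a|b|rc)(?P<pre>\d+))?"
)

print(PARSE.match("0.21.0rc1").groupdict())
# {'major': '0', 'minor': '21', 'patch': '0', 'prekind': 'rc', 'pre': '1'}
print(PARSE.match("0.21.0").groupdict())
# {'major': '0', 'minor': '21', 'patch': '0', 'prekind': None, 'pre': None}
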


@@ -1,123 +0,0 @@
version: 2.1
jobs:
unit:
docker: &test_only
- image: fishtownanalytics/test-container:12
environment:
DBT_INVOCATION_ENV: circle
DOCKER_TEST_DATABASE_HOST: "database"
TOX_PARALLEL_NO_SPINNER: 1
steps:
- checkout
- run: tox -p -e py36,py37,py38
lint:
docker: *test_only
steps:
- checkout
- run: tox -e mypy,flake8 -- -v
build-wheels:
docker: *test_only
steps:
- checkout
- run:
name: Build wheels
command: |
python3.8 -m venv "${PYTHON_ENV}"
export PYTHON_BIN="${PYTHON_ENV}/bin/python"
$PYTHON_BIN -m pip install -U pip setuptools
$PYTHON_BIN -m pip install -r requirements.txt
$PYTHON_BIN -m pip install -r dev-requirements.txt
/bin/bash ./scripts/build-wheels.sh
$PYTHON_BIN ./scripts/collect-dbt-contexts.py > ./dist/context_metadata.json
$PYTHON_BIN ./scripts/collect-artifact-schema.py > ./dist/artifact_schemas.json
environment:
PYTHON_ENV: /home/tox/build_venv/
- store_artifacts:
path: ./dist
destination: dist
integration-postgres:
docker:
- image: fishtownanalytics/test-container:12
environment:
DBT_INVOCATION_ENV: circle
DOCKER_TEST_DATABASE_HOST: "database"
TOX_PARALLEL_NO_SPINNER: 1
- image: postgres
name: database
environment:
POSTGRES_USER: "root"
POSTGRES_PASSWORD: "password"
POSTGRES_DB: "dbt"
steps:
- checkout
- run:
name: Setup postgres
command: bash test/setup_db.sh
environment:
PGHOST: database
PGUSER: root
PGPASSWORD: password
PGDATABASE: postgres
- run:
name: Postgres integration tests
command: tox -p -e py36-postgres,py38-postgres -- -v -n4
no_output_timeout: 30m
- store_artifacts:
path: ./logs
integration-snowflake:
docker: *test_only
steps:
- checkout
- run:
name: Snowflake integration tests
command: tox -p -e py36-snowflake,py38-snowflake -- -v -n4
no_output_timeout: 30m
- store_artifacts:
path: ./logs
integration-redshift:
docker: *test_only
steps:
- checkout
- run:
name: Redshift integration tests
command: tox -p -e py36-redshift,py38-redshift -- -v -n4
no_output_timeout: 30m
- store_artifacts:
path: ./logs
integration-bigquery:
docker: *test_only
steps:
- checkout
- run:
name: Bigquery integration test
command: tox -p -e py36-bigquery,py38-bigquery -- -v -n4
no_output_timeout: 30m
- store_artifacts:
path: ./logs
workflows:
version: 2
test-everything:
jobs:
- lint
- unit
- integration-postgres:
requires:
- unit
- integration-redshift:
requires:
- unit
- integration-bigquery:
requires:
- unit
- integration-snowflake:
requires:
- unit
- build-wheels:
requires:
- lint
- unit
- integration-postgres
- integration-redshift
- integration-bigquery
- integration-snowflake


@@ -0,0 +1,27 @@
---
name: Beta minor version release
about: Creates a tracking checklist of items for a Beta minor version release
title: "[Tracking] v#.##.#B# release "
labels: 'release'
assignees: ''
---
### Release Core
- [ ] [Engineering] Follow [dbt-release workflow](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#03ff37da697d4d8ba63d24fae1bfa817)
- [ ] [Engineering] Verify new release branch is created in the repo
- [ ] [Product] Finalize migration guide (next.docs.getdbt.com)
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#f01854e8da3641179fbcbe505bdf515c) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] [Spark](https://github.com/dbt-labs/dbt-spark)
- [ ] [Presto](https://github.com/dbt-labs/dbt-presto)
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Create an epic for the RC release


@@ -0,0 +1,28 @@
---
name: Final minor version release
about: Creates a tracking checklist of items for a final minor version release
title: "[Tracking] v#.##.# final release "
labels: 'release'
assignees: ''
---
### Release Core
- [ ] [Engineering] Verify all necessary changes exist on the release branch
- [ ] [Engineering] Follow [dbt-release workflow](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#03ff37da697d4d8ba63d24fae1bfa817)
- [ ] [Product] Merge `next` into `current` for docs.getdbt.com
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Update discourse
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#f01854e8da3641179fbcbe505bdf515c) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] [Spark](https://github.com/dbt-labs/dbt-spark)
- [ ] [Presto](https://github.com/dbt-labs/dbt-presto)
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Product] Release new version of dbt-utils with new dbt version compatibility. If there are breaking changes requiring a minor version, plan upgrades of other packages that depend on dbt-utils.


@@ -0,0 +1,29 @@
---
name: RC minor version release
about: Creates a tracking checklist of items for a RC minor version release
title: "[Tracking] v#.##.#RC# release "
labels: 'release'
assignees: ''
---
### Release Core
- [ ] [Engineering] Verify all necessary changes exist on the release branch
- [ ] [Engineering] Follow [dbt-release workflow](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#03ff37da697d4d8ba63d24fae1bfa817)
- [ ] [Product] Update migration guide (next.docs.getdbt.com)
### Release Cloud
- [ ] [Engineering] Create a platform issue to update dbt Cloud and verify it is completed. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Engineering] Determine if schemas have changed. If so, generate new schemas and push to schemas.getdbt.com
### Announce
- [ ] [Product] Publish discourse
- [ ] [Product] Announce in dbt Slack
### Post-release
- [ ] [Engineering] [Bump plugin versions](https://www.notion.so/dbtlabs/Releasing-b97c5ea9a02949e79e81db3566bbc8ef#f01854e8da3641179fbcbe505bdf515c) (dbt-spark + dbt-presto), add compatibility as needed
- [ ] [Spark](https://github.com/dbt-labs/dbt-spark)
- [ ] [Presto](https://github.com/dbt-labs/dbt-presto)
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud. [Example issue](https://github.com/dbt-labs/dbt-cloud/issues/3481)
- [ ] [Product] Release new version of dbt-utils with new dbt version compatibility. If there are breaking changes requiring a minor version, plan upgrades of other packages that depend on dbt-utils.
- [ ] [Engineering] Create an epic for the final release

View File

@@ -0,0 +1,10 @@
name: "Set up postgres (linux)"
description: "Set up postgres service on linux vm for dbt integration tests"
runs:
using: "composite"
steps:
- shell: bash
run: |
sudo systemctl start postgresql.service
pg_isready
sudo -u postgres bash ${{ github.action_path }}/setup_db.sh

View File

@@ -0,0 +1 @@
../../../test/setup_db.sh

View File

@@ -0,0 +1,24 @@
name: "Set up postgres (macos)"
description: "Set up postgres service on macos vm for dbt integration tests"
runs:
using: "composite"
steps:
- shell: bash
run: |
brew services start postgresql
echo "Check PostgreSQL service is running"
i=10
COMMAND='pg_isready'
while [ $i -gt -1 ]; do
if [ $i == 0 ]; then
echo "PostgreSQL service not ready, all attempts exhausted"
exit 1
fi
echo "Check PostgreSQL service status"
eval $COMMAND && break
echo "PostgreSQL service not ready, wait 10 more sec, attempts left: $i"
sleep 10
((i--))
done
createuser -s postgres
bash ${{ github.action_path }}/setup_db.sh

View File

@@ -0,0 +1 @@
../../../test/setup_db.sh

View File

@@ -0,0 +1,12 @@
name: "Set up postgres (windows)"
description: "Set up postgres service on windows vm for dbt integration tests"
runs:
using: "composite"
steps:
- shell: pwsh
run: |
$pgService = Get-Service -Name postgresql*
Set-Service -InputObject $pgService -Status running -StartupType automatic
Start-Process -FilePath "$env:PGBIN\pg_isready" -Wait -PassThru
$env:Path += ";$env:PGBIN"
bash ${{ github.action_path }}/setup_db.sh

View File

@@ -0,0 +1 @@
../../../test/setup_db.sh

View File

@@ -9,14 +9,13 @@ resolves #
resolves #1234
-->
### Description
<!--- Describe the Pull Request here -->
### Checklist
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have updated the `CHANGELOG.md` and added information about my change to the "dbt next" section.
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have updated the `CHANGELOG.md` and added information about my change to the "dbt next" section.

View File

@@ -0,0 +1,95 @@
module.exports = ({ context }) => {
const defaultPythonVersion = "3.8";
const supportedPythonVersions = ["3.6", "3.7", "3.8", "3.9"];
const supportedAdapters = ["snowflake", "postgres", "bigquery", "redshift"];
// if PR, generate matrix based on files changed and PR labels
if (context.eventName.includes("pull_request")) {
// `changes` is a list of adapter names that have related
// file changes in the PR
// ex: ['postgres', 'snowflake']
const changes = JSON.parse(process.env.CHANGES);
const labels = context.payload.pull_request.labels.map(({ name }) => name);
console.log("labels", labels);
console.log("changes", changes);
const testAllLabel = labels.includes("test all");
const include = [];
for (const adapter of supportedAdapters) {
if (
changes.includes(adapter) ||
testAllLabel ||
labels.includes(`test ${adapter}`)
) {
for (const pythonVersion of supportedPythonVersions) {
if (
pythonVersion === defaultPythonVersion ||
labels.includes(`test python${pythonVersion}`) ||
testAllLabel
) {
// always run tests on ubuntu by default
include.push({
os: "ubuntu-latest",
adapter,
"python-version": pythonVersion,
});
if (labels.includes("test windows") || testAllLabel) {
include.push({
os: "windows-latest",
adapter,
"python-version": pythonVersion,
});
}
if (labels.includes("test macos") || testAllLabel) {
include.push({
os: "macos-latest",
adapter,
"python-version": pythonVersion,
});
}
}
}
}
}
console.log("matrix", { include });
return {
include,
};
}
// if not PR, generate matrix of python version, adapter, and operating
// system to run integration tests on
const include = [];
// run for all adapters and python versions on ubuntu
for (const adapter of supportedAdapters) {
for (const pythonVersion of supportedPythonVersions) {
include.push({
os: 'ubuntu-latest',
adapter: adapter,
"python-version": pythonVersion,
});
}
}
// additionally include runs for all adapters, on macos and windows,
// but only for the default python version
for (const adapter of supportedAdapters) {
for (const operatingSystem of ["windows-latest", "macos-latest"]) {
include.push({
os: operatingSystem,
adapter: adapter,
"python-version": defaultPythonVersion,
});
}
}
console.log("matrix", { include });
return {
include,
};
};
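To make the label-driven matrix expansion in `integration-test-matrix.js` above easier to follow, here is a minimal Python sketch of the pull-request branch of that logic. The adapter names, python versions, and label strings come from the script itself; the `pr_matrix` function is purely illustrative and is not part of the repository.

```python
# Illustrative only: mirrors the PR branch of .github/scripts/integration-test-matrix.js.
DEFAULT_PYTHON = "3.8"
PYTHON_VERSIONS = ["3.6", "3.7", "3.8", "3.9"]
ADAPTERS = ["snowflake", "postgres", "bigquery", "redshift"]

def pr_matrix(changes, labels):
    """Build the `include` list for a PR from changed adapters and PR labels."""
    test_all = "test all" in labels
    include = []
    for adapter in ADAPTERS:
        if not (adapter in changes or test_all or f"test {adapter}" in labels):
            continue
        for py in PYTHON_VERSIONS:
            if not (py == DEFAULT_PYTHON or test_all or f"test python{py}" in labels):
                continue
            # ubuntu always runs for a selected adapter/python pair
            include.append({"os": "ubuntu-latest", "adapter": adapter, "python-version": py})
            if test_all or "test windows" in labels:
                include.append({"os": "windows-latest", "adapter": adapter, "python-version": py})
            if test_all or "test macos" in labels:
                include.append({"os": "macos-latest", "adapter": adapter, "python-version": py})
    return {"include": include}

# A PR touching only the postgres plugin, with no extra labels, tests postgres
# on ubuntu-latest with the default python version only.
print(pr_matrix(changes=["postgres"], labels=[]))
```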

266
.github/workflows/integration.yml vendored Normal file
View File

@@ -0,0 +1,266 @@
# **what?**
# This workflow runs all integration tests for supported OS
# and python versions and core adapters. If triggered by PR,
# the workflow will only run tests for adapters related
# to code changes. Use the `test all` and `test ${adapter}`
# label to run all or additional tests. Use `ok to test`
# label to mark PRs from forked repositories that are safe
# to run integration tests for. Requires secrets to run
# against different warehouses.
# **why?**
# This checks the functionality of dbt from a user's perspective
# and attempts to catch functional regressions.
# **when?**
# This workflow will run on every push to a protected branch
# and when manually triggered. It will also run for all PRs, including
# PRs from forks. The workflow will be skipped until there is a label
# to mark the PR as safe to run.
name: Adapter Integration Tests
on:
# pushes to release branches
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
# all PRs, important to note that `pull_request_target` workflows
# will run in the context of the target branch of a PR
pull_request_target:
# manual trigger
workflow_dispatch:
# explicitly turn off permissions for `GITHUB_TOKEN`
permissions: read-all
# will cancel previous workflows triggered by the same event and for the same ref for PRs or same SHA otherwise
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ contains(github.event_name, 'pull_request') && github.event.pull_request.head.ref || github.sha }}
cancel-in-progress: true
# sets default shell to bash, for all operating systems
defaults:
run:
shell: bash
jobs:
# generate test metadata about what files changed and the testing matrix to use
test-metadata:
# run if not a PR from a forked repository or has a label to mark as safe to test
if: >-
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.generate-matrix.outputs.result }}
steps:
- name: Check out the repository (non-PR)
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Check if relevant files changed
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: get-changes
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
postgres:
- 'core/**'
- 'plugins/postgres/**'
- 'dev-requirements.txt'
snowflake:
- 'core/**'
- 'plugins/snowflake/**'
bigquery:
- 'core/**'
- 'plugins/bigquery/**'
redshift:
- 'core/**'
- 'plugins/redshift/**'
- 'plugins/postgres/**'
- name: Generate integration test matrix
id: generate-matrix
uses: actions/github-script@v4
env:
CHANGES: ${{ steps.get-changes.outputs.changes }}
with:
script: |
const script = require('./.github/scripts/integration-test-matrix.js')
const matrix = script({ context })
console.log(matrix)
return matrix
test:
name: ${{ matrix.adapter }} / python ${{ matrix.python-version }} / ${{ matrix.os }}
# run if not a PR from a forked repository or has a label to mark as safe to test
# also checks that the matrix generated is not empty
if: >-
needs.test-metadata.outputs.matrix &&
fromJSON( needs.test-metadata.outputs.matrix ).include[0] &&
(
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
)
runs-on: ${{ matrix.os }}
needs: test-metadata
strategy:
fail-fast: false
matrix: ${{ fromJSON(needs.test-metadata.outputs.matrix) }}
env:
TOXENV: integration-${{ matrix.adapter }}
PYTEST_ADDOPTS: "-v --color=yes -n4 --csv integration_results.csv"
DBT_INVOCATION_ENV: github-actions
steps:
- name: Check out the repository
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
# explicitly check out the branch for the PR,
# this is necessary for the `pull_request_target` event
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Set up postgres (linux)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Linux'
uses: ./.github/actions/setup-postgres-linux
- name: Set up postgres (macos)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'macOS'
uses: ./.github/actions/setup-postgres-macos
- name: Set up postgres (windows)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Windows'
uses: ./.github/actions/setup-postgres-windows
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox (postgres)
if: matrix.adapter == 'postgres'
run: tox
- name: Run tox (redshift)
if: matrix.adapter == 'redshift'
env:
REDSHIFT_TEST_DBNAME: ${{ secrets.REDSHIFT_TEST_DBNAME }}
REDSHIFT_TEST_PASS: ${{ secrets.REDSHIFT_TEST_PASS }}
REDSHIFT_TEST_USER: ${{ secrets.REDSHIFT_TEST_USER }}
REDSHIFT_TEST_PORT: ${{ secrets.REDSHIFT_TEST_PORT }}
REDSHIFT_TEST_HOST: ${{ secrets.REDSHIFT_TEST_HOST }}
run: tox
- name: Run tox (snowflake)
if: matrix.adapter == 'snowflake'
env:
SNOWFLAKE_TEST_ACCOUNT: ${{ secrets.SNOWFLAKE_TEST_ACCOUNT }}
SNOWFLAKE_TEST_PASSWORD: ${{ secrets.SNOWFLAKE_TEST_PASSWORD }}
SNOWFLAKE_TEST_USER: ${{ secrets.SNOWFLAKE_TEST_USER }}
SNOWFLAKE_TEST_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_WAREHOUSE }}
SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN: ${{ secrets.SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN }}
SNOWFLAKE_TEST_OAUTH_CLIENT_ID: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_ID }}
SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET }}
SNOWFLAKE_TEST_ALT_DATABASE: ${{ secrets.SNOWFLAKE_TEST_ALT_DATABASE }}
SNOWFLAKE_TEST_ALT_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_ALT_WAREHOUSE }}
SNOWFLAKE_TEST_DATABASE: ${{ secrets.SNOWFLAKE_TEST_DATABASE }}
SNOWFLAKE_TEST_QUOTED_DATABASE: ${{ secrets.SNOWFLAKE_TEST_QUOTED_DATABASE }}
SNOWFLAKE_TEST_ROLE: ${{ secrets.SNOWFLAKE_TEST_ROLE }}
run: tox
- name: Run tox (bigquery)
if: matrix.adapter == 'bigquery'
env:
BIGQUERY_TEST_SERVICE_ACCOUNT_JSON: ${{ secrets.BIGQUERY_TEST_SERVICE_ACCOUNT_JSON }}
BIGQUERY_TEST_ALT_DATABASE: ${{ secrets.BIGQUERY_TEST_ALT_DATABASE }}
run: tox
- uses: actions/upload-artifact@v2
if: always()
with:
name: logs
path: ./logs
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y-%m-%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: integration_results_${{ matrix.python-version }}_${{ matrix.os }}_${{ matrix.adapter }}-${{ steps.date.outputs.date }}.csv
path: integration_results.csv
require-label-comment:
runs-on: ubuntu-latest
needs: test
permissions:
pull-requests: write
steps:
- name: Needs permission PR comment
if: >-
needs.test.result == 'skipped' &&
github.event_name == 'pull_request_target' &&
github.event.pull_request.head.repo.full_name != github.repository
uses: unsplash/comment-on-pr@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
msg: |
"You do not have permissions to run integration tests, @dbt-labs/core "\
"needs to label this PR with `ok to test` in order to run integration tests!"
check_for_duplicate_msg: true
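A note on the "Get current date" step in this workflow: the timestamp is written with underscores (`%Y-%m-%dT%H_%M_%S`) because colons are not allowed in artifact names, and appending it to the matrix values gives each run a distinct `integration_results_*.csv` artifact. A quick Python illustration of the same format (the example filename is made up):

```python
from datetime import datetime

# Same shape as the workflow's `date +'%Y-%m-%dT%H_%M_%S'`: underscores stand in
# for colons so the value is safe to embed in an artifact name.
stamp = datetime.now().strftime("%Y-%m-%dT%H_%M_%S")
print(f"integration_results_3.8_ubuntu-latest_postgres-{stamp}.csv")
```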

206
.github/workflows/main.yml vendored Normal file
View File

@@ -0,0 +1,206 @@
# **what?**
# Runs code quality checks, unit tests, and verifies python build on
# all code committed to the repository. This workflow should not
# require any secrets since it runs for PRs from forked repos.
# By default, secrets are not passed to workflows running from
# a forked repo.
# **why?**
# Ensure code for dbt meets a certain quality standard.
# **when?**
# This will run for all PRs, when code is pushed to a release
# branch, and when manually triggered.
name: Tests and Code Checks
on:
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
pull_request:
workflow_dispatch:
permissions: read-all
# will cancel previous workflows triggered by the same event and for the same ref for PRs or same SHA otherwise
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ contains(github.event_name, 'pull_request') && github.event.pull_request.head.ref || github.sha }}
cancel-in-progress: true
defaults:
run:
shell: bash
jobs:
code-quality:
name: ${{ matrix.toxenv }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
toxenv: [flake8, mypy]
env:
TOXENV: ${{ matrix.toxenv }}
PYTEST_ADDOPTS: "-v --color=yes"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox
run: tox
unit:
name: unit test / python ${{ matrix.python-version }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: [3.6, 3.7, 3.8] # TODO: support unit testing for python 3.9 (https://github.com/dbt-labs/dbt/issues/3689)
env:
TOXENV: "unit"
PYTEST_ADDOPTS: "-v --color=yes --csv unit_results.csv"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox
run: tox
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y-%m-%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: unit_results_${{ matrix.python-version }}-${{ steps.date.outputs.date }}.csv
path: unit_results.csv
build:
name: build packages
runs-on: ubuntu-latest
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install --upgrade setuptools wheel twine check-wheel-contents
pip --version
- name: Build distributions
run: ./scripts/build-dist.sh
- name: Show distributions
run: ls -lh dist/
- name: Check distribution descriptions
run: |
twine check dist/*
- name: Check wheel contents
run: |
check-wheel-contents dist/*.whl --ignore W007,W008
- uses: actions/upload-artifact@v2
with:
name: dist
path: dist/
test-build:
name: verify packages / python ${{ matrix.python-version }} / ${{ matrix.os }}
needs: build
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.6, 3.7, 3.8, 3.9]
steps:
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install python dependencies
run: |
pip install --upgrade pip
pip install --upgrade wheel
pip --version
- uses: actions/download-artifact@v2
with:
name: dist
path: dist/
- name: Show distributions
run: ls -lh dist/
- name: Install wheel distributions
run: |
find ./dist/*.whl -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check wheel distributions
run: |
dbt --version
- name: Install source distributions
run: |
find ./dist/*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check source distributions
run: |
dbt --version
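The `build` and `test-build` jobs above verify the packages simply by installing the built distributions and asking the CLI for its version. A rough local equivalent, sketched with `subprocess`; the `dist/` layout and `--find-links` usage are copied from the workflow, but this script itself is an assumption, not something the repository ships:

```python
import glob
import subprocess

# Minimal sketch of the workflow's "verify packages" idea: install each built
# wheel against the local dist/ index, then confirm the dbt entry point works.
for wheel in glob.glob("dist/*.whl"):
    subprocess.run(
        ["pip", "install", "--force-reinstall", "--find-links=dist/", wheel],
        check=True,
    )
subprocess.run(["dbt", "--version"], check=True)
```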

176
.github/workflows/performance.yml vendored Normal file
View File

@@ -0,0 +1,176 @@
name: Performance Regression Tests
# Schedule triggers
on:
# runs twice a day at 10:05am and 10:05pm
schedule:
- cron: "5 10,22 * * *"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
# checks fmt of runner code
# purposefully not a dependency of any other job
# will block merging, but not prevent developing
fmt:
name: Cargo fmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- run: rustup component add rustfmt
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --manifest-path performance/runner/Cargo.toml --all -- --check
# runs any tests associated with the runner
# these tests make sure the runner logic is correct
test-runner:
name: Test Runner
runs-on: ubuntu-latest
env:
# turns warnings into errors
RUSTFLAGS: "-D warnings"
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- uses: actions-rs/cargo@v1
with:
command: test
args: --manifest-path performance/runner/Cargo.toml
# build an optimized binary to be used as the runner in later steps
build-runner:
needs: [test-runner]
name: Build Runner
runs-on: ubuntu-latest
env:
RUSTFLAGS: "-D warnings"
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- uses: actions-rs/cargo@v1
with:
command: build
args: --release --manifest-path performance/runner/Cargo.toml
- uses: actions/upload-artifact@v2
with:
name: runner
path: performance/runner/target/release/runner
# run the performance measurements on the current or default branch
measure-dev:
needs: [build-runner]
name: Measure Dev Branch
runs-on: ubuntu-latest
steps:
- name: checkout dev
uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: "3.8"
- name: install dbt
run: pip install -r dev-requirements.txt -r editable-requirements.txt
- name: install hyperfine
run: wget https://github.com/sharkdp/hyperfine/releases/download/v1.11.0/hyperfine_1.11.0_amd64.deb && sudo dpkg -i hyperfine_1.11.0_amd64.deb
- uses: actions/download-artifact@v2
with:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: run
run: ./runner measure -b dev -p ${{ github.workspace }}/performance/projects/
- uses: actions/upload-artifact@v2
with:
name: dev-results
path: performance/results/
# run the performance measurements on the release branch which we use
# as a performance baseline. This part takes by far the longest, so
# we do everything we can first so the job fails fast.
# -----
# we need to checkout dbt twice in this job: once for the baseline dbt
# version, and once to get the latest regression testing projects,
# metrics, and runner code from the develop or current branch so that
# the calculations match for both versions of dbt we are comparing.
measure-baseline:
needs: [build-runner]
name: Measure Baseline Branch
runs-on: ubuntu-latest
steps:
- name: checkout latest
uses: actions/checkout@v2
with:
ref: "0.20.latest"
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: "3.8"
- name: move repo up a level
run: mkdir ${{ github.workspace }}/../baseline/ && cp -r ${{ github.workspace }} ${{ github.workspace }}/../baseline
- name: "[debug] ls new dbt location"
run: ls ${{ github.workspace }}/../baseline/dbt/
# installation creates egg-links so we have to preserve source
- name: install dbt from new location
run: cd ${{ github.workspace }}/../baseline/dbt/ && pip install -r dev-requirements.txt -r editable-requirements.txt
# checkout the current branch to get all the target projects
# this deletes the old checked out code which is why we had to copy before
- name: checkout dev
uses: actions/checkout@v2
- name: install hyperfine
run: wget https://github.com/sharkdp/hyperfine/releases/download/v1.11.0/hyperfine_1.11.0_amd64.deb && sudo dpkg -i hyperfine_1.11.0_amd64.deb
- uses: actions/download-artifact@v2
with:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: run runner
run: ./runner measure -b baseline -p ${{ github.workspace }}/performance/projects/
- uses: actions/upload-artifact@v2
with:
name: baseline-results
path: performance/results/
# detect regressions on the output generated from measuring
# the two branches. Exits with non-zero code if a regression is detected.
calculate-regressions:
needs: [measure-dev, measure-baseline]
name: Compare Results
runs-on: ubuntu-latest
steps:
- uses: actions/download-artifact@v2
with:
name: dev-results
- uses: actions/download-artifact@v2
with:
name: baseline-results
- name: "[debug] ls result files"
run: ls
- uses: actions/download-artifact@v2
with:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: make results directory
run: mkdir ./final-output/
- name: run calculation
run: ./runner calculate -r ./ -o ./final-output/
# always attempt to upload the results even if there were regressions found
- uses: actions/upload-artifact@v2
if: ${{ always() }}
with:
name: final-calculations
path: ./final-output/*
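The `calculate-regressions` job above relies on the Rust runner, which exits non-zero when the dev measurements regress against the baseline. As a rough illustration of that comparison only; the 5% threshold and the example timings below are invented and do not reflect what the runner actually computes:

```python
import sys

def is_regression(baseline_secs: float, dev_secs: float, threshold: float = 0.05) -> bool:
    """Flag a regression when dev is more than `threshold` slower than baseline."""
    return dev_secs > baseline_secs * (1.0 + threshold)

# Toy comparison; exiting non-zero is what makes the CI job fail.
if is_regression(baseline_secs=42.0, dev_secs=47.3):
    print("regression detected: dev run is slower than baseline")
    sys.exit(1)
```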

View File

@@ -1,178 +0,0 @@
# This is a workflow to run our unit and integration tests for windows and mac
name: dbt Tests
# Triggers
on:
# Triggers the workflow on push or pull request events and also adds a manual trigger
push:
branches:
- 'develop'
- '*.latest'
- 'releases/*'
pull_request_target:
branches:
- 'develop'
- '*.latest'
- 'pr/*'
- 'releases/*'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
Linting:
runs-on: ubuntu-latest #no need to run on every OS
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.8'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Linting'
run: tox -e mypy,flake8 -- -v
UnitTest:
strategy:
matrix:
os: [windows-latest, ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.8'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run unit tests'
run: python -m tox -e py -- -v
PostgresIntegrationTest:
runs-on: 'windows-latest' #TODO: Add Mac support
environment: 'Postgres'
needs: UnitTest
steps:
- uses: actions/checkout@v2
- name: 'Install postgresql and set up database'
shell: pwsh
run: |
$serviceName = Get-Service -Name postgresql*
Set-Service -InputObject $serviceName -StartupType Automatic
Start-Service -InputObject $serviceName
& $env:PGBIN\createdb.exe -U postgres dbt
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE root WITH PASSWORD '$env:ROOT_PASSWORD';"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE root WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CREATE, CONNECT ON DATABASE dbt TO root WITH GRANT OPTION;"
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE noaccess WITH PASSWORD '$env:NOACCESS_PASSWORD' NOSUPERUSER;"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE noaccess WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CONNECT ON DATABASE dbt TO noaccess;"
env:
ROOT_PASSWORD: ${{ secrets.ROOT_PASSWORD }}
NOACCESS_PASSWORD: ${{ secrets.NOACCESS_PASSWORD }}
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.7'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run integration tests'
run: python -m tox -e py-postgres -- -v -n4
# These three are all similar except secure environment variables, which MUST be passed along to their tasks,
# but there's probably a better way to do this!
SnowflakeIntegrationTest:
strategy:
matrix:
os: [windows-latest, macos-latest]
runs-on: ${{ matrix.os }}
environment: 'Snowflake'
needs: UnitTest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.7'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run integration tests'
run: python -m tox -e py-snowflake -- -v -n4
env:
SNOWFLAKE_TEST_ACCOUNT: ${{ secrets.SNOWFLAKE_TEST_ACCOUNT }}
SNOWFLAKE_TEST_PASSWORD: ${{ secrets.SNOWFLAKE_TEST_PASSWORD }}
SNOWFLAKE_TEST_USER: ${{ secrets.SNOWFLAKE_TEST_USER }}
SNOWFLAKE_TEST_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_WAREHOUSE }}
SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN: ${{ secrets.SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN }}
SNOWFLAKE_TEST_OAUTH_CLIENT_ID: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_ID }}
SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET: ${{ secrets.SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET }}
SNOWFLAKE_TEST_ALT_DATABASE: ${{ secrets.SNOWFLAKE_TEST_ALT_DATABASE }}
SNOWFLAKE_TEST_ALT_WAREHOUSE: ${{ secrets.SNOWFLAKE_TEST_ALT_WAREHOUSE }}
SNOWFLAKE_TEST_DATABASE: ${{ secrets.SNOWFLAKE_TEST_DATABASE }}
SNOWFLAKE_TEST_QUOTED_DATABASE: ${{ secrets.SNOWFLAKE_TEST_QUOTED_DATABASE }}
SNOWFLAKE_TEST_ROLE: ${{ secrets.SNOWFLAKE_TEST_ROLE }}
BigQueryIntegrationTest:
strategy:
matrix:
os: [windows-latest, macos-latest]
runs-on: ${{ matrix.os }}
environment: 'Bigquery'
needs: UnitTest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.7'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run integration tests'
run: python -m tox -e py-bigquery -- -v -n4
env:
BIGQUERY_SERVICE_ACCOUNT_JSON: ${{ secrets.BIGQUERY_SERVICE_ACCOUNT_JSON }}
BIGQUERY_TEST_ALT_DATABASE: ${{ secrets.BIGQUERY_TEST_ALT_DATABASE }}
RedshiftIntegrationTest:
strategy:
matrix:
os: [windows-latest, macos-latest]
runs-on: ${{ matrix.os }}
environment: 'Redshift'
needs: UnitTest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: '3.7'
architecture: 'x64'
- name: 'Install dependencies'
run: python -m pip install --upgrade pip && pip install tox
- name: 'Run integration tests'
run: python -m tox -e py-redshift -- -v -n4
env:
REDSHIFT_TEST_DBNAME: ${{ secrets.REDSHIFT_TEST_DBNAME }}
REDSHIFT_TEST_PASS: ${{ secrets.REDSHIFT_TEST_PASS }}
REDSHIFT_TEST_USER: ${{ secrets.REDSHIFT_TEST_USER }}
REDSHIFT_TEST_PORT: ${{ secrets.REDSHIFT_TEST_PORT }}
REDSHIFT_TEST_HOST: ${{ secrets.REDSHIFT_TEST_HOST }}

View File

@@ -26,7 +26,7 @@ This is the docs website code. It comes from the dbt-docs repository, and is gen
## Adapters
dbt uses an adapter-plugin pattern to extend support to different databases, warehouses, query engines, etc. The four core adapters that are in the main repository, contained within the [`plugins`](plugins) subdirectory, are: Postgres Redshift, Snowflake and BigQuery. Other warehouses use adapter plugins defined in separate repositories (e.g. [dbt-spark](https://github.com/fishtown-analytics/dbt-spark), [dbt-presto](https://github.com/fishtown-analytics/dbt-presto)).
dbt uses an adapter-plugin pattern to extend support to different databases, warehouses, query engines, etc. The four core adapters that are in the main repository, contained within the [`plugins`](plugins) subdirectory, are: Postgres, Redshift, Snowflake, and BigQuery. Other warehouses use adapter plugins defined in separate repositories (e.g. [dbt-spark](https://github.com/dbt-labs/dbt-spark), [dbt-presto](https://github.com/dbt-labs/dbt-presto)).
Each adapter is a mix of python, Jinja2, and SQL. The adapter code also makes heavy use of Jinja2 to wrap modular chunks of SQL functionality, define default implementations, and allow plugins to override it.

File diff suppressed because it is too large

View File

@@ -24,7 +24,7 @@ Please note that all contributors to `dbt` must sign the [Contributor License Ag
### Defining the problem
If you have an idea for a new feature or if you've discovered a bug in `dbt`, the first step is to open an issue. Please check the list of [open issues](https://github.com/fishtown-analytics/dbt/issues) before creating a new one. If you find a relevant issue, please add a comment to the open issue instead of creating a new one. There are hundreds of open issues in this repository and it can be hard to know where to look for a relevant open issue. **The `dbt` maintainers are always happy to point contributors in the right direction**, so please err on the side of documenting your idea in a new issue if you are unsure where a problem statement belongs.
If you have an idea for a new feature or if you've discovered a bug in `dbt`, the first step is to open an issue. Please check the list of [open issues](https://github.com/dbt-labs/dbt/issues) before creating a new one. If you find a relevant issue, please add a comment to the open issue instead of creating a new one. There are hundreds of open issues in this repository and it can be hard to know where to look for a relevant open issue. **The `dbt` maintainers are always happy to point contributors in the right direction**, so please err on the side of documenting your idea in a new issue if you are unsure where a problem statement belongs.
> **Note:** All community-contributed Pull Requests _must_ be associated with an open issue. If you submit a Pull Request that does not pertain to an open issue, you will be asked to create an issue describing the problem before the Pull Request can be reviewed.
@@ -36,7 +36,7 @@ After you open an issue, a `dbt` maintainer will follow up by commenting on your
If an issue is appropriately well scoped and describes a beneficial change to the `dbt` codebase, then anyone may submit a Pull Request to implement the functionality described in the issue. See the sections below on how to do this.
The `dbt` maintainers will add a `good first issue` label if an issue is suitable for a first-time contributor. This label often means that the required code change is small, limited to one database adapter, or a net-new addition that does not impact existing functionality. You can see the list of currently open issues on the [Contribute](https://github.com/fishtown-analytics/dbt/contribute) page.
The `dbt` maintainers will add a `good first issue` label if an issue is suitable for a first-time contributor. This label often means that the required code change is small, limited to one database adapter, or a net-new addition that does not impact existing functionality. You can see the list of currently open issues on the [Contribute](https://github.com/dbt-labs/dbt/contribute) page.
Here's a good workflow:
- Comment on the open issue, expressing your interest in contributing the required code change
@@ -52,15 +52,15 @@ The `dbt` maintainers use labels to categorize open issues. Some labels indicate
| tag | description |
| --- | ----------- |
| [triage](https://github.com/fishtown-analytics/dbt/labels/triage) | This is a new issue which has not yet been reviewed by a `dbt` maintainer. This label is removed when a maintainer reviews and responds to the issue. |
| [bug](https://github.com/fishtown-analytics/dbt/labels/bug) | This issue represents a defect or regression in `dbt` |
| [enhancement](https://github.com/fishtown-analytics/dbt/labels/enhancement) | This issue represents net-new functionality in `dbt` |
| [good first issue](https://github.com/fishtown-analytics/dbt/labels/good%20first%20issue) | This issue does not require deep knowledge of the `dbt` codebase to implement. This issue is appropriate for a first-time contributor. |
| [help wanted](https://github.com/fishtown-analytics/`dbt`/labels/help%20wanted) / [discussion](https://github.com/fishtown-analytics/dbt/labels/discussion) | Conversation around this issue in ongoing, and there isn't yet a clear path forward. Input from community members is most welcome. |
| [duplicate](https://github.com/fishtown-analytics/dbt/issues/duplicate) | This issue is functionally identical to another open issue. The `dbt` maintainers will close this issue and encourage community members to focus conversation on the other one. |
| [snoozed](https://github.com/fishtown-analytics/dbt/labels/snoozed) | This issue describes a good idea, but one which will probably not be addressed in a six-month time horizon. The `dbt` maintainers will revist these issues periodically and re-prioritize them accordingly. |
| [stale](https://github.com/fishtown-analytics/dbt/labels/stale) | This is an old issue which has not recently been updated. Stale issues will periodically be closed by `dbt` maintainers, but they can be re-opened if the discussion is restarted. |
| [wontfix](https://github.com/fishtown-analytics/dbt/labels/wontfix) | This issue does not require a code change in the `dbt` repository, or the maintainers are unwilling/unable to merge a Pull Request which implements the behavior described in the issue. |
| [triage](https://github.com/dbt-labs/dbt/labels/triage) | This is a new issue which has not yet been reviewed by a `dbt` maintainer. This label is removed when a maintainer reviews and responds to the issue. |
| [bug](https://github.com/dbt-labs/dbt/labels/bug) | This issue represents a defect or regression in `dbt` |
| [enhancement](https://github.com/dbt-labs/dbt/labels/enhancement) | This issue represents net-new functionality in `dbt` |
| [good first issue](https://github.com/dbt-labs/dbt/labels/good%20first%20issue) | This issue does not require deep knowledge of the `dbt` codebase to implement. This issue is appropriate for a first-time contributor. |
| [help wanted](https://github.com/dbt-labs/dbt/labels/help%20wanted) / [discussion](https://github.com/dbt-labs/dbt/labels/discussion) | Conversation around this issue is ongoing, and there isn't yet a clear path forward. Input from community members is most welcome. |
| [duplicate](https://github.com/dbt-labs/dbt/issues/duplicate) | This issue is functionally identical to another open issue. The `dbt` maintainers will close this issue and encourage community members to focus conversation on the other one. |
| [snoozed](https://github.com/dbt-labs/dbt/labels/snoozed) | This issue describes a good idea, but one which will probably not be addressed in a six-month time horizon. The `dbt` maintainers will revisit these issues periodically and re-prioritize them accordingly. |
| [stale](https://github.com/dbt-labs/dbt/labels/stale) | This is an old issue which has not recently been updated. Stale issues will periodically be closed by `dbt` maintainers, but they can be re-opened if the discussion is restarted. |
| [wontfix](https://github.com/dbt-labs/dbt/labels/wontfix) | This issue does not require a code change in the `dbt` repository, or the maintainers are unwilling/unable to merge a Pull Request which implements the behavior described in the issue. |
#### Branching Strategy
@@ -68,7 +68,7 @@ The `dbt` maintainers use labels to categorize open issues. Some labels indicate
- **Trunks** are where active development of the next release takes place. There is one trunk named `develop` at the time of writing this, and will be the default branch of the repository.
- **Release Branches** track a specific, not yet complete release of `dbt`. Each minor version release has a corresponding release branch. For example, the `0.11.x` series of releases has a branch called `0.11.latest`. This allows us to release new patch versions under `0.11` without necessarily needing to pull them into the latest version of `dbt`.
- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk brnach or a specific release branch.
- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk branch or a specific release branch.
## Getting the code
@@ -78,17 +78,17 @@ You will need `git` in order to download and modify the `dbt` source code. On ma
### External contributors
If you are not a member of the `fishtown-analytics` GitHub organization, you can contribute to `dbt` by forking the `dbt` repository. For a detailed overview on forking, check out the [GitHub docs on forking](https://help.github.com/en/articles/fork-a-repo). In short, you will need to:
If you are not a member of the `dbt-labs` GitHub organization, you can contribute to `dbt` by forking the `dbt` repository. For a detailed overview on forking, check out the [GitHub docs on forking](https://help.github.com/en/articles/fork-a-repo). In short, you will need to:
1. fork the `dbt` repository
2. clone your fork locally
3. check out a new branch for your proposed changes
4. push changes to your fork
5. open a pull request against `fishtown-analytics/dbt` from your forked repository
5. open a pull request against `dbt-labs/dbt` from your forked repository
### Core contributors
If you are a member of the `fishtown-analytics` GitHub organization, you will have push access to the `dbt` repo. Rather than forking `dbt` to make your changes, just clone the repository, check out a new branch, and push directly to that branch.
If you are a member of the `dbt-labs` GitHub organization, you will have push access to the `dbt` repo. Rather than forking `dbt` to make your changes, just clone the repository, check out a new branch, and push directly to that branch.
## Setting up an environment
@@ -135,7 +135,7 @@ brew install postgresql
### Installation
First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Next, install `dbt` (and its dependencies) with:
First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Also ensure you have the latest version of pip installed with `pip install --upgrade pip`. Next, install `dbt` (and its dependencies) with:
```sh
make dev
@@ -155,7 +155,7 @@ Configure your [profile](https://docs.getdbt.com/docs/configure-your-profile) as
Getting the `dbt` integration tests set up in your local environment will be very helpful as you start to make changes to your local version of `dbt`. The section that follows outlines some helpful tips for setting up the test environment.
Since `dbt` works with a number of different databases, you will need to supply credentials for one or more of these databases in your test environment. Most organizations don't have access to each of a BigQuery, Redshift, Snowflake, and Postgres database, so it's likely that you will be unable to run every integration test locally. Fortunately, Fishtown Analytics provides a CI environment with access to sandboxed Redshift, Snowflake, BigQuery, and Postgres databases. See the section on [_Submitting a Pull Request_](#submitting-a-pull-request) below for more information on this CI setup.
Since `dbt` works with a number of different databases, you will need to supply credentials for one or more of these databases in your test environment. Most organizations don't have access to each of a BigQuery, Redshift, Snowflake, and Postgres database, so it's likely that you will be unable to run every integration test locally. Fortunately, dbt Labs provides a CI environment with access to sandboxed Redshift, Snowflake, BigQuery, and Postgres databases. See the section on [_Submitting a Pull Request_](#submitting-a-pull-request) below for more information on this CI setup.
### Initial setup
@@ -170,6 +170,8 @@ docker-compose up -d database
PGHOST=localhost PGUSER=root PGPASSWORD=password PGDATABASE=postgres bash test/setup_db.sh
```
Note that you may need to run the previous command twice as it does not currently wait for the database to be running before attempting to run commands against it. This will be fixed with [#3876](https://github.com/dbt-labs/dbt/issues/3876).
`dbt` uses test credentials specified in a `test.env` file in the root of the repository for non-Postgres databases. This `test.env` file is git-ignored, but please be _extra_ careful to never check in credentials or other sensitive information when developing against `dbt`. To create your `test.env` file, copy the provided sample file, then supply your relevant credentials. This step is only required to use non-Postgres databases.
```
@@ -224,7 +226,7 @@ python -m pytest test/unit/test_graph.py::GraphTest::test__dependency_list
> is a list of useful command-line options for `pytest` to use while developing.
## Submitting a Pull Request
Fishtown Analytics provides a sandboxed Redshift, Snowflake, and BigQuery database for use in a CI environment. When pull requests are submitted to the `fishtown-analytics/dbt` repo, GitHub will trigger automated tests in CircleCI and Azure Pipelines.
dbt Labs provides a sandboxed Redshift, Snowflake, and BigQuery database for use in a CI environment. When pull requests are submitted to the `dbt-labs/dbt` repo, GitHub will trigger automated tests in CircleCI and Azure Pipelines.
A `dbt` maintainer will review your PR. They may suggest code revision for style or clarity, or request that you add unit or integration test(s). These are good things! We believe that, with a little bit of help, anyone can contribute high-quality code.
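On the note above about possibly having to run `test/setup_db.sh` twice because nothing waits for the database (see [#3876](https://github.com/dbt-labs/dbt/issues/3876)): one local workaround is to poll the container before running the script. This sketch assumes `psycopg2` is installed and uses the docker-compose credentials shown earlier; it is not part of the repository:

```python
import time
import psycopg2

# Poll until the docker-compose postgres container accepts connections; once it
# does, it is safe to run `bash test/setup_db.sh`.
for attempt in range(30):
    try:
        psycopg2.connect(
            host="localhost", user="root", password="password", dbname="postgres"
        ).close()
        print("postgres is ready")
        break
    except psycopg2.OperationalError:
        print(f"postgres not ready yet (attempt {attempt + 1}), retrying...")
        time.sleep(2)
else:
    raise SystemExit("postgres never became ready")
```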

View File

@@ -1,4 +1,4 @@
FROM ubuntu:18.04
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND noninteractive

View File

@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Copyright 2021 dbt Labs, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,28 +1,18 @@
<p align="center">
<img src="https://raw.githubusercontent.com/fishtown-analytics/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-logo-full.svg" alt="dbt logo" width="500"/>
<img src="https://raw.githubusercontent.com/dbt-labs/dbt/ec7dee39f793aa4f7dd3dae37282cc87664813e4/etc/dbt-logo-full.svg" alt="dbt logo" width="500"/>
</p>
<p align="center">
<a href="https://codeclimate.com/github/fishtown-analytics/dbt">
<img src="https://codeclimate.com/github/fishtown-analytics/dbt/badges/gpa.svg" alt="Code Climate"/>
<a href="https://github.com/dbt-labs/dbt/actions/workflows/main.yml">
<img src="https://github.com/dbt-labs/dbt/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
</a>
<a href="https://circleci.com/gh/fishtown-analytics/dbt/tree/master">
<img src="https://circleci.com/gh/fishtown-analytics/dbt/tree/master.svg?style=svg" alt="CircleCI" />
</a>
<a href="https://ci.appveyor.com/project/DrewBanin/dbt/branch/development">
<img src="https://ci.appveyor.com/api/projects/status/v01rwd3q91jnwp9m/branch/development?svg=true" alt="AppVeyor" />
</a>
<a href="https://community.getdbt.com">
<img src="https://community.getdbt.com/badge.svg" alt="Slack" />
<a href="https://github.com/dbt-labs/dbt/actions/workflows/integration.yml">
<img src="https://github.com/dbt-labs/dbt/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
</a>
</p>
**[dbt](https://www.getdbt.com/)** (data build tool) enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.
![dbt architecture](https://raw.githubusercontent.com/fishtown-analytics/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-arch.png)
dbt can be used to [aggregate pageviews into sessions](https://github.com/fishtown-analytics/snowplow), calculate [ad spend ROI](https://github.com/fishtown-analytics/facebook-ads), or report on [email campaign performance](https://github.com/fishtown-analytics/mailchimp).
![architecture](https://raw.githubusercontent.com/dbt-labs/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-arch.png)
## Understanding dbt
@@ -30,28 +20,22 @@ Analysts using dbt can transform their data by simply writing select statements,
These select statements, or "models", form a dbt project. Models frequently build on top of one another; dbt makes it easy to [manage relationships](https://docs.getdbt.com/docs/ref) between models, and [visualize these relationships](https://docs.getdbt.com/docs/documentation), as well as assure the quality of your transformations through [testing](https://docs.getdbt.com/docs/testing).
![dbt dag](https://raw.githubusercontent.com/fishtown-analytics/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-dag.png)
![dbt dag](https://raw.githubusercontent.com/dbt-labs/dbt/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-dag.png)
## Getting started
- [Install dbt](https://docs.getdbt.com/docs/installation)
- Read the [documentation](https://docs.getdbt.com/).
- Productionize your dbt project with [dbt Cloud](https://www.getdbt.com)
- [Install dbt](https://docs.getdbt.com/docs/installation)
- Read the [introduction](https://docs.getdbt.com/docs/introduction/) and [viewpoint](https://docs.getdbt.com/docs/about/viewpoint/)
## Find out more
## Join the dbt Community
- Check out the [Introduction to dbt](https://docs.getdbt.com/docs/introduction/).
- Read the [dbt Viewpoint](https://docs.getdbt.com/docs/about/viewpoint/).
## Join thousands of analysts in the dbt community
- Join the [chat](http://community.getdbt.com/) on Slack.
- Find community posts on [dbt Discourse](https://discourse.getdbt.com).
- Be part of the conversation in the [dbt Community Slack](http://community.getdbt.com/)
- Read more on the [dbt Community Discourse](https://discourse.getdbt.com)
## Reporting bugs and contributing code
- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/fishtown-analytics/dbt/issues/new).
- Want to help us build dbt? Check out the [Contributing Getting Started Guide](https://github.com/fishtown-analytics/dbt/blob/HEAD/CONTRIBUTING.md)
- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/dbt-labs/dbt/issues/new)
- Want to help us build dbt? Check out the [Contributing Guide](https://github.com/dbt-labs/dbt/blob/HEAD/CONTRIBUTING.md)
## Code of Conduct

View File

@@ -1,154 +0,0 @@
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/python
trigger:
branches:
include:
- develop
- '*.latest'
- pr/*
jobs:
- job: UnitTest
pool:
vmImage: 'vs2017-win2016'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py -- -v
displayName: Run unit tests
- job: PostgresIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: UnitTest
steps:
- pwsh: |
$serviceName = Get-Service -Name postgresql*
Set-Service -InputObject $serviceName -StartupType Automatic
Start-Service -InputObject $serviceName
& $env:PGBIN\createdb.exe -U postgres dbt
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE root WITH PASSWORD 'password';"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE root WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CREATE, CONNECT ON DATABASE dbt TO root WITH GRANT OPTION;"
& $env:PGBIN\psql.exe -U postgres -c "CREATE ROLE noaccess WITH PASSWORD 'password' NOSUPERUSER;"
& $env:PGBIN\psql.exe -U postgres -c "ALTER ROLE noaccess WITH LOGIN;"
& $env:PGBIN\psql.exe -U postgres -c "GRANT CONNECT ON DATABASE dbt TO noaccess;"
displayName: Install postgresql and set up database
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py-postgres -- -v -n4
displayName: Run integration tests
# These three are all similar except secure environment variables, which MUST be passed along to their tasks,
# but there's probably a better way to do this!
- job: SnowflakeIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: UnitTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py-snowflake -- -v -n4
env:
SNOWFLAKE_TEST_ACCOUNT: $(SNOWFLAKE_TEST_ACCOUNT)
SNOWFLAKE_TEST_PASSWORD: $(SNOWFLAKE_TEST_PASSWORD)
SNOWFLAKE_TEST_USER: $(SNOWFLAKE_TEST_USER)
SNOWFLAKE_TEST_WAREHOUSE: $(SNOWFLAKE_TEST_WAREHOUSE)
SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN: $(SNOWFLAKE_TEST_OAUTH_REFRESH_TOKEN)
SNOWFLAKE_TEST_OAUTH_CLIENT_ID: $(SNOWFLAKE_TEST_OAUTH_CLIENT_ID)
SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET: $(SNOWFLAKE_TEST_OAUTH_CLIENT_SECRET)
displayName: Run integration tests
- job: BigQueryIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: UnitTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py-bigquery -- -v -n4
env:
BIGQUERY_SERVICE_ACCOUNT_JSON: $(BIGQUERY_SERVICE_ACCOUNT_JSON)
displayName: Run integration tests
- job: RedshiftIntegrationTest
pool:
vmImage: 'vs2017-win2016'
dependsOn: UnitTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip && pip install tox
displayName: 'Install dependencies'
- script: python -m tox -e py-redshift -- -v -n4
env:
REDSHIFT_TEST_DBNAME: $(REDSHIFT_TEST_DBNAME)
REDSHIFT_TEST_PASS: $(REDSHIFT_TEST_PASS)
REDSHIFT_TEST_USER: $(REDSHIFT_TEST_USER)
REDSHIFT_TEST_PORT: $(REDSHIFT_TEST_PORT)
REDSHIFT_TEST_HOST: $(REDSHIFT_TEST_HOST)
displayName: Run integration tests
- job: BuildWheel
pool:
vmImage: 'vs2017-win2016'
dependsOn:
- UnitTest
- PostgresIntegrationTest
- RedshiftIntegrationTest
- SnowflakeIntegrationTest
- BigQueryIntegrationTest
condition: succeeded()
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
architecture: 'x64'
- script: python -m pip install --upgrade pip setuptools && python -m pip install -r requirements.txt && python -m pip install -r dev-requirements.txt
displayName: Install dependencies
- task: ShellScript@2
inputs:
scriptPath: scripts/build-wheels.sh
- task: CopyFiles@2
inputs:
contents: 'dist\?(*.whl|*.tar.gz)'
TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
inputs:
pathtoPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: dists

View File

@@ -1,73 +0,0 @@
#!/usr/bin/env python
import json
import yaml
import sys
import argparse
from datetime import datetime, timezone
import dbt.clients.registry as registry
def yaml_type(fname):
with open(fname) as f:
return yaml.load(f)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--project", type=yaml_type, default="dbt_project.yml")
parser.add_argument("--namespace", required=True)
return parser.parse_args()
def get_full_name(args):
return "{}/{}".format(args.namespace, args.project["name"])
def init_project_in_packages(args, packages):
full_name = get_full_name(args)
if full_name not in packages:
packages[full_name] = {
"name": args.project["name"],
"namespace": args.namespace,
"latest": args.project["version"],
"assets": {},
"versions": {},
}
return packages[full_name]
def add_version_to_package(args, project_json):
project_json["versions"][args.project["version"]] = {
"id": "{}/{}".format(get_full_name(args), args.project["version"]),
"name": args.project["name"],
"version": args.project["version"],
"description": "",
"published_at": datetime.now(timezone.utc).astimezone().isoformat(),
"packages": args.project.get("packages") or [],
"works_with": [],
"_source": {
"type": "github",
"url": "",
"readme": "",
},
"downloads": {
"tarball": "",
"format": "tgz",
"sha1": "",
},
}
def main():
args = parse_args()
packages = registry.packages()
project_json = init_project_in_packages(args, packages)
if args.project["version"] in project_json["versions"]:
raise Exception("Version {} already in packages JSON"
.format(args.project["version"]),
file=sys.stderr)
add_version_to_package(args, project_json)
print(json.dumps(packages, indent=2))
if __name__ == "__main__":
main()

View File

@@ -1 +1 @@
recursive-include dbt/include *.py *.sql *.yml *.html *.md
recursive-include dbt/include *.py *.sql *.yml *.html *.md .gitkeep .gitignore

View File

@@ -238,12 +238,6 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
@classmethod
def _rollback(cls, connection: Connection) -> None:
"""Roll back the given connection."""
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f'In _rollback, got {connection} - not a Connection!'
)
if connection.transaction_open is False:
raise dbt.exceptions.InternalException(
f'Tried to rollback transaction on connection '
@@ -257,12 +251,6 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
@classmethod
def close(cls, connection: Connection) -> Connection:
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f'In close, got {connection} - not a Connection!'
)
# if the connection is in closed or init, there's nothing to do
if connection.state in {ConnectionState.CLOSED, ConnectionState.INIT}:
return connection

View File

@@ -16,7 +16,6 @@ from dbt.exceptions import (
get_relation_returned_multiple_results,
InternalException, NotImplementedException, RuntimeException,
)
from dbt import flags
from dbt import deprecations
from dbt.adapters.protocol import (
@@ -31,7 +30,6 @@ from dbt.contracts.graph.compiled import (
from dbt.contracts.graph.manifest import Manifest, MacroManifest
from dbt.contracts.graph.parsed import ParsedSeedNode
from dbt.exceptions import warn_or_error
from dbt.node_types import NodeType
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.utils import filter_null_values, executor
@@ -290,9 +288,7 @@ class BaseAdapter(metaclass=AdapterMeta):
def _schema_is_cached(self, database: Optional[str], schema: str) -> bool:
"""Check if the schema is cached, and by default logs if it is not."""
if flags.USE_CACHE is False:
return False
elif (database, schema) not in self.cache:
if (database, schema) not in self.cache:
logger.debug(
'On "{}": cache miss for schema "{}.{}", this is inefficient'
.format(self.nice_connection_name(), database, schema)
@@ -310,8 +306,7 @@ class BaseAdapter(metaclass=AdapterMeta):
self.Relation.create_from(self.config, node).without_identifier()
for node in manifest.nodes.values()
if (
node.resource_type in NodeType.executable() and
not node.is_ephemeral_model
node.is_relational and not node.is_ephemeral_model
)
}
@@ -342,9 +337,6 @@ class BaseAdapter(metaclass=AdapterMeta):
"""Populate the relations cache for the given schemas. Returns an
iterable of the schemas populated, as strings.
"""
if not flags.USE_CACHE:
return
cache_schemas = self._get_cache_schemas(manifest)
with executor(self.config) as tpe:
futures: List[Future[List[BaseRelation]]] = []
@@ -377,9 +369,6 @@ class BaseAdapter(metaclass=AdapterMeta):
"""Run a query that gets a populated cache of the relations in the
database and set the cache on this adapter.
"""
if not flags.USE_CACHE:
return
with self.cache.lock:
if clear:
self.cache.clear()
@@ -393,8 +382,7 @@ class BaseAdapter(metaclass=AdapterMeta):
raise_compiler_error(
'Attempted to cache a null relation for {}'.format(name)
)
if flags.USE_CACHE:
self.cache.add(relation)
self.cache.add(relation)
# so jinja doesn't render things
return ''
@@ -408,8 +396,7 @@ class BaseAdapter(metaclass=AdapterMeta):
raise_compiler_error(
'Attempted to drop a null relation for {}'.format(name)
)
if flags.USE_CACHE:
self.cache.drop(relation)
self.cache.drop(relation)
return ''
@available
@@ -430,8 +417,7 @@ class BaseAdapter(metaclass=AdapterMeta):
.format(src_name, dst_name, name)
)
if flags.USE_CACHE:
self.cache.rename(from_relation, to_relation)
self.cache.rename(from_relation, to_relation)
return ''
###
@@ -513,7 +499,7 @@ class BaseAdapter(metaclass=AdapterMeta):
def get_columns_in_relation(
self, relation: BaseRelation
) -> List[BaseColumn]:
"""Get a list of the columns in the given Relation."""
"""Get a list of the columns in the given Relation. """
raise NotImplementedException(
'`get_columns_in_relation` is not implemented for this adapter!'
)

View File

@@ -433,13 +433,14 @@ class SchemaSearchMap(Dict[InformationSchema, Set[Optional[str]]]):
for schema in schemas:
yield information_schema_name, schema
def flatten(self):
def flatten(self, allow_multiple_databases: bool = False):
new = self.__class__()
# make sure we don't have duplicates
seen = {r.database.lower() for r in self if r.database}
if len(seen) > 1:
dbt.exceptions.raise_compiler_error(str(seen))
# make sure we don't have multiple databases if allow_multiple_databases is set to False
if not allow_multiple_databases:
seen = {r.database.lower() for r in self if r.database}
if len(seen) > 1:
dbt.exceptions.raise_compiler_error(str(seen))
for information_schema_name, schema in self.search():
path = {

View File

@@ -11,7 +11,6 @@ from dbt.contracts.connection import (
Connection, ConnectionState, AdapterResponse
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt import flags
class SQLConnectionManager(BaseConnectionManager):
@@ -144,13 +143,6 @@ class SQLConnectionManager(BaseConnectionManager):
def begin(self):
connection = self.get_thread_connection()
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f'In begin, got {connection} - not a Connection!'
)
if connection.transaction_open is True:
raise dbt.exceptions.InternalException(
'Tried to begin a new transaction on connection "{}", but '
@@ -163,12 +155,6 @@ class SQLConnectionManager(BaseConnectionManager):
def commit(self):
connection = self.get_thread_connection()
if flags.STRICT_MODE:
if not isinstance(connection, Connection):
raise dbt.exceptions.CompilerException(
f'In commit, got {connection} - not a Connection!'
)
if connection.transaction_open is False:
raise dbt.exceptions.InternalException(
'Tried to commit transaction on connection "{}", but '

View File

@@ -35,7 +35,11 @@ class ISODateTime(agate.data_types.DateTime):
)
def build_type_tester(text_columns: Iterable[str]) -> agate.TypeTester:
def build_type_tester(
text_columns: Iterable[str],
string_null_values: Optional[Iterable[str]] = ('null', '')
) -> agate.TypeTester:
types = [
agate.data_types.Number(null_values=('null', '')),
agate.data_types.Date(null_values=('null', ''),
@@ -46,10 +50,10 @@ def build_type_tester(text_columns: Iterable[str]) -> agate.TypeTester:
agate.data_types.Boolean(true_values=('true',),
false_values=('false',),
null_values=('null', '')),
agate.data_types.Text(null_values=('null', ''))
agate.data_types.Text(null_values=string_null_values)
]
force = {
k: agate.data_types.Text(null_values=('null', ''))
k: agate.data_types.Text(null_values=string_null_values)
for k in text_columns
}
return agate.TypeTester(force=force, types=types)
@@ -66,7 +70,13 @@ def table_from_rows(
if text_only_columns is None:
column_types = DEFAULT_TYPE_TESTER
else:
column_types = build_type_tester(text_only_columns)
# If text_only_columns are present, prevent coercing empty string or
# literal 'null' strings to a None representation.
column_types = build_type_tester(
text_only_columns,
string_null_values=()
)
return agate.Table(rows, column_names, column_types=column_types)
@@ -86,19 +96,34 @@ def table_from_data(data, column_names: Iterable[str]) -> agate.Table:
def table_from_data_flat(data, column_names: Iterable[str]) -> agate.Table:
"Convert list of dictionaries into an Agate table"
"""
Convert a list of dictionaries into an Agate table. This method does not
coerce string values into more specific types (eg. '005' will not be
coerced to '5'). Additionally, this method does not coerce values to
None (eg. '' or 'null' will retain their string literal representations).
"""
rows = []
text_only_columns = set()
for _row in data:
row = []
for value in list(_row.values()):
for col_name in column_names:
value = _row[col_name]
if isinstance(value, (dict, list, tuple)):
row.append(json.dumps(value, cls=dbt.utils.JSONEncoder))
else:
row.append(value)
# Represent container types as json strings
value = json.dumps(value, cls=dbt.utils.JSONEncoder)
text_only_columns.add(col_name)
elif isinstance(value, str):
text_only_columns.add(col_name)
row.append(value)
rows.append(row)
return table_from_rows(rows=rows, column_names=column_names)
return table_from_rows(
rows=rows,
column_names=column_names,
text_only_columns=text_only_columns
)
def empty_table():

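For context, a minimal standalone sketch (not part of this diff; the column name and values are invented) of what the new string_null_values parameter changes: with null_values=() the literal strings 'null' and '' stay as text, while the old default coerces them to None.
import agate
keep_strings = agate.TypeTester(types=[agate.data_types.Text(null_values=())])
coerce_nulls = agate.TypeTester(types=[agate.data_types.Text(null_values=('null', ''))])
rows = [['null'], ['']]
kept = agate.Table(rows, ['status'], column_types=keep_strings)
coerced = agate.Table(rows, ['status'], column_types=coerce_nulls)
print([r['status'] for r in kept.rows])     # ['null', ''] - preserved as strings
print([r['status'] for r in coerced.rows])  # [None, None] - coerced to None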
View File

@@ -153,7 +153,7 @@ def statically_parse_adapter_dispatch(func_call, ctx, db_wrapper):
package_name = packages_arg.node.node.name
macro_name = packages_arg.node.attr
if (macro_name.startswith('_get') and 'namespaces' in macro_name):
# noqa: https://github.com/fishtown-analytics/dbt-utils/blob/9e9407b/macros/cross_db_utils/_get_utils_namespaces.sql
# noqa: https://github.com/dbt-labs/dbt-utils/blob/9e9407b/macros/cross_db_utils/_get_utils_namespaces.sql
var_name = f'{package_name}_dispatch_list'
# hard code compatibility for fivetran_utils, just a teensy bit different
# noqa: https://github.com/fivetran/dbt_fivetran_utils/blob/0978ba2/macros/_get_utils_namespaces.sql

View File

@@ -1,10 +1,9 @@
from functools import wraps
import functools
import requests
from dbt.exceptions import RegistryException
from dbt.utils import memoized
from dbt.utils import memoized, _connection_exception_retry as connection_exception_retry
from dbt.logger import GLOBAL_LOGGER as logger
from dbt import deprecations
import os
import time
if os.getenv('DBT_PACKAGE_HUB_URL'):
DEFAULT_REGISTRY_BASE_URL = os.getenv('DBT_PACKAGE_HUB_URL')
@@ -19,26 +18,11 @@ def _get_url(url, registry_base_url=None):
return '{}{}'.format(registry_base_url, url)
def _wrap_exceptions(fn):
@wraps(fn)
def wrapper(*args, **kwargs):
max_attempts = 5
attempt = 0
while True:
attempt += 1
try:
return fn(*args, **kwargs)
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout) as exc:
if attempt < max_attempts:
time.sleep(1)
continue
raise RegistryException(
'Unable to connect to registry hub'
) from exc
return wrapper
def _get_with_retries(path, registry_base_url=None):
get_fn = functools.partial(_get, path, registry_base_url)
return connection_exception_retry(get_fn, 5)
@_wrap_exceptions
def _get(path, registry_base_url=None):
url = _get_url(path, registry_base_url)
logger.debug('Making package registry request: GET {}'.format(url))
@@ -50,22 +34,44 @@ def _get(path, registry_base_url=None):
def index(registry_base_url=None):
return _get('api/v1/index.json', registry_base_url)
return _get_with_retries('api/v1/index.json', registry_base_url)
index_cached = memoized(index)
def packages(registry_base_url=None):
return _get('api/v1/packages.json', registry_base_url)
return _get_with_retries('api/v1/packages.json', registry_base_url)
def package(name, registry_base_url=None):
return _get('api/v1/{}.json'.format(name), registry_base_url)
response = _get_with_retries('api/v1/{}.json'.format(name), registry_base_url)
# Either redirectnamespace or redirectname in the JSON response indicates a redirect
# redirectnamespace redirects based on package ownership
# redirectname redirects based on package name
# Both can be present at the same time, or neither. Falls back gracefully to the old name
if ('redirectnamespace' in response) or ('redirectname' in response):
if ('redirectnamespace' in response) and response['redirectnamespace'] is not None:
use_namespace = response['redirectnamespace']
else:
use_namespace = response['namespace']
if ('redirectname' in response) and response['redirectname'] is not None:
use_name = response['redirectname']
else:
use_name = response['name']
new_nwo = use_namespace + "/" + use_name
deprecations.warn('package-redirect', old_name=name, new_name=new_nwo)
return response
def package_version(name, version, registry_base_url=None):
return _get('api/v1/{}/{}.json'.format(name, version), registry_base_url)
return _get_with_retries('api/v1/{}/{}.json'.format(name, version), registry_base_url)
def get_available_versions(name):

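The _connection_exception_retry helper referenced above lives in dbt.utils; its implementation is not shown in this diff, so the following is only a rough sketch of the retry-with-attempt-limit pattern that replaces _wrap_exceptions. The function name, the fixed one-second backoff, and the registry URL are assumptions for illustration, not taken from the source.
import functools
import time
import requests

def connection_exception_retry(fn, max_attempts, attempt=0):
    # Retry a zero-argument callable on connection/timeout errors (sketch only)
    try:
        return fn()
    except (requests.exceptions.ConnectionError, requests.exceptions.Timeout) as exc:
        if attempt + 1 >= max_attempts:
            raise RuntimeError('Unable to connect to registry hub') from exc
        time.sleep(1)
        return connection_exception_retry(fn, max_attempts, attempt + 1)

# Usage mirrors _get_with_retries: bind the arguments first, then retry the call.
# Note: this performs a real HTTP request when run.
get_fn = functools.partial(requests.get, 'https://hub.getdbt.com/api/v1/index.json')
response = connection_exception_retry(get_fn, 5)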
View File

@@ -1,4 +1,5 @@
import errno
import functools
import fnmatch
import json
import os
@@ -15,9 +16,8 @@ from typing import (
)
import dbt.exceptions
import dbt.utils
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.utils import _connection_exception_retry as connection_exception_retry
if sys.platform == 'win32':
from ctypes import WinDLL, c_bool
@@ -30,7 +30,7 @@ def find_matching(
root_path: str,
relative_paths_to_search: List[str],
file_pattern: str,
) -> List[Dict[str, str]]:
) -> List[Dict[str, Any]]:
"""
Given an absolute `root_path`, a list of relative paths to that
absolute root path (`relative_paths_to_search`), and a `file_pattern`
@@ -61,11 +61,19 @@ def find_matching(
relative_path = os.path.relpath(
absolute_path, absolute_path_to_search
)
modification_time = 0.0
try:
modification_time = os.path.getmtime(absolute_path)
except OSError:
logger.exception(
f"Error retrieving modification time for file {absolute_path}"
)
if reobj.match(local_file):
matching.append({
'searched_path': relative_path_to_search,
'absolute_path': absolute_path,
'relative_path': relative_path,
'modification_time': modification_time,
})
return matching
@@ -441,6 +449,13 @@ def run_cmd(
return out, err
def download_with_retries(
url: str, path: str, timeout: Optional[Union[float, tuple]] = None
) -> None:
download_fn = functools.partial(download, url, path, timeout)
connection_exception_retry(download_fn, 5)
def download(
url: str, path: str, timeout: Optional[Union[float, tuple]] = None
) -> None:

View File

@@ -1,5 +1,5 @@
import dbt.exceptions
from typing import Any, Dict, Optional
import yaml
import yaml.scanner
@@ -56,7 +56,7 @@ def contextualized_yaml_error(raw_contents, error):
raw_error=error)
def safe_load(contents):
def safe_load(contents) -> Optional[Dict[str, Any]]:
return yaml.load(contents, Loader=SafeLoader)

View File

@@ -10,7 +10,7 @@ from dbt.adapters.factory import get_adapter
from dbt.clients import jinja
from dbt.clients.system import make_directory
from dbt.context.providers import generate_runtime_model
from dbt.contracts.graph.manifest import Manifest
from dbt.contracts.graph.manifest import Manifest, UniqueID
from dbt.contracts.graph.compiled import (
COMPILED_TYPES,
CompiledSchemaTestNode,
@@ -107,6 +107,18 @@ def _extend_prepended_ctes(prepended_ctes, new_prepended_ctes):
_add_prepended_cte(prepended_ctes, new_cte)
def _get_tests_for_node(manifest: Manifest, unique_id: UniqueID) -> List[UniqueID]:
""" Get a list of tests that depend on the node with the
provided unique id """
return [
node.unique_id
for _, node in manifest.nodes.items()
if node.resource_type == NodeType.Test and
unique_id in node.depends_on_nodes
]
class Linker:
def __init__(self, data=None):
if data is None:
@@ -142,7 +154,7 @@ class Linker:
include all nodes in their corresponding graph entries.
"""
out_graph = self.graph.copy()
for node_id in self.graph.nodes():
for node_id in self.graph:
data = manifest.expect(node_id).to_dict(omit_none=True)
out_graph.add_node(node_id, **data)
nx.write_gpickle(out_graph, outfile)
@@ -412,13 +424,80 @@ class Compiler:
self.link_node(linker, node, manifest)
for exposure in manifest.exposures.values():
self.link_node(linker, exposure, manifest)
# linker.add_node(exposure.unique_id)
cycle = linker.find_cycles()
if cycle:
raise RuntimeError("Found a cycle: {}".format(cycle))
self.resolve_graph(linker, manifest)
def resolve_graph(self, linker: Linker, manifest: Manifest) -> None:
""" This method adds additional edges to the DAG. For a given non-test
executable node, add an edge from an upstream test to the given node if
the set of nodes the test depends on is a proper/strict subset of the
upstream nodes for the given node. """
# Given a graph:
# model1 --> model2 --> model3
# | |
# | \/
# \/ test 2
# test1
#
# Produce the following graph:
# model1 --> model2 --> model3
# | | /\ /\
# | \/ | |
# \/ test2 ------- |
# test1 -------------------
for node_id in linker.graph:
# If node is executable (in manifest.nodes) and does _not_
# represent a test, continue.
if (
node_id in manifest.nodes and
manifest.nodes[node_id].resource_type != NodeType.Test
):
# Get *everything* upstream of the node
all_upstream_nodes = nx.traversal.bfs_tree(
linker.graph, node_id, reverse=True
)
# Get the set of upstream nodes not including the current node.
upstream_nodes = set([
n for n in all_upstream_nodes if n != node_id
])
# Get all tests that depend on any upstream nodes.
upstream_tests = []
for upstream_node in upstream_nodes:
upstream_tests += _get_tests_for_node(
manifest,
upstream_node
)
for upstream_test in upstream_tests:
# Get the set of all nodes that the test depends on
# including the upstream_node itself. This is necessary
# because tests can depend on multiple nodes (ex:
# relationship tests). Test nodes do not distinguish
# between what node the test is "testing" and what
# node(s) it depends on.
test_depends_on = set(
manifest.nodes[upstream_test].depends_on_nodes
)
# If the set of nodes that an upstream test depends on
# is a proper (or strict) subset of all upstream nodes of
# the current node, add an edge from the upstream test
# to the current node. Must be a proper/strict subset to
# avoid adding a circular dependency to the graph.
if (test_depends_on < upstream_nodes):
linker.graph.add_edge(
upstream_test,
node_id
)
def compile(self, manifest: Manifest, write=True) -> Graph:
self.initialize()
linker = Linker()

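A small, self-contained illustration (with made-up node ids, not dbt's real unique_ids) of the proper-subset rule implemented in resolve_graph above: test1 depends only on model1, which is a strict subset of model3's upstream nodes, so an edge test1 -> model3 is added and the test must finish before model3 builds; no edge is added from test1 to model2, since the test's dependencies are not a proper subset of model2's upstream set.
import networkx as nx

graph = nx.DiGraph()
graph.add_edge('model1', 'model2')  # model2 depends on model1
graph.add_edge('model2', 'model3')  # model3 depends on model2
graph.add_edge('model1', 'test1')   # test1 tests model1

test_depends_on = {'model1'}         # depends_on_nodes of test1

# Everything upstream of model3, excluding model3 itself
all_upstream = nx.traversal.bfs_tree(graph, 'model3', reverse=True)
upstream_nodes = {n for n in all_upstream if n != 'model3'}

# Proper/strict subset check, as in resolve_graph
if test_depends_on < upstream_nodes:
    graph.add_edge('test1', 'model3')

print(sorted(graph.predecessors('model3')))  # ['model2', 'test1']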
View File

@@ -1,4 +1,4 @@
# all these are just exports, they need "noqa" so flake8 will not complain.
from .profile import Profile, PROFILES_DIR, read_user_config # noqa
from .profile import Profile, read_user_config # noqa
from .project import Project, IsFQNResource # noqa
from .runtime import RuntimeConfig, UnsetProfileConfig # noqa

View File

@@ -20,10 +20,8 @@ from dbt.utils import coerce_dict_str
from .renderer import ProfileRenderer
DEFAULT_THREADS = 1
DEFAULT_PROFILES_DIR = os.path.join(os.path.expanduser('~'), '.dbt')
PROFILES_DIR = os.path.expanduser(
os.getenv('DBT_PROFILES_DIR', DEFAULT_PROFILES_DIR)
)
INVALID_PROFILE_MESSAGE = """
dbt encountered an error while trying to read your profiles.yml file.
@@ -43,7 +41,7 @@ Here, [profile name] should be replaced with a profile name
defined in your profiles.yml file. You can find profiles.yml here:
{profiles_file}/profiles.yml
""".format(profiles_file=PROFILES_DIR)
""".format(profiles_file=DEFAULT_PROFILES_DIR)
def read_profile(profiles_dir: str) -> Dict[str, Any]:
@@ -73,10 +71,10 @@ def read_user_config(directory: str) -> UserConfig:
try:
profile = read_profile(directory)
if profile:
user_cfg = coerce_dict_str(profile.get('config', {}))
if user_cfg is not None:
UserConfig.validate(user_cfg)
return UserConfig.from_dict(user_cfg)
user_config = coerce_dict_str(profile.get('config', {}))
if user_config is not None:
UserConfig.validate(user_config)
return UserConfig.from_dict(user_config)
except (RuntimeException, ValidationError):
pass
return UserConfig()
@@ -84,14 +82,32 @@ def read_user_config(directory: str) -> UserConfig:
# The Profile class is included in RuntimeConfig, so any attribute
# additions must also be set where the RuntimeConfig class is created
@dataclass
# `init=False` is a workaround for https://bugs.python.org/issue45081
@dataclass(init=False)
class Profile(HasCredentials):
profile_name: str
target_name: str
config: UserConfig
user_config: UserConfig
threads: int
credentials: Credentials
def __init__(
self,
profile_name: str,
target_name: str,
user_config: UserConfig,
threads: int,
credentials: Credentials
):
"""Explicitly defining `__init__` to work around bug in Python 3.9.7
https://bugs.python.org/issue45081
"""
self.profile_name = profile_name
self.target_name = target_name
self.user_config = user_config
self.threads = threads
self.credentials = credentials
def to_profile_info(
self, serialize_credentials: bool = False
) -> Dict[str, Any]:
@@ -106,12 +122,12 @@ class Profile(HasCredentials):
result = {
'profile_name': self.profile_name,
'target_name': self.target_name,
'config': self.config,
'user_config': self.user_config,
'threads': self.threads,
'credentials': self.credentials,
}
if serialize_credentials:
result['config'] = self.config.to_dict(omit_none=True)
result['user_config'] = self.user_config.to_dict(omit_none=True)
result['credentials'] = self.credentials.to_dict(omit_none=True)
return result
@@ -125,7 +141,7 @@ class Profile(HasCredentials):
'name': self.target_name,
'target_name': self.target_name,
'profile_name': self.profile_name,
'config': self.config.to_dict(omit_none=True),
'config': self.user_config.to_dict(omit_none=True),
})
return target
@@ -220,7 +236,7 @@ class Profile(HasCredentials):
threads: int,
profile_name: str,
target_name: str,
user_cfg: Optional[Dict[str, Any]] = None
user_config: Optional[Dict[str, Any]] = None
) -> 'Profile':
"""Create a profile from an existing set of Credentials and the
remaining information.
@@ -229,20 +245,20 @@ class Profile(HasCredentials):
:param threads: The number of threads to use for connections.
:param profile_name: The profile name used for this profile.
:param target_name: The target name used for this profile.
:param user_cfg: The user-level config block from the
:param user_config: The user-level config block from the
raw profiles, if specified.
:raises DbtProfileError: If the profile is invalid.
:returns: The new Profile object.
"""
if user_cfg is None:
user_cfg = {}
UserConfig.validate(user_cfg)
config = UserConfig.from_dict(user_cfg)
if user_config is None:
user_config = {}
UserConfig.validate(user_config)
user_config_obj: UserConfig = UserConfig.from_dict(user_config)
profile = cls(
profile_name=profile_name,
target_name=target_name,
config=config,
user_config=user_config_obj,
threads=threads,
credentials=credentials
)
@@ -295,7 +311,7 @@ class Profile(HasCredentials):
raw_profile: Dict[str, Any],
profile_name: str,
renderer: ProfileRenderer,
user_cfg: Optional[Dict[str, Any]] = None,
user_config: Optional[Dict[str, Any]] = None,
target_override: Optional[str] = None,
threads_override: Optional[int] = None,
) -> 'Profile':
@@ -307,7 +323,7 @@ class Profile(HasCredentials):
disk as yaml and its values rendered with jinja.
:param profile_name: The profile name used.
:param renderer: The config renderer.
:param user_cfg: The global config for the user, if it
:param user_config: The global config for the user, if it
was present.
:param target_override: The target to use, if provided on
the command line.
@@ -317,9 +333,9 @@ class Profile(HasCredentials):
target could not be found
:returns: The new Profile object.
"""
# user_cfg is not rendered.
if user_cfg is None:
user_cfg = raw_profile.get('config')
# user_config is not rendered.
if user_config is None:
user_config = raw_profile.get('config')
# TODO: should it be, and the values coerced to bool?
target_name, profile_data = cls.render_profile(
raw_profile, profile_name, target_override, renderer
@@ -340,7 +356,7 @@ class Profile(HasCredentials):
profile_name=profile_name,
target_name=target_name,
threads=threads,
user_cfg=user_cfg
user_config=user_config
)
@classmethod
@@ -383,13 +399,13 @@ class Profile(HasCredentials):
error_string=msg
)
)
user_cfg = raw_profiles.get('config')
user_config = raw_profiles.get('config')
return cls.from_raw_profile_info(
raw_profile=raw_profile,
profile_name=profile_name,
renderer=renderer,
user_cfg=user_cfg,
user_config=user_config,
target_override=target_override,
threads_override=threads_override,
)

View File

@@ -645,13 +645,24 @@ class Project:
def hashed_name(self):
return hashlib.md5(self.project_name.encode('utf-8')).hexdigest()
def get_selector(self, name: str) -> SelectionSpec:
def get_selector(self, name: str) -> Union[SelectionSpec, bool]:
if name not in self.selectors:
raise RuntimeException(
f'Could not find selector named {name}, expected one of '
f'{list(self.selectors)}'
)
return self.selectors[name]
return self.selectors[name]["definition"]
def get_default_selector_name(self) -> Union[str, None]:
"""This function fetch the default selector to use on `dbt run` (if any)
:return: either a selector if default is set or None
:rtype: Union[SelectionSpec, None]
"""
for selector_name, selector in self.selectors.items():
if selector["default"] is True:
return selector_name
return None
def get_macro_search_order(self, macro_namespace: str):
for dispatch_entry in self.dispatch:

View File

@@ -147,7 +147,7 @@ class DbtProjectYamlRenderer(BaseRenderer):
if first in {'seeds', 'models', 'snapshots', 'tests'}:
keypath_parts = {
(k.lstrip('+') if isinstance(k, str) else k)
(k.lstrip('+ ') if isinstance(k, str) else k)
for k in keypath
}
# model-level hooks

View File

@@ -12,6 +12,7 @@ from .profile import Profile
from .project import Project
from .renderer import DbtProjectYamlRenderer, ProfileRenderer
from .utils import parse_cli_vars
from dbt import flags
from dbt import tracking
from dbt.adapters.factory import get_relation_class_by_name, get_include_paths
from dbt.helper_types import FQNPath, PathSet
@@ -117,7 +118,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
unrendered=project.unrendered,
profile_name=profile.profile_name,
target_name=profile.target_name,
config=profile.config,
user_config=profile.user_config,
threads=profile.threads,
credentials=profile.credentials,
args=args,
@@ -144,7 +145,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
project = Project.from_project_root(
project_root,
renderer,
verify_version=getattr(self.args, 'version_check', False),
verify_version=bool(flags.VERSION_CHECK),
)
cfg = self.from_parts(
@@ -197,7 +198,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
) -> Tuple[Project, Profile]:
# profile_name from the project
project_root = args.project_dir if args.project_dir else os.getcwd()
version_check = getattr(args, 'version_check', False)
version_check = bool(flags.VERSION_CHECK)
partial = Project.partial_load(
project_root,
verify_version=version_check
@@ -391,6 +392,10 @@ class UnsetCredentials(Credentials):
def type(self):
return None
@property
def unique_field(self):
return None
def connection_info(self, *args, **kwargs):
return {}
@@ -412,7 +417,7 @@ class UnsetConfig(UserConfig):
class UnsetProfile(Profile):
def __init__(self):
self.credentials = UnsetCredentials()
self.config = UnsetConfig()
self.user_config = UnsetConfig()
self.profile_name = ''
self.target_name = ''
self.threads = -1
@@ -509,7 +514,7 @@ class UnsetProfileConfig(RuntimeConfig):
unrendered=project.unrendered,
profile_name='',
target_name='',
config=UnsetConfig(),
user_config=UnsetConfig(),
threads=getattr(args, 'threads', 1),
credentials=UnsetCredentials(),
args=args,

View File

@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Dict, Any
from typing import Dict, Any, Union
from dbt.clients.yaml_helper import ( # noqa: F401
yaml, Loader, Dumper, load_yaml_text
)
@@ -29,13 +29,14 @@ Validator Error:
"""
class SelectorConfig(Dict[str, SelectionSpec]):
class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):
@classmethod
def selectors_from_dict(cls, data: Dict[str, Any]) -> 'SelectorConfig':
try:
SelectorFile.validate(data)
selector_file = SelectorFile.from_dict(data)
validate_selector_default(selector_file)
selectors = parse_from_selectors_definition(selector_file)
except ValidationError as exc:
yaml_sel_cfg = yaml.dump(exc.instance)
@@ -118,6 +119,24 @@ def selector_config_from_data(
return selectors
def validate_selector_default(selector_file: SelectorFile) -> None:
"""Check if a selector.yml file has more than 1 default key set to true"""
default_set: bool = False
default_selector_name: Union[str, None] = None
for selector in selector_file.selectors:
if selector.default is True and default_set is False:
default_set = True
default_selector_name = selector.name
continue
if selector.default is True and default_set is True:
raise DbtSelectorsError(
"Error when parsing the selector file. "
"Found multiple selectors with `default: true`:"
f"{default_selector_name} and {selector.name}"
)
# These are utilities to clean up the dictionary created from
# selectors.yml by turning the cli-string format entries into
# normalized dictionary entries. It parallels the flow in

View File
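To make the new `default` flag concrete, here is a sketch (selector names and definitions invented for illustration) of the data selectors_from_dict receives after the YAML is loaded; a single selector marked default is returned by get_default_selector_name, while marking a second one default would trigger the "multiple selectors with `default: true`" error raised above.
selector_data = {
    'selectors': [
        {'name': 'nightly', 'definition': 'tag:nightly', 'default': True},
        {'name': 'hourly', 'definition': 'tag:hourly'},  # default stays False
        # Adding 'default': True to a second entry would make
        # validate_selector_default raise a DbtSelectorsError.
    ]
}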

@@ -526,8 +526,6 @@ class BaseContext(metaclass=ContextMeta):
The list of valid flags are:
- `flags.STRICT_MODE`: True if `--strict` (or `-S`) was provided on the
command line
- `flags.FULL_REFRESH`: True if `--full-refresh` was provided on the
command line
- `flags.NON_DESTRUCTIVE`: True if `--non-destructive` was provided on

View File

@@ -97,7 +97,7 @@ class BaseContextConfigGenerator(Generic[T]):
result = {}
for key, value in level_config.items():
if key.startswith('+'):
result[key[1:]] = deepcopy(value)
result[key[1:].strip()] = deepcopy(value)
elif not isinstance(value, dict):
result[key] = deepcopy(value)
@@ -120,11 +120,12 @@ class BaseContextConfigGenerator(Generic[T]):
def calculate_node_config(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: Dict[str, Any] = None
) -> BaseConfig:
own_config = self.get_node_project(project_name)
@@ -134,8 +135,15 @@ class BaseContextConfigGenerator(Generic[T]):
for fqn_config in project_configs:
result = self._update_from_config(result, fqn_config)
for config_call in config_calls:
result = self._update_from_config(result, config_call)
# When schema files patch config, it has lower precedence than
# config in the models (config_call_dict), so we add the patch_config_dict
# before the config_call_dict
if patch_config_dict:
result = self._update_from_config(result, patch_config_dict)
# config_calls are created in the 'experimental' model parser and
# the ParseConfigObject (via add_config_call)
result = self._update_from_config(result, config_call_dict)
if own_config.project_name != self._active_project.project_name:
for fqn_config in self._active_project_configs(fqn, resource_type):
@@ -147,11 +155,12 @@ class BaseContextConfigGenerator(Generic[T]):
@abstractmethod
def calculate_node_config_dict(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: Dict[str, Any],
) -> Dict[str, Any]:
...
@@ -186,18 +195,20 @@ class ContextConfigGenerator(BaseContextConfigGenerator[C]):
def calculate_node_config_dict(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: dict = None
) -> Dict[str, Any]:
config = self.calculate_node_config(
config_calls=config_calls,
config_call_dict=config_call_dict,
fqn=fqn,
resource_type=resource_type,
project_name=project_name,
base=base,
patch_config_dict=patch_config_dict
)
finalized = config.finalize_and_validate()
return finalized.to_dict(omit_none=True)
@@ -209,18 +220,20 @@ class UnrenderedConfigGenerator(BaseContextConfigGenerator[Dict[str, Any]]):
def calculate_node_config_dict(
self,
config_calls: List[Dict[str, Any]],
config_call_dict: Dict[str, Any],
fqn: List[str],
resource_type: NodeType,
project_name: str,
base: bool,
patch_config_dict: dict = None
) -> Dict[str, Any]:
return self.calculate_node_config(
config_calls=config_calls,
config_call_dict=config_call_dict,
fqn=fqn,
resource_type=resource_type,
project_name=project_name,
base=base,
patch_config_dict=patch_config_dict
)
def initial_result(
@@ -251,20 +264,39 @@ class ContextConfig:
resource_type: NodeType,
project_name: str,
) -> None:
self._config_calls: List[Dict[str, Any]] = []
self._config_call_dict: Dict[str, Any] = {}
self._active_project = active_project
self._fqn = fqn
self._resource_type = resource_type
self._project_name = project_name
def update_in_model_config(self, opts: Dict[str, Any]) -> None:
self._config_calls.append(opts)
def add_config_call(self, opts: Dict[str, Any]) -> None:
dct = self._config_call_dict
self._add_config_call(dct, opts)
@classmethod
def _add_config_call(cls, config_call_dict, opts: Dict[str, Any]) -> None:
for k, v in opts.items():
# MergeBehavior for post-hook and pre-hook is to collect all
# values, instead of overwriting
if k in BaseConfig.mergebehavior['append']:
if not isinstance(v, list):
v = [v]
if k in BaseConfig.mergebehavior['update'] and not isinstance(v, dict):
raise InternalException(f'expected dict, got {v}')
if k in config_call_dict and isinstance(config_call_dict[k], list):
config_call_dict[k].extend(v)
elif k in config_call_dict and isinstance(config_call_dict[k], dict):
config_call_dict[k].update(v)
else:
config_call_dict[k] = v
def build_config_dict(
self,
base: bool = False,
*,
rendered: bool = True,
patch_config_dict: dict = None
) -> Dict[str, Any]:
if rendered:
src = ContextConfigGenerator(self._active_project)
@@ -272,9 +304,10 @@ class ContextConfig:
src = UnrenderedConfigGenerator(self._active_project)
return src.calculate_node_config_dict(
config_calls=self._config_calls,
config_call_dict=self._config_call_dict,
fqn=self._fqn,
resource_type=self._resource_type,
project_name=self._project_name,
base=base,
patch_config_dict=patch_config_dict
)
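A small standalone sketch (simplified; not the dbt classes themselves) of the merge behavior _add_config_call implements for repeated {{ config(...) }} calls: 'append' keys such as tags accumulate, 'update' keys such as quoting are dict-merged, and everything else is clobbered by the latest call. The validation of non-dict 'update' values is omitted here.
merge_behavior = {
    'append': ['pre-hook', 'pre_hook', 'post-hook', 'post_hook', 'tags'],
    'update': ['quoting', 'column_types', 'meta'],
}

def add_config_call(config_call_dict, opts):
    # Mirrors the shape of ContextConfig._add_config_call above (sketch only)
    for k, v in opts.items():
        if k in merge_behavior['append'] and not isinstance(v, list):
            v = [v]
        if k in config_call_dict and isinstance(config_call_dict[k], list):
            config_call_dict[k].extend(v)
        elif k in config_call_dict and isinstance(config_call_dict[k], dict):
            config_call_dict[k].update(v)
        else:
            config_call_dict[k] = v

combined = {}
add_config_call(combined, {'materialized': 'table', 'tags': 'nightly'})
add_config_call(combined, {'materialized': 'view', 'tags': ['finance'],
                           'quoting': {'identifier': True}})
print(combined)
# {'materialized': 'view', 'tags': ['nightly', 'finance'], 'quoting': {'identifier': True}}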

View File

@@ -169,6 +169,8 @@ class TestMacroNamespace:
def recursively_get_depends_on_macros(self, depends_on_macros, dep_macros):
for macro_unique_id in depends_on_macros:
if macro_unique_id in dep_macros:
continue
dep_macros.append(macro_unique_id)
if macro_unique_id in self.macro_resolver.macros:
macro = self.macro_resolver.macros[macro_unique_id]

View File

@@ -279,7 +279,7 @@ class Config(Protocol):
...
# `config` implementations
# Implementation of "config(..)" calls in models
class ParseConfigObject(Config):
def __init__(self, model, context_config: Optional[ContextConfig]):
self.model = model
@@ -316,7 +316,7 @@ class ParseConfigObject(Config):
raise RuntimeException(
'At parse time, did not receive a context config'
)
self.context_config.update_in_model_config(opts)
self.context_config.add_config_call(opts)
return ''
def set(self, name, value):
@@ -1243,7 +1243,7 @@ class ModelContext(ProviderContext):
@contextproperty
def pre_hooks(self) -> List[Dict[str, Any]]:
if isinstance(self.model, ParsedSourceDefinition):
if self.model.resource_type in [NodeType.Source, NodeType.Test]:
return []
return [
h.to_dict(omit_none=True) for h in self.model.config.pre_hook
@@ -1251,7 +1251,7 @@ class ModelContext(ProviderContext):
@contextproperty
def post_hooks(self) -> List[Dict[str, Any]]:
if isinstance(self.model, ParsedSourceDefinition):
if self.model.resource_type in [NodeType.Source, NodeType.Test]:
return []
return [
h.to_dict(omit_none=True) for h in self.model.config.post_hook

View File

@@ -1,5 +1,6 @@
import abc
import itertools
import hashlib
from dataclasses import dataclass, field
from typing import (
Any, ClassVar, Dict, Tuple, Iterable, Optional, List, Callable,
@@ -127,6 +128,15 @@ class Credentials(
'type not implemented for base credentials class'
)
@abc.abstractproperty
def unique_field(self) -> str:
raise NotImplementedError(
'type not implemented for base credentials class'
)
def hashed_unique_field(self) -> str:
return hashlib.md5(self.unique_field.encode('utf-8')).hexdigest()
def connection_info(
self, *, with_aliases: bool = False
) -> Iterable[Tuple[str, Any]]:
@@ -176,14 +186,11 @@ class UserConfigContract(Protocol):
partial_parse: Optional[bool] = None
printer_width: Optional[int] = None
def set_values(self, cookie_dir: str) -> None:
...
class HasCredentials(Protocol):
credentials: Credentials
profile_name: str
config: UserConfigContract
user_config: UserConfigContract
target_name: str
threads: int
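The new unique_field/hashed_unique_field pair boils down to hashing one identifying credential attribute; a minimal sketch follows, where the attribute name `host` is only an assumption chosen for illustration (each adapter picks its own field).
import hashlib

class SketchCredentials:
    def __init__(self, host: str):
        self.host = host

    @property
    def unique_field(self) -> str:
        # the identifying attribute chosen per adapter (assumed here)
        return self.host

    def hashed_unique_field(self) -> str:
        # same md5 hashing as Credentials.hashed_unique_field above
        return hashlib.md5(self.unique_field.encode('utf-8')).hexdigest()

print(SketchCredentials('db.example.com').hashed_unique_field())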

View File

@@ -42,6 +42,7 @@ parse_file_type_to_parser = {
class FilePath(dbtClassMixin):
searched_path: str
relative_path: str
modification_time: float
project_root: str
@property
@@ -132,6 +133,10 @@ class RemoteFile(dbtClassMixin):
def original_file_path(self):
return 'from remote system'
@property
def modification_time(self):
return 'from remote system'
@dataclass
class BaseSourceFile(dbtClassMixin, SerializableType):
@@ -150,26 +155,15 @@ class BaseSourceFile(dbtClassMixin, SerializableType):
def file_id(self):
if isinstance(self.path, RemoteFile):
return None
if self.checksum.name == 'none':
return None
return f'{self.project_name}://{self.path.original_file_path}'
def _serialize(self):
dct = self.to_dict()
if 'pp_files' in dct:
del dct['pp_files']
if 'pp_test_index' in dct:
del dct['pp_test_index']
return dct
@classmethod
def _deserialize(cls, dct: Dict[str, int]):
if dct['parse_file_type'] == 'schema':
# TODO: why are these keys even here
if 'pp_files' in dct:
del dct['pp_files']
if 'pp_test_index' in dct:
del dct['pp_test_index']
sf = SchemaSourceFile.from_dict(dct)
else:
sf = SourceFile.from_dict(dct)
@@ -223,13 +217,13 @@ class SourceFile(BaseSourceFile):
class SchemaSourceFile(BaseSourceFile):
dfy: Dict[str, Any] = field(default_factory=dict)
# these are in the manifest.nodes dictionary
tests: List[str] = field(default_factory=list)
tests: Dict[str, Any] = field(default_factory=dict)
sources: List[str] = field(default_factory=list)
exposures: List[str] = field(default_factory=list)
# node patches contain models, seeds, snapshots, analyses
ndp: List[str] = field(default_factory=list)
# any macro patches in this file by macro unique_id.
mcp: List[str] = field(default_factory=list)
mcp: Dict[str, str] = field(default_factory=dict)
# any source patches in this file. The entries are package, name pairs
# Patches are only against external sources. Sources can be
# created too, but those are in 'sources'
@@ -255,14 +249,53 @@ class SchemaSourceFile(BaseSourceFile):
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
if 'pp_files' in dct:
del dct['pp_files']
if 'pp_test_index' in dct:
del dct['pp_test_index']
# Remove partial parsing specific data
for key in ('pp_files', 'pp_test_index', 'pp_dict'):
if key in dct:
del dct[key]
return dct
def append_patch(self, yaml_key, unique_id):
self.node_patches.append(unique_id)
def add_test(self, node_unique_id, test_from):
name = test_from['name']
key = test_from['key']
if key not in self.tests:
self.tests[key] = {}
if name not in self.tests[key]:
self.tests[key][name] = []
self.tests[key][name].append(node_unique_id)
def remove_tests(self, yaml_key, name):
if yaml_key in self.tests:
if name in self.tests[yaml_key]:
del self.tests[yaml_key][name]
def get_tests(self, yaml_key, name):
if yaml_key in self.tests:
if name in self.tests[yaml_key]:
return self.tests[yaml_key][name]
return []
def get_key_and_name_for_test(self, test_unique_id):
yaml_key = None
block_name = None
for key in self.tests.keys():
for name in self.tests[key]:
for unique_id in self.tests[key][name]:
if unique_id == test_unique_id:
yaml_key = key
block_name = name
break
return (yaml_key, block_name)
def get_all_test_ids(self):
test_ids = []
for key in self.tests.keys():
for name in self.tests[key]:
test_ids.extend(self.tests[key][name])
return test_ids
AnySourceFile = Union[SchemaSourceFile, SourceFile]
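To illustrate the new shape of SchemaSourceFile.tests (a nested mapping instead of a flat list of unique_ids), here is a sketch with invented yaml keys, block names, and test unique_ids showing the structure that add_test, get_tests, and remove_tests operate on.
# After two generic tests are attached to a model block named 'customers'
# under the 'models' yaml key, the mapping looks roughly like this:
tests = {
    'models': {
        'customers': [
            'test.my_project.unique_customers_id',
            'test.my_project.not_null_customers_id',
        ],
    },
}
# get_tests('models', 'customers') returns the list of unique_ids;
# remove_tests('models', 'customers') drops them during partial parsing.
print(tests['models']['customers'])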

View File

@@ -109,7 +109,9 @@ class CompiledSnapshotNode(CompiledNode):
@dataclass
class CompiledDataTestNode(CompiledNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
config: TestConfig = field(default_factory=TestConfig)
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type:ignore
@dataclass
@@ -117,7 +119,9 @@ class CompiledSchemaTestNode(CompiledNode, HasTestMetadata):
# keep this in sync with ParsedSchemaTestNode!
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
column_name: Optional[str] = None
config: TestConfig = field(default_factory=TestConfig)
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type:ignore
def same_contents(self, other) -> bool:
if other is None:

View File

@@ -14,7 +14,7 @@ from dbt.contracts.graph.compiled import (
CompileResultNode, ManifestNode, NonSourceCompiledNode, GraphMemberNode
)
from dbt.contracts.graph.parsed import (
ParsedMacro, ParsedDocumentation, ParsedNodePatch, ParsedMacroPatch,
ParsedMacro, ParsedDocumentation,
ParsedSourceDefinition, ParsedExposure, HasUniqueID,
UnpatchedSourceDefinition, ManifestNodes
)
@@ -26,9 +26,7 @@ from dbt.contracts.util import (
from dbt.dataclass_schema import dbtClassMixin
from dbt.exceptions import (
CompilationException,
raise_duplicate_resource_name, raise_compiler_error, warn_or_error,
raise_duplicate_patch_name,
raise_duplicate_macro_patch_name, raise_duplicate_source_patch_name,
raise_duplicate_resource_name, raise_compiler_error,
)
from dbt.helper_types import PathSet
from dbt.logger import GLOBAL_LOGGER as logger
@@ -172,7 +170,7 @@ class RefableLookup(dbtClassMixin):
class AnalysisLookup(RefableLookup):
_lookup_types: ClassVar[set] = set(NodeType.Analysis)
_lookup_types: ClassVar[set] = set([NodeType.Analysis])
def _search_packages(
@@ -225,9 +223,7 @@ class ManifestMetadata(BaseArtifactMetadata):
self.user_id = tracking.active_user.id
if self.send_anonymous_usage_stats is None:
self.send_anonymous_usage_stats = (
not tracking.active_user.do_not_track
)
self.send_anonymous_usage_stats = flags.SEND_ANONYMOUS_USAGE_STATS
@classmethod
def default(cls):
@@ -243,7 +239,7 @@ def _sort_values(dct):
return {k: sorted(v) for k, v in dct.items()}
def build_edges(nodes: List[ManifestNode]):
def build_node_edges(nodes: List[ManifestNode]):
"""Build the forward and backward edges on the given list of ParsedNodes
and return them as two separate dictionaries, each mapping unique IDs to
lists of edges.
@@ -259,6 +255,18 @@ def build_edges(nodes: List[ManifestNode]):
return _sort_values(forward_edges), _sort_values(backward_edges)
# Build a map of children of macros
def build_macro_edges(nodes: List[Any]):
forward_edges: Dict[str, List[str]] = {
n.unique_id: [] for n in nodes if n.unique_id.startswith('macro') or n.depends_on.macros
}
for node in nodes:
for unique_id in node.depends_on.macros:
if unique_id in forward_edges.keys():
forward_edges[unique_id].append(node.unique_id)
return _sort_values(forward_edges)
def _deepcopy(value):
return value.from_dict(value.to_dict(omit_none=True))
@@ -525,6 +533,12 @@ class MacroMethods:
return candidates
@dataclass
class ParsingInfo:
static_analysis_parsed_path_count: int = 0
static_analysis_path_count: int = 0
@dataclass
class ManifestStateCheck(dbtClassMixin):
vars_hash: FileHash = field(default_factory=FileHash.empty)
@@ -566,9 +580,13 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
_analysis_lookup: Optional[AnalysisLookup] = field(
default=None, metadata={'serialize': lambda x: None, 'deserialize': lambda x: None}
)
_parsing_info: ParsingInfo = field(
default_factory=ParsingInfo,
metadata={'serialize': lambda x: None, 'deserialize': lambda x: None}
)
_lock: Lock = field(
default_factory=flags.MP_CONTEXT.Lock,
metadata={'serialize': lambda x: None, 'deserialize': lambda x: flags.MP_CONTEXT.Lock}
metadata={'serialize': lambda x: None, 'deserialize': lambda x: None}
)
def __pre_serialize__(self):
@@ -577,6 +595,11 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.source_patches = {}
return self
@classmethod
def __post_deserialize__(cls, obj):
obj._lock = flags.MP_CONTEXT.Lock()
return obj
def sync_update_node(
self, new_node: NonSourceCompiledNode
) -> NonSourceCompiledNode:
@@ -691,60 +714,6 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
resource_fqns[resource_type_plural].add(tuple(resource.fqn))
return resource_fqns
# This is called by 'parse_patch' in the NodePatchParser
def add_patch(
self, source_file: SchemaSourceFile, patch: ParsedNodePatch,
) -> None:
if patch.yaml_key in ['models', 'seeds', 'snapshots']:
unique_id = self.ref_lookup.get_unique_id(patch.name, None)
elif patch.yaml_key == 'analyses':
unique_id = self.analysis_lookup.get_unique_id(patch.name, None)
else:
raise dbt.exceptions.InternalException(
f'Unexpected yaml_key {patch.yaml_key} for patch in '
f'file {source_file.path.original_file_path}'
)
if unique_id is None:
# This will usually happen when a node is disabled
return
# patches can't be overwritten
node = self.nodes.get(unique_id)
if node:
if node.patch_path:
package_name, existing_file_path = node.patch_path.split('://')
raise_duplicate_patch_name(patch, existing_file_path)
source_file.append_patch(patch.yaml_key, unique_id)
node.patch(patch)
def add_macro_patch(
self, source_file: SchemaSourceFile, patch: ParsedMacroPatch,
) -> None:
# macros are fully namespaced
unique_id = f'macro.{patch.package_name}.{patch.name}'
macro = self.macros.get(unique_id)
if not macro:
warn_or_error(
f'WARNING: Found documentation for macro "{patch.name}" '
f'which was not found'
)
return
if macro.patch_path:
package_name, existing_file_path = macro.patch_path.split('://')
raise_duplicate_macro_patch_name(patch, existing_file_path)
source_file.macro_patches.append(unique_id)
macro.patch(patch)
def add_source_patch(
self, source_file: SchemaSourceFile, patch: SourcePatch,
) -> None:
# source patches must be unique
key = (patch.overrides, patch.name)
if key in self.source_patches:
raise_duplicate_source_patch_name(patch, self.source_patches[key])
self.source_patches[key] = patch
source_file.source_patches.append(key)
def get_used_schemas(self, resource_types=None):
return frozenset({
(node.database, node.schema) for node in
@@ -779,10 +748,18 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.sources.values(),
self.exposures.values(),
))
forward_edges, backward_edges = build_edges(edge_members)
forward_edges, backward_edges = build_node_edges(edge_members)
self.child_map = forward_edges
self.parent_map = backward_edges
def build_macro_child_map(self):
edge_members = list(chain(
self.nodes.values(),
self.macros.values(),
))
forward_edges = build_macro_edges(edge_members)
return forward_edges
def writable_manifest(self):
self.build_parent_and_child_maps()
return WritableManifest(
@@ -1016,10 +993,11 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
_check_duplicates(node, self.nodes)
self.nodes[node.unique_id] = node
def add_node(self, source_file: AnySourceFile, node: ManifestNodes):
def add_node(self, source_file: AnySourceFile, node: ManifestNodes, test_from=None):
self.add_node_nofile(node)
if isinstance(source_file, SchemaSourceFile):
source_file.tests.append(node.unique_id)
assert test_from
source_file.add_test(node.unique_id, test_from)
else:
source_file.nodes.append(node.unique_id)
@@ -1034,10 +1012,11 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
else:
self._disabled[node.unique_id] = [node]
def add_disabled(self, source_file: AnySourceFile, node: CompileResultNode):
def add_disabled(self, source_file: AnySourceFile, node: CompileResultNode, test_from=None):
self.add_disabled_nofile(node)
if isinstance(source_file, SchemaSourceFile):
source_file.tests.append(node.unique_id)
assert test_from
source_file.add_test(node.unique_id, test_from)
else:
source_file.nodes.append(node.unique_id)

View File

@@ -2,13 +2,13 @@ from dataclasses import field, Field, dataclass
from enum import Enum
from itertools import chain
from typing import (
Any, List, Optional, Dict, Union, Type, TypeVar
Any, List, Optional, Dict, Union, Type, TypeVar, Callable
)
from dbt.dataclass_schema import (
dbtClassMixin, ValidationError, register_pattern,
)
from dbt.contracts.graph.unparsed import AdditionalPropertiesAllowed
from dbt.exceptions import InternalException
from dbt.exceptions import InternalException, CompilationException
from dbt.contracts.util import Replaceable, list_str
from dbt import hooks
from dbt.node_types import NodeType
@@ -204,6 +204,34 @@ class BaseConfig(
else:
self._extra[key] = value
def __delitem__(self, key):
if hasattr(self, key):
msg = (
'Error, tried to delete config key "{}": Cannot delete '
'built-in keys'
).format(key)
raise CompilationException(msg)
else:
del self._extra[key]
def _content_iterator(self, include_condition: Callable[[Field], bool]):
seen = set()
for fld, _ in self._get_fields():
seen.add(fld.name)
if include_condition(fld):
yield fld.name
for key in self._extra:
if key not in seen:
seen.add(key)
yield key
def __iter__(self):
yield from self._content_iterator(include_condition=lambda f: True)
def __len__(self):
return len(self._get_fields()) + len(self._extra)
@staticmethod
def compare_key(
unrendered: Dict[str, Any],
@@ -239,8 +267,15 @@ class BaseConfig(
return False
return True
# This is used in 'add_config_call' to create the combined config_call_dict.
# 'meta' moved here from node
mergebehavior = {
"append": ['pre-hook', 'pre_hook', 'post-hook', 'post_hook', 'tags'],
"update": ['quoting', 'column_types', 'meta'],
}
@classmethod
def _extract_dict(
def _merge_dicts(
cls, src: Dict[str, Any], data: Dict[str, Any]
) -> Dict[str, Any]:
"""Find all the items in data that match a target_field on this class,
@@ -286,10 +321,10 @@ class BaseConfig(
adapter_config_cls = get_config_class_by_name(adapter_type)
self_merged = self._extract_dict(dct, data)
self_merged = self._merge_dicts(dct, data)
dct.update(self_merged)
adapter_merged = adapter_config_cls._extract_dict(dct, data)
adapter_merged = adapter_config_cls._merge_dicts(dct, data)
dct.update(adapter_merged)
# any remaining fields must be "clobber"
@@ -321,33 +356,8 @@ class SourceConfig(BaseConfig):
@dataclass
class NodeConfig(BaseConfig):
class NodeAndTestConfig(BaseConfig):
enabled: bool = True
materialized: str = 'view'
persist_docs: Dict[str, Any] = field(default_factory=dict)
post_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
pre_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
# this only applies for config v1, so it doesn't participate in comparison
vars: Dict[str, Any] = field(
default_factory=dict,
metadata=metas(CompareBehavior.Exclude, MergeBehavior.Update),
)
quoting: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
# This is actually only used by seeds. Should it be available to others?
# That would be a breaking change!
column_types: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
# these fields are included in serialized output, but are not part of
# config comparison (they are part of database_representation)
alias: Optional[str] = field(
@@ -368,7 +378,38 @@ class NodeConfig(BaseConfig):
MergeBehavior.Append,
CompareBehavior.Exclude),
)
meta: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
@dataclass
class NodeConfig(NodeAndTestConfig):
# Note: if any new fields are added with MergeBehavior, also update the
# 'mergebehavior' dictionary
materialized: str = 'view'
persist_docs: Dict[str, Any] = field(default_factory=dict)
post_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
pre_hook: List[Hook] = field(
default_factory=list,
metadata=MergeBehavior.Append.meta(),
)
quoting: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
# This is actually only used by seeds. Should it be available to others?
# That would be a breaking change!
column_types: Dict[str, Any] = field(
default_factory=dict,
metadata=MergeBehavior.Update.meta(),
)
full_refresh: Optional[bool] = None
on_schema_change: Optional[str] = 'ignore'
@classmethod
def __pre_deserialize__(cls, data):
@@ -410,7 +451,8 @@ class SeedConfig(NodeConfig):
@dataclass
class TestConfig(NodeConfig):
class TestConfig(NodeAndTestConfig):
# this is repeated because of a different default
schema: Optional[str] = field(
default='dbt_test__audit',
metadata=CompareBehavior.Exclude.meta(),

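A toy sketch (not the real BaseConfig, and using RuntimeError in place of CompilationException) of the deletion rule added above: built-in config fields cannot be removed, while ad-hoc keys stored in _extra can.
class SketchConfig:
    def __init__(self):
        self.enabled = True                      # "built-in" field
        self._extra = {'custom_key': 'value'}    # ad-hoc extra keys

    def __delitem__(self, key):
        if hasattr(self, key):
            raise RuntimeError(
                'Error, tried to delete config key "{}": Cannot delete '
                'built-in keys'.format(key)
            )
        del self._extra[key]

cfg = SketchConfig()
del cfg['custom_key']      # fine: ad-hoc key removed from _extra
try:
    del cfg['enabled']     # raises: built-in keys are protected
except RuntimeError as exc:
    print(exc)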
View File

@@ -148,6 +148,7 @@ class ParsedNodeMixins(dbtClassMixin):
"""Given a ParsedNodePatch, add the new information to the node."""
# explicitly pick out the parts to update so we don't inadvertently
# step on the model name or anything
# Note: config should already be updated
self.patch_path: Optional[str] = patch.file_id
# update created_at so process_docs will run in partial parsing
self.created_at = int(time.time())
@@ -155,20 +156,10 @@ class ParsedNodeMixins(dbtClassMixin):
self.columns = patch.columns
self.meta = patch.meta
self.docs = patch.docs
if flags.STRICT_MODE:
# It seems odd that an instance can be invalid
# Maybe there should be validation or restrictions
# elsewhere?
assert isinstance(self, dbtClassMixin)
dct = self.to_dict(omit_none=False)
self.validate(dct)
def get_materialization(self):
return self.config.materialized
def local_vars(self):
return self.config.vars
@dataclass
class ParsedNodeMandatory(
@@ -203,6 +194,7 @@ class ParsedNodeDefaults(ParsedNodeMandatory):
deferred: bool = False
unrendered_config: Dict[str, Any] = field(default_factory=dict)
created_at: int = field(default_factory=lambda: int(time.time()))
config_call_dict: Dict[str, Any] = field(default_factory=dict)
def write_node(self, target_path: str, subdirectory: str, payload: str):
if (os.path.basename(self.path) ==
@@ -229,6 +221,11 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins, SerializableType):
def _serialize(self):
return self.to_dict()
def __post_serialize__(self, dct):
if 'config_call_dict' in dct:
del dct['config_call_dict']
return dct
@classmethod
def _deserialize(cls, dct: Dict[str, int]):
# The serialized ParsedNodes do not differ from each other
@@ -258,10 +255,16 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins, SerializableType):
return cls.from_dict(dct)
def _persist_column_docs(self) -> bool:
return bool(self.config.persist_docs.get('columns'))
if hasattr(self.config, 'persist_docs'):
assert isinstance(self.config, NodeConfig)
return bool(self.config.persist_docs.get('columns'))
return False
def _persist_relation_docs(self) -> bool:
return bool(self.config.persist_docs.get('relation'))
if hasattr(self.config, 'persist_docs'):
assert isinstance(self.config, NodeConfig)
return bool(self.config.persist_docs.get('relation'))
return False
def same_body(self: T, other: T) -> bool:
return self.raw_sql == other.raw_sql
@@ -411,7 +414,9 @@ class HasTestMetadata(dbtClassMixin):
@dataclass
class ParsedDataTestNode(ParsedNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
config: TestConfig = field(default_factory=TestConfig)
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type: ignore
@dataclass
@@ -419,7 +424,9 @@ class ParsedSchemaTestNode(ParsedNode, HasTestMetadata):
# keep this in sync with CompiledSchemaTestNode!
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
column_name: Optional[str] = None
config: TestConfig = field(default_factory=TestConfig)
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type: ignore
def same_contents(self, other) -> bool:
if other is None:
@@ -456,6 +463,7 @@ class ParsedPatch(HasYamlMetadata, Replaceable):
description: str
meta: Dict[str, Any]
docs: Docs
config: Dict[str, Any]
# The parsed node update is only the 'patch', not the test. The test became a
@@ -487,9 +495,6 @@ class ParsedMacro(UnparsedBaseNode, HasUniqueID):
arguments: List[MacroArgument] = field(default_factory=list)
created_at: int = field(default_factory=lambda: int(time.time()))
def local_vars(self):
return {}
def patch(self, patch: ParsedMacroPatch):
self.patch_path: Optional[str] = patch.file_id
self.description = patch.description
@@ -497,11 +502,6 @@ class ParsedMacro(UnparsedBaseNode, HasUniqueID):
self.meta = patch.meta
self.docs = patch.docs
self.arguments = patch.arguments
if flags.STRICT_MODE:
# What does this actually validate?
assert isinstance(self, dbtClassMixin)
dct = self.to_dict(omit_none=False)
self.validate(dct)
def same_contents(self, other: Optional['ParsedMacro']) -> bool:
if other is None:
@@ -592,7 +592,8 @@ class ParsedSourceDefinition(
UnparsedBaseNode,
HasUniqueID,
HasRelationMetadata,
HasFqn
HasFqn,
):
name: str
source_name: str
@@ -689,6 +690,10 @@ class ParsedSourceDefinition(
def depends_on_nodes(self):
return []
@property
def depends_on(self):
return DependsOn(macros=[], nodes=[])
@property
def refs(self):
return []

View File

@@ -126,12 +126,17 @@ class HasYamlMetadata(dbtClassMixin):
@dataclass
class UnparsedAnalysisUpdate(HasColumnDocs, HasDocs, HasYamlMetadata):
class HasConfig():
config: Dict[str, Any] = field(default_factory=dict)
@dataclass
class UnparsedAnalysisUpdate(HasConfig, HasColumnDocs, HasDocs, HasYamlMetadata):
pass
@dataclass
class UnparsedNodeUpdate(HasColumnTests, HasTests, HasYamlMetadata):
class UnparsedNodeUpdate(HasConfig, HasColumnTests, HasTests, HasYamlMetadata):
quote_columns: Optional[bool] = None
@@ -143,7 +148,7 @@ class MacroArgument(dbtClassMixin):
@dataclass
class UnparsedMacroUpdate(HasDocs, HasYamlMetadata):
class UnparsedMacroUpdate(HasConfig, HasDocs, HasYamlMetadata):
arguments: List[MacroArgument] = field(default_factory=list)
@@ -261,6 +266,7 @@ class UnparsedSourceDefinition(dbtClassMixin, Replaceable):
loaded_at_field: Optional[str] = None
tables: List[UnparsedSourceTableDefinition] = field(default_factory=list)
tags: List[str] = field(default_factory=list)
config: Dict[str, Any] = field(default_factory=dict)
@property
def yaml_key(self) -> 'str':

View File

@@ -1,9 +1,7 @@
from dbt.contracts.util import Replaceable, Mergeable, list_str
from dbt.contracts.connection import UserConfigContract, QueryComment
from dbt.contracts.connection import QueryComment, UserConfigContract
from dbt.helper_types import NoValue
from dbt.logger import GLOBAL_LOGGER as logger # noqa
from dbt import tracking
from dbt import ui
from dbt.dataclass_schema import (
dbtClassMixin, ValidationError,
HyphenatedDbtClassMixin,
@@ -83,6 +81,7 @@ class GitPackage(Package):
class RegistryPackage(Package):
package: str
version: Union[RawVersion, List[RawVersion]]
install_prerelease: Optional[bool] = False
def get_versions(self) -> List[str]:
if isinstance(self.version, list):
@@ -229,25 +228,20 @@ class UserConfig(ExtensibleDbtClassMixin, Replaceable, UserConfigContract):
use_colors: Optional[bool] = None
partial_parse: Optional[bool] = None
printer_width: Optional[int] = None
def set_values(self, cookie_dir):
if self.send_anonymous_usage_stats:
tracking.initialize_tracking(cookie_dir)
else:
tracking.do_not_track()
if self.use_colors is not None:
ui.use_colors(self.use_colors)
if self.printer_width:
ui.printer_width(self.printer_width)
write_json: Optional[bool] = None
warn_error: Optional[bool] = None
log_format: Optional[bool] = None
debug: Optional[bool] = None
version_check: Optional[bool] = None
fail_fast: Optional[bool] = None
use_experimental_parser: Optional[bool] = None
@dataclass
class ProfileConfig(HyphenatedDbtClassMixin, Replaceable):
profile_name: str = field(metadata={'preserve_underscore': True})
target_name: str = field(metadata={'preserve_underscore': True})
config: UserConfig
user_config: UserConfig = field(metadata={'preserve_underscore': True})
threads: int
# TODO: make this a dynamic union of some kind?
credentials: Optional[Dict[str, Any]]

View File

@@ -78,6 +78,7 @@ class TestStatus(StrEnum):
Error = NodeStatus.Error
Fail = NodeStatus.Fail
Warn = NodeStatus.Warn
Skipped = NodeStatus.Skipped
class FreshnessStatus(StrEnum):
@@ -284,6 +285,9 @@ class SourceFreshnessOutput(dbtClassMixin):
status: FreshnessStatus
criteria: FreshnessThreshold
adapter_response: Dict[str, Any]
timing: List[TimingInfo]
thread_id: str
execution_time: float
@dataclass
@@ -332,7 +336,10 @@ def process_freshness_result(
max_loaded_at_time_ago_in_s=result.age,
status=result.status,
criteria=criteria,
adapter_response=result.adapter_response
adapter_response=result.adapter_response,
timing=result.timing,
thread_id=result.thread_id,
execution_time=result.execution_time,
)

View File

@@ -58,6 +58,7 @@ class RPCExecParameters(RPCParameters):
class RPCCompileParameters(RPCParameters):
threads: Optional[int] = None
models: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
@@ -71,12 +72,14 @@ class RPCListParameters(RPCParameters):
select: Union[None, str, List[str]] = None
selector: Optional[str] = None
output: Optional[str] = 'json'
output_keys: Optional[List[str]] = None
@dataclass
class RPCRunParameters(RPCParameters):
threads: Optional[int] = None
models: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
@@ -116,6 +119,17 @@ class RPCDocsGenerateParameters(RPCParameters):
state: Optional[str] = None
@dataclass
class RPCBuildParameters(RPCParameters):
resource_types: Optional[List[str]] = None
select: Union[None, str, List[str]] = None
threads: Optional[int] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
defer: Optional[bool] = None
@dataclass
class RPCCliParameters(RPCParameters):
cli: str
@@ -186,6 +200,8 @@ class RPCRunOperationParameters(RPCParameters):
class RPCSourceFreshnessParameters(RPCParameters):
threads: Optional[int] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
@dataclass

View File

@@ -9,6 +9,7 @@ class SelectorDefinition(dbtClassMixin):
name: str
definition: Union[str, Dict[str, Any]]
description: str = ''
default: bool = False
@dataclass

View File

@@ -57,22 +57,6 @@ class DispatchPackagesDeprecation(DBTDeprecation):
'''
class MaterializationReturnDeprecation(DBTDeprecation):
_name = 'materialization-return'
_description = '''\
The materialization ("{materialization}") did not explicitly return a list
of relations to add to the cache. By default the target relation will be
added, but this behavior will be removed in a future version of dbt.
For more information, see:
https://docs.getdbt.com/v0.15/docs/creating-new-materializations#section-6-returning-relations
'''
class NotADictionaryDeprecation(DBTDeprecation):
_name = 'not-a-dictionary'
@@ -131,6 +115,14 @@ class AdapterMacroDeprecation(DBTDeprecation):
'''
class PackageRedirectDeprecation(DBTDeprecation):
_name = 'package-redirect'
_description = '''\
The `{old_name}` package is deprecated in favor of `{new_name}`. Please update
your `packages.yml` configuration to use `{new_name}` instead.
'''
_adapter_renamed_description = """\
The adapter function `adapter.{old_name}` is deprecated and will be removed in
a future release of dbt. Please use `adapter.{new_name}` instead.
@@ -170,12 +162,12 @@ active_deprecations: Set[str] = set()
deprecations_list: List[DBTDeprecation] = [
DispatchPackagesDeprecation(),
MaterializationReturnDeprecation(),
NotADictionaryDeprecation(),
ColumnQuotingDeprecation(),
ModelsKeyNonModelDeprecation(),
ExecuteMacrosReleaseDeprecation(),
AdapterMacroDeprecation(),
PackageRedirectDeprecation()
]
deprecations: Dict[str, DBTDeprecation] = {

View File

@@ -30,9 +30,13 @@ class RegistryPackageMixin:
class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
def __init__(self, package: str, version: str) -> None:
def __init__(self,
package: str,
version: str,
version_latest: str) -> None:
super().__init__(package)
self.version = version
self.version_latest = version_latest
@property
def name(self):
@@ -44,6 +48,9 @@ class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
def get_version(self):
return self.version
def get_version_latest(self):
return self.version_latest
def nice_version_name(self):
return 'version {}'.format(self.version)
@@ -61,7 +68,7 @@ class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
system.make_directory(os.path.dirname(tar_path))
download_url = metadata.downloads.tarball
system.download(download_url, tar_path)
system.download_with_retries(download_url, tar_path)
deps_path = project.modules_path
package_name = self.get_project_name(project, renderer)
system.untar_package(tar_path, deps_path, package_name)
@@ -71,10 +78,14 @@ class RegistryUnpinnedPackage(
RegistryPackageMixin, UnpinnedPackage[RegistryPinnedPackage]
):
def __init__(
self, package: str, versions: List[semver.VersionSpecifier]
self,
package: str,
versions: List[semver.VersionSpecifier],
install_prerelease: bool
) -> None:
super().__init__(package)
self.versions = versions
self.install_prerelease = install_prerelease
def _check_in_index(self):
index = registry.index_cached()
@@ -91,13 +102,18 @@ class RegistryUnpinnedPackage(
semver.VersionSpecifier.from_version_string(v)
for v in raw_version
]
return cls(package=contract.package, versions=versions)
return cls(
package=contract.package,
versions=versions,
install_prerelease=contract.install_prerelease
)
def incorporate(
self, other: 'RegistryUnpinnedPackage'
) -> 'RegistryUnpinnedPackage':
return RegistryUnpinnedPackage(
package=self.package,
install_prerelease=self.install_prerelease,
versions=self.versions + other.versions,
)
@@ -111,12 +127,18 @@ class RegistryUnpinnedPackage(
raise DependencyException(new_msg) from e
available = registry.get_available_versions(self.package)
installable = semver.filter_installable(
available,
self.install_prerelease
)
available_latest = installable[-1]
# for now, pick a version and then recurse. later on,
# we'll probably want to traverse multiple options
# so we can match packages. not going to make a difference
# right now.
target = semver.resolve_to_specific_version(range_, available)
target = semver.resolve_to_specific_version(range_, installable)
if not target:
package_version_not_found(self.package, range_, available)
return RegistryPinnedPackage(package=self.package, version=target)
package_version_not_found(self.package, range_, installable)
return RegistryPinnedPackage(package=self.package, version=target,
version_latest=available_latest)
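With this change, registry resolution filters out prerelease versions unless `install_prerelease` is set, and remembers the newest installable version as `version_latest`. A rough, self-contained sketch of that filtering step (a stand-in for dbt's own `semver.filter_installable`; the string sort is a simplification of real semver ordering):

def filter_installable(available, install_prerelease):
    # stand-in: treat anything with a '-' (e.g. '0.8.0-b1') as a prerelease
    versions = available if install_prerelease else [v for v in available if '-' not in v]
    return sorted(versions)  # naive string sort, adequate for this example

available = ['0.7.0', '0.8.0-b1', '0.8.0']
installable = filter_installable(available, install_prerelease=False)
print(installable)      # ['0.7.0', '0.8.0']
print(installable[-1])  # '0.8.0' -> what gets stored as version_latest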

View File

@@ -710,11 +710,11 @@ def system_error(operation_name):
raise_compiler_error(
"dbt encountered an error when attempting to {}. "
"If this error persists, please create an issue at: \n\n"
"https://github.com/fishtown-analytics/dbt"
"https://github.com/dbt-labs/dbt"
.format(operation_name))
class RegistryException(Exception):
class ConnectionException(Exception):
pass

View File

@@ -6,18 +6,47 @@ if os.name != 'nt':
from pathlib import Path
from typing import Optional
# initially all flags are set to None, the on-load call of reset() will set
# them for their first time.
STRICT_MODE = None
FULL_REFRESH = None
USE_CACHE = None
WARN_ERROR = None
TEST_NEW_PARSER = None
# PROFILES_DIR must be set before the other flags
DEFAULT_PROFILES_DIR = os.path.join(os.path.expanduser('~'), '.dbt')
PROFILES_DIR = os.path.expanduser(
os.getenv('DBT_PROFILES_DIR', DEFAULT_PROFILES_DIR)
)
STRICT_MODE = False # Only here for backwards compatibility
FULL_REFRESH = False # subcommand
STORE_FAILURES = False # subcommand
GREEDY = None # subcommand
# Global CLI commands
USE_EXPERIMENTAL_PARSER = None
WARN_ERROR = None
WRITE_JSON = None
PARTIAL_PARSE = None
USE_COLORS = None
STORE_FAILURES = None
DEBUG = None
LOG_FORMAT = None
VERSION_CHECK = None
FAIL_FAST = None
SEND_ANONYMOUS_USAGE_STATS = None
PRINTER_WIDTH = 80
# Global CLI defaults. These flags are set from three places:
# CLI args, environment variables, and user_config (profiles.yml).
# Environment variables use the pattern 'DBT_{flag name}', like DBT_PROFILES_DIR
flag_defaults = {
"USE_EXPERIMENTAL_PARSER": False,
"WARN_ERROR": False,
"WRITE_JSON": True,
"PARTIAL_PARSE": False,
"USE_COLORS": True,
"PROFILES_DIR": DEFAULT_PROFILES_DIR,
"DEBUG": False,
"LOG_FORMAT": None,
"VERSION_CHECK": True,
"FAIL_FAST": False,
"SEND_ANONYMOUS_USAGE_STATS": True,
"PRINTER_WIDTH": 80
}
def env_set_truthy(key: str) -> Optional[str]:
@@ -30,6 +59,12 @@ def env_set_truthy(key: str) -> Optional[str]:
return value
def env_set_bool(env_value):
if env_value in ('1', 't', 'true', 'y', 'yes'):
return True
return False
def env_set_path(key: str) -> Optional[Path]:
value = os.getenv(key)
if value is None:
@@ -50,56 +85,72 @@ def _get_context():
return multiprocessing.get_context('spawn')
# This is not a flag, it's a place to store the lock
MP_CONTEXT = _get_context()
def reset():
global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
STORE_FAILURES
STRICT_MODE = False
FULL_REFRESH = False
USE_CACHE = True
WARN_ERROR = False
TEST_NEW_PARSER = False
USE_EXPERIMENTAL_PARSER = False
WRITE_JSON = True
PARTIAL_PARSE = False
MP_CONTEXT = _get_context()
USE_COLORS = True
STORE_FAILURES = False
def set_from_args(args):
global STRICT_MODE, FULL_REFRESH, USE_CACHE, WARN_ERROR, TEST_NEW_PARSER, \
USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, MP_CONTEXT, USE_COLORS, \
STORE_FAILURES
USE_CACHE = getattr(args, 'use_cache', USE_CACHE)
def set_from_args(args, user_config):
global STRICT_MODE, FULL_REFRESH, WARN_ERROR, \
USE_EXPERIMENTAL_PARSER, WRITE_JSON, PARTIAL_PARSE, USE_COLORS, \
STORE_FAILURES, PROFILES_DIR, DEBUG, LOG_FORMAT, GREEDY, \
VERSION_CHECK, FAIL_FAST, SEND_ANONYMOUS_USAGE_STATS, PRINTER_WIDTH
STRICT_MODE = False # backwards compatibility
# cli args without user_config or env var option
FULL_REFRESH = getattr(args, 'full_refresh', FULL_REFRESH)
STRICT_MODE = getattr(args, 'strict', STRICT_MODE)
WARN_ERROR = (
STRICT_MODE or
getattr(args, 'warn_error', STRICT_MODE or WARN_ERROR)
)
TEST_NEW_PARSER = getattr(args, 'test_new_parser', TEST_NEW_PARSER)
USE_EXPERIMENTAL_PARSER = getattr(args, 'use_experimental_parser', USE_EXPERIMENTAL_PARSER)
WRITE_JSON = getattr(args, 'write_json', WRITE_JSON)
PARTIAL_PARSE = getattr(args, 'partial_parse', None)
MP_CONTEXT = _get_context()
# The use_colors attribute will always have a value because it is assigned
# None by default from the add_mutually_exclusive_group function
use_colors_override = getattr(args, 'use_colors')
if use_colors_override is not None:
USE_COLORS = use_colors_override
STORE_FAILURES = getattr(args, 'store_failures', STORE_FAILURES)
GREEDY = getattr(args, 'greedy', GREEDY)
# global cli flags with env var and user_config alternatives
USE_EXPERIMENTAL_PARSER = get_flag_value('USE_EXPERIMENTAL_PARSER', args, user_config)
WARN_ERROR = get_flag_value('WARN_ERROR', args, user_config)
WRITE_JSON = get_flag_value('WRITE_JSON', args, user_config)
PARTIAL_PARSE = get_flag_value('PARTIAL_PARSE', args, user_config)
USE_COLORS = get_flag_value('USE_COLORS', args, user_config)
DEBUG = get_flag_value('DEBUG', args, user_config)
LOG_FORMAT = get_flag_value('LOG_FORMAT', args, user_config)
VERSION_CHECK = get_flag_value('VERSION_CHECK', args, user_config)
FAIL_FAST = get_flag_value('FAIL_FAST', args, user_config)
SEND_ANONYMOUS_USAGE_STATS = get_flag_value('SEND_ANONYMOUS_USAGE_STATS', args, user_config)
PRINTER_WIDTH = get_flag_value('PRINTER_WIDTH', args, user_config)
# initialize everything to the defaults on module load
reset()
def get_flag_value(flag, args, user_config):
lc_flag = flag.lower()
flag_value = getattr(args, lc_flag, None)
if flag_value is None:
# Environment variables use pattern 'DBT_{flag name}'
env_flag = f"DBT_{flag}"
env_value = os.getenv(env_flag)
if env_value is not None and env_value != '':
env_value = env_value.lower()
# non Boolean values
if flag in ['LOG_FORMAT', 'PRINTER_WIDTH']:
flag_value = env_value
else:
flag_value = env_set_bool(env_value)
elif user_config is not None and getattr(user_config, lc_flag, None) is not None:
flag_value = getattr(user_config, lc_flag)
else:
flag_value = flag_defaults[flag]
if flag == 'PRINTER_WIDTH': # printer_width must be an int or it hangs
flag_value = int(flag_value)
return flag_value
def get_flag_dict():
return {
"use_experimental_parser": USE_EXPERIMENTAL_PARSER,
"warn_error": WARN_ERROR,
"write_json": WRITE_JSON,
"partial_parse": PARTIAL_PARSE,
"use_colors": USE_COLORS,
"profiles_dir": PROFILES_DIR,
"debug": DEBUG,
"log_format": LOG_FORMAT,
"version_check": VERSION_CHECK,
"fail_fast": FAIL_FAST,
"send_anonymous_usage_stats": SEND_ANONYMOUS_USAGE_STATS,
"printer_width": PRINTER_WIDTH,
}
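The rewritten flags module resolves each global flag in a fixed order: CLI argument, then the `DBT_*` environment variable, then `user_config` from profiles.yml, then the hard-coded default. A condensed, runnable sketch of that precedence chain (argument parsing and profiles.yml loading are stubbed out here):

import os

FLAG_DEFAULTS = {'WARN_ERROR': False, 'PRINTER_WIDTH': 80}

def env_set_bool(value):
    return value in ('1', 't', 'true', 'y', 'yes')

def get_flag_value(flag, args, user_config):
    lc_flag = flag.lower()
    value = getattr(args, lc_flag, None)                      # 1. CLI arg
    if value is None:
        env_value = os.getenv(f'DBT_{flag}')
        if env_value:                                          # 2. DBT_* env var
            value = env_value if flag in ('LOG_FORMAT', 'PRINTER_WIDTH') \
                else env_set_bool(env_value.lower())
        elif user_config is not None and getattr(user_config, lc_flag, None) is not None:
            value = getattr(user_config, lc_flag)              # 3. profiles.yml user_config
        else:
            value = FLAG_DEFAULTS[flag]                        # 4. default
    if flag == 'PRINTER_WIDTH':
        value = int(value)                                     # printer_width must be an int
    return value

class Args:
    warn_error = None
    printer_width = None

os.environ['DBT_PRINTER_WIDTH'] = '120'
print(get_flag_value('PRINTER_WIDTH', Args(), None))  # 120
print(get_flag_value('WARN_ERROR', Args(), None))     # False (default)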

View File

@@ -1,4 +1,5 @@
# special support for CLI argument parsing.
from dbt import flags
import itertools
from dbt.clients.yaml_helper import yaml, Loader, Dumper # noqa: F401
@@ -66,7 +67,7 @@ def parse_union_from_default(
def parse_difference(
include: Optional[List[str]], exclude: Optional[List[str]]
) -> SelectionDifference:
included = parse_union_from_default(include, DEFAULT_INCLUDES)
included = parse_union_from_default(include, DEFAULT_INCLUDES, greedy=bool(flags.GREEDY))
excluded = parse_union_from_default(exclude, DEFAULT_EXCLUDES, greedy=True)
return SelectionDifference(components=[included, excluded])
@@ -180,7 +181,7 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
union_def_parts = _get_list_dicts(definition, 'union')
include, exclude = _parse_include_exclude_subdefs(union_def_parts)
union = SelectionUnion(components=include)
union = SelectionUnion(components=include, greedy_warning=False)
if exclude is None:
union.raw = definition
@@ -188,7 +189,8 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
else:
return SelectionDifference(
components=[union, exclude],
raw=definition
raw=definition,
greedy_warning=False
)
@@ -197,7 +199,7 @@ def parse_intersection_definition(
) -> SelectionSpec:
intersection_def_parts = _get_list_dicts(definition, 'intersection')
include, exclude = _parse_include_exclude_subdefs(intersection_def_parts)
intersection = SelectionIntersection(components=include)
intersection = SelectionIntersection(components=include, greedy_warning=False)
if exclude is None:
intersection.raw = definition
@@ -205,7 +207,8 @@ def parse_intersection_definition(
else:
return SelectionDifference(
components=[intersection, exclude],
raw=definition
raw=definition,
greedy_warning=False
)
@@ -239,7 +242,7 @@ def parse_dict_definition(definition: Dict[str, Any]) -> SelectionSpec:
if diff_arg is None:
return base
else:
return SelectionDifference(components=[base, diff_arg])
return SelectionDifference(components=[base, diff_arg], greedy_warning=False)
def parse_from_definition(
@@ -271,10 +274,12 @@ def parse_from_definition(
def parse_from_selectors_definition(
source: SelectorFile
) -> Dict[str, SelectionSpec]:
result: Dict[str, SelectionSpec] = {}
) -> Dict[str, Dict[str, Union[SelectionSpec, bool]]]:
result: Dict[str, Dict[str, Union[SelectionSpec, bool]]] = {}
selector: SelectorDefinition
for selector in source.selectors:
result[selector.name] = parse_from_definition(selector.definition,
rootlevel=True)
result[selector.name] = {
"default": selector.default,
"definition": parse_from_definition(selector.definition, rootlevel=True)
}
return result
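After this change, parse_from_selectors_definition returns, per selector name, both the parsed spec and the new `default` flag. A hypothetical illustration of the resulting shape for a selectors.yml entry named `nightly`:

# shape of the parser's result for a (hypothetical) selectors.yml entry:
#   - name: nightly
#     default: true
#     definition: 'tag:nightly'
parsed = {
    'nightly': {
        'default': True,
        'definition': '<SelectionSpec for tag:nightly>',  # placeholder for the real spec object
    }
}
default_selectors = [name for name, entry in parsed.items() if entry['default']]
print(default_selectors)  # ['nightly']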

View File

@@ -1,10 +1,8 @@
import threading
from queue import PriorityQueue
from typing import (
Dict, Set, Optional
)
import networkx as nx # type: ignore
import threading
from queue import PriorityQueue
from typing import Dict, Set, List, Generator, Optional
from .graph import UniqueId
from dbt.contracts.graph.parsed import ParsedSourceDefinition, ParsedExposure
@@ -21,9 +19,8 @@ class GraphQueue:
that separate threads do not call `.empty()` or `__len__()` and `.get()` at
the same time, as there is an unlocked race!
"""
def __init__(
self, graph: nx.DiGraph, manifest: Manifest, selected: Set[UniqueId]
):
def __init__(self, graph: nx.DiGraph, manifest: Manifest, selected: Set[UniqueId]):
self.graph = graph
self.manifest = manifest
self._selected = selected
@@ -37,7 +34,7 @@ class GraphQueue:
# this lock controls most things
self.lock = threading.Lock()
# store the 'score' of each node as a number. Lower is higher priority.
self._scores = self._calculate_scores()
self._scores = self._get_scores(self.graph)
# populate the initial queue
self._find_new_additions()
# awaits after task end
@@ -56,30 +53,59 @@ class GraphQueue:
return False
return True
def _calculate_scores(self) -> Dict[UniqueId, int]:
"""Calculate the 'value' of each node in the graph based on how many
blocking descendants it has. We use this score for the internal
priority queue's ordering, so the quality of this metric is important.
@staticmethod
def _grouped_topological_sort(
graph: nx.DiGraph,
) -> Generator[List[str], None, None]:
"""Topological sort of given graph that groups ties.
The score is stored as a negative number because the internal
PriorityQueue picks lowest values first.
Adapted from `nx.topological_sort`, this function returns a topo sort of a graph however
instead of arbitrarily ordering ties in the sort order, ties are grouped together in
lists.
We could do this in one pass over the graph instead of len(self.graph)
passes but this is easy. For large graphs this may hurt performance.
Args:
graph: The graph to be sorted.
This operates on the graph, so it would require a lock if called from
outside __init__.
:return Dict[str, int]: The score dict, mapping unique IDs to integer
scores. Lower scores are higher priority.
Returns:
A generator that yields lists of nodes, one list per graph depth level.
"""
indegree_map = {v: d for v, d in graph.in_degree() if d > 0}
zero_indegree = [v for v, d in graph.in_degree() if d == 0]
while zero_indegree:
yield zero_indegree
new_zero_indegree = []
for v in zero_indegree:
for _, child in graph.edges(v):
indegree_map[child] -= 1
if not indegree_map[child]:
new_zero_indegree.append(child)
zero_indegree = new_zero_indegree
def _get_scores(self, graph: nx.DiGraph) -> Dict[str, int]:
"""Scoring nodes for processing order.
Scores are calculated by the graph depth level. Lowest score (0) should be processed first.
Args:
graph: The graph to be scored.
Returns:
A dictionary consisting of `node name`:`score` pairs.
"""
# split graph by connected subgraphs
subgraphs = (
graph.subgraph(x) for x in nx.connected_components(nx.Graph(graph))
)
# score all nodes in all subgraphs
scores = {}
for node in self.graph.nodes():
score = -1 * len([
d for d in nx.descendants(self.graph, node)
if self._include_in_cost(d)
])
scores[node] = score
for subgraph in subgraphs:
grouped_nodes = self._grouped_topological_sort(subgraph)
for level, group in enumerate(grouped_nodes):
for node in group:
scores[node] = level
return scores
def get(
@@ -133,8 +159,6 @@ class GraphQueue:
def _find_new_additions(self) -> None:
"""Find any nodes in the graph that need to be added to the internal
queue and add them.
Callers must hold the lock.
"""
for node, in_degree in self.graph.in_degree():
if not self._already_known(node) and in_degree == 0:

View File

@@ -1,4 +1,3 @@
from typing import Set, List, Optional, Tuple
from .graph import Graph, UniqueId
@@ -30,6 +29,24 @@ def alert_non_existence(raw_spec, nodes):
)
def alert_unused_nodes(raw_spec, node_names):
summary_nodes_str = ("\n - ").join(node_names[:3])
debug_nodes_str = ("\n - ").join(node_names)
and_more_str = f"\n - and {len(node_names) - 3} more" if len(node_names) > 4 else ""
summary_msg = (
f"\nSome tests were excluded because at least one parent is not selected. "
f"Use the --greedy flag to include them."
f"\n - {summary_nodes_str}{and_more_str}"
)
logger.info(summary_msg)
if len(node_names) > 4:
debug_msg = (
f"Full list of tests that were excluded:"
f"\n - {debug_nodes_str}"
)
logger.debug(debug_msg)
def can_select_indirectly(node):
"""If a node is not selected itself, but its parent(s) are, it may qualify
for indirect selection.
@@ -151,16 +168,16 @@ class NodeSelector(MethodManager):
return direct_nodes, indirect_nodes
def select_nodes(self, spec: SelectionSpec) -> Set[UniqueId]:
def select_nodes(self, spec: SelectionSpec) -> Tuple[Set[UniqueId], Set[UniqueId]]:
"""Select the nodes in the graph according to the spec.
This is the main point of entry for turning a spec into a set of nodes:
- Recurse through spec, select by criteria, combine by set operation
- Return final (unfiltered) selection set
"""
direct_nodes, indirect_nodes = self.select_nodes_recursively(spec)
return direct_nodes
indirect_only = indirect_nodes.difference(direct_nodes)
return direct_nodes, indirect_only
def _is_graph_member(self, unique_id: UniqueId) -> bool:
if unique_id in self.manifest.sources:
@@ -213,6 +230,8 @@ class NodeSelector(MethodManager):
# - If ANY parent is missing, return it separately. We'll keep it around
# for later and see if its other parents show up.
# We use this for INCLUSION.
# Users can also opt in to inclusive GREEDY mode by passing --greedy flag,
# or by specifying `greedy: true` in a yaml selector
direct_nodes = set(selected)
indirect_nodes = set()
@@ -251,15 +270,24 @@ class NodeSelector(MethodManager):
- node selection. Based on the include/exclude sets, the set
of matched unique IDs is returned
- expand the graph at each leaf node, before combination
- selectors might override this. for example, this is where
tests are added
- includes direct + indirect selection (for tests)
- filtering:
- selectors can filter the nodes after all of them have been
selected
"""
selected_nodes = self.select_nodes(spec)
selected_nodes, indirect_only = self.select_nodes(spec)
filtered_nodes = self.filter_selection(selected_nodes)
if indirect_only:
filtered_unused_nodes = self.filter_selection(indirect_only)
if filtered_unused_nodes and spec.greedy_warning:
# log anything that didn't make the cut
unused_node_names = []
for unique_id in filtered_unused_nodes:
name = self.manifest.nodes[unique_id].name
unused_node_names.append(name)
alert_unused_nodes(spec, unused_node_names)
return filtered_nodes
def get_graph_queue(self, spec: SelectionSpec) -> GraphQueue:
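expand_selection now keeps the directly selected nodes and separately reports tests that were dropped because a parent was not selected, unless greedy selection pulls them in. A minimal sketch of that set arithmetic, with made-up node ids:

direct_nodes = {'model.jaffle.customers'}
indirect_nodes = {'test.jaffle.not_null_customers_id',
                  'test.jaffle.relationships_orders_customer_id'}
greedy = False

indirect_only = indirect_nodes - direct_nodes
selected = direct_nodes | indirect_only if greedy else direct_nodes
if not greedy and indirect_only:
    # mirrors alert_unused_nodes: tell the user what was skipped and why
    print('Skipped tests (pass --greedy to include them):')
    for name in sorted(indirect_only):
        print(' -', name)
print(sorted(selected))  # ['model.jaffle.customers']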

View File

@@ -22,13 +22,11 @@ from dbt.contracts.graph.parsed import (
ParsedSourceDefinition,
)
from dbt.contracts.state import PreviousState
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.exceptions import (
InternalException,
RuntimeException,
)
from dbt.node_types import NodeType
from dbt.ui import warning_tag
SELECTOR_GLOB = '*'
@@ -381,7 +379,7 @@ class TestTypeSelectorMethod(SelectorMethod):
class StateSelectorMethod(SelectorMethod):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.macros_were_modified: Optional[List[str]] = None
self.modified_macros: Optional[List[str]] = None
def _macros_modified(self) -> List[str]:
# we checked in the caller!
@@ -394,44 +392,74 @@ class StateSelectorMethod(SelectorMethod):
modified = []
for uid, macro in new_macros.items():
name = f'{macro.package_name}.{macro.name}'
if uid in old_macros:
old_macro = old_macros[uid]
if macro.macro_sql != old_macro.macro_sql:
modified.append(f'{name} changed')
modified.append(uid)
else:
modified.append(f'{name} added')
modified.append(uid)
for uid, macro in old_macros.items():
if uid not in new_macros:
modified.append(f'{macro.package_name}.{macro.name} removed')
modified.append(uid)
return modified[:3]
return modified
def check_modified(
self,
old: Optional[SelectorTarget],
new: SelectorTarget,
def recursively_check_macros_modified(self, node):
# check if there are any changes in macros the first time
if self.modified_macros is None:
self.modified_macros = self._macros_modified()
# loop through all macros that this node depends on
for macro_uid in node.depends_on.macros:
# is this macro one of the modified macros?
if macro_uid in self.modified_macros:
return True
# if not, and this macro depends on other macros, keep looping
macro = self.manifest.macros[macro_uid]
if len(macro.depends_on.macros) > 0:
return self.recursively_check_macros_modified(macro)
else:
return False
return False
def check_modified(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
different_contents = not new.same_contents(old) # type: ignore
upstream_macro_change = self.recursively_check_macros_modified(new)
return different_contents or upstream_macro_change
def check_modified_body(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
if hasattr(new, "same_body"):
return not new.same_body(old) # type: ignore
else:
return False
def check_modified_configs(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
if hasattr(new, "same_config"):
return not new.same_config(old) # type: ignore
else:
return False
def check_modified_persisted_descriptions(
self, old: Optional[SelectorTarget], new: SelectorTarget
) -> bool:
# check if there are any changes in macros, if so, log a warning the
# first time
if self.macros_were_modified is None:
self.macros_were_modified = self._macros_modified()
if self.macros_were_modified:
log_str = ', '.join(self.macros_were_modified)
logger.warning(warning_tag(
f'During a state comparison, dbt detected a change in '
f'macros. This will not be marked as a modification. Some '
f'macros: {log_str}'
))
if hasattr(new, "same_persisted_description"):
return not new.same_persisted_description(old) # type: ignore
else:
return False
return not new.same_contents(old) # type: ignore
def check_new(
self,
old: Optional[SelectorTarget],
new: SelectorTarget,
def check_modified_relation(
self, old: Optional[SelectorTarget], new: SelectorTarget
) -> bool:
if hasattr(new, "same_database_representation"):
return not new.same_database_representation(old) # type: ignore
else:
return False
def check_modified_macros(self, _, new: SelectorTarget) -> bool:
return self.recursively_check_macros_modified(new)
def check_new(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
return old is None
def search(
@@ -443,8 +471,15 @@ class StateSelectorMethod(SelectorMethod):
)
state_checks = {
# it's new if there is no old version
'new': lambda old, _: old is None,
# use methods defined above to compare properties of old + new
'modified': self.check_modified,
'new': self.check_new,
'modified.body': self.check_modified_body,
'modified.configs': self.check_modified_configs,
'modified.persisted_descriptions': self.check_modified_persisted_descriptions,
'modified.relation': self.check_modified_relation,
'modified.macros': self.check_modified_macros,
}
if selector in state_checks:
checker = state_checks[selector]
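The state method now dispatches `state:modified.*` sub-selectors through a dict of comparison functions. A hypothetical sketch of that lookup, with nodes reduced to plain dicts:

def check_new(old, new):
    return old is None

def check_modified_body(old, new):
    return old is None or new['body'] != old['body']

def check_modified_configs(old, new):
    return old is None or new['config'] != old['config']

state_checks = {
    'new': check_new,
    'modified.body': check_modified_body,
    'modified.configs': check_modified_configs,
}

old = {'body': 'select 1', 'config': {'materialized': 'view'}}
new = {'body': 'select 1', 'config': {'materialized': 'table'}}

selector = 'modified.configs'   # e.g. dbt ls --select state:modified.configs --state path/to/prev
print(state_checks[selector](old, new))  # True: only the config changed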

View File

@@ -67,6 +67,7 @@ class SelectionCriteria:
children: bool
children_depth: Optional[int]
greedy: bool = False
greedy_warning: bool = False # do not raise warning for yaml selectors
def __post_init__(self):
if self.children and self.childrens_parents:
@@ -124,11 +125,11 @@ class SelectionCriteria:
parents_depth=parents_depth,
children=bool(dct.get('children')),
children_depth=children_depth,
greedy=greedy
greedy=(greedy or bool(dct.get('greedy'))),
)
@classmethod
def dict_from_single_spec(cls, raw: str, greedy: bool = False):
def dict_from_single_spec(cls, raw: str):
result = RAW_SELECTOR_PATTERN.match(raw)
if result is None:
return {'error': 'Invalid selector spec'}
@@ -145,6 +146,8 @@ class SelectionCriteria:
dct['parents'] = bool(dct.get('parents'))
if 'children' in dct:
dct['children'] = bool(dct.get('children'))
if 'greedy' in dct:
dct['greedy'] = bool(dct.get('greedy'))
return dct
@classmethod
@@ -162,10 +165,12 @@ class BaseSelectionGroup(Iterable[SelectionSpec], metaclass=ABCMeta):
self,
components: Iterable[SelectionSpec],
expect_exists: bool = False,
greedy_warning: bool = True,
raw: Any = None,
):
self.components: List[SelectionSpec] = list(components)
self.expect_exists = expect_exists
self.greedy_warning = greedy_warning
self.raw = raw
def __iter__(self) -> Iterator[SelectionSpec]:

View File

@@ -1,5 +1,5 @@
{% macro get_columns_in_query(select_sql) -%}
{{ return(adapter.dispatch('get_columns_in_query')(select_sql)) }}
{{ return(adapter.dispatch('get_columns_in_query', 'dbt')(select_sql)) }}
{% endmacro %}
{% macro default__get_columns_in_query(select_sql) %}
@@ -15,7 +15,7 @@
{% endmacro %}
{% macro create_schema(relation) -%}
{{ adapter.dispatch('create_schema')(relation) }}
{{ adapter.dispatch('create_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__create_schema(relation) -%}
@@ -25,7 +25,7 @@
{% endmacro %}
{% macro drop_schema(relation) -%}
{{ adapter.dispatch('drop_schema')(relation) }}
{{ adapter.dispatch('drop_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__drop_schema(relation) -%}
@@ -35,7 +35,7 @@
{% endmacro %}
{% macro create_table_as(temporary, relation, sql) -%}
{{ adapter.dispatch('create_table_as')(temporary, relation, sql) }}
{{ adapter.dispatch('create_table_as', 'dbt')(temporary, relation, sql) }}
{%- endmacro %}
{% macro default__create_table_as(temporary, relation, sql) -%}
@@ -52,7 +52,7 @@
{% endmacro %}
{% macro get_create_index_sql(relation, index_dict) -%}
{{ return(adapter.dispatch('get_create_index_sql')(relation, index_dict)) }}
{{ return(adapter.dispatch('get_create_index_sql', 'dbt')(relation, index_dict)) }}
{% endmacro %}
{% macro default__get_create_index_sql(relation, index_dict) -%}
@@ -60,7 +60,7 @@
{% endmacro %}
{% macro create_indexes(relation) -%}
{{ adapter.dispatch('create_indexes')(relation) }}
{{ adapter.dispatch('create_indexes', 'dbt')(relation) }}
{%- endmacro %}
{% macro default__create_indexes(relation) -%}
@@ -75,7 +75,7 @@
{% endmacro %}
{% macro create_view_as(relation, sql) -%}
{{ adapter.dispatch('create_view_as')(relation, sql) }}
{{ adapter.dispatch('create_view_as', 'dbt')(relation, sql) }}
{%- endmacro %}
{% macro default__create_view_as(relation, sql) -%}
@@ -89,7 +89,7 @@
{% macro get_catalog(information_schema, schemas) -%}
{{ return(adapter.dispatch('get_catalog')(information_schema, schemas)) }}
{{ return(adapter.dispatch('get_catalog', 'dbt')(information_schema, schemas)) }}
{%- endmacro %}
{% macro default__get_catalog(information_schema, schemas) -%}
@@ -104,7 +104,7 @@
{% macro get_columns_in_relation(relation) -%}
{{ return(adapter.dispatch('get_columns_in_relation')(relation)) }}
{{ return(adapter.dispatch('get_columns_in_relation', 'dbt')(relation)) }}
{% endmacro %}
{% macro sql_convert_columns_in_relation(table) -%}
@@ -121,13 +121,13 @@
{% endmacro %}
{% macro alter_column_type(relation, column_name, new_column_type) -%}
{{ return(adapter.dispatch('alter_column_type')(relation, column_name, new_column_type)) }}
{{ return(adapter.dispatch('alter_column_type', 'dbt')(relation, column_name, new_column_type)) }}
{% endmacro %}
{% macro alter_column_comment(relation, column_dict) -%}
{{ return(adapter.dispatch('alter_column_comment')(relation, column_dict)) }}
{{ return(adapter.dispatch('alter_column_comment', 'dbt')(relation, column_dict)) }}
{% endmacro %}
{% macro default__alter_column_comment(relation, column_dict) -%}
@@ -136,7 +136,7 @@
{% endmacro %}
{% macro alter_relation_comment(relation, relation_comment) -%}
{{ return(adapter.dispatch('alter_relation_comment')(relation, relation_comment)) }}
{{ return(adapter.dispatch('alter_relation_comment', 'dbt')(relation, relation_comment)) }}
{% endmacro %}
{% macro default__alter_relation_comment(relation, relation_comment) -%}
@@ -145,7 +145,7 @@
{% endmacro %}
{% macro persist_docs(relation, model, for_relation=true, for_columns=true) -%}
{{ return(adapter.dispatch('persist_docs')(relation, model, for_relation, for_columns)) }}
{{ return(adapter.dispatch('persist_docs', 'dbt')(relation, model, for_relation, for_columns)) }}
{% endmacro %}
{% macro default__persist_docs(relation, model, for_relation, for_columns) -%}
@@ -180,7 +180,7 @@
{% macro drop_relation(relation) -%}
{{ return(adapter.dispatch('drop_relation')(relation)) }}
{{ return(adapter.dispatch('drop_relation', 'dbt')(relation)) }}
{% endmacro %}
@@ -191,7 +191,7 @@
{% endmacro %}
{% macro truncate_relation(relation) -%}
{{ return(adapter.dispatch('truncate_relation')(relation)) }}
{{ return(adapter.dispatch('truncate_relation', 'dbt')(relation)) }}
{% endmacro %}
@@ -202,7 +202,7 @@
{% endmacro %}
{% macro rename_relation(from_relation, to_relation) -%}
{{ return(adapter.dispatch('rename_relation')(from_relation, to_relation)) }}
{{ return(adapter.dispatch('rename_relation', 'dbt')(from_relation, to_relation)) }}
{% endmacro %}
{% macro default__rename_relation(from_relation, to_relation) -%}
@@ -214,7 +214,7 @@
{% macro information_schema_name(database) %}
{{ return(adapter.dispatch('information_schema_name')(database)) }}
{{ return(adapter.dispatch('information_schema_name', 'dbt')(database)) }}
{% endmacro %}
{% macro default__information_schema_name(database) -%}
@@ -227,7 +227,7 @@
{% macro list_schemas(database) -%}
{{ return(adapter.dispatch('list_schemas')(database)) }}
{{ return(adapter.dispatch('list_schemas', 'dbt')(database)) }}
{% endmacro %}
{% macro default__list_schemas(database) -%}
@@ -241,7 +241,7 @@
{% macro check_schema_exists(information_schema, schema) -%}
{{ return(adapter.dispatch('check_schema_exists')(information_schema, schema)) }}
{{ return(adapter.dispatch('check_schema_exists', 'dbt')(information_schema, schema)) }}
{% endmacro %}
{% macro default__check_schema_exists(information_schema, schema) -%}
@@ -256,7 +256,7 @@
{% macro list_relations_without_caching(schema_relation) %}
{{ return(adapter.dispatch('list_relations_without_caching')(schema_relation)) }}
{{ return(adapter.dispatch('list_relations_without_caching', 'dbt')(schema_relation)) }}
{% endmacro %}
@@ -267,7 +267,7 @@
{% macro current_timestamp() -%}
{{ adapter.dispatch('current_timestamp')() }}
{{ adapter.dispatch('current_timestamp', 'dbt')() }}
{%- endmacro %}
@@ -278,7 +278,7 @@
{% macro collect_freshness(source, loaded_at_field, filter) %}
{{ return(adapter.dispatch('collect_freshness')(source, loaded_at_field, filter))}}
{{ return(adapter.dispatch('collect_freshness', 'dbt')(source, loaded_at_field, filter))}}
{% endmacro %}
@@ -296,7 +296,7 @@
{% endmacro %}
{% macro make_temp_relation(base_relation, suffix='__dbt_tmp') %}
{{ return(adapter.dispatch('make_temp_relation')(base_relation, suffix))}}
{{ return(adapter.dispatch('make_temp_relation', 'dbt')(base_relation, suffix))}}
{% endmacro %}
{% macro default__make_temp_relation(base_relation, suffix) %}
@@ -311,3 +311,34 @@
{{ config.set('sql_header', caller()) }}
{%- endmacro %}
{% macro alter_relation_add_remove_columns(relation, add_columns = none, remove_columns = none) -%}
{{ return(adapter.dispatch('alter_relation_add_remove_columns', 'dbt')(relation, add_columns, remove_columns)) }}
{% endmacro %}
{% macro default__alter_relation_add_remove_columns(relation, add_columns, remove_columns) %}
{% if add_columns is none %}
{% set add_columns = [] %}
{% endif %}
{% if remove_columns is none %}
{% set remove_columns = [] %}
{% endif %}
{% set sql -%}
alter {{ relation.type }} {{ relation }}
{% for column in add_columns %}
add column {{ column.name }} {{ column.data_type }}{{ ',' if not loop.last }}
{% endfor %}{{ ',' if remove_columns | length > 0 }}
{% for column in remove_columns %}
drop column {{ column.name }}{{ ',' if not loop.last }}
{% endfor %}
{%- endset -%}
{% do run_query(sql) %}
{% endmacro %}
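The new default__alter_relation_add_remove_columns macro renders one ALTER statement covering both added and dropped columns. A rough Python sketch of the string it ends up building (columns reduced to simple dicts; the real macro works on dbt Column objects and executes the SQL via run_query):

def build_alter_sql(relation, add_columns=None, remove_columns=None):
    add_columns = add_columns or []
    remove_columns = remove_columns or []
    add_sql = ', '.join(f"add column {c['name']} {c['data_type']}" for c in add_columns)
    drop_sql = ', '.join(f"drop column {c['name']}" for c in remove_columns)
    clauses = ', '.join(part for part in (add_sql, drop_sql) if part)
    return f'alter table {relation} {clauses}'

print(build_alter_sql(
    'analytics.my_model',
    add_columns=[{'name': 'new_col', 'data_type': 'integer'}],
    remove_columns=[{'name': 'old_col'}],
))
# alter table analytics.my_model add column new_col integer, drop column old_col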

View File

@@ -13,6 +13,10 @@
#}
{% macro generate_alias_name(custom_alias_name=none, node=none) -%}
{% do return(adapter.dispatch('generate_alias_name', 'dbt')(custom_alias_name, node)) %}
{%- endmacro %}
{% macro default__generate_alias_name(custom_alias_name=none, node=none) -%}
{%- if custom_alias_name is none -%}

View File

@@ -14,7 +14,7 @@
#}
{% macro generate_database_name(custom_database_name=none, node=none) -%}
{% do return(adapter.dispatch('generate_database_name')(custom_database_name, node)) %}
{% do return(adapter.dispatch('generate_database_name', 'dbt')(custom_database_name, node)) %}
{%- endmacro %}
{% macro default__generate_database_name(custom_database_name=none, node=none) -%}

View File

@@ -15,6 +15,10 @@
#}
{% macro generate_schema_name(custom_schema_name, node) -%}
{{ return(adapter.dispatch('generate_schema_name', 'dbt')(custom_schema_name, node)) }}
{% endmacro %}
{% macro default__generate_schema_name(custom_schema_name, node) -%}
{%- set default_schema = target.schema -%}
{%- if custom_schema_name is none -%}

View File

@@ -0,0 +1,15 @@
{% macro get_where_subquery(relation) -%}
{% do return(adapter.dispatch('get_where_subquery')(relation)) %}
{%- endmacro %}
{% macro default__get_where_subquery(relation) -%}
{% set where = config.get('where', '') %}
{% if where %}
{%- set filtered -%}
(select * from {{ relation }} where {{ where }}) dbt_subquery
{%- endset -%}
{% do return(filtered) %}
{%- else -%}
{% do return(relation) %}
{%- endif -%}
{%- endmacro %}

View File

@@ -1,17 +1,17 @@
{% macro get_merge_sql(target, source, unique_key, dest_columns, predicates=none) -%}
{{ adapter.dispatch('get_merge_sql')(target, source, unique_key, dest_columns, predicates) }}
{{ adapter.dispatch('get_merge_sql', 'dbt')(target, source, unique_key, dest_columns, predicates) }}
{%- endmacro %}
{% macro get_delete_insert_merge_sql(target, source, unique_key, dest_columns) -%}
{{ adapter.dispatch('get_delete_insert_merge_sql')(target, source, unique_key, dest_columns) }}
{{ adapter.dispatch('get_delete_insert_merge_sql', 'dbt')(target, source, unique_key, dest_columns) }}
{%- endmacro %}
{% macro get_insert_overwrite_merge_sql(target, source, dest_columns, predicates, include_sql_header=false) -%}
{{ adapter.dispatch('get_insert_overwrite_merge_sql')(target, source, dest_columns, predicates, include_sql_header) }}
{{ adapter.dispatch('get_insert_overwrite_merge_sql', 'dbt')(target, source, dest_columns, predicates, include_sql_header) }}
{%- endmacro %}
@@ -79,7 +79,7 @@
(
select {{ dest_cols_csv }}
from {{ source }}
);
)
{%- endmacro %}

View File

@@ -1,5 +1,6 @@
{% macro incremental_upsert(tmp_relation, target_relation, unique_key=none, statement_name="main") %}
{%- set dest_columns = adapter.get_columns_in_relation(target_relation) -%}
{%- set dest_cols_csv = dest_columns | map(attribute='quoted') | join(', ') -%}

View File

@@ -5,6 +5,26 @@
{% set target_relation = this.incorporate(type='table') %}
{% set existing_relation = load_relation(this) %}
{% set tmp_relation = make_temp_relation(target_relation) %}
{%- set full_refresh_mode = (should_full_refresh()) -%}
{% set on_schema_change = incremental_validate_on_schema_change(config.get('on_schema_change'), default='ignore') %}
{% set tmp_identifier = model['name'] + '__dbt_tmp' %}
{% set backup_identifier = model['name'] + "__dbt_backup" %}
-- the intermediate_ and backup_ relations should not already exist in the database; get_relation
-- will return None in that case. Otherwise, we get a relation that we can drop
-- later, before we try to use this name for the current operation. This has to happen before
-- BEGIN, in a separate transaction
{% set preexisting_intermediate_relation = adapter.get_relation(identifier=tmp_identifier,
schema=schema,
database=database) %}
{% set preexisting_backup_relation = adapter.get_relation(identifier=backup_identifier,
schema=schema,
database=database) %}
{{ drop_relation_if_exists(preexisting_intermediate_relation) }}
{{ drop_relation_if_exists(preexisting_backup_relation) }}
{{ run_hooks(pre_hooks, inside_transaction=False) }}
@@ -12,29 +32,30 @@
{{ run_hooks(pre_hooks, inside_transaction=True) }}
{% set to_drop = [] %}
{# -- first check whether we want to full refresh for source view or config reasons #}
{% set trigger_full_refresh = (full_refresh_mode or existing_relation.is_view) %}
{% if existing_relation is none %}
{% set build_sql = create_table_as(False, target_relation, sql) %}
{% elif existing_relation.is_view or should_full_refresh() %}
{% elif trigger_full_refresh %}
{#-- Make sure the backup doesn't exist so we don't encounter issues with the rename below #}
{% set tmp_identifier = model['name'] + '__dbt_tmp' %}
{% set backup_identifier = model['name'] + "__dbt_backup" %}
{% set backup_identifier = model['name'] + '__dbt_backup' %}
{% set intermediate_relation = existing_relation.incorporate(path={"identifier": tmp_identifier}) %}
{% set backup_relation = existing_relation.incorporate(path={"identifier": backup_identifier}) %}
{% do adapter.drop_relation(intermediate_relation) %}
{% do adapter.drop_relation(backup_relation) %}
{% set build_sql = create_table_as(False, intermediate_relation, sql) %}
{% set need_swap = true %}
{% do to_drop.append(backup_relation) %}
{% else %}
{% set tmp_relation = make_temp_relation(target_relation) %}
{% do run_query(create_table_as(True, tmp_relation, sql)) %}
{% do adapter.expand_target_column_types(
{% do run_query(create_table_as(True, tmp_relation, sql)) %}
{% do adapter.expand_target_column_types(
from_relation=tmp_relation,
to_relation=target_relation) %}
{% set build_sql = incremental_upsert(tmp_relation, target_relation, unique_key=unique_key) %}
{% do process_schema_changes(on_schema_change, tmp_relation, existing_relation) %}
{% set build_sql = incremental_upsert(tmp_relation, target_relation, unique_key=unique_key) %}
{% endif %}
{% call statement("main") %}

View File

@@ -0,0 +1,164 @@
{% macro incremental_validate_on_schema_change(on_schema_change, default='ignore') %}
{% if on_schema_change not in ['sync_all_columns', 'append_new_columns', 'fail', 'ignore'] %}
{% set log_message = 'Invalid value for on_schema_change (%s) specified. Setting default value of %s.' % (on_schema_change, default) %}
{% do log(log_message) %}
{{ return(default) }}
{% else %}
{{ return(on_schema_change) }}
{% endif %}
{% endmacro %}
{% macro diff_columns(source_columns, target_columns) %}
{% set result = [] %}
{% set source_names = source_columns | map(attribute = 'column') | list %}
{% set target_names = target_columns | map(attribute = 'column') | list %}
{# --check whether the name attribute exists in the target - this does not perform a data type check #}
{% for sc in source_columns %}
{% if sc.name not in target_names %}
{{ result.append(sc) }}
{% endif %}
{% endfor %}
{{ return(result) }}
{% endmacro %}
{% macro diff_column_data_types(source_columns, target_columns) %}
{% set result = [] %}
{% for sc in source_columns %}
{% set tc = target_columns | selectattr("name", "equalto", sc.name) | list | first %}
{% if tc %}
{% if sc.data_type != tc.data_type %}
{{ result.append( { 'column_name': tc.name, 'new_type': sc.data_type } ) }}
{% endif %}
{% endif %}
{% endfor %}
{{ return(result) }}
{% endmacro %}
{% macro check_for_schema_changes(source_relation, target_relation) %}
{% set schema_changed = False %}
{%- set source_columns = adapter.get_columns_in_relation(source_relation) -%}
{%- set target_columns = adapter.get_columns_in_relation(target_relation) -%}
{%- set source_not_in_target = diff_columns(source_columns, target_columns) -%}
{%- set target_not_in_source = diff_columns(target_columns, source_columns) -%}
{% set new_target_types = diff_column_data_types(source_columns, target_columns) %}
{% if source_not_in_target != [] %}
{% set schema_changed = True %}
{% elif target_not_in_source != [] or new_target_types != [] %}
{% set schema_changed = True %}
{% elif new_target_types != [] %}
{% set schema_changed = True %}
{% endif %}
{% set changes_dict = {
'schema_changed': schema_changed,
'source_not_in_target': source_not_in_target,
'target_not_in_source': target_not_in_source,
'new_target_types': new_target_types
} %}
{% set msg %}
In {{ target_relation }}:
Schema changed: {{ schema_changed }}
Source columns not in target: {{ source_not_in_target }}
Target columns not in source: {{ target_not_in_source }}
New column types: {{ new_target_types }}
{% endset %}
{% do log(msg) %}
{{ return(changes_dict) }}
{% endmacro %}
{% macro sync_column_schemas(on_schema_change, target_relation, schema_changes_dict) %}
{%- set add_to_target_arr = schema_changes_dict['source_not_in_target'] -%}
{%- if on_schema_change == 'append_new_columns'-%}
{%- if add_to_target_arr | length > 0 -%}
{%- do alter_relation_add_remove_columns(target_relation, add_to_target_arr, none) -%}
{%- endif -%}
{% elif on_schema_change == 'sync_all_columns' %}
{%- set remove_from_target_arr = schema_changes_dict['target_not_in_source'] -%}
{%- set new_target_types = schema_changes_dict['new_target_types'] -%}
{% if add_to_target_arr | length > 0 or remove_from_target_arr | length > 0 %}
{%- do alter_relation_add_remove_columns(target_relation, add_to_target_arr, remove_from_target_arr) -%}
{% endif %}
{% if new_target_types != [] %}
{% for ntt in new_target_types %}
{% set column_name = ntt['column_name'] %}
{% set new_type = ntt['new_type'] %}
{% do alter_column_type(target_relation, column_name, new_type) %}
{% endfor %}
{% endif %}
{% endif %}
{% set schema_change_message %}
In {{ target_relation }}:
Schema change approach: {{ on_schema_change }}
Columns added: {{ add_to_target_arr }}
Columns removed: {{ remove_from_target_arr }}
Data types changed: {{ new_target_types }}
{% endset %}
{% do log(schema_change_message) %}
{% endmacro %}
{% macro process_schema_changes(on_schema_change, source_relation, target_relation) %}
{% if on_schema_change != 'ignore' %}
{% set schema_changes_dict = check_for_schema_changes(source_relation, target_relation) %}
{% if schema_changes_dict['schema_changed'] %}
{% if on_schema_change == 'fail' %}
{% set fail_msg %}
The source and target schemas on this incremental model are out of sync!
They can be reconciled in several ways:
- set the `on_schema_change` config to either append_new_columns or sync_all_columns, depending on your situation.
- Re-run the incremental model with `full_refresh: True` to update the target schema.
- update the schema manually and re-run the process.
{% endset %}
{% do exceptions.raise_compiler_error(fail_msg) %}
{# -- unless we ignore, run the sync operation per the config #}
{% else %}
{% do sync_column_schemas(on_schema_change, target_relation, schema_changes_dict) %}
{% endif %}
{% endif %}
{% endif %}
{% endmacro %}
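check_for_schema_changes compares the temp relation's columns against the target and reports additions, removals, and type changes, which sync_column_schemas then applies according to on_schema_change. A standalone Python sketch of the same comparison, with columns as simple name/type dicts:

def diff_columns(source, target):
    target_names = {c['name'] for c in target}
    return [c for c in source if c['name'] not in target_names]

def diff_column_data_types(source, target):
    target_types = {c['name']: c['data_type'] for c in target}
    return [
        {'column_name': c['name'], 'new_type': c['data_type']}
        for c in source
        if c['name'] in target_types and c['data_type'] != target_types[c['name']]
    ]

source = [{'name': 'id', 'data_type': 'bigint'}, {'name': 'amount', 'data_type': 'numeric'}]
target = [{'name': 'id', 'data_type': 'integer'}, {'name': 'created_at', 'data_type': 'timestamp'}]

changes = {
    'source_not_in_target': diff_columns(source, target),        # columns to add
    'target_not_in_source': diff_columns(target, source),        # columns to drop (sync_all_columns)
    'new_target_types': diff_column_data_types(source, target),  # columns to retype
}
changes['schema_changed'] = any(changes.values())
print(changes)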

View File

@@ -1,14 +1,6 @@
{% macro create_csv_table(model, agate_table) -%}
{{ adapter.dispatch('create_csv_table')(model, agate_table) }}
{%- endmacro %}
{% macro reset_csv_table(model, full_refresh, old_relation, agate_table) -%}
{{ adapter.dispatch('reset_csv_table')(model, full_refresh, old_relation, agate_table) }}
{%- endmacro %}
{% macro load_csv_rows(model, agate_table) -%}
{{ adapter.dispatch('load_csv_rows')(model, agate_table) }}
{{ adapter.dispatch('create_csv_table', 'dbt')(model, agate_table) }}
{%- endmacro %}
{% macro default__create_csv_table(model, agate_table) %}
@@ -33,6 +25,9 @@
{{ return(sql) }}
{% endmacro %}
{% macro reset_csv_table(model, full_refresh, old_relation, agate_table) -%}
{{ adapter.dispatch('reset_csv_table', 'dbt')(model, full_refresh, old_relation, agate_table) }}
{%- endmacro %}
{% macro default__reset_csv_table(model, full_refresh, old_relation, agate_table) %}
{% set sql = "" %}
@@ -47,6 +42,21 @@
{{ return(sql) }}
{% endmacro %}
{% macro get_binding_char() -%}
{{ adapter.dispatch('get_binding_char', 'dbt')() }}
{%- endmacro %}
{% macro default__get_binding_char() %}
{{ return('%s') }}
{% endmacro %}
{% macro get_batch_size() -%}
{{ adapter.dispatch('get_batch_size', 'dbt')() }}
{%- endmacro %}
{% macro default__get_batch_size() %}
{{ return(10000) }}
{% endmacro %}
{% macro get_seed_column_quoted_csv(model, column_names) %}
{%- set quote_seed_column = model['config'].get('quote_columns', None) -%}
@@ -59,47 +69,47 @@
{{ return(dest_cols_csv) }}
{% endmacro %}
{% macro basic_load_csv_rows(model, batch_size, agate_table) %}
{% set cols_sql = get_seed_column_quoted_csv(model, agate_table.column_names) %}
{% set bindings = [] %}
{% set statements = [] %}
{% for chunk in agate_table.rows | batch(batch_size) %}
{% set bindings = [] %}
{% for row in chunk %}
{% do bindings.extend(row) %}
{% endfor %}
{% set sql %}
insert into {{ this.render() }} ({{ cols_sql }}) values
{% for row in chunk -%}
({%- for column in agate_table.column_names -%}
%s
{%- if not loop.last%},{%- endif %}
{%- endfor -%})
{%- if not loop.last%},{%- endif %}
{%- endfor %}
{% endset %}
{% do adapter.add_query(sql, bindings=bindings, abridge_sql_log=True) %}
{% if loop.index0 == 0 %}
{% do statements.append(sql) %}
{% endif %}
{% endfor %}
{# Return SQL so we can render it out into the compiled files #}
{{ return(statements[0]) }}
{% endmacro %}
{% macro load_csv_rows(model, agate_table) -%}
{{ adapter.dispatch('load_csv_rows', 'dbt')(model, agate_table) }}
{%- endmacro %}
{% macro default__load_csv_rows(model, agate_table) %}
{{ return(basic_load_csv_rows(model, 10000, agate_table) )}}
{% endmacro %}
{% set batch_size = get_batch_size() %}
{% set cols_sql = get_seed_column_quoted_csv(model, agate_table.column_names) %}
{% set bindings = [] %}
{% set statements = [] %}
{% for chunk in agate_table.rows | batch(batch_size) %}
{% set bindings = [] %}
{% for row in chunk %}
{% do bindings.extend(row) %}
{% endfor %}
{% set sql %}
insert into {{ this.render() }} ({{ cols_sql }}) values
{% for row in chunk -%}
({%- for column in agate_table.column_names -%}
{{ get_binding_char() }}
{%- if not loop.last%},{%- endif %}
{%- endfor -%})
{%- if not loop.last%},{%- endif %}
{%- endfor %}
{% endset %}
{% do adapter.add_query(sql, bindings=bindings, abridge_sql_log=True) %}
{% if loop.index0 == 0 %}
{% do statements.append(sql) %}
{% endif %}
{% endfor %}
{# Return SQL so we can render it out into the compiled files #}
{{ return(statements[0]) }}
{% endmacro %}
{% materialization seed, default %}

View File

@@ -2,7 +2,7 @@
Add new columns to the table if applicable
#}
{% macro create_columns(relation, columns) %}
{{ adapter.dispatch('create_columns')(relation, columns) }}
{{ adapter.dispatch('create_columns', 'dbt')(relation, columns) }}
{% endmacro %}
{% macro default__create_columns(relation, columns) %}
@@ -15,7 +15,7 @@
{% macro post_snapshot(staging_relation) %}
{{ adapter.dispatch('post_snapshot')(staging_relation) }}
{{ adapter.dispatch('post_snapshot', 'dbt')(staging_relation) }}
{% endmacro %}
{% macro default__post_snapshot(staging_relation) %}

View File

@@ -1,6 +1,6 @@
{% macro snapshot_merge_sql(target, source, insert_cols) -%}
{{ adapter.dispatch('snapshot_merge_sql')(target, source, insert_cols) }}
{{ adapter.dispatch('snapshot_merge_sql', 'dbt')(target, source, insert_cols) }}
{%- endmacro %}
@@ -21,7 +21,6 @@
and DBT_INTERNAL_SOURCE.dbt_change_type = 'insert'
then insert ({{ insert_cols_csv }})
values ({{ insert_cols_csv }})
;
{% endmacro %}

View File

@@ -36,7 +36,7 @@
Create SCD Hash SQL fields cross-db
#}
{% macro snapshot_hash_arguments(args) -%}
{{ adapter.dispatch('snapshot_hash_arguments')(args) }}
{{ adapter.dispatch('snapshot_hash_arguments', 'dbt')(args) }}
{%- endmacro %}
@@ -52,7 +52,7 @@
Get the current time cross-db
#}
{% macro snapshot_get_time() -%}
{{ adapter.dispatch('snapshot_get_time')() }}
{{ adapter.dispatch('snapshot_get_time', 'dbt')() }}
{%- endmacro %}
{% macro default__snapshot_get_time() -%}
@@ -75,7 +75,7 @@
table instead of assuming that the user-supplied {{ updated_at }}
will be present in the historical data.
See https://github.com/fishtown-analytics/dbt/issues/2350
See https://github.com/dbt-labs/dbt/issues/2350
*/ #}
{% set row_changed_expr -%}
({{ snapshotted_rel }}.dbt_valid_from < {{ current_rel }}.{{ updated_at }})
@@ -94,7 +94,7 @@
{% macro snapshot_string_as_time(timestamp) -%}
{{ adapter.dispatch('snapshot_string_as_time')(timestamp) }}
{{ adapter.dispatch('snapshot_string_as_time', 'dbt')(timestamp) }}
{%- endmacro %}

View File

@@ -12,7 +12,12 @@
schema=schema,
database=database,
type='table') -%}
-- the intermediate_relation should not already exist in the database; get_relation
-- will return None in that case. Otherwise, we get a relation that we can drop
-- later, before we try to use this name for the current operation
{%- set preexisting_intermediate_relation = adapter.get_relation(identifier=tmp_identifier,
schema=schema,
database=database) -%}
/*
See ../view/view.sql for more information about this relation.
*/
@@ -21,14 +26,15 @@
schema=schema,
database=database,
type=backup_relation_type) -%}
{%- set exists_as_table = (old_relation is not none and old_relation.is_table) -%}
{%- set exists_as_view = (old_relation is not none and old_relation.is_view) -%}
-- as above, the backup_relation should not already exist
{%- set preexisting_backup_relation = adapter.get_relation(identifier=backup_identifier,
schema=schema,
database=database) -%}
-- drop the temp relations if they exists for some reason
{{ adapter.drop_relation(intermediate_relation) }}
{{ adapter.drop_relation(backup_relation) }}
-- drop the temp relations if they exist already in the database
{{ drop_relation_if_exists(preexisting_intermediate_relation) }}
{{ drop_relation_if_exists(preexisting_backup_relation) }}
{{ run_hooks(pre_hooks, inside_transaction=False) }}
@@ -42,7 +48,7 @@
-- cleanup
{% if old_relation is not none %}
{{ adapter.rename_relation(target_relation, backup_relation) }}
{{ adapter.rename_relation(old_relation, backup_relation) }}
{% endif %}
{{ adapter.rename_relation(intermediate_relation, target_relation) }}

View File

@@ -1,3 +1,19 @@
{% macro get_test_sql(main_sql, fail_calc, warn_if, error_if, limit) -%}
{{ adapter.dispatch('get_test_sql', 'dbt')(main_sql, fail_calc, warn_if, error_if, limit) }}
{%- endmacro %}
{% macro default__get_test_sql(main_sql, fail_calc, warn_if, error_if, limit) -%}
select
{{ fail_calc }} as failures,
{{ fail_calc }} {{ warn_if }} as should_warn,
{{ fail_calc }} {{ error_if }} as should_error
from (
{{ main_sql }}
{{ "limit " ~ limit if limit != none }}
) dbt_internal_test
{%- endmacro %}
{%- materialization test, default -%}
{% set relations = [] %}
@@ -39,14 +55,7 @@
{% call statement('main', fetch_result=True) -%}
select
{{ fail_calc }} as failures,
{{ fail_calc }} {{ warn_if }} as should_warn,
{{ fail_calc }} {{ error_if }} as should_error
from (
{{ main_sql }}
{{ "limit " ~ limit if limit != none }}
) dbt_internal_test
{{ get_test_sql(main_sql, fail_calc, warn_if, error_if, limit)}}
{%- endcall %}

View File

@@ -1,9 +1,10 @@
{% macro handle_existing_table(full_refresh, old_relation) %}
{{ adapter.dispatch('handle_existing_table', macro_namespace = 'dbt')(full_refresh, old_relation) }}
{{ adapter.dispatch('handle_existing_table', 'dbt')(full_refresh, old_relation) }}
{% endmacro %}
{% macro default__handle_existing_table(full_refresh, old_relation) %}
{{ log("Dropping relation " ~ old_relation ~ " because it is of type " ~ old_relation.type) }}
{{ adapter.drop_relation(old_relation) }}
{% endmacro %}
@@ -19,7 +20,7 @@
*/
#}
{% macro create_or_replace_view(run_outside_transaction_hooks=True) %}
{% macro create_or_replace_view() %}
{%- set identifier = model['alias'] -%}
{%- set old_relation = adapter.get_relation(database=database, schema=schema, identifier=identifier) -%}
@@ -30,13 +31,7 @@
identifier=identifier, schema=schema, database=database,
type='view') -%}
{% if run_outside_transaction_hooks %}
-- no transactions on BigQuery
{{ run_hooks(pre_hooks, inside_transaction=False) }}
{% endif %}
-- `BEGIN` happens here on Snowflake
{{ run_hooks(pre_hooks, inside_transaction=True) }}
{{ run_hooks(pre_hooks) }}
-- If there's a table with the same name and we weren't told to full refresh,
-- that's an error. If we were told to full refresh, drop it. This behavior differs
@@ -50,14 +45,7 @@
{{ create_view_as(target_relation, sql) }}
{%- endcall %}
{{ run_hooks(post_hooks, inside_transaction=True) }}
{{ adapter.commit() }}
{% if run_outside_transaction_hooks %}
-- No transactions on BigQuery
{{ run_hooks(post_hooks, inside_transaction=False) }}
{% endif %}
{{ run_hooks(post_hooks) }}
{{ return({'relations': [target_relation]}) }}

View File

@@ -9,7 +9,12 @@
type='view') -%}
{%- set intermediate_relation = api.Relation.create(identifier=tmp_identifier,
schema=schema, database=database, type='view') -%}
-- the intermediate_relation should not already exist in the database; get_relation
-- will return None in that case. Otherwise, we get a relation that we can drop
-- later, before we try to use this name for the current operation
{%- set preexisting_intermediate_relation = adapter.get_relation(identifier=tmp_identifier,
schema=schema,
database=database) -%}
/*
This relation (probably) doesn't exist yet. If it does exist, it's a leftover from
a previous run, and we're going to try to drop it immediately. At the end of this
@@ -27,14 +32,16 @@
{%- set backup_relation = api.Relation.create(identifier=backup_identifier,
schema=schema, database=database,
type=backup_relation_type) -%}
{%- set exists_as_view = (old_relation is not none and old_relation.is_view) -%}
-- as above, the backup_relation should not already exist
{%- set preexisting_backup_relation = adapter.get_relation(identifier=backup_identifier,
schema=schema,
database=database) -%}
{{ run_hooks(pre_hooks, inside_transaction=False) }}
-- drop the temp relations if they exists for some reason
{{ adapter.drop_relation(intermediate_relation) }}
{{ adapter.drop_relation(backup_relation) }}
-- drop the temp relations if they exist already in the database
{{ drop_relation_if_exists(preexisting_intermediate_relation) }}
{{ drop_relation_if_exists(preexisting_backup_relation) }}
-- `BEGIN` happens here:
{{ run_hooks(pre_hooks, inside_transaction=True) }}
@@ -47,7 +54,7 @@
-- cleanup
-- move the existing view out of the way
{% if old_relation is not none %}
{{ adapter.rename_relation(target_relation, backup_relation) }}
{{ adapter.rename_relation(old_relation, backup_relation) }}
{% endif %}
{{ adapter.rename_relation(intermediate_relation, target_relation) }}
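The fix in this hunk renames old_relation, rather than target_relation, into the backup slot. As a rough sketch of the resulting sequence for a hypothetical model my_view in schema analytics (the names and the __dbt_tmp/__dbt_backup suffixes here are illustrative assumptions):

-- 1. build analytics.my_view__dbt_tmp as the new view
-- 2. rename the existing analytics.my_view to analytics.my_view__dbt_backup
-- 3. rename analytics.my_view__dbt_tmp to analytics.my_view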

@@ -7,7 +7,7 @@ with all_values as (
count(*) as n_records
from {{ model }}
group by 1
group by {{ column_name }}
)
@@ -28,6 +28,6 @@ where value_field not in (
{% test accepted_values(model, column_name, values, quote=True) %}
{% set macro = adapter.dispatch('test_accepted_values') %}
{% set macro = adapter.dispatch('test_accepted_values', 'dbt') %}
{{ macro(model, column_name, values, quote) }}
{% endtest %}
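Each of these generic tests now passes the macro namespace ('dbt') as the second argument to adapter.dispatch. A hedged sketch of the same pattern in a user package (all names here are made up for illustration):

{% macro get_current_timestamp() %}
    {{ adapter.dispatch('get_current_timestamp', 'my_package')() }}
{% endmacro %}

{% macro default__get_current_timestamp() %}
    current_timestamp
{% endmacro %}

{% macro bigquery__get_current_timestamp() %}
    current_timestamp()
{% endmacro %}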

@@ -8,6 +8,6 @@ where {{ column_name }} is null
{% test not_null(model, column_name) %}
{% set macro = adapter.dispatch('test_not_null') %}
{% set macro = adapter.dispatch('test_not_null', 'dbt') %}
{{ macro(model, column_name) }}
{% endtest %}

@@ -1,21 +1,30 @@
{% macro default__test_relationships(model, column_name, to, field) %}
with child as (
select {{ column_name }} as from_field
from {{ model }}
where {{ column_name }} is not null
),
parent as (
select {{ field }} as to_field
from {{ to }}
)
select
child.{{ column_name }}
from_field
from {{ model }} as child
from child
left join parent
on child.from_field = parent.to_field
left join {{ to }} as parent
on child.{{ column_name }} = parent.{{ field }}
where child.{{ column_name }} is not null
and parent.{{ field }} is null
where parent.to_field is null
{% endmacro %}
{% test relationships(model, column_name, to, field) %}
{% set macro = adapter.dispatch('test_relationships') %}
{% set macro = adapter.dispatch('test_relationships', 'dbt') %}
{{ macro(model, column_name, to, field) }}
{% endtest %}
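The rewritten relationships test filters out nulls in a child CTE and anti-joins it to a parent CTE. For a hypothetical orders.customer_id column referencing customers.id (table and column names assumed for illustration), the rendered query would look roughly like:

with child as (
    select customer_id as from_field
    from analytics.orders
    where customer_id is not null
),
parent as (
    select id as to_field
    from analytics.customers
)
select from_field
from child
left join parent
    on child.from_field = parent.to_field
where parent.to_field is null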

@@ -1,7 +1,7 @@
{% macro default__test_unique(model, column_name) %}
select
{{ column_name }},
{{ column_name }} as unique_field,
count(*) as n_records
from {{ model }}
@@ -13,6 +13,6 @@ having count(*) > 1
{% test unique(model, column_name) %}
{% set macro = adapter.dispatch('test_unique') %}
{% set macro = adapter.dispatch('test_unique', 'dbt') %}
{{ macro(model, column_name) }}
{% endtest %}
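Aliasing the column as unique_field gives the test output a predictable column name. Roughly, for a hypothetical order_id column on analytics.orders (names assumed, and the middle of the macro is not shown in this diff), the rendered test is along the lines of:

select
    order_id as unique_field,
    count(*) as n_records
from analytics.orders
group by order_id
having count(*) > 1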

File diff suppressed because one or more lines are too long

@@ -0,0 +1,4 @@
target/
dbt_modules/
logs/

@@ -0,0 +1,15 @@
Welcome to your new dbt project!
### Using the starter project
Try running the following commands:
- dbt run
- dbt test
### Resources:
- Learn more about dbt [in the docs](https://docs.getdbt.com/docs/introduction)
- Check out [Discourse](https://discourse.getdbt.com/) for commonly asked questions and answers
- Join the [chat](https://community.getdbt.com/) on Slack for live discussions and support
- Find [dbt events](https://events.getdbt.com) near you
- Check out [the blog](https://blog.getdbt.com/) for the latest news on dbt's development and best practices

@@ -0,0 +1,3 @@
import os
PACKAGE_PATH = os.path.dirname(__file__)

@@ -0,0 +1,38 @@
# Name your project! Project names should contain only lowercase characters
# and underscores. A good package name should reflect your organization's
# name or the intended use of these models
name: 'my_new_project'
version: '1.0.0'
config-version: 2
# This setting configures which "profile" dbt uses for this project.
profile: 'default'
# These configurations specify where dbt should look for different types of files.
# The `source-paths` config, for example, states that models in this project can be
# found in the "models/" directory. You probably won't need to change these!
source-paths: ["models"]
analysis-paths: ["analysis"]
test-paths: ["tests"]
data-paths: ["data"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]
target-path: "target" # directory which will store compiled SQL files
clean-targets: # directories to be removed by `dbt clean`
- "target"
- "dbt_modules"
# Configuring models
# Full documentation: https://docs.getdbt.com/docs/configuring-models
# In this example config, we tell dbt to build all models in the example/ directory
# as tables. These settings can be overridden in the individual model files
# using the `{{ config(...) }}` macro.
models:
my_new_project:
# Config indicated by + and applies to all files under models/example/
example:
+materialized: view

@@ -0,0 +1,27 @@
/*
Welcome to your first dbt model!
Did you know that you can also configure models directly within SQL files?
This will override configurations stated in dbt_project.yml
Try changing "table" to "view" below
*/
{{ config(materialized='table') }}
with source_data as (
select 1 as id
union all
select null as id
)
select *
from source_data
/*
Uncomment the line below to remove records with null `id` values
*/
-- where id is not null

@@ -0,0 +1,6 @@
-- Use the `ref` function to select from other models
select *
from {{ ref('my_first_dbt_model') }}
where id = 1

@@ -0,0 +1,21 @@
version: 2
models:
- name: my_first_dbt_model
description: "A starter dbt model"
columns:
- name: id
description: "The primary key for this table"
tests:
- unique
- not_null
- name: my_second_dbt_model
description: "A starter dbt model"
columns:
- name: id
description: "The primary key for this table"
tests:
- unique
- not_null

Some files were not shown because too many files have changed in this diff.