Compare commits

...

311 Commits

Author SHA1 Message Date
Gerda Shank
b6064203e3 Possible method to get model_str from test node 2022-05-04 12:02:22 -04:00
Matthew McKnight
c270a77552 proposal for modification to drop_test_schema (#5198)
* proposal for modification to drop_test_schema

* changelog

* remove hard coded run_dbt version and put back previous version of drop_test_schema, add commit to drop_schema
2022-05-03 13:10:02 -05:00
Alex Rosenfeld
a2e040f389 Allow the target name to be set in profile_template.yml (#5184)
* Allow the target name to be set in profile_template.yml
2022-05-03 11:44:40 -05:00
Michael Manganiello
a4376b96d8 seed: Add new macro get_csv_sql (#5207)
new macro `get_csv_sql`
2022-05-03 11:32:24 -05:00
Gerda Shank
ed5df342ca Convert 013_context_vars_tests to context_methods. Move existing cli_vars into context_methods (#5199) 2022-05-02 16:02:37 -04:00
Emily Rockman
96f063e077 update label check (#5194) 2022-05-02 11:51:23 -05:00
leahwicz
ee8f81de6a Adding Skip Changelog label to Version Bump action (#5203) 2022-05-02 12:40:58 -04:00
Emily Rockman
935edc70aa remove reference to unused ok to test label (#5149) 2022-05-02 09:21:17 -05:00
dependabot[bot]
28c44a9be7 Bump ubuntu from 20.04 to 22.04 (#5141)
* Bump ubuntu from 20.04 to 22.04
2022-04-29 15:53:25 -05:00
Stu Kilgore
a2b3602485 Convert list tests to pytest (#5178) 2022-04-28 15:00:39 -05:00
Jeremy Yeo
3733817488 Fix: add warning on duplicated yaml keys (#5146)
* add warning on duplicated yaml keys

* update structure and tests

* fix old test schema file

* add changelog
2022-04-28 09:31:18 -04:00
Jeremy Cohen
c5fb6c275a Update README for dbt-tests-adapter (#5182)
* Update README for dbt-tests-adapter

* Add logo
2022-04-28 15:26:02 +02:00
Gerda Shank
f633e9936f When parsing 'all_sources' should be a list of unique dirs (#5176)
* When parsing 'all_sources' should be a list of unique dirs

* Changie

* Fix some unit tests of all_source_paths

* Convert 039_config_tests

* Remove old 039_config_tests

* Add test for duplicate directories in 'all_source_files'
2022-04-27 21:02:51 -04:00
Gerda Shank
4e57c51c7a Ct-65 metrics names with spaces (#5173)
* Convert existing metrics test

* add non-failing test for names with spaces

* Raise ParsingException if metrics name contains spaces

* Remove old metrics tests
2022-04-27 10:57:32 -04:00
Gary James
6267572ba7 Fix adding new cols to check_cols in snapshots (#4893) 2022-04-26 18:55:14 -04:00
Daniel Diamond
32e1924c3b Add selector method capabilities to selectors (#4827) 2022-04-26 18:52:08 -04:00
Chenyu Li
55af3c78d7 remove extra class and add connection test (#5163)
* remove extra class and add connection test

* add project artifact to avoid breaking other tests

* add comment
2022-04-26 16:07:23 -06:00
Mila Page
bdff19d909 migrate 009_data_tests to new test framework (#5139)
* Fold so-called 'data' test into new framework with new vocabulary to match.
* Add missing files including changelog.
* Remove unneeded Changelog per team policy on test conversions.
* Refactor test code to better use our pytest framework. Strengthen assertions.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-04-26 14:59:27 -07:00
Ben Dowling
f87c7819fb Add itertools to modules (#5140)
* GH hygiene: contributing guide, templates, stalebot (#4967)

* Update contributing guide

* Update issue + PR templates

* Stalebot for all issues, no exceptions

* Update links

* Missed one

* PR feedback

* Update CHANGELOG

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2022-04-26 16:32:06 -05:00
Mila Page
33694f3772 Ct 488/migrate invalid model tests (#5143)
* First test completed.
* Convert and update more test cases.
* Complete test migration and remove old files.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-04-26 13:38:34 -07:00
Emily Rockman
ebfc18408b Tweak triage-label.yml to trigger off issue labels instead of PR labels (#5168)
* fix label check

* fix label filter to go against issues not PRs
2022-04-26 15:25:20 -05:00
Emily Rockman
6958f4f12e add triage label workflow (#5164) 2022-04-26 13:39:09 -05:00
Gerda Shank
1f898c859a Use yaml renderer (with target context) for rendering selectors (#5136)
* Use yaml renderer (with target context) for rendering selectors

* Changie

* Convert cli_vars tests

* Add test for var in profiles

* Add test for cli vars in packages

* Add test for vars in selectors
2022-04-26 11:42:50 -04:00
Jeremy Cohen
ce0bcc08a6 Even more scrubbing (#5152)
* Even more scrubbing

* Changelog entry

* Even more

* remove redundant scrub

* remove redundant scrub

* fix encoding issue

* keep scrubbed log in args

Co-authored-by: Chenyu Li <chenyu.li@dbtlabs.com>
2022-04-26 09:35:01 -06:00
leahwicz
d1ae9dd37f Updating backport Action permissions (#5121) 2022-04-25 09:33:49 -04:00
Emily Rockman
31a3f2bdee fix retry logic failures (#5137)
* fix retry logic failures

* changelog

* add tests to make sure data is getting where it needs to

* rename file

* remove duplicate file
2022-04-25 06:08:57 -05:00
Jeremy Cohen
1390715590 GH hygiene: contributing guide, templates, stalebot (#4967)
* Update contributing guide

* Update issue + PR templates

* Stalebot for all issues, no exceptions

* Update links

* Missed one

* PR feedback
2022-04-22 13:57:15 +02:00
Doug Beatty
d09459c980 Restore ability to utilize updated_at for check_cols snapshots (#5077)
* Restore ability to configure and utilize `updated_at` for snapshots using the check_cols strategy

* Changelog entry

* Optional comparison of column names starting with `dbt_`

* Functional test for check cols snapshots using `updated_at`

* Comments to explain the test implementation
2022-04-21 06:56:19 -06:00
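
For context on the entry above: a minimal Python sketch of the column-selection idea the bullets describe — which columns a check_cols snapshot comparison looks at, optionally skipping dbt_-prefixed metadata columns such as `dbt_updated_at`. Function and parameter names here are hypothetical, not dbt-core's implementation.

```python
# Hypothetical sketch, not dbt-core code: pick which columns a check_cols
# snapshot comparison should look at, optionally ignoring dbt_-prefixed
# metadata columns such as dbt_updated_at.
from typing import Iterable, List, Union


def columns_to_compare(
    all_columns: Iterable[str],
    check_cols: Union[str, List[str]] = "all",
    exclude_dbt_columns: bool = True,
) -> List[str]:
    candidates = list(all_columns) if check_cols == "all" else list(check_cols)
    if exclude_dbt_columns:
        candidates = [c for c in candidates if not c.lower().startswith("dbt_")]
    return candidates


print(columns_to_compare(["id", "status", "dbt_updated_at"]))  # ['id', 'status']
```
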
Emily Rockman
979e1c74bf add new GHA for dependabot PRs (#5065)
* add new GHA for dependabot PRs

* Add automated changelog entry

* code cleanup

* remove changelog file

* permissions tweak

* Add automated changelog yaml from template

* update commit author

* Add automated changelog yaml from template

* fix formatting, remove changelog

* revert to separate files and comment out changelog check temporarily

* Add automated changelog yaml from template

* add back changelog check, update how commit works

* remove file

* Add automated changelog yaml from template

* WIP update to use PAT

* update PAT name

* remove file

* Add automated changelog yaml from template

* format file with quotes

* delete file

* Add automated changelog yaml from template

* remove extra line

* remove file

* Add automated changelog yaml from template

* Delete Dependencies-20220418-194629.yaml

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-04-20 15:55:58 -05:00
Gerda Shank
7d0fccd63f Convert docs_generate_tests to new framework (#5058) 2022-04-20 11:30:34 -04:00
Emily Rockman
37b8b65aad cleanup changelog files on main after cutting release branch (#5112)
* clean up changelog files on main after release branch was cut

* last file that is backported
2022-04-19 15:24:45 -05:00
Emily Rockman
0211668361 CT-476 convert deprecation tests (#5034)
* first pass at 012 test conversion

* convert 012_deprecation tests

* add logic around dropping schema

* swap exception

* added clarifying comment
2022-04-19 13:07:05 -05:00
leahwicz
f8c8322bb4 Updating backport action to latest (#5082)
* Updating backport action to latest

* Updating to PR trigger with permissions instead

This is a better model for closing down all permissions and just granting what we actually want

* Updating IF when merged and backport label exists

* Changing to only trigger on label being added
2022-04-18 15:36:43 -04:00
Mila Page
14c2bd9959 Ct 488/migrate simple seed (#5060)
* (finally) idiomatically rewrite a class of tests into the new framework.

* Get simple seed mostly working with design tweaks needed.

* Revamp tests to use more of the framework. Fix TODOs

* Complete migration of 005 and remove old files.

* Fix BOM test for Windows and add changelog entry

* Finalize tests in the adapter zone per conversation with Chenyu.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-04-14 13:18:31 -07:00
Emily Rockman
8db6bac1db move deprecation check outside package caching (#5069)
* move deprecation check outside package caching

* add changelog
2022-04-14 14:13:38 -05:00
Joel Labes
080dd41876 Clarify steps to reopen a stale issue (#4802)
I haven't added a message for stale PRs because they're likely to only impact the opening user (who I assume can reopen their own PR) and they're less of a problem. Happy to add that in as well, and to take feedback on the specific phrasing here.
2022-04-14 17:50:27 +02:00
Amy Byrum
8e9702cec5 Add updated dbt diagram for readme (#5055) 2022-04-13 14:34:17 -06:00
Michael Manganiello
5ff81c244e Flexibilize MarkupSafe pinned version (#5039)
* Flexibilize MarkupSafe pinned version

The current `MarkupSafe` pinned version has been added in #4746 as a
temporary fix for #4745.

However, the current restrictive approach isn't compatible with other
libraries that could require an even older version of `MarkupSafe`, like
Airflow `2.2.2` [0], which requires `markupsafe>=1.1.1, <2.0`.

To avoid that issue, we can allow a greater range of supported
`MarkupSafe` versions. Considering the direct dependency `dbt-core` has
is `Jinja2==2.11.3`, we can use its pinning as the lower bound, which is
`MarkupSafe>=0.23` [1].

This fix should also be backported to `1.0.latest` for inclusion in
the next v1.0 patch.

[0] https://github.com/adamantike/airflow/blob/2.2.2/setup.cfg#L125
[1] https://github.com/pallets/jinja/blob/2.11.3/setup.py#L53
2022-04-13 13:44:27 -06:00
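
For context on the entry above: a hedged sketch of what a widened pin could look like in a setup.py, using Jinja2 2.11.3's own lower bound (`MarkupSafe>=0.23`). The package name and the upper bound shown are illustrative, not the exact spec merged in this PR.

```python
# Illustrative setup.py fragment only (not dbt-core's actual setup.py):
# widen the MarkupSafe range instead of hard-pinning a single version,
# keeping the lower bound that Jinja2==2.11.3 itself requires.
from setuptools import setup

setup(
    name="example-package",  # placeholder project name
    version="0.0.1",
    install_requires=[
        "Jinja2==2.11.3",
        "MarkupSafe>=0.23,<2.1",  # upper bound shown here is illustrative
    ],
)
```
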
github-actions[bot]
cfe81e81fd Bumping version to 1.2.0a1 (#5045)
* Bumping version to 1.2.0a1

* Fixing spacing issue

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2022-04-12 17:00:07 -04:00
leahwicz
365414b5fc Bumping manifest schema to v5 (#5032)
* Bumping manifest schema to v5

* Adding changelog
2022-04-12 16:06:24 -04:00
Nathaniel May
ec46be7368 Perf regression testing - overhaul of readme and runner (#4602) 2022-04-12 16:00:55 -04:00
Stu Kilgore
f23a403468 Update version output and logic (#5029)
Update version output and logic
2022-04-12 14:36:55 -05:00
Benoit Perigaud
15ad34e415 Add selected_resources to the Jinja context (#5001)
* Add selected_resources in the Jinja context

* Add tests for the Jinja variable selected_resources

* Add Changie entry for the addition of selected_resources

* Move variable to the ProviderContext

* Move selected_resources from ModelContext to ProviderContext

* Update unit tests for context to cater for the new selected_resources variable

* Move tests to a Class where tests are run after a dbt build
2022-04-12 10:25:45 -06:00
Jeremy Cohen
bacc891703 Add experimental cache_selected_only config (#5036)
* cache schema for selected models

* Create Features-20220316-003847.yaml

* rename flag, update postgres adapter

rename flag to cache_selected_only, update postgres adapter: function _relations_cache_for_schemas

* Update Features-20220316-003847.yaml

* added test for cache_selected_only flag

* formatted as per pre-commit

* Add breaking change note for adapter plugin maintainers

* Fix whitespace

* Add a test

Co-authored-by: karunpoudel-chr <poudel.karun@gmail.com>
Co-authored-by: karunpoudel-chr <62040859+karunpoudel@users.noreply.github.com>
2022-04-12 18:04:39 +02:00
Emily Rockman
a2e167761c add more complete logic around changelog contributors section (#5037)
* add more complete logic around changelog contributors section

* add instructions for future core team members

* Update .changie.yaml
2022-04-12 10:35:21 -05:00
Emily Rockman
cce8fda06c Add enabled as a source config (#5008)
* initial pass at source config test w/o overrides

* Update tests/functional/sources/test_source_configs.py

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>

* Update tests/functional/sources/test_source_configs.py

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>

* tweaks from feedback

* clean up some test logic - add override tests

* add new fields to source config class

* fix odd formatting

* got a test working

* removed unused tests

* removed extra fields from SourceConfig class

* fixed next failing unit test

* adding back missing import

* first pass at adding source table configs

* updated remaining tests to pass

* remove source override tests

* add comment for config merging

* changelog

* remove old comments

* hacky fix for permission test

* remove unhelpful test

* adding back test file that was accidentally deleted

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
Co-authored-by: Chenyu Li <chenyu.li@dbtlabs.com>
2022-04-12 10:27:29 -05:00
leahwicz
dd4ac1ba4a Updating tests and doc to support Python 3.10 (#5025)
* Updating tests and doc to support Python 3.10

* Single quotes needed for python version matrix

* Adding changelog
2022-04-12 10:52:44 -04:00
Sung Won Chung
401ebc2768 Smart Source Freshness Runs (#4256)
* first draft

* working selector code

* remove debug print logs

* copy test template

* add todo

* smarter depends on graph searching notes

* add excluded source children nodes

* remove prints and clean up logger

* opinionated fresh node selection

* better if handling

* include logs with meaningful info

* add concurrent selectors note

* cleaner logging

* Revert "Merge branch 'main' of https://github.com/dbt-labs/dbt into feature/smart-source-freshness-runs"

This reverts commit 7fee4d44bf, reversing
changes made to 17c47ff42d.

* tidy up logs

* remove comments

* handle criterion that does not match nodes

* use a blank set instead

* Revert "Revert "Merge branch 'main' of https://github.com/dbt-labs/dbt into feature/smart-source-freshness-runs""

This reverts commit 71125167a1.

* make compatible with rc and new logger

* new log format

* new selector flag name

* clarify that status needs to be correct

* compare current and previous state

* correct import

* add current state

* remove print

* add todo

* fix error conditions

* clearer refresh language

* don't run wasteful logs

* remove for now

* cleaner syntax

* turn generator into set

* remove print

* add fresh selector

* data bookmarks matter only

* remove exclusion logic for status

* keep it DRY

* remove unused import

* dynamic project root

* dynamic cwd

* add TODO

* simple profiles_dir import

* add default target path

* headless path utils

* draft work

* add previous sources artifact read

* make PreviousState aware of t-2 sources

* make SourceFreshSelectorMethod aware of t-2 sources

* add archive_path() for t-2 sources to freshness.py

* clean up merged branches

* add to changelog

* rename file

* remove files

* remove archive path logic

* add in currentstate and previousstate defaults

* working version of source fresher

* syntax source_fresher: works

* fix quoting

* working version of target_path default

* None default to sources_current

* updated source selection semantics

* remove todo

* move to test_sources folder

* copy over baseline source freshness tests

* clean up

* remove test file

* update state with version checks

* fix flake tests

* add changelog

* fix name

* add base test template

* delegate tests

* add basic test to ensure nothing runs

* add another basic test

* fix test with copy state

* run error test

* run warn test

* run pass test

* error handling for runtime error in source freshness

* error handling for runtime error in source freshness

* add back fresher selector condition

* top level selector condition

* add runtime error test

* testing source fresher test selection methods

* fix formatting issues

* fix broken tests

* remove old comments

* fix regressions in other tests

* add Anais test cases

* result selector test case

Co-authored-by: Matt Winkler <matt.winkler@fishtownanalytics.com>
2022-04-12 15:08:06 +02:00
Emily Rockman
83612a98b7 cache after retrying instead of while retrying (#5028) 2022-04-11 19:53:11 -05:00
Leopoldo Araujo
827eae2750 Added no-print flag (#4854)
* Added no-print flag

* Updated changelog

* Updated changelog

* Removed changes from CHANGELOG.md

* Updated CHANGELOG.MD with changie

* Update .changes/unreleased/Features-20220408-114118.yaml

Co-authored-by: Emily Rockman <ebuschang@gmail.com>

Co-authored-by: Emily Rockman <ebuschang@gmail.com>
2022-04-11 13:48:34 -05:00
Emily Rockman
3a3bedcd8e Update index file for docs generation (#4995)
* Update index file for docs generation

* add changelog entries
2022-04-11 11:31:03 -05:00
Stu Kilgore
c1dfb4e6e6 Convert --version tests to pytest (#5026)
Convert --version tests to pytest
2022-04-11 11:04:45 -05:00
Gerda Shank
5852f17f0b Fix hard_delete_snapshot test to do the right thing. (#5020) 2022-04-08 16:18:01 -04:00
dependabot[bot]
a94156703d Bump black from 22.1.0 to 22.3.0 (#4972)
* Bump black from 22.1.0 to 22.3.0
2022-04-08 15:10:36 -05:00
Ian Knox
2b3fb7a5d0 updated docker readme CT-452 (#5018) 2022-04-08 14:30:25 -05:00
Ian Knox
5f2a43864f Decouple project creation logic from tasks CT-299 (#4981) 2022-04-08 14:28:37 -05:00
Ian Knox
88fbc94db2 added git-blame-ignore-revs file (#5019) 2022-04-08 14:20:43 -05:00
Chenyu Li
6c277b5fe1 make graph_selection tests just checking selection (#5012)
* make graph_selection tests just checking selection

* use util function
2022-04-08 11:04:54 -06:00
Chenyu Li
40e64b238c adapter_methods (#4939)
* adapter_methods

* fix fixture scope

* update table compare method

* remove unneeded part

* update test name and comment
2022-04-08 08:32:21 -06:00
Ian Knox
581bf51574 updated event message (#5011) 2022-04-08 09:12:49 -05:00
Gerda Shank
899b0ef224 Remove TableComparison and convert existing calls to use dbt.tests.util (#4986) 2022-04-07 13:04:03 -04:00
Matthew McKnight
3ade206e86 init push up of converted unique_key tests (#4958)
* init push up of converted unique_key tests

* testing cause of failure

* adding changelog entry

* moving non basic test up one directory to be more broadly part of adapter zone

* minor changes to the bad_unique_key tests

* removed unused fixture

* moving tests to base class and inheriting in a simple class

* taking in chenyu's changes to fixtures

* remove older test_unique_key tests

* removed commented out code

* uncommenting seed_count

* v2 based on feedback for base version of testing, plus small removal of leftover breakpoint

* create incremental test directory in adapter zone

* commenting out TableComparison and trying to implement check_relations_equal instead

* remove unused commented out code

* changing cast for date to fix test to work on bigquery
2022-04-07 11:29:52 -05:00
agoblet
58bd750007 add DO_NOT_TRACK environment variable support (#5000) 2022-04-07 11:45:29 -04:00
Matthew McKnight
0ec829a096 include directory README (#4685)
* start of a README for the include directory

* minor updates

* minor updates after comments from gerda and emily

* trailing space issue?

* black formatting

* minor word change

* typo update

* minor fixes and changelog creation

* remove changelog
2022-04-06 11:53:59 -05:00
Emily Rockman
7f953a6d48 [CT-352] catch and retry malformed json (#4982)
* catch None and malformed json responses

* add json.dumps for format

* format

* Cache registry request results. Avoid one request per version

* updated to be direct in type checking

* add changelog entry

* add back logic for none check

* PR feedback: memoize > global

* add checks for expected types and keys

* consolidated cache and retry logic

* minor cleanup for clarity/consistency

* add pr review suggestions

* update unit test

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2022-04-05 10:44:00 -05:00
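
For context on the entry above: a hypothetical sketch of the pattern the bullets describe — memoize the registry response per package and retry when the payload is missing, malformed, or not the expected shape. The URL, names, and retry policy are illustrative, not dbt-core's code.

```python
# Hypothetical sketch, not dbt-core code: cache the registry index per
# package and retry when the response is malformed JSON or an unexpected type.
import functools
import time

import requests


@functools.lru_cache(maxsize=None)
def fetch_package_index(package_name: str, attempts: int = 3) -> dict:
    url = f"https://registry.example.com/api/v1/{package_name}.json"  # placeholder URL
    last_error: Exception = RuntimeError("no attempts made")
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=10)
            data = resp.json()  # raises ValueError on malformed JSON
            if not isinstance(data, dict) or "versions" not in data:
                raise ValueError(f"unexpected registry payload: {data!r}")
            return data  # memoized by lru_cache, so one request per package
        except (requests.RequestException, ValueError) as exc:
            last_error = exc
            time.sleep(2 ** attempt)
    raise RuntimeError(f"registry request failed after {attempts} attempts") from last_error
```
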
Snyk bot
0b92f04683 [Snyk] Security upgrade python from 3.9.9-slim-bullseye to 3.10.3-slim-bullseye (#4963)
* fix: docker/Dockerfile to reduce vulnerabilities

The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-DEBIAN11-EXPAT-2403512
- https://snyk.io/vuln/SNYK-DEBIAN11-EXPAT-2406127
- https://snyk.io/vuln/SNYK-DEBIAN11-OPENSSL-2388380
- https://snyk.io/vuln/SNYK-DEBIAN11-OPENSSL-2426309
- https://snyk.io/vuln/SNYK-DEBIAN11-OPENSSL-2426309

* add changelog entry

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2022-04-04 12:57:43 -04:00
Jeremy Cohen
3f37a43a8c Remove unneeded code in default snapshot materialization (#4993)
* Rm unneeded create_schema in snapshot mtlzn

* Add changelog entry
2022-04-04 17:25:53 +02:00
Gerda Shank
204d53516a Create a dbt.tests.adapter release when releasing dbt and postgres (#4948)
* update black version for pre-commit
2022-03-29 19:38:33 -04:00
Jeremy Cohen
5071b00baa Custom names for generic tests (#4898)
* Support user-supplied name for generic tests

* Support dict-style generic test spec

* Add changelog entry

* Add TODO comment

* Rework raise_duplicate_resource_name

* Add functional tests

* Update comments, rm TODO

* PR feedback
2022-03-25 17:09:35 +01:00
Emily Rockman
81118d904a Convert source tests (#4935)
* convert 059 to new test framework

* remove replaced tests

* WIP, has pre-commit errors

* WIP, has pre-commit errors

* one failing test, most issued resolved

* fixed final test and cleaned up fixtures

* remove converted tests

* updated test to work on windows

* remove config version
2022-03-24 09:19:54 -05:00
Jeremy Cohen
69cdc4148e Cosmetic changelog/changie fixups (#4944)
* Reorder kinds in changie

* Reorder change categories for v1.1.0b1

* Update language for breaking change

* Contributors deserve an h3

* Make pre-commit happy? Update language

* Rm trailing whitespace
2022-03-24 12:17:55 +01:00
Chenyu Li
1c71bf414d remove capping version of typing extensions (#4934) 2022-03-23 14:08:26 -04:00
Chenyu Li
7cf57ae72d add compilation and cache tracking (#4912) 2022-03-23 14:05:50 -04:00
kadero
1b6f95fef4 Fix inconsistent timestamps snapshots (#4513) 2022-03-23 12:05:42 -05:00
github-actions[bot]
38940eeeea Bumping version to 1.1.0b1 (#4933)
* Bumping version to 1.1.0b1
2022-03-23 09:28:50 -05:00
Ian Knox
6c950bad7c updated bumpversion (#4932) 2022-03-22 15:02:52 -05:00
Joel Labes
5e681929ae Add space before justification periods (#4744)
* Update format.py

* Update CHANGELOG.md

* add change file

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
2022-03-22 15:18:38 -04:00
Matthew McKnight
ea5a9da71e update of macro for postgres/redshift use of unique_key as a list (#4858)
* pre-commit additions

* added changie changelog entry

* moving integration test over

* Pair programming

* removing ref to mapping as it seems to be an unnecessary check; unique_key tests pass locally for postgres

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2022-03-22 10:24:21 -05:00
leahwicz
9c5ee59e19 Updating backport workflow to use forked action (#4920) 2022-03-22 09:10:30 -04:00
Emily Rockman
55b1d3a191 changie - convert changelogs to yaml files and make quality of life improvements (#4917)
* convert changelog to changie yaml files

* update contributor format and README instructions

* update action to rerun when labeled/unlabeled

* remove synchronize from action

* remove md file replaced by the yaml

* add synchronize and comment of what's happening

* tweak formatting
2022-03-21 20:15:52 -05:00
Ian Knox
a968aa7725 added permissions settings for docker release workflow (#4903) 2022-03-18 10:40:05 -05:00
Gerda Shank
5e0a765917 Set up adapter testing framework for use by adapter test repos (#4846) 2022-03-17 18:01:09 -04:00
Ian Knox
0aeb9976f4 remove missing setup.py file (holdover from pip install dbt (#4896) 2022-03-17 16:52:02 -05:00
Nathaniel May
30a7da8112 [HOTFIX] update dbt-extractor dependency (#4890)
* use PEP 440 compatible release operator for dbt-extractor dependency. bump to 0.4.1.
2022-03-17 16:44:30 -04:00
Matthew McKnight
f6a9dae422 FEAT: new columns in snapshots for adapters w/o bools (#4871)
* FEAT: new columns in snapshots for adapters w/o bools

* trigger gha workflow

* using changie to make changelog

* updating to be on par with main

Co-authored-by: swanderz <swanson.anders@gmail.com>
2022-03-17 10:10:23 -05:00
Gerda Shank
62a7163334 Use cli_vars instead of context to create package and selector renderers (#4878) 2022-03-17 09:27:39 -04:00
Mila Page
e2f0467f5d Add bugged version tag value to finds. (#4816)
* Change property file version exception to reflect current name and offer clearer guidance in comments.
* Add example in case of noninteger version tag just to drive the point home to readers.
2022-03-16 14:59:48 -07:00
Mila Page
3e3ecb1c3f get_response type hint is AdapterResponse only. (#4869)
* get_response type hint is AdapterResponse only.
* Propagate changes to get_response return type to execute
2022-03-16 14:54:39 -07:00
Nathaniel May
27511d807f update test project (#4875) 2022-03-16 16:35:07 -04:00
Ian Knox
15077d087c python 3.10 support (#4866)
* python 3.10 support
2022-03-15 19:35:28 -05:00
Emily Rockman
5b01cc0c22 catch all requests exceptions to retry (#4865)
* catch all requests exceptions to retry

* add changelog
2022-03-15 11:57:07 -05:00
Chenyu Li
d1bcff865d pytest conversion test_selection, schema_tests, fail_fast, permission (#4826) 2022-03-15 11:12:30 -04:00
Emily Rockman
0fce63665c Small changie fixes (#4857)
* fix broken links, update GHA to not repost comment

* tweak GHA

* convert GHA used

* consolidate GHA

* fix PR numbers and pull comment as var

* fix name of workflow step

* changie merge to fix link at top of changelog

* add changelog yaml
2022-03-11 14:54:33 -06:00
Emily Rockman
1183e85eb4 Er/ct 303 004 simple snapshot (#4838)
* convert single test in 004

* WIP

* incremental conversion

* WIP test not running

* WIP

* convert test_missing_strategy, cross_schema_snapshot

* comment

* converting to class based test

* clean up

* WIP

* converted 2 more tests

* convert hard delete test

* fixing inconsistencies, adding comments

* more conversion

* implementing class scope changes

* clean up unused code

* remove old test, get all new ones running

* fix typos

* append file names with snapshot to reduce collision

* moved all fixtures into test files

* stop using tests as fixtures
2022-03-11 14:52:54 -06:00
dependabot[bot]
3b86243f04 Update typing-extensions requirement from <3.11,>=3.7.4 to >=3.7.4,<4.2 in /core (#4719)
* Update typing-extensions requirement in /core

Updates the requirements on [typing-extensions](https://github.com/python/typing) to permit the latest version.
- [Release notes](https://github.com/python/typing/releases)
- [Changelog](https://github.com/python/typing/blob/master/typing_extensions/CHANGELOG)
- [Commits](https://github.com/python/typing/compare/3.7.4...4.1.0)

---
updated-dependencies:
- dependency-name: typing-extensions
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Empty-Commit

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: ChenyuLi <chenyu.li@dbtlabs.com>
2022-03-10 15:42:20 -05:00
willbowditch
c251dae75e [CT-271] [Feature] not_null test selects column instead of * (#4777)
* Only select target column for not_null test

* If storing failures include all columns in the select, if not, only select the column being tested

It's desirable for this test to include the full row output when using --store-failures. If the query result stored in the database contained just the null values of the tested column, it couldn't do much to contextualize why those rows are null.

* Update changelog

* chore: update changelog using changie

* Revert "Update changelog"

This reverts commit 281d805959e15694784cfa3a078fc5ef059c06b5.
2022-03-09 21:31:15 -05:00
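
For context on the entry above: a rough Python sketch of the selection logic described in the bullets (the real change is to a Jinja SQL test macro in dbt-core); the function name is illustrative.

```python
# Rough sketch of the logic described above, written in Python for
# illustration; the actual change lives in a Jinja SQL macro.
def not_null_test_sql(model: str, column_name: str, store_failures: bool) -> str:
    # When failures are stored, keep the whole row so the stored result has
    # context; otherwise only the tested column needs to be selected.
    columns = "*" if store_failures else column_name
    return f"select {columns} from {model} where {column_name} is null"


print(not_null_test_sql("analytics.orders", "order_id", store_failures=False))
# select order_id from analytics.orders where order_id is null
```
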
Emily Rockman
ecfd77f1ca Small updates to clarify change destinations (#4841)
* update to reflect this branch is for the 1.1 release

* update to use next

* remove next logic

* add yaml changes also marked for unreleased 1.0.4
2022-03-08 13:18:24 -06:00
Emily Rockman
9a0abc1bfc Automate changelog (#4743)
* initial setup to use changie

* added `dbt-core` to version line

* fix formatting

* rename to be more accurate

* remove extra file

* add stub for contributing section

* updated docs for contributing and changelog

* first pass at changelog check

* Fix workflow name

* comment on handling failure

* add automatic contributors section via footer

* removed unused initialization

* add script to automate entire changelog creation and handle prereleases

* stub out README

* add changelog entry!

* no longer need to add contributors ourselves

* fixed formatting and excluded core team

* fix typo and collapse if statement

* updated to reflect automatic pre-release handling

Removed custom script in favor of built in pre-release functionality in new version of changie.

* update contributing doc

* pass at GHA

* fix path

* all changed files

* more GHA work

* continued GHA work

* try another approach

* testing

* adding comment via GHA

* added uses for GHA

* more debugging

* fixed formatting

* another comment attempt

* remove read permission

* add label check

* fix quotes

* checking label logic

* test forcing failure

* remove extra script tag

* removed logic for having changelog

* Revert "removed logic for having changelog"

This reverts commit 490bda8256.

* remove unused workflow section

* update header and readme

* update with current version of changelog

* add step failure for missing changelog file

* fix typos and formatting

* small tweaks per feedback

* Update so changelog ends up only with current version, not past

* update changelog to recent contents

* added the rest of our releases to previous release list

* clarifying the readme

* updated to reflect current changelog state

* updated so only 1.1 changes are on main
2022-03-07 20:12:33 -06:00
Gerda Shank
490d68e076 Switch to using class scope fixtures (#4835)
* Switch to using class scope fixtures

* Reorganize some graph selection tests because of ci errors
2022-03-07 14:38:36 -05:00
Stu Kilgore
c45147fe6d Fix macro modified from previous state (#4820)
* Fix macro modified from previous state

Previously, if the first node selected by state:modified had multiple
dependencies, the first of which had not been changed, the rest of the
macro dependencies of the node would not be checked for changes. This
commit fixes this behavior, so the remainder of the macro dependencies
of the node will be checked as well.
2022-03-07 08:23:59 -06:00
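
For context on the entry above: a hypothetical before/after illustration of the behavior the message describes — deciding from only the first macro dependency versus checking them all. Helper names are made up for illustration, not dbt-core's selector code.

```python
# Hypothetical illustration of the bug described above: returning as soon as
# the first macro dependency is seen (buggy) versus checking every macro the
# node depends on (fixed).
from typing import Callable, Iterable


def macro_modified_buggy(depends_on_macros: Iterable[str],
                         is_modified: Callable[[str], bool]) -> bool:
    for macro_id in depends_on_macros:
        # Bug: decides based only on the first dependency encountered.
        return is_modified(macro_id)
    return False


def macro_modified_fixed(depends_on_macros: Iterable[str],
                         is_modified: Callable[[str], bool]) -> bool:
    # Fix: keep checking until any macro dependency turns out to be modified.
    return any(is_modified(macro_id) for macro_id in depends_on_macros)


changed = {"macro.my_project.changed_macro"}
deps = ["macro.my_project.unchanged_macro", "macro.my_project.changed_macro"]
print(macro_modified_buggy(deps, changed.__contains__))  # False (misses the change)
print(macro_modified_fixed(deps, changed.__contains__))  # True
```
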
Gerda Shank
bc3468e649 Convert tests in dbt-adapter-tests to use new pytest framework (#4815)
* Convert tests in dbt-adapter-tests to use new pytest framework

* Filter out ResourceWarning for log file

* Move run_sql to dbt.tests.util, fix check_cols definition

* Convert jaffle_shop fixture and test to use classes

* Tweak run_sql methods, rename some adapter file pieces, add comment
to dbt.tests.adapter.

* Add some more comments
2022-03-03 16:53:41 -05:00
Kyle Wigley
8fff6729a2 simplify and cleanup gha workflow (#4803) 2022-03-02 10:21:39 -05:00
varun-dc
08f50acb9e Fix stdout piped colored output on MacOS and Linux (#4792)
* Fix stdout pipe output coloring

* Update CHANGELOG.md

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
2022-03-01 17:23:51 -05:00
Chenyu Li
436a5f5cd4 add coverage (#4791) 2022-02-28 09:17:33 -05:00
Emily Rockman
aca710048f ct-237 test conversion 002_varchar_widening_tests (#4795)
* convert 002 integration test

* remove original test

* moved varchar test under basic folder
2022-02-25 14:25:22 -06:00
Emily Rockman
673ad50e21 updated index file to fix DAG errors for operations & work around null columns (#4763)
* updated index file to fix DAG errors for operations

* update index file to reflect dbt-docs fixes

* add changelog
2022-02-25 13:02:26 -06:00
Chenyu Li
8ee86a61a0 rewrite graph selection (#4783)
* rewrite graph selection
2022-02-25 12:09:11 -05:00
Gerda Shank
0dda0a90cf Fix errors on Windows tests in new tests/functional (#4767)
* [#4781] Convert reads and writes in project fixture to text/utf-8 encoding

* Switch to using write_file and read_file functions

* Add comment
2022-02-25 11:13:15 -05:00
Gerda Shank
220d8b888c Fix "dbt found two resources" error with multiple snapshot blocks in one file (#4773)
* Fix handling of multiple snapshot blocks in partial parsing

* Update tests for partial parsing snapshots
2022-02-25 10:54:07 -05:00
dependabot[bot]
42d5812577 Bump black from 21.12b0 to 22.1.0 (#4718)
Bumps [black](https://github.com/psf/black) from 21.12b0 to 22.1.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/commits/22.1.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-24 13:28:23 -05:00
Ian Knox
dea4f5f8ff update flake8 to remove line length req (#4779) 2022-02-24 11:22:25 -06:00
Dmytro Kazanzhy
8f50eee330 Fixed misspellings, typos, and duplicated words (#4545) 2022-02-22 18:05:43 -05:00
Gerda Shank
8fd8dfcf74 Initial pass at switching integration tests to pytest (#4691)
Author: Emily Rockman <emily.rockman@dbtlabs.com>
    route logs to dbt-core/logs instead of each test folder (#4711)

 * Initial pass at switching integration tests to pytest

* Reorganize dbt.tests.tables. Cleanup adapter handling

* Move run_sql to TestProjInfo and TableComparison.
Add comments, cleanup adapter schema setup

* Tweak unique_schema name generation

* Update CHANGELOG.md
2022-02-22 15:34:14 -05:00
Hein Bekker
10b27b9633 Deduplicate postgres relations (#3058) (#4521)
* Deduplicate postgres relations (#3058)

* Add changelog entry for #3058, #4521
2022-02-21 16:48:15 -06:00
Gerda Shank
5808ee6dd7 Fix bug accessing target in deps and clean commands (#4758)
* Create DictDefaultNone for to_target_dict in deps and clean commands

* Update test case to handle

* update CHANGELOG.md

* Switch to DictDefaultEmptyStr for to_target_dict
2022-02-21 13:26:29 -05:00
Jeremy Cohen
a66fe7f467 Pin MarkupSafe==2.0.1 (#4746) 2022-02-18 14:35:27 +01:00
Gerda Shank
18fef38702 Ensure meta is both at node top level and in node.config. Fix snapshots with schema config. (#4726)
* Do not overwrite node.meta with empty patch.meta

* Restore config_call_dict in snapshot node transform

* Test for snapshot with schema file config

* Test for meta in both toplevel node and node config
2022-02-17 12:15:11 -05:00
Ian Knox
3ad61d5d81 ignore markdown files for trim-trailing-whitespace hook (#4727) 2022-02-16 10:25:52 -06:00
Emily Rockman
bb1f5b43be Initial pass at deps README (#4686)
* Initial pass at README

* Finished the sentence

* fixed typo and added changelog
2022-02-15 13:58:22 -06:00
Michiel De Smet
a642b20abc Allow override of Column string and numeric type by classes inheritin… (#4604)
* Allow override of Column string and numeric type by classes inheriting from the Column class

* updating based on new black formatter

Co-authored-by: Matthew McKnight <91097623+McKnight-42@users.noreply.github.com>
Co-authored-by: Matthew McKnight <matthew.mcknight@dbtlabs.com>
2022-02-14 15:22:43 -06:00
Ian Knox
c112050455 Pre commit Hooks (black, flake8, mypy, etc) [CT-105] (#4639)
Added pre-commit and configured hooks (black, flake8, mypy, white space formatters)
Removed code checks from tox
updated CI
2022-02-11 12:57:16 -06:00
Ian Knox
43e3fc22c4 Reformat core [CT-104 CT-105] (#4697)
Reformatting dbt-core via black, flake8, mypy, and assorted pre-commit hooks.
2022-02-11 12:17:31 -06:00
elizabeth martens
41c6177ae2 Add --quiet flag and print Jinja function (#4701)
* Add `--quiet` flag

* Add print() macro

* Update tests for --quiet and print()

* Updating changelog

* Apply suggestions from PR review
2022-02-10 13:24:42 -06:00
Tristan Willy
72ecd1ce74 task init: support older click v7.0 (#4681)
* task init: support older click v7.0

`dbt init` uses click for interactively setting up a project. The
version constraints currently ask for click >= 8 but v7.0 has nearly the
same prompt/confirm/echo API. prompt added a feature that isn't used.
confirm has a behavior change if the default is None, but
confirm(..., default=None) is not used. Long story short, we can relax
the version constraint to allow installing with an older click library.

Ref: Issue #4566

* Update CHANGELOG.md

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
2022-02-07 14:14:22 -05:00
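
For context on the entry above: a minimal sketch of the click calls the rationale mentions (prompt, confirm, echo), which behave the same on click 7.0 and 8.x; this assumes click is installed and is an illustration, not dbt's init task.

```python
# Minimal sketch of the click surface dbt init relies on; these prompt /
# confirm / echo calls behave the same on click 7.0 and 8.x, which is why
# the lower bound can be relaxed. (Illustrative, not dbt-core's init task.)
import click


def ask_for_project_details() -> dict:
    name = click.prompt("Enter a name for your project")          # same API in 7.0 and 8.x
    use_defaults = click.confirm("Use the default settings?", default=True)
    click.echo(f"Creating project {name!r} (defaults={use_defaults})")
    return {"name": name, "use_defaults": use_defaults}


if __name__ == "__main__":
    ask_for_project_details()
```
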
Nathaniel May
2d0b975b6c fix test to use a secret username (#4682) 2022-02-04 14:57:03 -05:00
Rachel
8a0bc39a66 Set flags from args in lib module for dbt-server (#4623) 2022-02-04 10:14:41 -05:00
nkyuray
f3c7b6bfd1 adapter compatibility messaging added. (#4565)
* adapter compatibility messaging added.

* edited plugin version compatibility message

* edited test version for plugin compatibility

* compare using only major and minor

* Add checking PYPI and update changelog

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
Co-authored-by: ChenyuLi <chenyu.li@dbtlabs.com>
2022-02-03 17:27:31 -05:00
Nathaniel May
0391e4e53a add changelog entry for #4665 (#4673) 2022-02-03 15:48:05 -05:00
Gerda Shank
3ad3c21886 [#2479] Allow unique_id to take a list (#4618)
* Add unique_key to NodeConfig

`unique_key` can be a string or a list.

* merge.sql update to work with unique_key as list

extend the functionality to support both single and multiple keys

Signed-off-by: triedandtested-dev (Bryan Dunkley) <bryan@triedandtested.dev>

* Updated test to include unique_key

Signed-off-by: triedandtested-dev (Bryan Dunkley) <bryan@triedandtested.dev>

* updated tests

Signed-off-by: triedandtested-dev (Bryan Dunkley) <bryan@triedandtested.dev>

* Fix unit and integration tests

* Update Changelog for 2479/4618

Co-authored-by: triedandtested-dev (Bryan Dunkley) <bryan@triedandtested.dev>
2022-02-03 12:55:06 -05:00
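
For context on the entry above: a rough Python sketch of accepting `unique_key` as either a single column or a list and building the match predicate a merge/update would need. The alias names follow the ones dbt's merge macros use, but the function itself is illustrative, not the Jinja macro.

```python
# Rough Python sketch of the idea: accept unique_key as either a single
# column name or a list of columns and build the matching predicate an
# incremental merge/update would use. (Illustrative, not the Jinja macro.)
from typing import List, Union


def unique_key_predicate(unique_key: Union[str, List[str]],
                         target: str = "DBT_INTERNAL_DEST",
                         source: str = "DBT_INTERNAL_SOURCE") -> str:
    # Accept a single column name or a list of columns.
    keys = [unique_key] if isinstance(unique_key, str) else list(unique_key)
    return " and ".join(f"{target}.{key} = {source}.{key}" for key in keys)


print(unique_key_predicate("id"))
print(unique_key_predicate(["customer_id", "order_date"]))
```
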
Nathaniel May
6e0ed751e1 Avoid saving secrets in SecretContext (#4665) 2022-02-03 12:54:45 -05:00
Gerda Shank
c43c79a995 Initial file creation of code documentation READMEs (#4654) 2022-02-02 18:29:47 -05:00
Ian Knox
d6cc8b3042 Docker release CT-3 (#4616)
* new docker setup

* formatting

* Updated spark: support for extras

* Added third-party adapter support

* More selective lib installs for spark

* added docker to bumpversion

* Updated refs to be tag-based because bumpversion doesn't understand 'latest'

* Updated docs per PR feedback

* reducing RUNs and formatting/pip best practices changes

* Added multi-architecture support and small test script, updated docs

* typo

* Added a few more tests

* fixed tests output, clarified dbt-postgres special case-ness

* Fix merge conflicts

* formatting

* Updated spark: support for extras

* Added third-party adapter support

* More selective lib installs for spark

* added docker to bumpversion

* Updated refs to be tag-based because bumpversion doesn't understand 'latest'

* Updated docs per PR feedback

* reducing RUNs and formatting/pip best practices changes

* Added multi-architecture support and small test script, updated docs

* typo

* Added a few more tests

* fixed tests output, clarified dbt-postgres special case-ness

* changelog

* basic framework

* PR ready excepts docs

* PR feedback
2022-02-01 16:49:33 -06:00
Chenyu Li
2f4a6e33ec fix changelog for a community pr (#4659) 2022-02-01 13:50:58 -05:00
Alec Wang
b9867e89cb added semver official regex pattern (#4644)
* added semver official regex pattern

* removed extra character

* added comma

* added contribution

* fixed contribution

* Update CHANGELOG.md

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2022-02-01 08:19:37 -05:00
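
For context on the entry above: the "official" pattern referred to is the one published at https://semver.org; below is a small usage sketch (how it is wired into dbt-core's semver module is not shown and the surrounding code is illustrative).

```python
# Usage sketch of the official semver pattern published at https://semver.org;
# the surrounding code is illustrative, not dbt-core's semver module.
import re

SEMVER_PATTERN = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?"
    r"(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
)

for version in ("1.0.0", "1.0.0-rc.2+build.5", "not-a-version"):
    print(version, bool(SEMVER_PATTERN.match(version)))
```
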
Nathaniel May
13b18654f0 Guard against unnecessarily calling dump_graph in logging (#4619)
* add lazy type and apply to cache events
2022-01-31 14:14:34 -05:00
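
For context on the entry above: a hypothetical sketch of the "lazy" idea — wrap the expensive dump_graph() call so it is only computed if the cache event is actually emitted (e.g. when --log-cache-events is on). Class and function names are illustrative, not dbt-core's types.

```python
# Hypothetical sketch, not dbt-core's Lazy type: defer the expensive graph
# dump until the log line that needs it is actually emitted.
from typing import Callable, Generic, TypeVar

T = TypeVar("T")


class Lazy(Generic[T]):
    def __init__(self, compute: Callable[[], T]) -> None:
        self._compute = compute

    def force(self) -> T:
        return self._compute()


def dump_graph() -> dict:
    print("expensive graph dump computed")
    return {"relations": []}


def log_cache_event(dump: Lazy, log_cache_events: bool) -> None:
    # Only pay for the dump when the cache event will actually be logged.
    if log_cache_events:
        print(f"cache event: {dump.force()}")


log_cache_event(Lazy(dump_graph), log_cache_events=False)  # dump never computed
```
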
Jeremy Cohen
aafa1c7f47 Change InvalidRefInTestNode level to DEBUG (#4647)
* Debug-level test depends on disabled

* Add PR link to Changelog
2022-01-31 18:28:43 +01:00
Jeremy Cohen
638e3ad299 Drop support for Python <3.7.2 (#4643)
* Drop support for 3.7.1 + 3.7.2

* Rm root level setup.py

* Rm 'dbt' pkg from build-dist script

* Fixup changelog
2022-01-31 17:31:20 +01:00
Emily Rockman
d9cfeb1ea3 Retry after failure to download or failure to open files (#4609)
* add retry logic, tests when extracting tarfile fails

* fixed bug with not catching empty responses

* specify compression type

* WIP test

* more testing work

* fixed up unit test

* add changelog

* Add more comments!

* clarify why we do the json() check for None
2022-01-31 10:26:51 -06:00
Chenyu Li
e6786a2bc3 fix comparison for new model/body (#4631)
* fix comparison for new model/body
2022-01-31 10:33:35 -05:00
leahwicz
13571435a3 Initial addition of CODEOWNERS file (#4620)
* Initial addition of CODEOWNERS file

* Proposed sub-team ownership (#4632)

* Updating for the events module to be both language and execution

* Adding more comment details

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2022-01-27 16:23:55 -05:00
Gerda Shank
efb890db2d [#4504] Use mashumaro for serializing logging events (#4505) 2022-01-27 14:43:26 -05:00
Niall Woodward
f3735187a6 Run check_if_can_write_profile before create_profile_using_project_profile_template [CT-67] [Backport 1.0.latest] (#4447)
* Run check_if_can_write_profile before create_profile_using_project_profile_template

* Changelog

Co-authored-by: Ian Knox <81931810+iknox-fa@users.noreply.github.com>
2022-01-27 11:17:28 -06:00
Gerda Shank
3032594b26 [#4554] Don't require a profile for dbt deps and clean commands (#4610) 2022-01-25 12:26:44 -05:00
Joel Labes
1df7a029b4 Clarify "incompatible package version" error msg (#4587)
* Clarify "incompatible package version" error msg

* Clarify error message when they shouldn't fall fwd
2022-01-24 18:33:45 -05:00
leahwicz
f467fba151 Changing Jira mirroring workflows to point to shared Actions (#4615) 2022-01-24 12:20:12 -05:00
Amir Kadivar
8791313ec5 Validate project names in interactive dbt init (#4536)
* Validate project names in interactive dbt init

- workflow: ask the user to provide a valid project name until they do.
- new integration tests
- supported scenarios:
  - dbt init
  - dbt init -s
  - dbt init [name]
  - dbt init [name] -s

* Update Changelog.md

* Add full URLs to CHANGELOG.md

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
2022-01-21 18:24:26 -05:00
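
For context on the entry above: a minimal sketch of the prompt-until-valid loop described in the first bullet, assuming click is available; the naming rule and messages are illustrative, not dbt-core's exact checks.

```python
# A minimal sketch of the "prompt until valid" flow described above; the
# naming rule and messages are illustrative, not dbt-core's exact checks.
import re

import click

PROJECT_NAME_PATTERN = re.compile(r"^[a-z_][a-z0-9_]*$")  # illustrative rule


def ask_for_project_name() -> str:
    while True:
        name = click.prompt("Enter a name for your project (letters, digits, underscore)")
        if PROJECT_NAME_PATTERN.match(name):
            return name
        click.echo(f"{name!r} is not a valid project name, please try again")


if __name__ == "__main__":
    print(ask_for_project_name())
```
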
leahwicz
7798f932a0 Add Backport Action (#4605) 2022-01-21 12:40:55 -05:00
Nathaniel May
a588607ec6 drop support for Python 3.7.0 and 3.7.1 (#4585) 2022-01-19 12:24:37 -05:00
Joel Labes
348764d99d Rename data directory to seeds (#4589)
* Rename data directory to seeds

* Update CHANGELOG.md
2022-01-19 10:04:35 -06:00
Gerda Shank
5aeb088a73 [#3988] Fix test deprecation warnings (#4556) 2022-01-12 17:03:11 -05:00
leahwicz
e943b9fc84 Mirror labels to Jira (#4550)
* Adding Jira label mirroring

* Fixing bad step name
2022-01-05 09:29:52 -05:00
leahwicz
892426eecb Mirroring issues to Jira (#4548)
* Adding issue creation Jira Action

* Adding issue closing Jira Action

* Add labeling logic
2022-01-04 17:00:03 -05:00
Emily Rockman
1d25b2b046 test name standardization (#4509)
* rename tests for standardization

* more renaming

* rename tests to remove duplicate numbers

* removed unused file

* removed unused files in 016

* removed unused files in 017

* fixed schema number mismatch 027

* fixed to be actual directory name 025

* remove unused dir 029

* remove unused files 039

* remove unused files 053

* updated changelog
2022-01-04 11:36:47 -06:00
github-actions[bot]
da70840be8 Bumping version to 1.0.1 (#4543)
* Bumping version to 1.0.1

* Update CHANGELOG.md

* Update CHANGELOG.md

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2022-01-03 13:04:50 -05:00
leahwicz
7632782ecd Removing Docker from bumpversion script (#4542) 2022-01-03 12:48:03 -05:00
Nathaniel May
6fae647097 copy over windows compat logic for colored log output (#4474) 2022-01-03 12:37:36 -05:00
leahwicz
fc8b8c11d5 Commenting our Docker portion of Version Bump (#4541) 2022-01-03 12:37:20 -05:00
Topherhindman
26a7922a34 Fix small typo in architecture doc (#4533) 2022-01-03 12:00:04 +01:00
Emily Rockman
c18b4f1f1a removed unused code in unit tests (#4496)
* removed unused code

* add changelog

* moved changelog entry
2021-12-23 08:26:22 -06:00
Nathaniel May
fa31a67499 Add Structured Logging ADR (#4308) 2021-12-22 10:26:14 -05:00
Ian Knox
742cd990ee New Dockerfile (#4487)
New Dockerfile supporting individual db adapters and architectures
2021-12-22 08:29:21 -06:00
Gerda Shank
8463af35c3 [#4523] Fix error with env_var in hook (#4524) 2021-12-20 14:19:05 -05:00
github-actions[bot]
b34a4ab493 Bumping version to 1.0.1rc1 (#4517)
* Bumping version to 1.0.1rc1

* Update CHANGELOG.md

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-12-19 15:33:38 -05:00
Jeremy Cohen
417ccdc3b4 Fix bool coercion to 0/1 (#4512)
* Fix bool coercion

* Fix unit test
2021-12-19 10:30:25 -05:00
Emily Rockman
7c46b784ef scrub message of secrets (#4507)
* scrub message of secrets

* update changelog

* use new scrubbing and scrub more places using git

* fixed small miss of string conv and missing raise

* fix bug with cloning error

* resolving message issues

* better, more specific scrubbing
2021-12-17 16:05:57 -06:00
Gerda Shank
067b861d30 Improve checking of schema version for pre-1.0.0 manifests (#4497)
* [#4470] Improve checking of schema version for pre-1.0.0 manifests

* Check exception code instead of message in test
2021-12-16 13:30:52 -05:00
Emily Rockman
9f6ed3cec3 update log message to use adapter name (#4501)
* update log message to use adapter name

* add changelog
2021-12-16 11:46:28 -06:00
Nathaniel May
43edc887f9 Simplify Log Destinations (#4483) 2021-12-16 11:40:05 -05:00
Emily Rockman
6d4c64a436 compile new index file for docs (#4484)
* compile new index file for docs

* Add changelog

* move changelog entries for docs changes
2021-12-16 10:09:02 -06:00
Gerda Shank
0ed14fa236 [#4464] Check specifically for generic node type for some partial parsing actions (#4465)
* [#4464] Check specifically for generic node type for some partial parsing actions

* Add check for existence of macro file in saved_files

* Check for existence of patch file in saved_files
2021-12-14 16:28:40 -05:00
Emily Rockman
51f2daf4b0 updated DepsStartPackageInstall event to use package name (#4482)
* updated event to use package name

* add changelog
2021-12-14 14:25:29 -06:00
Matthew McKnight
76f7bf9900 made change to test of str (#4463)
* made change to test of str

* changelog update
2021-12-13 11:55:19 -06:00
Matthew McKnight
3fc715f066 updating contributing.md based on suggestions from updates to adapter… (#4356)
* updating contributing.md based on suggestions from updates to adapter contributing files.

* removed section refering to non-postgres databases for core contributing.md

* making suggested changes to contributing.md based on kyle's initial lookover

* Update CONTRIBUTING.md

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>

Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
2021-12-10 13:14:49 -06:00
Rebekka Moyson
b6811da84f Fix dbt docs overview to working url (#4442)
* Fix to working url

* add fix to changelog
2021-12-08 10:30:41 -06:00
Nathaniel May
1dffccd9da point latest version check to dbt-core package (#4434) 2021-12-03 16:13:38 -05:00
github-actions[bot]
9ed9936c84 Bumping version to 1.0.0 (#4431)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2021-12-03 13:27:46 -05:00
Jeremy Cohen
e75ae8c754 Changelog entries for rc3 -> final (#4389)
* Changelog entries for rc3 -> final

* More updates

* Final entry

* Last fix, and the date

* These few, these happy few
2021-12-03 19:16:46 +01:00
Nathaniel May
b68535b8cb relax version specifier for dbt-extractor (#4427) 2021-12-03 12:56:03 -05:00
Nathaniel May
5310498647 add new interop tests for black-box json log schema testing (#4327) 2021-12-03 12:51:28 -05:00
Ian Knox
22b1a09aa2 stringify generic exceptions (#4424) 2021-12-03 10:32:22 -06:00
Jeremy Cohen
6855fe06a7 Info vs debug text formatting (#4418) 2021-12-03 14:36:42 +01:00
Jeremy Cohen
affd8619c2 Sources aren't materialized (#4417) 2021-12-03 14:36:35 +01:00
Jeremy Cohen
b67d5f396b Add flag to main.py. Reinstantiate after flags (#4416) 2021-12-03 14:36:25 +01:00
Emily Rockman
b3039fdc76 add node type codes to more events + more hook log data (#4378)
* add node type codes to more events + more hook log

* minor fixes

* renames started/finished keys

* made process more clear

* fixed errors

* Put back report_node_data in freshness.py

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
2021-12-02 19:25:57 -05:00
Nathaniel May
9bdf5fe74a use reference keys instead of relations (#4410) 2021-12-02 18:35:51 -05:00
Emily Rockman
c675c2d318 Logging README (#4395)
* WIP

* more README cleanup

* readme tweaks

* small tweaks

* wording updates
2021-12-02 17:04:23 -06:00
Ian Knox
2cd1f7d98e user configurable event buffer size (#4411) 2021-12-02 16:47:31 -06:00
Jeremy Cohen
ce9ac8ea10 Rollover + backup for dbt.log (#4405) 2021-12-02 22:10:11 +01:00
Jeremy Cohen
b90ab74975 A few final logging touch-ups (#4388)
* Rm unused events, per #4104

* More structured ConcurrencyLine

* Replace \n prefixes with EmptyLine

* Reimplement ui.warning_tag to centralize logic

* Use warning_tag for deprecations too

* Rm more unused event types

* Exclude EmptyLine from json logs

* loglines are not always created by events (#4406)

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-12-02 22:09:46 +01:00
Emily Rockman
6d3c3f1995 update file name (#4402) 2021-12-02 15:04:29 -06:00
Nathaniel May
74fbaa18cd change json override strategy (#4396) 2021-12-02 15:04:52 -05:00
Emily Rockman
fc7c073691 allow log_format to be set in profile configs (#4394) 2021-12-02 13:51:45 -06:00
leahwicz
29f504e201 Fix release process (#4385) 2021-12-02 11:18:49 -05:00
Nathaniel May
eeb490ed15 use rfc3339 format for log time stamps (#4384) 2021-12-02 09:44:10 -05:00
Gerda Shank
c220b1e42c [#4354] Different output for console and file logs (#4379)
* [#4354] Different output for console and file logs

* Tweak some log formats

* Change loging of thread names
2021-12-02 08:23:25 -05:00
Jeremy Cohen
d973ae9ec6 Tiny touchups for deps, clean (#4366)
* Use actual profile name for log msg

* Raise clean dep warning iff configured path missing
2021-12-02 12:12:49 +01:00
Ian Knox
f461683df5 Add windows OS error supressing for temp dir cleanups (#4380) 2021-12-01 17:25:56 -06:00
Nathaniel May
41ed976941 move event code up a level (#4381)
move event code up a level plus minor fixes
2021-12-01 17:30:19 -05:00
Gerda Shank
e93ad5f118 Make the stdout logger actually go to stdout (#4368) 2021-11-30 17:48:23 -05:00
Emily Rockman
d75ed964f8 only log events in cache.py when flag is set (#4369)
flag is --log-cache-events
2021-11-30 15:17:08 -06:00
Nathaniel May
284ac9b138 better dataclass field handling (#4361)
fix serializing dataclass fields so they show up at all
2021-11-30 13:34:57 -05:00
github-actions[bot]
7448ec5adb Bumping version to 1.0.0rc3 (#4363)
* Bumping version to 1.0.0rc3

* Updating Changelog for release

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-11-30 09:35:03 -05:00
Emily Rockman
caa6269bc7 add node_info to relevant logs (#4336)
* WIP

* fixed some merge issues

* WIP

* first pass with node_status logging

* add node details to one more

* another pass at node info

* fixed failures

* convert to classes

* more tweaks to basic implementation

* added in status, organized a bit

* saving broken state

* working state with lots of todos

* formatting

* add start/end timestamps

* adding node_status logging to more events

* adding node_status to more events

* Add RunningStatus and set in node

* Add NodeCompiling and NodeExecuting events, switch to _event_status dict

* add _event_status to SourceDefinition

* small tweaks to NodeInfo

* fixed misnamed attr

* small fix to validation

* rename logging timestamps to minimize name collision

* fixed flake failure

* move str formatting to events

* incorporate serialization changes

* add node_status to event_to_serializable_dict

* convert nodeInfo to dict with dataclass builtin

* Try to fix failing unit, flake8, mypy tests (#4362)

* fixed leftover merge conflict

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
2021-11-30 09:34:28 -05:00
Gerda Shank
31691c3b88 Events with graph_func include actual output of graph_func (#4360) 2021-11-29 20:20:22 -05:00
Ian Knox
3a904a811f Event buffer for structlog (#4358)
Add Internal event buffer

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-11-29 20:12:20 -05:00
Nathaniel May
b927a31a53 make json serialization overridable for events (#4326)
* simplify scrubbing

* add overridable serialize method to events

* add imperfect test for json serialization of events

Co-authored-by: Ian Knox <ian.knox@fishtownanalytics.com>
Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
2021-11-29 18:19:34 -05:00
Kyle Wigley
d8dd75320c set invocation id when generating pseudo config (#4359) 2021-11-29 17:29:12 -05:00
Nathaniel May
a613556246 add thread_name to json output (#4353) 2021-11-29 14:01:50 -05:00
Jeremy Cohen
8d2351d541 Logging: restore previous (small) behaviors (#4341)
* Log formatting from flags earlier

* WARN-level stdout for list task

* Readd tracking events to File

* PR feedback, annotate hacks

* Revert "PR feedback, annotate hacks"

This reverts commit 5508fa230b26f51c01ce0b9789f12111620ebd92.

* This is maybe better

* Annotate main.py

* One more comment in base.py

* Update changelog
2021-11-29 19:05:39 +01:00
leahwicz
f72b603196 Adding release workflow (#4288) 2021-11-29 10:37:14 -05:00
Gerda Shank
4eb17b57fb Provide function to set the invocation_id (#4351) 2021-11-29 10:15:19 -05:00
Cor
85a4b87267 Use cls in classmethod (#4345)
Instead of calling the class explicitly, use the `cls` variable instead.
2021-11-29 09:57:52 -05:00
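
A small illustration of the change described above: inside a classmethod, construct instances via `cls` rather than naming the class, so subclasses calling the method get instances of their own type. The classes here are illustrative, not dbt-core's.

```python
# Illustrative classes only: use the implicit `cls` in a classmethod so that
# subclasses get the right type back from factory-style constructors.
class Relation:
    def __init__(self, name: str) -> None:
        self.name = name

    @classmethod
    def create(cls, name: str) -> "Relation":
        # return Relation(name)   # before: hard-codes the base class
        return cls(name)           # after: respects the subclass calling create()


class PostgresRelation(Relation):
    pass


print(type(PostgresRelation.create("my_table")).__name__)  # PostgresRelation
```
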
jan zens
0d320c58da fix typo in UnparsedSourceDefinition.__post_serialize_ (#4349)
* fix typo in UnparsedSourceDefinition.__post_serialize_

fix typo in UnparsedSourceDefinition.__post_serialize_

* update CHANGELOG.md

update CHANGELOG.md

add #4349

* Update changelog

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-29 11:36:11 +01:00
Emilie Lima Schario
ed1ff2caac Adjust logic when finding approx matches for model or test matching (#4076)
* adjust logic when finding approx matches

* update changelog

* Update core/dbt/adapters/base/relation.py

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

* Update changelog

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-29 11:20:01 +01:00
sarah-weatherbee
d80646c258 adds additional augmented assignment statements (#4315) (#4331)
* adds additional augmented assignment statements (#4315)

* Per PR comments, revised CHANGELOG.md to note change and contributor info
2021-11-27 09:04:40 -06:00
Matthew McKnight
a9b4316346 Mc knight 42/test event codes (#4338)
* pushing up to get eye on from Nate

* updating to compare

* latest push

* finished test for duplicate codes with a lot of help from Nate

* resolving suggestions

* removed duplicated code in types.py, made minor changes to test_events.py

* added missing func call
2021-11-24 16:03:43 -06:00
Gerda Shank
36776b96e7 [#4337] Always create an invocation_id, even when not tracking (#4340) 2021-11-24 16:54:17 -05:00
Jeremy Cohen
7f2d3cd24f Fix static parser tracking logic (#4332)
* Fix static parser tracking logic

* Add changelog note
2021-11-24 17:26:56 +01:00
Gerda Shank
d046ae0606 [#4253] Support partial parsing of env_vars in metrics definitions (#4322) 2021-11-23 15:02:47 -05:00
Gerda Shank
e8c267275e [#4254] Change some CompilationExceptions to ParsingException in the parser (#4328) 2021-11-23 13:50:00 -05:00
github-actions[bot]
a4951749a8 Bumping version to 1.0.0rc2 (#4321)
* Bumping version to 1.0.0rc2

* Update changelog

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-22 21:26:15 +01:00
Ian Knox
e1a2e8d9f5 Add codes to all log events (re-work of PR #4268) (#4319)
* re-work of old branch
2021-11-22 13:14:33 -06:00
Emily Rockman
f80c78e3a5 add logic to scrub more than str types (#4317) 2021-11-22 12:58:10 -06:00
Emily Rockman
c541eca592 structured logging: add data attributes to json log output (#4301)
* simplified data construction

* fixed missed scrubbing of secrets

* switched to vars()

* scrub entire log line, update how attributes get pulled

* get ahead of serialization errors

* store if data is serialized and modify values instead of a copy of values

* fixed unused import from merge
2021-11-19 15:43:26 -06:00
Nathaniel May
726aba0586 version logging (#4289)
* start adding version logging, noticed some wrong stuff

* fix bad pid and ts

* remove level format on json logs

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2021-11-19 14:53:50 -06:00
Jeremy Cohen
d300addee1 SecretContext for secret env vars, profiles + packages only (#4311)
* SecretContext for secret env vars

* Cleanup exception. Add + edit tests

* Add changelog entry
2021-11-19 19:36:19 +01:00
Kyle Wigley
d5d16f01f4 Fix flags import (#4307) 2021-11-18 14:59:49 -05:00
Kyle Wigley
2cb26e2699 Add supported dbt tasks (#4200) 2021-11-18 14:05:00 -05:00
Nathaniel May
b4793b4f9b Fix adapter failures due to string formatting issues (#4305)
fix adapter failures due to string formatting issues
2021-11-18 12:54:20 -05:00
Gerda Shank
045e70ccf1 [#4298] Fix 'created_at' in ParsedMetric to allow recalculating metrics depends_on refs (#4299) 2021-11-18 09:29:09 -05:00
Jeremy Cohen
ba23395c8e Fix metrics count in compile stats (#4292)
* Fix metrics count in compile stats

* Add changelog entry
2021-11-18 09:28:13 +01:00
Joel Labes
0aacd99168 Get prerelease packages when specifically requested (#4295)
* Get prerelease packages when specifically required. Add test validating it works

* Update CHANGELOG.md
2021-11-18 09:11:49 +01:00
Nathaniel May
e4b5d73dc4 adjust level length for text only (#4303)
adjust level length for text log lines only
2021-11-17 17:32:15 -05:00
Gerda Shank
bd950f687a [#4252] Serialization error when missing quotes in metrics model ref() call (#4287) 2021-11-17 17:14:32 -05:00
Gerda Shank
aea23a488b [#4272] Move validator keyword argument in jinja 'config.get' to after 'default' (#4297) 2021-11-17 17:12:26 -05:00
Jeremy Cohen
22731df07b Fix: default log formatting (#4302)
* Respect log formatting

* PR feedback
2021-11-17 21:10:14 +01:00
Jeremy Cohen
c55be164e6 Separate warnings. Fix duplication (#4291) 2021-11-17 18:01:28 +01:00
kadero
9d73304c1a Allow snapshot defer (#4296)
* Allow snapshot defer

* Update changelog
2021-11-17 16:56:37 +01:00
Nathaniel May
719b2026ab Minor Cleanup of Structured Logging Module (#4266)
* cleanup structured logging module

* update adapter logger to preserve new-style logging formatting
2021-11-16 20:22:11 -05:00
kadero
22416980d1 Avoid errors when missing column in yaml doc (#4285)
* Update postgres__alter_column_comment

* Update changelog

* Add integration test
2021-11-16 13:22:18 +01:00
Gerda Shank
3d28b6704c [#3689] Fix filesystem searcher and tests that mock it (#4271) 2021-11-15 09:46:17 -05:00
Mila Page
5d1b104e1f Feature/3997 profiles test selection flag (#4270)
* Address 3997. Test selection flag can be in profile.yml.

* Per Jerco's 4104 PR unresolved comments, unify i.s. predicate and add env var.

* Couple of flake8 touchups.

* Classier error handling using enum semantics.

* Cherry-pick in part of Gerda's commit to hopefully avoid a future merge conflict.

* Add 3997 to changelog.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2021-11-15 14:07:22 +01:00
Jeremy Cohen
4a8a68049d Try removing dupe logging during integration tests (#4275) 2021-11-15 11:00:29 +01:00
Jeremy Cohen
4b7fd1d46a Update dbt-postgres readme (#4263)
* Update dbt-postgres readme

* Rm redshift references

* Update plugins/postgres/README.md

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

* Update plugins/postgres/README.md

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-11-12 17:12:00 +01:00
github-actions[bot]
0722922c03 Bumping version to 1.0.0rc1 (#4234)
* Bumping version to 1.0.0rc1

* Update changelog

* Add Dockerfile to bumpversion, update reqs

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-10 14:24:40 +01:00
kadero
40321d7966 Dbt init with provided project name (#4249)
* Dbt init with provided project name

* Update changelog.md

* Fix changelog.md

* Fix typo in help

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-10 11:58:49 +01:00
Nathaniel May
434f3d2678 Merge pull request #4055 from dbt-labs/feature/structured-logging
Add Structured Logging
2021-11-09 17:42:19 -05:00
Jeremy Cohen
6dd9c2c5ba Env var shim to enable legacy logger (#4255)
* Env var shim to reenable logbook

* Rename to ENABLE_LEGACY_LOGGER
2021-11-09 23:04:47 +01:00
Nathaniel May
5e6be1660e configure event logger for integration tests (#4257)
* apply test fixes

* remove presto test
2021-11-09 16:13:13 -05:00
Nathaniel May
31acb95d7a rebased on main and added new partial parsing event 2021-11-09 11:40:18 -05:00
Nathaniel May
683190b711 fixes 2021-11-09 11:26:01 -05:00
Nathaniel May
ebb84c404f postgres adapter to use new logger 2021-11-09 11:26:01 -05:00
Nathaniel May
2ca6ce688b whitespace change 2021-11-09 11:26:01 -05:00
Nathaniel May
a40550b89d std logger for structured logging (#4231)
structured logging powered by the stdlib logger

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
Co-authored-by: Ian Knox <81931810+iknox-fa@users.noreply.github.com>
2021-11-09 11:26:01 -05:00
Ian Knox
b2aea11cdb Struct log for adapter call sites (#4189)
graph call sites for structured logging

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2021-11-09 11:26:01 -05:00
Emily Rockman
43b39fd1aa removed redundant timestamp (#4239) 2021-11-09 11:26:01 -05:00
Emily Rockman
5cc8626e96 updates associated with merging main
- removed 3 new log call sites and replaced with structured logs
- removed 2 unused struc logs
2021-11-09 11:26:01 -05:00
Nathaniel May
f95e9efbc0 use event types in main even before the logger is set up. (#4219) 2021-11-09 11:26:01 -05:00
Nathaniel May
25c974af8c lazy logging in event module (#4210)
* switches on debug level to guard against expensive messages

* adds memoization to msg construction
2021-11-09 11:26:01 -05:00
Emily Rockman
b5c6f09a9e remove unused import (#4217) 2021-11-09 11:26:01 -05:00
Emily Rockman
bd3e623240 test/integration call sites (#4209)
* added struct logging to base

* fixed merge weirdness

* convert to use single type for integration tests

* converted to 3 reusable test types in sep module

* tweak message

* clean up and making test_types complete for future

* fix missed import
2021-11-09 11:26:01 -05:00
Emily Rockman
63343653a9 trivial logger removal (#4216) 2021-11-09 11:26:01 -05:00
Emily Rockman
d8b97c1077 call sites in core/dbt (excluding main.py) (#4202)
* add struct logging to compilation

* add struct logging to tracking

* add struct logging to utils

* add struct logging to exceptions

* fixed some misc errors

* updated to send raw ex, removed resulting circ dep
2021-11-09 11:26:01 -05:00
Emily Rockman
e0b0edaeed deps call sites (#4199)
* add struct logging to base

* add struct logging to git

* add struct logging to deps

* remove blank line

* fixed stray merge error
2021-11-09 11:26:01 -05:00
Emily Rockman
3cafc9e13f task callsites: part 2 (#4188)
* add struct logging to docs serve

* remove merge fluff

* struct logging to seed command

* converting print to use structured logging

* more structured logging print conversion

* pulling apart formatting more

* added struct logging by dissecting printer.py

* add struct logging to runnable

* add struct logging to task init

* fixed formatting

* more formatting and moving things around
2021-11-09 11:26:01 -05:00
Nathaniel May
13f31aed90 scrub the secrets (#4203)
scrub secrets in event module
2021-11-09 11:26:01 -05:00
Nathaniel May
d513491046 Show Exception should trigger a stack trace (#4190) 2021-11-09 11:26:01 -05:00
Emily Rockman
281d2491a5 task call sites part 1 (#4183)
* add struct logging to base.py

* struct logging in run_operation

* add struct logging to base

* add struct logging to clean

* add struct logging to debug

* add struct logging to deps

* fix errors

* add struct logging to run.py

* fixed flake error

* add struct logging to generate

* added debug level stack trace

* fixed flake error

* added struct logging to compile

* added struct logging to freshness

* cleaned up errors

* resolved bug that broke everything

* removed accidental import

* fixed bug with unused args
2021-11-09 11:26:01 -05:00
Emily Rockman
9857e1dd83 parser call sites (#4177)
* convert generic_test to structured logging

* convert macros to structured logging

* add struc logging to most of manifest.py

* add struct logging to models.py

* added struct logging to partial.py

* finished conversion of manifest

* fixing errors

* fixed 1 todo and added another

* fixed bugs from merge
2021-11-09 11:26:01 -05:00
Emily Rockman
6b36b18029 config call sites (#4169)
* update config use structured logging

* WIP

* minor cleanup

* fixed merge error

* added in ShowException

* added todo to remove defaults after dropping 3.6

* removed todo that is obsolete
2021-11-09 11:26:01 -05:00
Ian Knox
d8868c5197 Dataclass compatibility (#4180)
* use __post_init__() instead of fake dataclass member vars
2021-11-09 11:26:01 -05:00
Emily Rockman
b141620125 contracts call sites (#4166)
* first pass adding structured logging
2021-11-09 11:26:01 -05:00
Emily Rockman
51d8440dd4 Change Graph logger call sites (#4165)
graph call sites for structured logging
2021-11-09 11:26:01 -05:00
Nathaniel May
5b2562a919 Client call sites (#4163)
update log call sites with new event system
2021-11-09 11:26:01 -05:00
Nathaniel May
44a9da621e Handle exec info (#4168)
handle exec info
2021-11-09 11:26:01 -05:00
Emily Rockman
69aa6bf964 context call sites (#4164)
* updated context dir to new structured logging
2021-11-09 11:26:01 -05:00
Nathaniel May
f9ef9da110 Initial structured logging work with fire_event (#4137)
add event type modeling and fire_event calls
2021-11-09 11:26:01 -05:00
Nathaniel May
57ae9180c2 init 2021-11-09 11:26:01 -05:00
Jeremy Cohen
efe926d20c Change user instead of pass (#4250) 2021-11-09 13:10:34 +01:00
Jeremy Cohen
1081b8e720 Improve error msg on pip install dbt (#4244) 2021-11-09 10:40:45 +01:00
Kyle Wigley
8205921c4b Update docs (#4241)
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-08 19:47:28 -05:00
Jeremy Cohen
da6c211611 Wrap get_batch_size() in return() (#4240) 2021-11-09 00:46:15 +01:00
Jeremy Cohen
354c1e0d4d Rm py36 tests, pkg metadata, bump reqs (#4223) 2021-11-09 00:19:09 +01:00
Gerda Shank
855419d698 [#4071] Add metrics feature (#4235)
* first cut at supporting metrics definitions

* teach dbt about metrics

* wip

* support partial parsing for metrics

* working on tests

* Fix some tests

* Add partial parsing metrics test

* Fix some more tests

* Update CHANGELOG.md

* Fix partial parsing yaml file to correct model syntax

Co-authored-by: Drew Banin <drew@fishtownanalytics.com>
2021-11-08 17:44:01 -05:00
Gerda Shank
e94fd61b24 Issue message instead of exception when patch does not have a matching node (#4236) 2021-11-08 15:35:14 -05:00
Kyle Wigley
4cf9b73c3d Raise parsing error instead of compilation when extracting test args (#4237) 2021-11-08 14:51:52 -05:00
Jeremy Cohen
8442fb66a5 Reorganize global project (macros) (#4154)
* Add integration tests

* Reorganize + dispatch more global macros

* Reorg materializations subdir

* Move around + document generic tests

* Fix failing tests

* Fix merge conflict

* Grab fix from #4148

* PR feedback

* Fixup

* Add load_relation back, it was nice

* Last few test fixes

* Rm incremental_upsert, now unused

* Add changelog entry
2021-11-08 19:09:54 +01:00
dependabot[bot]
f8cefa3eff Update agate requirement from <1.6.2,>=1.6 to >=1.6,<1.6.4 in /core (#3585)
Updates the requirements on [agate](https://github.com/wireservice/agate) to permit the latest version.
- [Release notes](https://github.com/wireservice/agate/releases)
- [Changelog](https://github.com/wireservice/agate/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/wireservice/agate/compare/1.6.0...1.6.3)

---
updated-dependencies:
- dependency-name: agate
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-08 18:41:04 +01:00
dependabot[bot]
d83e0fb8d8 Bump mashumaro from 2.5 to 2.9 in /core (#4193)
Bumps [mashumaro](https://github.com/Fatal1ty/mashumaro) from 2.5 to 2.9.
- [Release notes](https://github.com/Fatal1ty/mashumaro/releases)
- [Commits](https://github.com/Fatal1ty/mashumaro/compare/v2.5...v2.9)

---
updated-dependencies:
- dependency-name: mashumaro
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-08 18:40:25 +01:00
Gerda Shank
3e9da06365 [#3885] Skip partial parsing if project env vars change (#4212)
* [#3885] Skip partial parsing if project env vars change

* Support env_vars in the profile
2021-11-08 11:51:38 -05:00
Gerda Shank
bda70c988e [#3885] Partially parse when environment variables in schema files change (#4162)
* [#3885] Partially parse when environment variables in schema files
change

* Add documentation for test kwargs

* Add test and fix for schema configs with env_var
2021-11-08 11:28:43 -05:00
Rachel
229e897070 Clears adapters before registering to fix dbt-server caching behavior (#4218) 2021-11-08 10:33:39 -05:00
Benoit Perigaud
f20e83a32b Fix/dbt deps retry none answer (#4225)
* Fix issue #4178
Allow retries when the answer is None

* Include fix for #4178
Allow retries when the answer from dbt deps is None

* Add link to the PR

* Update exception and shorten line size

* Add test when dbt deps returns None
2021-11-08 12:30:38 +01:00
Jeremy Cohen
dd84f9a896 Raise error on pip install dbt (#4133)
* Raise error on pip install dbt

* Fix relative path logic

* Do not build dist for dbt

* Fix long descriptions

* Trigger code checks

* Using root readme more trouble than good

* only fail on install, not build

* Edit dist script. Avoid README duplication

* jk, be less clever

* Ignore 'dbt' source distribution when testing

* Add changelog entry

Co-authored-by: Kyle Wigley <kyle@dbtlabs.com>
2021-11-07 17:55:30 +01:00
Mila Page
e6df4266f6 Parser no longer takes greedy. Accepts indirect selection, a bool. (#4104)
* Parser no longer takes greedy. Accepts indirect selection, a bool.

* Remove references to greedy and supporting functions.

* 1. Set testing flag default to True. 2. Improve arg parsing.

* Update tests and add new case for when flag unset.

* Update names and styling to fit test requirements. Add default value for option.

* Correct several failing tests now that default behavior was flipped.

* Tests expect eager on by default.

* All but selector test passing.

* Get integration tests working, add them, and mix in selector syntax.

* Clean code and correct test.

* Add changelog entry

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-07 17:41:56 +01:00
Christophe Oudar
b591e1a2b7 Use common columns for incremental schema changes (#4170)
* Use common columns for incremental schema changes

* on_schema_change:append_new_columns should gracefully handle column removal

* review changes

* Lean approach for `process_schema_changes` to simplify

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-07 17:31:30 +01:00
Jeremy Cohen
3dab058c73 incorporate_indirect_nodes should pass if not needed (#4214)
* Pass incorporate_indirect_nodes if not needed

* Fix flake8

* Add changelog entry
2021-11-05 16:55:58 +01:00
Robert
c7bc6eb812 Add error surfacing for git cloning errors (#4124)
* Add error surfacing for git cloning errors

* Update CHANGELOG.md

* Fix formatting and remove redundant except: raise

* Turn error handling for duplicate packages back on
2021-11-05 10:12:07 +01:00
Jeremy Cohen
c690ecc1fd Fixup changelog (#4206) 2021-11-04 13:54:23 +01:00
Jeremy Cohen
73e272f06e Add get_where_subquery to test namespace (#4197)
* Add get_where_subquery to test namespace

* Add integration test

* Fix test, add comment, smarter approach

* Fix unit tests

* Add changelog entry
2021-11-04 11:53:28 +01:00
leahwicz
95d087b51b Bumping artifact versions for v1 (#4191)
* Bumping artifact versions for v1

* Adding schema in Changelog

* Update CHANGELOG.md

* Update changelog entry

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-04 11:19:36 +01:00
Jeremy Cohen
40ae6b6bc8 Any subset, strict or not (#4160) 2021-11-02 17:59:46 +01:00
Jeremy Cohen
fe20534a98 Add extra graph edges for build only (#4143)
* Resolve extra graph edges for build only

* Fix flake8

* Change test to reflect functional change

* Rename method + args. Add changelog entry
2021-11-02 17:41:14 +01:00
leahwicz
dd7af477ac Perf improvement to subgraph selection (#4155)
Perf improvement to get_subset_graph
Co-authored-by: Ian Knox <ian.knox@fishtownanalytics.com>
2021-10-29 16:06:09 -05:00
Jeremy Cohen
178f74b753 Fix comma if only removing columns in on_schema_change: sync_all_columns (#4148)
* Fix comma if only removing in on_schema_change: sync

* Add changelog entry
2021-10-28 10:19:05 +02:00
Emily Rockman
a14f563ec8 port error scrub from 0.21.latest up for main (#4145) 2021-10-27 14:06:22 -05:00
Kyle Wigley
ff109e1806 Expose lib to run tasks and compile/execute sql (#4111) 2021-10-27 13:30:46 -04:00
Frank Cash
5e46694b68 assertRaisesRegexp => assertRaisesRegex (#4136)
* assertRaisesRegexp => assertRaisesRegex

* Update CHANGELOG.md

* Update CHANGELOG.md
2021-10-27 13:05:24 +02:00
Gerda Shank
73af9a56e5 [#3885] Handle env_vars in partial parsing of SQL files (#4101)
* [#3885] Handle env_vars in partial parsing

* Comment method to build env_vars_to_source_files
2021-10-26 11:16:36 -04:00
kadero
d2aa920275 Feature: nullable error_after in source (#3955)
* Add nullable error after feature

* add merge_error_after method

* Fix FreshnessThreshold merged test

* Fix other tests

* Fix merge error after

* Fix test docs generate integration test

* Fix source integration test

* Typo and fix linting.

* Fix mypy test

* More terse way to express merge_freshness_time_thresholds

* Update Changelog.md

* Add integration test

* Fix conflict

* Fix contributing.md

* Fix integration tests

* Move up changelog entry

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-10-26 15:23:57 +02:00
Gerda Shank
c34f3530c8 Use platform agnostic code when searching generic test directory (#4131) 2021-10-25 22:46:36 -04:00
5149 changed files with 51475 additions and 51586 deletions


@@ -1,12 +1,12 @@
[bumpversion]
current_version = 1.0.0b2
current_version = 1.2.0a1
parse = (?P<major>\d+)
\.(?P<minor>\d+)
\.(?P<patch>\d+)
((?P<prekind>a|b|rc)
(?P<pre>\d+) # pre-release version num
)?
serialize =
serialize =
{major}.{minor}.{patch}{prekind}{pre}
{major}.{minor}.{patch}
commit = False
@@ -15,7 +15,7 @@ tag = False
[bumpversion:part:prekind]
first_value = a
optional_value = final
values =
values =
a
b
rc
@@ -24,8 +24,6 @@ values =
[bumpversion:part:pre]
first_value = 1
[bumpversion:file:setup.py]
[bumpversion:file:core/setup.py]
[bumpversion:file:core/dbt/version.py]
@@ -35,3 +33,9 @@ first_value = 1
[bumpversion:file:plugins/postgres/setup.py]
[bumpversion:file:plugins/postgres/dbt/adapters/postgres/__version__.py]
[bumpversion:file:docker/Dockerfile]
[bumpversion:file:tests/adapter/setup.py]
[bumpversion:file:tests/adapter/dbt/tests/adapter/__version__.py]

.changes/0.0.0.md

@@ -0,0 +1,17 @@
## Previous Releases
For information on prior major and minor releases, see their changelogs:
* [1.1](https://github.com/dbt-labs/dbt-core/blob/1.1.latest/CHANGELOG.md)
* [1.0](https://github.com/dbt-labs/dbt-core/blob/1.0.latest/CHANGELOG.md)
* [0.21](https://github.com/dbt-labs/dbt-core/blob/0.21.latest/CHANGELOG.md)
* [0.20](https://github.com/dbt-labs/dbt-core/blob/0.20.latest/CHANGELOG.md)
* [0.19](https://github.com/dbt-labs/dbt-core/blob/0.19.latest/CHANGELOG.md)
* [0.18](https://github.com/dbt-labs/dbt-core/blob/0.18.latest/CHANGELOG.md)
* [0.17](https://github.com/dbt-labs/dbt-core/blob/0.17.latest/CHANGELOG.md)
* [0.16](https://github.com/dbt-labs/dbt-core/blob/0.16.latest/CHANGELOG.md)
* [0.15](https://github.com/dbt-labs/dbt-core/blob/0.15.latest/CHANGELOG.md)
* [0.14](https://github.com/dbt-labs/dbt-core/blob/0.14.latest/CHANGELOG.md)
* [0.13](https://github.com/dbt-labs/dbt-core/blob/0.13.latest/CHANGELOG.md)
* [0.12](https://github.com/dbt-labs/dbt-core/blob/0.12.latest/CHANGELOG.md)
* [0.11 and earlier](https://github.com/dbt-labs/dbt-core/blob/0.11.latest/CHANGELOG.md)

.changes/README.md

@@ -0,0 +1,53 @@
# CHANGELOG Automation
We use [changie](https://changie.dev/) to automate `CHANGELOG` generation. For installation and format/command specifics, see the documentation.
### Quick Tour
- All new change entries get generated under `/.changes/unreleased` as a yaml file
- `header.tpl.md` contains the contents of the entire CHANGELOG file
- `0.0.0.md` contains the contents of the footer for the entire CHANGELOG file. changie appears to be adding support for a dedicated footer file, analogous to the header file; switch to that once it is available. For now, the 0.0.0 in the file name forces this content to the bottom of the changelog no matter what version we are releasing.
- `.changie.yaml` contains the fields in a change, the format of a single change, as well as the format of the Contributors section for each version.
### Workflow
#### Daily workflow
Almost every code change we make associated with an issue will require a `CHANGELOG` entry. After you have created the PR in GitHub, run `changie new` and follow the command prompts to generate a yaml file with your change details. This only needs to be done once per PR.
The `changie new` command will ensure the correct file format and file name. There is a one-to-one mapping of issues to changes; multiple issues cannot be lumped into a single entry. If you make a mistake, the yaml file may be modified directly and saved, as long as the format is preserved.
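For illustration, a generated entry under `/.changes/unreleased` (for example, a file named `Fixes-20220503-120000.yaml`) looks roughly like this; the file name and field values here are hypothetical and are filled in from the command prompts:
```
# Hypothetical example of a changie-generated entry (values come from the prompts)
kind: Fixes
body: Short description of the change
time: 2022-05-03T12:00:00.000000-05:00
custom:
  Author: your-github-username
  Issue: "1234"
  PR: "5678"
```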
Note: If the Core Team has cleared your PR as not needing a changelog entry, the `Skip Changelog` label may be put on the PR to bypass the GitHub action that blocks PRs from being merged when they are missing a `CHANGELOG` entry.
#### Prerelease Workflow
These commands batch up the changes in `/.changes/unreleased` to be included in this prerelease and move those files to a directory named for the release version. The directory passed to `--move-dir` is created under `/.changes` if it does not already exist.
```
changie batch <version> --move-dir '<version>' --prerelease 'rc1'
changie merge
```
Example
```
changie batch 1.0.5 --move-dir '1.0.5' --prerelease 'rc1'
changie merge
```
#### Final Release Workflow
These commands batch up changes in `/.changes/unreleased` as well as `/.changes/<version>` to be included in this final release and delete all prereleases. This rolls all prereleases up into a single final release. All `yaml` files in `/unreleased` and `<version>` will be deleted at this point.
```
changie batch <version> --include '<version>' --remove-prereleases
changie merge
```
Example
```
changie batch 1.0.5 --include '1.0.5' --remove-prereleases
changie merge
```
### A Note on Manual Edits & Gotchas
- Changie generates markdown files in the `.changes` directory that are parsed together with the `changie merge` command. Every time `changie merge` is run, it regenerates the entire file. For this reason, any changes made directly to `CHANGELOG.md` will be overwritten on the next run of `changie merge`.
- If changes need to be made to the `CHANGELOG.md`, make the changes to the relevant `<version>.md` file located in the `/.changes` directory. Then run `changie merge` to regenerate the `CHANGELOG.md`.
- Do not run `changie batch` again on released versions. Our final release workflow deletes all of the yaml files associated with individual changes. If for some reason modifications to the `CHANGELOG.md` are required after we've generated the final release `CHANGELOG.md`, the modifications need to be done manually to the `<version>.md` file in the `/.changes` directory.
- changie can modify, create and delete files depending on the command you run. This is expected. Be sure to commit everything that has been modified and deleted.

.changes/header.tpl.md

@@ -0,0 +1,6 @@
# dbt Core Changelog
- This file provides a full account of all changes to `dbt-core` and `dbt-postgres`
- Changes are listed under the (pre)release in which they first appear. Subsequent releases include changes from previous releases.
- "Breaking changes" listed under a version may require action from end users or external maintainers when upgrading to that version.
- Do not edit this file directly. This file is auto-generated using [changie](https://github.com/miniscruff/changie). For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry)


@@ -0,0 +1,7 @@
kind: Dependencies
body: "Bump ubuntu from 20.04 to 22.04"
time: 2022-04-27T19:51:28.000000-05:00
custom:
Author: dependabot[bot]
Issue: "4904"
PR: "5141"


@@ -0,0 +1,7 @@
kind: Features
body: Add selector method when reading selector definitions
time: 2022-04-08T11:26:10.713088+10:00
custom:
Author: danieldiamond
Issue: "4821"
PR: "4827"


@@ -0,0 +1,7 @@
kind: Features
body: Adds itertools to modules Jinja namespace
time: 2022-04-24T13:26:55.008246+01:00
custom:
Author: bd3dowling
Issue: "5130"
PR: "5140"


@@ -0,0 +1,7 @@
kind: Features
body: allow target as an option in profile_template.yml
time: 2022-04-28T06:56:44.511519-04:00
custom:
Author: alexrosenfeld10
Issue: "5179"
PR: "5184"


@@ -0,0 +1,7 @@
kind: Features
body: 'seed: Add new macro get_csv_sql'
time: 2022-05-03T14:29:34.847959075Z
custom:
Author: adamantike
Issue: "5206"
PR: "5207"


@@ -0,0 +1,7 @@
kind: Fixes
body: Adding new cols to check_cols in snapshots
time: 2022-03-17T21:09:16.977086+01:00
custom:
Author: GtheSheep
Issue: "3146"
PR: "4893"


@@ -0,0 +1,7 @@
kind: Fixes
body: Restore ability to utilize `updated_at` for check_cols snapshots
time: 2022-04-15T11:29:27.063462-06:00
custom:
Author: dbeatty10
Issue: "5076"
PR: "5077"


@@ -0,0 +1,7 @@
kind: Fixes
body: Fix retry logic to return values after initial try
time: 2022-04-22T13:12:27.239055-05:00
custom:
Author: emmyoop
Issue: "5023"
PR: "5137"


@@ -0,0 +1,7 @@
kind: Fixes
body: Use yaml renderer (with target context) for rendering selectors
time: 2022-04-22T13:56:45.147893-04:00
custom:
Author: gshank
Issue: "5131"
PR: "5136"


@@ -0,0 +1,7 @@
kind: Fixes
body: Scrub secret env vars from CommandError in exception stacktrace
time: 2022-04-25T20:39:24.365495+02:00
custom:
Author: jtcohen6
Issue: "5151"
PR: "5152"


@@ -0,0 +1,7 @@
kind: Fixes
body: Ensure the metric name does not contain spaces
time: 2022-04-26T20:21:04.360693-04:00
custom:
Author: gshank
Issue: "4572"
PR: "5173"


@@ -0,0 +1,7 @@
kind: Fixes
body: When parsing 'all_sources' should be a list of unique dirs
time: 2022-04-27T10:26:48.648388-04:00
custom:
Author: gshank
Issue: "5120"
PR: "5176"


@@ -0,0 +1,7 @@
kind: Fixes
body: Add warning if yaml contains duplicate keys
time: 2022-04-28T10:01:57.893956+12:00
custom:
Author: jeremyyeo
Issue: "5114"
PR: "5146"


@@ -0,0 +1,8 @@
kind: Fixes
body: Modifying the drop_test_schema to work better with Redshift issues around locked
tables and current transactions
time: 2022-04-29T16:07:42.750046-05:00
custom:
Author: Mcknight-42
Issue: "5200"
PR: "5198"


@@ -0,0 +1,7 @@
kind: Under the Hood
body: Migrating 005_simple_seed to the new test framework.
time: 2022-04-09T04:05:39.20045-07:00
custom:
Author: versusfacit
Issue: "200"
PR: "5013"


@@ -0,0 +1,7 @@
kind: Under the Hood
body: Convert 029_docs_generate tests to new framework
time: 2022-04-13T18:30:14.706391-04:00
custom:
Author: gshank
Issue: "5035"
PR: "5058"


@@ -0,0 +1,7 @@
kind: Under the Hood
body: Move package deprecation check outside of package cache
time: 2022-04-14T13:22:06.157579-05:00
custom:
Author: emmyoop
Issue: "5068"
PR: "5069"


@@ -0,0 +1,7 @@
kind: Under the Hood
body: Converted dbt list tests to pytest
time: 2022-04-27T14:06:28.882908-05:00
custom:
Author: stu-k
Issue: "5049"
PR: "5178"

.changie.yaml

@@ -0,0 +1,60 @@
changesDir: .changes
unreleasedDir: unreleased
headerPath: header.tpl.md
versionHeaderPath: ""
changelogPath: CHANGELOG.md
versionExt: md
versionFormat: '## dbt-core {{.Version}} - {{.Time.Format "January 02, 2006"}}'
kindFormat: '### {{.Kind}}'
changeFormat: '- {{.Body}} ([#{{.Custom.Issue}}](https://github.com/dbt-labs/dbt-core/issues/{{.Custom.Issue}}), [#{{.Custom.PR}}](https://github.com/dbt-labs/dbt-core/pull/{{.Custom.PR}}))'
kinds:
- label: Breaking Changes
- label: Features
- label: Fixes
- label: Docs
- label: Under the Hood
- label: Dependencies
custom:
- key: Author
label: GitHub Username(s) (separated by a single space if multiple)
type: string
minLength: 3
- key: Issue
label: GitHub Issue Number
type: int
minLength: 4
- key: PR
label: GitHub Pull Request Number
type: int
minLength: 4
footerFormat: |
{{- $contributorDict := dict }}
{{- /* any names added to this list should be all lowercase for later matching purposes */}}
{{- $core_team := list "emmyoop" "nathaniel-may" "gshank" "leahwicz" "chenyulinx" "stu-k" "iknox-fa" "versusfacit" "mcknight-42" "jtcohen6" "dependabot" }}
{{- range $change := .Changes }}
{{- $authorList := splitList " " $change.Custom.Author }}
{{- /* loop through all authors for a PR */}}
{{- range $author := $authorList }}
{{- $authorLower := lower $author }}
{{- /* we only want to include non-core team contributors */}}
{{- if not (has $authorLower $core_team)}}
{{- $pr := $change.Custom.PR }}
{{- /* check if this contributor has other PRs associated with them already */}}
{{- if hasKey $contributorDict $author }}
{{- $prList := get $contributorDict $author }}
{{- $prList = append $prList $pr }}
{{- $contributorDict := set $contributorDict $author $prList }}
{{- else }}
{{- $prList := list $change.Custom.PR }}
{{- $contributorDict := set $contributorDict $author $prList }}
{{- end }}
{{- end}}
{{- end}}
{{- end }}
{{- /* no indentation here for formatting so the final markdown doesn't have unneeded indentations */}}
{{- if $contributorDict}}
### Contributors
{{- range $k,$v := $contributorDict }}
- [@{{$k}}](https://github.com/{{$k}}) ({{ range $index, $element := $v }}{{if $index}}, {{end}}[#{{$element}}](https://github.com/dbt-labs/dbt-core/pull/{{$element}}){{end}})
{{- end }}
{{- end }}
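For illustration, the footer template above would render a Contributors section roughly like the following for a non-core contributor with two PRs; the username and PR numbers here are hypothetical:
```
### Contributors
- [@some-contributor](https://github.com/some-contributor) ([#1234](https://github.com/dbt-labs/dbt-core/pull/1234), [#5678](https://github.com/dbt-labs/dbt-core/pull/5678))
```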

.flake8

@@ -0,0 +1,12 @@
[flake8]
select =
E
W
F
ignore =
W503 # makes Flake8 work like black
W504
E203 # makes Flake8 work like black
E741
E501 # long line checking is done in black
exclude = test

.git-blame-ignore-revs

@@ -0,0 +1,2 @@
# Reformatting dbt-core via black, flake8, mypy, and assorted pre-commit hooks.
43e3fc22c4eae4d3d901faba05e33c40f1f1dc5a

.github/CODEOWNERS

@@ -0,0 +1,43 @@
# This file contains the code owners for the dbt-core repo.
# PRs will be automatically assigned for review to the associated
# team(s) or person(s) that touches any files that are mapped to them.
#
# A statement takes precedence over the statements above it so more general
# assignments are found at the top with specific assignments being lower in
# the ordering (i.e. catch all assignment should be the first item)
#
# Consult GitHub documentation for formatting guidelines:
# https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners#example-of-a-codeowners-file
# As a default for areas with no assignment,
# the core team as a whole will be assigned
* @dbt-labs/core
# Changes to GitHub configurations including Actions
/.github/ @leahwicz
# Language core modules
/core/dbt/config/ @dbt-labs/core-language
/core/dbt/context/ @dbt-labs/core-language
/core/dbt/contracts/ @dbt-labs/core-language
/core/dbt/deps/ @dbt-labs/core-language
/core/dbt/parser/ @dbt-labs/core-language
# Execution core modules
/core/dbt/events/ @dbt-labs/core-execution @dbt-labs/core-language # eventually remove language but they have knowledge here now
/core/dbt/graph/ @dbt-labs/core-execution
/core/dbt/task/ @dbt-labs/core-execution
# Adapter interface, scaffold, Postgres plugin
/core/dbt/adapters @dbt-labs/core-adapters
/core/scripts/create_adapter_plugin.py @dbt-labs/core-adapters
/plugins/ @dbt-labs/core-adapters
# Global project: default macros, including generic tests + materializations
/core/dbt/include/global_project @dbt-labs/core-execution @dbt-labs/core-adapters
# Perf regression testing framework
# This excludes the test project files itself since those aren't specific
# framework changes (excluded by not setting an owner next to it- no owner)
/performance @nathaniel-may
/performance/projects


@@ -6,7 +6,7 @@ body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this feature requests!
Thanks for taking the time to fill out this feature request!
- type: checkboxes
attributes:
label: Is there an existing feature request for this?
@@ -14,6 +14,10 @@ body:
options:
- label: I have searched the existing issues
required: true
label: Is this your first time opening an issue?
options:
- label: I have read the [expectations for open source contributors](https://docs.getdbt.com/docs/contributing/oss-expectations)
required: true
- type: textarea
attributes:
label: Describe the Feature


@@ -0,0 +1,14 @@
FROM python:3-slim AS builder
ADD . /app
WORKDIR /app
# We are installing a dependency here directly into our app source dir
RUN pip install --target=/app requests packaging
# A distroless container image with Python and some basics like SSL certificates
# https://github.com/GoogleContainerTools/distroless
FROM gcr.io/distroless/python3-debian10
COPY --from=builder /app /app
WORKDIR /app
ENV PYTHONPATH /app
CMD ["/app/main.py"]


@@ -0,0 +1,50 @@
# Github package 'latest' tag wrangler for containers
## Usage
Plug in the necessary inputs to determine if the container being built should be tagged 'latest' at the package level, for example `dbt-redshift:latest`.
## Inputs
| Input | Description |
| - | - |
| `package` | Name of the GH package to check against |
| `new_version` | Semver of new container |
| `gh_token` | GH token with package read scope|
| `halt_on_missing` | Return non-zero exit code if requested package does not exist. (defaults to false)|
## Outputs
| Output | Description |
| - | - |
| `latest` | Whether or not the new container should be tagged 'latest' |
| `minor_latest` | Whether or not the new container should be tagged major.minor.latest |
## Example workflow
```yaml
name: Ship it!
on:
workflow_dispatch:
inputs:
package:
description: The package to publish
required: true
version_number:
description: The version number
required: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: Wrangle latest tag
id: is_latest
uses: ./.github/actions/latest-wrangler
with:
package: ${{ github.event.inputs.package }}
new_version: ${{ github.event.inputs.new_version }}
gh_token: ${{ secrets.GITHUB_TOKEN }}
- name: Print the results
run: |
echo "Is it latest? Survey says: ${{ steps.is_latest.outputs.latest }} !"
echo "Is it minor.latest? Survey says: ${{ steps.is_latest.outputs.minor_latest }} !"
```


@@ -0,0 +1,20 @@
name: "Github package 'latest' tag wrangler for containers"
description: "Determines wether or not a given dbt container should be given a bare 'latest' tag (I.E. dbt-core:latest)"
inputs:
package_name:
description: "Package to check (I.E. dbt-core, dbt-redshift, etc)"
required: true
new_version:
description: "Semver of the container being built (I.E. 1.0.4)"
required: true
gh_token:
description: "Auth token for github (must have view packages scope)"
required: true
outputs:
latest:
description: "Wether or not built container should be tagged latest (bool)"
minor_latest:
description: "Wether or not built container should be tagged minor.latest (bool)"
runs:
using: "docker"
image: "Dockerfile"


@@ -0,0 +1,26 @@
name: Ship it!
on:
workflow_dispatch:
inputs:
package:
description: The package to publish
required: true
version_number:
description: The version number
required: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: Wrangle latest tag
id: is_latest
uses: ./.github/actions/latest-wrangler
with:
package: ${{ github.event.inputs.package }}
new_version: ${{ github.event.inputs.new_version }}
gh_token: ${{ secrets.GITHUB_TOKEN }}
- name: Print the results
run: |
echo "Is it latest? Survey says: ${{ steps.is_latest.outputs.latest }} !"


@@ -0,0 +1,6 @@
{
"inputs": {
"version_number": "1.0.1",
"package": "dbt-redshift"
}
}

.github/actions/latest-wrangler/main.py

@@ -0,0 +1,95 @@
import os
import sys
import requests
from distutils.util import strtobool
from typing import Union
from packaging.version import parse, Version
if __name__ == "__main__":
# get inputs
package = os.environ["INPUT_PACKAGE"]
new_version = parse(os.environ["INPUT_NEW_VERSION"])
gh_token = os.environ["INPUT_GH_TOKEN"]
halt_on_missing = strtobool(os.environ.get("INPUT_HALT_ON_MISSING", "False"))
# get package metadata from github
package_request = requests.get(
f"https://api.github.com/orgs/dbt-labs/packages/container/{package}/versions",
auth=("", gh_token),
)
package_meta = package_request.json()
# Log info if we don't get a 200
if package_request.status_code != 200:
print(f"Call to GH API failed: {package_request.status_code} {package_meta['message']}")
# Make an early exit if there is no matching package in github
if package_request.status_code == 404:
if halt_on_missing:
sys.exit(1)
else:
# everything is the latest if the package doesn't exist
print(f"::set-output name=latest::{True}")
print(f"::set-output name=minor_latest::{True}")
sys.exit(0)
# TODO: verify package meta is "correct"
# https://github.com/dbt-labs/dbt-core/issues/4640
# map versions and tags
version_tag_map = {
version["id"]: version["metadata"]["container"]["tags"] for version in package_meta
}
# is pre-release
pre_rel = True if any(x in str(new_version) for x in ["a", "b", "rc"]) else False
# semver of current latest
for version, tags in version_tag_map.items():
if "latest" in tags:
# N.B. This seems counterintuitive, but we expect any version tagged
# 'latest' to have exactly three associated tags:
# latest, major.minor.latest, and major.minor.patch.
# Subtracting everything that contains the string 'latest' gets us
# the major.minor.patch which is what's needed for comparison.
current_latest = parse([tag for tag in tags if "latest" not in tag][0])
else:
current_latest = False
# semver of current_minor_latest
for version, tags in version_tag_map.items():
if f"{new_version.major}.{new_version.minor}.latest" in tags:
# Similar to above, only now we expect exactly two tags:
# major.minor.patch and major.minor.latest
current_minor_latest = parse([tag for tag in tags if "latest" not in tag][0])
else:
current_minor_latest = False
def is_latest(
pre_rel: bool, new_version: Version, remote_latest: Union[bool, Version]
) -> bool:
"""Determine if a given contaier should be tagged 'latest' based on:
- it's pre-release status
- it's version
- the version of a previously identified container tagged 'latest'
:param pre_rel: Wether or not the version of the new container is a pre-release
:param new_version: The version of the new container
:param remote_latest: The version of the previously identified container that's
already tagged latest or False
"""
# is a pre-release = not latest
if pre_rel:
return False
# + no latest tag found = is latest
if not remote_latest:
return True
# + if remote version is lower than current = is latest, else not latest
return True if remote_latest <= new_version else False
latest = is_latest(pre_rel, new_version, current_latest)
minor_latest = is_latest(pre_rel, new_version, current_minor_latest)
print(f"::set-output name=latest::{latest}")
print(f"::set-output name=minor_latest::{minor_latest}")


@@ -15,7 +15,9 @@ resolves #
### Checklist
- [ ] I have read [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md) and understand what's expected of me
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have updated the `CHANGELOG.md` and added information about my change
- [ ] I have [opened an issue to add/update docs](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose), or docs changes are not required/relevant for this PR
- [ ] I have run `changie new` to [create a changelog entry](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#Adding-CHANGELOG-Entry)


@@ -1,95 +0,0 @@
module.exports = ({ context }) => {
const defaultPythonVersion = "3.8";
const supportedPythonVersions = ["3.6", "3.7", "3.8", "3.9"];
const supportedAdapters = ["postgres"];
// if PR, generate matrix based on files changed and PR labels
if (context.eventName.includes("pull_request")) {
// `changes` is a list of adapter names that have related
// file changes in the PR
// ex: ['postgres', 'snowflake']
const changes = JSON.parse(process.env.CHANGES);
const labels = context.payload.pull_request.labels.map(({ name }) => name);
console.log("labels", labels);
console.log("changes", changes);
const testAllLabel = labels.includes("test all");
const include = [];
for (const adapter of supportedAdapters) {
if (
changes.includes(adapter) ||
testAllLabel ||
labels.includes(`test ${adapter}`)
) {
for (const pythonVersion of supportedPythonVersions) {
if (
pythonVersion === defaultPythonVersion ||
labels.includes(`test python${pythonVersion}`) ||
testAllLabel
) {
// always run tests on ubuntu by default
include.push({
os: "ubuntu-latest",
adapter,
"python-version": pythonVersion,
});
if (labels.includes("test windows") || testAllLabel) {
include.push({
os: "windows-latest",
adapter,
"python-version": pythonVersion,
});
}
if (labels.includes("test macos") || testAllLabel) {
include.push({
os: "macos-latest",
adapter,
"python-version": pythonVersion,
});
}
}
}
}
}
console.log("matrix", { include });
return {
include,
};
}
// if not PR, generate matrix of python version, adapter, and operating
// system to run integration tests on
const include = [];
// run for all adapters and python versions on ubuntu
for (const adapter of supportedAdapters) {
for (const pythonVersion of supportedPythonVersions) {
include.push({
os: 'ubuntu-latest',
adapter: adapter,
"python-version": pythonVersion,
});
}
}
// additionally include runs for all adapters, on macos and windows,
// but only for the default python version
for (const adapter of supportedAdapters) {
for (const operatingSystem of ["windows-latest", "macos-latest"]) {
include.push({
os: operatingSystem,
adapter: adapter,
"python-version": defaultPythonVersion,
});
}
}
console.log("matrix", { include });
return {
include,
};
};

.github/workflows/backport.yml

@@ -0,0 +1,40 @@
# **what?**
# When a PR is merged, if it has the backport label, it will create
# a new PR to backport those changes to the given branch. If it can't
# cleanly do a backport, it will comment on the merged PR of the failure.
#
# Label naming convention: "backport <branch name to backport to>"
# Example: backport 1.0.latest
#
# You MUST "Squash and merge" the original PR or this won't work.
# **why?**
# Changes sometimes need to be backported to release branches.
# This automates the backporting process
# **when?**
# Once a PR is "Squash and merge"'d, by adding a backport label, this is triggered
name: Backport
on:
pull_request:
types:
- labeled
permissions:
contents: write
pull-requests: write
jobs:
backport:
name: Backport
runs-on: ubuntu-latest
# Only react to merged PRs for security reasons.
# See https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target.
if: >
github.event.pull_request.merged
&& contains(github.event.label.name, 'backport')
steps:
- uses: tibdex/backport@v2.0.2
with:
github_token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/changelog-check.yml

@@ -0,0 +1,78 @@
# **what?**
# Checks that a file has been committed under the /.changes directory
# as a new CHANGELOG entry. Cannot check for a specific filename as
# it is dynamically generated by change type and timestamp.
# This workflow should not require any secrets since it runs for PRs
# from forked repos.
# By default, secrets are not passed to workflows running from
# a forked repo.
# **why?**
# Ensure code change gets reflected in the CHANGELOG.
# **when?**
# This will run for all PRs going into main and *.latest. It will
# run when they are opened, reopened, when any label is added or removed
# and when new code is pushed to the branch. The action will then get
# skipped if the 'Skip Changelog' label is present in any of the labels.
name: Check Changelog Entry
on:
pull_request:
types: [opened, reopened, labeled, unlabeled, synchronize]
workflow_dispatch:
defaults:
run:
shell: bash
permissions:
contents: read
pull-requests: write
env:
changelog_comment: 'Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry).'
jobs:
changelog:
name: changelog
if: "!contains(github.event.pull_request.labels.*.name, 'Skip Changelog')"
runs-on: ubuntu-latest
steps:
- name: Check if changelog file was added
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: filter
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
changelog:
- added: '.changes/unreleased/**.yaml'
- name: Check if comment already exists
uses: peter-evans/find-comment@v1
id: changelog_comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: ${{ env.changelog_comment }}
- name: Create PR comment if changelog entry is missing, required, and does not exist
if: |
steps.filter.outputs.changelog == 'false' &&
steps.changelog_comment.outputs.comment-body == ''
uses: peter-evans/create-or-update-comment@v1
with:
issue-number: ${{ github.event.pull_request.number }}
body: ${{ env.changelog_comment }}
- name: Fail job if changelog entry is missing and required
if: steps.filter.outputs.changelog == 'false'
uses: actions/github-script@v6
with:
script: core.setFailed('Changelog entry required to merge.')


@@ -0,0 +1,114 @@
# **what?**
# When dependabot creates a PR, it always adds the `dependencies` label. This
# action will add a corresponding changie yaml file to that PR when that label is added.
# The file is created off a template:
#
# kind: Dependencies
# body: <PR title>
# time: <current timestamp>
# custom:
# Author: dependabot
# Issue: 4904
# PR: <PR number>
#
# **why?**
# Automate changelog generation for more visibility with automated dependency updates via dependabot.
# **when?**
# Once a PR is created and it has been correctly labeled with `dependencies`. The intended use
# is for the PRs created by dependabot. You can also manually trigger this by adding the
# `dependencies` label at any time.
name: Dependency Changelog
on:
pull_request:
# catch when the PR is opened with the label or when the label is added
types: [opened, labeled]
permissions:
contents: write
pull-requests: read
jobs:
dependency_changelog:
if: "contains(github.event.pull_request.labels.*.name, 'dependencies')"
runs-on: ubuntu-latest
steps:
# The timestamp determines the order in which changelog entries are listed in the final CHANGELOG.md file. Precision is not
# important here.
# The timestamp on the filename and the timestamp in the contents of the file have different expected formats.
- name: Get File Name Timestamp
id: filename_time
uses: nanzm/get-time-action@v1.1
with:
format: 'YYYYMMDD-HHmmss'
- name: Get File Content Timestamp
id: file_content_time
uses: nanzm/get-time-action@v1.1
with:
format: 'YYYY-MM-DDTHH:mm:ss.000000-05:00'
# changie expects files to be named in a specific pattern.
- name: Generate Filepath
id: fp
run: |
FILEPATH=.changes/unreleased/Dependencies-${{ steps.filename_time.outputs.time }}.yaml
echo "::set-output name=FILEPATH::$FILEPATH"
- name: Check if changelog file exists already
# if there's already a changelog entry, don't add another one!
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: changelog_check
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
exists:
- added: '.changes/unreleased/**.yaml'
- name: Checkout Branch
if: steps.changelog_check.outputs.exists == 'false'
uses: actions/checkout@v2
with:
# specifying the ref avoids checking out the repository in a detached state
ref: ${{ github.event.pull_request.head.ref }}
# If this is not set to false, Git push is performed with github.token and not the token
# configured using the env: GITHUB_TOKEN in commit step
persist-credentials: false
- name: Create file from template
if: steps.changelog_check.outputs.exists == 'false'
run: |
echo kind: Dependencies > "${{ steps.fp.outputs.FILEPATH }}"
echo 'body: "${{ github.event.pull_request.title }}"' >> "${{ steps.fp.outputs.FILEPATH }}"
echo time: "${{ steps.file_content_time.outputs.time }}" >> "${{ steps.fp.outputs.FILEPATH }}"
echo custom: >> "${{ steps.fp.outputs.FILEPATH }}"
echo ' Author: ${{ github.event.pull_request.user.login }}' >> "${{ steps.fp.outputs.FILEPATH }}"
echo ' Issue: "4904"' >> "${{ steps.fp.outputs.FILEPATH }}" # github.event.pull_request.issue for auto id?
echo ' PR: "${{ github.event.pull_request.number }}"' >> "${{ steps.fp.outputs.FILEPATH }}"
- name: Commit Changelog File
if: steps.changelog_check.outputs.exists == 'false'
uses: gr2m/create-or-update-pull-request-action@v1
env:
# When using the GITHUB_TOKEN, the resulting commit will not trigger another GitHub Actions
# Workflow run. This is due to limitations set by GitHub.
# See: https://docs.github.com/en/actions/security-guides/automatic-token-authentication#using-the-github_token-in-a-workflow
# When you use the repository's GITHUB_TOKEN to perform tasks on behalf of the GitHub Actions
# app, events triggered by the GITHUB_TOKEN will not create a new workflow run. This prevents
# you from accidentally creating recursive workflow runs. To get around this, use a Personal
# Access Token to commit changes.
GITHUB_TOKEN: ${{ secrets.FISHTOWN_BOT_PAT }}
with:
branch: ${{ github.event.pull_request.head.ref }}
# author expected in the format "Lorem J. Ipsum <lorem@example.com>"
author: "Github Build Bot <buildbot@fishtownanalytics.com>"
commit-message: "Add automated changelog yaml from template"


@@ -1,222 +0,0 @@
# **what?**
# This workflow runs all integration tests for supported OS
# and python versions and core adapters. If triggered by PR,
# the workflow will only run tests for adapters related
# to code changes. Use the `test all` and `test ${adapter}`
# label to run all or additional tests. Use `ok to test`
# label to mark PRs from forked repositories that are safe
# to run integration tests for. Requires secrets to run
# against different warehouses.
# **why?**
# This checks the functionality of dbt from a user's perspective
# and attempts to catch functional regressions.
# **when?**
# This workflow will run on every push to a protected branch
# and when manually triggered. It will also run for all PRs, including
# PRs from forks. The workflow will be skipped until there is a label
# to mark the PR as safe to run.
name: Adapter Integration Tests
on:
# pushes to release branches
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
# all PRs, important to note that `pull_request_target` workflows
# will run in the context of the target branch of a PR
pull_request_target:
# manual trigger
workflow_dispatch:
# explicitly turn off permissions for `GITHUB_TOKEN`
permissions: read-all
# will cancel previous workflows triggered by the same event and for the same ref for PRs or same SHA otherwise
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ contains(github.event_name, 'pull_request') && github.event.pull_request.head.ref || github.sha }}
cancel-in-progress: true
# sets default shell to bash, for all operating systems
defaults:
run:
shell: bash
jobs:
# generate test metadata about what files changed and the testing matrix to use
test-metadata:
# run if not a PR from a forked repository or has a label to mark as safe to test
if: >-
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.generate-matrix.outputs.result }}
steps:
- name: Check out the repository (non-PR)
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Check if relevant files changed
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: get-changes
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
postgres:
- 'core/**'
- 'plugins/postgres/**'
- 'dev-requirements.txt'
- name: Generate integration test matrix
id: generate-matrix
uses: actions/github-script@v4
env:
CHANGES: ${{ steps.get-changes.outputs.changes }}
with:
script: |
const script = require('./.github/scripts/integration-test-matrix.js')
const matrix = script({ context })
console.log(matrix)
return matrix
test:
name: ${{ matrix.adapter }} / python ${{ matrix.python-version }} / ${{ matrix.os }}
# run if not a PR from a forked repository or has a label to mark as safe to test
# also checks that the matrix generated is not empty
if: >-
needs.test-metadata.outputs.matrix &&
fromJSON( needs.test-metadata.outputs.matrix ).include[0] &&
(
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
)
runs-on: ${{ matrix.os }}
needs: test-metadata
strategy:
fail-fast: false
matrix: ${{ fromJSON(needs.test-metadata.outputs.matrix) }}
env:
TOXENV: integration-${{ matrix.adapter }}
PYTEST_ADDOPTS: "-v --color=yes -n4 --csv integration_results.csv"
DBT_INVOCATION_ENV: github-actions
steps:
- name: Check out the repository
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
# explicitly check out the branch for the PR,
# this is necessary for the `pull_request_target` event
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Set up postgres (linux)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Linux'
uses: ./.github/actions/setup-postgres-linux
- name: Set up postgres (macos)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'macOS'
uses: ./.github/actions/setup-postgres-macos
- name: Set up postgres (windows)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Windows'
uses: ./.github/actions/setup-postgres-windows
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox (postgres)
if: matrix.adapter == 'postgres'
run: tox
- uses: actions/upload-artifact@v2
if: always()
with:
name: logs
path: ./logs
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y-%m-%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: integration_results_${{ matrix.python-version }}_${{ matrix.os }}_${{ matrix.adapter }}-${{ steps.date.outputs.date }}.csv
path: integration_results.csv
require-label-comment:
runs-on: ubuntu-latest
needs: test
permissions:
pull-requests: write
steps:
- name: Needs permission PR comment
if: >-
needs.test.result == 'skipped' &&
github.event_name == 'pull_request_target' &&
github.event.pull_request.head.repo.full_name != github.repository
uses: unsplash/comment-on-pr@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
msg: |
"You do not have permissions to run integration tests, @dbt-labs/core "\
"needs to label this PR with `ok to test` in order to run integration tests!"
check_for_duplicate_msg: true

.github/workflows/jira-creation.yml vendored Normal file
@@ -0,0 +1,26 @@
# **what?**
# Mirrors issues into Jira. Includes the information: title,
# GitHub Issue ID and URL
# **why?**
# Jira is our tool for tracking and we need to see these issues in there
# **when?**
# On issue creation or when an issue is labeled `Jira`
name: Jira Issue Creation
on:
issues:
types: [opened, labeled]
permissions:
issues: write
jobs:
call-label-action:
uses: dbt-labs/jira-actions/.github/workflows/jira-creation.yml@main
secrets:
JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}
JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}
JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}

.github/workflows/jira-label.yml vendored Normal file
@@ -0,0 +1,26 @@
# **what?**
# Calls mirroring Jira label Action. Includes adding a new label
# to an existing issue or removing a label as well
# **why?**
# Jira is our tool for tracking and we need to see these labels in there
# **when?**
# On labels being added or removed from issues
name: Jira Label Mirroring
on:
issues:
types: [labeled, unlabeled]
permissions:
issues: read
jobs:
call-label-action:
uses: dbt-labs/jira-actions/.github/workflows/jira-label.yml@main
secrets:
JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}
JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}
JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}

.github/workflows/jira-transition.yml vendored Normal file
@@ -0,0 +1,24 @@
# **what?**
# Transition a Jira issue to a new state
# Only supports these GitHub Issue transitions:
# closed, deleted, reopened
# **why?**
# Jira needs to be kept up-to-date
# **when?**
# On issue closing, deletion, reopened
name: Jira Issue Transition
on:
issues:
types: [closed, deleted, reopened]
jobs:
call-label-action:
uses: dbt-labs/jira-actions/.github/workflows/jira-transition.yml@main
secrets:
JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}
JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}
JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}

@@ -1,9 +1,8 @@
# **what?**
# Runs code quality checks, unit tests, and verifies python build on
# all code committed to the repository. This workflow should not
# require any secrets since it runs for PRs from forked repos.
# By default, secrets are not passed to workflows running from
# a forked repo.
# Runs code quality checks, unit tests, integration tests and
# verifies python build on all code committed to the repository. This workflow
# should not require any secrets since it runs for PRs from forked repos. By
# default, secrets are not passed to workflows running from a forked repo.
# **why?**
# Ensure code for dbt meets a certain quality standard.
@@ -18,7 +17,6 @@ on:
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
pull_request:
@@ -37,24 +35,13 @@ defaults:
jobs:
code-quality:
name: ${{ matrix.toxenv }}
name: code-quality
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
toxenv: [flake8, mypy]
env:
TOXENV: ${{ matrix.toxenv }}
PYTEST_ADDOPTS: "-v --color=yes"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
@@ -62,12 +49,16 @@ jobs:
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install tox
pip --version
tox --version
pip install pre-commit
pre-commit --version
pip install mypy==0.782
mypy --version
pip install -r editable-requirements.txt
dbt --version
- name: Run tox
run: tox
- name: Run pre-commit hooks
run: pre-commit run --all-files --show-diff-on-failure
unit:
name: unit test / python ${{ matrix.python-version }}
@@ -77,7 +68,7 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: [3.6, 3.7, 3.8] # TODO: support unit testing for python 3.9 (https://github.com/dbt-labs/dbt/issues/3689)
python-version: ['3.7', '3.8', '3.9', '3.10']
env:
TOXENV: "unit"
@@ -86,8 +77,6 @@ jobs:
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
@@ -97,8 +86,8 @@ jobs:
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install tox
pip --version
pip install tox
tox --version
- name: Run tox
@@ -115,6 +104,75 @@ jobs:
name: unit_results_${{ matrix.python-version }}-${{ steps.date.outputs.date }}.csv
path: unit_results.csv
integration:
name: integration test / python ${{ matrix.python-version }} / ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: ['3.7', '3.8', '3.9', '3.10']
os: [ubuntu-latest]
include:
- python-version: 3.8
os: windows-latest
- python-version: 3.8
os: macos-latest
env:
TOXENV: integration
PYTEST_ADDOPTS: "-v --color=yes -n4 --csv integration_results.csv"
DBT_INVOCATION_ENV: github-actions
steps:
- name: Check out the repository
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Set up postgres (linux)
if: runner.os == 'Linux'
uses: ./.github/actions/setup-postgres-linux
- name: Set up postgres (macos)
if: runner.os == 'macOS'
uses: ./.github/actions/setup-postgres-macos
- name: Set up postgres (windows)
if: runner.os == 'Windows'
uses: ./.github/actions/setup-postgres-windows
- name: Install python tools
run: |
pip install --user --upgrade pip
pip --version
pip install tox
tox --version
- name: Run tests
run: tox
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y_%m_%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: logs_${{ matrix.python-version }}_${{ matrix.os }}_${{ steps.date.outputs.date }}
path: ./logs
- uses: actions/upload-artifact@v2
if: always()
with:
name: integration_results_${{ matrix.python-version }}_${{ matrix.os }}_${{ steps.date.outputs.date }}.csv
path: integration_results.csv
build:
name: build packages
@@ -123,8 +181,6 @@ jobs:
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
@@ -151,44 +207,6 @@ jobs:
run: |
check-wheel-contents dist/*.whl --ignore W007,W008
- uses: actions/upload-artifact@v2
with:
name: dist
path: dist/
test-build:
name: verify packages / python ${{ matrix.python-version }} / ${{ matrix.os }}
needs: build
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.6, 3.7, 3.8, 3.9]
steps:
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install --upgrade wheel
pip --version
- uses: actions/download-artifact@v2
with:
name: dist
path: dist/
- name: Show distributions
run: ls -lh dist/
- name: Install wheel distributions
run: |
find ./dist/*.whl -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
@@ -198,8 +216,9 @@ jobs:
dbt --version
- name: Install source distributions
# ignore dbt-1.0.0, which intentionally raises an error when installed from source
run: |
find ./dist/*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
find ./dist/dbt-[a-z]*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check source distributions
run: |

@@ -1,176 +0,0 @@
name: Performance Regression Tests
# Schedule triggers
on:
# runs twice a day at 10:05am and 10:05pm
schedule:
- cron: "5 10,22 * * *"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
# checks fmt of runner code
# purposefully not a dependency of any other job
# will block merging, but not prevent developing
fmt:
name: Cargo fmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- run: rustup component add rustfmt
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --manifest-path performance/runner/Cargo.toml --all -- --check
# runs any tests associated with the runner
# these tests make sure the runner logic is correct
test-runner:
name: Test Runner
runs-on: ubuntu-latest
env:
# turns warnings into errors
RUSTFLAGS: "-D warnings"
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- uses: actions-rs/cargo@v1
with:
command: test
args: --manifest-path performance/runner/Cargo.toml
# build an optimized binary to be used as the runner in later steps
build-runner:
needs: [test-runner]
name: Build Runner
runs-on: ubuntu-latest
env:
RUSTFLAGS: "-D warnings"
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- uses: actions-rs/cargo@v1
with:
command: build
args: --release --manifest-path performance/runner/Cargo.toml
- uses: actions/upload-artifact@v2
with:
name: runner
path: performance/runner/target/release/runner
# run the performance measurements on the current or default branch
measure-dev:
needs: [build-runner]
name: Measure Dev Branch
runs-on: ubuntu-latest
steps:
- name: checkout dev
uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: "3.8"
- name: install dbt
run: pip install -r dev-requirements.txt -r editable-requirements.txt
- name: install hyperfine
run: wget https://github.com/sharkdp/hyperfine/releases/download/v1.11.0/hyperfine_1.11.0_amd64.deb && sudo dpkg -i hyperfine_1.11.0_amd64.deb
- uses: actions/download-artifact@v2
with:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: run
run: ./runner measure -b dev -p ${{ github.workspace }}/performance/projects/
- uses: actions/upload-artifact@v2
with:
name: dev-results
path: performance/results/
# run the performance measurements on the release branch which we use
# as a performance baseline. This part takes by far the longest, so
# we do everything we can first so the job fails fast.
# -----
# we need to checkout dbt twice in this job: once for the baseline dbt
# version, and once to get the latest regression testing projects,
# metrics, and runner code from the develop or current branch so that
# the calculations match for both versions of dbt we are comparing.
measure-baseline:
needs: [build-runner]
name: Measure Baseline Branch
runs-on: ubuntu-latest
steps:
- name: checkout latest
uses: actions/checkout@v2
with:
ref: "0.20.latest"
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: "3.8"
- name: move repo up a level
run: mkdir ${{ github.workspace }}/../baseline/ && cp -r ${{ github.workspace }} ${{ github.workspace }}/../baseline
- name: "[debug] ls new dbt location"
run: ls ${{ github.workspace }}/../baseline/dbt/
# installation creates egg-links so we have to preserve source
- name: install dbt from new location
run: cd ${{ github.workspace }}/../baseline/dbt/ && pip install -r dev-requirements.txt -r editable-requirements.txt
# checkout the current branch to get all the target projects
# this deletes the old checked out code which is why we had to copy before
- name: checkout dev
uses: actions/checkout@v2
- name: install hyperfine
run: wget https://github.com/sharkdp/hyperfine/releases/download/v1.11.0/hyperfine_1.11.0_amd64.deb && sudo dpkg -i hyperfine_1.11.0_amd64.deb
- uses: actions/download-artifact@v2
with:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: run runner
run: ./runner measure -b baseline -p ${{ github.workspace }}/performance/projects/
- uses: actions/upload-artifact@v2
with:
name: baseline-results
path: performance/results/
# detect regressions on the output generated from measuring
# the two branches. Exits with non-zero code if a regression is detected.
calculate-regressions:
needs: [measure-dev, measure-baseline]
name: Compare Results
runs-on: ubuntu-latest
steps:
- uses: actions/download-artifact@v2
with:
name: dev-results
- uses: actions/download-artifact@v2
with:
name: baseline-results
- name: "[debug] ls result files"
run: ls
- uses: actions/download-artifact@v2
with:
name: runner
- name: change permissions
run: chmod +x ./runner
- name: make results directory
run: mkdir ./final-output/
- name: run calculation
run: ./runner calculate -r ./ -o ./final-output/
# always attempt to upload the results even if there were regressions found
- uses: actions/upload-artifact@v2
if: ${{ always() }}
with:
name: final-calculations
path: ./final-output/*

.github/workflows/release-docker.yml vendored Normal file
@@ -0,0 +1,116 @@
# **what?**
# This workflow will generate a series of docker images for dbt and push them to the github container registry
# **why?**
# Docker images for dbt are used in a number of important places throughout the dbt ecosystem. This is how we keep those images up-to-date.
# **when?**
# This is triggered manually
# **next steps**
# - build this into the release workflow (or conversely, break out the different release methods into their own workflow files)
name: Docker release
permissions:
packages: write
on:
workflow_dispatch:
inputs:
package:
description: The package to release. _One_ of [dbt-core, dbt-redshift, dbt-bigquery, dbt-snowflake, dbt-spark, dbt-postgres]
required: true
version_number:
description: The release version number (i.e. 1.0.0b1). Do not include `latest` tags or a leading `v`!
required: true
jobs:
get_version_meta:
name: Get version meta
runs-on: ubuntu-latest
outputs:
major: ${{ steps.version.outputs.major }}
minor: ${{ steps.version.outputs.minor }}
patch: ${{ steps.version.outputs.patch }}
latest: ${{ steps.latest.outputs.latest }}
minor_latest: ${{ steps.latest.outputs.minor_latest }}
steps:
- uses: actions/checkout@v1
- name: Split version
id: version
run: |
IFS="." read -r MAJOR MINOR PATCH <<< ${{ github.event.inputs.version_number }}
echo "::set-output name=major::$MAJOR"
echo "::set-output name=minor::$MINOR"
echo "::set-output name=patch::$PATCH"
- name: Is pkg 'latest'
id: latest
uses: ./.github/actions/latest-wrangler
with:
package: ${{ github.event.inputs.package }}
new_version: ${{ github.event.inputs.version_number }}
gh_token: ${{ secrets.GITHUB_TOKEN }}
halt_on_missing: False
setup_image_builder:
name: Set up docker image builder
runs-on: ubuntu-latest
needs: [get_version_meta]
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
build_and_push:
name: Build images and push to GHCR
runs-on: ubuntu-latest
needs: [setup_image_builder, get_version_meta]
steps:
- name: Get docker build arg
id: build_arg
run: |
echo "::set-output name=build_arg_name::"$(echo ${{ github.event.inputs.package }} | sed 's/\-/_/g')
echo "::set-output name=build_arg_value::"$(echo ${{ github.event.inputs.package }} | sed 's/postgres/core/g')
- name: Log in to the GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push MAJOR.MINOR.PATCH tag
uses: docker/build-push-action@v2
with:
file: docker/Dockerfile
push: True
target: ${{ github.event.inputs.package }}
build-args: |
${{ steps.build_arg.outputs.build_arg_name }}_ref=${{ steps.build_arg.outputs.build_arg_value }}@v${{ github.event.inputs.version_number }}
tags: |
ghcr.io/dbt-labs/${{ github.event.inputs.package }}:${{ github.event.inputs.version_number }}
- name: Build and push MINOR.latest tag
uses: docker/build-push-action@v2
if: ${{ needs.get_version_meta.outputs.minor_latest == 'True' }}
with:
file: docker/Dockerfile
push: True
target: ${{ github.event.inputs.package }}
build-args: |
${{ steps.build_arg.outputs.build_arg_name }}_ref=${{ steps.build_arg.outputs.build_arg_value }}@v${{ github.event.inputs.version_number }}
tags: |
ghcr.io/dbt-labs/${{ github.event.inputs.package }}:${{ needs.get_version_meta.outputs.major }}.${{ needs.get_version_meta.outputs.minor }}.latest
- name: Build and push latest tag
uses: docker/build-push-action@v2
if: ${{ needs.get_version_meta.outputs.latest == 'True' }}
with:
file: docker/Dockerfile
push: True
target: ${{ github.event.inputs.package }}
build-args: |
${{ steps.build_arg.outputs.build_arg_name }}_ref=${{ steps.build_arg.outputs.build_arg_value }}@v${{ github.event.inputs.version_number }}
tags: |
ghcr.io/dbt-labs/${{ github.event.inputs.package }}:latest

.github/workflows/release.yml vendored Normal file
@@ -0,0 +1,199 @@
# **what?**
# Take the given commit, run unit tests specifically on that sha, build and
# package it, and then release to GitHub and PyPi with that specific build
# **why?**
# Ensure an automated and tested release process
# **when?**
# This will only run manually with a given sha and version
name: Release to GitHub and PyPi
on:
workflow_dispatch:
inputs:
sha:
description: 'The last commit sha in the release'
required: true
version_number:
description: 'The release version number (i.e. 1.0.0b1)'
required: true
defaults:
run:
shell: bash
jobs:
unit:
name: Unit test
runs-on: ubuntu-latest
env:
TOXENV: "unit"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.inputs.sha }}
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox
run: tox
build:
name: build packages
runs-on: ubuntu-latest
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.inputs.sha }}
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install --upgrade setuptools wheel twine check-wheel-contents
pip --version
- name: Build distributions
run: ./scripts/build-dist.sh
- name: Show distributions
run: ls -lh dist/
- name: Check distribution descriptions
run: |
twine check dist/*
- name: Check wheel contents
run: |
check-wheel-contents dist/*.whl --ignore W007,W008
- uses: actions/upload-artifact@v2
with:
name: dist
path: |
dist/
!dist/dbt-${{github.event.inputs.version_number}}.tar.gz
test-build:
name: verify packages
needs: [build, unit]
runs-on: ubuntu-latest
steps:
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install --upgrade wheel
pip --version
- uses: actions/download-artifact@v2
with:
name: dist
path: dist/
- name: Show distributions
run: ls -lh dist/
- name: Install wheel distributions
run: |
find ./dist/*.whl -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check wheel distributions
run: |
dbt --version
- name: Install source distributions
run: |
find ./dist/*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check source distributions
run: |
dbt --version
github-release:
name: GitHub Release
needs: test-build
runs-on: ubuntu-latest
steps:
- uses: actions/download-artifact@v2
with:
name: dist
path: '.'
# Need to set an output variable because env variables can't be taken as input
# This is needed for the next step with releasing to GitHub
- name: Find release type
id: release_type
env:
IS_PRERELEASE: ${{ contains(github.event.inputs.version_number, 'rc') || contains(github.event.inputs.version_number, 'b') }}
run: |
echo ::set-output name=isPrerelease::$IS_PRERELEASE
- name: Creating GitHub Release
uses: softprops/action-gh-release@v1
with:
name: dbt-core v${{github.event.inputs.version_number}}
tag_name: v${{github.event.inputs.version_number}}
prerelease: ${{ steps.release_type.outputs.isPrerelease }}
target_commitish: ${{github.event.inputs.sha}}
body: |
[Release notes](https://github.com/dbt-labs/dbt-core/blob/main/CHANGELOG.md)
files: |
dbt_postgres-${{github.event.inputs.version_number}}-py3-none-any.whl
dbt_core-${{github.event.inputs.version_number}}-py3-none-any.whl
dbt-postgres-${{github.event.inputs.version_number}}.tar.gz
dbt-core-${{github.event.inputs.version_number}}.tar.gz
pypi-release:
name: Pypi release
runs-on: ubuntu-latest
needs: github-release
environment: PypiProd
steps:
- uses: actions/download-artifact@v2
with:
name: dist
path: 'dist'
- name: Publish distribution to PyPI
uses: pypa/gh-action-pypi-publish@v1.4.2
with:
password: ${{ secrets.PYPI_API_TOKEN }}

@@ -1,5 +1,5 @@
# **what?**
# Compares the schema of the dbt version of the given ref vs
# the latest official schema releases found in schemas.getdbt.com.
# If there are differences, the workflow will fail and upload the
# diff as an artifact. The metadata team should be alerted to the change.
@@ -37,20 +37,20 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Checkout dbt repo
uses: actions/checkout@v2.3.4
with:
path: ${{ env.DBT_REPO_DIRECTORY }}
- name: Checkout schemas.getdbt.com repo
uses: actions/checkout@v2.3.4
with:
repository: dbt-labs/schemas.getdbt.com
ref: 'main'
ssh-key: ${{ secrets.SCHEMA_SSH_PRIVATE_KEY }}
path: ${{ env.SCHEMA_REPO_DIRECTORY }}
- name: Generate current schema
run: |
cd ${{ env.DBT_REPO_DIRECTORY }}
@@ -59,7 +59,7 @@ jobs:
pip install --upgrade pip
pip install -r dev-requirements.txt -r editable-requirements.txt
python scripts/collect-artifact-schema.py --path ${{ env.LATEST_SCHEMA_PATH }}
# Copy generated schema files into the schemas.getdbt.com repo
# Do a git diff to find any changes
# Ignore any date or version changes though

@@ -12,7 +12,6 @@ jobs:
with:
stale-issue-message: "This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please remove the stale label or comment on the issue, or it will be closed in 7 days."
stale-pr-message: "This PR has been marked as Stale because it has been open for 180 days with no activity. If you would like the PR to remain open, please remove the stale label or comment on the PR, or it will be closed in 7 days."
close-issue-message: "Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest; add a comment to notify the maintainers."
# mark issues/PRs stale when they haven't seen activity in 180 days
days-before-stale: 180
# ignore checking issues with the following labels
exempt-issue-labels: "epic,discussion"

@@ -0,0 +1,73 @@
# This Action makes a dbt run to sample json structured logs
# and checks that they conform to the currently documented schema.
#
# If this action fails it either means we have unintentionally deviated
# from our documented structured logging schema, or we need to bump the
# version of our structured logging and add new documentation to
# communicate these changes.
name: Structured Logging Schema Check
on:
push:
branches:
- "main"
- "*.latest"
- "releases/*"
pull_request:
workflow_dispatch:
permissions: read-all
jobs:
# run the performance measurements on the current or default branch
test-schema:
name: Test Log Schema
runs-on: ubuntu-latest
env:
# turns warnings into errors
RUSTFLAGS: "-D warnings"
# points tests to the log file
LOG_DIR: "/home/runner/work/dbt-core/dbt-core/logs"
# tells integration tests to output into json format
DBT_LOG_FORMAT: "json"
steps:
- name: checkout dev
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: "3.8"
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip --version
pip install tox
tox --version
- name: Set up postgres
uses: ./.github/actions/setup-postgres-linux
- name: ls
run: ls
# integration tests generate a ton of logs in different files. the next step will find them all.
# we actually care if these pass, because the normal test run doesn't usually include many json log outputs
- name: Run integration tests
run: tox -e integration -- -nauto
# apply our schema tests to every log event from the previous step
# skips any output that isn't valid json
- uses: actions-rs/cargo@v1
with:
command: run
args: --manifest-path test/interop/log_parsing/Cargo.toml

.github/workflows/test/.actrc vendored Normal file
@@ -0,0 +1 @@
-P ubuntu-latest=ghcr.io/catthehacker/ubuntu:act-latest

.github/workflows/test/.gitignore vendored Normal file
@@ -0,0 +1 @@
.secrets

@@ -0,0 +1 @@
GITHUB_TOKEN=GH_PERSONAL_ACCESS_TOKEN_GOES_HERE

@@ -0,0 +1,6 @@
{
"inputs": {
"version_number": "1.0.1",
"package": "dbt-postgres"
}
}

.github/workflows/triage-labels.yml vendored Normal file
@@ -0,0 +1,33 @@
# **what?**
# When the core team triages, we sometimes need more information from the issue creator. In
# those cases we remove the `triage` label and add the `awaiting_response` label. Once we
# receive a response in the form of a comment, we want the `awaiting_response` label removed
# in favor of the `triage` label so we are aware that the issue needs action.
# **why?**
# To help with our team's triage issue tracking
# **when?**
# This will run when a comment is added to an issue and that issue has the `awaiting_response` label.
name: Update Triage Label
on: issue_comment
defaults:
run:
shell: bash
permissions:
issues: write
jobs:
triage_label:
if: contains(github.event.issue.labels.*.name, 'awaiting_response')
runs-on: ubuntu-latest
steps:
- name: initial labeling
uses: andymckay/labeler@master
with:
add-labels: "triage"
remove-labels: "awaiting_response"

@@ -1,16 +1,16 @@
# **what?**
# This workflow will take a version number and a dry run flag. With that
# it will run versionbump to update the version number everywhere in the
# code base and then generate an updated Docker requirements file. If this
# is a dry run, a draft PR will open with the changes. If this isn't a dry
# run, the changes will be committed to the branch this is run on.
# **why?**
# This is to aid in releasing dbt and making sure we have updated
# the versions and Docker requirements in all places.
# **when?**
# This is triggered either manually OR
# from the repository_dispatch event "version-bump" which is sent from
# the dbt-release repo Action
@@ -25,10 +25,10 @@ on:
is_dry_run:
description: 'Creates a draft PR to allow testing instead of committing to a branch'
required: true
default: 'true'
repository_dispatch:
types: [version-bump]
jobs:
bump:
runs-on: ubuntu-latest
@@ -57,26 +57,26 @@ jobs:
run: |
python3 -m venv env
source env/bin/activate
pip install --upgrade pip
- name: Create PR branch
if: ${{ steps.variables.outputs.IS_DRY_RUN == 'true' }}
run: |
git checkout -b bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_$GITHUB_RUN_ID
git push origin bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_$GITHUB_RUN_ID
git branch --set-upstream-to=origin/bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_$GITHUB_RUN_ID bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_$GITHUB_RUN_ID
- name: Generate Docker requirements
run: |
source env/bin/activate
pip install -r requirements.txt
pip freeze -l > docker/requirements/requirements.txt
git status
# - name: Generate Docker requirements
# run: |
# source env/bin/activate
# pip install -r requirements.txt
# pip freeze -l > docker/requirements/requirements.txt
# git status
- name: Bump version
run: |
source env/bin/activate
pip install -r dev-requirements.txt
env/bin/bumpversion --allow-dirty --new-version ${{steps.variables.outputs.VERSION_NUMBER}} major
git status
@@ -107,3 +107,5 @@ jobs:
base: ${{github.ref}}
title: 'Bumping version to ${{steps.variables.outputs.VERSION_NUMBER}}'
branch: 'bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_${{GITHUB.RUN_ID}}'
labels: |
Skip Changelog

.gitignore vendored
@@ -49,9 +49,8 @@ coverage.xml
*,cover
.hypothesis/
test.env
*.pytest_cache/
# Mypy
.mypy_cache/
# Translations
*.mo
@@ -66,10 +65,10 @@ docs/_build/
# PyBuilder
target/
#Ipython Notebook
# Ipython Notebook
.ipynb_checkpoints
#Emacs
# Emacs
*~
# Sublime Text
@@ -78,6 +77,7 @@ target/
# Vim
*.sw*
# Pyenv
.python-version
# Vim
@@ -90,6 +90,7 @@ venv/
# AWS credentials
.aws/
# MacOS
.DS_Store
# vscode

.pre-commit-config.yaml Normal file
@@ -0,0 +1,68 @@
# Configuration for pre-commit hooks (see https://pre-commit.com/).
# Eventually the hooks described here will be run as tests before merging each PR.
# TODO: remove global exclusion of tests when testing overhaul is complete
exclude: ^test/
# Force all unspecified python hooks to run python 3.8
default_language_version:
python: python3.8
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: check-yaml
args: [--unsafe]
- id: check-json
- id: end-of-file-fixer
- id: trailing-whitespace
exclude_types:
- "markdown"
- id: check-case-conflict
- repo: https://github.com/psf/black
rev: 22.3.0
hooks:
- id: black
args:
- "--line-length=99"
- "--target-version=py38"
- id: black
alias: black-check
stages: [manual]
args:
- "--line-length=99"
- "--target-version=py38"
- "--check"
- "--diff"
- repo: https://gitlab.com/pycqa/flake8
rev: 4.0.1
hooks:
- id: flake8
- id: flake8
alias: flake8-check
stages: [manual]
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.782
hooks:
- id: mypy
# N.B.: Mypy is... a bit fragile.
#
# By using `language: system` we run this hook in the local
# environment instead of a pre-commit isolated one. This is needed
# to ensure mypy correctly parses the project.
# It may cause trouble
# in that it adds environment variables out of our control to the
# mix. Unfortunately, there's nothing we can do about that, per
# pre-commit's author.
# See https://github.com/pre-commit/pre-commit/issues/730 for details.
args: [--show-error-codes]
files: ^core/dbt/
language: system
- id: mypy
alias: mypy-check
stages: [manual]
args: [--show-error-codes, --pretty]
files: ^core/dbt/
language: system

@@ -2,18 +2,25 @@ The core function of dbt is SQL compilation and execution. Users create projects
## dbt-core
Most of the python code in the repository is within the `core/dbt` directory. Currently the main subdirectories are:
Most of the python code in the repository is within the `core/dbt` directory.
- [`single python files`](core/dbt/README.md): A number of individual files, such as 'compilation.py' and 'exceptions.py'
- [`adapters`](core/dbt/adapters): Define base classes for behavior that is likely to differ across databases
- [`clients`](core/dbt/clients): Interface with dependencies (agate, jinja) or across operating systems
- [`config`](core/dbt/config): Reconcile user-supplied configuration from connection profiles, project files, and Jinja macros
- [`context`](core/dbt/context): Build and expose dbt-specific Jinja functionality
- [`contracts`](core/dbt/contracts): Define Python objects (dataclasses) that dbt expects to create and validate
- [`deps`](core/dbt/deps): Package installation and dependency resolution
- [`graph`](core/dbt/graph): Produce a `networkx` DAG of project resources, and select those resources given user-supplied criteria
- [`include`](core/dbt/include): The dbt "global project," which defines default implementations of Jinja2 macros
- [`parser`](core/dbt/parser): Read project files, validate, construct python objects
- [`task`](core/dbt/task): Set forth the actions that dbt can perform when invoked
The main subdirectories of core/dbt:
- [`adapters`](core/dbt/adapters/README.md): Define base classes for behavior that is likely to differ across databases
- [`clients`](core/dbt/clients/README.md): Interface with dependencies (agate, jinja) or across operating systems
- [`config`](core/dbt/config/README.md): Reconcile user-supplied configuration from connection profiles, project files, and Jinja macros
- [`context`](core/dbt/context/README.md): Build and expose dbt-specific Jinja functionality
- [`contracts`](core/dbt/contracts/README.md): Define Python objects (dataclasses) that dbt expects to create and validate
- [`deps`](core/dbt/deps/README.md): Package installation and dependency resolution
- [`events`](core/dbt/events/README.md): Logging events
- [`graph`](core/dbt/graph/README.md): Produce a `networkx` DAG of project resources, and select those resources given user-supplied criteria
- [`include`](core/dbt/include/README.md): The dbt "global project," which defines default implementations of Jinja2 macros
- [`parser`](core/dbt/parser/README.md): Read project files, validate, construct python objects
- [`task`](core/dbt/task/README.md): Set forth the actions that dbt can perform when invoked
Legacy tests are found in the 'test' directory:
- [`unit tests`](core/dbt/test/unit/README.md): Unit tests
- [`integration tests`](core/dbt/test/integration/README.md): Integration tests
### Invoking dbt
@@ -44,4 +51,4 @@ The [`test/`](test/) subdirectory includes unit and integration tests that run a
- [docker](docker/): All dbt versions are published as Docker images on DockerHub. This subfolder contains the `Dockerfile` (constant) and `requirements.txt` (one for each version).
- [etc](etc/): Images for README
- [scripts](scripts/): Helper scripts for testing, releasing, and producing JSON schemas. These are not included in distributions of dbt, not are they rigorously tested—they're just handy tools for the dbt maintainers :)
- [scripts](scripts/): Helper scripts for testing, releasing, and producing JSON schemas. These are not included in distributions of dbt, nor are they rigorously tested—they're just handy tools for the dbt maintainers :)

CHANGELOG.md Normal file → Executable file
File diff suppressed because it is too large

@@ -1,120 +1,72 @@
# Contributing to `dbt`
# Contributing to `dbt-core`
`dbt-core` is open source software. It is what it is today because community members have opened issues, provided feedback, and [contributed to the knowledge loop](https://www.getdbt.com/dbt-labs/values/). Whether you are a seasoned open source contributor or a first-time committer, we welcome and encourage you to contribute code, documentation, ideas, or problem statements to this project.
1. [About this document](#about-this-document)
2. [Proposing a change](#proposing-a-change)
3. [Getting the code](#getting-the-code)
4. [Setting up an environment](#setting-up-an-environment)
5. [Running `dbt` in development](#running-dbt-in-development)
6. [Testing](#testing)
7. [Submitting a Pull Request](#submitting-a-pull-request)
2. [Getting the code](#getting-the-code)
3. [Setting up an environment](#setting-up-an-environment)
4. [Running `dbt` in development](#running-dbt-core-in-development)
5. [Testing dbt-core](#testing)
6. [Submitting a Pull Request](#submitting-a-pull-request)
## About this document
This document is a guide intended for folks interested in contributing to `dbt`. Below, we document the process by which members of the community should create issues and submit pull requests (PRs) in this repository. It is not intended as a guide for using `dbt`, and it assumes a certain level of familiarity with Python concepts such as virtualenvs, `pip`, python modules, filesystems, and so on. This guide assumes you are using macOS or Linux and are comfortable with the command line.
There are many ways to contribute to the ongoing development of `dbt-core`, such as by participating in discussions and issues. We encourage you to first read our higher-level document: ["Expectations for Open Source Contributors"](https://docs.getdbt.com/docs/contributing/oss-expectations).
If you're new to python development or contributing to open-source software, we encourage you to read this document from start to finish. If you get stuck, drop us a line in the `#dbt-core-development` channel on [slack](https://community.getdbt.com).
The rest of this document serves as a more granular guide for contributing code changes to `dbt-core` (this repository). It is not intended as a guide for using `dbt-core`, and some pieces assume a level of familiarity with Python development (virtualenvs, `pip`, etc). Specific code snippets in this guide assume you are using macOS or Linux and are comfortable with the command line.
#### Adapters
If you get stuck, we're happy to help! Drop us a line in the `#dbt-core-development` channel in the [dbt Community Slack](https://community.getdbt.com).
If you have an issue or code change suggestion related to a specific database [adapter](https://docs.getdbt.com/docs/available-adapters), please refer to that supported database's separate repo for those contributions.
### Notes
### Signing the CLA
Please note that all contributors to `dbt` must sign the [Contributor License Agreement](https://docs.getdbt.com/docs/contributor-license-agreements) to have their Pull Request merged into the `dbt` codebase. If you are unable to sign the CLA, then the `dbt` maintainers will unfortunately be unable to merge your Pull Request. You are, however, welcome to open issues and comment on existing ones.
## Proposing a change
`dbt` is Apache 2.0-licensed open source software. `dbt` is what it is today because community members like you have opened issues, provided feedback, and contributed to the knowledge loop for the entire community. Whether you are a seasoned open source contributor or a first-time committer, we welcome and encourage you to contribute code, documentation, ideas, or problem statements to this project.
### Defining the problem
If you have an idea for a new feature or if you've discovered a bug in `dbt`, the first step is to open an issue. Please check the list of [open issues](https://github.com/dbt-labs/dbt-core/issues) before creating a new one. If you find a relevant issue, please add a comment to the open issue instead of creating a new one. There are hundreds of open issues in this repository and it can be hard to know where to look for a relevant open issue. **The `dbt` maintainers are always happy to point contributors in the right direction**, so please err on the side of documenting your idea in a new issue if you are unsure where a problem statement belongs.
> **Note:** All community-contributed Pull Requests _must_ be associated with an open issue. If you submit a Pull Request that does not pertain to an open issue, you will be asked to create an issue describing the problem before the Pull Request can be reviewed.
### Discussing the idea
After you open an issue, a `dbt` maintainer will follow up by commenting on your issue (usually within 1-3 days) to explore your idea further and advise on how to implement the suggested changes. In many cases, community members will chime in with their own thoughts on the problem statement. If you as the issue creator are interested in submitting a Pull Request to address the issue, you should indicate this in the body of the issue. The `dbt` maintainers are _always_ happy to help contributors with the implementation of fixes and features, so please also indicate if there's anything you're unsure about or could use guidance around in the issue.
### Submitting a change
If an issue is appropriately well scoped and describes a beneficial change to the `dbt` codebase, then anyone may submit a Pull Request to implement the functionality described in the issue. See the sections below on how to do this.
The `dbt` maintainers will add a `good first issue` label if an issue is suitable for a first-time contributor. This label often means that the required code change is small, limited to one database adapter, or a net-new addition that does not impact existing functionality. You can see the list of currently open issues on the [Contribute](https://github.com/dbt-labs/dbt-core/contribute) page.
Here's a good workflow:
- Comment on the open issue, expressing your interest in contributing the required code change
- Outline your planned implementation. If you want help getting started, ask!
- Follow the steps outlined below to develop locally. Once you have opened a PR, one of the `dbt` maintainers will work with you to review your code.
- Add a test! Tests are crucial for both fixes and new features alike. We want to make sure that code works as intended, and that it avoids any bugs previously encountered. Currently, the best resource for understanding `dbt`'s [unit](test/unit) and [integration](test/integration) tests is the tests themselves. One of the maintainers can help by pointing out relevant examples.
In some cases, the right resolution to an open issue might be tangential to the `dbt` codebase. The right path forward might be a documentation update or a change that can be made in user-space. In other cases, the issue might describe functionality that the `dbt` maintainers are unwilling or unable to incorporate into the `dbt` codebase. When it is determined that an open issue describes functionality that will not translate to a code change in the `dbt` repository, the issue will be tagged with the `wontfix` label (see below) and closed.
### Using issue labels
The `dbt` maintainers use labels to categorize open issues. Some labels indicate the databases impacted by the issue, while others describe the domain in the `dbt` codebase germane to the discussion. While most of these labels are self-explanatory (eg. `snowflake` or `bigquery`), there are others that are worth describing.
| tag | description |
| --- | ----------- |
| [triage](https://github.com/dbt-labs/dbt-core/labels/triage) | This is a new issue which has not yet been reviewed by a `dbt` maintainer. This label is removed when a maintainer reviews and responds to the issue. |
| [bug](https://github.com/dbt-labs/dbt-core/labels/bug) | This issue represents a defect or regression in `dbt` |
| [enhancement](https://github.com/dbt-labs/dbt-core/labels/enhancement) | This issue represents net-new functionality in `dbt` |
| [good first issue](https://github.com/dbt-labs/dbt-core/labels/good%20first%20issue) | This issue does not require deep knowledge of the `dbt` codebase to implement. This issue is appropriate for a first-time contributor. |
| [help wanted](https://github.com/dbt-labs/dbt-core/labels/help%20wanted) / [discussion](https://github.com/dbt-labs/dbt-core/labels/discussion) | Conversation around this issue is ongoing, and there isn't yet a clear path forward. Input from community members is most welcome. |
| [duplicate](https://github.com/dbt-labs/dbt-core/issues/duplicate) | This issue is functionally identical to another open issue. The `dbt` maintainers will close this issue and encourage community members to focus conversation on the other one. |
| [snoozed](https://github.com/dbt-labs/dbt-core/labels/snoozed) | This issue describes a good idea, but one which will probably not be addressed in a six-month time horizon. The `dbt` maintainers will revisit these issues periodically and re-prioritize them accordingly. |
| [stale](https://github.com/dbt-labs/dbt-core/labels/stale) | This is an old issue which has not recently been updated. Stale issues will periodically be closed by `dbt` maintainers, but they can be re-opened if the discussion is restarted. |
| [wontfix](https://github.com/dbt-labs/dbt-core/labels/wontfix) | This issue does not require a code change in the `dbt` repository, or the maintainers are unwilling/unable to merge a Pull Request which implements the behavior described in the issue. |
#### Branching Strategy
`dbt` has three types of branches:
- **Trunks** are where active development of the next release takes place. There is one trunk, named `develop` at the time of writing, and it is the default branch of the repository.
- **Release Branches** track a specific, not yet complete release of `dbt`. Each minor version release has a corresponding release branch. For example, the `0.11.x` series of releases has a branch called `0.11.latest`. This allows us to release new patch versions under `0.11` without necessarily needing to pull them into the latest version of `dbt`.
- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk branch or a specific release branch.
- **Adapters:** Is your issue or proposed code change related to a specific [database adapter](https://docs.getdbt.com/docs/available-adapters)? If so, please open issues, PRs, and discussions in that adapter's repository instead. The sole exception is Postgres; the `dbt-postgres` plugin lives in this repository (`dbt-core`).
- **CLA:** Please note that anyone contributing code to `dbt-core` must sign the [Contributor License Agreement](https://docs.getdbt.com/docs/contributor-license-agreements). If you are unable to sign the CLA, the `dbt-core` maintainers will unfortunately be unable to merge any of your Pull Requests. We welcome you to participate in discussions, open issues, and comment on existing ones.
- **Branches:** All pull requests from community contributors should target the `main` branch (default). If the change is needed as a patch for a minor version of dbt that has already been released (or is already a release candidate), a maintainer will backport the changes in your PR to the relevant "latest" release branch (`1.0.latest`, `1.1.latest`, ...)
## Getting the code
### Installing git
You will need `git` in order to download and modify the `dbt` source code. On macOS, the best way to download git is to just install [Xcode](https://developer.apple.com/support/xcode/).
You will need `git` in order to download and modify the `dbt-core` source code. On macOS, the best way to download git is to just install [Xcode](https://developer.apple.com/support/xcode/).
### External contributors
If you are not a member of the `dbt-labs` GitHub organization, you can contribute to `dbt` by forking the `dbt` repository. For a detailed overview on forking, check out the [GitHub docs on forking](https://help.github.com/en/articles/fork-a-repo). In short, you will need to:
If you are not a member of the `dbt-labs` GitHub organization, you can contribute to `dbt-core` by forking the `dbt-core` repository. For a detailed overview on forking, check out the [GitHub docs on forking](https://help.github.com/en/articles/fork-a-repo). In short, you will need to:
1. fork the `dbt` repository
2. clone your fork locally
3. check out a new branch for your proposed changes
4. push changes to your fork
5. open a pull request against `dbt-labs/dbt` from your forked repository
1. Fork the `dbt-core` repository
2. Clone your fork locally
3. Check out a new branch for your proposed changes
4. Push changes to your fork
5. Open a pull request against `dbt-labs/dbt-core` from your forked repository
### Core contributors
### dbt Labs contributors
If you are a member of the `dbt-labs` GitHub organization, you will have push access to the `dbt` repo. Rather than forking `dbt` to make your changes, just clone the repository, check out a new branch, and push directly to that branch.
If you are a member of the `dbt-labs` GitHub organization, you will have push access to the `dbt-core` repo. Rather than forking `dbt-core` to make your changes, just clone the repository, check out a new branch, and push directly to that branch.
## Setting up an environment
There are some tools that will be helpful to you in developing locally. While this is the list relevant for `dbt` development, many of these tools are used commonly across open-source python projects.
There are some tools that will be helpful to you in developing locally. While this is the list relevant for `dbt-core` development, many of these tools are used commonly across open-source python projects.
### Tools
A short list of tools used in `dbt` testing that will be helpful to your understanding:
These are the tools used in `dbt-core` development and testing:
- [`tox`](https://tox.readthedocs.io/en/latest/) to manage virtualenvs across python versions. We currently target the latest patch releases for Python 3.6, Python 3.7, Python 3.8, and Python 3.9
- [`pytest`](https://docs.pytest.org/en/latest/) to discover/run tests
- [`make`](https://users.cs.duke.edu/~ola/courses/programming/Makefiles/Makefiles.html) - but don't worry too much, nobody _really_ understands how make works and our Makefile is super simple
- [`tox`](https://tox.readthedocs.io/en/latest/) to manage virtualenvs across python versions. We currently target the latest patch releases for Python 3.7, 3.8, 3.9, and 3.10
- [`pytest`](https://docs.pytest.org/en/latest/) to define, discover, and run tests
- [`flake8`](https://flake8.pycqa.org/en/latest/) for code linting
- [`black`](https://github.com/psf/black) for code formatting
- [`mypy`](https://mypy.readthedocs.io/en/stable/) for static type checking
- [Github Actions](https://github.com/features/actions)
- [`pre-commit`](https://pre-commit.com) to easily run those checks
- [`changie`](https://changie.dev/) to create changelog entries, without merge conflicts
- [`make`](https://users.cs.duke.edu/~ola/courses/programming/Makefiles/Makefiles.html) to run multiple setup or test steps in combination. Don't worry too much, nobody _really_ understands how `make` works, and our Makefile aims to be super simple.
- [GitHub Actions](https://github.com/features/actions) for automating tests and checks, once a PR is pushed to the `dbt-core` repository
A deep understanding of these tools is not required to effectively contribute to `dbt`, but we recommend checking out the attached documentation if you're interested in learning more about them.
A deep understanding of these tools is not required to effectively contribute to `dbt-core`, but we recommend checking out the attached documentation if you're interested in learning more about each one.
#### virtual environments
#### Virtual environments
We strongly recommend using virtual environments when developing code in `dbt`. We recommend creating this virtualenv
in the root of the `dbt` repository. To create a new virtualenv, run:
We strongly recommend using virtual environments when developing code in `dbt-core`. We recommend creating this virtualenv
in the root of the `dbt-core` repository. To create a new virtualenv, run:
```sh
python3 -m venv env
source env/bin/activate
@@ -122,12 +74,12 @@ source env/bin/activate
This will create and activate a new Python virtual environment.
#### docker and docker-compose
#### Docker and `docker-compose`
Docker and docker-compose are both used in testing. Specific instructions for your OS can be found [here](https://docs.docker.com/get-docker/).
Docker and `docker-compose` are both used in testing. Specific instructions for your OS can be found [here](https://docs.docker.com/get-docker/).
#### postgres (optional)
#### Postgres (optional)
For testing, and later in the examples in this document, you may want to have `psql` available so you can poke around in the database and see what happened. We recommend that you use [homebrew](https://brew.sh/) for that on macOS, and your package manager on Linux. You can install any version of the postgres client that you'd like. On macOS, with homebrew setup, you can run:
@@ -135,11 +87,11 @@ For testing, and later in the examples in this document, you may want to have `p
brew install postgresql
```
## Running `dbt` in development
## Running `dbt-core` in development
### Installation
First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Also ensure you have the latest version of pip installed with `pip install --upgrade pip`. Next, install `dbt` (and its dependencies) with:
First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Also ensure you have the latest version of pip installed with `pip install --upgrade pip`. Next, install `dbt-core` (and its dependencies) with:
```sh
make dev
@@ -147,23 +99,26 @@ make dev
pip install -r dev-requirements.txt -r editable-requirements.txt
```
When `dbt` is installed this way, any changes you make to the `dbt` source code will be reflected immediately in your next `dbt` run.
When installed in this way, any changes you make to your local copy of the source code will be reflected immediately in your next `dbt` run.
### Running `dbt`
### Running `dbt-core`
With your virtualenv activated, the `dbt` script should point back to the source code you've cloned on your machine. You can verify this by running `which dbt`. This command should show you a path to an executable in your virtualenv.
Configure your [profile](https://docs.getdbt.com/docs/configure-your-profile) as necessary to connect to your target databases. It may be a good idea to add a new profile pointing to a local postgres instance, or a specific test sandbox within your data warehouse if appropriate.
Configure your [profile](https://docs.getdbt.com/docs/configure-your-profile) as necessary to connect to your target databases. It may be a good idea to add a new profile pointing to a local Postgres instance, or a specific test sandbox within your data warehouse if appropriate.
## Testing
Getting the `dbt` integration tests set up in your local environment will be very helpful as you start to make changes to your local version of `dbt`. The section that follows outlines some helpful tips for setting up the test environment.
Once you're able to manually test that your code change is working as expected, it's important to run existing automated tests, as well as adding some new ones. These tests will ensure that:
- Your code changes do not unexpectedly break other established functionality
- Your code changes can handle all known edge cases
- The functionality you're adding will _keep_ working in the future
Although `dbt` works with a number of different databases, you won't need to supply credentials for every one of these databases in your test environment. Instead you can test all dbt-core code changes with Python and Postgres.
Although `dbt-core` works with a number of different databases, you won't need to supply credentials for every one of these databases in your test environment. Instead, you can test most `dbt-core` code changes with Python and Postgres.
### Initial setup
We recommend starting with `dbt`'s Postgres tests. These tests cover most of the functionality in `dbt`, are the fastest to run, and are the easiest to set up. To run the Postgres integration tests, you'll have to do one extra step of setting up the test database:
Postgres offers the easiest way to test most `dbt-core` functionality today. These tests are the fastest to run and the easiest to set up. To run the Postgres integration tests, you'll have to do one extra step of setting up the test database:
```sh
make setup-db
@@ -174,15 +129,6 @@ docker-compose up -d database
PGHOST=localhost PGUSER=root PGPASSWORD=password PGDATABASE=postgres bash test/setup_db.sh
```
`dbt` uses test credentials specified in a `test.env` file in the root of the repository for non-Postgres databases. This `test.env` file is git-ignored, but please be _extra_ careful to never check in credentials or other sensitive information when developing against `dbt`. To create your `test.env` file, copy the provided sample file, then supply your relevant credentials. This step is only required to use non-Postgres databases.
```
cp test.env.sample test.env
$EDITOR test.env
```
> In general, it's most important to have successful unit and Postgres tests. Once you open a PR, `dbt` will automatically run integration tests for the other three core database adapters. Of course, if you are a BigQuery user, contributing a BigQuery-only feature, it's important to run BigQuery tests as well.
### Test commands
There are a few methods for running tests locally.
@@ -198,38 +144,50 @@ make test
# Runs postgres integration tests with py38 in "fail fast" mode.
make integration
```
> These make targets assume you have a recent version of [`tox`](https://tox.readthedocs.io/en/latest/) installed locally,
> These make targets assume you have a local installation of a recent version of [`tox`](https://tox.readthedocs.io/en/latest/) for unit/integration testing and pre-commit for code quality checks,
> unless you choose to run tests in a Docker container. Run `make help` for more info.
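For example, to run the test suite inside a Docker container instead of a local `tox` installation:

```sh
# Run unit tests and code checks in a Docker container (see `make help`)
make test USE_DOCKER=true
```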
Check out the other targets in the Makefile to see other commonly used test
suites.
#### `pre-commit`
[`pre-commit`](https://pre-commit.com) takes care of running all code-checks for formatting and linting. Run `make dev` to install `pre-commit` in your local environment. Once this is done you can use any of the linter-based make targets as well as a git pre-commit hook that will ensure proper formatting and linting.
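Once installed, you can also invoke `pre-commit` directly; for example:

```sh
# Install the git hook so checks run on every commit, then run all
# configured checks against currently staged files
pre-commit install
pre-commit run
```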
#### `tox`
[`tox`](https://tox.readthedocs.io/en/latest/) takes care of managing virtualenvs and install dependencies in order to run
tests. You can also run tests in parallel, for example, you can run unit tests
for Python 3.6, Python 3.7, Python 3.8, `flake8` checks, and `mypy` checks in
parallel with `tox -p`. Also, you can run unit tests for specific python versions
with `tox -e py36`. The configuration for these tests in located in `tox.ini`.
[`tox`](https://tox.readthedocs.io/en/latest/) takes care of managing virtualenvs and installing dependencies in order to run tests. You can also run tests in parallel; for example, you can run unit tests for Python 3.7, Python 3.8, Python 3.9, and Python 3.10 in parallel with `tox -p`. Also, you can run unit tests for specific Python versions with `tox -e py37`. The configuration for these tests is located in `tox.ini`.
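For example (environment names here follow the `py<version>` convention; check `tox.ini` for the exact list defined in this repo):

```sh
# Run unit tests for a single Python version
tox -e py38
# Run several unit test environments in parallel
tox -p -e py37,py38,py39,py310
```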
#### `pytest`
Finally, you can also run a specific test or group of tests using [`pytest`](https://docs.pytest.org/en/latest/) directly. With a virtualenv
active and dev dependencies installed you can do things like:
Finally, you can also run a specific test or group of tests using [`pytest`](https://docs.pytest.org/en/latest/) directly. With a virtualenv active and dev dependencies installed you can do things like:
```sh
# run specific postgres integration tests
python -m pytest -m profile_postgres test/integration/001_simple_copy_test
# run all unit tests in a file
python -m pytest test/unit/test_graph.py
python3 -m pytest test/unit/test_graph.py
# run a specific unit test
python -m pytest test/unit/test_graph.py::GraphTest::test__dependency_list
python3 -m pytest test/unit/test_graph.py::GraphTest::test__dependency_list
# run specific Postgres integration tests (old way)
python3 -m pytest -m profile_postgres test/integration/074_postgres_unlogged_table_tests
# run specific Postgres integration tests (new way)
python3 -m pytest tests/functional/sources
```
> [Here](https://docs.pytest.org/en/reorganize-docs/new-docs/user/commandlineuseful.html)
> is a list of useful command-line options for `pytest` to use while developing.
> See [pytest usage docs](https://docs.pytest.org/en/6.2.x/usage.html) for an overview of useful command-line options.
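For instance, two standard (not dbt-specific) options that come in handy while iterating:

```sh
# Stop at the first failure (-x) and only run tests whose names match a
# keyword expression (-k); the keyword here is just an example
python3 -m pytest -x -k "source" tests/functional/sources
```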
## Adding CHANGELOG Entry
We use [changie](https://changie.dev) to generate `CHANGELOG` entries. **Note:** Do not edit the `CHANGELOG.md` directly. Your modifications will be lost.
Follow the steps to [install `changie`](https://changie.dev/guide/installation/) for your system.
Once changie is installed and your PR is created, simply run `changie new` and changie will walk you through the process of creating a changelog entry. Commit the file that's created and your changelog entry is complete!
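For example (the location of the generated file depends on this repository's changie configuration; `.changes/unreleased/` is the typical default):

```sh
# Answer the interactive prompts, then commit the generated entry
changie new
git add .changes/unreleased/
git commit -m "Add changelog entry"
```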
You don't need to worry about which `dbt-core` version your change will go into. Just create the changelog entry with `changie`, and open your PR against the `main` branch. All merged changes will be included in the next minor version of `dbt-core`. The Core maintainers _may_ choose to "backport" specific changes in order to patch older minor versions. In that case, a maintainer will take care of that backport after merging your PR, before releasing the new version of `dbt-core`.
## Submitting a Pull Request
dbt Labs provides a CI environment to test changes to specific adapters, and periodic maintenance checks of `dbt-core` through GitHub Actions. For example, if you submit a pull request to the `dbt-redshift` repo, GitHub will trigger automated code checks and tests against Redshift.
A `dbt-core` maintainer will review your PR. They may suggest code revision for style or clarity, or request that you add unit or integration test(s). These are good things! We believe that, with a little bit of help, anyone can contribute high-quality code.
A `dbt` maintainer will review your PR. They may suggest code revision for style or clarity, or request that you add unit or integration test(s). These are good things! We believe that, with a little bit of help, anyone can contribute high-quality code.
Automated tests run via GitHub Actions. If you're a first-time contributor, all tests (including code checks and unit tests) will require a maintainer to approve. Changes in the `dbt-core` repository trigger integration tests against Postgres. dbt Labs also provides CI environments in which to test changes to other adapters, triggered by PRs in those adapters' repositories, as well as periodic maintenance checks of each adapter in concert with the latest `dbt-core` code changes.
Once all tests are passing and your PR has been approved, a `dbt` maintainer will merge your changes into the active development branch. And that's it! Happy developing :tada:
Once all tests are passing and your PR has been approved, a `dbt-core` maintainer will merge your changes into the active development branch. And that's it! Happy developing :tada:

View File

@@ -1,4 +1,9 @@
FROM ubuntu:20.04
##
# This dockerfile is used for local development and adapter testing only.
# See `/docker` for a generic and production-ready docker file
##
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND noninteractive
@@ -41,6 +46,9 @@ RUN apt-get update \
python3.9 \
python3.9-dev \
python3.9-venv \
python3.10 \
python3.10-dev \
python3.10-venv \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

View File

@@ -8,45 +8,58 @@ endif
.PHONY: dev
dev: ## Installs dbt-* packages in develop mode along with development dependencies.
pip install -r dev-requirements.txt -r editable-requirements.txt
@\
pip install -r dev-requirements.txt -r editable-requirements.txt && \
pre-commit install
.PHONY: mypy
mypy: .env ## Runs mypy for static type checking.
$(DOCKER_CMD) tox -e mypy
mypy: .env ## Runs mypy against staged changes for static type checking.
@\
$(DOCKER_CMD) pre-commit run --hook-stage manual mypy-check | grep -v "INFO"
.PHONY: flake8
flake8: .env ## Runs flake8 to enforce style guide.
$(DOCKER_CMD) tox -e flake8
flake8: .env ## Runs flake8 against staged changes to enforce style guide.
@\
$(DOCKER_CMD) pre-commit run --hook-stage manual flake8-check | grep -v "INFO"
.PHONY: black
black: .env ## Runs black against staged changes to enforce style guide.
@\
$(DOCKER_CMD) pre-commit run --hook-stage manual black-check -v | grep -v "INFO"
.PHONY: lint
lint: .env ## Runs all code checks in parallel.
$(DOCKER_CMD) tox -p -e flake8,mypy
lint: .env ## Runs flake8 and mypy code checks against staged changes.
@\
$(DOCKER_CMD) pre-commit run flake8-check --hook-stage manual | grep -v "INFO"; \
$(DOCKER_CMD) pre-commit run mypy-check --hook-stage manual | grep -v "INFO"
.PHONY: unit
unit: .env ## Runs unit tests with py38.
@\
$(DOCKER_CMD) tox -e py38
.PHONY: test
test: .env ## Runs unit tests with py38 and code checks in parallel.
$(DOCKER_CMD) tox -p -e py38,flake8,mypy
test: .env ## Runs unit tests with py38 and code checks against staged changes.
@\
$(DOCKER_CMD) tox -e py38; \
$(DOCKER_CMD) pre-commit run black-check --hook-stage manual | grep -v "INFO"; \
$(DOCKER_CMD) pre-commit run flake8-check --hook-stage manual | grep -v "INFO"; \
$(DOCKER_CMD) pre-commit run mypy-check --hook-stage manual | grep -v "INFO"
.PHONY: integration
integration: .env integration-postgres ## Alias for integration-postgres.
integration: .env ## Runs postgres integration tests with py38.
@\
$(DOCKER_CMD) tox -e py38-integration -- -nauto
.PHONY: integration-fail-fast
integration-fail-fast: .env integration-postgres-fail-fast ## Alias for integration-postgres-fail-fast.
.PHONY: integration-postgres
integration-postgres: .env ## Runs postgres integration tests with py38.
$(DOCKER_CMD) tox -e py38-postgres -- -nauto
.PHONY: integration-postgres-fail-fast
integration-postgres-fail-fast: .env ## Runs postgres integration tests with py38 in "fail fast" mode.
$(DOCKER_CMD) tox -e py38-postgres -- -x -nauto
integration-fail-fast: .env ## Runs postgres integration tests with py38 in "fail fast" mode.
@\
$(DOCKER_CMD) tox -e py38-integration -- -x -nauto
.PHONY: setup-db
setup-db: ## Setup Postgres database with docker-compose for system testing.
docker-compose up -d database
@\
docker-compose up -d database && \
PGHOST=localhost PGUSER=root PGPASSWORD=password PGDATABASE=postgres bash test/setup_db.sh
# This rule creates a file named .env that is used by docker-compose for passing
@@ -62,27 +75,29 @@ endif
.PHONY: clean
clean: ## Resets development environment.
rm -f .coverage
rm -rf .eggs/
rm -f .env
rm -rf .tox/
rm -rf build/
rm -rf dbt.egg-info/
rm -f dbt_project.yml
rm -rf dist/
rm -f htmlcov/*.{css,html,js,json,png}
rm -rf logs/
rm -rf target/
find . -type f -name '*.pyc' -delete
find . -type d -name '__pycache__' -depth -delete
@echo 'cleaning repo...'
@rm -f .coverage
@rm -rf .eggs/
@rm -f .env
@rm -rf .tox/
@rm -rf build/
@rm -rf dbt.egg-info/
@rm -f dbt_project.yml
@rm -rf dist/
@rm -f htmlcov/*.{css,html,js,json,png}
@rm -rf logs/
@rm -rf target/
@find . -type f -name '*.pyc' -delete
@find . -type d -name '__pycache__' -depth -delete
@echo 'done.'
.PHONY: help
help: ## Show this help message.
@echo 'usage: make [target] [USE_DOCKER=true]'
@echo
@echo 'targets:'
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
@grep -E '^[8+a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
@echo
@echo 'options:'
@echo 'use USE_DOCKER=true to run target in a docker container'

View File

@@ -3,16 +3,13 @@
</p>
<p align="center">
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
</a>
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="CI Badge"/>
</a>
</p>
**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
![architecture](https://raw.githubusercontent.com/dbt-labs/dbt-core/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-arch.png)
![architecture](https://github.com/dbt-labs/dbt-core/blob/202cb7e51e218c7b29eb3b11ad058bd56b7739de/etc/dbt-transform.png)
## Understanding dbt

core/README.md (new file, 39 lines)

@@ -0,0 +1,39 @@
<p align="center">
<img src="https://raw.githubusercontent.com/dbt-labs/dbt-core/fa1ea14ddfb1d5ae319d5141844910dd53ab2834/etc/dbt-core.svg" alt="dbt logo" width="750"/>
</p>
<p align="center">
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="CI Badge"/>
</a>
</p>
**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
![architecture](https://raw.githubusercontent.com/dbt-labs/dbt-core/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-arch.png)
## Understanding dbt
Analysts using dbt can transform their data by simply writing select statements, while dbt handles turning these statements into tables and views in a data warehouse.
These select statements, or "models", form a dbt project. Models frequently build on top of one another; dbt makes it easy to [manage relationships](https://docs.getdbt.com/docs/ref) between models, and [visualize these relationships](https://docs.getdbt.com/docs/documentation), as well as assure the quality of your transformations through [testing](https://docs.getdbt.com/docs/testing).
![dbt dag](https://raw.githubusercontent.com/dbt-labs/dbt-core/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-dag.png)
## Getting started
- [Install dbt](https://docs.getdbt.com/docs/installation)
- Read the [introduction](https://docs.getdbt.com/docs/introduction/) and [viewpoint](https://docs.getdbt.com/docs/about/viewpoint/)
## Join the dbt Community
- Be part of the conversation in the [dbt Community Slack](http://community.getdbt.com/)
- Read more on the [dbt Community Discourse](https://discourse.getdbt.com)
## Reporting bugs and contributing code
- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/dbt-labs/dbt-core/issues/new)
- Want to help us build dbt? Check out the [Contributing Guide](https://github.com/dbt-labs/dbt-core/blob/HEAD/CONTRIBUTING.md)
## Code of Conduct
Everyone interacting in the dbt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the [dbt Code of Conduct](https://community.getdbt.com/code-of-conduct).

core/dbt/README.md (new file, 51 lines)

@@ -0,0 +1,51 @@
# core/dbt directory README
## The following are individual files in this directory.
### deprecations.py
### flags.py
### main.py
### tracking.py
### version.py
### lib.py
### node_types.py
### helper_types.py
### links.py
### semver.py
### ui.py
### compilation.py
### dataclass_schema.py
### exceptions.py
### hooks.py
### logger.py
### profiler.py
### utils.py
## The subdirectories will be documented in a README in the subdirectory
* config
* include
* adapters
* context
* deps
* graph
* task
* clients
* events

View File

@@ -0,0 +1 @@
# Adapters README

View File

@@ -8,10 +8,10 @@ from dbt.exceptions import RuntimeException
@dataclass
class Column:
TYPE_LABELS: ClassVar[Dict[str, str]] = {
'STRING': 'TEXT',
'TIMESTAMP': 'TIMESTAMP',
'FLOAT': 'FLOAT',
'INTEGER': 'INT'
"STRING": "TEXT",
"TIMESTAMP": "TIMESTAMP",
"FLOAT": "FLOAT",
"INTEGER": "INT",
}
column: str
dtype: str
@@ -24,7 +24,7 @@ class Column:
return cls.TYPE_LABELS.get(dtype.upper(), dtype)
@classmethod
def create(cls, name, label_or_dtype: str) -> 'Column':
def create(cls, name, label_or_dtype: str) -> "Column":
column_type = cls.translate_type(label_or_dtype)
return cls(name, column_type)
@@ -39,16 +39,14 @@ class Column:
@property
def data_type(self) -> str:
if self.is_string():
return Column.string_type(self.string_size())
return self.string_type(self.string_size())
elif self.is_numeric():
return Column.numeric_type(self.dtype, self.numeric_precision,
self.numeric_scale)
return self.numeric_type(self.dtype, self.numeric_precision, self.numeric_scale)
else:
return self.dtype
def is_string(self) -> bool:
return self.dtype.lower() in ['text', 'character varying', 'character',
'varchar']
return self.dtype.lower() in ["text", "character varying", "character", "varchar"]
def is_number(self):
return any([self.is_integer(), self.is_numeric(), self.is_float()])
@@ -56,33 +54,45 @@ class Column:
def is_float(self):
return self.dtype.lower() in [
# floats
'real', 'float4', 'float', 'double precision', 'float8'
"real",
"float4",
"float",
"double precision",
"float8",
]
def is_integer(self) -> bool:
return self.dtype.lower() in [
# real types
'smallint', 'integer', 'bigint',
'smallserial', 'serial', 'bigserial',
"smallint",
"integer",
"bigint",
"smallserial",
"serial",
"bigserial",
# aliases
'int2', 'int4', 'int8',
'serial2', 'serial4', 'serial8',
"int2",
"int4",
"int8",
"serial2",
"serial4",
"serial8",
]
def is_numeric(self) -> bool:
return self.dtype.lower() in ['numeric', 'decimal']
return self.dtype.lower() in ["numeric", "decimal"]
def string_size(self) -> int:
if not self.is_string():
raise RuntimeException("Called string_size() on non-string field!")
if self.dtype == 'text' or self.char_size is None:
if self.dtype == "text" or self.char_size is None:
# char_size should never be None. Handle it reasonably just in case
return 256
else:
return int(self.char_size)
def can_expand_to(self, other_column: 'Column') -> bool:
def can_expand_to(self, other_column: "Column") -> bool:
"""returns True if this column can be expanded to the size of the
other column"""
if not self.is_string() or not other_column.is_string():
@@ -110,12 +120,10 @@ class Column:
return "<Column {} ({})>".format(self.name, self.data_type)
@classmethod
def from_description(cls, name: str, raw_data_type: str) -> 'Column':
match = re.match(r'([^(]+)(\([^)]+\))?', raw_data_type)
def from_description(cls, name: str, raw_data_type: str) -> "Column":
match = re.match(r"([^(]+)(\([^)]+\))?", raw_data_type)
if match is None:
raise RuntimeException(
f'Could not interpret data type "{raw_data_type}"'
)
raise RuntimeException(f'Could not interpret data type "{raw_data_type}"')
data_type, size_info = match.groups()
char_size = None
numeric_precision = None
@@ -123,7 +131,7 @@ class Column:
if size_info is not None:
# strip out the parentheses
size_info = size_info[1:-1]
parts = size_info.split(',')
parts = size_info.split(",")
if len(parts) == 1:
try:
char_size = int(parts[0])
@@ -148,6 +156,4 @@ class Column:
f'could not convert "{parts[1]}" to an integer'
)
return cls(
name, data_type, char_size, numeric_precision, numeric_scale
)
return cls(name, data_type, char_size, numeric_precision, numeric_scale)

View File

@@ -1,24 +1,37 @@
import abc
import os
# multiprocessing.RLock is a function returning this type
from multiprocessing.synchronize import RLock
from threading import get_ident
from typing import (
Dict, Tuple, Hashable, Optional, ContextManager, List, Union
)
from typing import Dict, Tuple, Hashable, Optional, ContextManager, List
import agate
import dbt.exceptions
from dbt.contracts.connection import (
Connection, Identifier, ConnectionState,
AdapterRequiredConfig, LazyHandle, AdapterResponse
Connection,
Identifier,
ConnectionState,
AdapterRequiredConfig,
LazyHandle,
AdapterResponse,
)
from dbt.contracts.graph.manifest import Manifest
from dbt.adapters.base.query_headers import (
MacroQueryStringSetter,
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import (
NewConnection,
ConnectionReused,
ConnectionLeftOpen,
ConnectionLeftOpen2,
ConnectionClosed,
ConnectionClosed2,
Rollback,
RollbackFailed,
)
from dbt import flags
@@ -35,6 +48,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
You must also set the 'TYPE' class attribute with a class-unique constant
string.
"""
TYPE: str = NotImplemented
def __init__(self, profile: AdapterRequiredConfig):
@@ -56,16 +70,14 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
key = self.get_thread_identifier()
with self.lock:
if key not in self.thread_connections:
raise dbt.exceptions.InvalidConnectionException(
key, list(self.thread_connections)
)
raise dbt.exceptions.InvalidConnectionException(key, list(self.thread_connections))
return self.thread_connections[key]
def set_thread_connection(self, conn: Connection) -> None:
key = self.get_thread_identifier()
if key in self.thread_connections:
raise dbt.exceptions.InternalException(
'In set_thread_connection, existing connection exists for {}'
"In set_thread_connection, existing connection exists for {}"
)
self.thread_connections[key] = conn
@@ -105,18 +117,19 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
underlying database.
"""
raise dbt.exceptions.NotImplementedException(
'`exception_handler` is not implemented for this adapter!')
"`exception_handler` is not implemented for this adapter!"
)
def set_connection_name(self, name: Optional[str] = None) -> Connection:
conn_name: str
if name is None:
# if a name isn't specified, we'll re-use a single handle
# named 'master'
conn_name = 'master'
conn_name = "master"
else:
if not isinstance(name, str):
raise dbt.exceptions.CompilerException(
f'For connection name, got {name} - not a string!'
f"For connection name, got {name} - not a string!"
)
assert isinstance(name, str)
conn_name = name
@@ -129,21 +142,17 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
state=ConnectionState.INIT,
transaction_open=False,
handle=None,
credentials=self.profile.credentials
credentials=self.profile.credentials,
)
self.set_thread_connection(conn)
if conn.name == conn_name and conn.state == 'open':
if conn.name == conn_name and conn.state == "open":
return conn
logger.debug(
'Acquiring new {} connection "{}".'.format(self.TYPE, conn_name))
fire_event(NewConnection(conn_name=conn_name, conn_type=self.TYPE))
if conn.state == 'open':
logger.debug(
'Re-using an available connection from the pool (formerly {}).'
.format(conn.name)
)
if conn.state == "open":
fire_event(ConnectionReused(conn_name=conn_name))
else:
conn.handle = LazyHandle(self.open)
@@ -154,7 +163,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
def cancel_open(self) -> Optional[List[str]]:
"""Cancel all open connections on the adapter. (passable)"""
raise dbt.exceptions.NotImplementedException(
'`cancel_open` is not implemented for this adapter!'
"`cancel_open` is not implemented for this adapter!"
)
@abc.abstractclassmethod
@@ -167,9 +176,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
This should be thread-safe, or hold the lock if necessary. The given
connection should not be in either in_use or available.
"""
raise dbt.exceptions.NotImplementedException(
'`open` is not implemented for this adapter!'
)
raise dbt.exceptions.NotImplementedException("`open` is not implemented for this adapter!")
def release(self) -> None:
with self.lock:
@@ -189,12 +196,10 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
def cleanup_all(self) -> None:
with self.lock:
for connection in self.thread_connections.values():
if connection.state not in {'closed', 'init'}:
logger.debug("Connection '{}' was left open."
.format(connection.name))
if connection.state not in {"closed", "init"}:
fire_event(ConnectionLeftOpen(conn_name=connection.name))
else:
logger.debug("Connection '{}' was properly closed."
.format(connection.name))
fire_event(ConnectionClosed(conn_name=connection.name))
self.close(connection)
# garbage collect these connections
@@ -204,14 +209,14 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
def begin(self) -> None:
"""Begin a transaction. (passable)"""
raise dbt.exceptions.NotImplementedException(
'`begin` is not implemented for this adapter!'
"`begin` is not implemented for this adapter!"
)
@abc.abstractmethod
def commit(self) -> None:
"""Commit a transaction. (passable)"""
raise dbt.exceptions.NotImplementedException(
'`commit` is not implemented for this adapter!'
"`commit` is not implemented for this adapter!"
)
@classmethod
@@ -220,31 +225,28 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
try:
connection.handle.rollback()
except Exception:
logger.debug(
'Failed to rollback {}'.format(connection.name),
exc_info=True
)
fire_event(RollbackFailed(conn_name=connection.name))
@classmethod
def _close_handle(cls, connection: Connection) -> None:
"""Perform the actual close operation."""
# On windows, sometimes connection handles don't have a close() attr.
if hasattr(connection.handle, 'close'):
logger.debug(f'On {connection.name}: Close')
if hasattr(connection.handle, "close"):
fire_event(ConnectionClosed2(conn_name=connection.name))
connection.handle.close()
else:
logger.debug(f'On {connection.name}: No close available on handle')
fire_event(ConnectionLeftOpen2(conn_name=connection.name))
@classmethod
def _rollback(cls, connection: Connection) -> None:
"""Roll back the given connection."""
if connection.transaction_open is False:
raise dbt.exceptions.InternalException(
f'Tried to rollback transaction on connection '
f"Tried to rollback transaction on connection "
f'"{connection.name}", but it does not have one open!'
)
logger.debug(f'On {connection.name}: ROLLBACK')
fire_event(Rollback(conn_name=connection.name))
cls._rollback_handle(connection)
connection.transaction_open = False
@@ -256,7 +258,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
return connection
if connection.transaction_open and connection.handle:
logger.debug('On {}: ROLLBACK'.format(connection.name))
fire_event(Rollback(conn_name=connection.name))
cls._rollback_handle(connection)
connection.transaction_open = False
@@ -279,16 +281,16 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
@abc.abstractmethod
def execute(
self, sql: str, auto_begin: bool = False, fetch: bool = False
) -> Tuple[Union[str, AdapterResponse], agate.Table]:
) -> Tuple[AdapterResponse, agate.Table]:
"""Execute the given SQL.
:param str sql: The sql to execute.
:param bool auto_begin: If set, and dbt is not currently inside a
transaction, automatically begin one.
:param bool fetch: If set, fetch results.
:return: A tuple of the status and the results (empty if fetch=False).
:rtype: Tuple[Union[str, AdapterResponse], agate.Table]
:return: A tuple of the query status and results (empty if fetch=False).
:rtype: Tuple[AdapterResponse, agate.Table]
"""
raise dbt.exceptions.NotImplementedException(
'`execute` is not implemented for this adapter!'
"`execute` is not implemented for this adapter!"
)

File diff suppressed because it is too large

View File

@@ -30,9 +30,11 @@ class _Available:
x.update(big_expensive_db_query())
return x
"""
def inner(func):
func._parse_replacement_ = parse_replacement
return self(func)
return inner
def deprecated(
@@ -57,13 +59,14 @@ class _Available:
The optional parse_replacement, if provided, will provide a parse-time
replacement for the actual method (see `available.parse`).
"""
def wrapper(func):
func_name = func.__name__
renamed_method(func_name, supported_name)
@wraps(func)
def inner(*args, **kwargs):
warn('adapter:{}'.format(func_name))
warn("adapter:{}".format(func_name))
return func(*args, **kwargs)
if parse_replacement:
@@ -71,6 +74,7 @@ class _Available:
else:
available_function = self
return available_function(inner)
return wrapper
def parse_none(self, func: Callable) -> Callable:
@@ -95,9 +99,7 @@ class AdapterMeta(abc.ABCMeta):
# I'm not sure there is any benefit to it after poking around a bit,
# but having it doesn't hurt on the python side (and omitting it could
# hurt for obscure metaclass reasons, for all I know)
cls = abc.ABCMeta.__new__( # type: ignore
mcls, name, bases, namespace, **kwargs
)
cls = abc.ABCMeta.__new__(mcls, name, bases, namespace, **kwargs) # type: ignore
# this is very much inspired by ABCMeta's own implementation
@@ -109,14 +111,14 @@ class AdapterMeta(abc.ABCMeta):
# collect base class data first
for base in bases:
available.update(getattr(base, '_available_', set()))
replacements.update(getattr(base, '_parse_replacements_', set()))
available.update(getattr(base, "_available_", set()))
replacements.update(getattr(base, "_parse_replacements_", set()))
# override with local data if it exists
for name, value in namespace.items():
if getattr(value, '_is_available_', False):
if getattr(value, "_is_available_", False):
available.add(name)
parse_replacement = getattr(value, '_parse_replacement_', None)
parse_replacement = getattr(value, "_parse_replacement_", None)
if parse_replacement is not None:
replacements[name] = parse_replacement

View File

@@ -8,11 +8,10 @@ from dbt.adapters.protocol import AdapterProtocol
def project_name_from_path(include_path: str) -> str:
# avoid an import cycle
from dbt.config.project import Project
partial = Project.partial_load(include_path)
if partial.project_name is None:
raise CompilationException(
f'Invalid project at {include_path}: name not set!'
)
raise CompilationException(f"Invalid project at {include_path}: name not set!")
return partial.project_name
@@ -23,12 +22,13 @@ class AdapterPlugin:
:param dependencies: A list of adapter names that this adapter depends
upon.
"""
def __init__(
self,
adapter: Type[AdapterProtocol],
credentials: Type[Credentials],
include_path: str,
dependencies: Optional[List[str]] = None
dependencies: Optional[List[str]] = None,
):
self.adapter: Type[AdapterProtocol] = adapter

View File

@@ -15,7 +15,7 @@ class NodeWrapper:
self._inner_node = node
def __getattr__(self, name):
return getattr(self._inner_node, name, '')
return getattr(self._inner_node, name, "")
class _QueryComment(local):
@@ -24,6 +24,7 @@ class _QueryComment(local):
- the current thread's query comment.
- a source_name indicating what set the current thread's query comment
"""
def __init__(self, initial):
self.query_comment: Optional[str] = initial
self.append = False
@@ -35,21 +36,19 @@ class _QueryComment(local):
if self.append:
# replace last ';' with '<comment>;'
sql = sql.rstrip()
if sql[-1] == ';':
if sql[-1] == ";":
sql = sql[:-1]
return '{}\n/* {} */;'.format(sql, self.query_comment.strip())
return "{}\n/* {} */;".format(sql, self.query_comment.strip())
return '{}\n/* {} */'.format(sql, self.query_comment.strip())
return "{}\n/* {} */".format(sql, self.query_comment.strip())
return '/* {} */\n{}'.format(self.query_comment.strip(), sql)
return "/* {} */\n{}".format(self.query_comment.strip(), sql)
def set(self, comment: Optional[str], append: bool):
if isinstance(comment, str) and '*/' in comment:
if isinstance(comment, str) and "*/" in comment:
# tell the user "no" so they don't hurt themselves by writing
# garbage
raise RuntimeException(
f'query comment contains illegal value "*/": {comment}'
)
raise RuntimeException(f'query comment contains illegal value "*/": {comment}')
self.query_comment = comment
self.append = append
@@ -63,15 +62,17 @@ class MacroQueryStringSetter:
self.config = config
comment_macro = self._get_comment_macro()
self.generator: QueryStringFunc = lambda name, model: ''
self.generator: QueryStringFunc = lambda name, model: ""
# if the comment value was None or the empty string, just skip it
if comment_macro:
assert isinstance(comment_macro, str)
macro = '\n'.join((
'{%- macro query_comment_macro(connection_name, node) -%}',
comment_macro,
'{% endmacro %}'
))
macro = "\n".join(
(
"{%- macro query_comment_macro(connection_name, node) -%}",
comment_macro,
"{% endmacro %}",
)
)
ctx = self._get_context()
self.generator = QueryStringGenerator(macro, ctx)
self.comment = _QueryComment(None)
@@ -87,7 +88,7 @@ class MacroQueryStringSetter:
return self.comment.add(sql)
def reset(self):
self.set('master', None)
self.set("master", None)
def set(self, name: str, node: Optional[CompileResultNode]):
wrapped: Optional[NodeWrapper] = None

View File

@@ -1,13 +1,16 @@
from collections.abc import Hashable
from dataclasses import dataclass
from typing import (
Optional, TypeVar, Any, Type, Dict, Union, Iterator, Tuple, Set
)
from typing import Optional, TypeVar, Any, Type, Dict, Union, Iterator, Tuple, Set
from dbt.contracts.graph.compiled import CompiledNode
from dbt.contracts.graph.parsed import ParsedSourceDefinition, ParsedNode
from dbt.contracts.relation import (
RelationType, ComponentName, HasQuoting, FakeAPIObject, Policy, Path
RelationType,
ComponentName,
HasQuoting,
FakeAPIObject,
Policy,
Path,
)
from dbt.exceptions import InternalException
from dbt.node_types import NodeType
@@ -16,7 +19,7 @@ from dbt.utils import filter_null_values, deep_merge, classproperty
import dbt.exceptions
Self = TypeVar('Self', bound='BaseRelation')
Self = TypeVar("Self", bound="BaseRelation")
@dataclass(frozen=True, eq=False, repr=False)
@@ -40,7 +43,7 @@ class BaseRelation(FakeAPIObject, Hashable):
if field.name == field_name:
return field
# this should be unreachable
raise ValueError(f'BaseRelation has no {field_name} field!')
raise ValueError(f"BaseRelation has no {field_name} field!")
def __eq__(self, other):
if not isinstance(other, self.__class__):
@@ -49,20 +52,18 @@ class BaseRelation(FakeAPIObject, Hashable):
@classmethod
def get_default_quote_policy(cls) -> Policy:
return cls._get_field_named('quote_policy').default
return cls._get_field_named("quote_policy").default
@classmethod
def get_default_include_policy(cls) -> Policy:
return cls._get_field_named('include_policy').default
return cls._get_field_named("include_policy").default
def get(self, key, default=None):
"""Override `.get` to return a metadata object so we don't break
dbt_utils.
"""
if key == 'metadata':
return {
'type': self.__class__.__name__
}
if key == "metadata":
return {"type": self.__class__.__name__}
return super().get(key, default)
def matches(
@@ -71,16 +72,19 @@ class BaseRelation(FakeAPIObject, Hashable):
schema: Optional[str] = None,
identifier: Optional[str] = None,
) -> bool:
search = filter_null_values({
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier
})
search = filter_null_values(
{
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier,
}
)
if not search:
# nothing was passed in
raise dbt.exceptions.RuntimeException(
"Tried to match relation, but no search path was passed!")
"Tried to match relation, but no search path was passed!"
)
exact_match = True
approximate_match = True
@@ -88,14 +92,13 @@ class BaseRelation(FakeAPIObject, Hashable):
for k, v in search.items():
if not self._is_exactish_match(k, v):
exact_match = False
if self.path.get_lowered_part(k) != v.lower():
approximate_match = False
if str(self.path.get_lowered_part(k)).strip(self.quote_character) != v.lower().strip(
self.quote_character
):
approximate_match = False # type: ignore[union-attr]
if approximate_match and not exact_match:
target = self.create(
database=database, schema=schema, identifier=identifier
)
target = self.create(database=database, schema=schema, identifier=identifier)
dbt.exceptions.approximate_relation_match(target, self)
return exact_match
@@ -109,11 +112,13 @@ class BaseRelation(FakeAPIObject, Hashable):
schema: Optional[bool] = None,
identifier: Optional[bool] = None,
) -> Self:
policy = filter_null_values({
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier
})
policy = filter_null_values(
{
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier,
}
)
new_quote_policy = self.quote_policy.replace_dict(policy)
return self.replace(quote_policy=new_quote_policy)
@@ -124,16 +129,18 @@ class BaseRelation(FakeAPIObject, Hashable):
schema: Optional[bool] = None,
identifier: Optional[bool] = None,
) -> Self:
policy = filter_null_values({
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier
})
policy = filter_null_values(
{
ComponentName.Database: database,
ComponentName.Schema: schema,
ComponentName.Identifier: identifier,
}
)
new_include_policy = self.include_policy.replace_dict(policy)
return self.replace(include_policy=new_include_policy)
def information_schema(self, view_name=None) -> 'InformationSchema':
def information_schema(self, view_name=None) -> "InformationSchema":
# some of our data comes from jinja, where things can be `Undefined`.
if not isinstance(view_name, str):
view_name = None
@@ -143,10 +150,10 @@ class BaseRelation(FakeAPIObject, Hashable):
info_schema = InformationSchema.from_relation(self, view_name)
return info_schema.incorporate(path={"schema": None})
def information_schema_only(self) -> 'InformationSchema':
def information_schema_only(self) -> "InformationSchema":
return self.information_schema()
def without_identifier(self) -> 'BaseRelation':
def without_identifier(self) -> "BaseRelation":
"""Return a form of this relation that only has the database and schema
set to included. To get the appropriately-quoted form the schema out of
the result (for use as part of a query), use `.render()`. To get the
@@ -156,9 +163,7 @@ class BaseRelation(FakeAPIObject, Hashable):
"""
return self.include(identifier=False).replace_path(identifier=None)
def _render_iterator(
self
) -> Iterator[Tuple[Optional[ComponentName], Optional[str]]]:
def _render_iterator(self) -> Iterator[Tuple[Optional[ComponentName], Optional[str]]]:
for key in ComponentName:
path_part: Optional[str] = None
@@ -170,27 +175,22 @@ class BaseRelation(FakeAPIObject, Hashable):
def render(self) -> str:
# if there is nothing set, this will return the empty string.
return '.'.join(
part for _, part in self._render_iterator()
if part is not None
)
return ".".join(part for _, part in self._render_iterator() if part is not None)
def quoted(self, identifier):
return '{quote_char}{identifier}{quote_char}'.format(
return "{quote_char}{identifier}{quote_char}".format(
quote_char=self.quote_character,
identifier=identifier,
)
@classmethod
def create_from_source(
cls: Type[Self], source: ParsedSourceDefinition, **kwargs: Any
) -> Self:
def create_from_source(cls: Type[Self], source: ParsedSourceDefinition, **kwargs: Any) -> Self:
source_quoting = source.quoting.to_dict(omit_none=True)
source_quoting.pop('column', None)
source_quoting.pop("column", None)
quote_policy = deep_merge(
cls.get_default_quote_policy().to_dict(omit_none=True),
source_quoting,
kwargs.get('quote_policy', {}),
kwargs.get("quote_policy", {}),
)
return cls.create(
@@ -198,12 +198,12 @@ class BaseRelation(FakeAPIObject, Hashable):
schema=source.schema,
identifier=source.identifier,
quote_policy=quote_policy,
**kwargs
**kwargs,
)
@staticmethod
def add_ephemeral_prefix(name: str):
return f'__dbt__cte__{name}'
return f"__dbt__cte__{name}"
@classmethod
def create_ephemeral_from_node(
@@ -236,7 +236,8 @@ class BaseRelation(FakeAPIObject, Hashable):
schema=node.schema,
identifier=node.alias,
quote_policy=quote_policy,
**kwargs)
**kwargs,
)
@classmethod
def create_from(
@@ -248,15 +249,14 @@ class BaseRelation(FakeAPIObject, Hashable):
if node.resource_type == NodeType.Source:
if not isinstance(node, ParsedSourceDefinition):
raise InternalException(
'type mismatch, expected ParsedSourceDefinition but got {}'
.format(type(node))
"type mismatch, expected ParsedSourceDefinition but got {}".format(type(node))
)
return cls.create_from_source(node, **kwargs)
else:
if not isinstance(node, (ParsedNode, CompiledNode)):
raise InternalException(
'type mismatch, expected ParsedNode or CompiledNode but '
'got {}'.format(type(node))
"type mismatch, expected ParsedNode or CompiledNode but "
"got {}".format(type(node))
)
return cls.create_from_node(config, node, **kwargs)
@@ -269,14 +269,16 @@ class BaseRelation(FakeAPIObject, Hashable):
type: Optional[RelationType] = None,
**kwargs,
) -> Self:
kwargs.update({
'path': {
'database': database,
'schema': schema,
'identifier': identifier,
},
'type': type,
})
kwargs.update(
{
"path": {
"database": database,
"schema": schema,
"identifier": identifier,
},
"type": type,
}
)
return cls.from_dict(kwargs)
def __repr__(self) -> str:
@@ -342,7 +344,7 @@ class BaseRelation(FakeAPIObject, Hashable):
return RelationType
Info = TypeVar('Info', bound='InformationSchema')
Info = TypeVar("Info", bound="InformationSchema")
@dataclass(frozen=True, eq=False, repr=False)
@@ -352,17 +354,15 @@ class InformationSchema(BaseRelation):
def __post_init__(self):
if not isinstance(self.information_schema_view, (type(None), str)):
raise dbt.exceptions.CompilationException(
'Got an invalid name: {}'.format(self.information_schema_view)
"Got an invalid name: {}".format(self.information_schema_view)
)
@classmethod
def get_path(
cls, relation: BaseRelation, information_schema_view: Optional[str]
) -> Path:
def get_path(cls, relation: BaseRelation, information_schema_view: Optional[str]) -> Path:
return Path(
database=relation.database,
schema=relation.schema,
identifier='INFORMATION_SCHEMA',
identifier="INFORMATION_SCHEMA",
)
@classmethod
@@ -393,9 +393,7 @@ class InformationSchema(BaseRelation):
relation: BaseRelation,
information_schema_view: Optional[str],
) -> Info:
include_policy = cls.get_include_policy(
relation, information_schema_view
)
include_policy = cls.get_include_policy(relation, information_schema_view)
quote_policy = cls.get_quote_policy(relation, information_schema_view)
path = cls.get_path(relation, information_schema_view)
return cls(
@@ -417,6 +415,7 @@ class SchemaSearchMap(Dict[InformationSchema, Set[Optional[str]]]):
search for what schemas. The schema values are all lowercased to avoid
duplication.
"""
def add(self, relation: BaseRelation):
key = relation.information_schema_only()
if key not in self:
@@ -426,9 +425,7 @@ class SchemaSearchMap(Dict[InformationSchema, Set[Optional[str]]]):
schema = relation.schema.lower()
self[key].add(schema)
def search(
self
) -> Iterator[Tuple[InformationSchema, Optional[str]]]:
def search(self) -> Iterator[Tuple[InformationSchema, Optional[str]]]:
for information_schema_name, schemas in self.items():
for schema in schemas:
yield information_schema_name, schema
@@ -443,14 +440,13 @@ class SchemaSearchMap(Dict[InformationSchema, Set[Optional[str]]]):
dbt.exceptions.raise_compiler_error(str(seen))
for information_schema_name, schema in self.search():
path = {
'database': information_schema_name.database,
'schema': schema
}
new.add(information_schema_name.incorporate(
path=path,
quote_policy={'database': False},
include_policy={'database': False},
))
path = {"database": information_schema_name.database, "schema": schema}
new.add(
information_schema_name.incorporate(
path=path,
quote_policy={"database": False},
include_policy={"database": False},
)
)
return new

View File

@@ -1,23 +1,27 @@
from collections import namedtuple
from copy import deepcopy
from typing import List, Iterable, Optional, Dict, Set, Tuple, Any
import threading
from copy import deepcopy
from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
from dbt.logger import CACHE_LOGGER as logger
from dbt.utils import lowercase
from dbt.adapters.reference_keys import _make_key, _ReferenceKey
import dbt.exceptions
_ReferenceKey = namedtuple('_ReferenceKey', 'database schema identifier')
def _make_key(relation) -> _ReferenceKey:
"""Make _ReferenceKeys with lowercase values for the cache so we don't have
to keep track of quoting
"""
# databases and schemas can both be None
return _ReferenceKey(lowercase(relation.database),
lowercase(relation.schema),
lowercase(relation.identifier))
from dbt.events.functions import fire_event
from dbt.events.types import (
AddLink,
AddRelation,
DropCascade,
DropMissingRelation,
DropRelation,
DumpAfterAddGraph,
DumpAfterRenameSchema,
DumpBeforeAddGraph,
DumpBeforeRenameSchema,
RenameSchema,
TemporaryRelation,
UncachedRelation,
UpdateReference,
)
from dbt.utils import lowercase
from dbt.helper_types import Lazy
def dot_separated(key: _ReferenceKey) -> str:
@@ -25,7 +29,7 @@ def dot_separated(key: _ReferenceKey) -> str:
:param _ReferenceKey key: The key to stringify.
"""
return '.'.join(map(str, key))
return ".".join(map(str, key))
class _CachedRelation:
@@ -37,14 +41,15 @@ class _CachedRelation:
that refer to this relation.
:attr BaseRelation inner: The underlying dbt relation.
"""
def __init__(self, inner):
self.referenced_by = {}
self.inner = inner
def __str__(self) -> str:
return (
'_CachedRelation(database={}, schema={}, identifier={}, inner={})'
).format(self.database, self.schema, self.identifier, self.inner)
return ("_CachedRelation(database={}, schema={}, identifier={}, inner={})").format(
self.database, self.schema, self.identifier, self.inner
)
@property
def database(self) -> Optional[str]:
@@ -78,7 +83,7 @@ class _CachedRelation:
"""
return _make_key(self)
def add_reference(self, referrer: '_CachedRelation'):
def add_reference(self, referrer: "_CachedRelation"):
"""Add a reference from referrer to self, indicating that if this node
were drop...cascaded, the referrer would be dropped as well.
@@ -122,9 +127,9 @@ class _CachedRelation:
# table_name is ever anything but the identifier (via .create())
self.inner = self.inner.incorporate(
path={
'database': new_relation.inner.database,
'schema': new_relation.inner.schema,
'identifier': new_relation.inner.identifier
"database": new_relation.inner.database,
"schema": new_relation.inner.schema,
"identifier": new_relation.inner.identifier,
},
)
@@ -140,8 +145,9 @@ class _CachedRelation:
"""
if new_key in self.referenced_by:
dbt.exceptions.raise_cache_inconsistent(
'in rename of "{}" -> "{}", new name is in the cache already'
.format(old_key, new_key)
'in rename of "{}" -> "{}", new name is in the cache already'.format(
old_key, new_key
)
)
if old_key not in self.referenced_by:
@@ -157,12 +163,6 @@ class _CachedRelation:
return [dot_separated(r) for r in self.referenced_by]
def lazy_log(msg, func):
if logger.disabled:
return
logger.debug(msg.format(func()))
class RelationsCache:
"""A cache of the relations known to dbt. Keeps track of relationships
declared between tables and handles renames/drops as a real database would.
@@ -172,13 +172,16 @@ class RelationsCache:
The adapters also hold this lock while filling the cache.
:attr Set[str] schemas: The set of known/cached schemas, all lowercased.
"""
def __init__(self) -> None:
self.relations: Dict[_ReferenceKey, _CachedRelation] = {}
self.lock = threading.RLock()
self.schemas: Set[Tuple[Optional[str], Optional[str]]] = set()
def add_schema(
self, database: Optional[str], schema: Optional[str],
self,
database: Optional[str],
schema: Optional[str],
) -> None:
"""Add a schema to the set of known schemas (case-insensitive)
@@ -188,7 +191,9 @@ class RelationsCache:
self.schemas.add((lowercase(database), lowercase(schema)))
def drop_schema(
self, database: Optional[str], schema: Optional[str],
self,
database: Optional[str],
schema: Optional[str],
) -> None:
"""Drop the given schema and remove it from the set of known schemas.
@@ -232,10 +237,7 @@ class RelationsCache:
# self.relations or any cache entry's referenced_by during iteration
# it's a runtime error!
with self.lock:
return {
dot_separated(k): v.dump_graph_entry()
for k, v in self.relations.items()
}
return {dot_separated(k): v.dump_graph_entry() for k, v in self.relations.items()}
def _setdefault(self, relation: _CachedRelation):
"""Add a relation to the cache, or return it if it already exists.
@@ -263,21 +265,20 @@ class RelationsCache:
return
if referenced is None:
dbt.exceptions.raise_cache_inconsistent(
'in add_link, referenced link key {} not in cache!'
.format(referenced_key)
"in add_link, referenced link key {} not in cache!".format(referenced_key)
)
dependent = self.relations.get(dependent_key)
if dependent is None:
dbt.exceptions.raise_cache_inconsistent(
'in add_link, dependent link key {} not in cache!'
.format(dependent_key)
"in add_link, dependent link key {} not in cache!".format(dependent_key)
)
assert dependent is not None # we just raised!
referenced.add_reference(dependent)
# TODO: Is this dead code? I can't seem to find it grepping the codebase.
def add_link(self, referenced, dependent):
"""Add a link between two relations to the database. If either relation
does not exist, it will be added as an "external" relation.
@@ -293,33 +294,22 @@ class RelationsCache:
:raises InternalError: If either entry does not exist.
"""
ref_key = _make_key(referenced)
dep_key = _make_key(dependent)
if (ref_key.database, ref_key.schema) not in self:
# if we have not cached the referenced schema at all, we must be
# referring to a table outside our control. There's no need to make
# a link - we will never drop the referenced relation during a run.
logger.debug(
'{dep!s} references {ref!s} but {ref.database}.{ref.schema} '
'is not in the cache, skipping assumed external relation'
.format(dep=dependent, ref=ref_key)
)
fire_event(UncachedRelation(dep_key=dep_key, ref_key=ref_key))
return
if ref_key not in self.relations:
# Insert a dummy "external" relation.
referenced = referenced.replace(
type=referenced.External
)
referenced = referenced.replace(type=referenced.External)
self.add(referenced)
dep_key = _make_key(dependent)
if dep_key not in self.relations:
# Insert a dummy "external" relation.
dependent = dependent.replace(
type=referenced.External
)
dependent = dependent.replace(type=referenced.External)
self.add(dependent)
logger.debug(
'adding link, {!s} references {!s}'.format(dep_key, ref_key)
)
fire_event(AddLink(dep_key=dep_key, ref_key=ref_key))
with self.lock:
self._add_link(ref_key, dep_key)
@@ -330,14 +320,12 @@ class RelationsCache:
:param BaseRelation relation: The underlying relation.
"""
cached = _CachedRelation(relation)
logger.debug('Adding relation: {!s}'.format(cached))
lazy_log('before adding: {!s}', self.dump_graph)
fire_event(AddRelation(relation=_make_key(cached)))
fire_event(DumpBeforeAddGraph(dump=Lazy.defer(lambda: self.dump_graph())))
with self.lock:
self._setdefault(cached)
lazy_log('after adding: {!s}', self.dump_graph)
fire_event(DumpAfterAddGraph(dump=Lazy.defer(lambda: self.dump_graph())))
def _remove_refs(self, keys):
"""Removes all references to all entries in keys. This does not
@@ -352,20 +340,17 @@ class RelationsCache:
for cached in self.relations.values():
cached.release_references(keys)
def _drop_cascade_relation(self, dropped):
def _drop_cascade_relation(self, dropped_key):
"""Drop the given relation and cascade it appropriately to all
dependent relations.
:param _CachedRelation dropped: An existing _CachedRelation to drop.
"""
if dropped not in self.relations:
logger.debug('dropped a nonexistent relationship: {!s}'
.format(dropped))
if dropped_key not in self.relations:
fire_event(DropMissingRelation(relation=dropped_key))
return
consequences = self.relations[dropped].collect_consequences()
logger.debug(
'drop {} is cascading to {}'.format(dropped, consequences)
)
consequences = self.relations[dropped_key].collect_consequences()
fire_event(DropCascade(dropped=dropped_key, consequences=consequences))
self._remove_refs(consequences)
def drop(self, relation):
@@ -379,10 +364,10 @@ class RelationsCache:
:param str schema: The schema of the relation to drop.
:param str identifier: The identifier of the relation to drop.
"""
dropped = _make_key(relation)
logger.debug('Dropping relation: {!s}'.format(dropped))
dropped_key = _make_key(relation)
fire_event(DropRelation(dropped=dropped_key))
with self.lock:
self._drop_cascade_relation(dropped)
self._drop_cascade_relation(dropped_key)
def _rename_relation(self, old_key, new_relation):
"""Rename a relation named old_key to new_key, updating references.
@@ -403,9 +388,8 @@ class RelationsCache:
# update all the relations that refer to it
for cached in self.relations.values():
if cached.is_referenced_by(old_key):
logger.debug(
'updated reference from {0} -> {2} to {1} -> {2}'
.format(old_key, new_key, cached.key())
fire_event(
UpdateReference(old_key=old_key, new_key=new_key, cached_key=cached.key())
)
cached.rename_key(old_key, new_key)
@@ -430,15 +414,13 @@ class RelationsCache:
"""
if new_key in self.relations:
dbt.exceptions.raise_cache_inconsistent(
'in rename, new key {} already in cache: {}'
.format(new_key, list(self.relations.keys()))
"in rename, new key {} already in cache: {}".format(
new_key, list(self.relations.keys())
)
)
if old_key not in self.relations:
logger.debug(
'old key {} not found in self.relations, assuming temporary'
.format(old_key)
)
fire_event(TemporaryRelation(key=old_key))
return False
return True
@@ -456,11 +438,9 @@ class RelationsCache:
"""
old_key = _make_key(old)
new_key = _make_key(new)
logger.debug('Renaming relation {!s} to {!s}'.format(
old_key, new_key
))
fire_event(RenameSchema(old_key=old_key, new_key=new_key))
lazy_log('before rename: {!s}', self.dump_graph)
fire_event(DumpBeforeRenameSchema(dump=Lazy.defer(lambda: self.dump_graph())))
with self.lock:
if self._check_rename_constraints(old_key, new_key):
@@ -468,11 +448,9 @@ class RelationsCache:
else:
self._setdefault(_CachedRelation(new))
lazy_log('after rename: {!s}', self.dump_graph)
fire_event(DumpAfterRenameSchema(dump=Lazy.defer(lambda: self.dump_graph())))
def get_relations(
self, database: Optional[str], schema: Optional[str]
) -> List[Any]:
def get_relations(self, database: Optional[str], schema: Optional[str]) -> List[Any]:
"""Case-insensitively yield all relations matching the given schema.
:param str schema: The case-insensitive schema name to list from.
@@ -483,14 +461,14 @@ class RelationsCache:
schema = lowercase(schema)
with self.lock:
results = [
r.inner for r in self.relations.values()
if (lowercase(r.schema) == schema and
lowercase(r.database) == database)
r.inner
for r in self.relations.values()
if (lowercase(r.schema) == schema and lowercase(r.database) == database)
]
if None in results:
dbt.exceptions.raise_cache_inconsistent(
'in get_relations, a None relation was found in the cache!'
"in get_relations, a None relation was found in the cache!"
)
return results

View File

@@ -8,10 +8,9 @@ from dbt.include.global_project import (
PACKAGE_PATH as GLOBAL_PROJECT_PATH,
PROJECT_NAME as GLOBAL_PROJECT_NAME,
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import AdapterImportError, PluginLoadError
from dbt.contracts.connection import Credentials, AdapterRequiredConfig
from dbt.adapters.protocol import (
AdapterProtocol,
AdapterConfig,
@@ -50,9 +49,7 @@ class AdapterContainer:
adapter = self.get_adapter_class_by_name(name)
return adapter.Relation
def get_config_class_by_name(
self, name: str
) -> Type[AdapterConfig]:
def get_config_class_by_name(self, name: str) -> Type[AdapterConfig]:
adapter = self.get_adapter_class_by_name(name)
return adapter.AdapterSpecificConfigs
@@ -62,24 +59,25 @@ class AdapterContainer:
# singletons
try:
# mypy doesn't think modules have any attributes.
mod: Any = import_module('.' + name, 'dbt.adapters')
mod: Any = import_module("." + name, "dbt.adapters")
except ModuleNotFoundError as exc:
# if we failed to import the target module in particular, inform
# the user about it via a runtime error
if exc.name == 'dbt.adapters.' + name:
raise RuntimeException(f'Could not find adapter type {name}!')
logger.info(f'Error importing adapter: {exc}')
if exc.name == "dbt.adapters." + name:
fire_event(AdapterImportError(exc=exc))
raise RuntimeException(f"Could not find adapter type {name}!")
# otherwise, the error had to have come from some underlying
# library. Log the stack trace.
logger.debug('', exc_info=True)
fire_event(PluginLoadError())
raise
plugin: AdapterPlugin = mod.Plugin
plugin_type = plugin.adapter.type()
if plugin_type != name:
raise RuntimeException(
f'Expected to find adapter with type named {name}, got '
f'adapter with type {plugin_type}'
f"Expected to find adapter with type named {name}, got "
f"adapter with type {plugin_type}"
)
with self.lock:
@@ -109,8 +107,7 @@ class AdapterContainer:
return self.adapters[adapter_name]
def reset_adapters(self):
"""Clear the adapters. This is useful for tests, which change configs.
"""
"""Clear the adapters. This is useful for tests, which change configs."""
with self.lock:
for adapter in self.adapters.values():
adapter.cleanup_connections()
@@ -140,9 +137,7 @@ class AdapterContainer:
try:
plugin = self.plugins[plugin_name]
except KeyError:
raise InternalException(
f'No plugin found for {plugin_name}'
) from None
raise InternalException(f"No plugin found for {plugin_name}") from None
plugins.append(plugin)
seen.add(plugin_name)
if plugin.dependencies is None:
@@ -153,9 +148,7 @@ class AdapterContainer:
return plugins
def get_adapter_package_names(self, name: Optional[str]) -> List[str]:
package_names: List[str] = [
p.project_name for p in self.get_adapter_plugins(name)
]
package_names: List[str] = [p.project_name for p in self.get_adapter_plugins(name)]
package_names.append(GLOBAL_PROJECT_NAME)
return package_names
@@ -165,9 +158,7 @@ class AdapterContainer:
try:
path = self.packages[package_name]
except KeyError:
raise InternalException(
f'No internal package listing found for {package_name}'
)
raise InternalException(f"No internal package listing found for {package_name}")
paths.append(path)
return paths
@@ -186,9 +177,12 @@ def get_adapter(config: AdapterRequiredConfig):
return FACTORY.lookup_adapter(config.credentials.type)
def get_adapter_by_type(adapter_type):
return FACTORY.lookup_adapter(adapter_type)
def reset_adapters():
"""Clear the adapters. This is useful for tests, which change configs.
"""
"""Clear the adapters. This is useful for tests, which change configs."""
FACTORY.reset_adapters()
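
The import logic above distinguishes "the adapter package itself is missing" from "something inside the adapter failed to import" by comparing ModuleNotFoundError.name against the module it asked for, firing AdapterImportError or PluginLoadError accordingly. A rough standalone sketch of that check; the myapp.adapters prefix and the use of the stdlib email package as a stand-in namespace are illustrative assumptions.

from importlib import import_module


def load_plugin(name: str, prefix: str = "myapp.adapters"):
    try:
        return import_module(f"{prefix}.{name}")
    except ModuleNotFoundError as exc:
        # The requested adapter module itself is missing: tell the user which one.
        if exc.name == f"{prefix}.{name}":
            raise RuntimeError(f"Could not find adapter type {name}!") from exc
        # Otherwise a dependency inside the adapter failed to import; re-raise untouched.
        raise


try:
    load_plugin("nonexistent", prefix="email")  # stdlib package used as the namespace
except RuntimeError as err:
    print(err)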

View File

@@ -1,18 +1,24 @@
from dataclasses import dataclass
from typing import (
Type, Hashable, Optional, ContextManager, List, Generic, TypeVar, ClassVar,
Tuple, Union, Dict, Any
Type,
Hashable,
Optional,
ContextManager,
List,
Generic,
TypeVar,
ClassVar,
Tuple,
Union,
Dict,
Any,
)
from typing_extensions import Protocol
import agate
from dbt.contracts.connection import (
Connection, AdapterRequiredConfig, AdapterResponse
)
from dbt.contracts.graph.compiled import (
CompiledNode, ManifestNode, NonSourceCompiledNode
)
from dbt.contracts.connection import Connection, AdapterRequiredConfig, AdapterResponse
from dbt.contracts.graph.compiled import CompiledNode, ManifestNode, NonSourceCompiledNode
from dbt.contracts.graph.parsed import ParsedNode, ParsedSourceDefinition
from dbt.contracts.graph.model_config import BaseConfig
from dbt.contracts.graph.manifest import Manifest
@@ -34,7 +40,7 @@ class ColumnProtocol(Protocol):
pass
Self = TypeVar('Self', bound='RelationProtocol')
Self = TypeVar("Self", bound="RelationProtocol")
class RelationProtocol(Protocol):
@@ -64,22 +70,15 @@ class CompilerProtocol(Protocol):
...
AdapterConfig_T = TypeVar(
'AdapterConfig_T', bound=AdapterConfig
)
ConnectionManager_T = TypeVar(
'ConnectionManager_T', bound=ConnectionManagerProtocol
)
Relation_T = TypeVar(
'Relation_T', bound=RelationProtocol
)
Column_T = TypeVar(
'Column_T', bound=ColumnProtocol
)
Compiler_T = TypeVar('Compiler_T', bound=CompilerProtocol)
AdapterConfig_T = TypeVar("AdapterConfig_T", bound=AdapterConfig)
ConnectionManager_T = TypeVar("ConnectionManager_T", bound=ConnectionManagerProtocol)
Relation_T = TypeVar("Relation_T", bound=RelationProtocol)
Column_T = TypeVar("Column_T", bound=ColumnProtocol)
Compiler_T = TypeVar("Compiler_T", bound=CompilerProtocol)
class AdapterProtocol(
# TODO CT-211
class AdapterProtocol( # type: ignore[misc]
Protocol,
Generic[
AdapterConfig_T,
@@ -87,7 +86,7 @@ class AdapterProtocol(
Relation_T,
Column_T,
Compiler_T,
]
],
):
AdapterSpecificConfigs: ClassVar[Type[AdapterConfig_T]]
Column: ClassVar[Type[Column_T]]
@@ -156,7 +155,7 @@ class AdapterProtocol(
def execute(
self, sql: str, auto_begin: bool = False, fetch: bool = False
) -> Tuple[Union[str, AdapterResponse], agate.Table]:
) -> Tuple[AdapterResponse, agate.Table]:
...
def get_compiler(self) -> Compiler_T:

View File

@@ -0,0 +1,24 @@
# this module exists to resolve circular imports with the events module
from collections import namedtuple
from typing import Optional
_ReferenceKey = namedtuple("_ReferenceKey", "database schema identifier")
def lowercase(value: Optional[str]) -> Optional[str]:
if value is None:
return None
else:
return value.lower()
def _make_key(relation) -> _ReferenceKey:
"""Make _ReferenceKeys with lowercase values for the cache so we don't have
to keep track of quoting
"""
# databases and schemas can both be None
return _ReferenceKey(
lowercase(relation.database), lowercase(relation.schema), lowercase(relation.identifier)
)
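
The new reference_keys module above normalizes cache keys by lowercasing database, schema, and identifier, so cache lookups ignore case and quoting. A quick usage sketch; the SimpleRelation stand-in below is illustrative, and any object with those three attributes would do.

from collections import namedtuple
from typing import Optional

_ReferenceKey = namedtuple("_ReferenceKey", "database schema identifier")
SimpleRelation = namedtuple("SimpleRelation", "database schema identifier")


def lowercase(value: Optional[str]) -> Optional[str]:
    return None if value is None else value.lower()


def _make_key(relation) -> _ReferenceKey:
    # databases and schemas can both be None
    return _ReferenceKey(
        lowercase(relation.database), lowercase(relation.schema), lowercase(relation.identifier)
    )


rel = SimpleRelation(database="Analytics", schema="Marts", identifier="Orders")
assert _make_key(rel) == _ReferenceKey("analytics", "marts", "orders")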

View File

@@ -1,16 +1,15 @@
import abc
import time
from typing import List, Optional, Tuple, Any, Iterable, Dict, Union
from typing import List, Optional, Tuple, Any, Iterable, Dict
import agate
import dbt.clients.agate_helper
import dbt.exceptions
from dbt.adapters.base import BaseConnectionManager
from dbt.contracts.connection import (
Connection, ConnectionState, AdapterResponse
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.contracts.connection import Connection, ConnectionState, AdapterResponse
from dbt.events.functions import fire_event
from dbt.events.types import ConnectionUsed, SQLQuery, SQLCommit, SQLQueryStatus
class SQLConnectionManager(BaseConnectionManager):
@@ -22,11 +21,12 @@ class SQLConnectionManager(BaseConnectionManager):
- get_response
- open
"""
@abc.abstractmethod
def cancel(self, connection: Connection):
"""Cancel the given connection."""
raise dbt.exceptions.NotImplementedException(
'`cancel` is not implemented for this adapter!'
"`cancel` is not implemented for this adapter!"
)
def cancel_open(self) -> List[str]:
@@ -39,10 +39,7 @@ class SQLConnectionManager(BaseConnectionManager):
# if the connection failed, the handle will be None so we have
# nothing to cancel.
if (
connection.handle is not None and
connection.state == ConnectionState.OPEN
):
if connection.handle is not None and connection.state == ConnectionState.OPEN:
self.cancel(connection)
if connection.name is not None:
names.append(connection.name)
@@ -53,59 +50,57 @@ class SQLConnectionManager(BaseConnectionManager):
sql: str,
auto_begin: bool = True,
bindings: Optional[Any] = None,
abridge_sql_log: bool = False
abridge_sql_log: bool = False,
) -> Tuple[Connection, Any]:
connection = self.get_thread_connection()
if auto_begin and connection.transaction_open is False:
self.begin()
logger.debug('Using {} connection "{}".'
.format(self.TYPE, connection.name))
fire_event(ConnectionUsed(conn_type=self.TYPE, conn_name=connection.name))
with self.exception_handler(sql):
if abridge_sql_log:
log_sql = '{}...'.format(sql[:512])
log_sql = "{}...".format(sql[:512])
else:
log_sql = sql
logger.debug(
'On {connection_name}: {sql}',
connection_name=connection.name,
sql=log_sql,
)
fire_event(SQLQuery(conn_name=connection.name, sql=log_sql))
pre = time.time()
cursor = connection.handle.cursor()
cursor.execute(sql, bindings)
logger.debug(
"SQL status: {status} in {elapsed:0.2f} seconds",
status=self.get_response(cursor),
elapsed=(time.time() - pre)
fire_event(
SQLQueryStatus(
status=str(self.get_response(cursor)), elapsed=round((time.time() - pre), 2)
)
)
return connection, cursor
@abc.abstractclassmethod
def get_response(cls, cursor: Any) -> Union[AdapterResponse, str]:
def get_response(cls, cursor: Any) -> AdapterResponse:
"""Get the status of the cursor."""
raise dbt.exceptions.NotImplementedException(
'`get_response` is not implemented for this adapter!'
"`get_response` is not implemented for this adapter!"
)
@classmethod
def process_results(
cls,
column_names: Iterable[str],
rows: Iterable[Any]
cls, column_names: Iterable[str], rows: Iterable[Any]
) -> List[Dict[str, Any]]:
unique_col_names = dict()
for idx in range(len(column_names)):
col_name = column_names[idx]
# TODO CT-211
unique_col_names = dict() # type: ignore[var-annotated]
# TODO CT-211
for idx in range(len(column_names)): # type: ignore[arg-type]
# TODO CT-211
col_name = column_names[idx] # type: ignore[index]
if col_name in unique_col_names:
unique_col_names[col_name] += 1
column_names[idx] = f'{col_name}_{unique_col_names[col_name]}'
# TODO CT-211
column_names[idx] = f"{col_name}_{unique_col_names[col_name]}" # type: ignore[index] # noqa
else:
unique_col_names[column_names[idx]] = 1
# TODO CT-211
unique_col_names[column_names[idx]] = 1 # type: ignore[index]
return [dict(zip(column_names, row)) for row in rows]
@classmethod
@@ -118,14 +113,11 @@ class SQLConnectionManager(BaseConnectionManager):
rows = cursor.fetchall()
data = cls.process_results(column_names, rows)
return dbt.clients.agate_helper.table_from_data_flat(
data,
column_names
)
return dbt.clients.agate_helper.table_from_data_flat(data, column_names)
def execute(
self, sql: str, auto_begin: bool = False, fetch: bool = False
) -> Tuple[Union[AdapterResponse, str], agate.Table]:
) -> Tuple[AdapterResponse, agate.Table]:
sql = self._add_query_comment(sql)
_, cursor = self.add_query(sql, auto_begin)
response = self.get_response(cursor)
@@ -136,17 +128,18 @@ class SQLConnectionManager(BaseConnectionManager):
return response, table
def add_begin_query(self):
return self.add_query('BEGIN', auto_begin=False)
return self.add_query("BEGIN", auto_begin=False)
def add_commit_query(self):
return self.add_query('COMMIT', auto_begin=False)
return self.add_query("COMMIT", auto_begin=False)
def begin(self):
connection = self.get_thread_connection()
if connection.transaction_open is True:
raise dbt.exceptions.InternalException(
'Tried to begin a new transaction on connection "{}", but '
'it already had one open!'.format(connection.name))
"it already had one open!".format(connection.name)
)
self.add_begin_query()
@@ -158,9 +151,10 @@ class SQLConnectionManager(BaseConnectionManager):
if connection.transaction_open is False:
raise dbt.exceptions.InternalException(
'Tried to commit transaction on connection "{}", but '
'it does not have one open!'.format(connection.name))
"it does not have one open!".format(connection.name)
)
logger.debug('On {}: COMMIT'.format(connection.name))
fire_event(SQLCommit(conn_name=connection.name))
self.add_commit_query()
connection.transaction_open = False
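
process_results above renames duplicate cursor column names so that the final dict(zip(column_names, row)) does not silently drop values. A standalone sketch of the same dedup rule (the second occurrence of a name becomes name_2, the third name_3, and so on); the sample columns are made up.

from typing import Any, Dict, Iterable, List


def process_results(column_names: List[str], rows: Iterable[Any]) -> List[Dict[str, Any]]:
    seen: Dict[str, int] = {}
    for idx, col_name in enumerate(column_names):
        if col_name in seen:
            seen[col_name] += 1
            # Rename the repeat so the zip below keeps every value.
            column_names[idx] = f"{col_name}_{seen[col_name]}"
        else:
            seen[col_name] = 1
    return [dict(zip(column_names, row)) for row in rows]


print(process_results(["id", "name", "id"], [(1, "a", 2)]))
# [{'id': 1, 'name': 'a', 'id_2': 2}]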

View File

@@ -5,26 +5,29 @@ import dbt.clients.agate_helper
from dbt.contracts.connection import Connection
import dbt.exceptions
from dbt.adapters.base import BaseAdapter, available
from dbt.adapters.cache import _make_key
from dbt.adapters.sql import SQLConnectionManager
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import ColTypeChange, SchemaCreation, SchemaDrop
from dbt.adapters.base.relation import BaseRelation
LIST_RELATIONS_MACRO_NAME = 'list_relations_without_caching'
GET_COLUMNS_IN_RELATION_MACRO_NAME = 'get_columns_in_relation'
LIST_SCHEMAS_MACRO_NAME = 'list_schemas'
CHECK_SCHEMA_EXISTS_MACRO_NAME = 'check_schema_exists'
CREATE_SCHEMA_MACRO_NAME = 'create_schema'
DROP_SCHEMA_MACRO_NAME = 'drop_schema'
RENAME_RELATION_MACRO_NAME = 'rename_relation'
TRUNCATE_RELATION_MACRO_NAME = 'truncate_relation'
DROP_RELATION_MACRO_NAME = 'drop_relation'
ALTER_COLUMN_TYPE_MACRO_NAME = 'alter_column_type'
LIST_RELATIONS_MACRO_NAME = "list_relations_without_caching"
GET_COLUMNS_IN_RELATION_MACRO_NAME = "get_columns_in_relation"
LIST_SCHEMAS_MACRO_NAME = "list_schemas"
CHECK_SCHEMA_EXISTS_MACRO_NAME = "check_schema_exists"
CREATE_SCHEMA_MACRO_NAME = "create_schema"
DROP_SCHEMA_MACRO_NAME = "drop_schema"
RENAME_RELATION_MACRO_NAME = "rename_relation"
TRUNCATE_RELATION_MACRO_NAME = "truncate_relation"
DROP_RELATION_MACRO_NAME = "drop_relation"
ALTER_COLUMN_TYPE_MACRO_NAME = "alter_column_type"
class SQLAdapter(BaseAdapter):
"""The default adapter with the common agate conversions and some SQL
methods implemented. This adapter has a different much shorter list of
methods was implemented. This adapter has a different much shorter list of
methods to implement, but some more macros that must be implemented.
To implement a macro, implement "${adapter_type}__${macro_name}". in the
@@ -60,30 +63,24 @@ class SQLAdapter(BaseAdapter):
:param abridge_sql_log: If set, limit the raw sql logged to 512
characters
"""
return self.connections.add_query(sql, auto_begin, bindings,
abridge_sql_log)
return self.connections.add_query(sql, auto_begin, bindings, abridge_sql_log)
@classmethod
def convert_text_type(cls, agate_table: agate.Table, col_idx: int) -> str:
return "text"
@classmethod
def convert_number_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
decimals = agate_table.aggregate(agate.MaxPrecision(col_idx))
def convert_number_type(cls, agate_table: agate.Table, col_idx: int) -> str:
# TODO CT-211
decimals = agate_table.aggregate(agate.MaxPrecision(col_idx)) # type: ignore[attr-defined]
return "float8" if decimals else "integer"
@classmethod
def convert_boolean_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
def convert_boolean_type(cls, agate_table: agate.Table, col_idx: int) -> str:
return "boolean"
@classmethod
def convert_datetime_type(
cls, agate_table: agate.Table, col_idx: int
) -> str:
def convert_datetime_type(cls, agate_table: agate.Table, col_idx: int) -> str:
return "timestamp without time zone"
@classmethod
@@ -99,31 +96,27 @@ class SQLAdapter(BaseAdapter):
return True
def expand_column_types(self, goal, current):
reference_columns = {
c.name: c for c in
self.get_columns_in_relation(goal)
}
reference_columns = {c.name: c for c in self.get_columns_in_relation(goal)}
target_columns = {
c.name: c for c
in self.get_columns_in_relation(current)
}
target_columns = {c.name: c for c in self.get_columns_in_relation(current)}
for column_name, reference_column in reference_columns.items():
target_column = target_columns.get(column_name)
if target_column is not None and \
target_column.can_expand_to(reference_column):
if target_column is not None and target_column.can_expand_to(reference_column):
col_string_size = reference_column.string_size()
new_type = self.Column.string_type(col_string_size)
logger.debug("Changing col type from {} to {} in table {}",
target_column.data_type, new_type, current)
fire_event(
ColTypeChange(
orig_type=target_column.data_type,
new_type=new_type,
table=_make_key(current),
)
)
self.alter_column_type(current, column_name, new_type)
def alter_column_type(
self, relation, column_name, new_column_type
) -> None:
def alter_column_type(self, relation, column_name, new_column_type) -> None:
"""
1. Create a new column (w/ temp name and correct type)
2. Copy data over to it
@@ -131,53 +124,40 @@ class SQLAdapter(BaseAdapter):
4. Rename the new column to existing column
"""
kwargs = {
'relation': relation,
'column_name': column_name,
'new_column_type': new_column_type,
"relation": relation,
"column_name": column_name,
"new_column_type": new_column_type,
}
self.execute_macro(
ALTER_COLUMN_TYPE_MACRO_NAME,
kwargs=kwargs
)
self.execute_macro(ALTER_COLUMN_TYPE_MACRO_NAME, kwargs=kwargs)
def drop_relation(self, relation):
if relation.type is None:
dbt.exceptions.raise_compiler_error(
'Tried to drop relation {}, but its type is null.'
.format(relation))
"Tried to drop relation {}, but its type is null.".format(relation)
)
self.cache_dropped(relation)
self.execute_macro(
DROP_RELATION_MACRO_NAME,
kwargs={'relation': relation}
)
self.execute_macro(DROP_RELATION_MACRO_NAME, kwargs={"relation": relation})
def truncate_relation(self, relation):
self.execute_macro(
TRUNCATE_RELATION_MACRO_NAME,
kwargs={'relation': relation}
)
self.execute_macro(TRUNCATE_RELATION_MACRO_NAME, kwargs={"relation": relation})
def rename_relation(self, from_relation, to_relation):
self.cache_renamed(from_relation, to_relation)
kwargs = {'from_relation': from_relation, 'to_relation': to_relation}
self.execute_macro(
RENAME_RELATION_MACRO_NAME,
kwargs=kwargs
)
kwargs = {"from_relation": from_relation, "to_relation": to_relation}
self.execute_macro(RENAME_RELATION_MACRO_NAME, kwargs=kwargs)
def get_columns_in_relation(self, relation):
return self.execute_macro(
GET_COLUMNS_IN_RELATION_MACRO_NAME,
kwargs={'relation': relation}
GET_COLUMNS_IN_RELATION_MACRO_NAME, kwargs={"relation": relation}
)
def create_schema(self, relation: BaseRelation) -> None:
relation = relation.without_identifier()
logger.debug('Creating schema "{}"', relation)
fire_event(SchemaCreation(relation=_make_key(relation)))
kwargs = {
'relation': relation,
"relation": relation,
}
self.execute_macro(CREATE_SCHEMA_MACRO_NAME, kwargs=kwargs)
self.commit_if_has_connection()
@@ -186,51 +166,45 @@ class SQLAdapter(BaseAdapter):
def drop_schema(self, relation: BaseRelation) -> None:
relation = relation.without_identifier()
logger.debug('Dropping schema "{}".', relation)
fire_event(SchemaDrop(relation=_make_key(relation)))
kwargs = {
'relation': relation,
"relation": relation,
}
self.execute_macro(DROP_SCHEMA_MACRO_NAME, kwargs=kwargs)
self.commit_if_has_connection()
# we can update the cache here
self.cache.drop_schema(relation.database, relation.schema)
def list_relations_without_caching(
self, schema_relation: BaseRelation,
self,
schema_relation: BaseRelation,
) -> List[BaseRelation]:
kwargs = {'schema_relation': schema_relation}
results = self.execute_macro(
LIST_RELATIONS_MACRO_NAME,
kwargs=kwargs
)
kwargs = {"schema_relation": schema_relation}
results = self.execute_macro(LIST_RELATIONS_MACRO_NAME, kwargs=kwargs)
relations = []
quote_policy = {
'database': True,
'schema': True,
'identifier': True
}
quote_policy = {"database": True, "schema": True, "identifier": True}
for _database, name, _schema, _type in results:
try:
_type = self.Relation.get_relation_type(_type)
except ValueError:
_type = self.Relation.External
relations.append(self.Relation.create(
database=_database,
schema=_schema,
identifier=name,
quote_policy=quote_policy,
type=_type
))
relations.append(
self.Relation.create(
database=_database,
schema=_schema,
identifier=name,
quote_policy=quote_policy,
type=_type,
)
)
return relations
def quote(self, identifier):
return '"{}"'.format(identifier)
def list_schemas(self, database: str) -> List[str]:
results = self.execute_macro(
LIST_SCHEMAS_MACRO_NAME,
kwargs={'database': database}
)
results = self.execute_macro(LIST_SCHEMAS_MACRO_NAME, kwargs={"database": database})
return [row[0] for row in results]
@@ -238,13 +212,32 @@ class SQLAdapter(BaseAdapter):
information_schema = self.Relation.create(
database=database,
schema=schema,
identifier='INFORMATION_SCHEMA',
quote_policy=self.config.quoting
identifier="INFORMATION_SCHEMA",
quote_policy=self.config.quoting,
).information_schema()
kwargs = {'information_schema': information_schema, 'schema': schema}
results = self.execute_macro(
CHECK_SCHEMA_EXISTS_MACRO_NAME,
kwargs=kwargs
)
kwargs = {"information_schema": information_schema, "schema": schema}
results = self.execute_macro(CHECK_SCHEMA_EXISTS_MACRO_NAME, kwargs=kwargs)
return results[0][0] > 0
# This is for use in the test suite
def run_sql_for_tests(self, sql, fetch, conn):
cursor = conn.handle.cursor()
try:
cursor.execute(sql)
if hasattr(conn.handle, "commit"):
conn.handle.commit()
if fetch == "one":
return cursor.fetchone()
elif fetch == "all":
return cursor.fetchall()
else:
return
except BaseException as e:
if conn.handle and not getattr(conn.handle, "closed", True):
conn.handle.rollback()
print(sql)
print(e)
raise
finally:
conn.transaction_open = False
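
Most SQLAdapter methods above delegate to named macros (drop_relation, rename_relation, create_schema, ...), and the class docstring notes that an adapter overrides behavior by defining "${adapter_type}__${macro_name}". The toy dispatcher below illustrates only that naming convention; the registry contents and adapter names are made up, and real dbt resolves macros through the manifest rather than a dict.

from typing import Callable, Dict

MACROS: Dict[str, Callable[..., str]] = {
    "default__drop_relation": lambda relation: f"drop table {relation}",
    "duckdb__drop_relation": lambda relation: f"drop table if exists {relation}",
}


def execute_macro(adapter_type: str, macro_name: str, **kwargs) -> str:
    # Prefer the adapter-specific implementation, then fall back to the default one.
    for candidate in (f"{adapter_type}__{macro_name}", f"default__{macro_name}"):
        if candidate in MACROS:
            return MACROS[candidate](**kwargs)
    raise KeyError(f"no macro named {macro_name}")


print(execute_macro("duckdb", "drop_relation", relation="analytics.orders"))
print(execute_macro("sqlite", "drop_relation", relation="analytics.orders"))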

View File

@@ -0,0 +1 @@
# Clients README

View File

@@ -10,79 +10,83 @@ def regex(pat):
class BlockData:
"""raw plaintext data from the top level of the file."""
def __init__(self, contents):
self.block_type_name = '__dbt__data'
self.block_type_name = "__dbt__data"
self.contents = contents
self.full_block = contents
class BlockTag:
def __init__(self, block_type_name, block_name, contents=None,
full_block=None, **kw):
def __init__(self, block_type_name, block_name, contents=None, full_block=None, **kw):
self.block_type_name = block_type_name
self.block_name = block_name
self.contents = contents
self.full_block = full_block
def __str__(self):
return 'BlockTag({!r}, {!r})'.format(self.block_type_name,
self.block_name)
return "BlockTag({!r}, {!r})".format(self.block_type_name, self.block_name)
def __repr__(self):
return str(self)
@property
def end_block_type_name(self):
return 'end{}'.format(self.block_type_name)
return "end{}".format(self.block_type_name)
def end_pat(self):
# we don't want to use string formatting here because jinja uses most
# of the string formatting operators in its syntax...
pattern = ''.join((
r'(?P<endblock>((?:\s*\{\%\-|\{\%)\s*',
self.end_block_type_name,
r'\s*(?:\-\%\}\s*|\%\})))',
))
pattern = "".join(
(
r"(?P<endblock>((?:\s*\{\%\-|\{\%)\s*",
self.end_block_type_name,
r"\s*(?:\-\%\}\s*|\%\})))",
)
)
return regex(pattern)
Tag = namedtuple('Tag', 'block_type_name block_name start end')
Tag = namedtuple("Tag", "block_type_name block_name start end")
_NAME_PATTERN = r'[A-Za-z_][A-Za-z_0-9]*'
_NAME_PATTERN = r"[A-Za-z_][A-Za-z_0-9]*"
COMMENT_START_PATTERN = regex(r'(?:(?P<comment_start>(\s*\{\#)))')
COMMENT_END_PATTERN = regex(r'(.*?)(\s*\#\})')
RAW_START_PATTERN = regex(
r'(?:\s*\{\%\-|\{\%)\s*(?P<raw_start>(raw))\s*(?:\-\%\}\s*|\%\})'
COMMENT_START_PATTERN = regex(r"(?:(?P<comment_start>(\s*\{\#)))")
COMMENT_END_PATTERN = regex(r"(.*?)(\s*\#\})")
RAW_START_PATTERN = regex(r"(?:\s*\{\%\-|\{\%)\s*(?P<raw_start>(raw))\s*(?:\-\%\}\s*|\%\})")
EXPR_START_PATTERN = regex(r"(?P<expr_start>(\{\{\s*))")
EXPR_END_PATTERN = regex(r"(?P<expr_end>(\s*\}\}))")
BLOCK_START_PATTERN = regex(
"".join(
(
r"(?:\s*\{\%\-|\{\%)\s*",
r"(?P<block_type_name>({}))".format(_NAME_PATTERN),
# some blocks have a 'block name'.
r"(?:\s+(?P<block_name>({})))?".format(_NAME_PATTERN),
)
)
)
EXPR_START_PATTERN = regex(r'(?P<expr_start>(\{\{\s*))')
EXPR_END_PATTERN = regex(r'(?P<expr_end>(\s*\}\}))')
BLOCK_START_PATTERN = regex(''.join((
r'(?:\s*\{\%\-|\{\%)\s*',
r'(?P<block_type_name>({}))'.format(_NAME_PATTERN),
# some blocks have a 'block name'.
r'(?:\s+(?P<block_name>({})))?'.format(_NAME_PATTERN),
)))
RAW_BLOCK_PATTERN = regex(''.join((
r'(?:\s*\{\%\-|\{\%)\s*raw\s*(?:\-\%\}\s*|\%\})',
r'(?:.*?)',
r'(?:\s*\{\%\-|\{\%)\s*endraw\s*(?:\-\%\}\s*|\%\})',
)))
RAW_BLOCK_PATTERN = regex(
"".join(
(
r"(?:\s*\{\%\-|\{\%)\s*raw\s*(?:\-\%\}\s*|\%\})",
r"(?:.*?)",
r"(?:\s*\{\%\-|\{\%)\s*endraw\s*(?:\-\%\}\s*|\%\})",
)
)
)
TAG_CLOSE_PATTERN = regex(r'(?:(?P<tag_close>(\-\%\}\s*|\%\})))')
TAG_CLOSE_PATTERN = regex(r"(?:(?P<tag_close>(\-\%\}\s*|\%\})))")
# stolen from jinja's lexer. Note that we've consumed all prefix whitespace by
# the time we want to use this.
STRING_PATTERN = regex(
r"(?P<string>('([^'\\]*(?:\\.[^'\\]*)*)'|"
r'"([^"\\]*(?:\\.[^"\\]*)*)"))'
)
STRING_PATTERN = regex(r"(?P<string>('([^'\\]*(?:\\.[^'\\]*)*)'|" r'"([^"\\]*(?:\\.[^"\\]*)*)"))')
QUOTE_START_PATTERN = regex(r'''(?P<quote>(['"]))''')
QUOTE_START_PATTERN = regex(r"""(?P<quote>(['"]))""")
class TagIterator:
@@ -99,10 +103,10 @@ class TagIterator:
end_val: int = self.pos if end is None else end
data = self.data[:end_val]
# if not found, rfind returns -1, and -1+1=0, which is perfect!
last_line_start = data.rfind('\n') + 1
last_line_start = data.rfind("\n") + 1
# it's easy to forget this, but line numbers are 1-indexed
line_number = data.count('\n') + 1
return f'{line_number}:{end_val - last_line_start}'
line_number = data.count("\n") + 1
return f"{line_number}:{end_val - last_line_start}"
def advance(self, new_position):
self.pos = new_position
@@ -120,7 +124,7 @@ class TagIterator:
matches = []
for pattern in patterns:
# default to 'search', but sometimes we want to 'match'.
if kwargs.get('method', 'search') == 'search':
if kwargs.get("method", "search") == "search":
match = self._search(pattern)
else:
match = self._match(pattern)
@@ -136,7 +140,7 @@ class TagIterator:
match = self._first_match(*patterns, **kwargs)
if match is None:
msg = 'unexpected EOF, expected {}, got "{}"'.format(
expected_name, self.data[self.pos:]
expected_name, self.data[self.pos :]
)
dbt.exceptions.raise_compiler_error(msg)
return match
@@ -156,22 +160,20 @@ class TagIterator:
"""
self.advance(match.end())
while True:
match = self._expect_match('}}',
EXPR_END_PATTERN,
QUOTE_START_PATTERN)
if match.groupdict().get('expr_end') is not None:
match = self._expect_match("}}", EXPR_END_PATTERN, QUOTE_START_PATTERN)
if match.groupdict().get("expr_end") is not None:
break
else:
# it's a quote. we haven't advanced for this match yet, so
# just slurp up the whole string, no need to rewind.
match = self._expect_match('string', STRING_PATTERN)
match = self._expect_match("string", STRING_PATTERN)
self.advance(match.end())
self.advance(match.end())
def handle_comment(self, match):
self.advance(match.end())
match = self._expect_match('#}', COMMENT_END_PATTERN)
match = self._expect_match("#}", COMMENT_END_PATTERN)
self.advance(match.end())
def _expect_block_close(self):
@@ -188,22 +190,19 @@ class TagIterator:
"""
while True:
end_match = self._expect_match(
'tag close ("%}")',
QUOTE_START_PATTERN,
TAG_CLOSE_PATTERN
'tag close ("%}")', QUOTE_START_PATTERN, TAG_CLOSE_PATTERN
)
self.advance(end_match.end())
if end_match.groupdict().get('tag_close') is not None:
if end_match.groupdict().get("tag_close") is not None:
return
# must be a string. Rewind to its start and advance past it.
self.rewind()
string_match = self._expect_match('string', STRING_PATTERN)
string_match = self._expect_match("string", STRING_PATTERN)
self.advance(string_match.end())
def handle_raw(self):
# raw blocks are super special, they are a single complete regex
match = self._expect_match('{% raw %}...{% endraw %}',
RAW_BLOCK_PATTERN)
match = self._expect_match("{% raw %}...{% endraw %}", RAW_BLOCK_PATTERN)
self.advance(match.end())
return match.end()
@@ -220,30 +219,24 @@ class TagIterator:
"""
groups = match.groupdict()
# always a value
block_type_name = groups['block_type_name']
block_type_name = groups["block_type_name"]
# might be None
block_name = groups.get('block_name')
block_name = groups.get("block_name")
start_pos = self.pos
if block_type_name == 'raw':
match = self._expect_match('{% raw %}...{% endraw %}',
RAW_BLOCK_PATTERN)
if block_type_name == "raw":
match = self._expect_match("{% raw %}...{% endraw %}", RAW_BLOCK_PATTERN)
self.advance(match.end())
else:
self.advance(match.end())
self._expect_block_close()
return Tag(
block_type_name=block_type_name,
block_name=block_name,
start=start_pos,
end=self.pos
block_type_name=block_type_name, block_name=block_name, start=start_pos, end=self.pos
)
def find_tags(self):
while True:
match = self._first_match(
BLOCK_START_PATTERN,
COMMENT_START_PATTERN,
EXPR_START_PATTERN
BLOCK_START_PATTERN, COMMENT_START_PATTERN, EXPR_START_PATTERN
)
if match is None:
break
@@ -252,9 +245,9 @@ class TagIterator:
# start = self.pos
groups = match.groupdict()
comment_start = groups.get('comment_start')
expr_start = groups.get('expr_start')
block_type_name = groups.get('block_type_name')
comment_start = groups.get("comment_start")
expr_start = groups.get("expr_start")
block_type_name = groups.get("block_type_name")
if comment_start is not None:
self.handle_comment(match)
@@ -264,8 +257,8 @@ class TagIterator:
yield self.handle_tag(match)
else:
raise dbt.exceptions.InternalException(
'Invalid regex match in next_block, expected block start, '
'expr start, or comment start'
"Invalid regex match in next_block, expected block start, "
"expr start, or comment start"
)
def __iter__(self):
@@ -273,21 +266,18 @@ class TagIterator:
duplicate_tags = (
'Got nested tags: {outer.block_type_name} (started at {outer.start}) did '
'not have a matching {{% end{outer.block_type_name} %}} before a '
'subsequent {inner.block_type_name} was found (started at {inner.start})'
"Got nested tags: {outer.block_type_name} (started at {outer.start}) did "
"not have a matching {{% end{outer.block_type_name} %}} before a "
"subsequent {inner.block_type_name} was found (started at {inner.start})"
)
_CONTROL_FLOW_TAGS = {
'if': 'endif',
'for': 'endfor',
"if": "endif",
"for": "endfor",
}
_CONTROL_FLOW_END_TAGS = {
v: k
for k, v in _CONTROL_FLOW_TAGS.items()
}
_CONTROL_FLOW_END_TAGS = {v: k for k, v in _CONTROL_FLOW_TAGS.items()}
class BlockIterator:
@@ -310,15 +300,15 @@ class BlockIterator:
def is_current_end(self, tag):
return (
tag.block_type_name.startswith('end') and
self.current is not None and
tag.block_type_name[3:] == self.current.block_type_name
tag.block_type_name.startswith("end")
and self.current is not None
and tag.block_type_name[3:] == self.current.block_type_name
)
def find_blocks(self, allowed_blocks=None, collect_raw_data=True):
"""Find all top-level blocks in the data."""
if allowed_blocks is None:
allowed_blocks = {'snapshot', 'macro', 'materialization', 'docs'}
allowed_blocks = {"snapshot", "macro", "materialization", "docs"}
for tag in self.tag_parser.find_tags():
if tag.block_type_name in _CONTROL_FLOW_TAGS:
@@ -329,37 +319,35 @@ class BlockIterator:
found = self.stack.pop()
else:
expected = _CONTROL_FLOW_END_TAGS[tag.block_type_name]
dbt.exceptions.raise_compiler_error((
'Got an unexpected control flow end tag, got {} but '
'never saw a preceeding {} (@ {})'
).format(
tag.block_type_name,
expected,
self.tag_parser.linepos(tag.start)
))
dbt.exceptions.raise_compiler_error(
(
"Got an unexpected control flow end tag, got {} but "
"never saw a preceeding {} (@ {})"
).format(tag.block_type_name, expected, self.tag_parser.linepos(tag.start))
)
expected = _CONTROL_FLOW_TAGS[found]
if expected != tag.block_type_name:
dbt.exceptions.raise_compiler_error((
'Got an unexpected control flow end tag, got {} but '
'expected {} next (@ {})'
).format(
tag.block_type_name,
expected,
self.tag_parser.linepos(tag.start)
))
dbt.exceptions.raise_compiler_error(
(
"Got an unexpected control flow end tag, got {} but "
"expected {} next (@ {})"
).format(tag.block_type_name, expected, self.tag_parser.linepos(tag.start))
)
if tag.block_type_name in allowed_blocks:
if self.stack:
dbt.exceptions.raise_compiler_error((
'Got a block definition inside control flow at {}. '
'All dbt block definitions must be at the top level'
).format(self.tag_parser.linepos(tag.start)))
dbt.exceptions.raise_compiler_error(
(
"Got a block definition inside control flow at {}. "
"All dbt block definitions must be at the top level"
).format(self.tag_parser.linepos(tag.start))
)
if self.current is not None:
dbt.exceptions.raise_compiler_error(
duplicate_tags.format(outer=self.current, inner=tag)
)
if collect_raw_data:
raw_data = self.data[self.last_position:tag.start]
raw_data = self.data[self.last_position : tag.start]
self.last_position = tag.start
if raw_data:
yield BlockData(raw_data)
@@ -371,23 +359,25 @@ class BlockIterator:
yield BlockTag(
block_type_name=self.current.block_type_name,
block_name=self.current.block_name,
contents=self.data[self.current.end:tag.start],
full_block=self.data[self.current.start:tag.end]
contents=self.data[self.current.end : tag.start],
full_block=self.data[self.current.start : tag.end],
)
self.current = None
if self.current:
linecount = self.data[:self.current.end].count('\n') + 1
dbt.exceptions.raise_compiler_error((
'Reached EOF without finding a close tag for '
'{} (searched from line {})'
).format(self.current.block_type_name, linecount))
linecount = self.data[: self.current.end].count("\n") + 1
dbt.exceptions.raise_compiler_error(
(
"Reached EOF without finding a close tag for " "{} (searched from line {})"
).format(self.current.block_type_name, linecount)
)
if collect_raw_data:
raw_data = self.data[self.last_position:]
raw_data = self.data[self.last_position :]
if raw_data:
yield BlockData(raw_data)
def lex_for_blocks(self, allowed_blocks=None, collect_raw_data=True):
return list(self.find_blocks(allowed_blocks=allowed_blocks,
collect_raw_data=collect_raw_data))
return list(
self.find_blocks(allowed_blocks=allowed_blocks, collect_raw_data=collect_raw_data)
)
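
The tag and block iterators above are what back extract_toplevel_blocks in dbt/clients/jinja.py (further down in this diff). A small usage sketch, assuming dbt-core is installed so the import resolves; the sample template is invented.

from dbt.clients.jinja import extract_toplevel_blocks

template = """
{% macro greet(name) %}
  select '{{ name }}' as greeting
{% endmacro %}
some raw text between blocks
{% docs orders %}The orders model.{% enddocs %}
"""

for block in extract_toplevel_blocks(template, allowed_blocks={"macro", "docs"}):
    # BlockData chunks are raw text between tags; BlockTag chunks are the blocks themselves.
    print(block.block_type_name, getattr(block, "block_name", None))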

View File

@@ -10,7 +10,17 @@ from typing import Iterable, List, Dict, Union, Optional, Any
from dbt.exceptions import RuntimeException
BOM = BOM_UTF8.decode('utf-8') # '\ufeff'
BOM = BOM_UTF8.decode("utf-8") # '\ufeff'
class Number(agate.data_types.Number):
# undo the change in https://github.com/wireservice/agate/pull/733
# i.e. do not cast True and False to numeric 1 and 0
def cast(self, d):
if type(d) == bool:
raise agate.exceptions.CastError("Do not cast True to 1 or False to 0.")
else:
return super().cast(d)
class ISODateTime(agate.data_types.DateTime):
@@ -30,32 +40,24 @@ class ISODateTime(agate.data_types.DateTime):
except: # noqa
pass
raise agate.exceptions.CastError(
'Can not parse value "%s" as datetime.' % d
)
raise agate.exceptions.CastError('Can not parse value "%s" as datetime.' % d)
def build_type_tester(
text_columns: Iterable[str],
string_null_values: Optional[Iterable[str]] = ('null', '')
text_columns: Iterable[str], string_null_values: Optional[Iterable[str]] = ("null", "")
) -> agate.TypeTester:
types = [
agate.data_types.Number(null_values=('null', '')),
agate.data_types.Date(null_values=('null', ''),
date_format='%Y-%m-%d'),
agate.data_types.DateTime(null_values=('null', ''),
datetime_format='%Y-%m-%d %H:%M:%S'),
ISODateTime(null_values=('null', '')),
agate.data_types.Boolean(true_values=('true',),
false_values=('false',),
null_values=('null', '')),
agate.data_types.Text(null_values=string_null_values)
Number(null_values=("null", "")),
agate.data_types.Date(null_values=("null", ""), date_format="%Y-%m-%d"),
agate.data_types.DateTime(null_values=("null", ""), datetime_format="%Y-%m-%d %H:%M:%S"),
ISODateTime(null_values=("null", "")),
agate.data_types.Boolean(
true_values=("true",), false_values=("false",), null_values=("null", "")
),
agate.data_types.Text(null_values=string_null_values),
]
force = {
k: agate.data_types.Text(null_values=string_null_values)
for k in text_columns
}
force = {k: agate.data_types.Text(null_values=string_null_values) for k in text_columns}
return agate.TypeTester(force=force, types=types)
@@ -72,16 +74,13 @@ def table_from_rows(
else:
# If text_only_columns are present, prevent coercing empty string or
# literal 'null' strings to a None representation.
column_types = build_type_tester(
text_only_columns,
string_null_values=()
)
column_types = build_type_tester(text_only_columns, string_null_values=())
return agate.Table(rows, column_names, column_types=column_types)
def table_from_data(data, column_names: Iterable[str]) -> agate.Table:
"Convert list of dictionaries into an Agate table"
"Convert a list of dictionaries into an Agate table"
# The agate table is generated from a list of dicts, so the column order
# from `data` is not preserved. We can use `select` to reorder the columns
@@ -120,9 +119,7 @@ def table_from_data_flat(data, column_names: Iterable[str]) -> agate.Table:
rows.append(row)
return table_from_rows(
rows=rows,
column_names=column_names,
text_only_columns=text_only_columns
rows=rows, column_names=column_names, text_only_columns=text_only_columns
)
@@ -140,7 +137,7 @@ def as_matrix(table):
def from_csv(abspath, text_columns):
type_tester = build_type_tester(text_columns=text_columns)
with open(abspath, encoding='utf-8') as fp:
with open(abspath, encoding="utf-8") as fp:
if fp.read(1) != BOM:
fp.seek(0)
return agate.Table.from_csv(fp, column_types=type_tester)
@@ -172,8 +169,8 @@ class ColumnTypeBuilder(Dict[str, NullableAgateType]):
elif not isinstance(value, type(existing_type)):
# actual type mismatch!
raise RuntimeException(
f'Tables contain columns with the same names ({key}), '
f'but different types ({value} vs {existing_type})'
f"Tables contain columns with the same names ({key}), "
f"but different types ({value} vs {existing_type})"
)
def finalize(self) -> Dict[str, agate.data_types.DataType]:
@@ -187,9 +184,7 @@ class ColumnTypeBuilder(Dict[str, NullableAgateType]):
return result
def _merged_column_types(
tables: List[agate.Table]
) -> Dict[str, agate.data_types.DataType]:
def _merged_column_types(tables: List[agate.Table]) -> Dict[str, agate.data_types.DataType]:
# this is a lot like agate.Table.merge, but with handling for all-null
# rows being "any type".
new_columns: ColumnTypeBuilder = ColumnTypeBuilder()
@@ -215,10 +210,7 @@ def merge_tables(tables: List[agate.Table]) -> agate.Table:
rows: List[agate.Row] = []
for table in tables:
if (
table.column_names == column_names and
table.column_types == column_types
):
if table.column_names == column_names and table.column_types == column_types:
rows.extend(table.rows)
else:
for row in table.rows:
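
The new Number subclass at the top of this file exists because a change in agate (wireservice/agate#733, linked in the comment above) made booleans castable to 1/0, which would mistype boolean seed columns. A short sketch of the difference, assuming agate is installed; what the stock caster prints depends on the installed agate version.

import agate


class StrictNumber(agate.data_types.Number):
    # Mirror the override above: refuse to coerce True/False into 1/0.
    def cast(self, d):
        if type(d) == bool:
            raise agate.exceptions.CastError("Do not cast True to 1 or False to 0.")
        return super().cast(d)


try:
    print("stock agate:", agate.data_types.Number().cast(True))  # Decimal('1') on versions with that change
except agate.exceptions.CastError as err:
    print("stock agate refused:", err)  # older agate versions raise here instead

try:
    StrictNumber().cast(True)
except agate.exceptions.CastError as err:
    print("override refused:", err)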

View File

@@ -2,8 +2,23 @@ import re
import os.path
from dbt.clients.system import run_cmd, rmdir
from dbt.logger import GLOBAL_LOGGER as logger
import dbt.exceptions
from dbt.events.functions import fire_event
from dbt.events.types import (
GitSparseCheckoutSubdirectory,
GitProgressCheckoutRevision,
GitProgressUpdatingExistingDependency,
GitProgressPullingNewDependency,
GitNothingToDo,
GitProgressUpdatedCheckoutRange,
GitProgressCheckedOutAt,
)
from dbt.exceptions import (
CommandResultError,
RuntimeException,
bad_package_spec,
raise_git_cloning_error,
raise_git_cloning_problem,
)
from packaging import version
@@ -12,14 +27,24 @@ def _is_commit(revision: str) -> bool:
return bool(re.match(r"\b[0-9a-f]{40}\b", revision))
def _raise_git_cloning_error(repo, revision, error):
stderr = error.stderr.strip()
if "usage: git" in stderr:
stderr = stderr.split("\nusage: git")[0]
if re.match("fatal: destination path '(.+)' already exists", stderr):
raise_git_cloning_error(error)
bad_package_spec(repo, revision, stderr)
def clone(repo, cwd, dirname=None, remove_git_dir=False, revision=None, subdirectory=None):
has_revision = revision is not None
is_commit = _is_commit(revision or "")
clone_cmd = ['git', 'clone', '--depth', '1']
clone_cmd = ["git", "clone", "--depth", "1"]
if subdirectory:
logger.debug(' Subdirectory specified: {}, using sparse checkout.'.format(subdirectory))
out, _ = run_cmd(cwd, ['git', '--version'], env={'LC_ALL': 'C'})
fire_event(GitSparseCheckoutSubdirectory(subdir=subdirectory))
out, _ = run_cmd(cwd, ["git", "--version"], env={"LC_ALL": "C"})
git_version = version.parse(re.search(r"\d+\.\d+\.\d+", out.decode("utf-8")).group(0))
if not git_version >= version.parse("2.25.0"):
# 2.25.0 introduces --sparse
@@ -27,78 +52,86 @@ def clone(repo, cwd, dirname=None, remove_git_dir=False, revision=None, subdirec
"Please update your git version to pull a dbt package "
"from a subdirectory: your version is {}, >= 2.25.0 needed".format(git_version)
)
clone_cmd.extend(['--filter=blob:none', '--sparse'])
clone_cmd.extend(["--filter=blob:none", "--sparse"])
if has_revision and not is_commit:
clone_cmd.extend(['--branch', revision])
clone_cmd.extend(["--branch", revision])
clone_cmd.append(repo)
if dirname is not None:
clone_cmd.append(dirname)
result = run_cmd(cwd, clone_cmd, env={'LC_ALL': 'C'})
try:
result = run_cmd(cwd, clone_cmd, env={"LC_ALL": "C"})
except CommandResultError as exc:
_raise_git_cloning_error(repo, revision, exc)
if subdirectory:
run_cmd(os.path.join(cwd, dirname or ''), ['git', 'sparse-checkout', 'set', subdirectory])
cwd_subdir = os.path.join(cwd, dirname or "")
clone_cmd_subdir = ["git", "sparse-checkout", "set", subdirectory]
try:
run_cmd(cwd_subdir, clone_cmd_subdir)
except CommandResultError as exc:
_raise_git_cloning_error(repo, revision, exc)
if remove_git_dir:
rmdir(os.path.join(dirname, '.git'))
rmdir(os.path.join(dirname, ".git"))
return result
def list_tags(cwd):
out, err = run_cmd(cwd, ['git', 'tag', '--list'], env={'LC_ALL': 'C'})
tags = out.decode('utf-8').strip().split("\n")
out, err = run_cmd(cwd, ["git", "tag", "--list"], env={"LC_ALL": "C"})
tags = out.decode("utf-8").strip().split("\n")
return tags
def _checkout(cwd, repo, revision):
logger.debug(' Checking out revision {}.'.format(revision))
fire_event(GitProgressCheckoutRevision(revision=revision))
fetch_cmd = ["git", "fetch", "origin", "--depth", "1"]
if _is_commit(revision):
run_cmd(cwd, fetch_cmd + [revision])
else:
run_cmd(cwd, ['git', 'remote', 'set-branches', 'origin', revision])
run_cmd(cwd, ["git", "remote", "set-branches", "origin", revision])
run_cmd(cwd, fetch_cmd + ["--tags", revision])
if _is_commit(revision):
spec = revision
# Prefer tags to branches if one exists
elif revision in list_tags(cwd):
spec = 'tags/{}'.format(revision)
spec = "tags/{}".format(revision)
else:
spec = 'origin/{}'.format(revision)
spec = "origin/{}".format(revision)
out, err = run_cmd(cwd, ['git', 'reset', '--hard', spec],
env={'LC_ALL': 'C'})
out, err = run_cmd(cwd, ["git", "reset", "--hard", spec], env={"LC_ALL": "C"})
return out, err
def checkout(cwd, repo, revision=None):
if revision is None:
revision = 'HEAD'
revision = "HEAD"
try:
return _checkout(cwd, repo, revision)
except dbt.exceptions.CommandResultError as exc:
stderr = exc.stderr.decode('utf-8').strip()
dbt.exceptions.bad_package_spec(repo, revision, stderr)
except CommandResultError as exc:
stderr = exc.stderr.strip()
bad_package_spec(repo, revision, stderr)
def get_current_sha(cwd):
out, err = run_cmd(cwd, ['git', 'rev-parse', 'HEAD'], env={'LC_ALL': 'C'})
out, err = run_cmd(cwd, ["git", "rev-parse", "HEAD"], env={"LC_ALL": "C"})
return out.decode('utf-8')
return out.decode("utf-8")
def remove_remote(cwd):
return run_cmd(cwd, ['git', 'remote', 'rm', 'origin'], env={'LC_ALL': 'C'})
return run_cmd(cwd, ["git", "remote", "rm", "origin"], env={"LC_ALL": "C"})
def clone_and_checkout(repo, cwd, dirname=None, remove_git_dir=False,
revision=None, subdirectory=None):
def clone_and_checkout(
repo, cwd, dirname=None, remove_git_dir=False, revision=None, subdirectory=None
):
exists = None
try:
_, err = clone(
@@ -108,35 +141,34 @@ def clone_and_checkout(repo, cwd, dirname=None, remove_git_dir=False,
remove_git_dir=remove_git_dir,
subdirectory=subdirectory,
)
except dbt.exceptions.CommandResultError as exc:
err = exc.stderr.decode('utf-8')
except CommandResultError as exc:
err = exc.stderr
exists = re.match("fatal: destination path '(.+)' already exists", err)
if not exists: # something else is wrong, raise it
raise
if not exists:
raise_git_cloning_problem(repo)
directory = None
start_sha = None
if exists:
directory = exists.group(1)
logger.debug('Updating existing dependency {}.', directory)
fire_event(GitProgressUpdatingExistingDependency(dir=directory))
else:
matches = re.match("Cloning into '(.+)'", err.decode('utf-8'))
matches = re.match("Cloning into '(.+)'", err.decode("utf-8"))
if matches is None:
raise dbt.exceptions.RuntimeException(
f'Error cloning {repo} - never saw "Cloning into ..." from git'
)
raise RuntimeException(f'Error cloning {repo} - never saw "Cloning into ..." from git')
directory = matches.group(1)
logger.debug('Pulling new dependency {}.', directory)
fire_event(GitProgressPullingNewDependency(dir=directory))
full_path = os.path.join(cwd, directory)
start_sha = get_current_sha(full_path)
checkout(full_path, repo, revision)
end_sha = get_current_sha(full_path)
if exists:
if start_sha == end_sha:
logger.debug(' Already at {}, nothing to do.', start_sha[:7])
fire_event(GitNothingToDo(sha=start_sha[:7]))
else:
logger.debug(' Updated checkout from {} to {}.',
start_sha[:7], end_sha[:7])
fire_event(
GitProgressUpdatedCheckoutRange(start_sha=start_sha[:7], end_sha=end_sha[:7])
)
else:
logger.debug(' Checked out at {}.', end_sha[:7])
return os.path.join(directory, subdirectory or '')
fire_event(GitProgressCheckedOutAt(end_sha=end_sha[:7]))
return os.path.join(directory, subdirectory or "")
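
The clone path above only adds --filter=blob:none --sparse when the local git is new enough, since --sparse arrived in git 2.25.0. A standalone sketch of that feature gate; subprocess.run stands in for dbt's run_cmd helper.

import re
import subprocess

from packaging import version


def git_supports_sparse() -> bool:
    out = subprocess.run(["git", "--version"], capture_output=True, check=True).stdout
    git_version = version.parse(re.search(r"\d+\.\d+\.\d+", out.decode("utf-8")).group(0))
    # --sparse was introduced in git 2.25.0
    return git_version >= version.parse("2.25.0")


clone_cmd = ["git", "clone", "--depth", "1"]
if git_supports_sparse():
    clone_cmd.extend(["--filter=blob:none", "--sparse"])
print(clone_cmd)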

View File

@@ -7,10 +7,7 @@ import threading
from ast import literal_eval
from contextlib import contextmanager
from itertools import chain, islice
from typing import (
List, Union, Set, Optional, Dict, Any, Iterator, Type, NoReturn, Tuple,
Callable
)
from typing import List, Union, Set, Optional, Dict, Any, Iterator, Type, NoReturn, Tuple, Callable
import jinja2
import jinja2.ext
@@ -20,20 +17,26 @@ import jinja2.parser
import jinja2.sandbox
from dbt.utils import (
get_dbt_macro_name, get_docs_macro_name, get_materialization_macro_name,
get_test_macro_name, deep_map
get_dbt_macro_name,
get_docs_macro_name,
get_materialization_macro_name,
get_test_macro_name,
deep_map_render,
)
from dbt.clients._jinja_blocks import BlockIterator, BlockData, BlockTag
from dbt.contracts.graph.compiled import CompiledGenericTestNode
from dbt.contracts.graph.parsed import ParsedGenericTestNode
from dbt.exceptions import (
InternalException, raise_compiler_error, CompilationException,
invalid_materialization_argument, MacroReturn, JinjaRenderingException,
UndefinedMacroException
InternalException,
raise_compiler_error,
CompilationException,
invalid_materialization_argument,
MacroReturn,
JinjaRenderingException,
UndefinedMacroException,
)
from dbt import flags
from dbt.logger import GLOBAL_LOGGER as logger # noqa
def _linecache_inject(source, write):
@@ -41,27 +44,22 @@ def _linecache_inject(source, write):
# this is the only reliable way to accomplish this. Obviously, it's
# really darn noisy and will fill your temporary directory
tmp_file = tempfile.NamedTemporaryFile(
prefix='dbt-macro-compiled-',
suffix='.py',
prefix="dbt-macro-compiled-",
suffix=".py",
delete=False,
mode='w+',
encoding='utf-8',
mode="w+",
encoding="utf-8",
)
tmp_file.write(source)
filename = tmp_file.name
else:
# `codecs.encode` actually takes a `bytes` as the first argument if
# the second argument is 'hex' - mypy does not know this.
rnd = codecs.encode(os.urandom(12), 'hex') # type: ignore
filename = rnd.decode('ascii')
rnd = codecs.encode(os.urandom(12), "hex") # type: ignore
filename = rnd.decode("ascii")
# put ourselves in the cache
cache_entry = (
len(source),
None,
[line + '\n' for line in source.splitlines()],
filename
)
cache_entry = (len(source), None, [line + "\n" for line in source.splitlines()], filename)
# linecache does in fact have an attribute `cache`, thanks
linecache.cache[filename] = cache_entry # type: ignore
return filename
@@ -74,12 +72,10 @@ class MacroFuzzParser(jinja2.parser.Parser):
# modified to fuzz macros defined in the same file. this way
# dbt can understand the stack of macros being called.
# - @cmcarthur
node.name = get_dbt_macro_name(
self.parse_assign_target(name_only=True).name)
node.name = get_dbt_macro_name(self.parse_assign_target(name_only=True).name)
self.parse_signature(node)
node.body = self.parse_statements(('name:endmacro',),
drop_needle=True)
node.body = self.parse_statements(("name:endmacro",), drop_needle=True)
return node
@@ -95,8 +91,8 @@ class MacroFuzzEnvironment(jinja2.sandbox.SandboxedEnvironment):
If the value is 'write', also write the files to disk.
WARNING: This can write a ton of data if you aren't careful.
"""
if filename == '<template>' and flags.MACRO_DEBUGGING:
write = flags.MACRO_DEBUGGING == 'write'
if filename == "<template>" and flags.MACRO_DEBUGGING:
write = flags.MACRO_DEBUGGING == "write"
filename = _linecache_inject(source, write)
return super()._compile(source, filename) # type: ignore
@@ -107,7 +103,7 @@ class NativeSandboxEnvironment(MacroFuzzEnvironment):
class TextMarker(str):
"""A special native-env marker that indicates that a value is text and is
"""A special native-env marker that indicates a value is text and is
not to be evaluated. Use this to prevent your numbery-strings from becoming
numbers!
"""
@@ -139,7 +135,7 @@ def quoted_native_concat(nodes):
head = list(islice(nodes, 2))
if not head:
return ''
return ""
if len(head) == 1:
raw = head[0]
@@ -157,13 +153,9 @@ def quoted_native_concat(nodes):
except (ValueError, SyntaxError, MemoryError):
result = raw
if isinstance(raw, BoolMarker) and not isinstance(result, bool):
raise JinjaRenderingException(
f"Could not convert value '{raw!s}' into type 'bool'"
)
raise JinjaRenderingException(f"Could not convert value '{raw!s}' into type 'bool'")
if isinstance(raw, NumberMarker) and not _is_number(result):
raise JinjaRenderingException(
f"Could not convert value '{raw!s}' into type 'number'"
)
raise JinjaRenderingException(f"Could not convert value '{raw!s}' into type 'number'")
return result
@@ -181,9 +173,7 @@ class NativeSandboxTemplate(jinja2.nativetypes.NativeTemplate): # mypy: ignore
vars = dict(*args, **kwargs)
try:
return quoted_native_concat(
self.root_render_func(self.new_context(vars))
)
return quoted_native_concat(self.root_render_func(self.new_context(vars)))
except Exception:
return self.environment.handle_exception()
@@ -222,10 +212,10 @@ class BaseMacroGenerator:
self.context: Optional[Dict[str, Any]] = context
def get_template(self):
raise NotImplementedError('get_template not implemented!')
raise NotImplementedError("get_template not implemented!")
def get_name(self) -> str:
raise NotImplementedError('get_name not implemented!')
raise NotImplementedError("get_name not implemented!")
def get_macro(self):
name = self.get_name()
@@ -248,9 +238,7 @@ class BaseMacroGenerator:
def call_macro(self, *args, **kwargs):
# called from __call__ methods
if self.context is None:
raise InternalException(
'Context is still None in call_macro!'
)
raise InternalException("Context is still None in call_macro!")
assert self.context is not None
macro = self.get_macro()
@@ -277,7 +265,7 @@ class MacroStack(threading.local):
def pop(self, name):
got = self.call_stack.pop()
if got != name:
raise InternalException(f'popped {got}, expected {name}')
raise InternalException(f"popped {got}, expected {name}")
class MacroGenerator(BaseMacroGenerator):
@@ -286,7 +274,7 @@ class MacroGenerator(BaseMacroGenerator):
macro,
context: Optional[Dict[str, Any]] = None,
node: Optional[Any] = None,
stack: Optional[MacroStack] = None
stack: Optional[MacroStack] = None,
) -> None:
super().__init__(context)
self.macro = macro
@@ -334,9 +322,7 @@ class MacroGenerator(BaseMacroGenerator):
class QueryStringGenerator(BaseMacroGenerator):
def __init__(
self, template_str: str, context: Dict[str, Any]
) -> None:
def __init__(self, template_str: str, context: Dict[str, Any]) -> None:
super().__init__(context)
self.template_str: str = template_str
env = get_environment()
@@ -346,7 +332,7 @@ class QueryStringGenerator(BaseMacroGenerator):
)
def get_name(self) -> str:
return 'query_comment_macro'
return "query_comment_macro"
def get_template(self):
"""Don't use the template cache, we don't have a node"""
@@ -357,45 +343,39 @@ class QueryStringGenerator(BaseMacroGenerator):
class MaterializationExtension(jinja2.ext.Extension):
tags = ['materialization']
tags = ["materialization"]
def parse(self, parser):
node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)
materialization_name = \
parser.parse_assign_target(name_only=True).name
materialization_name = parser.parse_assign_target(name_only=True).name
adapter_name = 'default'
adapter_name = "default"
node.args = []
node.defaults = []
while parser.stream.skip_if('comma'):
while parser.stream.skip_if("comma"):
target = parser.parse_assign_target(name_only=True)
if target.name == 'default':
if target.name == "default":
pass
elif target.name == 'adapter':
parser.stream.expect('assign')
elif target.name == "adapter":
parser.stream.expect("assign")
value = parser.parse_expression()
adapter_name = value.value
else:
invalid_materialization_argument(
materialization_name, target.name
)
invalid_materialization_argument(materialization_name, target.name)
node.name = get_materialization_macro_name(
materialization_name, adapter_name
)
node.name = get_materialization_macro_name(materialization_name, adapter_name)
node.body = parser.parse_statements(('name:endmaterialization',),
drop_needle=True)
node.body = parser.parse_statements(("name:endmaterialization",), drop_needle=True)
return node
class DocumentationExtension(jinja2.ext.Extension):
tags = ['docs']
tags = ["docs"]
def parse(self, parser):
node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)
@@ -404,13 +384,12 @@ class DocumentationExtension(jinja2.ext.Extension):
node.args = []
node.defaults = []
node.name = get_docs_macro_name(docs_name)
node.body = parser.parse_statements(('name:enddocs',),
drop_needle=True)
node.body = parser.parse_statements(("name:enddocs",), drop_needle=True)
return node
class TestExtension(jinja2.ext.Extension):
tags = ['test']
tags = ["test"]
def parse(self, parser):
node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)
@@ -418,13 +397,12 @@ class TestExtension(jinja2.ext.Extension):
parser.parse_signature(node)
node.name = get_test_macro_name(test_name)
node.body = parser.parse_statements(('name:endtest',),
drop_needle=True)
node.body = parser.parse_statements(("name:endtest",), drop_needle=True)
return node
def _is_dunder_name(name):
return name.startswith('__') and name.endswith('__')
return name.startswith("__") and name.endswith("__")
def create_undefined(node=None):
@@ -445,10 +423,9 @@ def create_undefined(node=None):
return self
def __getattr__(self, name):
if name == 'name' or _is_dunder_name(name):
if name == "name" or _is_dunder_name(name):
raise AttributeError(
"'{}' object has no attribute '{}'"
.format(type(self).__name__, name)
"'{}' object has no attribute '{}'".format(type(self).__name__, name)
)
self.name = name
@@ -459,24 +436,24 @@ def create_undefined(node=None):
return self
def __reduce__(self):
raise_compiler_error(f'{self.name} is undefined', node=node)
raise_compiler_error(f"{self.name} is undefined", node=node)
return Undefined
NATIVE_FILTERS: Dict[str, Callable[[Any], Any]] = {
'as_text': TextMarker,
'as_bool': BoolMarker,
'as_native': NativeMarker,
'as_number': NumberMarker,
"as_text": TextMarker,
"as_bool": BoolMarker,
"as_native": NativeMarker,
"as_number": NumberMarker,
}
TEXT_FILTERS: Dict[str, Callable[[Any], Any]] = {
'as_text': lambda x: x,
'as_bool': lambda x: x,
'as_native': lambda x: x,
'as_number': lambda x: x,
"as_text": lambda x: x,
"as_bool": lambda x: x,
"as_native": lambda x: x,
"as_number": lambda x: x,
}
@@ -486,15 +463,15 @@ def get_environment(
native: bool = False,
) -> jinja2.Environment:
args: Dict[str, List[Union[str, Type[jinja2.ext.Extension]]]] = {
'extensions': ['jinja2.ext.do']
"extensions": ["jinja2.ext.do"]
}
if capture_macros:
args['undefined'] = create_undefined(node)
args["undefined"] = create_undefined(node)
args['extensions'].append(MaterializationExtension)
args['extensions'].append(DocumentationExtension)
args['extensions'].append(TestExtension)
args["extensions"].append(MaterializationExtension)
args["extensions"].append(DocumentationExtension)
args["extensions"].append(TestExtension)
env_cls: Type[jinja2.Environment]
text_filter: Type
@@ -557,8 +534,8 @@ def _requote_result(raw_value: str, rendered: str) -> str:
elif single_quoted:
quote_char = "'"
else:
quote_char = ''
return f'{quote_char}{rendered}{quote_char}'
quote_char = ""
return f"{quote_char}{rendered}{quote_char}"
# performance note: Local benchmarking (so take it with a big grain of salt!)
@@ -566,7 +543,7 @@ def _requote_result(raw_value: str, rendered: str) -> str:
# checking two separate patterns, but the standard deviation is smaller with
# one pattern. The time difference between the two was ~2 std deviations, which
# is small enough that I've just chosen the more readable option.
_HAS_RENDER_CHARS_PAT = re.compile(r'({[{%#]|[#}%]})')
_HAS_RENDER_CHARS_PAT = re.compile(r"({[{%#]|[#}%]})")
def get_rendered(
@@ -582,11 +559,7 @@ def get_rendered(
# If this is desirable in the native env as well, we could handle the
# native=True case by passing the input string to ast.literal_eval, like
# the native renderer does.
if (
not native and
isinstance(string, str) and
_HAS_RENDER_CHARS_PAT.search(string) is None
):
if not native and isinstance(string, str) and _HAS_RENDER_CHARS_PAT.search(string) is None:
return string
template = get_template(
string,
@@ -607,7 +580,7 @@ def extract_toplevel_blocks(
allowed_blocks: Optional[Set[str]] = None,
collect_raw_data: bool = True,
) -> List[Union[BlockData, BlockTag]]:
"""Extract the top level blocks with matching block types from a jinja
"""Extract the top-level blocks with matching block types from a jinja
file, with some special handling for block nesting.
:param data: The data to extract blocks from.
@@ -622,12 +595,11 @@ def extract_toplevel_blocks(
`collect_raw_data` is `True`) `BlockData` objects.
"""
return BlockIterator(data).lex_for_blocks(
allowed_blocks=allowed_blocks,
collect_raw_data=collect_raw_data
allowed_blocks=allowed_blocks, collect_raw_data=collect_raw_data
)
GENERIC_TEST_KWARGS_NAME = '_dbt_generic_test_kwargs'
GENERIC_TEST_KWARGS_NAME = "_dbt_generic_test_kwargs"
def add_rendered_test_kwargs(
@@ -639,27 +611,24 @@ def add_rendered_test_kwargs(
renderer, then insert that value into the given context as the special test
keyword arguments member.
"""
looks_like_func = r'^\s*(env_var|ref|var|source|doc)\s*\(.+\)\s*$'
looks_like_func = r"^\s*(env_var|ref|var|source|doc)\s*\(.+\)\s*$"
def _convert_function(
value: Any, keypath: Tuple[Union[str, int], ...]
) -> Any:
def _convert_function(value: Any, keypath: Tuple[Union[str, int], ...]) -> Any:
if isinstance(value, str):
if keypath == ('column_name',):
if keypath == ("column_name",):
# special case: Don't render column names as native, make them
# be strings
return value
if re.match(looks_like_func, value) is not None:
# curly braces to make rendering happy
value = f'{{{{ {value} }}}}'
value = f"{{{{ {value} }}}}"
value = get_rendered(
value, context, node, capture_macros=capture_macros,
native=True
)
value = get_rendered(value, context, node, capture_macros=capture_macros, native=True)
return value
kwargs = deep_map(_convert_function, node.test_metadata.kwargs)
# The test_metadata.kwargs come from the test builder, and were set
# when the test node was created in _parse_generic_test.
kwargs = deep_map_render(_convert_function, node.test_metadata.kwargs)
context[GENERIC_TEST_KWARGS_NAME] = kwargs
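A small standalone sketch of the wrapping step described above, using the same regex; the kwarg values shown are made up for illustration.

```python
import re

# Values that look like calls to env_var/ref/var/source/doc get wrapped in Jinja
# braces so the native renderer evaluates them; plain strings (and column names)
# pass through untouched.
looks_like_func = r"^\s*(env_var|ref|var|source|doc)\s*\(.+\)\s*$"

def wrap_if_call(value: str) -> str:
    if re.match(looks_like_func, value) is not None:
        return f"{{{{ {value} }}}}"
    return value

assert wrap_if_call("ref('orders')") == "{{ ref('orders') }}"
assert wrap_if_call("0.05") == "0.05"
```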

View File

@@ -8,11 +8,11 @@ def statically_extract_macro_calls(string, ctx, db_wrapper=None):
env = get_environment(None, capture_macros=True)
parsed = env.parse(string)
standard_calls = ['source', 'ref', 'config']
standard_calls = ["source", "ref", "config"]
possible_macro_calls = []
for func_call in parsed.find_all(jinja2.nodes.Call):
func_name = None
if hasattr(func_call, 'node') and hasattr(func_call.node, 'name'):
if hasattr(func_call, "node") and hasattr(func_call.node, "name"):
func_name = func_call.node.name
else:
# func_call for dbt_utils.current_timestamp macro
@@ -30,22 +30,25 @@ def statically_extract_macro_calls(string, ctx, db_wrapper=None):
# dyn_args=None,
# dyn_kwargs=None
# )
if (hasattr(func_call, 'node') and
hasattr(func_call.node, 'node') and
type(func_call.node.node).__name__ == 'Name' and
hasattr(func_call.node, 'attr')):
if (
hasattr(func_call, "node")
and hasattr(func_call.node, "node")
and type(func_call.node.node).__name__ == "Name"
and hasattr(func_call.node, "attr")
):
package_name = func_call.node.node.name
macro_name = func_call.node.attr
if package_name == 'adapter':
if macro_name == 'dispatch':
if package_name == "adapter":
if macro_name == "dispatch":
ad_macro_calls = statically_parse_adapter_dispatch(
func_call, ctx, db_wrapper)
func_call, ctx, db_wrapper
)
possible_macro_calls.extend(ad_macro_calls)
else:
# This skips calls such as adapter.parse_index
continue
else:
func_name = f'{package_name}.{macro_name}'
func_name = f"{package_name}.{macro_name}"
else:
continue
if not func_name:
@@ -108,40 +111,41 @@ def statically_parse_adapter_dispatch(func_call, ctx, db_wrapper):
# keyword arguments
if func_call.kwargs:
for kwarg in func_call.kwargs:
if kwarg.key == 'macro_name':
if kwarg.key == "macro_name":
# This will remain to enable static resolution
if type(kwarg.value).__name__ == 'Const':
if type(kwarg.value).__name__ == "Const":
func_name = kwarg.value.value
possible_macro_calls.append(func_name)
else:
raise_compiler_error(f"The macro_name parameter ({kwarg.value.value}) "
"to adapter.dispatch was not a string")
elif kwarg.key == 'macro_namespace':
raise_compiler_error(
f"The macro_name parameter ({kwarg.value.value}) "
"to adapter.dispatch was not a string"
)
elif kwarg.key == "macro_namespace":
# This will remain to enable static resolution
kwarg_type = type(kwarg.value).__name__
if kwarg_type == 'Const':
if kwarg_type == "Const":
macro_namespace = kwarg.value.value
else:
raise_compiler_error("The macro_namespace parameter to adapter.dispatch "
f"is a {kwarg_type}, not a string")
raise_compiler_error(
"The macro_namespace parameter to adapter.dispatch "
f"is a {kwarg_type}, not a string"
)
# positional arguments
if packages_arg:
if packages_arg_type == 'List':
if packages_arg_type == "List":
# This will remain to enable static resolution
packages = []
for item in packages_arg.items:
packages.append(item.value)
elif packages_arg_type == 'Const':
elif packages_arg_type == "Const":
# This will remain to enable static resolution
macro_namespace = packages_arg.value
if db_wrapper:
macro = db_wrapper.dispatch(
func_name,
macro_namespace=macro_namespace
).macro
func_name = f'{macro.package_name}.{macro.name}'
macro = db_wrapper.dispatch(func_name, macro_namespace=macro_namespace).macro
func_name = f"{macro.package_name}.{macro.name}"
possible_macro_calls.append(func_name)
else: # this is only for test/unit/test_macro_calls.py
if macro_namespace:
@@ -149,6 +153,6 @@ def statically_parse_adapter_dispatch(func_call, ctx, db_wrapper):
else:
packages = []
for package_name in packages:
possible_macro_calls.append(f'{package_name}.{func_name}')
possible_macro_calls.append(f"{package_name}.{func_name}")
return possible_macro_calls

View File

@@ -1,79 +1,163 @@
import functools
from typing import Any, Dict, List
import requests
from dbt.events.functions import fire_event
from dbt.events.types import (
RegistryProgressMakingGETRequest,
RegistryProgressGETResponse,
RegistryIndexProgressMakingGETRequest,
RegistryIndexProgressGETResponse,
RegistryResponseUnexpectedType,
RegistryResponseMissingTopKeys,
RegistryResponseMissingNestedKeys,
RegistryResponseExtraNestedKeys,
)
from dbt.utils import memoized, _connection_exception_retry as connection_exception_retry
from dbt.logger import GLOBAL_LOGGER as logger
from dbt import deprecations
import os
if os.getenv('DBT_PACKAGE_HUB_URL'):
DEFAULT_REGISTRY_BASE_URL = os.getenv('DBT_PACKAGE_HUB_URL')
if os.getenv("DBT_PACKAGE_HUB_URL"):
DEFAULT_REGISTRY_BASE_URL = os.getenv("DBT_PACKAGE_HUB_URL")
else:
DEFAULT_REGISTRY_BASE_URL = 'https://hub.getdbt.com/'
DEFAULT_REGISTRY_BASE_URL = "https://hub.getdbt.com/"
def _get_url(url, registry_base_url=None):
def _get_url(name, registry_base_url=None):
if registry_base_url is None:
registry_base_url = DEFAULT_REGISTRY_BASE_URL
url = "api/v1/{}.json".format(name)
return '{}{}'.format(registry_base_url, url)
return "{}{}".format(registry_base_url, url)
def _get_with_retries(path, registry_base_url=None):
get_fn = functools.partial(_get, path, registry_base_url)
def _get_with_retries(package_name, registry_base_url=None):
get_fn = functools.partial(_get, package_name, registry_base_url)
return connection_exception_retry(get_fn, 5)
def _get(path, registry_base_url=None):
url = _get_url(path, registry_base_url)
logger.debug('Making package registry request: GET {}'.format(url))
def _get(package_name, registry_base_url=None):
url = _get_url(package_name, registry_base_url)
fire_event(RegistryProgressMakingGETRequest(url=url))
# all exceptions from requests get caught in the retry logic so no need to wrap this here
resp = requests.get(url, timeout=30)
logger.debug('Response from registry: GET {} {}'.format(url,
resp.status_code))
fire_event(RegistryProgressGETResponse(url=url, resp_code=resp.status_code))
resp.raise_for_status()
return resp.json()
# The response should always be a dictionary. Anything else is unexpected, raise an error.
# Raising this error will cause this function to retry (if called within _get_with_retries)
# and hopefully get a valid response. This seems to happen when there's an issue with the Hub.
# Since we control what we expect the Hub to return, this is safe.
# See https://github.com/dbt-labs/dbt-core/issues/4577
# and https://github.com/dbt-labs/dbt-core/issues/4849
response = resp.json()
def index(registry_base_url=None):
return _get_with_retries('api/v1/index.json', registry_base_url)
if not isinstance(response, dict):  # This will also catch NoneType
error_msg = (
f"Request error: Expected a response type of <dict> but got {type(response)} instead"
)
fire_event(RegistryResponseUnexpectedType(response=response))
raise requests.exceptions.ContentDecodingError(error_msg, response=resp)
# check for expected top level keys
expected_keys = {"name", "versions"}
if not expected_keys.issubset(response):
error_msg = (
f"Request error: Expected the response to contain keys {expected_keys} "
f"but is missing {expected_keys.difference(set(response))}"
)
fire_event(RegistryResponseMissingTopKeys(response=response))
raise requests.exceptions.ContentDecodingError(error_msg, response=resp)
index_cached = memoized(index)
# check for the keys we need nested under each version
expected_version_keys = {"name", "packages", "downloads"}
all_keys = set().union(*(response["versions"][d] for d in response["versions"]))
if not expected_version_keys.issubset(all_keys):
error_msg = (
"Request error: Expected the response for the version to contain keys "
f"{expected_version_keys} but is missing {expected_version_keys.difference(all_keys)}"
)
fire_event(RegistryResponseMissingNestedKeys(response=response))
raise requests.exceptions.ContentDecodingError(error_msg, response=resp)
def packages(registry_base_url=None):
return _get_with_retries('api/v1/packages.json', registry_base_url)
def package(name, registry_base_url=None):
response = _get_with_retries('api/v1/{}.json'.format(name), registry_base_url)
# Either redirectnamespace or redirectname in the JSON response indicate a redirect
# redirectnamespace redirects based on package ownership
# redirectname redirects based on package name
# Both can be present at the same time, or neither. Fails gracefully to old name
if ('redirectnamespace' in response) or ('redirectname' in response):
if ('redirectnamespace' in response) and response['redirectnamespace'] is not None:
use_namespace = response['redirectnamespace']
else:
use_namespace = response['namespace']
if ('redirectname' in response) and response['redirectname'] is not None:
use_name = response['redirectname']
else:
use_name = response['name']
new_nwo = use_namespace + "/" + use_name
deprecations.warn('package-redirect', old_name=name, new_name=new_nwo)
# all version responses should contain identical keys.
has_extra_keys = set().difference(*(response["versions"][d] for d in response["versions"]))
if has_extra_keys:
error_msg = (
"Request error: Keys for all versions do not match. Found extra key(s) "
f"of {has_extra_keys}."
)
fire_event(RegistryResponseExtraNestedKeys(response=response))
raise requests.exceptions.ContentDecodingError(error_msg, response=resp)
return response
def package_version(name, version, registry_base_url=None):
return _get_with_retries('api/v1/{}/{}.json'.format(name, version), registry_base_url)
_get_cached = memoized(_get_with_retries)
def get_available_versions(name):
response = package(name)
return list(response['versions'])
def package(package_name, registry_base_url=None) -> Dict[str, Any]:
# returns a dictionary of metadata for all versions of a package
response = _get_cached(package_name, registry_base_url)
# Either redirectnamespace or redirectname in the JSON response indicates a redirect
# redirectnamespace redirects based on package ownership
# redirectname redirects based on package name
# Both can be present at the same time, or neither. Falls back gracefully to the old name
if ("redirectnamespace" in response) or ("redirectname" in response):
if ("redirectnamespace" in response) and response["redirectnamespace"] is not None:
use_namespace = response["redirectnamespace"]
else:
use_namespace = response["namespace"]
if ("redirectname" in response) and response["redirectname"] is not None:
use_name = response["redirectname"]
else:
use_name = response["name"]
new_nwo = use_namespace + "/" + use_name
deprecations.warn("package-redirect", old_name=package_name, new_name=new_nwo)
return response["versions"]
def package_version(package_name, version, registry_base_url=None) -> Dict[str, Any]:
# returns the metadata of a specific version of a package
response = package(package_name, registry_base_url)
return response[version]
def get_available_versions(package_name) -> List["str"]:
# returns a list of all available versions of a package
response = package(package_name)
return list(response)
def _get_index(registry_base_url=None):
url = _get_url("index", registry_base_url)
fire_event(RegistryIndexProgressMakingGETRequest(url=url))
# all exceptions from requests get caught in the retry logic so no need to wrap this here
resp = requests.get(url, timeout=30)
fire_event(RegistryIndexProgressGETResponse(url=url, resp_code=resp.status_code))
resp.raise_for_status()
# The response should be a list. Anything else is unexpected, raise an error.
# Raising this error will cause this function to retry and hopefully get a valid response.
response = resp.json()
if not isinstance(response, list):  # This will also catch NoneType
error_msg = (
f"Request error: The response type of {type(response)} is not valid: {resp.text}"
)
raise requests.exceptions.ContentDecodingError(error_msg, response=resp)
return response
def index(registry_base_url=None) -> List[str]:
# this returns a list of all packages on the Hub
get_index_fn = functools.partial(_get_index, registry_base_url)
return connection_exception_retry(get_index_fn, 5)
index_cached = memoized(index)
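A rough sketch of the response validation the rewritten registry client performs, using a made-up payload shaped like the Hub's package endpoint; dbt's version also fires structured events and runs inside the retry helper.

```python
import requests

EXPECTED_TOP_KEYS = {"name", "versions"}
EXPECTED_VERSION_KEYS = {"name", "packages", "downloads"}

def validate_package_payload(payload) -> None:
    # Anything that is not a dict (including None) is rejected outright.
    if not isinstance(payload, dict):
        raise requests.exceptions.ContentDecodingError(
            f"Expected a response type of <dict> but got {type(payload)} instead"
        )
    if not EXPECTED_TOP_KEYS.issubset(payload):
        raise requests.exceptions.ContentDecodingError(
            f"Missing top-level keys: {EXPECTED_TOP_KEYS.difference(payload)}"
        )
    version_keys = set().union(*(payload["versions"][v] for v in payload["versions"]))
    if not EXPECTED_VERSION_KEYS.issubset(version_keys):
        raise requests.exceptions.ContentDecodingError(
            f"Missing nested version keys: {EXPECTED_VERSION_KEYS.difference(version_keys)}"
        )

# Hypothetical well-formed payload: passes silently.
validate_package_payload(
    {"name": "dbt_utils", "versions": {"0.8.4": {"name": "dbt_utils", "packages": [], "downloads": {}}}}
)
```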

View File

@@ -11,15 +11,21 @@ import sys
import tarfile
import requests
import stat
from typing import (
Type, NoReturn, List, Optional, Dict, Any, Tuple, Callable, Union
)
from typing import Type, NoReturn, List, Optional, Dict, Any, Tuple, Callable, Union
from dbt.events.functions import fire_event
from dbt.events.types import (
SystemErrorRetrievingModTime,
SystemCouldNotWrite,
SystemExecutingCmd,
SystemStdOutMsg,
SystemStdErrMsg,
SystemReportReturnCode,
)
import dbt.exceptions
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.utils import _connection_exception_retry as connection_exception_retry
if sys.platform == 'win32':
if sys.platform == "win32":
from ctypes import WinDLL, c_bool
else:
WinDLL = None
@@ -51,38 +57,35 @@ def find_matching(
reobj = re.compile(regex, re.IGNORECASE)
for relative_path_to_search in relative_paths_to_search:
absolute_path_to_search = os.path.join(
root_path, relative_path_to_search)
absolute_path_to_search = os.path.join(root_path, relative_path_to_search)
walk_results = os.walk(absolute_path_to_search)
for current_path, subdirectories, local_files in walk_results:
for local_file in local_files:
absolute_path = os.path.join(current_path, local_file)
relative_path = os.path.relpath(
absolute_path, absolute_path_to_search
)
relative_path = os.path.relpath(absolute_path, absolute_path_to_search)
modification_time = 0.0
try:
modification_time = os.path.getmtime(absolute_path)
except OSError:
logger.exception(
f"Error retrieving modification time for file {absolute_path}"
)
fire_event(SystemErrorRetrievingModTime(path=absolute_path))
if reobj.match(local_file):
matching.append({
'searched_path': relative_path_to_search,
'absolute_path': absolute_path,
'relative_path': relative_path,
'modification_time': modification_time,
})
matching.append(
{
"searched_path": relative_path_to_search,
"absolute_path": absolute_path,
"relative_path": relative_path,
"modification_time": modification_time,
}
)
return matching
def load_file_contents(path: str, strip: bool = True) -> str:
path = convert_path(path)
with open(path, 'rb') as handle:
to_return = handle.read().decode('utf-8')
with open(path, "rb") as handle:
to_return = handle.read().decode("utf-8")
if strip:
to_return = to_return.strip()
@@ -109,14 +112,14 @@ def make_directory(path: str) -> None:
raise e
def make_file(path: str, contents: str = '', overwrite: bool = False) -> bool:
def make_file(path: str, contents: str = "", overwrite: bool = False) -> bool:
"""
Make a file at `path` assuming that the directory it resides in already
exists. The file is saved with contents `contents`
"""
if overwrite or not os.path.exists(path):
path = convert_path(path)
with open(path, 'w') as fh:
with open(path, "w") as fh:
fh.write(contents)
return True
@@ -128,7 +131,7 @@ def make_symlink(source: str, link_path: str) -> None:
Create a symlink at `link_path` referring to `source`.
"""
if not supports_symlinks():
dbt.exceptions.system_error('create a symbolic link')
dbt.exceptions.system_error("create a symbolic link")
os.symlink(source, link_path)
@@ -137,11 +140,11 @@ def supports_symlinks() -> bool:
return getattr(os, "symlink", None) is not None
def write_file(path: str, contents: str = '') -> bool:
def write_file(path: str, contents: str = "") -> bool:
path = convert_path(path)
try:
make_directory(os.path.dirname(path))
with open(path, 'w', encoding='utf-8') as f:
with open(path, "w", encoding="utf-8") as f:
f.write(str(contents))
except Exception as exc:
# note that you can't just catch FileNotFound, because sometimes
@@ -150,21 +153,18 @@ def write_file(path: str, contents: str = '') -> bool:
# sometimes windows fails to write paths that are less than the length
# limit. So on windows, suppress all errors that happen from writing
# to disk.
if os.name == 'nt':
if os.name == "nt":
# sometimes we get a winerror of 3 which means the path was
# definitely too long, but other times we don't and it means the
# path was just probably too long. This is probably based on the
# windows/python version.
if getattr(exc, 'winerror', 0) == 3:
reason = 'Path was too long'
if getattr(exc, "winerror", 0) == 3:
reason = "Path was too long"
else:
reason = 'Path was possibly too long'
reason = "Path was possibly too long"
# all our hard work and the path was still too long. Log and
# continue.
logger.debug(
f'Could not write to path {path}({len(path)} characters): '
f'{reason}\nexception: {exc}'
)
fire_event(SystemCouldNotWrite(path=path, reason=reason, exc=exc))
else:
raise
return True
@@ -178,9 +178,7 @@ def write_json(path: str, data: Dict[str, Any]) -> bool:
return write_file(path, json.dumps(data, cls=dbt.utils.JSONEncoder))
def _windows_rmdir_readonly(
func: Callable[[str], Any], path: str, exc: Tuple[Any, OSError, Any]
):
def _windows_rmdir_readonly(func: Callable[[str], Any], path: str, exc: Tuple[Any, OSError, Any]):
exception_val = exc[1]
if exception_val.errno == errno.EACCES:
os.chmod(path, stat.S_IWUSR)
@@ -197,10 +195,7 @@ def resolve_path_from_base(path_to_resolve: str, base_path: str) -> str:
If path_to_resolve is an absolute path or a user path (~), just
resolve it to an absolute path and return.
"""
return os.path.abspath(
os.path.join(
base_path,
os.path.expanduser(path_to_resolve)))
return os.path.abspath(os.path.join(base_path, os.path.expanduser(path_to_resolve)))
def rmdir(path: str) -> None:
@@ -210,7 +205,7 @@ def rmdir(path: str) -> None:
cloned via git) can cause rmtree to throw a PermissionError exception
"""
path = convert_path(path)
if sys.platform == 'win32':
if sys.platform == "win32":
onerror = _windows_rmdir_readonly
else:
onerror = None
@@ -229,7 +224,7 @@ def _win_prepare_path(path: str) -> str:
# letter back in.
# Unless it starts with '\\'. In that case, the path is a UNC mount point
# and splitdrive will be fine.
if not path.startswith('\\\\') and path.startswith('\\'):
if not path.startswith("\\\\") and path.startswith("\\"):
curdrive = os.path.splitdrive(os.getcwd())[0]
path = curdrive + path
@@ -244,7 +239,7 @@ def _win_prepare_path(path: str) -> str:
def _supports_long_paths() -> bool:
if sys.platform != 'win32':
if sys.platform != "win32":
return True
# Eryk Sun says to use `WinDLL('ntdll')` instead of `windll.ntdll` because
# of pointer caching in a comment here:
@@ -252,11 +247,11 @@ def _supports_long_paths() -> bool:
# I don't know exactly what he means, but I am inclined to believe him as
# he's pretty active on Python windows bugs!
try:
dll = WinDLL('ntdll')
dll = WinDLL("ntdll")
except OSError: # I don't think this happens? you need ntdll to run python
return False
# not all windows versions have it at all
if not hasattr(dll, 'RtlAreLongPathsEnabled'):
if not hasattr(dll, "RtlAreLongPathsEnabled"):
return False
# tell windows we want to get back a single unsigned byte (a bool).
dll.RtlAreLongPathsEnabled.restype = c_bool
@@ -276,7 +271,7 @@ def convert_path(path: str) -> str:
if _supports_long_paths():
return path
prefix = '\\\\?\\'
prefix = "\\\\?\\"
# Nothing to do
if path.startswith(prefix):
return path
@@ -307,44 +302,40 @@ def path_is_symlink(path: str) -> bool:
def open_dir_cmd() -> str:
# https://docs.python.org/2/library/sys.html#sys.platform
if sys.platform == 'win32':
return 'start'
if sys.platform == "win32":
return "start"
elif sys.platform == 'darwin':
return 'open'
elif sys.platform == "darwin":
return "open"
else:
return 'xdg-open'
return "xdg-open"
def _handle_posix_cwd_error(
exc: OSError, cwd: str, cmd: List[str]
) -> NoReturn:
def _handle_posix_cwd_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
if exc.errno == errno.ENOENT:
message = 'Directory does not exist'
message = "Directory does not exist"
elif exc.errno == errno.EACCES:
message = 'Current user cannot access directory, check permissions'
message = "Current user cannot access directory, check permissions"
elif exc.errno == errno.ENOTDIR:
message = 'Not a directory'
message = "Not a directory"
else:
message = 'Unknown OSError: {} - cwd'.format(str(exc))
message = "Unknown OSError: {} - cwd".format(str(exc))
raise dbt.exceptions.WorkingDirectoryError(cwd, cmd, message)
def _handle_posix_cmd_error(
exc: OSError, cwd: str, cmd: List[str]
) -> NoReturn:
def _handle_posix_cmd_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
if exc.errno == errno.ENOENT:
message = "Could not find command, ensure it is in the user's PATH"
elif exc.errno == errno.EACCES:
message = 'User does not have permissions for this command'
message = "User does not have permissions for this command"
else:
message = 'Unknown OSError: {} - cmd'.format(str(exc))
message = "Unknown OSError: {} - cmd".format(str(exc))
raise dbt.exceptions.ExecutableError(cwd, cmd, message)
def _handle_posix_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
"""OSError handling for posix systems.
"""OSError handling for POSIX systems.
Some things that could happen to trigger an OSError:
- cwd could not exist
@@ -364,7 +355,7 @@ def _handle_posix_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
- exc.errno == EACCES
- exc.filename == None(?)
"""
if getattr(exc, 'filename', None) == cwd:
if getattr(exc, "filename", None) == cwd:
_handle_posix_cwd_error(exc, cwd, cmd)
else:
_handle_posix_cmd_error(exc, cwd, cmd)
@@ -373,46 +364,46 @@ def _handle_posix_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
def _handle_windows_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
cls: Type[dbt.exceptions.Exception] = dbt.exceptions.CommandError
if exc.errno == errno.ENOENT:
message = ("Could not find command, ensure it is in the user's PATH "
"and that the user has permissions to run it")
message = (
"Could not find command, ensure it is in the user's PATH "
"and that the user has permissions to run it"
)
cls = dbt.exceptions.ExecutableError
elif exc.errno == errno.ENOEXEC:
message = ('Command was not executable, ensure it is valid')
message = "Command was not executable, ensure it is valid"
cls = dbt.exceptions.ExecutableError
elif exc.errno == errno.ENOTDIR:
message = ('Unable to cd: path does not exist, user does not have'
' permissions, or not a directory')
message = (
"Unable to cd: path does not exist, user does not have"
" permissions, or not a directory"
)
cls = dbt.exceptions.WorkingDirectoryError
else:
message = 'Unknown error: {} (errno={}: "{}")'.format(
str(exc), exc.errno, errno.errorcode.get(exc.errno, '<Unknown!>')
str(exc), exc.errno, errno.errorcode.get(exc.errno, "<Unknown!>")
)
raise cls(cwd, cmd, message)
def _interpret_oserror(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
"""Interpret an OSError exc and raise the appropriate dbt exception.
"""
"""Interpret an OSError exception and raise the appropriate dbt exception."""
if len(cmd) == 0:
raise dbt.exceptions.CommandError(cwd, cmd)
# all of these functions raise unconditionally
if os.name == 'nt':
if os.name == "nt":
_handle_windows_error(exc, cwd, cmd)
else:
_handle_posix_error(exc, cwd, cmd)
# this should not be reachable, raise _something_ at least!
raise dbt.exceptions.InternalException(
'Unhandled exception in _interpret_oserror: {}'.format(exc)
"Unhandled exception in _interpret_oserror: {}".format(exc)
)
def run_cmd(
cwd: str, cmd: List[str], env: Optional[Dict[str, Any]] = None
) -> Tuple[bytes, bytes]:
logger.debug('Executing "{}"'.format(' '.join(cmd)))
def run_cmd(cwd: str, cmd: List[str], env: Optional[Dict[str, Any]] = None) -> Tuple[bytes, bytes]:
fire_event(SystemExecutingCmd(cmd=cmd))
if len(cmd) == 0:
raise dbt.exceptions.CommandError(cwd, cmd)
@@ -428,23 +419,19 @@ def run_cmd(
if exe_pth:
cmd = [os.path.abspath(exe_pth)] + list(cmd[1:])
proc = subprocess.Popen(
cmd,
cwd=cwd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=full_env)
cmd, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=full_env
)
out, err = proc.communicate()
except OSError as exc:
_interpret_oserror(exc, cwd, cmd)
logger.debug('STDOUT: "{!s}"'.format(out))
logger.debug('STDERR: "{!s}"'.format(err))
fire_event(SystemStdOutMsg(bmsg=out))
fire_event(SystemStdErrMsg(bmsg=err))
if proc.returncode != 0:
logger.debug('command return code={}'.format(proc.returncode))
raise dbt.exceptions.CommandResultError(cwd, cmd, proc.returncode,
out, err)
fire_event(SystemReportReturnCode(returncode=proc.returncode))
raise dbt.exceptions.CommandResultError(cwd, cmd, proc.returncode, out, err)
return out, err
@@ -456,13 +443,11 @@ def download_with_retries(
connection_exception_retry(download_fn, 5)
def download(
url: str, path: str, timeout: Optional[Union[float, tuple]] = None
) -> None:
def download(url: str, path: str, timeout: Optional[Union[float, tuple]] = None) -> None:
path = convert_path(path)
connection_timeout = timeout or float(os.getenv('DBT_HTTP_TIMEOUT', 10))
connection_timeout = timeout or float(os.getenv("DBT_HTTP_TIMEOUT", 10))
response = requests.get(url, timeout=connection_timeout)
with open(path, 'wb') as handle:
with open(path, "wb") as handle:
for block in response.iter_content(1024 * 64):
handle.write(block)
@@ -481,12 +466,10 @@ def rename(from_path: str, to_path: str, force: bool = False) -> None:
shutil.move(from_path, to_path)
def untar_package(
tar_path: str, dest_dir: str, rename_to: Optional[str] = None
) -> None:
def untar_package(tar_path: str, dest_dir: str, rename_to: Optional[str] = None) -> None:
tar_path = convert_path(tar_path)
tar_dir_name = None
with tarfile.open(tar_path, 'r') as tarball:
with tarfile.open(tar_path, "r:gz") as tarball:
tarball.extractall(dest_dir)
tar_dir_name = os.path.commonprefix(tarball.getnames())
if rename_to:
@@ -502,7 +485,7 @@ def chmod_and_retry(func, path, exc_info):
We want to retry most operations here, but listdir is one that we know will
be useless.
"""
if func is os.listdir or os.name != 'nt':
if func is os.listdir or os.name != "nt":
raise
os.chmod(path, stat.S_IREAD | stat.S_IWRITE)
# on error, this will raise.
@@ -518,12 +501,12 @@ def move(src, dst):
directory on windows when it has read-only files in it and the move is
between two drives.
This is almost identical to the real shutil.move, except it uses our rmtree
and skips handling non-windows OSes since the existing one works ok there.
"""
src = convert_path(src)
dst = convert_path(dst)
if os.name != 'nt':
if os.name != "nt":
return shutil.move(src, dst)
if os.path.isdir(dst):
@@ -531,7 +514,7 @@ def move(src, dst):
os.rename(src, dst)
return
dst = os.path.join(dst, os.path.basename(src.rstrip('/\\')))
dst = os.path.join(dst, os.path.basename(src.rstrip("/\\")))
if os.path.exists(dst):
raise EnvironmentError("Path '{}' already exists".format(dst))
@@ -540,11 +523,10 @@ def move(src, dst):
except OSError:
# probably different drives
if os.path.isdir(src):
if _absnorm(dst + '\\').startswith(_absnorm(src + '\\')):
if _absnorm(dst + "\\").startswith(_absnorm(src + "\\")):
# dst is inside src
raise EnvironmentError(
"Cannot move a directory '{}' into itself '{}'"
.format(src, dst)
"Cannot move a directory '{}' into itself '{}'".format(src, dst)
)
shutil.copytree(src, dst, symlinks=True)
rmtree(src)
@@ -554,7 +536,7 @@ def move(src, dst):
def rmtree(path):
"""Recursively remove path. On permissions errors on windows, try to remove
"""Recursively remove the path. On permissions errors on windows, try to remove
the read-only flag and try again.
"""
path = convert_path(path)

View File

@@ -4,16 +4,11 @@ import yaml
# the C version is faster, but it doesn't always exist
try:
from yaml import (
CLoader as Loader,
CSafeLoader as SafeLoader,
CDumper as Dumper
)
from yaml import CLoader as Loader, CSafeLoader as SafeLoader, CDumper as Dumper
except ImportError:
from yaml import ( # type: ignore # noqa: F401
Loader, SafeLoader, Dumper
)
from yaml import Loader, SafeLoader, Dumper # type: ignore # noqa: F401
from dbt.ui import warning_tag
YAML_ERROR_MESSAGE = """
Syntax error near line {line_number}
@@ -26,20 +21,38 @@ Raw Error:
""".strip()
class UniqueKeyLoader(SafeLoader):
"""A subclass that checks for unique yaml mapping nodes.
This class extends `SafeLoader` from the `yaml` library to check for
unique top level keys (mapping nodes). See issue (https://github.com/yaml/pyyaml/issues/165)
and solution (https://gist.github.com/pypt/94d747fe5180851196eb?permalink_comment_id=4015118).
"""
def construct_mapping(self, node, deep=False):
mapping = set()
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
if key in mapping:
raise dbt.exceptions.DuplicateYamlKeyException(
f"Duplicate {key!r} key found in yaml file"
)
mapping.add(key)
return super().construct_mapping(node, deep)
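A self-contained sketch of the duplicate-key behaviour, using plain PyYAML and a generic exception; dbt's loader raises DuplicateYamlKeyException, which load_yaml_text downgrades to a warning.

```python
import yaml

class _UniqueKeyLoader(yaml.SafeLoader):
    def construct_mapping(self, node, deep=False):
        seen = set()
        for key_node, _value_node in node.value:
            key = self.construct_object(key_node, deep=deep)
            if key in seen:
                raise ValueError(f"Duplicate {key!r} key found in yaml file")
            seen.add(key)
        return super().construct_mapping(node, deep)

yaml.load("models:\n  - name: a\n", Loader=_UniqueKeyLoader)   # parses fine
# yaml.load("a: 1\na: 2\n", Loader=_UniqueKeyLoader)           # would raise ValueError
```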
def line_no(i, line, width=3):
line_number = str(i).ljust(width)
return "{}| {}".format(line_number, line)
def prefix_with_line_numbers(string, no_start, no_end):
line_list = string.split('\n')
line_list = string.split("\n")
numbers = range(no_start, no_end)
relevant_lines = line_list[no_start:no_end]
return "\n".join([
line_no(i + 1, line) for (i, line) in zip(numbers, relevant_lines)
])
return "\n".join([line_no(i + 1, line) for (i, line) in zip(numbers, relevant_lines)])
def contextualized_yaml_error(raw_contents, error):
@@ -50,22 +63,26 @@ def contextualized_yaml_error(raw_contents, error):
nice_error = prefix_with_line_numbers(raw_contents, min_line, max_line)
return YAML_ERROR_MESSAGE.format(line_number=mark.line + 1,
nice_error=nice_error,
raw_error=error)
return YAML_ERROR_MESSAGE.format(
line_number=mark.line + 1, nice_error=nice_error, raw_error=error
)
def safe_load(contents) -> Optional[Dict[str, Any]]:
return yaml.load(contents, Loader=SafeLoader)
return yaml.load(contents, Loader=UniqueKeyLoader)
def load_yaml_text(contents):
def load_yaml_text(contents, path=None):
try:
return safe_load(contents)
except (yaml.scanner.ScannerError, yaml.YAMLError) as e:
if hasattr(e, 'problem_mark'):
if hasattr(e, "problem_mark"):
error = contextualized_yaml_error(contents, e)
else:
error = str(e)
raise dbt.exceptions.ValidationException(error)
except dbt.exceptions.DuplicateYamlKeyException as e:
# TODO: We may want to raise an exception instead of a warning in the future.
e.msg = f"{e} {path.searched_path}/{path.relative_path}."
dbt.exceptions.warn_or_raise(e, log_fmt=warning_tag("{}"))

View File

@@ -3,13 +3,14 @@ from collections import defaultdict
from typing import List, Dict, Any, Tuple, cast, Optional
import networkx as nx # type: ignore
import pickle
import sqlparse
from dbt import flags
from dbt.adapters.factory import get_adapter
from dbt.clients import jinja
from dbt.clients.system import make_directory
from dbt.context.providers import generate_runtime_model
from dbt.context.providers import generate_runtime_model_context
from dbt.contracts.graph.manifest import Manifest, UniqueID
from dbt.contracts.graph.compiled import (
COMPILED_TYPES,
@@ -26,33 +27,35 @@ from dbt.exceptions import (
RuntimeException,
)
from dbt.graph import Graph
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import FoundStats, CompilingNode, WritingInjectedSQLForNode
from dbt.node_types import NodeType
from dbt.utils import pluralize
from dbt.events.format import pluralize
import dbt.tracking
graph_file_name = 'graph.gpickle'
graph_file_name = "graph.gpickle"
def _compiled_type_for(model: ParsedNode):
if type(model) not in COMPILED_TYPES:
raise InternalException(
f'Asked to compile {type(model)} node, but it has no compiled form'
f"Asked to compile {type(model)} node, but it has no compiled form"
)
return COMPILED_TYPES[type(model)]
def print_compile_stats(stats):
names = {
NodeType.Model: 'model',
NodeType.Test: 'test',
NodeType.Snapshot: 'snapshot',
NodeType.Analysis: 'analysis',
NodeType.Macro: 'macro',
NodeType.Operation: 'operation',
NodeType.Seed: 'seed file',
NodeType.Source: 'source',
NodeType.Exposure: 'exposure',
NodeType.Model: "model",
NodeType.Test: "test",
NodeType.Snapshot: "snapshot",
NodeType.Analysis: "analysis",
NodeType.Macro: "macro",
NodeType.Operation: "operation",
NodeType.Seed: "seed file",
NodeType.Source: "source",
NodeType.Exposure: "exposure",
NodeType.Metric: "metric",
}
results = {k: 0 for k in names.keys()}
@@ -63,12 +66,9 @@ def print_compile_stats(stats):
resource_counts = {k.pluralize(): v for k, v in results.items()}
dbt.tracking.track_resource_counts(resource_counts)
stat_line = ", ".join([
pluralize(ct, names.get(t)) for t, ct in results.items()
if t in names
])
stat_line = ", ".join([pluralize(ct, names.get(t)) for t, ct in results.items() if t in names])
logger.info("Found {}".format(stat_line))
fire_event(FoundStats(stat_line=stat_line))
def _node_enabled(node: ManifestNode):
@@ -89,6 +89,8 @@ def _generate_stats(manifest: Manifest):
stats[source.resource_type] += 1
for exposure in manifest.exposures.values():
stats[exposure.resource_type] += 1
for metric in manifest.metrics.values():
stats[metric.resource_type] += 1
for macro in manifest.macros.values():
stats[macro.resource_type] += 1
return stats
@@ -108,13 +110,13 @@ def _extend_prepended_ctes(prepended_ctes, new_prepended_ctes):
def _get_tests_for_node(manifest: Manifest, unique_id: UniqueID) -> List[UniqueID]:
""" Get a list of tests that depend on the node with the
provided unique id """
"""Get a list of tests that depend on the node with the
provided unique id"""
tests = []
if unique_id in manifest.child_map:
for child_unique_id in manifest.child_map[unique_id]:
if child_unique_id.startswith('test.'):
if child_unique_id.startswith("test."):
tests.append(child_unique_id)
return tests
@@ -158,7 +160,8 @@ class Linker:
for node_id in self.graph:
data = manifest.expect(node_id).to_dict(omit_none=True)
out_graph.add_node(node_id, **data)
nx.write_gpickle(out_graph, outfile)
with open(outfile, "wb") as outfh:
pickle.dump(out_graph, outfh, protocol=pickle.HIGHEST_PROTOCOL)
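A standalone sketch of the new persistence path, assuming an arbitrary output filename: the graph is pickled directly rather than via nx.write_gpickle, which is deprecated in newer networkx releases.

```python
import pickle
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("model.proj.a", "model.proj.b")

# Write the graph the same way Linker.write_graph now does.
with open("graph.gpickle", "wb") as outfh:
    pickle.dump(graph, outfh, protocol=pickle.HIGHEST_PROTOCOL)

# Reading it back is a plain unpickle.
with open("graph.gpickle", "rb") as infh:
    assert list(pickle.load(infh).edges) == [("model.proj.a", "model.proj.b")]
```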
class Compiler:
@@ -178,9 +181,7 @@ class Compiler:
extra_context: Dict[str, Any],
) -> Dict[str, Any]:
context = generate_runtime_model(
node, self.config, manifest
)
context = generate_runtime_model_context(node, self.config, manifest)
context.update(extra_context)
if isinstance(node, CompiledGenericTestNode):
# for test nodes, add a special keyword args value to the context
@@ -238,26 +239,21 @@ class Compiler:
with_stmt = None
for token in parsed.tokens:
if token.is_keyword and token.normalized == 'WITH':
if token.is_keyword and token.normalized == "WITH":
with_stmt = token
break
if with_stmt is None:
# no with stmt, add one, and inject CTEs right at the beginning
first_token = parsed.token_first()
with_stmt = sqlparse.sql.Token(sqlparse.tokens.Keyword, 'with')
with_stmt = sqlparse.sql.Token(sqlparse.tokens.Keyword, "with")
parsed.insert_before(first_token, with_stmt)
else:
# stmt exists, add a comma (which will come after injected CTEs)
trailing_comma = sqlparse.sql.Token(
sqlparse.tokens.Punctuation, ','
)
trailing_comma = sqlparse.sql.Token(sqlparse.tokens.Punctuation, ",")
parsed.insert_after(with_stmt, trailing_comma)
token = sqlparse.sql.Token(
sqlparse.tokens.Keyword,
", ".join(c.sql for c in ctes)
)
token = sqlparse.sql.Token(sqlparse.tokens.Keyword, ", ".join(c.sql for c in ctes))
parsed.insert_after(with_stmt, token)
return str(parsed)
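A toy, self-contained illustration of the token manipulation above, with a made-up query and CTE body: when no WITH clause exists, one is inserted before the first token and the rendered CTE text is placed immediately after it.

```python
import sqlparse

parsed = sqlparse.parse("select * from my_cte")[0]
cte_sql = " my_cte as (\n    select 1 as id\n)"  # hypothetical rendered CTE body

first_token = parsed.token_first()
with_stmt = sqlparse.sql.Token(sqlparse.tokens.Keyword, "with")
parsed.insert_before(first_token, with_stmt)
parsed.insert_after(with_stmt, sqlparse.sql.Token(sqlparse.tokens.Keyword, cte_sql))

print(str(parsed))  # the statement now starts with the injected "with my_cte as (...)"
```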
@@ -276,9 +272,7 @@ class Compiler:
inserting CTEs into the SQL.
"""
if model.compiled_sql is None:
raise RuntimeException(
'Cannot inject ctes into an unparsed node', model
)
raise RuntimeException("Cannot inject ctes into an unparsed node", model)
if model.extra_ctes_injected:
return (model, model.extra_ctes)
@@ -299,17 +293,17 @@ class Compiler:
for cte in model.extra_ctes:
if cte.id not in manifest.nodes:
raise InternalException(
f'During compilation, found a cte reference that '
f'could not be resolved: {cte.id}'
f"During compilation, found a cte reference that "
f"could not be resolved: {cte.id}"
)
cte_model = manifest.nodes[cte.id]
if not cte_model.is_ephemeral_model:
raise InternalException(f'{cte.id} is not ephemeral')
raise InternalException(f"{cte.id} is not ephemeral")
# This model has already been compiled, so it's been
# through here before
if getattr(cte_model, 'compiled', False):
if getattr(cte_model, "compiled", False):
assert isinstance(cte_model, tuple(COMPILED_TYPES.values()))
cte_model = cast(NonSourceCompiledNode, cte_model)
new_prepended_ctes = cte_model.extra_ctes
@@ -318,13 +312,11 @@ class Compiler:
else:
# This is an ephemeral parsed model that we can compile.
# Compile and update the node
cte_model = self._compile_node(
cte_model, manifest, extra_context)
cte_model = self._compile_node(cte_model, manifest, extra_context)
# recursively call this method
cte_model, new_prepended_ctes = \
self._recursively_prepend_ctes(
cte_model, manifest, extra_context
)
cte_model, new_prepended_ctes = self._recursively_prepend_ctes(
cte_model, manifest, extra_context
)
# Save compiled SQL file and sync manifest
self._write_node(cte_model)
manifest.sync_update_node(cte_model)
@@ -332,10 +324,8 @@ class Compiler:
_extend_prepended_ctes(prepended_ctes, new_prepended_ctes)
new_cte_name = self.add_ephemeral_prefix(cte_model.name)
rendered_sql = (
cte_model._pre_injected_sql or cte_model.compiled_sql
)
sql = f' {new_cte_name} as (\n{rendered_sql}\n)'
rendered_sql = cte_model._pre_injected_sql or cte_model.compiled_sql
sql = f" {new_cte_name} as (\n{rendered_sql}\n)"
_add_prepended_cte(prepended_ctes, InjectedCTE(id=cte.id, sql=sql))
@@ -366,20 +356,20 @@ class Compiler:
if extra_context is None:
extra_context = {}
logger.debug("Compiling {}".format(node.unique_id))
fire_event(CompilingNode(unique_id=node.unique_id))
data = node.to_dict(omit_none=True)
data.update({
'compiled': False,
'compiled_sql': None,
'extra_ctes_injected': False,
'extra_ctes': [],
})
data.update(
{
"compiled": False,
"compiled_sql": None,
"extra_ctes_injected": False,
"extra_ctes": [],
}
)
compiled_node = _compiled_type_for(node).from_dict(data)
context = self._create_node_context(
compiled_node, manifest, extra_context
)
context = self._create_node_context(compiled_node, manifest, extra_context)
compiled_node.compiled_sql = jinja.get_rendered(
node.raw_sql,
@@ -399,85 +389,72 @@ class Compiler:
if flags.WRITE_JSON:
linker.write_graph(graph_path, manifest)
def link_node(
self, linker: Linker, node: GraphMemberNode, manifest: Manifest
):
def link_node(self, linker: Linker, node: GraphMemberNode, manifest: Manifest):
linker.add_node(node.unique_id)
for dependency in node.depends_on_nodes:
if dependency in manifest.nodes:
linker.dependency(
node.unique_id,
(manifest.nodes[dependency].unique_id)
)
linker.dependency(node.unique_id, (manifest.nodes[dependency].unique_id))
elif dependency in manifest.sources:
linker.dependency(
node.unique_id,
(manifest.sources[dependency].unique_id)
)
linker.dependency(node.unique_id, (manifest.sources[dependency].unique_id))
else:
dependency_not_found(node, dependency)
def link_graph(self, linker: Linker, manifest: Manifest):
def link_graph(self, linker: Linker, manifest: Manifest, add_test_edges: bool = False):
for source in manifest.sources.values():
linker.add_node(source.unique_id)
for node in manifest.nodes.values():
self.link_node(linker, node, manifest)
for exposure in manifest.exposures.values():
self.link_node(linker, exposure, manifest)
for metric in manifest.metrics.values():
self.link_node(linker, metric, manifest)
cycle = linker.find_cycles()
if cycle:
raise RuntimeError("Found a cycle: {}".format(cycle))
manifest.build_parent_and_child_maps()
if add_test_edges:
manifest.build_parent_and_child_maps()
self.add_test_edges(linker, manifest)
self.resolve_graph(linker, manifest)
def resolve_graph(self, linker: Linker, manifest: Manifest) -> None:
""" This method adds additional edges to the DAG. For a given non-test
def add_test_edges(self, linker: Linker, manifest: Manifest) -> None:
"""This method adds additional edges to the DAG. For a given non-test
executable node, add an edge from an upstream test to the given node if
the set of nodes the test depends on is a proper/strict subset of the
upstream nodes for the given node. """
the set of nodes the test depends on is a subset of the upstream nodes
for the given node."""
# Given a graph:
# model1 --> model2 --> model3
# | |
# | \/
# \/ test 2
# | |
# | \/
# \/ test 2
# test1
#
# Produce the following graph:
# model1 --> model2 --> model3
# | | /\ /\
# | \/ | |
# \/ test2 ------- |
# test1 -------------------
# | /\ | /\ /\
# | | \/ | |
# \/ | test2 ----| |
# test1 ----|---------------|
for node_id in linker.graph:
# If node is executable (in manifest.nodes) and does _not_
# represent a test, continue.
if (
node_id in manifest.nodes and
manifest.nodes[node_id].resource_type != NodeType.Test
node_id in manifest.nodes
and manifest.nodes[node_id].resource_type != NodeType.Test
):
# Get *everything* upstream of the node
all_upstream_nodes = nx.traversal.bfs_tree(
linker.graph, node_id, reverse=True
)
all_upstream_nodes = nx.traversal.bfs_tree(linker.graph, node_id, reverse=True)
# Get the set of upstream nodes not including the current node.
upstream_nodes = set([
n for n in all_upstream_nodes if n != node_id
])
upstream_nodes = set([n for n in all_upstream_nodes if n != node_id])
# Get all tests that depend on any upstream nodes.
upstream_tests = []
for upstream_node in upstream_nodes:
upstream_tests += _get_tests_for_node(
manifest,
upstream_node
)
upstream_tests += _get_tests_for_node(manifest, upstream_node)
for upstream_test in upstream_tests:
# Get the set of all nodes that the test depends on
@@ -486,26 +463,19 @@ class Compiler:
# relationship tests). Test nodes do not distinguish
# between what node the test is "testing" and what
# node(s) it depends on.
test_depends_on = set(
manifest.nodes[upstream_test].depends_on_nodes
)
test_depends_on = set(manifest.nodes[upstream_test].depends_on_nodes)
# If the set of nodes that an upstream test depends on
# is a proper (or strict) subset of all upstream nodes of
# the current node, add an edge from the upstream test
# to the current node. Must be a proper/strict subset to
# avoid adding a circular dependency to the graph.
if (test_depends_on < upstream_nodes):
linker.graph.add_edge(
upstream_test,
node_id
)
# is a subset of all upstream nodes of the current node,
# add an edge from the upstream test to the current node.
if test_depends_on.issubset(upstream_nodes):
linker.graph.add_edge(upstream_test, node_id)
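A toy version of the subset rule above, with hypothetical node ids: test1 depends only on model1, which is upstream of model2, so an edge from test1 to model2 is added and model2 will not run until test1 has finished.

```python
import networkx as nx

# Edges point from parent to child: model2 and test1 both depend on model1.
graph = nx.DiGraph()
graph.add_edges_from([("model1", "model2"), ("model1", "test1")])

node_id = "model2"
# Everything upstream of model2, excluding model2 itself.
upstream_nodes = set(nx.bfs_tree(graph, node_id, reverse=True)) - {node_id}
test_depends_on = {"model1"}  # depends_on_nodes of test1

if test_depends_on.issubset(upstream_nodes):
    graph.add_edge("test1", node_id)

assert ("test1", "model2") in graph.edges
```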
def compile(self, manifest: Manifest, write=True) -> Graph:
def compile(self, manifest: Manifest, write=True, add_test_edges=False) -> Graph:
self.initialize()
linker = Linker()
self.link_graph(linker, manifest)
self.link_graph(linker, manifest, add_test_edges)
stats = _generate_stats(manifest)
@@ -517,16 +487,13 @@ class Compiler:
# writes the "compiled_sql" into the target/compiled directory
def _write_node(self, node: NonSourceCompiledNode) -> ManifestNode:
if (not node.extra_ctes_injected or
node.resource_type == NodeType.Snapshot):
if not node.extra_ctes_injected or node.resource_type == NodeType.Snapshot:
return node
logger.debug(f'Writing injected SQL for node "{node.unique_id}"')
fire_event(WritingInjectedSQLForNode(unique_id=node.unique_id))
if node.compiled_sql:
node.compiled_path = node.write_node(
self.config.target_path,
'compiled',
node.compiled_sql
self.config.target_path, "compiled", node.compiled_sql
)
return node
@@ -545,9 +512,7 @@ class Compiler:
"""
node = self._compile_node(node, manifest, extra_context)
node, _ = self._recursively_prepend_ctes(
node, manifest, extra_context
)
node, _ = self._recursively_prepend_ctes(node, manifest, extra_context)
if write:
self._write_node(node)
return node

View File

@@ -0,0 +1 @@
# Config README

View File

@@ -15,14 +15,15 @@ from dbt.exceptions import DbtProjectError
from dbt.exceptions import ValidationException
from dbt.exceptions import RuntimeException
from dbt.exceptions import validator_error_message
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.types import MissingProfileTarget
from dbt.events.functions import fire_event
from dbt.utils import coerce_dict_str
from .renderer import ProfileRenderer
DEFAULT_THREADS = 1
DEFAULT_PROFILES_DIR = os.path.join(os.path.expanduser('~'), '.dbt')
DEFAULT_PROFILES_DIR = os.path.join(os.path.expanduser("~"), ".dbt")
INVALID_PROFILE_MESSAGE = """
dbt encountered an error while trying to read your profiles.yml file.
@@ -42,11 +43,13 @@ Here, [profile name] should be replaced with a profile name
defined in your profiles.yml file. You can find profiles.yml here:
{profiles_file}/profiles.yml
""".format(profiles_file=DEFAULT_PROFILES_DIR)
""".format(
profiles_file=DEFAULT_PROFILES_DIR
)
def read_profile(profiles_dir: str) -> Dict[str, Any]:
path = os.path.join(profiles_dir, 'profiles.yml')
path = os.path.join(profiles_dir, "profiles.yml")
contents = None
if os.path.isfile(path):
@@ -54,12 +57,8 @@ def read_profile(profiles_dir: str) -> Dict[str, Any]:
contents = load_file_contents(path, strip=False)
yaml_content = load_yaml_text(contents)
if not yaml_content:
msg = f'The profiles.yml file at {path} is empty'
raise DbtProfileError(
INVALID_PROFILE_MESSAGE.format(
error_string=msg
)
)
msg = f"The profiles.yml file at {path} is empty"
raise DbtProfileError(INVALID_PROFILE_MESSAGE.format(error_string=msg))
return yaml_content
except ValidationException as e:
msg = INVALID_PROFILE_MESSAGE.format(error_string=e)
@@ -72,7 +71,7 @@ def read_user_config(directory: str) -> UserConfig:
try:
profile = read_profile(directory)
if profile:
user_config = coerce_dict_str(profile.get('config', {}))
user_config = coerce_dict_str(profile.get("config", {}))
if user_config is not None:
UserConfig.validate(user_config)
return UserConfig.from_dict(user_config)
@@ -91,6 +90,7 @@ class Profile(HasCredentials):
user_config: UserConfig
threads: int
credentials: Credentials
profile_env_vars: Dict[str, Any]
def __init__(
self,
@@ -98,7 +98,7 @@ class Profile(HasCredentials):
target_name: str,
user_config: UserConfig,
threads: int,
credentials: Credentials
credentials: Credentials,
):
"""Explicitly defining `__init__` to work around bug in Python 3.9.7
https://bugs.python.org/issue45081
@@ -108,10 +108,9 @@ class Profile(HasCredentials):
self.user_config = user_config
self.threads = threads
self.credentials = credentials
self.profile_env_vars = {} # never available on init
def to_profile_info(
self, serialize_credentials: bool = False
) -> Dict[str, Any]:
def to_profile_info(self, serialize_credentials: bool = False) -> Dict[str, Any]:
"""Unlike to_project_config, this dict is not a mirror of any existing
on-disk data structure. It's used when creating a new profile from an
existing one.
@@ -121,34 +120,33 @@ class Profile(HasCredentials):
:returns dict: The serialized profile.
"""
result = {
'profile_name': self.profile_name,
'target_name': self.target_name,
'user_config': self.user_config,
'threads': self.threads,
'credentials': self.credentials,
"profile_name": self.profile_name,
"target_name": self.target_name,
"user_config": self.user_config,
"threads": self.threads,
"credentials": self.credentials,
}
if serialize_credentials:
result['user_config'] = self.user_config.to_dict(omit_none=True)
result['credentials'] = self.credentials.to_dict(omit_none=True)
result["user_config"] = self.user_config.to_dict(omit_none=True)
result["credentials"] = self.credentials.to_dict(omit_none=True)
return result
def to_target_dict(self) -> Dict[str, Any]:
target = dict(
self.credentials.connection_info(with_aliases=True)
target = dict(self.credentials.connection_info(with_aliases=True))
target.update(
{
"type": self.credentials.type,
"threads": self.threads,
"name": self.target_name,
"target_name": self.target_name,
"profile_name": self.profile_name,
"config": self.user_config.to_dict(omit_none=True),
}
)
target.update({
'type': self.credentials.type,
'threads': self.threads,
'name': self.target_name,
'target_name': self.target_name,
'profile_name': self.profile_name,
'config': self.user_config.to_dict(omit_none=True),
})
return target
def __eq__(self, other: object) -> bool:
if not (isinstance(other, self.__class__) and
isinstance(self, other.__class__)):
if not (isinstance(other, self.__class__) and isinstance(self, other.__class__)):
return NotImplemented
return self.to_profile_info() == other.to_profile_info()
@@ -168,14 +166,17 @@ class Profile(HasCredentials):
) -> Credentials:
# avoid an import cycle
from dbt.adapters.factory import load_plugin
# credentials carry their 'type' in their actual type, not their
# attributes. We do want this in order to pick our Credentials class.
if 'type' not in profile:
if "type" not in profile:
raise DbtProfileError(
'required field "type" not found in profile {} and target {}'
.format(profile_name, target_name))
'required field "type" not found in profile {} and target {}'.format(
profile_name, target_name
)
)
typename = profile.pop('type')
typename = profile.pop("type")
try:
cls = load_plugin(typename)
data = cls.translate_aliases(profile)
@@ -184,8 +185,9 @@ class Profile(HasCredentials):
except (RuntimeException, ValidationError) as e:
msg = str(e) if isinstance(e, RuntimeException) else e.message
raise DbtProfileError(
'Credentials in profile "{}", target "{}" invalid: {}'
.format(profile_name, target_name, msg)
'Credentials in profile "{}", target "{}" invalid: {}'.format(
profile_name, target_name, msg
)
) from e
return credentials
@@ -206,19 +208,19 @@ class Profile(HasCredentials):
def _get_profile_data(
profile: Dict[str, Any], profile_name: str, target_name: str
) -> Dict[str, Any]:
if 'outputs' not in profile:
raise DbtProfileError(
"outputs not specified in profile '{}'".format(profile_name)
)
outputs = profile['outputs']
if "outputs" not in profile:
raise DbtProfileError("outputs not specified in profile '{}'".format(profile_name))
outputs = profile["outputs"]
if target_name not in outputs:
outputs = '\n'.join(' - {}'.format(output)
for output in outputs)
msg = ("The profile '{}' does not have a target named '{}'. The "
"valid target names for this profile are:\n{}"
.format(profile_name, target_name, outputs))
raise DbtProfileError(msg, result_type='invalid_target')
outputs = "\n".join(" - {}".format(output) for output in outputs)
msg = (
"The profile '{}' does not have a target named '{}'. The "
"valid target names for this profile are:\n{}".format(
profile_name, target_name, outputs
)
)
raise DbtProfileError(msg, result_type="invalid_target")
profile_data = outputs[target_name]
if not isinstance(profile_data, dict):
@@ -226,7 +228,7 @@ class Profile(HasCredentials):
f"output '{target_name}' of profile '{profile_name}' is "
f"misconfigured in profiles.yml"
)
raise DbtProfileError(msg, result_type='invalid_target')
raise DbtProfileError(msg, result_type="invalid_target")
return profile_data
@@ -237,8 +239,8 @@ class Profile(HasCredentials):
threads: int,
profile_name: str,
target_name: str,
user_config: Optional[Dict[str, Any]] = None
) -> 'Profile':
user_config: Optional[Dict[str, Any]] = None,
) -> "Profile":
"""Create a profile from an existing set of Credentials and the
remaining information.
@@ -261,7 +263,7 @@ class Profile(HasCredentials):
target_name=target_name,
user_config=user_config_obj,
threads=threads,
credentials=credentials
credentials=credentials,
)
profile.validate()
return profile
@@ -286,19 +288,14 @@ class Profile(HasCredentials):
# name to extract a profile that we can render.
if target_override is not None:
target_name = target_override
elif 'target' in raw_profile:
elif "target" in raw_profile:
# render the target if it was parsed from yaml
target_name = renderer.render_value(raw_profile['target'])
target_name = renderer.render_value(raw_profile["target"])
else:
target_name = 'default'
logger.debug(
"target not specified in profile '{}', using '{}'"
.format(profile_name, target_name)
)
target_name = "default"
fire_event(MissingProfileTarget(profile_name=profile_name, target_name=target_name))
raw_profile_data = cls._get_profile_data(
raw_profile, profile_name, target_name
)
raw_profile_data = cls._get_profile_data(raw_profile, profile_name, target_name)
try:
profile_data = renderer.render_data(raw_profile_data)
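The target-selection precedence in render_profile can be summarised with a small hypothetical helper: a CLI override wins, then the profile's own rendered `target` key, then the literal default.

```python
def pick_target(target_override, raw_profile):
    # Mirrors the branch above: override > profile "target" key > "default".
    if target_override is not None:
        return target_override
    if "target" in raw_profile:
        return raw_profile["target"]
    return "default"

assert pick_target("ci", {"target": "prod"}) == "ci"
assert pick_target(None, {"target": "prod"}) == "prod"
assert pick_target(None, {}) == "default"
```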
@@ -315,7 +312,7 @@ class Profile(HasCredentials):
user_config: Optional[Dict[str, Any]] = None,
target_override: Optional[str] = None,
threads_override: Optional[int] = None,
) -> 'Profile':
) -> "Profile":
"""Create a profile from its raw profile information.
(this is an intermediate step, mostly useful for unit testing)
@@ -336,7 +333,7 @@ class Profile(HasCredentials):
"""
# user_config is not rendered.
if user_config is None:
user_config = raw_profile.get('config')
user_config = raw_profile.get("config")
# TODO: should it be, and the values coerced to bool?
target_name, profile_data = cls.render_profile(
raw_profile, profile_name, target_override, renderer
@@ -344,7 +341,7 @@ class Profile(HasCredentials):
# valid connections never include the number of threads, but it's
# stored on a per-connection level in the raw configs
threads = profile_data.pop('threads', DEFAULT_THREADS)
threads = profile_data.pop("threads", DEFAULT_THREADS)
if threads_override is not None:
threads = threads_override
@@ -357,7 +354,7 @@ class Profile(HasCredentials):
profile_name=profile_name,
target_name=target_name,
threads=threads,
user_config=user_config
user_config=user_config,
)
@classmethod
@@ -368,7 +365,7 @@ class Profile(HasCredentials):
renderer: ProfileRenderer,
target_override: Optional[str] = None,
threads_override: Optional[int] = None,
) -> 'Profile':
) -> "Profile":
"""
:param raw_profiles: The profile data, from disk as yaml.
:param profile_name: The profile name to use.
@@ -384,23 +381,15 @@ class Profile(HasCredentials):
:returns: The new Profile object.
"""
if profile_name not in raw_profiles:
raise DbtProjectError(
"Could not find profile named '{}'".format(profile_name)
)
raise DbtProjectError("Could not find profile named '{}'".format(profile_name))
# First, we've already got our final decision on profile name, and we
# don't render keys, so we can pluck that out
raw_profile = raw_profiles[profile_name]
if not raw_profile:
msg = (
f'Profile {profile_name} in profiles.yml is empty'
)
raise DbtProfileError(
INVALID_PROFILE_MESSAGE.format(
error_string=msg
)
)
user_config = raw_profiles.get('config')
msg = f"Profile {profile_name} in profiles.yml is empty"
raise DbtProfileError(INVALID_PROFILE_MESSAGE.format(error_string=msg))
user_config = raw_profiles.get("config")
return cls.from_raw_profile_info(
raw_profile=raw_profile,
@@ -417,7 +406,7 @@ class Profile(HasCredentials):
args: Any,
renderer: ProfileRenderer,
project_profile_name: Optional[str],
) -> 'Profile':
) -> "Profile":
"""Given the raw profiles as read from disk and the name of the desired
profile if specified, return the profile component of the runtime
config.
@@ -432,15 +421,14 @@ class Profile(HasCredentials):
target could not be found.
:returns Profile: The new Profile object.
"""
threads_override = getattr(args, 'threads', None)
target_override = getattr(args, 'target', None)
threads_override = getattr(args, "threads", None)
target_override = getattr(args, "target", None)
raw_profiles = read_profile(flags.PROFILES_DIR)
profile_name = cls.pick_profile_name(getattr(args, 'profile', None),
project_profile_name)
profile_name = cls.pick_profile_name(getattr(args, "profile", None), project_profile_name)
return cls.from_raw_profiles(
raw_profiles=raw_profiles,
profile_name=profile_name,
renderer=renderer,
target_override=target_override,
threads_override=threads_override
threads_override=threads_override,
)

View File

@@ -2,7 +2,13 @@ from copy import deepcopy
from dataclasses import dataclass, field
from itertools import chain
from typing import (
List, Dict, Any, Optional, TypeVar, Union, Mapping,
List,
Dict,
Any,
Optional,
TypeVar,
Union,
Mapping,
)
from typing_extensions import Protocol, runtime_checkable
@@ -45,7 +51,7 @@ INVALID_VERSION_ERROR = """\
This version of dbt is not supported with the '{package}' package.
Installed version of dbt: {installed}
Required version of dbt for '{package}': {version_spec}
Check the requirements for the '{package}' package, or run dbt again with \
Check for a different version of the '{package}' package, or run dbt again with \
--no-version-check
"""
@@ -54,7 +60,7 @@ IMPOSSIBLE_VERSION_ERROR = """\
The package version requirement can never be satisfied for the '{package}'
package.
Required versions of dbt for '{package}': {version_spec}
Check the requirements for the '{package}' package, or run dbt again with \
Check for a different version of the '{package}' package, or run dbt again with \
--no-version-check
"""
@@ -83,9 +89,7 @@ def _load_yaml(path):
def package_data_from_root(project_root):
package_filepath = resolve_path_from_base(
'packages.yml', project_root
)
package_filepath = resolve_path_from_base("packages.yml", project_root)
if path_exists(package_filepath):
packages_dict = _load_yaml(package_filepath)
@@ -96,15 +100,13 @@ def package_data_from_root(project_root):
def package_config_from_data(packages_data: Dict[str, Any]):
if not packages_data:
packages_data = {'packages': []}
packages_data = {"packages": []}
try:
PackageConfig.validate(packages_data)
packages = PackageConfig.from_dict(packages_data)
except ValidationError as e:
raise DbtProjectError(
MALFORMED_PACKAGE_ERROR.format(error=str(e.message))
) from e
raise DbtProjectError(MALFORMED_PACKAGE_ERROR.format(error=str(e.message))) from e
return packages
@@ -119,7 +121,7 @@ def _parse_versions(versions: Union[List[str], str]) -> List[VersionSpecifier]:
Regardless, this will return a list of VersionSpecifiers
"""
if isinstance(versions, str):
versions = versions.split(',')
versions = versions.split(",")
return [VersionSpecifier.from_version_string(v) for v in versions]
@@ -130,11 +132,14 @@ def _all_source_paths(
analysis_paths: List[str],
macro_paths: List[str],
) -> List[str]:
return list(chain(model_paths, seed_paths, snapshot_paths, analysis_paths,
macro_paths))
# We need to turn a list of lists into just a list, then convert to a set to
# get only unique elements, then back to a list
return list(
set(list(chain(model_paths, seed_paths, snapshot_paths, analysis_paths, macro_paths)))
)
T = TypeVar('T')
T = TypeVar("T")
def value_or(value: Optional[T], default: T) -> T:
@@ -147,30 +152,27 @@ def value_or(value: Optional[T], default: T) -> T:
def _raw_project_from(project_root: str) -> Dict[str, Any]:
project_root = os.path.normpath(project_root)
project_yaml_filepath = os.path.join(project_root, 'dbt_project.yml')
project_yaml_filepath = os.path.join(project_root, "dbt_project.yml")
# get the project.yml contents
if not path_exists(project_yaml_filepath):
raise DbtProjectError(
'no dbt_project.yml found at expected path {}'
.format(project_yaml_filepath)
"no dbt_project.yml found at expected path {}".format(project_yaml_filepath)
)
project_dict = _load_yaml(project_yaml_filepath)
if not isinstance(project_dict, dict):
raise DbtProjectError(
'dbt_project.yml does not parse to a dictionary'
)
raise DbtProjectError("dbt_project.yml does not parse to a dictionary")
return project_dict
def _query_comment_from_cfg(
cfg_query_comment: Union[QueryComment, NoValue, str, None]
cfg_query_comment: Union[QueryComment, NoValue, str, None]
) -> QueryComment:
if not cfg_query_comment:
return QueryComment(comment='')
return QueryComment(comment="")
if isinstance(cfg_query_comment, str):
return QueryComment(comment=cfg_query_comment)
@@ -186,10 +188,7 @@ def validate_version(dbt_version: List[VersionSpecifier], project_name: str):
installed = get_installed_version()
if not versions_compatible(*dbt_version):
msg = IMPOSSIBLE_VERSION_ERROR.format(
package=project_name,
version_spec=[
x.to_version_string() for x in dbt_version
]
package=project_name, version_spec=[x.to_version_string() for x in dbt_version]
)
raise DbtProjectError(msg)
@@ -197,9 +196,7 @@ def validate_version(dbt_version: List[VersionSpecifier], project_name: str):
msg = INVALID_VERSION_ERROR.format(
package=project_name,
installed=installed.to_version_string(),
version_spec=[
x.to_version_string() for x in dbt_version
]
version_spec=[x.to_version_string() for x in dbt_version],
)
raise DbtProjectError(msg)
@@ -208,8 +205,8 @@ def _get_required_version(
project_dict: Dict[str, Any],
verify_version: bool,
) -> List[VersionSpecifier]:
dbt_raw_version: Union[List[str], str] = '>=0.0.0'
required = project_dict.get('require-dbt-version')
dbt_raw_version: Union[List[str], str] = ">=0.0.0"
required = project_dict.get("require-dbt-version")
if required is not None:
dbt_raw_version = required
@@ -220,46 +217,39 @@ def _get_required_version(
if verify_version:
# a missing name is also an error that we want to raise
if 'name' not in project_dict:
if "name" not in project_dict:
raise DbtProjectError(
'Required "name" field not present in project',
)
validate_version(dbt_version, project_dict['name'])
validate_version(dbt_version, project_dict["name"])
return dbt_version
@dataclass
class RenderComponents:
project_dict: Dict[str, Any] = field(
metadata=dict(description='The project dictionary')
)
packages_dict: Dict[str, Any] = field(
metadata=dict(description='The packages dictionary')
)
selectors_dict: Dict[str, Any] = field(
metadata=dict(description='The selectors dictionary')
)
project_dict: Dict[str, Any] = field(metadata=dict(description="The project dictionary"))
packages_dict: Dict[str, Any] = field(metadata=dict(description="The packages dictionary"))
selectors_dict: Dict[str, Any] = field(metadata=dict(description="The selectors dictionary"))
@dataclass
class PartialProject(RenderComponents):
profile_name: Optional[str] = field(metadata=dict(
description='The unrendered profile name in the project, if set'
))
project_name: Optional[str] = field(metadata=dict(
description=(
'The name of the project. This should always be set and will not '
'be rendered'
profile_name: Optional[str] = field(
metadata=dict(description="The unrendered profile name in the project, if set")
)
project_name: Optional[str] = field(
metadata=dict(
description=(
"The name of the project. This should always be set and will not " "be rendered"
)
)
))
)
project_root: str = field(
metadata=dict(description='The root directory of the project'),
metadata=dict(description="The root directory of the project"),
)
verify_version: bool = field(
metadata=dict(description=(
'If True, verify the dbt version matches the required version'
))
metadata=dict(description=("If True, verify the dbt version matches the required version"))
)
def render_profile_name(self, renderer) -> Optional[str]:
@@ -272,9 +262,7 @@ class PartialProject(RenderComponents):
renderer: DbtProjectYamlRenderer,
) -> RenderComponents:
rendered_project = renderer.render_project(
self.project_dict, self.project_root
)
rendered_project = renderer.render_project(self.project_dict, self.project_root)
rendered_packages = renderer.render_packages(self.packages_dict)
rendered_selectors = renderer.render_selectors(self.selectors_dict)
@@ -284,31 +272,35 @@ class PartialProject(RenderComponents):
selectors_dict=rendered_selectors,
)
def render(self, renderer: DbtProjectYamlRenderer) -> 'Project':
# Called by 'collect_parts' in RuntimeConfig
def render(self, renderer: DbtProjectYamlRenderer) -> "Project":
try:
rendered = self.get_rendered(renderer)
return self.create_project(rendered)
except DbtProjectError as exc:
if exc.path is None:
exc.path = os.path.join(self.project_root, 'dbt_project.yml')
exc.path = os.path.join(self.project_root, "dbt_project.yml")
raise
def check_config_path(self, project_dict, deprecated_path, exp_path):
if deprecated_path in project_dict:
if exp_path in project_dict:
msg = (
'{deprecated_path} and {exp_path} cannot both be defined. The '
'`{deprecated_path}` config has been deprecated in favor of `{exp_path}`. '
'Please update your `dbt_project.yml` configuration to reflect this '
'change.'
"{deprecated_path} and {exp_path} cannot both be defined. The "
"`{deprecated_path}` config has been deprecated in favor of `{exp_path}`. "
"Please update your `dbt_project.yml` configuration to reflect this "
"change."
)
raise DbtProjectError(msg.format(deprecated_path=deprecated_path,
exp_path=exp_path))
deprecations.warn('project_config_path',
deprecated_path=deprecated_path,
exp_path=exp_path)
raise DbtProjectError(
msg.format(deprecated_path=deprecated_path, exp_path=exp_path)
)
deprecations.warn(
f"project-config-{deprecated_path}",
deprecated_path=deprecated_path,
exp_path=exp_path,
)
def create_project(self, rendered: RenderComponents) -> 'Project':
def create_project(self, rendered: RenderComponents) -> "Project":
unrendered = RenderComponents(
project_dict=self.project_dict,
packages_dict=self.packages_dict,
@@ -319,14 +311,12 @@ class PartialProject(RenderComponents):
verify_version=self.verify_version,
)
self.check_config_path(rendered.project_dict, 'source-paths', 'model-paths')
self.check_config_path(rendered.project_dict, 'data-paths', 'seed-paths')
self.check_config_path(rendered.project_dict, "source-paths", "model-paths")
self.check_config_path(rendered.project_dict, "data-paths", "seed-paths")
try:
ProjectContract.validate(rendered.project_dict)
cfg = ProjectContract.from_dict(
rendered.project_dict
)
cfg = ProjectContract.from_dict(rendered.project_dict)
except ValidationError as e:
raise DbtProjectError(validator_error_message(e)) from e
# name/version are required in the Project definition, so we can assume
@@ -336,7 +326,7 @@ class PartialProject(RenderComponents):
# this is added at project_dict parse time and should always be here
# once we see it.
if cfg.project_root is None:
raise DbtProjectError('cfg must have a project root!')
raise DbtProjectError("cfg must have a project root!")
else:
project_root = cfg.project_root
# this is only optional in the sense that if it's not present, it needs
@@ -346,30 +336,30 @@ class PartialProject(RenderComponents):
# `source_paths` is deprecated but still allowed. Copy it into
# `model_paths` to simplify logic throughout the rest of the system.
model_paths: List[str] = value_or(cfg.model_paths
if 'model-paths' in rendered.project_dict
else cfg.source_paths, ['models'])
macro_paths: List[str] = value_or(cfg.macro_paths, ['macros'])
model_paths: List[str] = value_or(
cfg.model_paths if "model-paths" in rendered.project_dict else cfg.source_paths,
["models"],
)
macro_paths: List[str] = value_or(cfg.macro_paths, ["macros"])
# `data_paths` is deprecated but still allowed. Copy it into
# `seed_paths` to simplify logic throughout the rest of the system.
seed_paths: List[str] = value_or(cfg.seed_paths
if 'seed-paths' in rendered.project_dict
else cfg.data_paths, ['seeds'])
test_paths: List[str] = value_or(cfg.test_paths, ['tests'])
analysis_paths: List[str] = value_or(cfg.analysis_paths, ['analyses'])
snapshot_paths: List[str] = value_or(cfg.snapshot_paths, ['snapshots'])
seed_paths: List[str] = value_or(
cfg.seed_paths if "seed-paths" in rendered.project_dict else cfg.data_paths, ["seeds"]
)
test_paths: List[str] = value_or(cfg.test_paths, ["tests"])
analysis_paths: List[str] = value_or(cfg.analysis_paths, ["analyses"])
snapshot_paths: List[str] = value_or(cfg.snapshot_paths, ["snapshots"])
all_source_paths: List[str] = _all_source_paths(
model_paths, seed_paths, snapshot_paths, analysis_paths,
macro_paths
model_paths, seed_paths, snapshot_paths, analysis_paths, macro_paths
)
docs_paths: List[str] = value_or(cfg.docs_paths, all_source_paths)
asset_paths: List[str] = value_or(cfg.asset_paths, [])
target_path: str = value_or(cfg.target_path, 'target')
target_path: str = value_or(cfg.target_path, "target")
clean_targets: List[str] = value_or(cfg.clean_targets, [target_path])
log_path: str = value_or(cfg.log_path, 'logs')
packages_install_path: str = value_or(cfg.packages_install_path, 'dbt_packages')
log_path: str = value_or(cfg.log_path, "logs")
packages_install_path: str = value_or(cfg.packages_install_path, "dbt_packages")
# in the default case we'll populate this once we know the adapter type
# It would be nice to just pass along a Quoting here, but that would
# break many things
@@ -397,6 +387,8 @@ class PartialProject(RenderComponents):
vars_dict = cfg.vars
vars_value = VarProvider(vars_dict)
# There will never be any project_env_vars when it's first created
project_env_vars: Dict[str, Any] = {}
on_run_start: List[str] = value_or(cfg.on_run_start, [])
on_run_end: List[str] = value_or(cfg.on_run_end, [])
@@ -405,11 +397,12 @@ class PartialProject(RenderComponents):
packages = package_config_from_data(rendered.packages_dict)
selectors = selector_config_from_data(rendered.selectors_dict)
manifest_selectors: Dict[str, Any] = {}
if rendered.selectors_dict and rendered.selectors_dict['selectors']:
if rendered.selectors_dict and rendered.selectors_dict["selectors"]:
# this is a dict with a single key 'selectors' pointing to a list
# of dicts.
manifest_selectors = SelectorDict.parse_from_selectors_list(
rendered.selectors_dict['selectors'])
rendered.selectors_dict["selectors"]
)
project = Project(
project_name=name,
version=version,
@@ -444,6 +437,7 @@ class PartialProject(RenderComponents):
vars=vars_value,
config_version=cfg.config_version,
unrendered=unrendered,
project_env_vars=project_env_vars,
)
# sanity check - this means an internal issue
project.validate()
@@ -459,10 +453,9 @@ class PartialProject(RenderComponents):
*,
verify_version: bool = False,
):
"""Construct a partial project from its constituent dicts.
"""
project_name = project_dict.get('name')
profile_name = project_dict.get('profile')
"""Construct a partial project from its constituent dicts."""
project_name = project_dict.get("name")
profile_name = project_dict.get("profile")
return cls(
profile_name=profile_name,
@@ -477,14 +470,14 @@ class PartialProject(RenderComponents):
@classmethod
def from_project_root(
cls, project_root: str, *, verify_version: bool = False
) -> 'PartialProject':
) -> "PartialProject":
project_root = os.path.normpath(project_root)
project_dict = _raw_project_from(project_root)
config_version = project_dict.get('config-version', 1)
config_version = project_dict.get("config-version", 1)
if config_version != 2:
raise DbtProjectError(
f'Invalid config version: {config_version}, expected 2',
path=os.path.join(project_root, 'dbt_project.yml')
f"Invalid config version: {config_version}, expected 2",
path=os.path.join(project_root, "dbt_project.yml"),
)
packages_dict = package_data_from_root(project_root)
@@ -501,15 +494,10 @@ class PartialProject(RenderComponents):
class VarProvider:
"""Var providers are tied to a particular Project."""
def __init__(
self,
vars: Dict[str, Dict[str, Any]]
) -> None:
def __init__(self, vars: Dict[str, Dict[str, Any]]) -> None:
self.vars = vars
def vars_for(
self, node: IsFQNResource, adapter_type: str
) -> Mapping[str, Any]:
def vars_for(self, node: IsFQNResource, adapter_type: str) -> Mapping[str, Any]:
# in v2, vars are only either project or globally scoped
merged = MultiDict([self.vars])
merged.add(self.vars.get(node.package_name, {}))
@@ -556,24 +544,35 @@ class Project:
query_comment: QueryComment
config_version: int
unrendered: RenderComponents
project_env_vars: Dict[str, Any]
@property
def all_source_paths(self) -> List[str]:
return _all_source_paths(
self.model_paths, self.seed_paths, self.snapshot_paths,
self.analysis_paths, self.macro_paths
self.model_paths,
self.seed_paths,
self.snapshot_paths,
self.analysis_paths,
self.macro_paths,
)
@property
def generic_test_paths(self):
generic_test_paths = []
for test_path in self.test_paths:
generic_test_paths.append(os.path.join(test_path, "generic"))
return generic_test_paths
def __str__(self):
cfg = self.to_project_config(with_packages=True)
return str(cfg)
def __eq__(self, other):
if not (isinstance(other, self.__class__) and
isinstance(self, other.__class__)):
if not (isinstance(other, self.__class__) and isinstance(self, other.__class__)):
return False
return self.to_project_config(with_packages=True) == \
other.to_project_config(with_packages=True)
return self.to_project_config(with_packages=True) == other.to_project_config(
with_packages=True
)
def to_project_config(self, with_packages=False):
"""Return a dict representation of the config that could be written to
@@ -583,40 +582,39 @@ class Project:
file in the root.
:returns dict: The serialized profile.
"""
result = deepcopy({
'name': self.project_name,
'version': self.version,
'project-root': self.project_root,
'profile': self.profile_name,
'model-paths': self.model_paths,
'macro-paths': self.macro_paths,
'seed-paths': self.seed_paths,
'test-paths': self.test_paths,
'analysis-paths': self.analysis_paths,
'docs-paths': self.docs_paths,
'asset-paths': self.asset_paths,
'target-path': self.target_path,
'snapshot-paths': self.snapshot_paths,
'clean-targets': self.clean_targets,
'log-path': self.log_path,
'quoting': self.quoting,
'models': self.models,
'on-run-start': self.on_run_start,
'on-run-end': self.on_run_end,
'dispatch': self.dispatch,
'seeds': self.seeds,
'snapshots': self.snapshots,
'sources': self.sources,
'tests': self.tests,
'vars': self.vars.to_dict(),
'require-dbt-version': [
v.to_version_string() for v in self.dbt_version
],
'config-version': self.config_version,
})
result = deepcopy(
{
"name": self.project_name,
"version": self.version,
"project-root": self.project_root,
"profile": self.profile_name,
"model-paths": self.model_paths,
"macro-paths": self.macro_paths,
"seed-paths": self.seed_paths,
"test-paths": self.test_paths,
"analysis-paths": self.analysis_paths,
"docs-paths": self.docs_paths,
"asset-paths": self.asset_paths,
"target-path": self.target_path,
"snapshot-paths": self.snapshot_paths,
"clean-targets": self.clean_targets,
"log-path": self.log_path,
"quoting": self.quoting,
"models": self.models,
"on-run-start": self.on_run_start,
"on-run-end": self.on_run_end,
"dispatch": self.dispatch,
"seeds": self.seeds,
"snapshots": self.snapshots,
"sources": self.sources,
"tests": self.tests,
"vars": self.vars.to_dict(),
"require-dbt-version": [v.to_version_string() for v in self.dbt_version],
"config-version": self.config_version,
}
)
if self.query_comment:
result['query-comment'] = \
self.query_comment.to_dict(omit_none=True)
result["query-comment"] = self.query_comment.to_dict(omit_none=True)
if with_packages:
result.update(self.packages.to_dict(omit_none=True))
@@ -630,34 +628,12 @@ class Project:
raise DbtProjectError(validator_error_message(e)) from e
@classmethod
def partial_load(
cls, project_root: str, *, verify_version: bool = False
) -> PartialProject:
def partial_load(cls, project_root: str, *, verify_version: bool = False) -> PartialProject:
return PartialProject.from_project_root(
project_root,
verify_version=verify_version,
)
@classmethod
def render_from_dict(
cls,
project_root: str,
project_dict: Dict[str, Any],
packages_dict: Dict[str, Any],
selectors_dict: Dict[str, Any],
renderer: DbtProjectYamlRenderer,
*,
verify_version: bool = False
) -> 'Project':
partial = PartialProject.from_dicts(
project_root=project_root,
project_dict=project_dict,
packages_dict=packages_dict,
selectors_dict=selectors_dict,
verify_version=verify_version,
)
return partial.render(renderer)
@classmethod
def from_project_root(
cls,
@@ -665,18 +641,17 @@ class Project:
renderer: DbtProjectYamlRenderer,
*,
verify_version: bool = False,
) -> 'Project':
) -> "Project":
partial = cls.partial_load(project_root, verify_version=verify_version)
return partial.render(renderer)
def hashed_name(self):
return hashlib.md5(self.project_name.encode('utf-8')).hexdigest()
return hashlib.md5(self.project_name.encode("utf-8")).hexdigest()
def get_selector(self, name: str) -> Union[SelectionSpec, bool]:
if name not in self.selectors:
raise RuntimeException(
f'Could not find selector named {name}, expected one of '
f'{list(self.selectors)}'
f"Could not find selector named {name}, expected one of " f"{list(self.selectors)}"
)
return self.selectors[name]["definition"]
@@ -693,6 +668,6 @@ class Project:
def get_macro_search_order(self, macro_namespace: str):
for dispatch_entry in self.dispatch:
if dispatch_entry['macro_namespace'] == macro_namespace:
return dispatch_entry['search_order']
if dispatch_entry["macro_namespace"] == macro_namespace:
return dispatch_entry["search_order"]
return None
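A self-contained sketch of the de-duplication behavior introduced in _all_source_paths above (the helper is re-declared here only for illustration; ordering of the result is not guaranteed):

from itertools import chain

def _all_source_paths(model_paths, seed_paths, snapshot_paths, analysis_paths, macro_paths):
    # flatten all of the configured path lists, then drop duplicate directories
    return list(set(chain(model_paths, seed_paths, snapshot_paths, analysis_paths, macro_paths)))

# 'models' appears in two settings but is listed only once in the result
paths = _all_source_paths(["models"], ["models"], ["snapshots"], ["analyses"], ["macros"])
assert sorted(paths) == ["analyses", "macros", "models", "snapshots"]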


@@ -1,12 +1,12 @@
from typing import Dict, Any, Tuple, Optional, Union, Callable
from dbt.clients.jinja import get_rendered, catch_jinja
from dbt.exceptions import (
DbtProjectError, CompilationException, RecursionException
)
from dbt.node_types import NodeType
from dbt.utils import deep_map
from dbt.context.target import TargetContext
from dbt.context.secret import SecretContext
from dbt.context.base import BaseContext
from dbt.contracts.connection import HasCredentials
from dbt.exceptions import DbtProjectError, CompilationException, RecursionException
from dbt.utils import deep_map_render
Keypath = Tuple[Union[str, int], ...]
@@ -18,7 +18,7 @@ class BaseRenderer:
@property
def name(self):
return 'Rendering'
return "Rendering"
def should_render_keypath(self, keypath: Keypath) -> bool:
return True
@@ -29,9 +29,7 @@ class BaseRenderer:
return self.render_value(value, keypath)
def render_value(
self, value: Any, keypath: Optional[Keypath] = None
) -> Any:
def render_value(self, value: Any, keypath: Optional[Keypath] = None) -> Any:
# keypath is ignored.
# if it wasn't read as a string, ignore it
if not isinstance(value, str):
@@ -40,18 +38,15 @@ class BaseRenderer:
with catch_jinja():
return get_rendered(value, self.context, native=True)
except CompilationException as exc:
msg = f'Could not render {value}: {exc.msg}'
msg = f"Could not render {value}: {exc.msg}"
raise CompilationException(msg) from exc
def render_data(
self, data: Dict[str, Any]
) -> Dict[str, Any]:
def render_data(self, data: Dict[str, Any]) -> Dict[str, Any]:
try:
return deep_map(self.render_entry, data)
return deep_map_render(self.render_entry, data)
except RecursionException:
raise DbtProjectError(
f'Cycle detected: {self.name} input has a reference to itself',
project=data
f"Cycle detected: {self.name} input has a reference to itself", project=data
)
@@ -78,15 +73,15 @@ class ProjectPostprocessor(Dict[Keypath, Callable[[Any], Any]]):
def __init__(self):
super().__init__()
self[('on-run-start',)] = _list_if_none_or_string
self[('on-run-end',)] = _list_if_none_or_string
self[("on-run-start",)] = _list_if_none_or_string
self[("on-run-end",)] = _list_if_none_or_string
for k in ('models', 'seeds', 'snapshots'):
for k in ("models", "seeds", "snapshots"):
self[(k,)] = _dict_if_none
self[(k, 'vars')] = _dict_if_none
self[(k, 'pre-hook')] = _list_if_none_or_string
self[(k, 'post-hook')] = _list_if_none_or_string
self[('seeds', 'column_types')] = _dict_if_none
self[(k, "vars")] = _dict_if_none
self[(k, "pre-hook")] = _list_if_none_or_string
self[(k, "post-hook")] = _list_if_none_or_string
self[("seeds", "column_types")] = _dict_if_none
def postprocess(self, value: Any, key: Keypath) -> Any:
if key in self:
@@ -99,15 +94,29 @@ class ProjectPostprocessor(Dict[Keypath, Callable[[Any], Any]]):
class DbtProjectYamlRenderer(BaseRenderer):
_KEYPATH_HANDLERS = ProjectPostprocessor()
def __init__(
self, profile: Optional[HasCredentials] = None, cli_vars: Optional[Dict[str, Any]] = None
) -> None:
# Generate contexts here because we want to save the context
# object in order to retrieve the env_vars. This is almost always
# a TargetContext, but in the debug task we still need to render the project
# even when we don't have a profile.
if cli_vars is None:
cli_vars = {}
if profile:
self.ctx_obj = TargetContext(profile, cli_vars)
else:
self.ctx_obj = BaseContext(cli_vars) # type:ignore
context = self.ctx_obj.to_dict()
super().__init__(context)
@property
def name(self):
'Project config'
"Project config"
# Uses SecretRenderer
def get_package_renderer(self) -> BaseRenderer:
return PackageRenderer(self.context)
def get_selector_renderer(self) -> BaseRenderer:
return SelectorRenderer(self.context)
return PackageRenderer(self.ctx_obj.cli_vars)
def render_project(
self,
@@ -116,7 +125,7 @@ class DbtProjectYamlRenderer(BaseRenderer):
) -> Dict[str, Any]:
"""Render the project and insert the project root after rendering."""
rendered_project = self.render_data(project)
rendered_project['project-root'] = project_root
rendered_project["project-root"] = project_root
return rendered_project
def render_packages(self, packages: Dict[str, Any]):
@@ -125,8 +134,7 @@ class DbtProjectYamlRenderer(BaseRenderer):
return package_renderer.render_data(packages)
def render_selectors(self, selectors: Dict[str, Any]):
selector_renderer = self.get_selector_renderer()
return selector_renderer.render_data(selectors)
return self.render_data(selectors)
def render_entry(self, value: Any, keypath: Keypath) -> Any:
result = super().render_entry(value, keypath)
@@ -138,101 +146,42 @@ class DbtProjectYamlRenderer(BaseRenderer):
first = keypath[0]
# run hooks are not rendered
if first in {'on-run-start', 'on-run-end', 'query-comment'}:
if first in {"on-run-start", "on-run-end", "query-comment"}:
return False
# don't render vars blocks until runtime
if first == 'vars':
if first == "vars":
return False
if first in {'seeds', 'models', 'snapshots', 'tests'}:
keypath_parts = {
(k.lstrip('+ ') if isinstance(k, str) else k)
for k in keypath
}
if first in {"seeds", "models", "snapshots", "tests"}:
keypath_parts = {(k.lstrip("+ ") if isinstance(k, str) else k) for k in keypath}
# model-level hooks
if 'pre-hook' in keypath_parts or 'post-hook' in keypath_parts:
if "pre-hook" in keypath_parts or "post-hook" in keypath_parts:
return False
return True
class ProfileRenderer(BaseRenderer):
@property
def name(self):
'Profile'
class SchemaYamlRenderer(BaseRenderer):
DOCUMENTABLE_NODES = frozenset(
n.pluralize() for n in NodeType.documentable()
)
class SecretRenderer(BaseRenderer):
def __init__(self, cli_vars: Dict[str, Any] = {}) -> None:
# Generate contexts here because we want to save the context
# object in order to retrieve the env_vars.
self.ctx_obj = SecretContext(cli_vars)
context = self.ctx_obj.to_dict()
super().__init__(context)
@property
def name(self):
return 'Rendering yaml'
def _is_norender_key(self, keypath: Keypath) -> bool:
"""
models:
- name: blah
- description: blah
tests: ...
- columns:
- name:
- description: blah
tests: ...
Return True if it's tests or description - those aren't rendered
"""
if len(keypath) >= 2 and keypath[1] in ('tests', 'description'):
return True
if (
len(keypath) >= 4 and
keypath[1] == 'columns' and
keypath[3] in ('tests', 'description')
):
return True
return False
# don't render descriptions or test keyword arguments
def should_render_keypath(self, keypath: Keypath) -> bool:
if len(keypath) < 2:
return True
if keypath[0] not in self.DOCUMENTABLE_NODES:
return True
if len(keypath) < 3:
return True
if keypath[0] == NodeType.Source.pluralize():
if keypath[2] == 'description':
return False
if keypath[2] == 'tables':
if self._is_norender_key(keypath[3:]):
return False
elif keypath[0] == NodeType.Macro.pluralize():
if keypath[2] == 'arguments':
if self._is_norender_key(keypath[3:]):
return False
elif self._is_norender_key(keypath[1:]):
return False
else: # keypath[0] in self.DOCUMENTABLE_NODES:
if self._is_norender_key(keypath[1:]):
return False
return True
return "Secret"
class PackageRenderer(BaseRenderer):
class ProfileRenderer(SecretRenderer):
@property
def name(self):
return 'Packages config'
return "Profile"
class SelectorRenderer(BaseRenderer):
class PackageRenderer(SecretRenderer):
@property
def name(self):
return 'Selector config'
return "Packages config"


@@ -1,45 +1,36 @@
import itertools
import os
from copy import deepcopy
from dataclasses import dataclass, fields
from dataclasses import dataclass
from pathlib import Path
from typing import (
Dict, Any, Optional, Mapping, Iterator, Iterable, Tuple, List, MutableSet,
Type
)
from typing import Dict, Any, Optional, Mapping, Iterator, Iterable, Tuple, List, MutableSet, Type
from .profile import Profile
from .project import Project
from .renderer import DbtProjectYamlRenderer, ProfileRenderer
from .utils import parse_cli_vars
from dbt import flags
from dbt import tracking
from dbt.adapters.factory import get_relation_class_by_name, get_include_paths
from dbt.helper_types import FQNPath, PathSet
from dbt.context.base import generate_base_context
from dbt.context.target import generate_target_context
from dbt.helper_types import FQNPath, PathSet, DictDefaultEmptyStr
from dbt.config.profile import read_user_config
from dbt.contracts.connection import AdapterRequiredConfig, Credentials
from dbt.contracts.graph.manifest import ManifestMetadata
from dbt.contracts.relation import ComponentName
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.ui import warning_tag
from dbt.contracts.project import Configuration, UserConfig
from dbt.exceptions import (
RuntimeException,
DbtProfileError,
DbtProjectError,
validator_error_message,
warn_or_error,
raise_compiler_error
raise_compiler_error,
)
from dbt.dataclass_schema import ValidationError
def _project_quoting_dict(
proj: Project, profile: Profile
) -> Dict[ComponentName, bool]:
def _project_quoting_dict(proj: Project, profile: Profile) -> Dict[ComponentName, bool]:
src: Dict[str, Any] = profile.credentials.translate_aliases(proj.quoting)
result: Dict[ComponentName, bool] = {}
for key in ComponentName:
@@ -55,19 +46,20 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
args: Any
profile_name: str
cli_vars: Dict[str, Any]
dependencies: Optional[Mapping[str, 'RuntimeConfig']] = None
dependencies: Optional[Mapping[str, "RuntimeConfig"]] = None
def __post_init__(self):
self.validate()
# Called by 'new_project' and 'from_args'
@classmethod
def from_parts(
cls,
project: Project,
profile: Profile,
args: Any,
dependencies: Optional[Mapping[str, 'RuntimeConfig']] = None,
) -> 'RuntimeConfig':
dependencies: Optional[Mapping[str, "RuntimeConfig"]] = None,
) -> "RuntimeConfig":
"""Instantiate a RuntimeConfig from its components.
:param profile: A parsed dbt Profile.
@@ -81,7 +73,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
.replace_dict(_project_quoting_dict(project, profile))
).to_dict(omit_none=True)
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, 'vars', '{}'))
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
return cls(
project_name=project.project_name,
@@ -116,6 +108,8 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
vars=project.vars,
config_version=project.config_version,
unrendered=project.unrendered,
project_env_vars=project.project_env_vars,
profile_env_vars=profile.profile_env_vars,
profile_name=profile.profile_name,
target_name=profile.target_name,
user_config=profile.user_config,
@@ -126,7 +120,8 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
dependencies=dependencies,
)
def new_project(self, project_root: str) -> 'RuntimeConfig':
# Called by 'load_projects' in this class
def new_project(self, project_root: str) -> "RuntimeConfig":
"""Given a new project root, read in its project dictionary, supply the
existing project's profile info, and create a new project file.
@@ -140,7 +135,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
profile.validate()
# load the new project and its packages. Don't pass cli variables.
renderer = DbtProjectYamlRenderer(generate_target_context(profile, {}))
renderer = DbtProjectYamlRenderer(profile)
project = Project.from_project_root(
project_root,
@@ -148,14 +143,14 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
verify_version=bool(flags.VERSION_CHECK),
)
cfg = self.from_parts(
runtime_config = self.from_parts(
project=project,
profile=profile,
args=deepcopy(self.args),
)
# force our quoting back onto the new project.
cfg.quoting = deepcopy(self.quoting)
return cfg
runtime_config.quoting = deepcopy(self.quoting)
return runtime_config
def serialize(self) -> Dict[str, Any]:
"""Serialize the full configuration to a single dictionary. For any
@@ -168,7 +163,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
"""
result = self.to_project_config(with_packages=True)
result.update(self.to_profile_info(serialize_credentials=True))
result['cli_vars'] = deepcopy(self.cli_vars)
result["cli_vars"] = deepcopy(self.cli_vars)
return result
def validate(self):
@@ -188,40 +183,37 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
profile_renderer: ProfileRenderer,
profile_name: Optional[str],
) -> Profile:
return Profile.render_from_args(
args, profile_renderer, profile_name
)
return Profile.render_from_args(args, profile_renderer, profile_name)
@classmethod
def collect_parts(
cls: Type['RuntimeConfig'], args: Any
) -> Tuple[Project, Profile]:
def collect_parts(cls: Type["RuntimeConfig"], args: Any) -> Tuple[Project, Profile]:
# profile_name from the project
project_root = args.project_dir if args.project_dir else os.getcwd()
version_check = bool(flags.VERSION_CHECK)
partial = Project.partial_load(
project_root,
verify_version=version_check
)
partial = Project.partial_load(project_root, verify_version=version_check)
# build the profile using the base renderer and the one fact we know
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, 'vars', '{}'))
profile_renderer = ProfileRenderer(generate_base_context(cli_vars))
# Note: only the named profile section is rendered. The rest of the
# profile is ignored.
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
profile_renderer = ProfileRenderer(cli_vars)
profile_name = partial.render_profile_name(profile_renderer)
profile = cls._get_rendered_profile(
args, profile_renderer, profile_name
)
profile = cls._get_rendered_profile(args, profile_renderer, profile_name)
# Save env_vars encountered in rendering for partial parsing
profile.profile_env_vars = profile_renderer.ctx_obj.env_vars
# get a new renderer using our target information and render the
# project
ctx = generate_target_context(profile, cli_vars)
project_renderer = DbtProjectYamlRenderer(ctx)
project_renderer = DbtProjectYamlRenderer(profile, cli_vars)
project = partial.render(project_renderer)
# Save env_vars encountered in rendering for partial parsing
project.project_env_vars = project_renderer.ctx_obj.env_vars
return (project, profile)
# Called in main.py, lib.py, task/base.py
@classmethod
def from_args(cls, args: Any) -> 'RuntimeConfig':
def from_args(cls, args: Any) -> "RuntimeConfig":
"""Given arguments, read in dbt_project.yml from the current directory,
read in packages.yml if it exists, and use them to find the profile to
load.
@@ -240,10 +232,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
)
def get_metadata(self) -> ManifestMetadata:
return ManifestMetadata(
project_id=self.hashed_name(),
adapter_type=self.credentials.type
)
return ManifestMetadata(project_id=self.hashed_name(), adapter_type=self.credentials.type)
def _get_v2_config_paths(
self,
@@ -252,8 +241,8 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
paths: MutableSet[FQNPath],
) -> PathSet:
for key, value in config.items():
if isinstance(value, dict) and not key.startswith('+'):
self._get_v2_config_paths(value, path + (key,), paths)
if isinstance(value, dict) and not key.startswith("+"):
self._get_config_paths(value, path + (key,), paths)
else:
paths.add(path)
return frozenset(paths)
@@ -268,7 +257,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
paths = set()
for key, value in config.items():
if isinstance(value, dict) and not key.startswith('+'):
if isinstance(value, dict) and not key.startswith("+"):
self._get_v2_config_paths(value, path + (key,), paths)
else:
paths.add(path)
@@ -280,11 +269,11 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
a configured path in the resource.
"""
return {
'models': self._get_config_paths(self.models),
'seeds': self._get_config_paths(self.seeds),
'snapshots': self._get_config_paths(self.snapshots),
'sources': self._get_config_paths(self.sources),
'tests': self._get_config_paths(self.tests),
"models": self._get_config_paths(self.models),
"seeds": self._get_config_paths(self.seeds),
"snapshots": self._get_config_paths(self.snapshots),
"sources": self._get_config_paths(self.sources),
"tests": self._get_config_paths(self.tests),
}
def get_unused_resource_config_paths(
@@ -305,9 +294,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
for config_path in config_paths:
if not _is_config_used(config_path, fqns):
unused_resource_config_paths.append(
(resource_type,) + config_path
)
unused_resource_config_paths.append((resource_type,) + config_path)
return unused_resource_config_paths
def warn_for_unused_resource_config_paths(
@@ -320,38 +307,38 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
return
msg = UNUSED_RESOURCE_CONFIGURATION_PATH_MESSAGE.format(
len(unused),
'\n'.join('- {}'.format('.'.join(u)) for u in unused)
len(unused), "\n".join("- {}".format(".".join(u)) for u in unused)
)
warn_or_error(msg, log_fmt=warning_tag('{}'))
warn_or_error(msg, log_fmt=warning_tag("{}"))
def load_dependencies(self) -> Mapping[str, 'RuntimeConfig']:
def load_dependencies(self, base_only=False) -> Mapping[str, "RuntimeConfig"]:
if self.dependencies is None:
all_projects = {self.project_name: self}
internal_packages = get_include_paths(self.credentials.type)
# raise exception if fewer installed packages than in packages.yml
count_packages_specified = len(self.packages.packages) # type: ignore
count_packages_installed = len(tuple(self._get_project_directories()))
if count_packages_specified > count_packages_installed:
raise_compiler_error(
f'dbt found {count_packages_specified} package(s) '
f'specified in packages.yml, but only '
f'{count_packages_installed} package(s) installed '
f'in {self.packages_install_path}. Run "dbt deps" to '
f'install package dependencies.'
)
project_paths = itertools.chain(
internal_packages,
self._get_project_directories()
)
if base_only:
# Test setup -- we want to load macros without dependencies
project_paths = itertools.chain(internal_packages)
else:
# raise exception if fewer installed packages than in packages.yml
count_packages_specified = len(self.packages.packages) # type: ignore
count_packages_installed = len(tuple(self._get_project_directories()))
if count_packages_specified > count_packages_installed:
raise_compiler_error(
f"dbt found {count_packages_specified} package(s) "
f"specified in packages.yml, but only "
f"{count_packages_installed} package(s) installed "
f'in {self.packages_install_path}. Run "dbt deps" to '
f"install package dependencies."
)
project_paths = itertools.chain(internal_packages, self._get_project_directories())
for project_name, project in self.load_projects(project_paths):
if project_name in all_projects:
raise_compiler_error(
f'dbt found more than one package with the name '
f"dbt found more than one package with the name "
f'"{project_name}" included in this project. Package '
f'names must be unique in a project. Please rename '
f'one of these packages.'
f"names must be unique in a project. Please rename "
f"one of these packages."
)
all_projects[project_name] = project
self.dependencies = all_projects
@@ -360,16 +347,15 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
def clear_dependencies(self):
self.dependencies = None
def load_projects(
self, paths: Iterable[Path]
) -> Iterator[Tuple[str, 'RuntimeConfig']]:
# Called by 'load_dependencies' in this class
def load_projects(self, paths: Iterable[Path]) -> Iterator[Tuple[str, "RuntimeConfig"]]:
for path in paths:
try:
project = self.new_project(str(path))
except DbtProjectError as e:
raise DbtProjectError(
f'Failed to read package: {e}',
result_type='invalid_project',
f"Failed to read package: {e}",
result_type="invalid_project",
path=path,
) from e
else:
@@ -380,13 +366,13 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
if root.exists():
for path in root.iterdir():
if path.is_dir() and not path.name.startswith('__'):
if path.is_dir() and not path.name.startswith("__"):
yield path
class UnsetCredentials(Credentials):
def __init__(self):
super().__init__('', '')
super().__init__("", "")
@property
def type(self):
@@ -403,37 +389,28 @@ class UnsetCredentials(Credentials):
return ()
class UnsetConfig(UserConfig):
def __getattribute__(self, name):
if name in {f.name for f in fields(UserConfig)}:
raise AttributeError(
f"'UnsetConfig' object has no attribute {name}"
)
def __post_serialize__(self, dct):
return {}
# This is used by UnsetProfileConfig, for commands which do
# not require a profile, i.e. dbt deps and clean
class UnsetProfile(Profile):
def __init__(self):
self.credentials = UnsetCredentials()
self.user_config = UnsetConfig()
self.profile_name = ''
self.target_name = ''
self.user_config = UserConfig() # This will be read in _get_rendered_profile
self.profile_name = ""
self.target_name = ""
self.threads = -1
def to_target_dict(self):
return {}
return DictDefaultEmptyStr({})
def __getattribute__(self, name):
if name in {'profile_name', 'target_name', 'threads'}:
raise RuntimeException(
f'Error: disallowed attribute "{name}" - no profile!'
)
if name in {"profile_name", "target_name", "threads"}:
raise RuntimeException(f'Error: disallowed attribute "{name}" - no profile!')
return Profile.__getattribute__(self, name)
# This class is used by the dbt deps and clean commands, because they don't
# require a functioning profile.
@dataclass
class UnsetProfileConfig(RuntimeConfig):
"""This class acts a lot _like_ a RuntimeConfig, except if your profile is
@@ -450,17 +427,15 @@ class UnsetProfileConfig(RuntimeConfig):
def __getattribute__(self, name):
# Override __getattribute__ to check that the attribute isn't 'banned'.
if name in {'profile_name', 'target_name'}:
raise RuntimeException(
f'Error: disallowed attribute "{name}" - no profile!'
)
if name in {"profile_name", "target_name"}:
raise RuntimeException(f'Error: disallowed attribute "{name}" - no profile!')
# avoid every attribute access triggering infinite recursion
return RuntimeConfig.__getattribute__(self, name)
def to_target_dict(self):
# re-override the poisoned profile behavior
return {}
return DictDefaultEmptyStr({})
@classmethod
def from_parts(
@@ -468,8 +443,8 @@ class UnsetProfileConfig(RuntimeConfig):
project: Project,
profile: Profile,
args: Any,
dependencies: Optional[Mapping[str, 'RuntimeConfig']] = None,
) -> 'RuntimeConfig':
dependencies: Optional[Mapping[str, "RuntimeConfig"]] = None,
) -> "RuntimeConfig":
"""Instantiate a RuntimeConfig from its components.
:param profile: Ignored.
@@ -477,7 +452,7 @@ class UnsetProfileConfig(RuntimeConfig):
:param args: The parsed command-line arguments.
:returns RuntimeConfig: The new configuration.
"""
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, 'vars', '{}'))
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
return cls(
project_name=project.project_name,
@@ -512,10 +487,12 @@ class UnsetProfileConfig(RuntimeConfig):
vars=project.vars,
config_version=project.config_version,
unrendered=project.unrendered,
profile_name='',
target_name='',
user_config=UnsetConfig(),
threads=getattr(args, 'threads', 1),
project_env_vars=project.project_env_vars,
profile_env_vars=profile.profile_env_vars,
profile_name="",
target_name="",
user_config=UserConfig(),
threads=getattr(args, "threads", 1),
credentials=UnsetCredentials(),
args=args,
cli_vars=cli_vars,
@@ -529,26 +506,16 @@ class UnsetProfileConfig(RuntimeConfig):
profile_renderer: ProfileRenderer,
profile_name: Optional[str],
) -> Profile:
try:
profile = Profile.render_from_args(
args, profile_renderer, profile_name
)
except (DbtProjectError, DbtProfileError) as exc:
logger.debug(
'Profile not loaded due to error: {}', exc, exc_info=True
)
logger.info(
'No profile "{}" found, continuing with no target',
profile_name
)
# return the poisoned form
profile = UnsetProfile()
# disable anonymous usage statistics
tracking.disable_tracking()
profile = UnsetProfile()
# The profile (for warehouse connection) is not needed, but we want
# to get the UserConfig, which is also in profiles.yml
user_config = read_user_config(flags.PROFILES_DIR)
profile.user_config = user_config
return profile
@classmethod
def from_args(cls: Type[RuntimeConfig], args: Any) -> 'RuntimeConfig':
def from_args(cls: Type[RuntimeConfig], args: Any) -> "RuntimeConfig":
"""Given arguments, read in dbt_project.yml from the current directory,
read in packages.yml if it exists, and use them to find the profile to
load.
@@ -559,15 +526,8 @@ class UnsetProfileConfig(RuntimeConfig):
:raises ValidationException: If the cli variables are invalid.
"""
project, profile = cls.collect_parts(args)
if not isinstance(profile, UnsetProfile):
# if it's a real profile, return a real config
cls = RuntimeConfig
return cls.from_parts(
project=project,
profile=profile,
args=args
)
return cls.from_parts(project=project, profile=profile, args=args)
UNUSED_RESOURCE_CONFIGURATION_PATH_MESSAGE = """\
@@ -581,6 +541,6 @@ There are {} unused configuration paths:
def _is_config_used(path, fqns):
if fqns:
for fqn in fqns:
if len(path) <= len(fqn) and fqn[:len(path)] == path:
if len(path) <= len(fqn) and fqn[: len(path)] == path:
return True
return False
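A hedged sketch of the entry point exercised by collect_parts above; the Namespace fields stand in for what the dbt CLI would normally supply, and the calls that need a real project on disk are left commented out:

from argparse import Namespace
from dbt.config.runtime import RuntimeConfig

args = Namespace(project_dir=None, profile=None, target=None, threads=None, vars="{}")
# config = RuntimeConfig.from_args(args)        # reads dbt_project.yml and profiles.yml
# config.load_dependencies()                    # maps package name -> RuntimeConfig
# config.load_dependencies(base_only=True)      # new: internal packages only (test setup)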


@@ -1,11 +1,10 @@
from pathlib import Path
from copy import deepcopy
from typing import Dict, Any, Union
from dbt.clients.yaml_helper import ( # noqa: F401
yaml, Loader, Dumper, load_yaml_text
)
from dbt.clients.yaml_helper import yaml, Loader, Dumper, load_yaml_text # noqa: F401
from dbt.dataclass_schema import ValidationError
from .renderer import SelectorRenderer
from .renderer import BaseRenderer
from dbt.clients.system import (
load_file_contents,
@@ -30,9 +29,8 @@ Validator Error:
class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):
@classmethod
def selectors_from_dict(cls, data: Dict[str, Any]) -> 'SelectorConfig':
def selectors_from_dict(cls, data: Dict[str, Any]) -> "SelectorConfig":
try:
SelectorFile.validate(data)
selector_file = SelectorFile.from_dict(data)
@@ -46,12 +44,12 @@ class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):
f"union, intersection, string, dictionary. No lists. "
f"\nhttps://docs.getdbt.com/reference/node-selection/"
f"yaml-selectors",
result_type='invalid_selector'
result_type="invalid_selector",
) from exc
except RuntimeException as exc:
raise DbtSelectorsError(
f'Could not read selector file data: {exc}',
result_type='invalid_selector',
f"Could not read selector file data: {exc}",
result_type="invalid_selector",
) from exc
return cls(selectors)
@@ -60,27 +58,29 @@ class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):
def render_from_dict(
cls,
data: Dict[str, Any],
renderer: SelectorRenderer,
) -> 'SelectorConfig':
renderer: BaseRenderer,
) -> "SelectorConfig":
try:
rendered = renderer.render_data(data)
except (ValidationError, RuntimeException) as exc:
raise DbtSelectorsError(
f'Could not render selector data: {exc}',
result_type='invalid_selector',
f"Could not render selector data: {exc}",
result_type="invalid_selector",
) from exc
return cls.selectors_from_dict(rendered)
@classmethod
def from_path(
cls, path: Path, renderer: SelectorRenderer,
) -> 'SelectorConfig':
cls,
path: Path,
renderer: BaseRenderer,
) -> "SelectorConfig":
try:
data = load_yaml_text(load_file_contents(str(path)))
except (ValidationError, RuntimeException) as exc:
raise DbtSelectorsError(
f'Could not read selector file: {exc}',
result_type='invalid_selector',
f"Could not read selector file: {exc}",
result_type="invalid_selector",
path=path,
) from exc
@@ -92,9 +92,7 @@ class SelectorConfig(Dict[str, Dict[str, Union[SelectionSpec, bool]]]):
def selector_data_from_root(project_root: str) -> Dict[str, Any]:
selector_filepath = resolve_path_from_base(
'selectors.yml', project_root
)
selector_filepath = resolve_path_from_base("selectors.yml", project_root)
if path_exists(selector_filepath):
selectors_dict = load_yaml_text(load_file_contents(selector_filepath))
@@ -103,18 +101,16 @@ def selector_data_from_root(project_root: str) -> Dict[str, Any]:
return selectors_dict
def selector_config_from_data(
selectors_data: Dict[str, Any]
) -> SelectorConfig:
def selector_config_from_data(selectors_data: Dict[str, Any]) -> SelectorConfig:
if not selectors_data:
selectors_data = {'selectors': []}
selectors_data = {"selectors": []}
try:
selectors = SelectorConfig.selectors_from_dict(selectors_data)
except ValidationError as e:
raise DbtSelectorsError(
MALFORMED_SELECTOR_ERROR.format(error=str(e.message)),
result_type='invalid_selector',
result_type="invalid_selector",
) from e
return selectors
@@ -144,30 +140,34 @@ def validate_selector_default(selector_file: SelectorFile) -> None:
# be necessary to make changes here. Ideally it would be
# good to combine the two flows into one at some point.
class SelectorDict:
@classmethod
def parse_dict_definition(cls, definition):
def parse_dict_definition(cls, definition, selector_dict={}):
key = list(definition)[0]
value = definition[key]
if isinstance(value, list):
new_values = []
for sel_def in value:
new_value = cls.parse_from_definition(sel_def)
new_value = cls.parse_from_definition(sel_def, selector_dict=selector_dict)
new_values.append(new_value)
value = new_values
if key == 'exclude':
if key == "exclude":
definition = {key: value}
elif len(definition) == 1:
definition = {'method': key, 'value': value}
definition = {"method": key, "value": value}
elif key == "method" and value == "selector":
sel_def = definition.get("value")
if sel_def not in selector_dict:
raise DbtSelectorsError(f"Existing selector definition for {sel_def} not found.")
return selector_dict[definition["value"]]["definition"]
return definition
@classmethod
def parse_a_definition(cls, def_type, definition):
def parse_a_definition(cls, def_type, definition, selector_dict={}):
# this definition must be a list
new_dict = {def_type: []}
for sel_def in definition[def_type]:
if isinstance(sel_def, dict):
sel_def = cls.parse_from_definition(sel_def)
sel_def = cls.parse_from_definition(sel_def, selector_dict=selector_dict)
new_dict[def_type].append(sel_def)
elif isinstance(sel_def, str):
sel_def = SelectionCriteria.dict_from_single_spec(sel_def)
@@ -177,15 +177,17 @@ class SelectorDict:
return new_dict
@classmethod
def parse_from_definition(cls, definition):
def parse_from_definition(cls, definition, selector_dict={}):
if isinstance(definition, str):
definition = SelectionCriteria.dict_from_single_spec(definition)
elif 'union' in definition:
definition = cls.parse_a_definition('union', definition)
elif 'intersection' in definition:
definition = cls.parse_a_definition('intersection', definition)
elif "union" in definition:
definition = cls.parse_a_definition("union", definition, selector_dict=selector_dict)
elif "intersection" in definition:
definition = cls.parse_a_definition(
"intersection", definition, selector_dict=selector_dict
)
elif isinstance(definition, dict):
definition = cls.parse_dict_definition(definition)
definition = cls.parse_dict_definition(definition, selector_dict=selector_dict)
return definition
# This is the normal entrypoint of this code. Give it the
@@ -194,8 +196,10 @@ class SelectorDict:
def parse_from_selectors_list(cls, selectors):
selector_dict = {}
for selector in selectors:
sel_name = selector['name']
sel_name = selector["name"]
selector_dict[sel_name] = selector
definition = cls.parse_from_definition(selector['definition'])
selector_dict[sel_name]['definition'] = definition
definition = cls.parse_from_definition(
selector["definition"], selector_dict=deepcopy(selector_dict)
)
selector_dict[sel_name]["definition"] = definition
return selector_dict
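An illustrative input for SelectorDict.parse_from_selectors_list showing the new "method: selector" reference, which expands to an already-parsed selector's definition (selector names are placeholders):

from dbt.config.selectors import SelectorDict

selectors = [
    {"name": "nightly", "definition": "tag:nightly"},
    {
        "name": "nightly_plus_staging",
        "definition": {
            "union": [
                {"method": "selector", "value": "nightly"},  # reuses the definition parsed above
                "tag:staging",
            ]
        },
    },
]
parsed = SelectorDict.parse_from_selectors_list(selectors)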


@@ -1,8 +1,15 @@
from typing import Dict, Any
from argparse import Namespace
from typing import Any, Dict, Optional, Union
from xmlrpc.client import Boolean
from dbt.contracts.project import UserConfig
import dbt.flags as flags
from dbt.clients import yaml_helper
from dbt.exceptions import raise_compiler_error, ValidationException
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.config import Profile, Project, read_user_config
from dbt.config.renderer import DbtProjectYamlRenderer, ProfileRenderer
from dbt.events.functions import fire_event
from dbt.events.types import InvalidVarsYAML
from dbt.exceptions import ValidationException, raise_compiler_error
def parse_cli_vars(var_string: str) -> Dict[str, Any]:
@@ -15,9 +22,54 @@ def parse_cli_vars(var_string: str) -> Dict[str, Any]:
type_name = var_type.__name__
raise_compiler_error(
"The --vars argument must be a YAML dictionary, but was "
"of type '{}'".format(type_name))
"of type '{}'".format(type_name)
)
except ValidationException:
logger.error(
"The YAML provided in the --vars argument is not valid.\n"
)
fire_event(InvalidVarsYAML())
raise
def get_project_config(
project_path: str,
profile_name: str,
args: Namespace = Namespace(),
cli_vars: Optional[Dict[str, Any]] = None,
profile: Optional[Profile] = None,
user_config: Optional[UserConfig] = None,
return_dict: Boolean = True,
) -> Union[Project, Dict]:
"""Returns a project config (dict or object) from a given project path and profile name.
Args:
project_path: Path to project
profile_name: Name of profile
args: An argparse.Namespace that represents what would have been passed in on the
command line (optional)
cli_vars: A dict of any vars that would have been passed in on the command line (optional)
(see parse_cli_vars above for formatting details)
profile: A dbt.config.profile.Profile object (optional)
user_config: A dbt.contracts.project.UserConfig object (optional)
return_dict: Return a dict if true, return the full dbt.config.project.Project object if false
Returns:
A full project config
"""
# Generate a profile if not provided
if profile is None:
# Generate user_config if not provided
if user_config is None:
user_config = read_user_config(flags.PROFILES_DIR)
# Update flags
flags.set_from_args(args, user_config)
if cli_vars is None:
cli_vars = {}
profile = Profile.render_from_args(args, ProfileRenderer(cli_vars), profile_name)
# Generate a project
project = Project.from_project_root(
project_path,
DbtProjectYamlRenderer(profile),
verify_version=bool(flags.VERSION_CHECK),
)
# Return
return project.to_project_config() if return_dict else project
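A hedged usage sketch for the new get_project_config helper documented above, assuming this file is dbt/config/utils.py as the relative import in runtime.py suggests; the project path and profile name are placeholders:

from dbt.config.utils import get_project_config

project_dict = get_project_config(
    project_path="/path/to/my_project",   # placeholder
    profile_name="my_profile",            # placeholder
)
# return_dict defaults to True, so this is the dict form of the project config;
# pass return_dict=False to get the dbt.config.project.Project object instead.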


@@ -0,0 +1 @@
# Contexts and Jinja rendering


@@ -1,18 +1,21 @@
import json
import os
from typing import (
Any, Dict, NoReturn, Optional, Mapping
)
from typing import Any, Dict, NoReturn, Optional, Mapping
from dbt import flags
from dbt import tracking
from dbt.clients.jinja import undefined_error, get_rendered
from dbt.clients.yaml_helper import ( # noqa: F401
yaml, safe_load, SafeLoader, Loader, Dumper
)
from dbt.clients.jinja import get_rendered
from dbt.clients.yaml_helper import yaml, safe_load, SafeLoader, Loader, Dumper # noqa: F401
from dbt.contracts.graph.compiled import CompiledResource
from dbt.exceptions import raise_compiler_error, MacroReturn
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.exceptions import (
raise_compiler_error,
MacroReturn,
raise_parsing_error,
disallow_secret_env_var,
)
from dbt.logger import SECRET_ENV_PREFIX
from dbt.events.functions import fire_event, get_invocation_id
from dbt.events.types import MacroEventInfo, MacroEventDebug
from dbt.version import __version__ as dbt_version
# These modules are added to the context. Consider alternative
@@ -20,43 +23,90 @@ from dbt.version import __version__ as dbt_version
import pytz
import datetime
import re
import itertools
# Contexts in dbt Core
# Contexts are used for Jinja rendering. They include context methods,
# executable macros, and various settings that are available in Jinja.
#
# Different contexts are used in different places because we allow access
# to different methods and data in different places. Executable SQL, for
# example, includes the available macros and the model, while Jinja in
# yaml files is more limited.
#
# The context that is passed to Jinja is always in a dictionary format,
# not an actual class, so a 'to_dict()' is executed on a context class
# before it is used for rendering.
#
# Each context has a generate_<name>_context function to create the context.
# ProviderContext subclasses have different generate functions for
# parsing and for execution.
#
# Context class hierarchy
#
# BaseContext -- core/dbt/context/base.py
# SecretContext -- core/dbt/context/secret.py
# TargetContext -- core/dbt/context/target.py
# ConfiguredContext -- core/dbt/context/configured.py
# SchemaYamlContext -- core/dbt/context/configured.py
# DocsRuntimeContext -- core/dbt/context/configured.py
# MacroResolvingContext -- core/dbt/context/configured.py
# ManifestContext -- core/dbt/context/manifest.py
# QueryHeaderContext -- core/dbt/context/manifest.py
# ProviderContext -- core/dbt/context/provider.py
# MacroContext -- core/dbt/context/provider.py
# ModelContext -- core/dbt/context/provider.py
# TestContext -- core/dbt/context/provider.py
def get_pytz_module_context() -> Dict[str, Any]:
context_exports = pytz.__all__ # type: ignore
return {
name: getattr(pytz, name) for name in context_exports
}
return {name: getattr(pytz, name) for name in context_exports}
def get_datetime_module_context() -> Dict[str, Any]:
context_exports = [
'date',
'datetime',
'time',
'timedelta',
'tzinfo'
]
context_exports = ["date", "datetime", "time", "timedelta", "tzinfo"]
return {
name: getattr(datetime, name) for name in context_exports
}
return {name: getattr(datetime, name) for name in context_exports}
def get_re_module_context() -> Dict[str, Any]:
context_exports = re.__all__
# TODO CT-211
context_exports = re.__all__ # type: ignore[attr-defined]
return {
name: getattr(re, name) for name in context_exports
}
return {name: getattr(re, name) for name in context_exports}
def get_itertools_module_context() -> Dict[str, Any]:
# Excluded dropwhile, filterfalse, takewhile and groupby;
# first 3 illogical for Jinja and last redundant.
context_exports = [
"count",
"cycle",
"repeat",
"accumulate",
"chain",
"compress",
"islice",
"starmap",
"tee",
"zip_longest",
"product",
"permutations",
"combinations",
"combinations_with_replacement",
]
return {name: getattr(itertools, name) for name in context_exports}
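A sketch (not in the diff) of what the new modules.itertools exposure enables from Jinja, reusing the ctx dict built in the base-context sketch above; only the functions whitelisted in context_exports are reachable:
pairs = get_rendered(
    "{% for a, b in modules.itertools.product(['x', 'y'], [1, 2]) %}{{ a }}{{ b }} {% endfor %}",
    ctx,
)
# pairs == "x1 x2 y1 y2 "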
def get_context_modules() -> Dict[str, Dict[str, Any]]:
return {
'pytz': get_pytz_module_context(),
'datetime': get_datetime_module_context(),
're': get_re_module_context(),
"pytz": get_pytz_module_context(),
"datetime": get_datetime_module_context(),
"re": get_re_module_context(),
"itertools": get_itertools_module_context(),
}
@@ -90,8 +140,8 @@ class ContextMeta(type):
new_dct = {}
for base in bases:
context_members.update(getattr(base, '_context_members_', {}))
context_attrs.update(getattr(base, '_context_attrs_', {}))
context_members.update(getattr(base, "_context_members_", {}))
context_attrs.update(getattr(base, "_context_attrs_", {}))
for key, value in dct.items():
if isinstance(value, ContextMember):
@@ -100,21 +150,20 @@ class ContextMeta(type):
context_attrs[context_key] = key
value = value.inner
new_dct[key] = value
new_dct['_context_members_'] = context_members
new_dct['_context_attrs_'] = context_attrs
new_dct["_context_members_"] = context_members
new_dct["_context_attrs_"] = context_attrs
return type.__new__(mcls, name, bases, new_dct)
class Var:
UndefinedVarError = "Required var '{}' not found in config:\nVars "\
"supplied to {} = {}"
UndefinedVarError = "Required var '{}' not found in config:\nVars " "supplied to {} = {}"
_VAR_NOTSET = object()
def __init__(
self,
context: Mapping[str, Any],
cli_vars: Mapping[str, Any],
node: Optional[CompiledResource] = None
node: Optional[CompiledResource] = None,
) -> None:
self._context: Mapping[str, Any] = context
self._cli_vars: Mapping[str, Any] = cli_vars
@@ -129,14 +178,12 @@ class Var:
if self._node is not None:
return self._node.name
else:
return '<Configuration>'
return "<Configuration>"
def get_missing_var(self, var_name):
dct = {k: self._merged[k] for k in self._merged}
pretty_vars = json.dumps(dct, sort_keys=True, indent=4)
msg = self.UndefinedVarError.format(
var_name, self.node_name, pretty_vars
)
msg = self.UndefinedVarError.format(var_name, self.node_name, pretty_vars)
raise_compiler_error(msg, self._node)
def has_var(self, var_name: str):
@@ -160,14 +207,16 @@ class Var:
class BaseContext(metaclass=ContextMeta):
# subclass is TargetContext
def __init__(self, cli_vars):
self._ctx = {}
self.cli_vars = cli_vars
self.env_vars = {}
def generate_builtins(self):
builtins: Dict[str, Any] = {}
for key, value in self._context_members_.items():
if hasattr(value, '__get__'):
if hasattr(value, "__get__"):
# handle properties, bound methods, etc
value = value.__get__(self)
builtins[key] = value
@@ -175,9 +224,9 @@ class BaseContext(metaclass=ContextMeta):
# no dbtClassMixin so this is not an actual override
def to_dict(self):
self._ctx['context'] = self._ctx
self._ctx["context"] = self._ctx
builtins = self.generate_builtins()
self._ctx['builtins'] = builtins
self._ctx["builtins"] = builtins
self._ctx.update(builtins)
return self._ctx
@@ -271,33 +320,41 @@ class BaseContext(metaclass=ContextMeta):
return Var(self._ctx, self.cli_vars)
@contextmember
@staticmethod
def env_var(var: str, default: Optional[str] = None) -> str:
def env_var(self, var: str, default: Optional[str] = None) -> str:
"""The env_var() function. Return the environment variable named 'var'.
If there is no such environment variable set, return the default.
If the default is None, raise an exception for an undefined variable.
"""
return_value = None
if var.startswith(SECRET_ENV_PREFIX):
disallow_secret_env_var(var)
if var in os.environ:
return os.environ[var]
return_value = os.environ[var]
elif default is not None:
return default
return_value = default
if return_value is not None:
self.env_vars[var] = return_value
return return_value
else:
msg = f"Env var required but not provided: '{var}'"
undefined_error(msg)
raise_parsing_error(msg)
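A small sketch of the two behavior changes in env_var above: successful lookups are now recorded on the context's env_vars dict, and a missing required variable raises a parsing error instead of a generic undefined error. The purpose of the recording is an assumption noted in the comments:
import os

os.environ["DBT_REGION"] = "us-east-1"

base = BaseContext(cli_vars={})
ctx = base.to_dict()

region = ctx["env_var"]("DBT_REGION", "eu-west-1")   # -> "us-east-1"
# The accessed variable is remembered on the instance; presumably downstream
# code uses this to detect env var changes between invocations.
assert base.env_vars == {"DBT_REGION": "us-east-1"}

# ctx["env_var"]("DBT_MISSING_VAR")  # no default set -> raise_parsing_error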
if os.environ.get("DBT_MACRO_DEBUGGING"):
if os.environ.get('DBT_MACRO_DEBUGGING'):
@contextmember
@staticmethod
def debug():
"""Enter a debugger at this line in the compiled jinja code."""
import sys
import ipdb # type: ignore
frame = sys._getframe(3)
ipdb.set_trace(frame)
return ''
return ""
@contextmember('return')
@contextmember("return")
@staticmethod
def _return(data: Any) -> NoReturn:
"""The `return` function can be used in macros to return data to the
@@ -348,9 +405,7 @@ class BaseContext(metaclass=ContextMeta):
@contextmember
@staticmethod
def tojson(
value: Any, default: Any = None, sort_keys: bool = False
) -> Any:
def tojson(value: Any, default: Any = None, sort_keys: bool = False) -> Any:
"""The `tojson` context method can be used to serialize a Python
object primitive, e.g. a `dict` or `list`, to a JSON string.
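For illustration (not in the diff), tojson called through a rendered context dict such as ctx from the sketches above:
payload = ctx["tojson"]({"b": 2, "a": 1}, sort_keys=True)
# payload == '{"a": 1, "b": 2}'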
@@ -443,10 +498,10 @@ class BaseContext(metaclass=ContextMeta):
{% endmacro %}"
"""
if info:
logger.info(msg)
fire_event(MacroEventInfo(msg=msg))
else:
logger.debug(msg)
return ''
fire_event(MacroEventDebug(msg=msg))
return ""
@contextproperty
def run_started_at(self) -> Optional[datetime.datetime]:
@@ -481,10 +536,7 @@ class BaseContext(metaclass=ContextMeta):
"""invocation_id outputs a UUID generated for this dbt run (useful for
auditing)
"""
if tracking.active_user is not None:
return tracking.active_user.invocation_id
else:
return None
return get_invocation_id()
@contextproperty
def modules(self) -> Dict[str, Any]:
@@ -529,6 +581,24 @@ class BaseContext(metaclass=ContextMeta):
"""
return flags
@contextmember
@staticmethod
def print(msg: str) -> str:
"""Prints a line to stdout.
:param msg: The message to print
> macros/my_log_macro.sql
{% macro some_macro(arg1, arg2) %}
{{ print("Running some_macro: " ~ arg1 ~ ", " ~ arg2) }}
{% endmacro %}"
"""
if not flags.NO_PRINT:
print(msg)
return ""
def generate_base_context(cli_vars: Dict[str, Any]) -> Dict[str, Any]:
ctx = BaseContext(cli_vars)

Some files were not shown because too many files have changed in this diff.