Compare commits


194 Commits

Author SHA1 Message Date
Callum McCann
fb8b161351 adding entity inheritance 2022-12-06 16:20:12 -06:00
Callum McCann
7ecb431278 making dimensions possible! 2022-12-06 13:54:28 -06:00
Callum McCann
792150ff6a let there be entity 2022-12-05 16:02:14 -06:00
leahwicz
85d0b5afc7 Reverting back to older ubuntu image (#6363)
* Reverting back to older ubuntu image

* Updating the structured logging workflow as well
2022-12-02 12:09:46 -05:00
Matthew McKnight
1fbcaa4484 reformatting of test after some spike investigation (#6314)
* reformatting of test after some spike investigation

* reformat code to pull tests back into base class definition, move a test to more appropriate spot
2022-12-01 16:54:58 -06:00
justbldwn
481235a943 clarify error log for number of allowed models in a Python file (#6251) 2022-12-01 14:43:36 -05:00
Michelle Ark
2289e45571 Exposures support metrics (#6342)
* exposures support metrics
2022-12-01 11:01:16 -05:00
dependabot[bot]
b5d303f12a Bump mashumaro[msgpack] from 3.0.4 to 3.1 in /core (#6108)
* Bump mashumaro[msgpack] from 3.0.4 to 3.1.1 in /core

Bumps [mashumaro[msgpack]](https://github.com/Fatal1ty/mashumaro) from 3.0.4 to 3.1.
- [Release notes](https://github.com/Fatal1ty/mashumaro/releases)
- [Commits](https://github.com/Fatal1ty/mashumaro/compare/v3.0.4...v3.1)

---
updated-dependencies:
- dependency-name: mashumaro[msgpack]
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
2022-11-30 17:32:47 -05:00
Mila Page
c3be975783 Ct 288/convert 070 incremental test (#6330)
* Convert incremental schema tests.

* Drop the old test.

* Bad git add. My disappointment is immeasurable and my day has been ruined.

* Adjustments for flake8.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-11-29 12:47:20 -08:00
Mila Page
47c2edb42a Ct 1518/convert 063 relation names tests (#6304)
* Convert old test.

Add documentation. Adapt and reenable previously skipped test.

* Convert test and adapt and comment for current standards.

* Remove old versions of tests.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-11-29 12:25:36 -08:00
Stu Kilgore
b3440417ad Add GHA workflow to build CLI API docs (#6187) 2022-11-29 13:30:47 -06:00
leahwicz
020f639c7a Update stale.yml (#6258) 2022-11-29 09:40:59 -05:00
Mila Page
55db15aba8 Convert test 067. (#6305)
* Convert test 067. One bug outstanding.

* Test now working! Schema needed renaming to avoid 63 char max problems

* Remove old test.

* Add some docs and rewrite.

* Add exception for when audit tables' schema runs over the db limit.

* Code cleanup.

* Revert exception.

* Round out comments.

* Rename what shouldn't be a base class.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-11-29 00:06:07 -08:00
Itamar Hartstein
bce0e7c096 BaseContext: expose md5 function in context (#6247)
* BaseContext: expose md5 function in context

* BaseContext: add return value type

* Add changie entry

* rename "md5" to "local_md5"

* fix test_context.py
2022-11-28 10:23:40 -05:00
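The commit above exposes an md5 helper on the Jinja context (later renamed to local_md5). A minimal sketch of what such a context function could look like, assuming a plain hashlib-backed implementation; the names and usage here are illustrative rather than the exact dbt-core code:

```python
import hashlib


def local_md5(value: str) -> str:
    # Hex MD5 digest of a string, the kind of helper a Jinja context might expose.
    return hashlib.md5(value.encode("utf-8")).hexdigest()


print(local_md5("dbt"))  # e.g. for building deterministic suffixes in templates
```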
Gerda Shank
7d7066466d CT 1537 fix event test and rename a couple of fields (#6293)
* Rename MacroEvent to JinjaLog

* Rename ConnectionClosed/2

* Fix LogSeedResult

* Rename ConnectionLeftOpen events, fix test_events.py

* Update events README.md, add "category" to EventInfo

* Rename GeneralMacroWarning to JinjaLogWarning
2022-11-22 14:54:20 -05:00
Emily Rockman
517576c088 add back in conditional node length check (#6298) 2022-11-21 21:20:55 -08:00
leahwicz
987764858b Revert "Bump python from 3.10.7-slim-bullseye to 3.11.0-slim-bullseye in /docker (#6180)" (#6281)
This reverts commit 8e28f5906e.
2022-11-17 09:14:22 -05:00
FishtownBuildBot
a235abd176 Add new index.html and changelog yaml files from dbt-docs (#6265) 2022-11-16 17:00:33 +01:00
dependabot[bot]
9297e4d55c Update pathspec requirement from ~=0.9.0 to >=0.9,<0.11 in /core (#5917)
* Update pathspec requirement from ~=0.9.0 to >=0.9,<0.11 in /core

Updates the requirements on [pathspec](https://github.com/cpburnz/python-pathspec) to permit the latest version.
- [Release notes](https://github.com/cpburnz/python-pathspec/releases)
- [Changelog](https://github.com/cpburnz/python-pathspec/blob/master/CHANGES.rst)
- [Commits](https://github.com/cpburnz/python-pathspec/compare/v0.9.0...v0.10.1)

---
updated-dependencies:
- dependency-name: pathspec
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-11-15 22:02:37 -05:00
Michelle Ark
eae98677b9 s/gitlab/github for flake8 precommit repo (#6252) 2022-11-15 10:30:00 -05:00
Matthew McKnight
66ac107409 [CT-1262] Convert dbt_debug (#6125)
* init pr for dbt_debug test conversion

* removal of old test

* minor test format change

* add new Base class and Test classes

* reformatting test, new method for capsys and error message to check, todo fix badproject

* reformatting tests, ready for review

* checking yaml file, and small reformat

* modifying since update wasn't working in ci/cd
2022-11-14 14:22:48 -06:00
Michelle Ark
39c5c42215 converting 044_test_run_operations (#6122)
* converting 044_test_run_operations
2022-11-14 10:39:57 -05:00
dependabot[bot]
9f280a8469 Update colorama requirement from <0.4.6,>=0.3.9 to >=0.3.9,<0.4.7 in /core (#6144)
* Update colorama requirement in /core

Updates the requirements on [colorama](https://github.com/tartley/colorama) to permit the latest version.
- [Release notes](https://github.com/tartley/colorama/releases)
- [Changelog](https://github.com/tartley/colorama/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/tartley/colorama/compare/0.3.9...0.4.6)

---
updated-dependencies:
- dependency-name: colorama
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-11-13 09:57:33 -05:00
Joe Berni
73116fb816 feature/favor-state-node (#5859) 2022-11-09 10:58:01 -06:00
Stu Kilgore
f02243506d Convert postgres index tests (#6228) 2022-11-08 15:30:29 -06:00
Stu Kilgore
d5e9ce1797 Convert color tests to pytest (#6230) 2022-11-08 15:25:57 -06:00
Stu Kilgore
4e786184d2 Convert threading tests to pytest (#6226) 2022-11-08 08:56:10 -06:00
Chenyu Li
930bd3541e properly track hook running (#6059) 2022-11-07 10:44:29 -06:00
Gerda Shank
6c76137da4 CT 1443 remove root path (#6172)
* Remove root_path

* Bump manifest schema to 8

* Update tests and compatibility utility for v8, root_path removal
2022-11-04 16:38:26 -04:00
Gerda Shank
68d06d8a9c Combine various print result log events with different levels (#6174)
* Combine various print result log events with different levels

* Changie

* more merge cleanup

* Specify DynamicLevel for event classes that must specify level
2022-11-04 14:26:37 -04:00
Rachel
d0543c9242 Updates lib to use new profile name functionality (#6202)
* Updates lib to use new profile name functionality

* Adds changie entry

* Fixes formatting
2022-11-04 10:05:24 -07:00
Michelle Ark
cfad27f963 add typing to DepsTask.run (#6192) 2022-11-03 17:35:16 -04:00
Emily Rockman
c3ccbe3357 add python version and upgrade action (#6204) 2022-11-03 09:13:00 -05:00
dependabot[bot]
8e28f5906e Bump python from 3.10.7-slim-bullseye to 3.11.0-slim-bullseye in /docker (#6180)
* Bump python from 3.10.7-slim-bullseye to 3.11.0-slim-bullseye in /docker

Bumps python from 3.10.7-slim-bullseye to 3.11.0-slim-bullseye.

---
updated-dependencies:
- dependency-name: python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-11-02 08:40:51 -07:00
FishtownBuildBot
d23285b4ba Add new index.html and changelog yaml files from dbt-docs (#6112) 2022-11-02 08:36:56 -07:00
Michelle Ark
a42748433d converting 023_exit_codes_tests (#6105)
* converting 023_exit_codes_tests

* use packages fixture, clean up test names
2022-11-01 16:26:12 -04:00
Emily Rockman
be4a91a0fe Convert messages to struct logs (#6064)
* Initial structured logging changes

* remove "this" from core/dbt/events/functions.py

* CT-1047: Fix execution_time definitions to use float

* CT-1047: Revert unintended checking of changes to functions.py

* WIP

* first pass to resolve circular deps

* more circular dep resolution

* remove a bunch of duplication

* move message into log line

* update comments

* fix field that went missing during rebase

* remove double import

* remove some comments and extra code

* fix pre-commit

* rework deprecations

* WIP converting messages

* WIP converting messages

* remove stray comment

* WIP more message conversion

* WIP more message conversion

* tweak the messages

* convert last message

* rename

* remove warn_or_raise as never used

* add fake calls to all new events

* fix some tests

* put back deprecation

* restore deprecation fully

* fix unit test

* fix log levels

* remove some skipped ids

* fix macro log function

* fix how messages are built to match expected outcome

* fix expected test message

* small fixes from reviews

* fix conflict resolution in UI

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
Co-authored-by: Peter Allen Webb <peter.webb@dbtlabs.com>
2022-10-31 12:04:56 -05:00
Emily Rockman
8145eed603 revert to community action (#6163) 2022-10-27 16:10:58 -05:00
Emily Rockman
fc00239f36 point to correct workflow (#6161)
* point to correct workflow

* add inputs
2022-10-27 14:05:09 -05:00
Ian Knox
77dfec7214 more ergonomic profile name handling (#6157) 2022-10-27 10:49:27 -05:00
Emily Rockman
7b73264ec8 switch out to use internal action for triage labels (#6120)
* switch out to use our action

* point to main
2022-10-27 08:33:15 -05:00
Mila Page
1916784287 Ct 1167/030 statement tests conversion (#6109)
* Convert test to functional set.

* Remove old statement tests from integration test set.

* Nix whitespace

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-10-26 03:37:44 -07:00
Ian Knox
c2856017a1 [BUGFIX] Force tox to update pip (fixes psycopg2-binary @ 2.9.5) (#6134) 2022-10-25 13:01:38 -05:00
Michelle Ark
17b82661d2 convert 027 cycle test (#6094)
* convert 027 cycle test

* remove no-op expect_pass=False

* remove postgres from test names
2022-10-21 11:41:51 -04:00
Michelle Ark
6c8609499a Add 'michelleark' to changie's core_team list (#6084) 2022-10-20 14:41:41 -04:00
Peter Webb
53ae325576 CT-1099: Migrate test 071_commented_yaml_regression_3568_tests (#6106) 2022-10-20 12:43:30 -04:00
Mila Page
a7670a3ab9 Add unit tests for recent stringifier functors added to events library. (#6095)
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-10-19 22:52:32 -07:00
Mila Page
ff2f1f42c3 Working solution serialization bug. (#5874)
* Create functors to initialize event types with str-type member attributes. Before this change, the spec of various classes expected base_msg and msg params to be strs. This assumption did not always hold true; post_init hooks ensure the spec is obeyed.
* Add new changelog.
* Add msg type change functor to a few other events that could use it.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-10-18 12:20:30 -07:00
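The serialization fix above relies on post-init hooks coercing message attributes to str. A minimal sketch of that pattern, assuming a dataclass-based event; the class and field names are illustrative, not the actual dbt events code:

```python
from dataclasses import dataclass


@dataclass
class ExampleEvent:
    # The event spec expects these to be strings, but callers sometimes pass
    # exceptions or other objects; __post_init__ coerces them so the spec holds.
    base_msg: str
    msg: str

    def __post_init__(self) -> None:
        if not isinstance(self.base_msg, str):
            self.base_msg = str(self.base_msg)
        if not isinstance(self.msg, str):
            self.msg = str(self.msg)


evt = ExampleEvent(base_msg=Exception("boom"), msg=42)
print(type(evt.base_msg), type(evt.msg))  # <class 'str'> <class 'str'>
```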
Luke Bassett
35f7975d8f Updated string formatting on non-f-strings. (#6086)
* Updated string formatting on non-f-strings.

Found all cases of strings separated by white space on a single line and
removed white space separation. EX: "hello " "world" -> "hello world".

* add changelog entry
2022-10-17 15:58:31 -05:00
Eve Johns
a9c8bc0e0a f-string cleanup #6068 (#6082)
* fix f string issue

* removed one space

* Add changelog

* fixed return format

Co-authored-by: Leah Antkiewicz <leah.antkiewicz@fishtownanalytics.com>
2022-10-17 16:58:04 -04:00
Peter Webb
73aebd8159 CT-625: Fail with clear message for invalid materialized vals (#6025)
* CT-625: Fail with clear message for invalid materialized vals

* CT-625: Increase test coverage, run pre-commit checks

* CT-625: run black on problem file

* CT-625: Add changelog entry

* CT-625: Remove test that didn't make sense
2022-10-14 16:14:32 -04:00
Gerda Shank
9b84b6e2e8 Initial structured logging changes (#5954)
Co-authored-by: Peter Allen Webb <peter.webb@dbtlabs.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2022-10-14 13:57:46 -04:00
colin-rogers-dbt
095997913e migrate 034 integration test to functional (#6054)
* migrate 034 integration test to functional

* formatting fixes

* move to adapter directory

* add extra args
2022-10-12 15:24:51 -07:00
pgoslatara
6de1d29cf9 Allowing partitions in external table to be a list (#5930)
* Allowing partitions in external table to be a list

* Adding changelog entry
2022-10-12 15:23:58 -07:00
Mila Page
87db12d05b Ct 1049/migrate 016 macro tests (#5780)
* Migrate test

* Remove old integration test.

* Simplify object definitions since we enforce python 3

* Factor many fixtures into a file.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-10-12 13:37:08 -07:00
Stu Kilgore
dcc70f314f Generate API docs (#6022) 2022-10-12 08:56:24 -05:00
Matteo Ferrando
dcd6ef733b fix: check length of args of python model function before accessing it (#6042)
* fix: check length of args of python model function before accessing it

* Add test

* changie
2022-10-11 14:34:27 -07:00
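The fix above guards against indexing into a Python model function's argument list when fewer arguments than expected are present. A minimal sketch of that kind of length check using the ast module; this is illustrative, not the parser's exact code:

```python
import ast

source = "def model(dbt, session):\n    return dbt.ref('my_model')\n"
tree = ast.parse(source)
func = next(
    node for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef) and node.name == "model"
)

# Check the argument count before indexing, instead of assuming two args exist.
if len(func.args.args) != 2:
    raise ValueError(f"model function expects 2 args, found {len(func.args.args)}")
dbt_arg, session_arg = (a.arg for a in func.args.args)
print(dbt_arg, session_arg)  # dbt session
```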
Emily Rockman
85e415f50f Validate enabled config is a bool for sources, exposures and metrics (#6038)
* clean up tests, validate enabled config is a bool

* clean up code, add model test

* add changelog

* move validation into proper validation methods

* fix source defn validation

* add optional tables logic

* use existing validation
2022-10-11 16:10:50 -05:00
Matthew McKnight
2c684247e9 [CT-1271] Test Query Comment conversion (#5971)
* init query_comment test conversion pr

* importing model and macro, changing to new project_config_update, tests passing locally for core

* delete old integration test

* trying to test against other adapters

* update to main

* file rename

* file rename

* import change

* move query_comment directory to functional/

* move test directory back to adapter zone

* update to main

* updating core test based on feedback from @gshank

* testing removing target checking
2022-10-11 14:58:32 -05:00
Maximilian Roos
3d09531cda Make mypy pass from the command line (#5983)
* Make `mypy` pass from the command line
2022-10-11 13:10:15 -05:00
leahwicz
fc1227e0b1 Adding 1.3.latest branch to testing matrix (#6045) 2022-10-11 10:44:16 -04:00
Maximilian Roos
dc96352493 Add dmypy cache to gitignore (#5978)
* Add dmypy cache to gitignore
2022-10-10 09:14:55 -05:00
Jared Rimmer
725cf81af6 Add friendlier error messages when packages yml is malformed (#5812)
* Add friendlier error messages when packages yml is malformed

* Add Changelog

* Add validate classmethod to PackageConfig with tests

* Fix failing tests
2022-10-10 08:44:31 -05:00
Emily Rockman
558468e854 Consolidate exception (#6024)
* combine repeated exception logic

* fix params

* changelog
2022-10-07 13:07:18 -05:00
dependabot[bot]
95ad1ca4f8 Bump black from 22.8.0 to 22.10.0 (#6019)
* Bump black from 22.8.0 to 22.10.0

Bumps [black](https://github.com/psf/black) from 22.8.0 to 22.10.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/22.8.0...22.10.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-10-07 08:59:19 -05:00
Maximilian Roos
02a69c8f4f Put black config in explicit config (#5947) 2022-10-06 11:47:04 -05:00
dave-connors-3
7dbdfc88e0 add -f for seeds (#5991)
* add -f for seeds

* ole changie

* more specific changelog description

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2022-10-05 16:13:21 -05:00
Andy
2002791ec1 Andy/fix init comment (#5684)
* edited comment to correctly specify that views are set, not tables

* updated init test to match starter project change

* added changelog

* update 3 other occurrences of the init test for text update
2022-10-05 15:46:27 -05:00
Emily Rockman
29d96bd6bf Convert 019_analysis_tests (#6006)
* convert 019 test

* add duplicate analysis test
2022-10-05 13:41:49 -05:00
Peter Webb
d01245133a CT-1150: Include flat_graph in manifest.deepcopy(), tests (#5975)
* CT-1150: Include flat_graph in manifest.deepcopy(), tests

* append to changelog
2022-10-05 13:56:29 -04:00
Emily Rockman
23c8ac230c convert 025 to new framework, add metrics dupe test (#6004) 2022-10-05 08:42:07 -05:00
Gerda Shank
43d9ee3470 Remove 015_cli_invocation_tests directory (#5961) 2022-10-03 13:31:45 -04:00
colin-rogers-dbt
50fe25d230 consolidate timestamp logic (#5979)
* Consolidate date macros into dates.sql

* rename to timestamps.sql

* fix whitespace + add changie

* cleanup macros and add testing

* fix whitespace

* remove now macro

* fix functional test

* remove local config

* make snowflake backwards compat return utc

* move timestamps to adaptor base tests

* move backcompat macros to respective adapters

* change timestamp param to source_timestamp

* move timestamps.py to utils

* update changie.yaml

* make expected schema a fixture

* formatting

* add debug message to assert

* fix changie.yaml

* Update tests/adapter/dbt/tests/adapter/utils/test_timestamps.py

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>

* Update plugins/postgres/dbt/include/postgres/macros/timestamps.sql

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>

* Update .changie.yaml

* add backcompat utc

* remove current_timestamp_in_utc

* remove convert_timezone

* add _in_utc_backcompat

* fix macro_calls typo

* add expected sql validation to test_timestamps

* make expected_sql optional

* improve sql check string comparison test

* remove extraneous test file

* add timestamp casting back

* Update plugins/postgres/dbt/include/postgres/macros/adapters.sql

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>

* add check_relation_has_expected_schema to comments

* fix whitespace

* remove default impl of current_timestamp

* manual changie log fix

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
2022-09-30 16:37:24 -07:00
colin-rogers-dbt
a79960fa64 Consolidate date macros into timestamps.sql (#5838)
* Consolidate date macros into dates.sql

* rename to timestamps.sql

* fix whitespace + add changie

* cleanup macros and add testing

* fix whitespace

* remove now macro

* fix functional test

* remove local config

* make snowflake backwards compat return utc

* move timestamps to adaptor base tests

* move backcompat macros to respective adapters

* change timestamp param to source_timestamp

* move timestamps.py to utils

* update changie.yaml

* make expected schema a fixture

* formatting

* add debug message to assert

* fix changie.yaml

* Update tests/adapter/dbt/tests/adapter/utils/test_timestamps.py

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>

* Update plugins/postgres/dbt/include/postgres/macros/timestamps.sql

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>

* Update .changie.yaml

* add backcompat utc

* remove current_timestamp_in_utc

* remove convert_timezone

* add _in_utc_backcompat

* fix macro_calls typo

* add expected sql validation to test_timestamps

* make expected_sql optional

* improve sql check string comparison test

* remove extraneous test file

* add timestamp casting back

* Update plugins/postgres/dbt/include/postgres/macros/adapters.sql

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>

* add check_relation_has_expected_schema to comments

* fix whitespace

* remove default impl of current_timestamp

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
2022-09-30 15:33:12 -07:00
Emily Rockman
fa4f9d3d97 Disabled Models in schema files (#5868)
* clean up debugging

* reword some comments

* changelog

* add more tests

* move around the manifest.node

* fix typos

* all tests passing

* move logic for moving around nodes

* add tests

* more cleanup

* fix failing pp test

* remove comments

* add more tests, patch all disabled nodes

* fix test for windows

* fix node processing to not overwrite enabled nodes

* add checking disabled in pp, fix error msg

* stop deepcopying all nodes when processing

* update error message
2022-09-29 15:24:08 -05:00
Emily Rockman
73385720b4 Update 0.0.0.md (#5973) 2022-09-29 09:57:22 -05:00
github-actions[bot]
c2ab2971b0 Bumping version to 1.4.0a1 and generate changelog (#5965)
* Bumping version to 1.4.0a1 and generate CHANGELOG

* Updating Changelog files

* Removed Changelog entry for alpha

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: Leah Antkiewicz <leah.antkiewicz@fishtownanalytics.com>
2022-09-29 09:14:39 -04:00
leahwicz
0e60fc1078 Update Version Bump to include Homebrew in PATH (#5963) 2022-09-28 14:18:29 -04:00
Matthew McKnight
4f2fef1ece [CT-1166] test_aliases conversion (#5884)
* init pr for 026 test conversion

* removing old test, got all tests set up, need to find best way to handle regex in new test and see what we would actually want to do to check we didn't run anything against

* changes to test_alias_dupe_thorews_exeption passing locally now

* adding test cases for final test

* following the create-new-schema method, tests are passing; up for review for core code

* moving alias test to adapter zone

* adding Base Classes

* changing ref to fixtures

* add double check to test

* minor change to alt schema name formation, removal of unneeded setup fixture

* typo in model names

* update to main

* pull models/schemas/macros into a fixtures file
2022-09-28 11:03:12 -05:00
Gerda Shank
3562637984 Remove parsers/source.py type ignores (#5953) 2022-09-28 11:24:59 -04:00
Callum McCann
17aca39e1c Adding metric expression validation (#5873)
* adding validation

* changie

* code formatting

* updating for review

* updating tests
2022-09-27 12:38:03 -04:00
Yoshiaki Ishihara
59744f18bb Fix typos of comments in core/dbt/adapters/ (#5693) 2022-09-27 09:10:24 -07:00
Rachel
f1326f526c Runtime: Prevent introspective queries at compile (SL only) (#5926)
* Preliminary changes to keep compile from connecting to the warehouse for runtime calls

* Adds option to lib to skip connecting to warehouse for compile; adds prelim tests

* Removes unused imports

* Simplifies test and renames to SqlCompileRunnerNoIntrospection

* Updates name in tests

* Spacing

* Updates test to check for adapter connection call instead of compile and execute

* Removes commented line

* Fixes test names

* Updates plugin to postgres type as snowflake isn't available

* Fixes docstring

* Fixes formatting

* Moves conditional logic out of class

* Fixes formatting

* Removes commented line

* Moves import

* Unmoves import

* Updates changelog

* Adds further info to method docstring
2022-09-27 09:49:55 -04:00
Jeremy Cohen
834ac716fd Prefer internal macros when called explicitly (#5907)
* Add functional test

* Prefer internal macros when called explicitly

* Add changelog entry

* update tests format

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2022-09-27 08:50:55 -04:00
Doug Beatty
0487b96098 Array macros (#5823)
* Helper macro to cast from array to string

* Default implementations and tests for array macros

* Trim Trailing Whitespace

* Changelog entry

* Remove dependence upon `cast_array_to_string` macro

* pre-commit fixes

* Remove `cast_array_to_string` macro

* pre-commit fix

* Trivial direct test; array_concat/append test non-triviallly indirectly

* Remove vestigial `lstrip`
2022-09-26 13:40:15 -06:00
FishtownBuildBot
dbd36f06e4 Add new index.html and changelog yaml files from dbt-docs (#5925) 2022-09-26 14:00:04 -05:00
dave-connors-3
38ada8a68e merge exclude columns for incremental models (#5457)
* exclude cols like in dbt_utils.star

* dispatch macro

* changelog entry

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
2022-09-26 14:49:12 -04:00
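The merge-exclude-columns feature above filters which columns land in the merge's update clause for incremental models. A minimal Python sketch of the include/exclude selection logic, using simple column-name lists; this mirrors the idea rather than the actual macro code:

```python
def merge_update_columns(dest_columns, update_columns=None, exclude_columns=None):
    # Pick the columns for the merge's UPDATE clause: an explicit include list wins,
    # otherwise start from all destination columns and drop the excluded ones.
    if update_columns and exclude_columns:
        raise ValueError("Cannot set both update_columns and exclude_columns")
    if update_columns:
        return list(update_columns)
    excluded = {c.lower() for c in (exclude_columns or [])}
    return [c for c in dest_columns if c.lower() not in excluded]


print(merge_update_columns(["id", "name", "updated_at"], exclude_columns=["updated_at"]))
# ['id', 'name']
```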
Doug Beatty
e58edaab2d Test for Koalas DataFrames (#5928) 2022-09-26 12:41:56 -06:00
Doug Beatty
c202e005cd Tests for current_timestamp (#5935)
* Tests for `current_timestamp`

* Black formatting
2022-09-26 12:31:36 -06:00
Peter Webb
8129862b3c CT-1221 add handle to changie (#5923)
* Add 'peterallenwebb' to changie's core_team list.

* fix accidental line break
2022-09-26 11:44:20 -04:00
Drew Banin
4e8aa007cf Fix adapter reset race condition in lib.py (#5921)
* (#5919) Fix adapter reset race condition in lib.py

* run black

* changie
2022-09-26 10:26:20 -04:00
Doug Beatty
fe88bfabbf Click CLI profiles directory (#5896)
* Default directories for projects and profiles

* Re-write of get_nearest_project_dir()

* Trim Trailing Whitespace

* Functionally equivalent resolvers
2022-09-24 10:47:47 -06:00
Gerda Shank
5328a64df2 CT 815 partial parsing handling of deleted metrics (#5920)
* Update delete_schema_metric to schedule referencing nodes for reparsing

* Changie
2022-09-23 18:48:00 -04:00
Chenyu Li
87c9974be1 improve error message for parsing args (#5895)
* improve error message for parsing args

* update error message

* Update models.py

* skip stack_trace for all dbt Exceptions
2022-09-23 13:31:10 -07:00
Emily Rockman
f3f509da92 update disabled metrics/exposures to use add_disabled (#5909)
* update disabled metrics/exposures to use add_disabled

* put back func fo tests

* add pp logic

* switch elif

* fix test name

* return node
2022-09-23 13:24:37 -05:00
dave-connors-3
5e8dcec2c5 update flag to -f (#5908) 2022-09-23 20:20:29 +02:00
Chenyu Li
56783446db PySpark dataframe related tests (#5906)
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
2022-09-22 16:27:27 -07:00
Chenyu Li
207cc0383d initial support for a .dbtignore (#5897) 2022-09-22 09:06:35 -07:00
leahwicz
49ecd6a6a4 Adding missing permissions on GHA (#5870)
* Adding missing permissions on GHA

* Adding read all permissions explicitly
2022-09-21 14:53:27 -04:00
dave-connors-3
c109f39d82 add optional shorthand to full refresh command (#5879)
* add optional shorthand to full refresh command

* changie

* replace full refresh with shorthand in functional test
2022-09-21 14:37:28 -04:00
Ian Knox
fd778dceb5 Profiling and Adapter management work with Click (#5892) 2022-09-21 11:23:26 -05:00
Emily Rockman
e402241e0e Validate exposure names and add label (#5844)
* first pass

* add label and name validation

* changelog

* fix tests

* convert ParsingError to Deprecation

* fix bug where label did not end up in parsed node

* update deprecation msg
2022-09-21 10:24:44 -05:00
Emily Rockman
a6c37c948d Add name validation for metrics (#5841)
* add tests, add name validation

* tweak test

* small update

* change to 250 char limit, tweak message
2022-09-21 09:31:13 -05:00
Daniel Messias
fd886cb7dd ConfigSelectorMethod should check for bools (#5889)
* ConfigSelectorMethod should check for bools

* Add changelog entry

* Add support for lists and test cases

* Typo and formatting in test

* pre-commit linting
2022-09-21 10:20:14 -04:00
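The selector fix above compares config values that may be booleans or lists, not just strings. A minimal sketch of a comparison that tolerates those cases; the function name and semantics are illustrative, not the actual ConfigSelectorMethod code:

```python
def config_matches(value, wanted: str) -> bool:
    # Compare a node config value against the selector string, tolerating
    # booleans and lists rather than assuming everything is a string.
    if isinstance(value, bool):
        return str(value).lower() == wanted.lower()
    if isinstance(value, (list, tuple, set)):
        return any(config_matches(item, wanted) for item in value)
    return str(value) == wanted


print(config_matches(True, "true"), config_matches(["tag_a", "tag_b"], "tag_b"))  # True True
```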
James McNeill
b089a471b7 implement type_boolean macro (#5875)
* implement type_boolean macro

* changie result
2022-09-20 19:24:25 -06:00
Emily Rockman
ae294b643b Manifest deserialization error with disabled models (#5891)
* account for disabled nodes in renaming attributes

* tweak naming
2022-09-20 14:27:25 -05:00
colin-rogers-dbt
0bd6df0d1b [CT-1101] 024_custom_schema_tests (#5828)
* create functional custom schema tests

* delete old integration tests

* changie log

* formatting fixes

* delete changie entry and errant comment
2022-09-20 10:42:07 -07:00
Gerda Shank
7b1d61c956 Ct 1191 event history cleanup (#5858)
* Change Exceptions in events to strings. Refactor event_buffer_handling.

* Changie

* fix fire_event call MainEncounteredError

* Set EventBufferFull message when event buffer >= 10,000
2022-09-20 12:44:33 -04:00
Chenyu Li
646a0c704f restrict python submission (#5822)
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
2022-09-20 09:39:20 -07:00
Doug Beatty
bbf4fc30a5 Default to current working directory for profiles.yml and fall back to ~/.dbt (#5717)
* Method for capturing standard out during testing (rather than logs)

* Allow dbt exit code assertion to be optional

* Verify priority order to search for profiles.yml configuration

* Updates after pre-commit checks

* Test searching for profiles.yml within the dbt project directory before `~/.dbt/`

* Refactor `dbt debug` to move to the project directory prior to looking up profiles directory

* Search the current working directory for profiles.yml

* Changelog

* Formatting with Black

* Move `run_dbt_and_capture_stdout` into the test case

* Update CLI help text

* Unify separate DEFAULT_PROFILES_DIR definitions

* Remove unused PROFILE_DIR_MESSAGE

* Remove unused DEFAULT_PROFILES_DIR

* Use shared definition of DEFAULT_PROFILES_DIR

* Define global vs. local profiles location and dynamically determine the default

* Restore original

* Remove function for determining the default profiles directory
2022-09-20 08:44:15 -04:00
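The profiles-directory change above searches the current working directory for profiles.yml before falling back to ~/.dbt. A minimal sketch of that resolution order, assuming only those two locations; this is a simplified stand-in for the real lookup:

```python
from pathlib import Path
from typing import Optional


def default_profiles_dir(cwd: Optional[Path] = None) -> Path:
    # Prefer a profiles.yml in the current working directory; otherwise fall back to ~/.dbt.
    cwd = cwd or Path.cwd()
    if (cwd / "profiles.yml").exists():
        return cwd
    return Path.home() / ".dbt"


print(default_profiles_dir())
```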
jared-rimmer
6baaa2bcb0 Add metadata env method to provider context (#5794)
* Add dbt_metadata_envs contextproperty to ProviderContext

* Refactor helper methods into functions

* Fix code quality failures

* Add Changelog
2022-09-20 08:41:44 -04:00
Chenyu Li
13a595722a add support for custom file ending (#5845) 2022-09-19 13:58:38 -07:00
Sam Debruyn
3680b6ad0e remove source quoting setting in adapter tests (#5839)
* remove source quoting setting in adapter tests

* changelog entry

* formatting
2022-09-19 11:00:12 -07:00
Ian Knox
4c29d48d1c Flags work with Click (#5790) 2022-09-19 12:36:38 -05:00
Chenyu Li
e00eb9aa3a fix multiple dbt.config.get in python model (#5850)
* fix multiple dbt.config.get in python model

Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
2022-09-16 15:07:59 -07:00
Jeremy Cohen
f5a94fc774 Back/fwd compatibility for renamed metrics attributes (#5825)
* Naive handling for metric attr renames

* Add tests for bwd/fwd compatibility

* Add deprecation

* Add changelog entry

* PR feedback

* Small fixups

* emmyoop's suggestions

Co-authored-by: Callum McCann <cmccann51@gmail.com>
2022-09-16 11:21:35 +02:00
Sam Debruyn
b98af4ce17 remove key as reserved keyword from test_bool_or (#5818) 2022-09-15 09:44:25 -05:00
Matthew McKnight
b0f8d3d2f1 [CT-1100] 021_test_concurrency test conversion (#5753)
* init push for 021_test_concurrency conversion

* ref to self, delete old integration tests, core passing locally

* creating base class to send setup to snowflake

* making changes to store all setup in core, todo: remove util changes after 1050 is merged

* swap sql seeds to csv

* white space removal

* rewriting seed to see if it fixes issue in snowflake

* attempt to rewrite file for test in snowflake

* update to main

* remove unneeded variable to seeds

* remove unneeded snowflake specific code
2022-09-14 15:49:36 -05:00
Emily Rockman
6c4577f44e Add config to disable metrics/exposures (#5815)
* first pass adding disabled functionality to metrics and exposures

* first pass at getting metrics disabled

* add unsaved file

* fix up comments

* Delete tmp.csv

* fix test

* add exposure logic, fix merge from main

* change when nodes are added to manifest, finish tests

* add changelog

* removed unused code

* minor cleanup
2022-09-14 14:35:38 -05:00
Matthew McKnight
89ee5962f5 [CT-1050] convert 020_ephemeral_test (#5699)
* init file creation for test_ephemeral conversion

* creating base class to run seed through and pass along to classes to test against

* laid out basic flow of tests, need to finish by figuring out how to handle the assertTrue sections and fix error that's occurring

* added creation and comparison of sql and expected result, seeing issue with extra appended test_ on some and issue with error handling regarding expect pass

* working on fixing view structure

* update to expected_sql file

* update to expected_sql file

* directory rename; close on all tests, need to fix the test_test_ name change for the first two tests and figure out why the new test is reporting error instead of skipped in status

* renamed expected_sql to include the test_test_ephemeral style name, organized how models are imported into test classes

* move ephemeral functional test to adapter zone

* trying to include the BaseEphemeralMulti class to send to snowflake

* trying to fix snowflake test

* trying to fix snowflake test

* creation of second Base class to feed into others for testing purposes

* found way to check type of warehouse to make data type change for snowflake

* move seed into fixture, to be able to import it from core for adapter tests

* convert to csv and get test passing in core

* remove snowflake specific stuff from util

* remove whitespace

* update to main
2022-09-14 12:00:02 -05:00
Ian Knox
a096202b28 Complete CLI modeling for Click (#5789) 2022-09-14 10:27:47 -05:00
Stu Kilgore
7da7c2d692 Convert default selectors tests to pytest (#5820) 2022-09-13 11:47:13 -05:00
dependabot[bot]
1db48b3cca Bump python from 3.10.6-slim-bullseye to 3.10.7-slim-bullseye in /docker (#5805)
* Bump python from 3.10.6-slim-bullseye to 3.10.7-slim-bullseye in /docker

Bumps python from 3.10.6-slim-bullseye to 3.10.7-slim-bullseye.

---
updated-dependencies:
- dependency-name: python
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-09-12 10:58:24 -07:00
Chenyu Li
567847a5b0 add a base python job helper class (#5802)
* add a base python job helper class

* fix comment, add changelog

* fix quote for adapters
2022-09-12 10:44:19 -07:00
Sam Debruyn
9894c04d38 fix dead anchor link in PR template (#5814)
* fix dead anchor link in PR template

The link did not go to the anchor directly, now it does

* changelog entry
2022-09-12 11:08:30 -04:00
Callum McCann
b26280d1cf Altering Window Metric Attribute To Match Freshness Tests (#5793)
* changing window spec

* more updates

* adding to v7 json?

* chenyu rules

* updating for formatting

* updating metric deferral test
2022-09-09 16:33:32 -07:00
Callum McCann
cfece2cf51 Renaming Attributes In Metric Spec (#5775)
* making updates - see what fails

* updating tests

* adding timestamp to ok_metric_no_model

* adding changie and fixing description error

* test fixes

* updating schema renderer

* fixing test_yaml_render

* file cleaning and window tests
2022-09-09 14:59:52 -04:00
Stu Kilgore
79da002c3c Fix warnings as errors during tests (#5800)
Added RunResultWarningMessage event to support this change.
2022-09-09 12:50:15 -05:00
Bertjan Broeksema
e3f827513f Remove tmp file after test passed (#5749)
* Remove tmp file after test passed

* Add changelog entry
2022-09-09 11:14:43 -04:00
Gerda Shank
10b2a7e7ff Convert test/integration/074_postgres_unlogged_table_tests (#5752)
* Convert test/integration//074_postgres_unlogged_table_tests

* Remove old test
2022-09-08 16:54:03 -04:00
jared-rimmer
82c8d6a7a8 Add invocation_args_dict to ProviderContext (#5782)
* Add invocation_args_to_dict to ProviderContext

* Change invocation_args_to_dict contextproperty to invocation_args_dict

* Fix invocation_args_dict builtin test

* Add CHANGELOG entry

* Fix formatting
2022-09-08 15:31:40 -04:00
Gerda Shank
c994717cbc Call build_flat_graph in merge_from_artifact (#5786) 2022-09-08 15:30:26 -04:00
Callum McCann
e3452b9a8f Add Window Attribute for Metrics (#5722)
* file changes

* changing to window

* adding test

* adding changie for feature

* fixing commits

* fixing tests

* adding timestamp

* fixing graph unparsed

* changing default value
2022-09-07 10:45:08 -04:00
Aram Panasenco
e95e36d63b Include py.typed in MANIFEST.in (#5703)
This enables packages that install dbt-core from pypi to use mypy.
2022-09-06 22:03:12 -04:00
jared-rimmer
74f7416144 Add extra rm command in make clean to remove all .coverage files (#5759)
* Add extra rm command in make clean to remove all .coverage files

* Add Changie entry
2022-09-06 21:36:35 -04:00
Mila Page
1feeb804f4 Update makefile to match our CI (#5763)
* Add structured logging test and provide CI env vars to integration conditionally.
* Add the crazy inline if make feature and ax unneeded variable

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-09-06 15:42:50 -07:00
Mila Page
0f6e4f0e32 Ct 866/migrate hook tests fixed (#5760)
* Finish converting first test file.
* Finish test conversion.
* Remove old integration hook tests.
* Move location of schema.yml to models directory.
* fix snapshot delete test that was failing
* Add the extra env var check for our CI.
* Add changelog
* Remove naive json flag check and instead force all integration tests to check for environment variables using flag routine.
* Revise the changelog to be more of an explanation.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-09-06 12:22:29 -07:00
Emily Rockman
2b44c2b456 Convert experimental parser tests (#5772)
* WIP

* fixed up tests

* remove old test
2022-09-06 14:11:22 -05:00
leahwicz
2bb31ade39 Update release-branch-tests.yml (#5767) 2022-09-06 14:46:53 -04:00
Mila Page
0ce12405c0 Move docs in from Jinja. (#5762)
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-09-06 10:05:05 -07:00
dependabot[bot]
b8c13e05db Bump black from 22.6.0 to 22.8.0 (#5750)
* Bump black from 22.6.0 to 22.8.0

Bumps [black](https://github.com/psf/black) from 22.6.0 to 22.8.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/22.6.0...22.8.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

* Remove newline

* Delete Dependency-20220901-154946.yaml

* Add automated changelog yaml from template for bot PR

* Delete Dependency-20220906-132643.yaml

* Add automated changelog yaml from template for bot PR

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2022-09-06 11:31:13 -05:00
Emily Rockman
64268d2f9b Fix label selection syntax and trigger (#5754)
* Fix label selection syntax and trigger

* update version
2022-09-06 08:21:21 -05:00
Jeremy Cohen
8c8be68701 Roadmap update (Aug 2022) (#5748)
* Add dbt Core roadmap as of August 2022

* Cody intro

* Florian intro

* Lint my markdown

* add blurb on 1.5+ for Python next steps

* Revert "add blurb on 1.5+ for Python next steps"

This reverts commit 1659a5a727.

* PR feedback, self review

Co-authored-by: Cody Peterson <cody.dkdc2@gmail.com>
Co-authored-by: Florian Eiden <florian.eiden@dbtlabs.com>
2022-09-01 00:31:51 +02:00
Doug Beatty
1df713fee9 Test priority order of profiles directory configuration (#5715)
* Method for capturing standard out during testing (rather than logs)

* Allow dbt exit code assertion to be optional

* Verify priority order to search for profiles.yml configuration

* Updates after pre-commit checks

* Move `run_dbt_and_capture_stdout` into the test case
2022-08-30 17:18:17 -06:00
Gerda Shank
758afd4071 ADR for why we're using betterproto for protobuf (#5726) 2022-08-30 12:58:44 -04:00
Jeremy Cohen
0f9200d356 Update team ownership (#5694) 2022-08-30 15:24:34 +02:00
github-actions[bot]
5f59ff1254 Bumping version to 1.3.0b2 and generate changelog (#5724)
* Bumping version to 1.3.0b2 and generate CHANGELOG

* Remove newlines

* Remove newline

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2022-08-29 11:11:35 -04:00
dependabot[bot]
49e7bdbef9 Bump mashumaro[msgpack] from 3.0.3 to 3.0.4 in /core (#5649)
* Bump mashumaro[msgpack] from 3.0.3 to 3.0.4 in /core

Bumps [mashumaro[msgpack]](https://github.com/Fatal1ty/mashumaro) from 3.0.3 to 3.0.4.
- [Release notes](https://github.com/Fatal1ty/mashumaro/releases)
- [Commits](https://github.com/Fatal1ty/mashumaro/compare/v3.0.3...v3.0.4)

---
updated-dependencies:
- dependency-name: mashumaro[msgpack]
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

* Remove newline

* Remove newline

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2022-08-25 09:00:46 -04:00
Stu Kilgore
5466fa5575 Add supported languages to materializations (#5695)
* Add supported languages to materializations

* Add changie entry

* Linting

* add more error and only get supported language for materialization macro, update schema

* fix test and add more check

Co-authored-by: Chenyu Li <chenyu.li@dbtlabs.com>
2022-08-24 12:08:16 -07:00
dependabot[bot]
f8f21ee707 Bump python from 3.10.5-slim-bullseye to 3.10.6-slim-bullseye in /docker (#5623)
* Bump python from 3.10.5-slim-bullseye to 3.10.6-slim-bullseye in /docker

Bumps python from 3.10.5-slim-bullseye to 3.10.6-slim-bullseye.

---
updated-dependencies:
- dependency-name: python
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

* Update Dependency-20220808-132327.yaml

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: Chenyu Li <chenyu.li@dbtlabs.com>
2022-08-24 11:20:00 -07:00
Chenyu Li
436737dde5 more complex visit_Call to parse chained command (#5677)
* more complex visit_Call

* add changelog

* traversing all of the tree
2022-08-24 08:02:19 -07:00
Jeremy Cohen
7f8d9a7af9 Check dbt-core version requirements when installing Hub packages (#5651)
* First cut at checking version compat for hub pkgs

* Account for field rename

* Add changelog entry

* Update error message

* Fix unit test

* PR feedback

* Try fixing test

* Edit exception msg

* Expand unit test to include pkg prerelease

* Update core/dbt/deps/registry.py

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
2022-08-19 23:13:03 +02:00
Doug Beatty
d80de82316 Update "Homepage" link for dbt-tests-adapter on PyPI (#5678) 2022-08-18 06:27:30 -06:00
Mila Page
0d02446e07 Change postgres name truncation logic to be overridable. (#5656)
* Change postgres name truncation logic to be overridable. Add exception with debugging instructions.

* Add changelog.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-08-18 00:52:43 -07:00
Chenyu Li
a9e71b3907 fix multiple args for ref and source (#5635)
* fix multiple args for ref and source

* add test for support multi part ref and source
2022-08-17 14:13:32 -07:00
Nathaniel May
739fb98d0e Print more information on log line interop test failures (#5659)
Print more information on log line interop test failures
2022-08-16 21:30:10 +01:00
Elize Papineau
348769fa80 Fix/incremental column precision changes (#5395)
* Only consider schema change when column cannot be expanded

* Add test for column shortening

* Add changelog entry

* Move test from integration to adapter tests

* Remove print statement

* add on_schema_change
2022-08-12 14:54:29 -07:00
ilanbenb
7efb6ab62d Incremental model: show reason for on_schema_change failures (#5505)
* show reason for schema change failures

When the incremental model fails, I do not get the context I need to easily fix my discrepancy.
Adding more info

* Update on_schema_change.sql

Fix indentation

* Added changie changes

Added changie changes

* Update on_schema_change.sql

Trim whitespaces

* Update on_schema_change.sql

Log message text enhancement
2022-08-12 10:07:30 -07:00
Vyacheslav
a3b018fd3b Extended validations for the project names (#5620) 2022-08-10 16:45:42 -07:00
varun-dc
4d6208be64 Use sys.exit instead of exit (#5627)
* Use sys.exit instead of exit

* Add changelog
2022-08-09 10:22:36 -07:00
Gerda Shank
3aab9befcf CT-729 Include schema model config in unrendered config (#5344)
* Pass patch_config_dict to build_config_dict when creating unrendered_config

* Add test case for unrendered_config

* Changie

* formatting, fix test

* Fix test so unrendered config includes docs config
2022-08-09 12:49:29 -04:00
Léon Stefani
e5ac9df069 Fix postgres handling for unlimited varchars (#5292)
* Fix postgres handling for unlimited varchars

* fix: correctly name varchar

* chore: added changelog entry

* Update .changes/unreleased/Fixes-20220523-103843.yaml

Co-authored-by: Emily Rockman <ebuschang@gmail.com>

Co-authored-by: Emily Rockman <ebuschang@gmail.com>
2022-08-09 09:59:58 -05:00
Emily Rockman
34960d8d61 link changelog to dbt-docs for Docs kind (#5628)
* first pass

* tweaks

* convert to use dbt-docs links in contributors section

* fix eq check

* fix format of contributos prs

* update docs changelog to point back to dbt-docs

* update beta 1.3 docs changelog

* remove optional param

* make issue inclusion conditional on being filled
2022-08-09 09:19:40 -05:00
Callum McCann
94a7cfa58d Adding Metric Helper Functions (#5607)
* Adding helper functions

* adding bad test

* use ResolvedMetricReference

* adding pytest

* adding changie updates

* adding pre-commit changes

Co-authored-by: Chenyu Li <chenyu.li@dbtlabs.com>
2022-08-08 08:37:09 -05:00
Gerda Shank
eb72dbf32a Do not render metrics description field when doing render_data (#5603) 2022-08-05 09:19:46 -04:00
Emily Rockman
9eb411f7b7 bring in new index and add changelog (#5614)
* bring in new index and add changelog

* Update Docs-20220804-134138.yaml
2022-08-04 14:07:18 -05:00
Benoit Perigaud
32415e3659 Add docs as a real node config and support node_color for coloring the DAG (#5397)
* add Optional node_color config in Docs dataclass

* Remove node_color from the original docs config

* Add docs config and input validation

* Handle when docs is both under docs and config.docs

* Add node_color to Docs

* Make docs a Dict to avoid parsing errors

* Make docs a dataclass instead of a Dict

* Fix error when using docs as dataclass

* Simplify generator for the default value

* skeleton for test fixtures

* bump manifest to v7

* + config hierarchy tests

* add show override tests

* update manifest

* Remove node_color from the original docs config

* Add node_color to Docs

* Make docs a Dict to avoid parsing errors

* Make docs a dataclass instead of a Dict

* Simplify generator for the default value

* + config hierarchy tests

* add show override tests

* Fix unit tests

* Add tests in case of incorrect input for node_color

* Rename tests and Fix typos

* Fix functional tests

* Fix issues with remote branch

* Add changie entry

* modify tests to meet standards (#5608)

Co-authored-by: Matt Winkler <matt.winkler@fishtownanalytics.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2022-08-03 14:31:30 -05:00
Gerda Shank
7886924c07 Comment out line to generate new manifest in test_previous_version_state.py (#5604) 2022-08-03 14:18:48 -04:00
Emily Rockman
40b55ed65a convert to reusable action (#5565)
* convert to reusable action

* fix branch name

* reimplemented changelog

* update to use workflow

* fix typo

* move def

* inherit secrets

* send in comment/label

* specify GITHUB_TOKEN

* Add automated changelog yaml from template for bot PR

* Delete Dependency-20220801-193810.yaml

* Add automated changelog yaml from template for bot PR

* remove dummy changelog

* remove token

* rename file

* point to main

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-08-03 11:33:11 -05:00
Emily Rockman
4f5b9e686c Store when default env vars are used in manifest (#5589)
* WIP

* handle default env vars

* fix typo

* add changelog

* small fixes

* add constants.py file
2022-08-02 20:53:11 -05:00
Emily Rockman
95284aff68 rename .github readme (#5597) 2022-08-01 19:22:26 -05:00
kadero
063ff9c254 Add defer to docs and compile commands (#4514)
* Add defer to docs and compile commands
Add docs generate defer test
Fix CLA check
Add compile defer test

* Update changelog

* Restore changelog

* Add changie entry

* Move defer_to_manifest to CompileTask

* Add check to to verify defer works as expected

* Add assert
2022-08-01 14:38:08 -07:00
Vyacheslav
26b33e668d use MethodName.File when value ends with .csv (#5581)
* [CT-953] [Feature] Support for .csv files when using file selectors (for seed)

* Added Changelog entry for issue-5578

* Added Unit tests
2022-08-01 14:32:58 -07:00
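The selector change above (and the companion .py change further down) routes values ending in known file extensions to the file selection method. A minimal sketch of that dispatch; the method names are illustrative, not dbt's actual selection-criteria code:

```python
def infer_selector_method(value: str) -> str:
    # Route obvious file paths (.sql / .py / .csv) to the file method,
    # slashed values to the path method, and everything else to fqn.
    if value.lower().endswith((".sql", ".py", ".csv")):
        return "file"
    if "/" in value or "\\" in value:
        return "path"
    return "fqn"


print(infer_selector_method("country_codes.csv"))  # file
```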
Jeremy Cohen
26ac9d57d0 Convert git deps to local, faster tests (#5515) 2022-07-30 14:10:57 +02:00
github-actions[bot]
7bd861a351 Bumping version to 1.3.0b1 and generate changelog (#5582)
* Bumping version to 1.3.0b1 and generate CHANGELOG

* Update Docker versions for adapters

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2022-07-29 11:59:12 -04:00
Chenyu Li
15c97f009a fix passing in Rlock error in rpc (#5580) 2022-07-29 07:46:08 -07:00
Leo Folsom
5153023100 use MethodName.File when value ends with .py (#5295) 2022-07-29 07:33:53 -07:00
Emily Rockman
c879083bc9 add info on basics of using actions (#5576) 2022-07-29 08:20:00 -05:00
Chenyu Li
05bf27c958 add changelog from dbt-docs (#5577) 2022-07-28 15:49:35 -07:00
Chenyu Li
a7ff003d4f python model beta feature (#5421)
* Python model beta version with update to manifest that renames `raw_sql` and `compiled_sql` to `raw_code` and `compiled_code`
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Ian Knox <ian.knox@dbtlabs.com>
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
2022-07-28 11:42:59 -07:00
leahwicz
2547e4f55e Add changelog and whitespace fix to version bump Action (#5563)
* Add changelog and whitespace fix to version bump Action

* Fixing whitespace

* Remove tabs

* Update .github/workflows/version-bump.yml

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>

* Update .github/workflows/version-bump.yml

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>

* Update .github/workflows/version-bump.yml

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>

* Update .github/workflows/version-bump.yml

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>

* Updating per comments

* Fix whitespace

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2022-07-28 14:08:03 -04:00
Gerda Shank
b43fc76701 Fix handling of top-level exceptions (#5560) 2022-07-27 23:52:17 -04:00
Jeremy Cohen
48464a22a4 Proposed updates to issue "wayfinder" (#5426)
* Propose updates to issue wayfinder

* Updating bug template with clearer description

* Updated link to new troubleshooting spot

* Updating Cloud messaging

* Adding required fields and dbt versions

* Fixed whitespace

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2022-07-27 22:25:34 -04:00
Vyacheslav
c3891d78e4 [CT-700] [Bug] Logging tons of asterisks for sensitive env vars (#5518)
* [CT-700] [Bug] Logging tons of asterisks when sensitive env vars are missing

* [CT-700][Bug] Added changelog entry

* Updated the changelog body message
2022-07-27 11:40:32 -05:00
Nicholas A. Yager
69ce6779e1 Improve CompilationException messaging for generic test Jinja rendering (#5393)
* feat: Improve generic test UndefinedMacroException message

The error message rendered from the `UndefinedMacroException` when
raised by a TestBuilder is very vague as to where the problem is
and how to resolve it. This commit adds a basic amount of
information about the specific model and column that is
referencing an undefined macro.

Note: All custom macros referenced in a generic test config will
raise an UndefinedMacroException as of v0.20.0.

* feat: Bubble CompilationException into schemas.py

I realized that this exception information would be better if
CompilationExceptions included the file that raised the exception.
To that end, I created a new exception handler in `_parse_generic_test`
to report on CompilationExceptions raised during the parsing of
generic tests. Along the way I reformatted the message returned
from TestBuilder to play nicely with the existing formatting of
`_parse_generic_test`'s exception handling code.

* feat: Add tests to confirm CompileException

I've added a basic test to confirm that the appropriate
CompilationException is raised when a custom macro is referenced
in a generic test config.

* feat: Add changie entry and tweak error msg

* Update .changes/unreleased/Under the Hood-20220617-150744.yaml

Thanks to @emmyoop for the recommendation that this be listed as a Fix change instead of an "Under the Hood" change!

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>

* fix: Simplified Compilation Error message

I've simplified the error message raised during a Compilation Error
sourced from a test config. Mainly by way of removing tabs and newlines
where not required.

* fix: Convert format to fstring in schemas

This commit moves a format call to a multiline fstring in the
schemas.py file for CompilationExceptions.

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2022-07-27 09:41:22 -05:00
Emily Rockman
a206cfce65 add readme to .github (#5487)
* add readme to .github

* more changes to readme

* improve docs

* more readme tweaks

* add more docs

* incorporate feedback

* removed section with no info
2022-07-25 16:16:36 -05:00
Emily Rockman
3f54f30349 Remove markupsafe pin (#5507)
* update markupsafe pin

* add changelog

* completely remove markupsafe pin
2022-07-25 09:47:25 -05:00
Matthew McKnight
1071a4681d reformat changie.yaml to ignore dependabot (#5508)
* reformat changie.yaml to ignore dependabot

* Removed modified file from PR

* Removed modified file from PR
2022-07-22 09:45:37 -05:00
Gerda Shank
2548ba9936 Refactoring of incremental materialization (#5359) 2022-07-21 14:11:23 -04:00
Nathaniel May
999ed0b74c postgres: add exponential backoff to connection retries (#5503)
add exponential backoff to postgres connection retries
2022-07-21 13:10:52 -04:00
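The commit above retries Postgres connections with exponentially growing delays. A minimal sketch of that retry pattern, assuming a generic connect callable; this is not the adapter's actual signature or retry policy:

```python
import time


def connect_with_backoff(connect, retries=3, base_delay=1.0):
    # Retry the connect callable with exponentially growing sleeps: 1s, 2s, 4s, ...
    attempt = 0
    while True:
        try:
            return connect()
        except Exception:
            if attempt >= retries:
                raise
            time.sleep(base_delay * (2 ** attempt))
            attempt += 1
```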
Emily Rockman
eef7bca005 snyk automation plus bot changelog consolidation (#5479)
* first pass at snyk changelog entry

* refactor for single workflow for all bot PRs

* exclude snyk from contributors list

* point action to branch temporarily

* replace quotes

* point to released tag
2022-07-20 17:48:35 -05:00
Gerda Shank
5686cab5a0 Bump mashumaro from 2.9 to 3.0.3 in /core (#5118)
* add pyyaml as a setup requirement
2022-07-20 17:15:51 -04:00
dependabot[bot]
99bc292588 Bump mypy from 0.961 to 0.971 (#5495)
* Bump mypy from 0.961 to 0.971

Bumps [mypy](https://github.com/python/mypy) from 0.961 to 0.971.
- [Release notes](https://github.com/python/mypy/releases)
- [Commits](https://github.com/python/mypy/compare/v0.961...v0.971)

---
updated-dependencies:
- dependency-name: mypy
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-07-19 21:37:49 -04:00
Emily Rockman
a1ee348a6f jinja2 v3 upgrade (#5465)
* update version

* ignore mypy for now

* add changelog
2022-07-19 16:16:31 -05:00
699 changed files with 54998 additions and 12332 deletions

View File

@@ -1,5 +1,5 @@
[bumpversion] [bumpversion]
current_version = 1.3.0a1 current_version = 1.4.0a1
parse = (?P<major>\d+) parse = (?P<major>\d+)
\.(?P<minor>\d+) \.(?P<minor>\d+)
\.(?P<patch>\d+) \.(?P<patch>\d+)

View File

@@ -3,6 +3,7 @@
For information on prior major and minor releases, see their changelogs: For information on prior major and minor releases, see their changelogs:
* [1.3](https://github.com/dbt-labs/dbt-core/blob/1.3.latest/CHANGELOG.md)
* [1.2](https://github.com/dbt-labs/dbt-core/blob/1.2.latest/CHANGELOG.md) * [1.2](https://github.com/dbt-labs/dbt-core/blob/1.2.latest/CHANGELOG.md)
* [1.1](https://github.com/dbt-labs/dbt-core/blob/1.1.latest/CHANGELOG.md) * [1.1](https://github.com/dbt-labs/dbt-core/blob/1.1.latest/CHANGELOG.md)
* [1.0](https://github.com/dbt-labs/dbt-core/blob/1.0.latest/CHANGELOG.md) * [1.0](https://github.com/dbt-labs/dbt-core/blob/1.0.latest/CHANGELOG.md)

View File

@@ -0,0 +1,7 @@
kind: "Dependency"
body: "Update pathspec requirement from ~=0.9.0 to >=0.9,<0.11 in /core"
time: 2022-09-23T00:06:46.00000Z
custom:
Author: dependabot[bot]
Issue: 4904
PR: 5917

View File

@@ -0,0 +1,7 @@
kind: "Dependency"
body: "Bump black from 22.8.0 to 22.10.0"
time: 2022-10-07T00:08:48.00000Z
custom:
Author: dependabot[bot]
Issue: 4904
PR: 6019

View File

@@ -0,0 +1,7 @@
kind: "Dependency"
body: "Bump mashumaro[msgpack] from 3.0.4 to 3.1.1 in /core"
time: 2022-10-20T00:07:53.00000Z
custom:
Author: dependabot[bot]
Issue: 4904
PR: 6108

View File

@@ -0,0 +1,7 @@
kind: "Dependency"
body: "Update colorama requirement from <0.4.6,>=0.3.9 to >=0.3.9,<0.4.7 in /core"
time: 2022-10-26T00:09:10.00000Z
custom:
Author: dependabot[bot]
Issue: 4904
PR: 6144

View File

@@ -0,0 +1,7 @@
kind: Docs
body: minor doc correction
time: 2022-09-08T15:41:57.689162-04:00
custom:
Author: andy-clapson
Issue: "5791"
PR: "5684"

View File

@@ -0,0 +1,7 @@
kind: Docs
body: Generate API docs for new CLI interface
time: 2022-10-07T09:06:56.446078-05:00
custom:
Author: stu-k
Issue: "5528"
PR: "6022"

View File

@@ -0,0 +1,6 @@
kind: Docs
time: 2022-10-17T17:14:11.715348-05:00
custom:
Author: paulbenschmidt
Issue: "5880"
PR: "324"

View File

@@ -0,0 +1,7 @@
kind: Docs
body: Fix rendering of sample code for metrics
time: 2022-11-16T15:57:43.204201+01:00
custom:
Author: jtcohen6
Issue: "323"
PR: "346"

View File

@@ -0,0 +1,8 @@
kind: Features
body: Added favor-state flag to optionally favor state nodes even if unselected node
exists
time: 2022-04-08T16:54:59.696564+01:00
custom:
Author: daniel-murray josephberni
Issue: "2968"
PR: "5859"

View File

@@ -1,8 +0,0 @@
kind: Features
body: Add reusable function for retrying adapter connections. Utilize said function
to add retries for Postgres (and Redshift).
time: 2022-07-15T03:55:55.270637265+02:00
custom:
Author: tomasfarias
Issue: "5022"
PR: "5432"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Proto logging messages
time: 2022-08-17T15:48:57.225267-04:00
custom:
Author: gshank
Issue: "5610"
PR: "5643"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Friendlier error messages when packages.yml is malformed
time: 2022-09-12T12:59:35.121188+01:00
custom:
Author: jared-rimmer
Issue: "5486"
PR: "5812"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Migrate dbt-utils current_timestamp macros into core + adapters
time: 2022-09-14T09:56:25.97818-07:00
custom:
Author: colin-rogers-dbt
Issue: "5521"
PR: "5838"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Allow partitions in external tables to be supplied as a list
time: 2022-09-25T21:16:51.051239654+02:00
custom:
Author: pgoslatara
Issue: "5929"
PR: "5930"

View File

@@ -0,0 +1,7 @@
kind: Features
body: extend -f flag shorthand for seed command
time: 2022-10-03T11:07:05.381632-05:00
custom:
Author: dave-connors-3
Issue: "5990"
PR: "5991"

View File

@@ -0,0 +1,8 @@
kind: Features
body: This pulls the profile name from args when constructing a RuntimeConfig in lib.py,
enabling the dbt-server to override the value that's in the dbt_project.yml
time: 2022-11-02T15:00:03.000805-05:00
custom:
Author: racheldaniel
Issue: "6201"
PR: "6202"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Added an md5 function to the base context
time: 2022-11-14T18:52:07.788593+02:00
custom:
Author: haritamar
Issue: "6246"
PR: "6247"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Exposures support metrics in lineage
time: 2022-11-30T11:29:13.256034-05:00
custom:
Author: michelleark
Issue: "6057"
PR: "6342"

View File

@@ -1,7 +0,0 @@
kind: Fixes
body: Rename try to strict for more intuitiveness
time: 2022-07-15T23:11:48.327928+12:00
custom:
Author: jeremyyeo
Issue: "5475"
PR: "5477"

View File

@@ -0,0 +1,7 @@
kind: Fixes
body: Account for disabled flags on models in schema files more completely
time: 2022-09-16T10:48:54.162273-05:00
custom:
Author: emmyoop
Issue: "3992"
PR: "5868"

View File

@@ -0,0 +1,7 @@
kind: Fixes
body: Add validation of enabled config for metrics, exposures and sources
time: 2022-10-10T11:32:18.752322-05:00
custom:
Author: emmyoop
Issue: "6030"
PR: "6038"

View File

@@ -0,0 +1,7 @@
kind: Fixes
body: check length of args of python model function before accessing it
time: 2022-10-11T16:07:15.464093-04:00
custom:
Author: chamini2
Issue: "6041"
PR: "6042"

View File

@@ -0,0 +1,8 @@
kind: Fixes
body: Add functors to ensure event types with str-type attributes are initialized
to spec, even when provided non-str type params.
time: 2022-10-16T17:37:42.846683-07:00
custom:
Author: versusfacit
Issue: "5436"
PR: "5874"

View File

@@ -0,0 +1,7 @@
kind: Fixes
body: Allow hooks to fail without halting execution flow
time: 2022-11-07T09:53:14.340257-06:00
custom:
Author: ChenyuLInx
Issue: "5625"
PR: "6059"

View File

@@ -0,0 +1,7 @@
kind: Fixes
body: Clarify Error Message for how many models are allowed in a Python file
time: 2022-11-15T08:10:21.527884-05:00
custom:
Author: justbldwn
Issue: "6245"
PR: "6251"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Put black config in explicit config
time: 2022-09-27T19:42:59.241433-07:00
custom:
Author: max-sixty
Issue: "5946"
PR: "5947"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Added flat_graph attribute the Manifest class's deepcopy() coverage
time: 2022-09-29T13:44:06.275941-04:00
custom:
Author: peterallenwebb
Issue: "5809"
PR: "5975"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Add mypy configs so `mypy` passes from CLI
time: 2022-10-05T12:03:10.061263-07:00
custom:
Author: max-sixty
Issue: "5983"
PR: "5983"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Exception message cleanup.
time: 2022-10-07T09:46:27.682872-05:00
custom:
Author: emmyoop
Issue: "6023"
PR: "6024"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Add dmypy cache to gitignore
time: 2022-10-07T14:00:44.227644-07:00
custom:
Author: max-sixty
Issue: "6028"
PR: "5978"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Provide useful errors when the value of 'materialized' is invalid
time: 2022-10-13T18:19:12.167548-04:00
custom:
Author: peterallenwebb
Issue: "5229"
PR: "6025"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Fixed extra whitespace in strings introduced by black.
time: 2022-10-17T15:15:11.499246-05:00
custom:
Author: luke-bassett
Issue: "1350"
PR: "6086"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Clean up string formatting
time: 2022-10-17T15:58:44.676549-04:00
custom:
Author: eve-johns
Issue: "6068"
PR: "6082"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Remove the 'root_path' field from most nodes
time: 2022-10-28T10:48:37.687886-04:00
custom:
Author: gshank
Issue: "6171"
PR: "6172"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Combine certain logging events with different levels
time: 2022-10-28T11:03:44.887836-04:00
custom:
Author: gshank
Issue: "6173"
PR: "6174"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Convert threading tests to pytest
time: 2022-11-08T07:45:50.589147-06:00
custom:
Author: stu-k
Issue: "5942"
PR: "6226"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Convert postgres index tests to pytest
time: 2022-11-08T11:56:33.743042-06:00
custom:
Author: stu-k
Issue: "5770"
PR: "6228"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Convert use color tests to pytest
time: 2022-11-08T13:31:04.788547-06:00
custom:
Author: stu-k
Issue: "5771"
PR: "6230"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Add github actions workflow to generate high level CLI API docs
time: 2022-11-16T13:00:37.916202-06:00
custom:
Author: stu-k
Issue: "5942"
PR: "6187"

33
.changie.yaml Executable file → Normal file
View File

@@ -7,14 +7,26 @@ versionExt: md
versionFormat: '## dbt-core {{.Version}} - {{.Time.Format "January 02, 2006"}}' versionFormat: '## dbt-core {{.Version}} - {{.Time.Format "January 02, 2006"}}'
kindFormat: '### {{.Kind}}' kindFormat: '### {{.Kind}}'
changeFormat: '- {{.Body}} ([#{{.Custom.Issue}}](https://github.com/dbt-labs/dbt-core/issues/{{.Custom.Issue}}), [#{{.Custom.PR}}](https://github.com/dbt-labs/dbt-core/pull/{{.Custom.PR}}))' changeFormat: '- {{.Body}} ([#{{.Custom.Issue}}](https://github.com/dbt-labs/dbt-core/issues/{{.Custom.Issue}}), [#{{.Custom.PR}}](https://github.com/dbt-labs/dbt-core/pull/{{.Custom.PR}}))'
kinds: kinds:
- label: Breaking Changes - label: Breaking Changes
- label: Features - label: Features
- label: Fixes - label: Fixes
- label: Docs - label: Docs
changeFormat: '- {{.Body}} ([dbt-docs/#{{.Custom.Issue}}](https://github.com/dbt-labs/dbt-docs/issues/{{.Custom.Issue}}), [dbt-docs/#{{.Custom.PR}}](https://github.com/dbt-labs/dbt-docs/pull/{{.Custom.PR}}))'
- label: Under the Hood - label: Under the Hood
- label: Dependencies - label: Dependencies
changeFormat: '- {{.Body}} ({{if ne .Custom.Issue ""}}[#{{.Custom.Issue}}](https://github.com/dbt-labs/dbt-core/issues/{{.Custom.Issue}}), {{end}}[#{{.Custom.PR}}](https://github.com/dbt-labs/dbt-core/pull/{{.Custom.PR}}))'
- label: Security - label: Security
changeFormat: '- {{.Body}} ({{if ne .Custom.Issue ""}}[#{{.Custom.Issue}}](https://github.com/dbt-labs/dbt-core/issues/{{.Custom.Issue}}), {{end}}[#{{.Custom.PR}}](https://github.com/dbt-labs/dbt-core/pull/{{.Custom.PR}}))'
newlines:
afterChangelogHeader: 1
afterKind: 1
afterChangelogVersion: 1
beforeKind: 1
endOfVersion: 1
custom: custom:
- key: Author - key: Author
label: GitHub Username(s) (separated by a single space if multiple) label: GitHub Username(s) (separated by a single space if multiple)
@@ -23,15 +35,16 @@ custom:
- key: Issue - key: Issue
label: GitHub Issue Number label: GitHub Issue Number
type: int type: int
minLength: 4 minInt: 1
- key: PR - key: PR
label: GitHub Pull Request Number label: GitHub Pull Request Number
type: int type: int
minLength: 4 minInt: 1
footerFormat: | footerFormat: |
{{- $contributorDict := dict }} {{- $contributorDict := dict }}
{{- /* any names added to this list should be all lowercase for later matching purposes */}} {{- /* any names added to this list should be all lowercase for later matching purposes */}}
{{- $core_team := list "emmyoop" "nathaniel-may" "gshank" "leahwicz" "chenyulinx" "stu-k" "iknox-fa" "versusfacit" "mcknight-42" "jtcohen6" "dependabot" }} {{- $core_team := list "michelleark" "peterallenwebb" "emmyoop" "nathaniel-may" "gshank" "leahwicz" "chenyulinx" "stu-k" "iknox-fa" "versusfacit" "mcknight-42" "jtcohen6" "dependabot[bot]" "snyk-bot" "colin-rogers-dbt" }}
{{- range $change := .Changes }} {{- range $change := .Changes }}
{{- $authorList := splitList " " $change.Custom.Author }} {{- $authorList := splitList " " $change.Custom.Author }}
{{- /* loop through all authors for a PR */}} {{- /* loop through all authors for a PR */}}
@@ -39,14 +52,20 @@ footerFormat: |
{{- $authorLower := lower $author }} {{- $authorLower := lower $author }}
{{- /* we only want to include non-core team contributors */}} {{- /* we only want to include non-core team contributors */}}
{{- if not (has $authorLower $core_team)}} {{- if not (has $authorLower $core_team)}}
{{- $pr := $change.Custom.PR }} {{- /* Docs kind link back to dbt-docs instead of dbt-core PRs */}}
{{- $prLink := $change.Kind }}
{{- if eq $change.Kind "Docs" }}
{{- $prLink = "[dbt-docs/#pr](https://github.com/dbt-labs/dbt-docs/pull/pr)" | replace "pr" $change.Custom.PR }}
{{- else }}
{{- $prLink = "[#pr](https://github.com/dbt-labs/dbt-core/pull/pr)" | replace "pr" $change.Custom.PR }}
{{- end }}
{{- /* check if this contributor has other PRs associated with them already */}} {{- /* check if this contributor has other PRs associated with them already */}}
{{- if hasKey $contributorDict $author }} {{- if hasKey $contributorDict $author }}
{{- $prList := get $contributorDict $author }} {{- $prList := get $contributorDict $author }}
{{- $prList = append $prList $pr }} {{- $prList = append $prList $prLink }}
{{- $contributorDict := set $contributorDict $author $prList }} {{- $contributorDict := set $contributorDict $author $prList }}
{{- else }} {{- else }}
{{- $prList := list $change.Custom.PR }} {{- $prList := list $prLink }}
{{- $contributorDict := set $contributorDict $author $prList }} {{- $contributorDict := set $contributorDict $author $prList }}
{{- end }} {{- end }}
{{- end}} {{- end}}
@@ -56,6 +75,6 @@ footerFormat: |
{{- if $contributorDict}} {{- if $contributorDict}}
### Contributors ### Contributors
{{- range $k,$v := $contributorDict }} {{- range $k,$v := $contributorDict }}
- [@{{$k}}](https://github.com/{{$k}}) ({{ range $index, $element := $v }}{{if $index}}, {{end}}[#{{$element}}](https://github.com/dbt-labs/dbt-core/pull/{{$element}}){{end}}) - [@{{$k}}](https://github.com/{{$k}}) ({{ range $index, $element := $v }}{{if $index}}, {{end}}{{$element}}{{end}})
{{- end }} {{- end }}
{{- end }} {{- end }}

42
.github/CODEOWNERS vendored
View File

@@ -16,25 +16,57 @@
# Changes to GitHub configurations including Actions # Changes to GitHub configurations including Actions
/.github/ @leahwicz /.github/ @leahwicz
### LANGUAGE
# Language core modules # Language core modules
/core/dbt/config/ @dbt-labs/core-language /core/dbt/config/ @dbt-labs/core-language
/core/dbt/context/ @dbt-labs/core-language /core/dbt/context/ @dbt-labs/core-language
/core/dbt/contracts/ @dbt-labs/core-language /core/dbt/contracts/ @dbt-labs/core-language
/core/dbt/deps/ @dbt-labs/core-language /core/dbt/deps/ @dbt-labs/core-language
/core/dbt/events/ @dbt-labs/core-language # structured logging
/core/dbt/parser/ @dbt-labs/core-language /core/dbt/parser/ @dbt-labs/core-language
# Language misc files
/core/dbt/dataclass_schema.py @dbt-labs/core-language
/core/dbt/hooks.py @dbt-labs/core-language
/core/dbt/node_types.py @dbt-labs/core-language
/core/dbt/semver.py @dbt-labs/core-language
### EXECUTION
# Execution core modules # Execution core modules
/core/dbt/events/ @dbt-labs/core-execution @dbt-labs/core-language # eventually remove language but they have knowledge here now
/core/dbt/graph/ @dbt-labs/core-execution /core/dbt/graph/ @dbt-labs/core-execution
/core/dbt/task/ @dbt-labs/core-execution /core/dbt/task/ @dbt-labs/core-execution
# Adapter interface, scaffold, Postgres plugin # Execution misc files
/core/dbt/compilation.py @dbt-labs/core-execution
/core/dbt/flags.py @dbt-labs/core-execution
/core/dbt/lib.py @dbt-labs/core-execution
/core/dbt/main.py @dbt-labs/core-execution
/core/dbt/profiler.py @dbt-labs/core-execution
/core/dbt/selected_resources.py @dbt-labs/core-execution
/core/dbt/tracking.py @dbt-labs/core-execution
/core/dbt/version.py @dbt-labs/core-execution
### ADAPTERS
# Adapter interface ("base" + "sql" adapter defaults, cache)
/core/dbt/adapters @dbt-labs/core-adapters /core/dbt/adapters @dbt-labs/core-adapters
/core/scripts/create_adapter_plugin.py @dbt-labs/core-adapters
# Global project (default macros + materializations), starter project
/core/dbt/include @dbt-labs/core-adapters
# Postgres plugin
/plugins/ @dbt-labs/core-adapters /plugins/ @dbt-labs/core-adapters
# Global project: default macros, including generic tests + materializations # Functional tests for adapter plugins
/core/dbt/include/global_project @dbt-labs/core-execution @dbt-labs/core-adapters /tests/adapter @dbt-labs/core-adapters
### TESTS
# Overlapping ownership for vast majority of unit + functional tests
# Perf regression testing framework # Perf regression testing framework
# This excludes the test project files itself since those aren't specific # This excludes the test project files itself since those aren't specific

View File

@@ -9,23 +9,33 @@ body:
Thanks for taking the time to fill out this bug report! Thanks for taking the time to fill out this bug report!
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Is there an existing issue for this? label: Is this a new bug in dbt-core?
description: Please search to see if an issue already exists for the bug you encountered. description: >
In other words, is this an error, flaw, failure or fault in our software?
If this is a bug that broke existing functionality that used to work, please open a regression issue.
If this is a bug in an adapter plugin, please open an issue in the adapter's repository.
If this is a bug experienced while using dbt Cloud, please report to [support](mailto:support@getdbt.com).
If this is a request for help or troubleshooting code in your own dbt project, please join our [dbt Community Slack](https://www.getdbt.com/community/join-the-community/) or open a [Discussion question](https://github.com/dbt-labs/docs.getdbt.com/discussions).
Please search to see if an issue already exists for the bug you encountered.
options: options:
- label: I have searched the existing issues - label: I believe this is a new bug in dbt-core
required: true
- label: I have searched the existing issues, and I could not find an existing issue for this bug
required: true required: true
- type: textarea - type: textarea
attributes: attributes:
label: Current Behavior label: Current Behavior
description: A concise description of what you're experiencing. description: A concise description of what you're experiencing.
validations: validations:
required: false required: true
- type: textarea - type: textarea
attributes: attributes:
label: Expected Behavior label: Expected Behavior
description: A concise description of what you expected to happen. description: A concise description of what you expected to happen.
validations: validations:
required: false required: true
- type: textarea - type: textarea
attributes: attributes:
label: Steps To Reproduce label: Steps To Reproduce
@@ -36,7 +46,7 @@ body:
3. Run '...' 3. Run '...'
4. See error... 4. See error...
validations: validations:
required: false required: true
- type: textarea - type: textarea
id: logs id: logs
attributes: attributes:
@@ -52,8 +62,8 @@ body:
description: | description: |
examples: examples:
- **OS**: Ubuntu 20.04 - **OS**: Ubuntu 20.04
- **Python**: 3.7.2 (`python --version`) - **Python**: 3.9.12 (`python3 --version`)
- **dbt**: 0.21.0 (`dbt --version`) - **dbt-core**: 1.1.1 (`dbt --version`)
value: | value: |
- OS: - OS:
- Python: - Python:
@@ -64,13 +74,15 @@ body:
- type: dropdown - type: dropdown
id: database id: database
attributes: attributes:
label: What database are you using dbt with? label: Which database adapter are you using with dbt?
description: If the bug is specific to the database or adapter, please open the issue in that adapter's repository instead
multiple: true multiple: true
options: options:
- postgres - postgres
- redshift - redshift
- snowflake - snowflake
- bigquery - bigquery
- spark
- other (mention it in "Additional Context") - other (mention it in "Additional Context")
validations: validations:
required: false required: false

View File

@@ -1,4 +1,14 @@
blank_issues_enabled: false
contact_links: contact_links:
- name: Ask the community for help
url: https://github.com/dbt-labs/docs.getdbt.com/discussions
about: Need help troubleshooting? Check out our guide on how to ask
- name: Contact dbt Cloud support
url: mailto:support@getdbt.com
about: Are you using dbt Cloud? Contact our support team for help!
- name: Participate in Discussions
url: https://github.com/dbt-labs/dbt-core/discussions
about: Do you have a Big Idea for dbt? Read open discussions, or start a new one
- name: Create an issue for dbt-redshift - name: Create an issue for dbt-redshift
url: https://github.com/dbt-labs/dbt-redshift/issues/new/choose url: https://github.com/dbt-labs/dbt-redshift/issues/new/choose
about: Report a bug or request a feature for dbt-redshift about: Report a bug or request a feature for dbt-redshift
@@ -8,9 +18,6 @@ contact_links:
- name: Create an issue for dbt-snowflake - name: Create an issue for dbt-snowflake
url: https://github.com/dbt-labs/dbt-snowflake/issues/new/choose url: https://github.com/dbt-labs/dbt-snowflake/issues/new/choose
about: Report a bug or request a feature for dbt-snowflake about: Report a bug or request a feature for dbt-snowflake
- name: Ask a question or get support - name: Create an issue for dbt-spark
url: https://docs.getdbt.com/docs/guides/getting-help url: https://github.com/dbt-labs/dbt-spark/issues/new/choose
about: Ask a question or request support about: Report a bug or request a feature for dbt-spark
- name: Questions on Stack Overflow
url: https://stackoverflow.com/questions/tagged/dbt
about: Look at questions/answers at Stack Overflow

View File

@@ -1,5 +1,5 @@
name: ✨ Feature name: ✨ Feature
description: Suggest an idea for dbt description: Propose a straightforward extension of dbt functionality
title: "[Feature] <title>" title: "[Feature] <title>"
labels: ["enhancement", "triage"] labels: ["enhancement", "triage"]
body: body:
@@ -9,18 +9,24 @@ body:
Thanks for taking the time to fill out this feature request! Thanks for taking the time to fill out this feature request!
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Is there an existing feature request for this? label: Is this your first time submitting a feature request?
description: Please search to see if an issue already exists for the feature you would like. description: >
options: We want to make sure that features are distinct and discoverable,
- label: I have searched the existing issues so that other members of the community can find them and offer their thoughts.
required: true
label: Is this your first time opening an issue? Issues are the right place to request straightforward extensions of existing dbt functionality.
For "big ideas" about future capabilities of dbt, we ask that you open a
[discussion](https://github.com/dbt-labs/dbt-core/discussions) in the "Ideas" category instead.
options: options:
- label: I have read the [expectations for open source contributors](https://docs.getdbt.com/docs/contributing/oss-expectations) - label: I have read the [expectations for open source contributors](https://docs.getdbt.com/docs/contributing/oss-expectations)
required: true required: true
- label: I have searched the existing issues, and I could not find an existing issue for this feature
required: true
- label: I am requesting a straightforward extension of existing dbt functionality, rather than a Big Idea better suited to a discussion
required: true
- type: textarea - type: textarea
attributes: attributes:
label: Describe the Feature label: Describe the feature
description: A clear and concise description of what you want to happen. description: A clear and concise description of what you want to happen.
validations: validations:
required: true required: true

View File

@@ -0,0 +1,93 @@
name: ☣️ Regression
description: Report a regression you've observed in a newer version of dbt
title: "[Regression] <title>"
labels: ["bug", "regression", "triage"]
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this regression report!
- type: checkboxes
attributes:
label: Is this a regression in a recent version of dbt-core?
description: >
A regression is when documented functionality works as expected in an older version of dbt-core,
and no longer works after upgrading to a newer version of dbt-core
options:
- label: I believe this is a regression in dbt-core functionality
required: true
- label: I have searched the existing issues, and I could not find an existing issue for this regression
required: true
- type: textarea
attributes:
label: Current Behavior
description: A concise description of what you're experiencing.
validations:
required: true
- type: textarea
attributes:
label: Expected/Previous Behavior
description: A concise description of what you expected to happen.
validations:
required: true
- type: textarea
attributes:
label: Steps To Reproduce
description: Steps to reproduce the behavior.
placeholder: |
1. In this environment...
2. With this config...
3. Run '...'
4. See error...
validations:
required: true
- type: textarea
id: logs
attributes:
label: Relevant log output
description: |
If applicable, log output to help explain your problem.
render: shell
validations:
required: false
- type: textarea
attributes:
label: Environment
description: |
examples:
- **OS**: Ubuntu 20.04
- **Python**: 3.9.12 (`python3 --version`)
- **dbt-core (working version)**: 1.1.1 (`dbt --version`)
- **dbt-core (regression version)**: 1.2.0 (`dbt --version`)
value: |
- OS:
- Python:
- dbt (working version):
- dbt (regression version):
render: markdown
validations:
required: true
- type: dropdown
id: database
attributes:
label: Which database adapter are you using with dbt?
description: If the regression is specific to the database or adapter, please open the issue in that adapter's repository instead
multiple: true
options:
- postgres
- redshift
- snowflake
- bigquery
- spark
- other (mention it in "Additional Context")
validations:
required: false
- type: textarea
attributes:
label: Additional Context
description: |
Links? References? Anything that will give us more context about the issue you are encountering!
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
validations:
required: false

216
.github/_README.md vendored Normal file
View File

@@ -0,0 +1,216 @@
<!-- GitHub will publish this readme on the main repo page if the name is `README.md` so we've added the leading underscore to prevent this -->
<!-- Do not rename this file `README.md` -->
<!-- See https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-readmes -->
## What are GitHub Actions?
GitHub Actions are used for many different purposes. We use them to run tests in CI, validate PRs are in an expected state, and automate processes.
- [Overview of GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions)
- [What's a workflow?](https://docs.github.com/en/actions/using-workflows/about-workflows)
- [GitHub Actions guides](https://docs.github.com/en/actions/guides)
___
## Where do actions and workflows live
We try to maintain actions that are shared across repositories in a single place so that necessary changes only need to be made once.
[dbt-labs/actions](https://github.com/dbt-labs/actions/) is the central repository of actions and workflows we use across repositories.
GitHub Actions also live locally within a repository. The workflows can be found at `.github/workflows` from the root of the repository. These should be specific to that code base.
Note: We are actively moving actions into the central Action repository so there is currently some duplication across repositories.
___
## Basics of Using Actions
### Viewing Output
- View the detailed action output for your PR in the **Checks** tab of the PR. This only shows the most recent run. You can also view high level **Checks** output at the bottom on the PR.
- View _all_ action output for a repository from the [**Actions**](https://github.com/dbt-labs/dbt-core/actions) tab. Workflow results last 1 year. Artifacts last 90 days, unless specified otherwise in individual workflows.
This view often shows what seem like duplicates of the same workflow. This occurs when files are renamed but the workflow name has not changed. These are in fact _not_ duplicates.
You can see the branch the workflow runs from in this view. It is listed in the table between the workflow name and the time/duration of the run. When blank, the workflow is running in the context of the `main` branch.
### How to view what workflow file is being referenced from a run
- When viewing the output of a specific workflow run, click the 3 dots at the top right of the display. There will be an option to `View workflow file`.
### How to manually run a workflow
- If a workflow has the `on: workflow_dispatch` trigger, it can be manually triggered
- From the [**Actions**](https://github.com/dbt-labs/dbt-core/actions) tab, find the workflow you want to run, select it, and fill in any required inputs. That's it! A minimal, illustrative sketch of such a workflow follows below.
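As an illustration only (the workflow name, input, and job below are hypothetical and not taken from this repository), a manually runnable workflow just needs the `workflow_dispatch` trigger, plus any inputs it wants to collect:
```yaml
# Illustrative sketch: all names here are made up for the example.
name: Example Manual Workflow

on:
  workflow_dispatch:
    inputs:
      version_number:
        description: "The version number to operate on"
        required: true

permissions: read-all

jobs:
  echo_inputs:
    runs-on: ubuntu-latest
    steps:
      - name: "[DEBUG] Print Variables"
        run: |
          echo "The version_number: ${{ github.event.inputs.version_number }}"
```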
### How to re-run jobs
- Some actions cannot be rerun in the GitHub UI. Namely the snyk checks and the cla check. Snyk checks are rerun by closing and reopening the PR. You can retrigger the cla check by commenting on the PR with `@cla-bot check`
___
## General Standards
### Permissions
- By default, workflows have read permissions in the repository for the contents scope only when no permissions are explicitly set.
- It is best practice to always define the permissions explicitly. This will allow actions to continue to work when the default permissions on the repository are changed. It also allows explicit grants of the least permissions possible.
- There are a lot of permissions available. [Read up on them](https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs) if you're unsure what to use.
```yaml
permissions:
contents: read
pull-requests: write
```
### Secrets
- When to use a [Personal Access Token (PAT)](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) vs the [GITHUB_TOKEN](https://docs.github.com/en/actions/security-guides/automatic-token-authentication) generated for the action?
The `GITHUB_TOKEN` is used by default. In most cases it is sufficient for what you need.
If you expect the workflow to result in a commit that should retrigger workflows, you will need to use a Personal Access Token for the bot to commit the file. When using the GITHUB_TOKEN, the resulting commit will not trigger another GitHub Actions workflow run. This is due to limitations set by GitHub. See [the docs](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#using-the-github_token-in-a-workflow) for a more detailed explanation.
For example, we must use a PAT in our workflow to commit a new changelog yaml file for bot PRs. Once the file has been committed to the branch, it should retrigger the check to validate that a changelog exists on the PR. Otherwise, it would stay in a failed state since the check would never retrigger.
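As a minimal sketch of that pattern (the secret name `BOT_PAT` and these step names are assumptions for the example, not the exact steps this repo's workflows use), checking out with a PAT lets the bot's push retrigger other workflows:
```yaml
# Illustrative only: BOT_PAT is an assumed secret name.
- name: Check out the PR branch using a PAT
  uses: actions/checkout@v3
  with:
    ref: ${{ github.event.pull_request.head.ref }}
    token: ${{ secrets.BOT_PAT }}  # a PAT; pushes made with it will retrigger workflows

- name: Commit and push the generated changelog file
  run: |
    git config user.name "Github Build Bot"
    git config user.email "buildbot@fishtownanalytics.com"
    git add .changes/unreleased/
    git commit -m "Add automated changelog yaml from template"
    git push
```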
### Triggers
You can configure your workflows to run when specific activity on GitHub happens, at a scheduled time, or when an event outside of GitHub occurs. Read more details in the [GitHub docs](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows).
These triggers are under the `on` key of the workflow and more than one can be listed.
```yaml
on:
push:
branches:
- "main"
- "*.latest"
- "releases/*"
pull_request:
# catch when the PR is opened with the label or when the label is added
types: [opened, labeled]
workflow_dispatch:
```
Some triggers of note that we use:
- `push` - Runs your workflow when you push a commit or tag.
- `pull_request` - Runs your workflow when activity on a pull request in the workflow's repository occurs. Takes in a list of activity types (opened, labeled, etc) if appropriate.
- `pull_request_target` - Same as `pull_request` but runs in the context of the PR target branch.
- `workflow_call` - used with reusable workflows. Triggered by another workflow calling it.
- `workflow_dispatch` - Gives the ability to manually trigger a workflow from the GitHub API, GitHub CLI, or GitHub browser interface.
### Basic Formatting
- Add a description of what your workflow does at the top in this format
```
# **what?**
# Describe what the action does.
# **why?**
# Why does this action exist?
# **when?**
# How/when will it be triggered?
```
- Leave blank lines between steps and jobs
```yaml
jobs:
dependency_changelog:
runs-on: ubuntu-latest
steps:
- name: Get File Name Timestamp
id: filename_time
uses: nanzm/get-time-action@v1.1
with:
format: 'YYYYMMDD-HHmmss'
- name: Get File Content Timestamp
id: file_content_time
uses: nanzm/get-time-action@v1.1
with:
format: 'YYYY-MM-DDTHH:mm:ss.000000-05:00'
- name: Generate Filepath
id: fp
run: |
FILEPATH=.changes/unreleased/Dependencies-${{ steps.filename_time.outputs.time }}.yaml
echo "::set-output name=FILEPATH::$FILEPATH"
```
- Print out all variables you will reference as the first step of a job. This allows for easier debugging. The first job should log all inputs. Subsequent jobs should reference outputs of other jobs, if present.
When possible, generate variables at the top of your workflow in a single place to reference later. This is not always strictly possible since you may generate a value to be used later mid-workflow.
Be sure to use quotes around these logs so special characters are not interpreted.
```yaml
job1:
- name: "[DEBUG] Print Variables"
run: |
echo "all variables defined as inputs"
echo "The last commit sha in the release: ${{ inputs.sha }}"
echo "The release version number: ${{ inputs.version_number }}"
echo "The changelog_path: ${{ inputs.changelog_path }}"
echo "The build_script_path: ${{ inputs.build_script_path }}"
echo "The s3_bucket_name: ${{ inputs.s3_bucket_name }}"
echo "The package_test_command: ${{ inputs.package_test_command }}"
# collect all the variables that need to be used in subsequent jobs
- name: Set Variables
id: variables
run: |
echo "::set-output name=important_path::'performance/runner/Cargo.toml'"
echo "::set-output name=release_id::${{github.event.inputs.release_id}}"
echo "::set-output name=open_prs::${{github.event.inputs.open_prs}}"
job2:
needs: [job1]
- name: "[DEBUG] Print Variables"
run: |
echo "all variables defined in job1 > Set Variables > outputs"
echo "important_path: ${{ needs.job1.outputs.important_path }}"
echo "release_id: ${{ needs.job1.outputs.release_id }}"
echo "open_prs: ${{ needs.job1.outputs.open_prs }}"
```
- When it's not obvious what something does, add a comment!
___
## Tips
### Context
- The [GitHub CLI](https://cli.github.com/) is available in the default runners
- Actions run in your context. ie, using an action from the marketplace that uses the GITHUB_TOKEN uses the GITHUB_TOKEN generated by your workflow run.
### Actions from the Marketplace
- Don't use external actions for things that can easily be accomplished manually.
- Always read through what an external action does before using it! Often an action in the GitHub Actions Marketplace can be replaced with a few lines in bash. This is much more maintainable (and won't change under us) and clear as to what's actually happening. It also prevents any surprises from upstream changes to the action.
- Pin actions _we don't control_ to tags (see the sketch below).
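For example, both pinning styles below already appear in this repository's workflows; a tag is readable, while a full commit SHA is immutable (the `with:` values shown are just the ones those workflows happen to use):
```yaml
# Pinned to a release tag
- name: Get File Name Timestamp
  id: filename_time
  uses: nanzm/get-time-action@v1.1
  with:
    format: 'YYYYMMDD-HHmmss'

# Pinned to a full commit SHA, which cannot be moved after the fact
- uses: actions/stale@cdf15f641adb27a71842045a94023bef6945e3aa
  with:
    days-before-stale: 180
```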
### Connecting to AWS
- Authenticate with the aws managed workflow
```yaml
- name: Configure AWS credentials from Test account
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
```
- Then access with the aws command that comes installed on the action runner machines
```yaml
- name: Copy Artifacts from S3 via CLI
run: aws s3 cp ${{ env.s3_bucket }} . --recursive
```
### Testing
- Depending on what your action does, you may be able to use [`act`](https://github.com/nektos/act) to test the action locally. Some features of GitHub Actions do not work with `act`, among those are reusable workflows. If you can't use `act`, you'll have to push your changes up before being able to test. This can be slow.

View File

@@ -20,4 +20,4 @@ resolves #
- [ ] I have run this code in development and it appears to resolve the stated issue - [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR - [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have [opened an issue to add/update docs](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose), or docs changes are not required/relevant for this PR - [ ] I have [opened an issue to add/update docs](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose), or docs changes are not required/relevant for this PR
- [ ] I have run `changie new` to [create a changelog entry](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#Adding-CHANGELOG-Entry) - [ ] I have run `changie new` to [create a changelog entry](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-a-changelog-entry)

61
.github/workflows/bot-changelog.yml vendored Normal file
View File

@@ -0,0 +1,61 @@
# **what?**
# When bots create a PR, this action will add a corresponding changie yaml file to that
# PR when a specific label is added.
#
# The file is created off a template:
#
# kind: <per action matrix>
# body: <PR title>
# time: <current timestamp>
# custom:
# Author: <PR User Login (generally the bot)>
# Issue: 4904
# PR: <PR number>
#
# **why?**
# Automate changelog generation for more visibility with automated bot PRs.
#
# **when?**
# Once a PR is created, label should be added to PR before or after creation. You can also
# manually trigger this by adding the appropriate label at any time.
#
# **how to add another bot?**
# Add the label and changie kind to the include matrix. That's it!
#
name: Bot Changelog
on:
pull_request:
# catch when the PR is opened with the label or when the label is added
types: [labeled]
permissions:
contents: write
pull-requests: read
jobs:
generate_changelog:
strategy:
matrix:
include:
- label: "dependencies"
changie_kind: "Dependency"
- label: "snyk"
changie_kind: "Security"
runs-on: ubuntu-latest
steps:
- name: Create and commit changelog on bot PR
if: ${{ contains(github.event.pull_request.labels.*.name, matrix.label) }}
id: bot_changelog
uses: emmyoop/changie_bot@v1.0.1
with:
GITHUB_TOKEN: ${{ secrets.FISHTOWN_BOT_PAT }}
commit_author_name: "Github Build Bot"
commit_author_email: "<buildbot@fishtownanalytics.com>"
commit_message: "Add automated changelog yaml from template for bot PR"
changie_kind: ${{ matrix.changie_kind }}
label: ${{ matrix.label }}
custom_changelog_string: "custom:\n Author: ${{ github.event.pull_request.user.login }}\n Issue: 4904\n PR: ${{ github.event.pull_request.number }}"

View File

@@ -1,78 +0,0 @@
# **what?**
# Checks that a file has been committed under the /.changes directory
# as a new CHANGELOG entry. Cannot check for a specific filename as
# it is dynamically generated by change type and timestamp.
# This workflow should not require any secrets since it runs for PRs
# from forked repos.
# By default, secrets are not passed to workflows running from
# a forked repo.
# **why?**
# Ensure code change gets reflected in the CHANGELOG.
# **when?**
# This will run for all PRs going into main and *.latest. It will
# run when they are opened, reopened, when any label is added or removed
# and when new code is pushed to the branch. The action will then get
# skipped if the 'Skip Changelog' label is present in any of the labels.
name: Check Changelog Entry
on:
pull_request:
types: [opened, reopened, labeled, unlabeled, synchronize]
workflow_dispatch:
defaults:
run:
shell: bash
permissions:
contents: read
pull-requests: write
env:
changelog_comment: 'Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry).'
jobs:
changelog:
name: changelog
if: "!contains(github.event.pull_request.labels.*.name, 'Skip Changelog')"
runs-on: ubuntu-latest
steps:
- name: Check if changelog file was added
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: filter
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
changelog:
- added: '.changes/unreleased/**.yaml'
- name: Check if comment already exists
uses: peter-evans/find-comment@v1
id: changelog_comment
with:
issue-number: ${{ github.event.pull_request.number }}
comment-author: 'github-actions[bot]'
body-includes: ${{ env.changelog_comment }}
- name: Create PR comment if changelog entry is missing, required, and does not exist
if: |
steps.filter.outputs.changelog == 'false' &&
steps.changelog_comment.outputs.comment-body == ''
uses: peter-evans/create-or-update-comment@v1
with:
issue-number: ${{ github.event.pull_request.number }}
body: ${{ env.changelog_comment }}
- name: Fail job if changelog entry is missing and required
if: steps.filter.outputs.changelog == 'false'
uses: actions/github-script@v6
with:
script: core.setFailed('Changelog entry required to merge.')

View File

@@ -0,0 +1,40 @@
# **what?**
# Checks that a file has been committed under the /.changes directory
# as a new CHANGELOG entry. Cannot check for a specific filename as
# it is dynamically generated by change type and timestamp.
# This workflow should not require any secrets since it runs for PRs
# from forked repos.
# By default, secrets are not passed to workflows running from
# a forked repo.
# **why?**
# Ensure code change gets reflected in the CHANGELOG.
# **when?**
# This will run for all PRs going into main and *.latest. It will
# run when they are opened, reopened, when any label is added or removed
# and when new code is pushed to the branch. The action will then get
# skipped if the 'Skip Changelog' label is present in any of the labels.
name: Check Changelog Entry
on:
pull_request:
types: [opened, reopened, labeled, unlabeled, synchronize]
workflow_dispatch:
defaults:
run:
shell: bash
permissions:
contents: read
pull-requests: write
jobs:
changelog:
uses: dbt-labs/actions/.github/workflows/changelog-existence.yml@main
with:
changelog_comment: 'Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry).'
skip_label: 'Skip Changelog'
secrets: inherit

View File

@@ -1,114 +0,0 @@
# **what?**
# When dependabot create a PR, it always adds the `dependencies` label. This
# action will add a corresponding changie yaml file to that PR when that label is added.
# The file is created off a template:
#
# kind: Dependencies
# body: <PR title>
# time: <current timestamp>
# custom:
# Author: dependabot
# Issue: 4904
# PR: <PR number>
#
# **why?**
# Automate changelog generation for more visibility with automated dependency updates via dependabot.
# **when?**
# Once a PR is created and it has been correctly labeled with `dependencies`. The intended use
# is for the PRs created by dependabot. You can also manually trigger this by adding the
# `dependencies` label at any time.
name: Dependency Changelog
on:
pull_request:
# catch when the PR is opened with the label or when the label is added
types: [opened, labeled]
permissions:
contents: write
pull-requests: read
jobs:
dependency_changelog:
if: "contains(github.event.pull_request.labels.*.name, 'dependencies')"
runs-on: ubuntu-latest
steps:
# timestamp changes the order the changelog entries are listed in the final Changelog.md file. Precision is not
# important here.
# The timestamp on the filename and the timestamp in the contents of the file have different expected formats.
- name: Get File Name Timestamp
id: filename_time
uses: nanzm/get-time-action@v1.1
with:
format: 'YYYYMMDD-HHmmss'
- name: Get File Content Timestamp
id: file_content_time
uses: nanzm/get-time-action@v1.1
with:
format: 'YYYY-MM-DDTHH:mm:ss.000000-05:00'
# changie expects files to be named in a specific pattern.
- name: Generate Filepath
id: fp
run: |
FILEPATH=.changes/unreleased/Dependencies-${{ steps.filename_time.outputs.time }}.yaml
echo "::set-output name=FILEPATH::$FILEPATH"
- name: Check if changelog file exists already
# if there's already a changelog entry, don't add another one!
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: changelog_check
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
exists:
- added: '.changes/unreleased/**.yaml'
- name: Checkout Branch
if: steps.changelog_check.outputs.exists == 'false'
uses: actions/checkout@v2
with:
# specifying the ref avoids checking out the repository in a detached state
ref: ${{ github.event.pull_request.head.ref }}
# If this is not set to false, Git push is performed with github.token and not the token
# configured using the env: GITHUB_TOKEN in commit step
persist-credentials: false
- name: Create file from template
if: steps.changelog_check.outputs.exists == 'false'
run: |
echo kind: Dependencies > "${{ steps.fp.outputs.FILEPATH }}"
echo 'body: "${{ github.event.pull_request.title }}"' >> "${{ steps.fp.outputs.FILEPATH }}"
echo time: "${{ steps.file_content_time.outputs.time }}" >> "${{ steps.fp.outputs.FILEPATH }}"
echo custom: >> "${{ steps.fp.outputs.FILEPATH }}"
echo ' Author: ${{ github.event.pull_request.user.login }}' >> "${{ steps.fp.outputs.FILEPATH }}"
echo ' Issue: "4904"' >> "${{ steps.fp.outputs.FILEPATH }}" # github.event.pull_request.issue for auto id?
echo ' PR: "${{ github.event.pull_request.number }}"' >> "${{ steps.fp.outputs.FILEPATH }}"
- name: Commit Changelog File
if: steps.changelog_check.outputs.exists == 'false'
uses: gr2m/create-or-update-pull-request-action@v1
env:
# When using the GITHUB_TOKEN, the resulting commit will not trigger another GitHub Actions
# Workflow run. This is due to limitations set by GitHub.
# See: https://docs.github.com/en/actions/security-guides/automatic-token-authentication#using-the-github_token-in-a-workflow
# When you use the repository's GITHUB_TOKEN to perform tasks on behalf of the GitHub Actions
# app, events triggered by the GITHUB_TOKEN will not create a new workflow run. This prevents
# you from accidentally creating recursive workflow runs. To get around this, use a Personal
# Access Token to commit changes.
GITHUB_TOKEN: ${{ secrets.FISHTOWN_BOT_PAT }}
with:
branch: ${{ github.event.pull_request.head.ref }}
# author expected in the format "Lorem J. Ipsum <lorem@example.com>"
author: "Github Build Bot <buildbot@fishtownanalytics.com>"
commit-message: "Add automated changelog yaml from template"

View File

@@ -0,0 +1,166 @@
# **what?**
# On push, if anything in core/dbt/docs or core/dbt/cli has been
# created or modified, regenerate the CLI API docs using sphinx.
# **why?**
# We watch for changes in core/dbt/cli because the CLI API docs rely on click
# and all supporting flags/params to be generated. We watch for changes in
# core/dbt/docs since any changes to sphinx configuration or any of the
# .rst files there could result in a differently built final index.html file.
# **when?**
# Whenever a change has been pushed to a branch, and only if there is a diff
# between the PR branch and main's core/dbt/cli and or core/dbt/docs dirs.
# TODO: add bot comment to PR informing contributor that the docs have been committed
# TODO: figure out why github action triggered pushes cause github to fail to report
# the status of jobs
name: Generate CLI API docs
on:
pull_request:
permissions:
contents: write
pull-requests: write
env:
CLI_DIR: ${{ github.workspace }}/core/dbt/cli
DOCS_DIR: ${{ github.workspace }}/core/dbt/docs
DOCS_BUILD_DIR: ${{ github.workspace }}/core/dbt/docs/build
jobs:
check_gen:
name: check if generation needed
runs-on: ubuntu-latest
outputs:
cli_dir_changed: ${{ steps.check_cli.outputs.cli_dir_changed }}
docs_dir_changed: ${{ steps.check_docs.outputs.docs_dir_changed }}
steps:
- name: "[DEBUG] print variables"
run: |
echo "env.CLI_DIR: ${{ env.CLI_DIR }}"
echo "env.DOCS_BUILD_DIR: ${{ env.DOCS_BUILD_DIR }}"
echo "env.DOCS_DIR: ${{ env.DOCS_DIR }}"
echo ">>>>> git log"
git log --pretty=oneline | head -5
- name: git checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.head_ref }}
- name: set shas
id: set_shas
run: |
THIS_SHA=$(git rev-parse @)
LAST_SHA=$(git rev-parse @~1)
echo "this sha: $THIS_SHA"
echo "last sha: $LAST_SHA"
echo "this_sha=$THIS_SHA" >> $GITHUB_OUTPUT
echo "last_sha=$LAST_SHA" >> $GITHUB_OUTPUT
- name: check for changes in core/dbt/cli
id: check_cli
run: |
CLI_DIR_CHANGES=$(git diff \
${{ steps.set_shas.outputs.last_sha }} \
${{ steps.set_shas.outputs.this_sha }} \
-- ${{ env.CLI_DIR }})
if [ -n "$CLI_DIR_CHANGES" ]; then
echo "changes found"
echo $CLI_DIR_CHANGES
echo "cli_dir_changed=true" >> $GITHUB_OUTPUT
exit 0
fi
echo "cli_dir_changed=false" >> $GITHUB_OUTPUT
echo "no changes found"
- name: check for changes in core/dbt/docs
id: check_docs
if: steps.check_cli.outputs.cli_dir_changed == 'false'
run: |
DOCS_DIR_CHANGES=$(git diff --name-only \
${{ steps.set_shas.outputs.last_sha }} \
${{ steps.set_shas.outputs.this_sha }} \
-- ${{ env.DOCS_DIR }} ':!${{ env.DOCS_BUILD_DIR }}')
DOCS_BUILD_DIR_CHANGES=$(git diff --name-only \
${{ steps.set_shas.outputs.last_sha }} \
${{ steps.set_shas.outputs.this_sha }} \
-- ${{ env.DOCS_BUILD_DIR }})
if [ -n "$DOCS_DIR_CHANGES" ] && [ -z "$DOCS_BUILD_DIR_CHANGES" ]; then
echo "changes found"
echo $DOCS_DIR_CHANGES
echo "docs_dir_changed=true" >> $GITHUB_OUTPUT
exit 0
fi
echo "docs_dir_changed=false" >> $GITHUB_OUTPUT
echo "no changes found"
gen_docs:
name: generate docs
runs-on: ubuntu-latest
needs: [check_gen]
if: |
needs.check_gen.outputs.cli_dir_changed == 'true'
|| needs.check_gen.outputs.docs_dir_changed == 'true'
steps:
- name: "[DEBUG] print variables"
run: |
echo "env.DOCS_DIR: ${{ env.DOCS_DIR }}"
echo "github head_ref: ${{ github.head_ref }}"
- name: git checkout
uses: actions/checkout@v3
with:
ref: ${{ github.head_ref }}
- name: install python
uses: actions/setup-python@v4.3.0
with:
python-version: 3.8
- name: install dev requirements
run: |
python3 -m venv env
source env/bin/activate
python -m pip install --upgrade pip
pip install -r requirements.txt -r dev-requirements.txt
- name: generate docs
run: |
source env/bin/activate
cd ${{ env.DOCS_DIR }}
echo "cleaning existing docs"
make clean
echo "creating docs"
make html
- name: debug
run: |
echo ">>>>> status"
git status
echo ">>>>> remotes"
git remote -v
echo ">>>>> branch"
git branch -v
echo ">>>>> log"
git log --pretty=oneline | head -5
- name: commit docs
run: |
git config user.name 'Github Build Bot'
git config user.email 'buildbot@fishtownanalytics.com'
git commit -am "Add generated CLI API docs"
git push -u origin ${{ github.head_ref }}

View File

@@ -15,6 +15,9 @@ on:
issues: issues:
types: [closed, deleted, reopened] types: [closed, deleted, reopened]
# no special access is needed
permissions: read-all
jobs: jobs:
call-label-action: call-label-action:
uses: dbt-labs/jira-actions/.github/workflows/jira-transition.yml@main uses: dbt-labs/jira-actions/.github/workflows/jira-transition.yml@main

View File

@@ -45,7 +45,9 @@ jobs:
uses: actions/checkout@v2 uses: actions/checkout@v2
- name: Set up Python - name: Set up Python
uses: actions/setup-python@v2 uses: actions/setup-python@v4.3.0
with:
python-version: '3.8'
- name: Install python dependencies - name: Install python dependencies
run: | run: |
@@ -82,7 +84,7 @@ jobs:
uses: actions/checkout@v2 uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2 uses: actions/setup-python@v4.3.0
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
@@ -117,7 +119,7 @@ jobs:
fail-fast: false fail-fast: false
matrix: matrix:
python-version: ["3.7", "3.8", "3.9", "3.10"] python-version: ["3.7", "3.8", "3.9", "3.10"]
os: [ubuntu-latest] os: [ubuntu-20.04]
include: include:
- python-version: 3.8 - python-version: 3.8
os: windows-latest os: windows-latest
@@ -137,7 +139,7 @@ jobs:
uses: actions/checkout@v2 uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2 uses: actions/setup-python@v4.3.0
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
@@ -190,9 +192,9 @@ jobs:
uses: actions/checkout@v2 uses: actions/checkout@v2
- name: Set up Python - name: Set up Python
uses: actions/setup-python@v2 uses: actions/setup-python@v4.3.0
with: with:
python-version: 3.8 python-version: '3.8'
- name: Install python dependencies - name: Install python dependencies
run: | run: |

View File

@@ -39,7 +39,7 @@ jobs:
max-parallel: 1 max-parallel: 1
fail-fast: false fail-fast: false
matrix: matrix:
branch: [1.0.latest, 1.1.latest, main] branch: [1.0.latest, 1.1.latest, 1.2.latest, 1.3.latest, main]
steps: steps:
- name: Call CI workflow for ${{ matrix.branch }} branch - name: Call CI workflow for ${{ matrix.branch }} branch

View File

@@ -20,6 +20,9 @@ on:
description: 'The release version number (i.e. 1.0.0b1)' description: 'The release version number (i.e. 1.0.0b1)'
required: true required: true
permissions:
contents: write # this is the permission that allows creating a new release
defaults: defaults:
run: run:
shell: bash shell: bash

View File

@@ -21,6 +21,9 @@ on:
- "*.latest" - "*.latest"
- "releases/*" - "releases/*"
# no special access is needed
permissions: read-all
env: env:
LATEST_SCHEMA_PATH: ${{ github.workspace }}/new_schemas LATEST_SCHEMA_PATH: ${{ github.workspace }}/new_schemas
SCHEMA_DIFF_ARTIFACT: ${{ github.workspace }}//schema_schanges.txt SCHEMA_DIFF_ARTIFACT: ${{ github.workspace }}//schema_schanges.txt

View File

@@ -3,15 +3,10 @@ on:
schedule: schedule:
- cron: "30 1 * * *" - cron: "30 1 * * *"
permissions:
issues: write
pull-requests: write
jobs: jobs:
stale: stale:
runs-on: ubuntu-latest uses: dbt-labs/actions/.github/workflows/stale-bot-matrix.yml@main
steps:
# pinned at v4 (https://github.com/actions/stale/releases/tag/v4.0.0)
- uses: actions/stale@cdf15f641adb27a71842045a94023bef6945e3aa
with:
stale-issue-message: "This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please remove the stale label or comment on the issue, or it will be closed in 7 days."
stale-pr-message: "This PR has been marked as Stale because it has been open for 180 days with no activity. If you would like the PR to remain open, please remove the stale label or comment on the PR, or it will be closed in 7 days."
close-issue-message: "Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest; add a comment to notify the maintainers."
# mark issues/PRs stale when they haven't seen activity in 180 days
days-before-stale: 180


@@ -22,7 +22,7 @@ jobs:
# run the performance measurements on the current or default branch # run the performance measurements on the current or default branch
test-schema: test-schema:
name: Test Log Schema name: Test Log Schema
runs-on: ubuntu-latest runs-on: ubuntu-20.04
env: env:
# turns warnings into errors # turns warnings into errors
RUSTFLAGS: "-D warnings" RUSTFLAGS: "-D warnings"
@@ -46,12 +46,6 @@ jobs:
with: with:
python-version: "3.8" python-version: "3.8"
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Install python dependencies - name: Install python dependencies
run: | run: |
pip install --user --upgrade pip pip install --user --upgrade pip
@@ -69,10 +63,3 @@ jobs:
# we actually care if these pass, because the normal test run doesn't usually include many json log outputs # we actually care if these pass, because the normal test run doesn't usually include many json log outputs
- name: Run integration tests - name: Run integration tests
run: tox -e integration -- -nauto run: tox -e integration -- -nauto
# apply our schema tests to every log event from the previous step
# skips any output that isn't valid json
- uses: actions-rs/cargo@v1
with:
command: run
args: --manifest-path test/interop/log_parsing/Cargo.toml


@@ -1,18 +1,15 @@
# **what?** # **what?**
# This workflow will take a version number and a dry run flag. With that # This workflow will take the new version number to bump to. With that
# it will run versionbump to update the version number everywhere in the # it will run versionbump to update the version number everywhere in the
# code base and then generate an update Docker requirements file. If this # code base and then run changie to create the corresponding changelog.
# is a dry run, a draft PR will open with the changes. If this isn't a dry # A PR will be created with the changes that can be reviewed before committing.
# run, the changes will be committed to the branch this is run on.
# **why?** # **why?**
# This is to aid in releasing dbt and making sure we have updated # This is to aid in releasing dbt and making sure we have updated
# the versions and Docker requirements in all places. # the version in all places and generated the changelog.
# **when?** # **when?**
# This is triggered either manually OR # This is triggered manually
# from the repository_dispatch event "version-bump" which is sent from
# the dbt-release repo Action
name: Version Bump name: Version Bump
@@ -20,35 +17,25 @@ on:
workflow_dispatch: workflow_dispatch:
inputs: inputs:
version_number: version_number:
description: 'The version number to bump to' description: 'The version number to bump to (ex. 1.2.0, 1.3.0b1)'
required: true required: true
is_dry_run:
description: 'Creates a draft PR to allow testing instead of committing to a branch' permissions:
required: true contents: write
default: 'true' pull-requests: write
repository_dispatch:
types: [version-bump]
jobs: jobs:
bump: bump:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: "[DEBUG] Print Variables"
run: |
echo "all variables defined as inputs"
echo The version_number: ${{ github.event.inputs.version_number }}
- name: Check out the repository - name: Check out the repository
uses: actions/checkout@v2 uses: actions/checkout@v2
- name: Set version and dry run values
id: variables
env:
VERSION_NUMBER: "${{ github.event.client_payload.version_number == '' && github.event.inputs.version_number || github.event.client_payload.version_number }}"
IS_DRY_RUN: "${{ github.event.client_payload.is_dry_run == '' && github.event.inputs.is_dry_run || github.event.client_payload.is_dry_run }}"
run: |
echo Repository dispatch event version: ${{ github.event.client_payload.version_number }}
echo Repository dispatch event dry run: ${{ github.event.client_payload.is_dry_run }}
echo Workflow dispatch event version: ${{ github.event.inputs.version_number }}
echo Workflow dispatch event dry run: ${{ github.event.inputs.is_dry_run }}
echo ::set-output name=VERSION_NUMBER::$VERSION_NUMBER
echo ::set-output name=IS_DRY_RUN::$IS_DRY_RUN
- uses: actions/setup-python@v2 - uses: actions/setup-python@v2
with: with:
python-version: "3.8" python-version: "3.8"
@@ -59,53 +46,80 @@ jobs:
source env/bin/activate source env/bin/activate
pip install --upgrade pip pip install --upgrade pip
- name: Create PR branch - name: Add Homebrew to PATH
if: ${{ steps.variables.outputs.IS_DRY_RUN == 'true' }}
run: | run: |
git checkout -b bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_$GITHUB_RUN_ID echo "/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin" >> $GITHUB_PATH
git push origin bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_$GITHUB_RUN_ID
git branch --set-upstream-to=origin/bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_$GITHUB_RUN_ID bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_$GITHUB_RUN_ID
# - name: Generate Docker requirements - name: Install Homebrew packages
# run: | run: |
# source env/bin/activate brew install pre-commit
# pip install -r requirements.txt brew tap miniscruff/changie https://github.com/miniscruff/changie
# pip freeze -l > docker/requirements/requirements.txt brew install changie
# git status
- name: Audit Version and Parse Into Parts
id: semver
uses: dbt-labs/actions/parse-semver@v1
with:
version: ${{ github.event.inputs.version_number }}
- name: Set branch value
id: variables
run: |
echo "::set-output name=BRANCH_NAME::prep-release/${{ github.event.inputs.version_number }}_$GITHUB_RUN_ID"
- name: Create PR branch
run: |
git checkout -b ${{ steps.variables.outputs.BRANCH_NAME }}
git push origin ${{ steps.variables.outputs.BRANCH_NAME }}
git branch --set-upstream-to=origin/${{ steps.variables.outputs.BRANCH_NAME }} ${{ steps.variables.outputs.BRANCH_NAME }}
- name: Bump version - name: Bump version
run: | run: |
source env/bin/activate source env/bin/activate
pip install -r dev-requirements.txt pip install -r dev-requirements.txt
env/bin/bumpversion --allow-dirty --new-version ${{steps.variables.outputs.VERSION_NUMBER}} major env/bin/bumpversion --allow-dirty --new-version ${{ github.event.inputs.version_number }} major
git status git status
- name: Commit version bump directly - name: Run changie
uses: EndBug/add-and-commit@v7 run: |
if: ${{ steps.variables.outputs.IS_DRY_RUN == 'false' }} if [[ ${{ steps.semver.outputs.is-pre-release }} -eq 1 ]]
with: then
author_name: 'Github Build Bot' changie batch ${{ steps.semver.outputs.base-version }} --move-dir '${{ steps.semver.outputs.base-version }}' --prerelease '${{ steps.semver.outputs.pre-release }}'
author_email: 'buildbot@fishtownanalytics.com' else
message: 'Bumping version to ${{steps.variables.outputs.VERSION_NUMBER}}' changie batch ${{ steps.semver.outputs.base-version }} --include '${{ steps.semver.outputs.base-version }}' --remove-prereleases
fi
changie merge
git status
# this step will fail on whitespace errors but also correct them
- name: Remove trailing whitespace
continue-on-error: true
run: |
pre-commit run trailing-whitespace --files .bumpversion.cfg CHANGELOG.md .changes/*
git status
# this step will fail on newline errors but also correct them
- name: Removing extra newlines
continue-on-error: true
run: |
pre-commit run end-of-file-fixer --files .bumpversion.cfg CHANGELOG.md .changes/*
git status
- name: Commit version bump to branch - name: Commit version bump to branch
uses: EndBug/add-and-commit@v7 uses: EndBug/add-and-commit@v7
if: ${{ steps.variables.outputs.IS_DRY_RUN == 'true' }}
with: with:
author_name: 'Github Build Bot' author_name: 'Github Build Bot'
author_email: 'buildbot@fishtownanalytics.com' author_email: 'buildbot@fishtownanalytics.com'
message: 'Bumping version to ${{steps.variables.outputs.VERSION_NUMBER}}' message: 'Bumping version to ${{ github.event.inputs.version_number }} and generate CHANGELOG'
branch: 'bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_${{GITHUB.RUN_ID}}' branch: '${{ steps.variables.outputs.BRANCH_NAME }}'
push: 'origin origin/bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_${{GITHUB.RUN_ID}}' push: 'origin origin/${{ steps.variables.outputs.BRANCH_NAME }}'
- name: Create Pull Request - name: Create Pull Request
uses: peter-evans/create-pull-request@v3 uses: peter-evans/create-pull-request@v3
if: ${{ steps.variables.outputs.IS_DRY_RUN == 'true' }}
with: with:
author: 'Github Build Bot <buildbot@fishtownanalytics.com>' author: 'Github Build Bot <buildbot@fishtownanalytics.com>'
draft: true
base: ${{github.ref}} base: ${{github.ref}}
title: 'Bumping version to ${{steps.variables.outputs.VERSION_NUMBER}}' title: 'Bumping version to ${{ github.event.inputs.version_number }} and generate changelog'
branch: 'bumping-version/${{steps.variables.outputs.VERSION_NUMBER}}_${{GITHUB.RUN_ID}}' branch: '${{ steps.variables.outputs.BRANCH_NAME }}'
labels: | labels: |
Skip Changelog Skip Changelog

.gitignore

@@ -11,6 +11,7 @@ __pycache__/
env*/ env*/
dbt_env/ dbt_env/
build/ build/
!core/dbt/docs/build
develop-eggs/ develop-eggs/
dist/ dist/
downloads/ downloads/
@@ -24,7 +25,8 @@ var/
*.egg-info/ *.egg-info/
.installed.cfg .installed.cfg
*.egg *.egg
*.mypy_cache/ .mypy_cache/
.dmypy.json
logs/ logs/
# PyInstaller # PyInstaller
@@ -95,3 +97,7 @@ venv/
# vscode # vscode
.vscode/ .vscode/
*.code-workspace
# poetry
poetry.lock


@@ -2,11 +2,11 @@
# Eventually the hooks described here will be run as tests before merging each PR. # Eventually the hooks described here will be run as tests before merging each PR.
# TODO: remove global exclusion of tests when testing overhaul is complete # TODO: remove global exclusion of tests when testing overhaul is complete
exclude: ^test/ exclude: ^(test/|core/dbt/docs/build/)
# Force all unspecified python hooks to run python 3.8 # Force all unspecified python hooks to run python 3.8
default_language_version: default_language_version:
python: python3.8 python: python3
repos: repos:
- repo: https://github.com/pre-commit/pre-commit-hooks - repo: https://github.com/pre-commit/pre-commit-hooks
@@ -24,18 +24,13 @@ repos:
rev: 22.3.0 rev: 22.3.0
hooks: hooks:
- id: black - id: black
args:
- "--line-length=99"
- "--target-version=py38"
- id: black - id: black
alias: black-check alias: black-check
stages: [manual] stages: [manual]
args: args:
- "--line-length=99"
- "--target-version=py38"
- "--check" - "--check"
- "--diff" - "--diff"
- repo: https://gitlab.com/pycqa/flake8 - repo: https://github.com/pycqa/flake8
rev: 4.0.1 rev: 4.0.1
hooks: hooks:
- id: flake8 - id: flake8


@@ -11,6 +11,7 @@
For information on prior major and minor releases, see their changelogs: For information on prior major and minor releases, see their changelogs:
* [1.3](https://github.com/dbt-labs/dbt-core/blob/1.3.latest/CHANGELOG.md)
* [1.2](https://github.com/dbt-labs/dbt-core/blob/1.2.latest/CHANGELOG.md) * [1.2](https://github.com/dbt-labs/dbt-core/blob/1.2.latest/CHANGELOG.md)
* [1.1](https://github.com/dbt-labs/dbt-core/blob/1.1.latest/CHANGELOG.md) * [1.1](https://github.com/dbt-labs/dbt-core/blob/1.1.latest/CHANGELOG.md)
* [1.0](https://github.com/dbt-labs/dbt-core/blob/1.0.latest/CHANGELOG.md) * [1.0](https://github.com/dbt-labs/dbt-core/blob/1.0.latest/CHANGELOG.md)


@@ -7,7 +7,9 @@
3. [Setting up an environment](#setting-up-an-environment) 3. [Setting up an environment](#setting-up-an-environment)
4. [Running `dbt` in development](#running-dbt-core-in-development) 4. [Running `dbt` in development](#running-dbt-core-in-development)
5. [Testing dbt-core](#testing) 5. [Testing dbt-core](#testing)
6. [Submitting a Pull Request](#submitting-a-pull-request) 6. [Debugging](#debugging)
7. [Adding a changelog entry](#adding-a-changelog-entry)
8. [Submitting a Pull Request](#submitting-a-pull-request)
## About this document ## About this document
@@ -21,7 +23,8 @@ If you get stuck, we're happy to help! Drop us a line in the `#dbt-core-developm
- **Adapters:** Is your issue or proposed code change related to a specific [database adapter](https://docs.getdbt.com/docs/available-adapters)? If so, please open issues, PRs, and discussions in that adapter's repository instead. The sole exception is Postgres; the `dbt-postgres` plugin lives in this repository (`dbt-core`). - **Adapters:** Is your issue or proposed code change related to a specific [database adapter](https://docs.getdbt.com/docs/available-adapters)? If so, please open issues, PRs, and discussions in that adapter's repository instead. The sole exception is Postgres; the `dbt-postgres` plugin lives in this repository (`dbt-core`).
- **CLA:** Please note that anyone contributing code to `dbt-core` must sign the [Contributor License Agreement](https://docs.getdbt.com/docs/contributor-license-agreements). If you are unable to sign the CLA, the `dbt-core` maintainers will unfortunately be unable to merge any of your Pull Requests. We welcome you to participate in discussions, open issues, and comment on existing ones. - **CLA:** Please note that anyone contributing code to `dbt-core` must sign the [Contributor License Agreement](https://docs.getdbt.com/docs/contributor-license-agreements). If you are unable to sign the CLA, the `dbt-core` maintainers will unfortunately be unable to merge any of your Pull Requests. We welcome you to participate in discussions, open issues, and comment on existing ones.
- **Branches:** All pull requests from community contributors should target the `main` branch (default). If the change is needed as a patch for a minor version of dbt that has already been released (or is already a release candidate), a maintainer will backport the changes in your PR to the relevant "latest" release branch (`1.0.latest`, `1.1.latest`, ...) - **Branches:** All pull requests from community contributors should target the `main` branch (default). If the change is needed as a patch for a minor version of dbt that has already been released (or is already a release candidate), a maintainer will backport the changes in your PR to the relevant "latest" release branch (`1.0.latest`, `1.1.latest`, ...). If an issue fix applies to a release branch, that fix should be first committed to the development branch and then to the release branch (rarely release-branch fixes may not apply to `main`).
- **Releases**: Before releasing a new minor version of Core, we prepare a series of alphas and release candidates to allow users (especially employees of dbt Labs!) to test the new version in live environments. This is an important quality assurance step, as it exposes the new code to a wide variety of complicated deployments and can surface bugs before official release. Releases are accessible via pip, homebrew, and dbt Cloud.
## Getting the code ## Getting the code
@@ -41,7 +44,9 @@ If you are not a member of the `dbt-labs` GitHub organization, you can contribut
### dbt Labs contributors ### dbt Labs contributors
If you are a member of the `dbt-labs` GitHub organization, you will have push access to the `dbt-core` repo. Rather than forking `dbt-core` to make your changes, just clone the repository, check out a new branch, and push directly to that branch. If you are a member of the `dbt-labs` GitHub organization, you will have push access to the `dbt-core` repo. Rather than forking `dbt-core` to make your changes, just clone the repository, check out a new branch, and push directly to that branch. Branch names should be prefixed with `CT-XXX/` where:
* CT stands for 'core team'
* XXX stands for a JIRA ticket number
## Setting up an environment ## Setting up an environment
@@ -151,7 +156,7 @@ Check out the other targets in the Makefile to see other commonly used test
suites. suites.
#### `pre-commit` #### `pre-commit`
[`pre-commit`](https://pre-commit.com) takes care of running all code-checks for formatting and linting. Run `make dev` to install `pre-commit` in your local environment. Once this is done you can use any of the linter-based make targets as well as a git pre-commit hook that will ensure proper formatting and linting. [`pre-commit`](https://pre-commit.com) takes care of running all code-checks for formatting and linting. Run `make dev` to install `pre-commit` in your local environment (we recommend running this command with a python virtual environment active). This command installs several pip executables including black, mypy, and flake8. Once this is done you can use any of the linter-based make targets as well as a git pre-commit hook that will ensure proper formatting and linting.
#### `tox` #### `tox`
@@ -174,7 +179,29 @@ python3 -m pytest tests/functional/sources
> See [pytest usage docs](https://docs.pytest.org/en/6.2.x/usage.html) for an overview of useful command-line options. > See [pytest usage docs](https://docs.pytest.org/en/6.2.x/usage.html) for an overview of useful command-line options.
## Adding CHANGELOG Entry ### Unit, Integration, Functional?
Here are some general rules for adding tests:
* unit tests (`test/unit` & `tests/unit`) don't need to access a database; "pure Python" tests should be written as unit tests
* functional tests (`test/integration` & `tests/functional`) cover anything that interacts with a database, namely adapter
* *everything in* `test/*` *is being steadily migrated to* `tests/*`
## Debugging
1. The logs for a `dbt run` have stack traces and other information for debugging errors (in `logs/dbt.log` in your project directory).
2. Try using a debugger, like `ipdb`. For pytest: `--pdb --pdbcls=IPython.terminal.debugger:pdb`
3. Sometimes, it's easier to debug on a single thread: `dbt --single-threaded run`
4. To make print statements from Jinja macros: `{{ log(msg, info=true) }}`
5. You can also add `{{ debug() }}` statements, which will drop you into some auto-generated code that the macro wrote.
6. The dbt “artifacts” are written out to the target directory of your dbt project. They are in unformatted json, which can be hard to read. Format them with:
> python -m json.tool target/run_results.json > run_results.json
### Assorted development tips
* Append `# type: ignore` to the end of a line if you need to disable `mypy` on that line.
* Sometimes flake8 complains about lines that are actually fine, in which case you can put a comment on the line such as `# noqa` or `# noqa: ANNN`, where ANNN is the error code that flake8 issues.
* To collect output for `cProfile`, run dbt with the `-r` option and the name of an output file, e.g. `dbt -r dbt.cprof run`. If you just want to profile parsing, you can do `dbt -r dbt.cprof parse`. Install `snakeviz` (`pip install snakeviz`) to view the output: run `snakeviz dbt.cprof` and the output will be rendered in a browser window.
## Adding a CHANGELOG Entry
We use [changie](https://changie.dev) to generate `CHANGELOG` entries. **Note:** Do not edit the `CHANGELOG.md` directly. Your modifications will be lost. We use [changie](https://changie.dev) to generate `CHANGELOG` entries. **Note:** Do not edit the `CHANGELOG.md` directly. Your modifications will be lost.
@@ -186,8 +213,10 @@ You don't need to worry about which `dbt-core` version your change will go into.
## Submitting a Pull Request ## Submitting a Pull Request
A `dbt-core` maintainer will review your PR. They may suggest code revision for style or clarity, or request that you add unit or integration test(s). These are good things! We believe that, with a little bit of help, anyone can contribute high-quality code. Code can be merged into the current development branch `main` by opening a pull request. A `dbt-core` maintainer will review your PR. They may suggest code revision for style or clarity, or request that you add unit or integration test(s). These are good things! We believe that, with a little bit of help, anyone can contribute high-quality code.
Automated tests run via GitHub Actions. If you're a first-time contributor, all tests (including code checks and unit tests) will require a maintainer to approve. Changes in the `dbt-core` repository trigger integration tests against Postgres. dbt Labs also provides CI environments in which to test changes to other adapters, triggered by PRs in those adapters' repositories, as well as periodic maintenance checks of each adapter in concert with the latest `dbt-core` code changes. Automated tests run via GitHub Actions. If you're a first-time contributor, all tests (including code checks and unit tests) will require a maintainer to approve. Changes in the `dbt-core` repository trigger integration tests against Postgres. dbt Labs also provides CI environments in which to test changes to other adapters, triggered by PRs in those adapters' repositories, as well as periodic maintenance checks of each adapter in concert with the latest `dbt-core` code changes.
Once all tests are passing and your PR has been approved, a `dbt-core` maintainer will merge your changes into the active development branch. And that's it! Happy developing :tada: Once all tests are passing and your PR has been approved, a `dbt-core` maintainer will merge your changes into the active development branch. And that's it! Happy developing :tada:
Sometimes, the content license agreement auto-check bot doesn't find a user's entry in its roster. If you need to force a rerun, add `@cla-bot check` in a comment on the pull request.


@@ -6,6 +6,19 @@ ifeq ($(USE_DOCKER),true)
DOCKER_CMD := docker-compose run --rm test DOCKER_CMD := docker-compose run --rm test
endif endif
LOGS_DIR := ./logs
# Optional flag to invoke tests using our CI env.
# But we always want these active for structured
# log testing.
CI_FLAGS =\
DBT_TEST_USER_1=dbt_test_user_1\
DBT_TEST_USER_2=dbt_test_user_2\
DBT_TEST_USER_3=dbt_test_user_3\
RUSTFLAGS="-D warnings"\
LOG_DIR=./logs\
DBT_LOG_FORMAT=json
.PHONY: dev .PHONY: dev
dev: ## Installs dbt-* packages in develop mode along with development dependencies. dev: ## Installs dbt-* packages in develop mode along with development dependencies.
@\ @\
@@ -48,13 +61,20 @@ test: .env ## Runs unit tests with py and code checks against staged changes.
.PHONY: integration .PHONY: integration
integration: .env ## Runs postgres integration tests with py-integration integration: .env ## Runs postgres integration tests with py-integration
@\ @\
$(DOCKER_CMD) tox -e py-integration -- -nauto $(if $(USE_CI_FLAGS), $(CI_FLAGS)) $(DOCKER_CMD) tox -e py-integration -- -nauto
.PHONY: integration-fail-fast .PHONY: integration-fail-fast
integration-fail-fast: .env ## Runs postgres integration tests with py-integration in "fail fast" mode. integration-fail-fast: .env ## Runs postgres integration tests with py-integration in "fail fast" mode.
@\ @\
$(DOCKER_CMD) tox -e py-integration -- -x -nauto $(DOCKER_CMD) tox -e py-integration -- -x -nauto
.PHONY: interop
interop: clean
@\
mkdir $(LOGS_DIR) && \
$(CI_FLAGS) $(DOCKER_CMD) tox -e py-integration -- -nauto && \
LOG_DIR=$(LOGS_DIR) cargo run --manifest-path test/interop/log_parsing/Cargo.toml
.PHONY: setup-db .PHONY: setup-db
setup-db: ## Setup Postgres database with docker-compose for system testing. setup-db: ## Setup Postgres database with docker-compose for system testing.
@\ @\
@@ -76,6 +96,7 @@ endif
clean: ## Resets development environment. clean: ## Resets development environment.
@echo 'cleaning repo...' @echo 'cleaning repo...'
@rm -f .coverage @rm -f .coverage
@rm -f .coverage.*
@rm -rf .eggs/ @rm -rf .eggs/
@rm -f .env @rm -f .env
@rm -rf .tox/ @rm -rf .tox/


@@ -1 +1,2 @@
recursive-include dbt/include *.py *.sql *.yml *.html *.md .gitkeep .gitignore recursive-include dbt/include *.py *.sql *.yml *.html *.md .gitkeep .gitignore
include dbt/py.typed


@@ -0,0 +1,10 @@
## Base adapters
### impl.py
The class `SQLAdapter` in [base/impl.py](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/adapters/base/impl.py) is a (mostly) abstract object that adapter objects inherit from. The base class scaffolds out methods that every adapter project usually should implement for smooth communication between dbt and the database.
Some target databases require more or fewer methods; it all depends on the warehouse's feature set.
Look into the class for function-level comments.
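For orientation, here is a minimal sketch (not taken from this diff) of how an adapter plugin typically inherits from these base classes; the MyAdapter / MyConnectionManager names and the "myadapter" type string are hypothetical, and real adapters implement many more of the scaffolded methods.

from dbt.adapters.sql import SQLAdapter, SQLConnectionManager


class MyConnectionManager(SQLConnectionManager):
    TYPE = "myadapter"  # hypothetical adapter type name
    # real adapters also implement open(), get_response(), exception_handler(), ...


class MyAdapter(SQLAdapter):
    ConnectionManager = MyConnectionManager

    @classmethod
    def date_function(cls) -> str:
        # SQL expression for "now" on the target warehouse
        return "now()"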


@@ -10,5 +10,5 @@ from dbt.adapters.base.relation import ( # noqa
SchemaSearchMap, SchemaSearchMap,
) )
from dbt.adapters.base.column import Column # noqa from dbt.adapters.base.column import Column # noqa
from dbt.adapters.base.impl import AdapterConfig, BaseAdapter # noqa from dbt.adapters.base.impl import AdapterConfig, BaseAdapter, PythonJobHelper # noqa
from dbt.adapters.base.plugin import AdapterPlugin # noqa from dbt.adapters.base.plugin import AdapterPlugin # noqa


@@ -12,6 +12,7 @@ class Column:
"TIMESTAMP": "TIMESTAMP", "TIMESTAMP": "TIMESTAMP",
"FLOAT": "FLOAT", "FLOAT": "FLOAT",
"INTEGER": "INT", "INTEGER": "INT",
"BOOLEAN": "BOOLEAN",
} }
column: str column: str
dtype: str dtype: str


@@ -2,6 +2,7 @@ import abc
import os import os
from time import sleep from time import sleep
import sys import sys
import traceback
# multiprocessing.RLock is a function returning this type # multiprocessing.RLock is a function returning this type
from multiprocessing.synchronize import RLock from multiprocessing.synchronize import RLock
@@ -40,14 +41,15 @@ from dbt.events.functions import fire_event
from dbt.events.types import ( from dbt.events.types import (
NewConnection, NewConnection,
ConnectionReused, ConnectionReused,
ConnectionLeftOpenInCleanup,
ConnectionLeftOpen, ConnectionLeftOpen,
ConnectionLeftOpen2, ConnectionClosedInCleanup,
ConnectionClosed, ConnectionClosed,
ConnectionClosed2,
Rollback, Rollback,
RollbackFailed, RollbackFailed,
) )
from dbt import flags from dbt import flags
from dbt.utils import cast_to_str
SleepTime = Union[int, float] # As taken by time.sleep. SleepTime = Union[int, float] # As taken by time.sleep.
AdapterHandle = Any # Adapter connection handle objects can be any class. AdapterHandle = Any # Adapter connection handle objects can be any class.
@@ -304,9 +306,9 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
with self.lock: with self.lock:
for connection in self.thread_connections.values(): for connection in self.thread_connections.values():
if connection.state not in {"closed", "init"}: if connection.state not in {"closed", "init"}:
fire_event(ConnectionLeftOpen(conn_name=connection.name)) fire_event(ConnectionLeftOpenInCleanup(conn_name=cast_to_str(connection.name)))
else: else:
fire_event(ConnectionClosed(conn_name=connection.name)) fire_event(ConnectionClosedInCleanup(conn_name=cast_to_str(connection.name)))
self.close(connection) self.close(connection)
# garbage collect these connections # garbage collect these connections
@@ -332,17 +334,21 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
try: try:
connection.handle.rollback() connection.handle.rollback()
except Exception: except Exception:
fire_event(RollbackFailed(conn_name=connection.name)) fire_event(
RollbackFailed(
conn_name=cast_to_str(connection.name), exc_info=traceback.format_exc()
)
)
@classmethod @classmethod
def _close_handle(cls, connection: Connection) -> None: def _close_handle(cls, connection: Connection) -> None:
"""Perform the actual close operation.""" """Perform the actual close operation."""
# On windows, sometimes connection handles don't have a close() attr. # On windows, sometimes connection handles don't have a close() attr.
if hasattr(connection.handle, "close"): if hasattr(connection.handle, "close"):
fire_event(ConnectionClosed2(conn_name=connection.name)) fire_event(ConnectionClosed(conn_name=cast_to_str(connection.name)))
connection.handle.close() connection.handle.close()
else: else:
fire_event(ConnectionLeftOpen2(conn_name=connection.name)) fire_event(ConnectionLeftOpen(conn_name=cast_to_str(connection.name)))
@classmethod @classmethod
def _rollback(cls, connection: Connection) -> None: def _rollback(cls, connection: Connection) -> None:
@@ -353,7 +359,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
f'"{connection.name}", but it does not have one open!' f'"{connection.name}", but it does not have one open!'
) )
fire_event(Rollback(conn_name=connection.name)) fire_event(Rollback(conn_name=cast_to_str(connection.name)))
cls._rollback_handle(connection) cls._rollback_handle(connection)
connection.transaction_open = False connection.transaction_open = False
@@ -365,7 +371,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
return connection return connection
if connection.transaction_open and connection.handle: if connection.transaction_open and connection.handle:
fire_event(Rollback(conn_name=connection.name)) fire_event(Rollback(conn_name=cast_to_str(connection.name)))
cls._rollback_handle(connection) cls._rollback_handle(connection)
connection.transaction_open = False connection.transaction_open = False


@@ -2,6 +2,7 @@ import abc
from concurrent.futures import as_completed, Future from concurrent.futures import as_completed, Future
from contextlib import contextmanager from contextlib import contextmanager
from datetime import datetime from datetime import datetime
import time
from itertools import chain from itertools import chain
from typing import ( from typing import (
Optional, Optional,
@@ -40,10 +41,15 @@ from dbt.clients.jinja import MacroGenerator
from dbt.contracts.graph.compiled import CompileResultNode, CompiledSeedNode from dbt.contracts.graph.compiled import CompileResultNode, CompiledSeedNode
from dbt.contracts.graph.manifest import Manifest, MacroManifest from dbt.contracts.graph.manifest import Manifest, MacroManifest
from dbt.contracts.graph.parsed import ParsedSeedNode from dbt.contracts.graph.parsed import ParsedSeedNode
from dbt.exceptions import warn_or_error from dbt.events.functions import fire_event, warn_or_error
from dbt.events.functions import fire_event from dbt.events.types import (
from dbt.events.types import CacheMiss, ListRelations CacheMiss,
from dbt.utils import filter_null_values, executor ListRelations,
CodeExecution,
CodeExecutionStatus,
CatalogGenerationError,
)
from dbt.utils import filter_null_values, executor, cast_to_str
from dbt.adapters.base.connections import Connection, AdapterResponse from dbt.adapters.base.connections import Connection, AdapterResponse
from dbt.adapters.base.meta import AdapterMeta, available from dbt.adapters.base.meta import AdapterMeta, available
@@ -54,7 +60,8 @@ from dbt.adapters.base.relation import (
SchemaSearchMap, SchemaSearchMap,
) )
from dbt.adapters.base import Column as BaseColumn from dbt.adapters.base import Column as BaseColumn
from dbt.adapters.cache import RelationsCache, _make_key from dbt.adapters.base import Credentials
from dbt.adapters.cache import RelationsCache, _make_ref_key_msg
SeedModel = Union[ParsedSeedNode, CompiledSeedNode] SeedModel = Union[ParsedSeedNode, CompiledSeedNode]
@@ -121,6 +128,35 @@ def _relation_name(rel: Optional[BaseRelation]) -> str:
return str(rel) return str(rel)
def log_code_execution(code_execution_function):
# decorator to log code and execution time
if code_execution_function.__name__ != "submit_python_job":
raise ValueError("this should be only used to log submit_python_job now")
def execution_with_log(*args):
self = args[0]
connection_name = self.connections.get_thread_connection().name
fire_event(CodeExecution(conn_name=connection_name, code_content=args[2]))
start_time = time.time()
response = code_execution_function(*args)
fire_event(
CodeExecutionStatus(
status=response._message, elapsed=round((time.time() - start_time), 2)
)
)
return response
return execution_with_log
class PythonJobHelper:
def __init__(self, parsed_model: Dict, credential: Credentials) -> None:
raise NotImplementedError("PythonJobHelper is not implemented yet")
def submit(self, compiled_code: str) -> Any:
raise NotImplementedError("PythonJobHelper submit function is not implemented yet")
class BaseAdapter(metaclass=AdapterMeta): class BaseAdapter(metaclass=AdapterMeta):
"""The BaseAdapter provides an abstract base class for adapters. """The BaseAdapter provides an abstract base class for adapters.
@@ -284,7 +320,9 @@ class BaseAdapter(metaclass=AdapterMeta):
from dbt.parser.manifest import ManifestLoader from dbt.parser.manifest import ManifestLoader
manifest = ManifestLoader.load_macros( manifest = ManifestLoader.load_macros(
self.config, self.connections.set_query_header, base_macros_only=base_macros_only self.config,
self.connections.set_query_header,
base_macros_only=base_macros_only,
) )
# TODO CT-211 # TODO CT-211
self._macro_manifest_lazy = manifest # type: ignore[assignment] self._macro_manifest_lazy = manifest # type: ignore[assignment]
@@ -303,7 +341,11 @@ class BaseAdapter(metaclass=AdapterMeta):
if (database, schema) not in self.cache: if (database, schema) not in self.cache:
fire_event( fire_event(
CacheMiss(conn_name=self.nice_connection_name(), database=database, schema=schema) CacheMiss(
conn_name=self.nice_connection_name(),
database=cast_to_str(database),
schema=schema,
)
) )
return False return False
else: else:
@@ -381,7 +423,10 @@ class BaseAdapter(metaclass=AdapterMeta):
self.cache.update_schemas(cache_update) self.cache.update_schemas(cache_update)
def set_relations_cache( def set_relations_cache(
self, manifest: Manifest, clear: bool = False, required_schemas: Set[BaseRelation] = None self,
manifest: Manifest,
clear: bool = False,
required_schemas: Set[BaseRelation] = None,
) -> None: ) -> None:
"""Run a query that gets a populated cache of the relations in the """Run a query that gets a populated cache of the relations in the
database and set the cache on this adapter. database and set the cache on this adapter.
@@ -536,7 +581,7 @@ class BaseAdapter(metaclass=AdapterMeta):
:rtype: List[self.Relation] :rtype: List[self.Relation]
""" """
raise NotImplementedException( raise NotImplementedException(
"`list_relations_without_caching` is not implemented for this " "adapter!" "`list_relations_without_caching` is not implemented for this adapter!"
) )
### ###
@@ -670,7 +715,10 @@ class BaseAdapter(metaclass=AdapterMeta):
return self.cache.get_relations(database, schema) return self.cache.get_relations(database, schema)
schema_relation = self.Relation.create( schema_relation = self.Relation.create(
database=database, schema=schema, identifier="", quote_policy=self.config.quoting database=database,
schema=schema,
identifier="",
quote_policy=self.config.quoting,
).without_identifier() ).without_identifier()
# we can't build the relations cache because we don't have a # we can't build the relations cache because we don't have a
@@ -678,7 +726,9 @@ class BaseAdapter(metaclass=AdapterMeta):
relations = self.list_relations_without_caching(schema_relation) relations = self.list_relations_without_caching(schema_relation)
fire_event( fire_event(
ListRelations( ListRelations(
database=database, schema=schema, relations=[_make_key(x) for x in relations] database=cast_to_str(database),
schema=schema,
relations=[_make_ref_key_msg(x) for x in relations],
) )
) )
@@ -1162,6 +1212,74 @@ class BaseAdapter(metaclass=AdapterMeta):
return sql return sql
@property
def python_submission_helpers(self) -> Dict[str, Type[PythonJobHelper]]:
raise NotImplementedError("python_submission_helpers is not specified")
@property
def default_python_submission_method(self) -> str:
raise NotImplementedError("default_python_submission_method is not specified")
@log_code_execution
def submit_python_job(self, parsed_model: dict, compiled_code: str) -> AdapterResponse:
submission_method = parsed_model["config"].get(
"submission_method", self.default_python_submission_method
)
if submission_method not in self.python_submission_helpers:
raise NotImplementedError(
"Submission method {} is not supported for current adapter".format(
submission_method
)
)
job_helper = self.python_submission_helpers[submission_method](
parsed_model, self.connections.profile.credentials
)
submission_result = job_helper.submit(compiled_code)
# process submission result to generate adapter response
return self.generate_python_submission_response(submission_result)
def generate_python_submission_response(self, submission_result: Any) -> AdapterResponse:
raise NotImplementedException(
"Your adapter need to implement generate_python_submission_response"
)
def valid_incremental_strategies(self):
"""The set of standard builtin strategies which this adapter supports out-of-the-box.
Not used to validate custom strategies defined by end users.
"""
return ["append"]
def builtin_incremental_strategies(self):
return ["append", "delete+insert", "merge", "insert_overwrite"]
@available.parse_none
def get_incremental_strategy_macro(self, model_context, strategy: str):
# Construct macro_name from strategy name
if strategy is None:
strategy = "default"
# validate strategies for this adapter
valid_strategies = self.valid_incremental_strategies()
valid_strategies.append("default")
builtin_strategies = self.builtin_incremental_strategies()
if strategy in builtin_strategies and strategy not in valid_strategies:
raise RuntimeException(
f"The incremental strategy '{strategy}' is not valid for this adapter"
)
strategy = strategy.replace("+", "_")
macro_name = f"get_incremental_{strategy}_sql"
# The model_context should have MacroGenerator callable objects for all macros
if macro_name not in model_context:
raise RuntimeException(
'dbt could not find an incremental strategy macro with the name "{}" in {}'.format(
macro_name, self.config.project_name
)
)
# This returns a callable macro
return model_context[macro_name]
COLUMNS_EQUAL_SQL = """ COLUMNS_EQUAL_SQL = """
with diff_count as ( with diff_count as (
@@ -1209,7 +1327,7 @@ def catch_as_completed(
elif isinstance(exc, KeyboardInterrupt) or not isinstance(exc, Exception): elif isinstance(exc, KeyboardInterrupt) or not isinstance(exc, Exception):
raise exc raise exc
else: else:
warn_or_error(f"Encountered an error while generating catalog: {str(exc)}") warn_or_error(CatalogGenerationError(exc=str(exc)))
# exc is not None, derives from Exception, and isn't ctrl+c # exc is not None, derives from Exception, and isn't ctrl+c
exceptions.append(exc) exceptions.append(exc)
return merge_tables(tables), exceptions return merge_tables(tables), exceptions
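To make the new Python-model hooks above concrete, here is a hedged sketch of how an adapter might wire them together; everything except the members named in the diff (PythonJobHelper, python_submission_helpers, default_python_submission_method, generate_python_submission_response) is hypothetical.

from typing import Any, Dict, Type

from dbt.adapters.base import PythonJobHelper
from dbt.adapters.sql import SQLAdapter
from dbt.contracts.connection import AdapterResponse, Credentials


class LocalJobHelper(PythonJobHelper):
    """Hypothetical helper that would run compiled Python models locally."""

    def __init__(self, parsed_model: Dict, credential: Credentials) -> None:
        self.parsed_model = parsed_model
        self.credential = credential

    def submit(self, compiled_code: str) -> Any:
        # pretend the job ran and return a raw result for illustration
        return "OK"


class MyAdapter(SQLAdapter):  # hypothetical adapter class
    @property
    def python_submission_helpers(self) -> Dict[str, Type[PythonJobHelper]]:
        return {"local": LocalJobHelper}

    @property
    def default_python_submission_method(self) -> str:
        return "local"

    def generate_python_submission_response(self, submission_result: Any) -> AdapterResponse:
        return AdapterResponse(_message="OK")

With those pieces in place, the base submit_python_job shown above would look up the "local" helper from the model config's submission_method, call submit() on the compiled code, and wrap the raw result via generate_python_submission_response.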


@@ -1,10 +1,16 @@
import re
import threading import threading
from copy import deepcopy from copy import deepcopy
from typing import Any, Dict, Iterable, List, Optional, Set, Tuple from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
from dbt.adapters.reference_keys import _make_key, _ReferenceKey from dbt.adapters.reference_keys import (
_make_ref_key,
_make_ref_key_msg,
_make_msg_from_ref_key,
_ReferenceKey,
)
import dbt.exceptions import dbt.exceptions
from dbt.events.functions import fire_event from dbt.events.functions import fire_event, fire_event_if
from dbt.events.types import ( from dbt.events.types import (
AddLink, AddLink,
AddRelation, AddRelation,
@@ -20,8 +26,8 @@ from dbt.events.types import (
UncachedRelation, UncachedRelation,
UpdateReference, UpdateReference,
) )
import dbt.flags as flags
from dbt.utils import lowercase from dbt.utils import lowercase
from dbt.helper_types import Lazy
def dot_separated(key: _ReferenceKey) -> str: def dot_separated(key: _ReferenceKey) -> str:
@@ -81,7 +87,7 @@ class _CachedRelation:
:return _ReferenceKey: A key for this relation. :return _ReferenceKey: A key for this relation.
""" """
return _make_key(self) return _make_ref_key(self)
def add_reference(self, referrer: "_CachedRelation"): def add_reference(self, referrer: "_CachedRelation"):
"""Add a reference from referrer to self, indicating that if this node """Add a reference from referrer to self, indicating that if this node
@@ -293,13 +299,18 @@ class RelationsCache:
:param BaseRelation dependent: The dependent model. :param BaseRelation dependent: The dependent model.
:raises InternalError: If either entry does not exist. :raises InternalError: If either entry does not exist.
""" """
ref_key = _make_key(referenced) ref_key = _make_ref_key(referenced)
dep_key = _make_key(dependent) dep_key = _make_ref_key(dependent)
if (ref_key.database, ref_key.schema) not in self: if (ref_key.database, ref_key.schema) not in self:
# if we have not cached the referenced schema at all, we must be # if we have not cached the referenced schema at all, we must be
# referring to a table outside our control. There's no need to make # referring to a table outside our control. There's no need to make
# a link - we will never drop the referenced relation during a run. # a link - we will never drop the referenced relation during a run.
fire_event(UncachedRelation(dep_key=dep_key, ref_key=ref_key)) fire_event(
UncachedRelation(
dep_key=_make_msg_from_ref_key(dep_key),
ref_key=_make_msg_from_ref_key(ref_key),
)
)
return return
if ref_key not in self.relations: if ref_key not in self.relations:
# Insert a dummy "external" relation. # Insert a dummy "external" relation.
@@ -309,7 +320,11 @@ class RelationsCache:
# Insert a dummy "external" relation. # Insert a dummy "external" relation.
dependent = dependent.replace(type=referenced.External) dependent = dependent.replace(type=referenced.External)
self.add(dependent) self.add(dependent)
fire_event(AddLink(dep_key=dep_key, ref_key=ref_key)) fire_event(
AddLink(
dep_key=_make_msg_from_ref_key(dep_key), ref_key=_make_msg_from_ref_key(ref_key)
)
)
with self.lock: with self.lock:
self._add_link(ref_key, dep_key) self._add_link(ref_key, dep_key)
@@ -320,12 +335,12 @@ class RelationsCache:
:param BaseRelation relation: The underlying relation. :param BaseRelation relation: The underlying relation.
""" """
cached = _CachedRelation(relation) cached = _CachedRelation(relation)
fire_event(AddRelation(relation=_make_key(cached))) fire_event(AddRelation(relation=_make_ref_key_msg(cached)))
fire_event(DumpBeforeAddGraph(dump=Lazy.defer(lambda: self.dump_graph()))) fire_event_if(flags.LOG_CACHE_EVENTS, lambda: DumpBeforeAddGraph(dump=self.dump_graph()))
with self.lock: with self.lock:
self._setdefault(cached) self._setdefault(cached)
fire_event(DumpAfterAddGraph(dump=Lazy.defer(lambda: self.dump_graph()))) fire_event_if(flags.LOG_CACHE_EVENTS, lambda: DumpAfterAddGraph(dump=self.dump_graph()))
def _remove_refs(self, keys): def _remove_refs(self, keys):
"""Removes all references to all entries in keys. This does not """Removes all references to all entries in keys. This does not
@@ -340,19 +355,6 @@ class RelationsCache:
for cached in self.relations.values(): for cached in self.relations.values():
cached.release_references(keys) cached.release_references(keys)
def _drop_cascade_relation(self, dropped_key):
"""Drop the given relation and cascade it appropriately to all
dependent relations.
:param _CachedRelation dropped: An existing _CachedRelation to drop.
"""
if dropped_key not in self.relations:
fire_event(DropMissingRelation(relation=dropped_key))
return
consequences = self.relations[dropped_key].collect_consequences()
fire_event(DropCascade(dropped=dropped_key, consequences=consequences))
self._remove_refs(consequences)
def drop(self, relation): def drop(self, relation):
"""Drop the named relation and cascade it appropriately to all """Drop the named relation and cascade it appropriately to all
dependent relations. dependent relations.
@@ -364,10 +366,19 @@ class RelationsCache:
:param str schema: The schema of the relation to drop. :param str schema: The schema of the relation to drop.
:param str identifier: The identifier of the relation to drop. :param str identifier: The identifier of the relation to drop.
""" """
dropped_key = _make_key(relation) dropped_key = _make_ref_key(relation)
fire_event(DropRelation(dropped=dropped_key)) dropped_key_msg = _make_ref_key_msg(relation)
fire_event(DropRelation(dropped=dropped_key_msg))
with self.lock: with self.lock:
self._drop_cascade_relation(dropped_key) if dropped_key not in self.relations:
fire_event(DropMissingRelation(relation=dropped_key_msg))
return
consequences = self.relations[dropped_key].collect_consequences()
# convert from a list of _ReferenceKeys to a list of ReferenceKeyMsgs
consequence_msgs = [_make_msg_from_ref_key(key) for key in consequences]
fire_event(DropCascade(dropped=dropped_key_msg, consequences=consequence_msgs))
self._remove_refs(consequences)
def _rename_relation(self, old_key, new_relation): def _rename_relation(self, old_key, new_relation):
"""Rename a relation named old_key to new_key, updating references. """Rename a relation named old_key to new_key, updating references.
@@ -383,13 +394,17 @@ class RelationsCache:
relation = self.relations.pop(old_key) relation = self.relations.pop(old_key)
new_key = new_relation.key() new_key = new_relation.key()
# relaton has to rename its innards, so it needs the _CachedRelation. # relation has to rename its innards, so it needs the _CachedRelation.
relation.rename(new_relation) relation.rename(new_relation)
# update all the relations that refer to it # update all the relations that refer to it
for cached in self.relations.values(): for cached in self.relations.values():
if cached.is_referenced_by(old_key): if cached.is_referenced_by(old_key):
fire_event( fire_event(
UpdateReference(old_key=old_key, new_key=new_key, cached_key=cached.key()) UpdateReference(
old_key=_make_ref_key_msg(old_key),
new_key=_make_ref_key_msg(new_key),
cached_key=_make_ref_key_msg(cached.key()),
)
) )
cached.rename_key(old_key, new_key) cached.rename_key(old_key, new_key)
@@ -413,14 +428,29 @@ class RelationsCache:
:raises InternalError: If the new key is already present. :raises InternalError: If the new key is already present.
""" """
if new_key in self.relations: if new_key in self.relations:
# Tell user when collision caused by model names truncated during
# materialization.
match = re.search("__dbt_backup|__dbt_tmp$", new_key.identifier)
if match:
truncated_model_name_prefix = new_key.identifier[: match.start()]
message_addendum = (
"\n\nName collisions can occur when the length of two "
"models' names approach your database's builtin limit. "
"Try restructuring your project such that no two models "
"share the prefix '{}'.".format(truncated_model_name_prefix)
+ " Then, clean your warehouse of any removed models."
)
else:
message_addendum = ""
dbt.exceptions.raise_cache_inconsistent( dbt.exceptions.raise_cache_inconsistent(
"in rename, new key {} already in cache: {}".format( "in rename, new key {} already in cache: {}{}".format(
new_key, list(self.relations.keys()) new_key, list(self.relations.keys()), message_addendum
) )
) )
if old_key not in self.relations: if old_key not in self.relations:
fire_event(TemporaryRelation(key=old_key)) fire_event(TemporaryRelation(key=_make_msg_from_ref_key(old_key)))
return False return False
return True return True
@@ -436,11 +466,17 @@ class RelationsCache:
:param BaseRelation new: The new relation name information. :param BaseRelation new: The new relation name information.
:raises InternalError: If the new key is already present. :raises InternalError: If the new key is already present.
""" """
old_key = _make_key(old) old_key = _make_ref_key(old)
new_key = _make_key(new) new_key = _make_ref_key(new)
fire_event(RenameSchema(old_key=old_key, new_key=new_key)) fire_event(
RenameSchema(
old_key=_make_msg_from_ref_key(old_key), new_key=_make_msg_from_ref_key(new)
)
)
fire_event(DumpBeforeRenameSchema(dump=Lazy.defer(lambda: self.dump_graph()))) fire_event_if(
flags.LOG_CACHE_EVENTS, lambda: DumpBeforeRenameSchema(dump=self.dump_graph())
)
with self.lock: with self.lock:
if self._check_rename_constraints(old_key, new_key): if self._check_rename_constraints(old_key, new_key):
@@ -448,7 +484,9 @@ class RelationsCache:
else: else:
self._setdefault(_CachedRelation(new)) self._setdefault(_CachedRelation(new))
fire_event(DumpAfterRenameSchema(dump=Lazy.defer(lambda: self.dump_graph()))) fire_event_if(
flags.LOG_CACHE_EVENTS, lambda: DumpAfterRenameSchema(dump=self.dump_graph())
)
def get_relations(self, database: Optional[str], schema: Optional[str]) -> List[Any]: def get_relations(self, database: Optional[str], schema: Optional[str]) -> List[Any]:
"""Case-insensitively yield all relations matching the given schema. """Case-insensitively yield all relations matching the given schema.
@@ -496,6 +534,6 @@ class RelationsCache:
""" """
for relation in to_remove: for relation in to_remove:
# it may have been cascaded out already # it may have been cascaded out already
drop_key = _make_key(relation) drop_key = _make_ref_key(relation)
if drop_key in self.relations: if drop_key in self.relations:
self.drop(drop_key) self.drop(drop_key)
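The switch from Lazy.defer to fire_event_if plus a lambda keeps the expensive dump_graph() call from running unless LOG_CACHE_EVENTS is set. Below is a standalone illustration of that deferral pattern (not dbt code; the guard below is a stand-in with the same shape as the real fire_event_if).

def fire_event_if(conditional, lazy_event):
    # only build (and fire) the event payload when the flag is enabled
    if conditional:
        print("firing:", lazy_event())


def expensive_dump():
    print("building graph dump...")  # should only happen when the flag is on
    return {"relations": 12}


fire_event_if(False, lambda: expensive_dump())  # dump is never built
fire_event_if(True, lambda: expensive_dump())   # dump is built, then fired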


@@ -1,23 +1,18 @@
import threading import threading
from pathlib import Path import traceback
from contextlib import contextmanager
from importlib import import_module from importlib import import_module
from typing import Type, Dict, Any, List, Optional, Set from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Type
from dbt.exceptions import RuntimeException, InternalException from dbt.adapters.base.plugin import AdapterPlugin
from dbt.include.global_project import ( from dbt.adapters.protocol import AdapterConfig, AdapterProtocol, RelationProtocol
PACKAGE_PATH as GLOBAL_PROJECT_PATH, from dbt.contracts.connection import AdapterRequiredConfig, Credentials
PROJECT_NAME as GLOBAL_PROJECT_NAME,
)
from dbt.events.functions import fire_event from dbt.events.functions import fire_event
from dbt.events.types import AdapterImportError, PluginLoadError from dbt.events.types import AdapterImportError, PluginLoadError
from dbt.contracts.connection import Credentials, AdapterRequiredConfig from dbt.exceptions import InternalException, RuntimeException
from dbt.adapters.protocol import ( from dbt.include.global_project import PACKAGE_PATH as GLOBAL_PROJECT_PATH
AdapterProtocol, from dbt.include.global_project import PROJECT_NAME as GLOBAL_PROJECT_NAME
AdapterConfig,
RelationProtocol,
)
from dbt.adapters.base.plugin import AdapterPlugin
Adapter = AdapterProtocol Adapter = AdapterProtocol
@@ -64,12 +59,12 @@ class AdapterContainer:
# if we failed to import the target module in particular, inform # if we failed to import the target module in particular, inform
# the user about it via a runtime error # the user about it via a runtime error
if exc.name == "dbt.adapters." + name: if exc.name == "dbt.adapters." + name:
fire_event(AdapterImportError(exc=exc)) fire_event(AdapterImportError(exc=str(exc)))
raise RuntimeException(f"Could not find adapter type {name}!") raise RuntimeException(f"Could not find adapter type {name}!")
# otherwise, the error had to have come from some underlying # otherwise, the error had to have come from some underlying
# library. Log the stack trace. # library. Log the stack trace.
fire_event(PluginLoadError()) fire_event(PluginLoadError(exc_info=traceback.format_exc()))
raise raise
plugin: AdapterPlugin = mod.Plugin plugin: AdapterPlugin = mod.Plugin
plugin_type = plugin.adapter.type() plugin_type = plugin.adapter.type()
@@ -217,3 +212,12 @@ def get_adapter_package_names(name: Optional[str]) -> List[str]:
def get_adapter_type_names(name: Optional[str]) -> List[str]: def get_adapter_type_names(name: Optional[str]) -> List[str]:
return FACTORY.get_adapter_type_names(name) return FACTORY.get_adapter_type_names(name)
@contextmanager
def adapter_management():
reset_adapters()
try:
yield
finally:
cleanup_connections()
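A hedged usage sketch of the adapter_management() context manager added above, along the lines of how the new CLI consumes it; the register_adapter/get_adapter pairing and the runtime_config argument are illustrative of typical factory usage rather than a prescribed flow.

from dbt.adapters.factory import adapter_management, get_adapter, register_adapter


def run_with_adapter(runtime_config):
    # runtime_config: an already-loaded RuntimeConfig (assumed)
    with adapter_management():
        register_adapter(runtime_config)   # loads the plugin for the configured credentials
        adapter = get_adapter(runtime_config)
        ...  # use the adapter; connections are cleaned up when the block exits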


@@ -88,7 +88,7 @@ class AdapterProtocol( # type: ignore[misc]
], ],
): ):
# N.B. Technically these are ClassVars, but mypy doesn't support putting type vars in a # N.B. Technically these are ClassVars, but mypy doesn't support putting type vars in a
# ClassVar due to the restirctiveness of PEP-526 # ClassVar due to the restrictiveness of PEP-526
# See: https://github.com/python/mypy/issues/5144 # See: https://github.com/python/mypy/issues/5144
AdapterSpecificConfigs: Type[AdapterConfig_T] AdapterSpecificConfigs: Type[AdapterConfig_T]
Column: Type[Column_T] Column: Type[Column_T]


@@ -2,6 +2,7 @@
from collections import namedtuple from collections import namedtuple
from typing import Any, Optional from typing import Any, Optional
from dbt.events.proto_types import ReferenceKeyMsg
_ReferenceKey = namedtuple("_ReferenceKey", "database schema identifier") _ReferenceKey = namedtuple("_ReferenceKey", "database schema identifier")
@@ -14,7 +15,12 @@ def lowercase(value: Optional[str]) -> Optional[str]:
return value.lower() return value.lower()
# For backwards compatibility. New code should use _make_ref_key
def _make_key(relation: Any) -> _ReferenceKey: def _make_key(relation: Any) -> _ReferenceKey:
return _make_ref_key(relation)
def _make_ref_key(relation: Any) -> _ReferenceKey:
"""Make _ReferenceKeys with lowercase values for the cache so we don't have """Make _ReferenceKeys with lowercase values for the cache so we don't have
to keep track of quoting to keep track of quoting
""" """
@@ -22,3 +28,13 @@ def _make_key(relation: Any) -> _ReferenceKey:
return _ReferenceKey( return _ReferenceKey(
lowercase(relation.database), lowercase(relation.schema), lowercase(relation.identifier) lowercase(relation.database), lowercase(relation.schema), lowercase(relation.identifier)
) )
def _make_ref_key_msg(relation: Any):
return _make_msg_from_ref_key(_make_ref_key(relation))
def _make_msg_from_ref_key(ref_key: _ReferenceKey) -> ReferenceKeyMsg:
return ReferenceKeyMsg(
database=ref_key.database, schema=ref_key.schema, identifier=ref_key.identifier
)
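A small hedged example of the helper chain above; FakeRelation is a hypothetical stand-in for any object exposing database/schema/identifier attributes.

from collections import namedtuple

from dbt.adapters.reference_keys import _make_ref_key, _make_ref_key_msg

FakeRelation = namedtuple("FakeRelation", "database schema identifier")
rel = FakeRelation("Analytics", "Jaffle_Shop", "Customers")

print(_make_ref_key(rel))      # _ReferenceKey with the values lowercased for cache lookups
print(_make_ref_key_msg(rel))  # the same key converted to a ReferenceKeyMsg proto for events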


@@ -10,6 +10,7 @@ from dbt.adapters.base import BaseConnectionManager
from dbt.contracts.connection import Connection, ConnectionState, AdapterResponse from dbt.contracts.connection import Connection, ConnectionState, AdapterResponse
from dbt.events.functions import fire_event from dbt.events.functions import fire_event
from dbt.events.types import ConnectionUsed, SQLQuery, SQLCommit, SQLQueryStatus from dbt.events.types import ConnectionUsed, SQLQuery, SQLCommit, SQLQueryStatus
from dbt.utils import cast_to_str
class SQLConnectionManager(BaseConnectionManager): class SQLConnectionManager(BaseConnectionManager):
@@ -55,7 +56,7 @@ class SQLConnectionManager(BaseConnectionManager):
connection = self.get_thread_connection() connection = self.get_thread_connection()
if auto_begin and connection.transaction_open is False: if auto_begin and connection.transaction_open is False:
self.begin() self.begin()
fire_event(ConnectionUsed(conn_type=self.TYPE, conn_name=connection.name)) fire_event(ConnectionUsed(conn_type=self.TYPE, conn_name=cast_to_str(connection.name)))
with self.exception_handler(sql): with self.exception_handler(sql):
if abridge_sql_log: if abridge_sql_log:
@@ -63,7 +64,7 @@ class SQLConnectionManager(BaseConnectionManager):
else: else:
log_sql = sql log_sql = sql
fire_event(SQLQuery(conn_name=connection.name, sql=log_sql)) fire_event(SQLQuery(conn_name=cast_to_str(connection.name), sql=log_sql))
pre = time.time() pre = time.time()
cursor = connection.handle.cursor() cursor = connection.handle.cursor()


@@ -5,7 +5,7 @@ import dbt.clients.agate_helper
from dbt.contracts.connection import Connection from dbt.contracts.connection import Connection
import dbt.exceptions import dbt.exceptions
from dbt.adapters.base import BaseAdapter, available from dbt.adapters.base import BaseAdapter, available
from dbt.adapters.cache import _make_key from dbt.adapters.cache import _make_ref_key_msg
from dbt.adapters.sql import SQLConnectionManager from dbt.adapters.sql import SQLConnectionManager
from dbt.events.functions import fire_event from dbt.events.functions import fire_event
from dbt.events.types import ColTypeChange, SchemaCreation, SchemaDrop from dbt.events.types import ColTypeChange, SchemaCreation, SchemaDrop
@@ -110,7 +110,7 @@ class SQLAdapter(BaseAdapter):
ColTypeChange( ColTypeChange(
orig_type=target_column.data_type, orig_type=target_column.data_type,
new_type=new_type, new_type=new_type,
table=_make_key(current), table=_make_ref_key_msg(current),
) )
) )
@@ -155,7 +155,7 @@ class SQLAdapter(BaseAdapter):
def create_schema(self, relation: BaseRelation) -> None: def create_schema(self, relation: BaseRelation) -> None:
relation = relation.without_identifier() relation = relation.without_identifier()
fire_event(SchemaCreation(relation=_make_key(relation))) fire_event(SchemaCreation(relation=_make_ref_key_msg(relation)))
kwargs = { kwargs = {
"relation": relation, "relation": relation,
} }
@@ -166,7 +166,7 @@ class SQLAdapter(BaseAdapter):
def drop_schema(self, relation: BaseRelation) -> None: def drop_schema(self, relation: BaseRelation) -> None:
relation = relation.without_identifier() relation = relation.without_identifier()
fire_event(SchemaDrop(relation=_make_key(relation))) fire_event(SchemaDrop(relation=_make_ref_key_msg(relation)))
kwargs = { kwargs = {
"relation": relation, "relation": relation,
} }

core/dbt/cli/README.md (new file, 1 line)

@@ -0,0 +1 @@
TODO

core/dbt/cli/flags.py (new file, 44 lines)

@@ -0,0 +1,44 @@
# TODO Move this to /core/dbt/flags.py when we're ready to break things
import os
from dataclasses import dataclass
from multiprocessing import get_context
from pprint import pformat as pf
from click import get_current_context
if os.name != "nt":
# https://bugs.python.org/issue41567
import multiprocessing.popen_spawn_posix # type: ignore # noqa: F401
@dataclass(frozen=True)
class Flags:
def __init__(self, ctx=None) -> None:
if ctx is None:
ctx = get_current_context()
def assign_params(ctx):
"""Recursively adds all click params to flag object"""
for param_name, param_value in ctx.params.items():
# N.B. You have to use the base MRO method (object.__setattr__) to set attributes
# when using frozen dataclasses.
# https://docs.python.org/3/library/dataclasses.html#frozen-instances
if hasattr(self, param_name):
raise Exception(f"Duplicate flag names found in click command: {param_name}")
object.__setattr__(self, param_name.upper(), param_value)
if ctx.parent:
assign_params(ctx.parent)
assign_params(ctx)
# Hard coded flags
object.__setattr__(self, "WHICH", ctx.info_name)
object.__setattr__(self, "MP_CONTEXT", get_context("spawn"))
# Support the console DO NOT TRACK initiative
if os.getenv("DO_NOT_TRACK", "").lower() in ("1", "t", "true", "y", "yes"):
object.__setattr__(self, "ANONYMOUS_USAGE_STATS", False)
def __str__(self) -> str:
return str(pf(self.__dict__))
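For context, a minimal sketch (not part of dbt) of how this Flags object behaves inside a click command; the command and option below are made up for illustration:

```python
import click
from dbt.cli.flags import Flags

@click.command()
@click.option("--fail-fast/--no-fail-fast", default=False)
def demo(**kwargs):
    flags = Flags()           # reads params from the current click context
    print(flags.FAIL_FAST)    # params are exposed as UPPERCASE attributes -> True
    print(flags.WHICH)        # the invoked command name -> "demo"

if __name__ == "__main__":
    demo(["--fail-fast"], prog_name="demo", standalone_mode=False)
```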

core/dbt/cli/main.py (new file, 412 lines)

@@ -0,0 +1,412 @@
import inspect # This is temporary for RAT-ing
from copy import copy
from pprint import pformat as pf # This is temporary for RAT-ing
import click
from dbt.adapters.factory import adapter_management
from dbt.cli import params as p
from dbt.cli.flags import Flags
from dbt.profiler import profiler
def cli_runner():
# Alias "list" to "ls"
ls = copy(cli.commands["list"])
ls.hidden = True
cli.add_command(ls, "ls")
# Run the cli
cli()
# dbt
@click.group(
context_settings={"help_option_names": ["-h", "--help"]},
invoke_without_command=True,
no_args_is_help=True,
epilog="Specify one of these sub-commands and you can find more help from there.",
)
@click.pass_context
@p.anonymous_usage_stats
@p.cache_selected_only
@p.debug
@p.enable_legacy_logger
@p.event_buffer_size
@p.fail_fast
@p.log_cache_events
@p.log_format
@p.macro_debugging
@p.partial_parse
@p.print
@p.printer_width
@p.quiet
@p.record_timing_info
@p.static_parser
@p.use_colors
@p.use_experimental_parser
@p.version
@p.version_check
@p.warn_error
@p.write_json
def cli(ctx, **kwargs):
"""An ELT tool for managing your SQL transformations and data models.
For more documentation on these commands, visit: docs.getdbt.com
"""
incomplete_flags = Flags()
# Profiling
if incomplete_flags.RECORD_TIMING_INFO:
ctx.with_resource(profiler(enable=True, outfile=incomplete_flags.RECORD_TIMING_INFO))
# Adapter management
ctx.with_resource(adapter_management())
# Version info
if incomplete_flags.VERSION:
click.echo(f"`version` called\n ctx.params: {pf(ctx.params)}")
return
else:
del ctx.params["version"]
# dbt build
@cli.command("build")
@click.pass_context
@p.defer
@p.exclude
@p.fail_fast
@p.full_refresh
@p.indirect_selection
@p.log_path
@p.models
@p.profile
@p.profiles_dir
@p.project_dir
@p.selector
@p.show
@p.state
@p.store_failures
@p.target
@p.target_path
@p.threads
@p.vars
@p.version_check
def build(ctx, **kwargs):
"""Run all Seeds, Models, Snapshots, and tests in DAG order"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt clean
@cli.command("clean")
@click.pass_context
@p.profile
@p.profiles_dir
@p.project_dir
@p.target
@p.vars
def clean(ctx, **kwargs):
"""Delete all folders in the clean-targets list (usually the dbt_packages and target directories.)"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt docs
@cli.group()
@click.pass_context
def docs(ctx, **kwargs):
"""Generate or serve the documentation website for your project"""
# dbt docs generate
@docs.command("generate")
@click.pass_context
@p.compile_docs
@p.defer
@p.exclude
@p.log_path
@p.models
@p.profile
@p.profiles_dir
@p.project_dir
@p.selector
@p.state
@p.target
@p.target_path
@p.threads
@p.vars
@p.version_check
def docs_generate(ctx, **kwargs):
"""Generate the documentation website for your project"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt docs serve
@docs.command("serve")
@click.pass_context
@p.browser
@p.port
@p.profile
@p.profiles_dir
@p.project_dir
@p.target
@p.vars
def docs_serve(ctx, **kwargs):
"""Serve the documentation website for your project"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt compile
@cli.command("compile")
@click.pass_context
@p.defer
@p.exclude
@p.full_refresh
@p.log_path
@p.models
@p.parse_only
@p.profile
@p.profiles_dir
@p.project_dir
@p.selector
@p.state
@p.target
@p.target_path
@p.threads
@p.vars
@p.version_check
def compile(ctx, **kwargs):
"""Generates executable SQL from source, model, test, and analysis files. Compiled SQL files are written to the target/ directory."""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt debug
@cli.command("debug")
@click.pass_context
@p.config_dir
@p.profile
@p.profiles_dir
@p.project_dir
@p.target
@p.vars
@p.version_check
def debug(ctx, **kwargs):
"""Show some helpful information about dbt for debugging. Not to be confused with the --debug option which increases verbosity."""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt deps
@cli.command("deps")
@click.pass_context
@p.profile
@p.profiles_dir
@p.project_dir
@p.target
@p.vars
def deps(ctx, **kwargs):
"""Pull the most recent version of the dependencies listed in packages.yml"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt init
@cli.command("init")
@click.pass_context
@p.profile
@p.profiles_dir
@p.project_dir
@p.skip_profile_setup
@p.target
@p.vars
def init(ctx, **kwargs):
"""Initialize a new DBT project."""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt list
@cli.command("list")
@click.pass_context
@p.exclude
@p.indirect_selection
@p.models
@p.output
@p.output_keys
@p.profile
@p.profiles_dir
@p.project_dir
@p.resource_type
@p.selector
@p.state
@p.target
@p.vars
def list(ctx, **kwargs):
"""List the resources in your project"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt parse
@cli.command("parse")
@click.pass_context
@p.compile_parse
@p.log_path
@p.profile
@p.profiles_dir
@p.project_dir
@p.target
@p.target_path
@p.threads
@p.vars
@p.version_check
@p.write_manifest
def parse(ctx, **kwargs):
"""Parses the project and provides information on performance"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt run
@cli.command("run")
@click.pass_context
@p.defer
@p.exclude
@p.fail_fast
@p.full_refresh
@p.log_path
@p.models
@p.profile
@p.profiles_dir
@p.project_dir
@p.selector
@p.state
@p.target
@p.target_path
@p.threads
@p.vars
@p.version_check
def run(ctx, **kwargs):
"""Compile SQL and execute against the current target database."""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt run operation
@cli.command("run-operation")
@click.pass_context
@p.args
@p.profile
@p.profiles_dir
@p.project_dir
@p.target
@p.vars
def run_operation(ctx, **kwargs):
"""Run the named macro with any supplied arguments."""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt seed
@cli.command("seed")
@click.pass_context
@p.exclude
@p.full_refresh
@p.log_path
@p.models
@p.profile
@p.profiles_dir
@p.project_dir
@p.selector
@p.show
@p.state
@p.target
@p.target_path
@p.threads
@p.vars
@p.version_check
def seed(ctx, **kwargs):
"""Load data from csv files into your data warehouse."""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt snapshot
@cli.command("snapshot")
@click.pass_context
@p.defer
@p.exclude
@p.models
@p.profile
@p.profiles_dir
@p.project_dir
@p.selector
@p.state
@p.target
@p.threads
@p.vars
def snapshot(ctx, **kwargs):
"""Execute snapshots defined in your project"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt source
@cli.group()
@click.pass_context
def source(ctx, **kwargs):
"""Manage your project's sources"""
# dbt source freshness
@source.command("freshness")
@click.pass_context
@p.exclude
@p.models
@p.output_path # TODO: Is this ok to re-use? We have three different output params, how much can we consolidate?
@p.profile
@p.profiles_dir
@p.project_dir
@p.selector
@p.state
@p.target
@p.threads
@p.vars
def freshness(ctx, **kwargs):
"""Snapshots the current freshness of the project's sources"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# dbt test
@cli.command("test")
@click.pass_context
@p.defer
@p.exclude
@p.fail_fast
@p.indirect_selection
@p.log_path
@p.models
@p.profile
@p.profiles_dir
@p.project_dir
@p.selector
@p.state
@p.store_failures
@p.target
@p.target_path
@p.threads
@p.vars
@p.version_check
def test(ctx, **kwargs):
"""Runs tests on data in deployed models. Run this after `dbt run`"""
flags = Flags()
click.echo(f"`{inspect.stack()[0][3]}` called\n flags: {flags}")
# Support running as a module
if __name__ == "__main__":
cli_runner()
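For experimentation, the group can also be exercised with click's test runner. This is a rough sketch only: the flag values are arbitrary, the output is just the placeholder echo from the commands above, and the Path(exists=True) defaults (e.g. --profiles-dir) assume the default directories exist on the machine running it.

```python
from click.testing import CliRunner
from dbt.cli.main import cli

runner = CliRunner()
result = runner.invoke(cli, ["run", "--threads", "2"])
print(result.exit_code)   # 0 on success
print(result.output)      # the placeholder echo from the `run` command above
```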


@@ -0,0 +1,33 @@
from click import ParamType
import yaml
class YAML(ParamType):
"""The Click YAML type. Converts YAML strings into objects."""
name = "YAML"
def convert(self, value, param, ctx):
# assume non-string values are a problem
if not isinstance(value, str):
self.fail(f"Cannot load YAML from type {type(value)}", param, ctx)
try:
return yaml.load(value, Loader=yaml.Loader)
except yaml.parser.ParserError:
self.fail(f"String '{value}' is not valid YAML", param, ctx)
class Truthy(ParamType):
"""The Click Truthy type. Converts strings into a "truthy" type"""
name = "TRUTHY"
def convert(self, value, param, ctx):
# assume non-string / non-None values are a problem
if not isinstance(value, (str, type(None))):
self.fail(f"Cannot load TRUTHY from type {type(value)}", param, ctx)
if value is None or value.lower() in ("0", "false", "f"):
return None
else:
return value
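A quick sketch of how the YAML param type is meant to be used on an option; the command below is hypothetical, created only to show the conversion:

```python
import click
from dbt.cli.option_types import YAML

@click.command()
@click.option("--vars", "cli_vars", type=YAML(), default="{}")
def show_vars(cli_vars):
    click.echo(f"parsed vars: {cli_vars!r}")

if __name__ == "__main__":
    # prints: parsed vars: {'my_variable': 'my_value'}
    show_vars(["--vars", "{my_variable: my_value}"], standalone_mode=False)
```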

core/dbt/cli/params.py (new file, 386 lines)

@@ -0,0 +1,386 @@
from pathlib import Path, PurePath
import click
from dbt.cli.option_types import YAML
from dbt.cli.resolvers import default_project_dir, default_profiles_dir
# TODO: The name (reflected in flags) is a correction!
# The original name was `SEND_ANONYMOUS_USAGE_STATS` and used an env var called "DBT_SEND_ANONYMOUS_USAGE_STATS"
# Both of which break existing naming conventions (doesn't match param flag).
# This will need to be fixed before use in the main codebase and communicated as a change to the community!
anonymous_usage_stats = click.option(
"--anonymous-usage-stats/--no-anonymous-usage-stats",
envvar="DBT_ANONYMOUS_USAGE_STATS",
help="Send anonymous usage stats to dbt Labs.",
default=True,
)
args = click.option(
"--args",
envvar=None,
help="Supply arguments to the macro. This dictionary will be mapped to the keyword arguments defined in the selected macro. This argument should be a YAML string, eg. '{my_variable: my_value}'",
type=YAML(),
)
browser = click.option(
"--browser/--no-browser",
envvar=None,
help="Wether or not to open a local web browser after starting the server",
default=True,
)
cache_selected_only = click.option(
"--cache-selected-only/--no-cache-selected-only",
envvar="DBT_CACHE_SELECTED_ONLY",
help="Pre cache database objects relevant to selected resource only.",
)
compile_docs = click.option(
"--compile/--no-compile",
envvar=None,
help="Wether or not to run 'dbt compile' as part of docs generation",
default=True,
)
compile_parse = click.option(
"--compile/--no-compile",
envvar=None,
help="TODO: No help text currently available",
default=True,
)
config_dir = click.option(
"--config-dir",
envvar=None,
help="If specified, DBT will show path information for this project",
type=click.STRING,
)
debug = click.option(
"--debug/--no-debug",
"-d/ ",
envvar="DBT_DEBUG",
help="Display debug logging during dbt execution. Useful for debugging and making bug reports.",
)
# TODO: The env var and name (reflected in flags) are corrections!
# The original name was `DEFER_MODE` and used an env var called "DBT_DEFER_TO_STATE"
# Both of which break existing naming conventions.
# This will need to be fixed before use in the main codebase and communicated as a change to the community!
defer = click.option(
"--defer/--no-defer",
envvar="DBT_DEFER",
help="If set, defer to the state variable for resolving unselected nodes.",
)
enable_legacy_logger = click.option(
"--enable-legacy-logger/--no-enable-legacy-logger",
envvar="DBT_ENABLE_LEGACY_LOGGER",
hidden=True,
)
event_buffer_size = click.option(
"--event-buffer-size",
envvar="DBT_EVENT_BUFFER_SIZE",
help="Sets the max number of events to buffer in EVENT_HISTORY.",
default=100000,
type=click.INT,
)
exclude = click.option("--exclude", envvar=None, help="Specify the nodes to exclude.")
fail_fast = click.option(
"--fail-fast/--no-fail-fast",
"-x/ ",
envvar="DBT_FAIL_FAST",
help="Stop execution on first failure.",
)
full_refresh = click.option(
"--full-refresh",
"-f",
envvar="DBT_FULL_REFRESH",
help="If specified, dbt will drop incremental models and fully-recalculate the incremental table from the model definition.",
is_flag=True,
)
indirect_selection = click.option(
"--indirect-selection",
envvar="DBT_INDIRECT_SELECTION",
help="Select all tests that are adjacent to selected resources, even if they those resources have been explicitly selected.",
type=click.Choice(["eager", "cautious"], case_sensitive=False),
default="eager",
)
log_cache_events = click.option(
"--log-cache-events/--no-log-cache-events",
help="Enable verbose adapter cache logging.",
envvar="DBT_LOG_CACHE_EVENTS",
)
log_format = click.option(
"--log-format",
envvar="DBT_LOG_FORMAT",
help="Specify the log format, overriding the command's default.",
type=click.Choice(["text", "json", "default"], case_sensitive=False),
default="default",
)
log_path = click.option(
"--log-path",
envvar="DBT_LOG_PATH",
help="Configure the 'log-path'. Only applies this setting for the current run. Overrides the 'DBT_LOG_PATH' if it is set.",
type=click.Path(),
)
macro_debugging = click.option(
"--macro-debugging/--no-macro-debugging",
envvar="DBT_MACRO_DEBUGGING",
hidden=True,
)
models = click.option(
"-m",
"-s",
"models",
envvar=None,
help="Specify the nodes to include.",
multiple=True,
)
output = click.option(
"--output",
envvar=None,
help="TODO: No current help text",
type=click.Choice(["json", "name", "path", "selector"], case_sensitive=False),
default="name",
)
output_keys = click.option(
"--output-keys", envvar=None, help="TODO: No current help text", type=click.STRING
)
output_path = click.option(
"--output",
"-o",
envvar=None,
help="Specify the output path for the json report. By default, outputs to 'target/sources.json'",
type=click.Path(file_okay=True, dir_okay=False, writable=True),
default=PurePath.joinpath(Path.cwd(), "target/sources.json"),
)
parse_only = click.option(
"--parse-only",
envvar=None,
help="TODO: No help text currently available",
is_flag=True,
)
partial_parse = click.option(
"--partial-parse/--no-partial-parse",
envvar="DBT_PARTIAL_PARSE",
help="Allow for partial parsing by looking for and writing to a pickle file in the target directory. This overrides the user configuration file.",
default=True,
)
port = click.option(
"--port",
envvar=None,
help="Specify the port number for the docs server",
default=8080,
type=click.INT,
)
# TODO: The env var and name (reflected in flags) are corrections!
# The original name was `NO_PRINT` and used the env var `DBT_NO_PRINT`.
# Both of which break existing naming conventions.
# This will need to be fixed before use in the main codebase and communicated as a change to the community!
print = click.option(
"--print/--no-print",
envvar="DBT_PRINT",
help="Output all {{ print() }} macro calls.",
default=True,
)
printer_width = click.option(
"--printer-width",
envvar="DBT_PRINTER_WIDTH",
help="Sets the width of terminal output",
type=click.INT,
default=80,
)
profile = click.option(
"--profile",
envvar=None,
help="Which profile to load. Overrides setting in dbt_project.yml.",
)
profiles_dir = click.option(
"--profiles-dir",
envvar="DBT_PROFILES_DIR",
help="Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/",
default=default_profiles_dir(),
type=click.Path(exists=True),
)
project_dir = click.option(
"--project-dir",
envvar=None,
help="Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.",
default=default_project_dir(),
type=click.Path(exists=True),
)
quiet = click.option(
"--quiet/--no-quiet",
envvar="DBT_QUIET",
help="Suppress all non-error logging to stdout. Does not affect {{ print() }} macro calls.",
)
record_timing_info = click.option(
"--record-timing-info",
"-r",
envvar=None,
help="When this option is passed, dbt will output low-level timing stats to the specified file. Example: `--record-timing-info output.profile`",
type=click.Path(exists=False),
)
resource_type = click.option(
"--resource-type",
envvar=None,
help="TODO: No current help text",
type=click.Choice(
[
"metric",
"source",
"analysis",
"model",
"test",
"exposure",
"snapshot",
"seed",
"default",
"all",
],
case_sensitive=False,
),
default="default",
)
selector = click.option(
"--selector", envvar=None, help="The selector name to use, as defined in selectors.yml"
)
show = click.option(
"--show", envvar=None, help="Show a sample of the loaded data in the terminal", is_flag=True
)
skip_profile_setup = click.option(
"--skip-profile-setup", "-s", envvar=None, help="Skip interative profile setup.", is_flag=True
)
# TODO: The env var and name (reflected in flags) are corrections!
# The original name was `ARTIFACT_STATE_PATH` and used the env var `DBT_ARTIFACT_STATE_PATH`.
# Both of which break existing naming conventions.
# This will need to be fixed before use in the main codebase and communicated as a change to the community!
state = click.option(
"--state",
envvar="DBT_STATE",
help="If set, use the given directory as the source for json files to compare with this project.",
type=click.Path(
dir_okay=True,
exists=True,
file_okay=False,
readable=True,
resolve_path=True,
),
)
static_parser = click.option(
"--static-parser/--no-static-parser",
envvar="DBT_STATIC_PARSER",
help="Use the static parser.",
default=True,
)
store_failures = click.option(
"--store-failures",
envvar="DBT_STORE_FAILURES",
help="Store test results (failing rows) in the database",
is_flag=True,
)
target = click.option(
"--target", "-t", envvar=None, help="Which target to load for the given profile"
)
target_path = click.option(
"--target-path",
envvar="DBT_TARGET_PATH",
help="Configure the 'target-path'. Only applies this setting for the current run. Overrides the 'DBT_TARGET_PATH' if it is set.",
type=click.Path(),
)
threads = click.option(
"--threads",
envvar=None,
help="Specify number of threads to use while executing models. Overrides settings in profiles.yml.",
default=1,
type=click.INT,
)
use_colors = click.option(
"--use-colors/--no-use-colors",
envvar="DBT_USE_COLORS",
help="Output is colorized by default and may also be set in a profile or at the command line.",
default=True,
)
use_experimental_parser = click.option(
"--use-experimental-parser/--no-use-experimental-parser",
envvar="DBT_USE_EXPERIMENTAL_PARSER",
help="Enable experimental parsing features.",
)
vars = click.option(
"--vars",
envvar=None,
help="Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. This argument should be a YAML string, eg. '{my_variable: my_value}'",
type=YAML(),
)
version = click.option(
"--version",
envvar=None,
help="Show version information",
is_flag=True,
)
version_check = click.option(
"--version-check/--no-version-check",
envvar="DBT_VERSION_CHECK",
help="Ensure dbt's version matches the one specified in the dbt_project.yml file ('require-dbt-version')",
default=True,
)
warn_error = click.option(
"--warn-error/--no-warn-error",
envvar="DBT_WARN_ERROR",
help="If dbt would normally warn, instead raise an exception. Examples include --models that selects nothing, deprecations, configurations with no associated models, invalid test configurations, and missing sources/refs in tests.",
)
write_json = click.option(
"--write-json/--no-write-json",
envvar="DBT_WRITE_JSON",
help="Writing the manifest and run_results.json files to disk",
default=True,
)
write_manifest = click.option(
"--write-manifest/--no-write-manifest",
envvar=None,
help="TODO: No help text currently available",
default=True,
)

core/dbt/cli/resolvers.py (new file, 11 lines)

@@ -0,0 +1,11 @@
from pathlib import Path
def default_project_dir():
paths = list(Path.cwd().parents)
paths.insert(0, Path.cwd())
return next((x for x in paths if (x / "dbt_project.yml").exists()), Path.cwd())
def default_profiles_dir():
return Path.cwd() if (Path.cwd() / "profiles.yml").exists() else Path.home() / ".dbt"


@@ -1 +1,19 @@
# Clients README
### Jinja
#### How are materializations defined
Model materializations are kept in `core/dbt/include/global_project/macros/materializations/models/`. Materializations are defined using syntax that isn't part of the Jinja standard library. These tags are referenced internally, and materializations can be overridden in user projects when users have specific needs.
```
-- Pseudocode for arguments
{% materialization <name>, <target name := one_of{default, adapter}> %}
{% endmaterialization %}
```
These blocks are referred to as Jinja extensions. Extensions are defined as part of the Jinja code accepted within a dbt project, which includes system code used internally by dbt as well as user-space (i.e. user-defined) macros. Extensions exist to help Jinja users create reusable code blocks or abstract objects; for us, materializations are a great use case since we pass these around as arguments within dbt system code.
The code that defines this extension is the `MaterializationExtension` class and its `parse` routine, which live in [clients/jinja.py](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/clients/jinja.py). The routine
enables Jinja to parse (i.e. recognize) the unique comma-separated argument structure that `materialization` tags exhibit (e.g. `view, default` or `table, adapter='snowflake'`).
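As a concrete, simplified illustration of that syntax, a user-space view materialization override looks roughly like the following; real materializations also handle hooks, grants, and relation caching, so treat this as a sketch rather than a complete override:
```
{% materialization view, default %}
    {%- set target_relation = this.incorporate(type='view') -%}
    {% call statement('main') -%}
        create or replace view {{ target_relation }} as (
            {{ sql }}
        );
    {%- endcall %}
    {{ return({'relations': [target_relation]}) }}
{% endmaterialization %}
```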


@@ -367,9 +367,9 @@ class BlockIterator:
if self.current: if self.current:
linecount = self.data[: self.current.end].count("\n") + 1 linecount = self.data[: self.current.end].count("\n") + 1
dbt.exceptions.raise_compiler_error( dbt.exceptions.raise_compiler_error(
( ("Reached EOF without finding a close tag for {} (searched from line {})").format(
"Reached EOF without finding a close tag for " "{} (searched from line {})" self.current.block_type_name, linecount
).format(self.current.block_type_name, linecount) )
) )
if collect_raw_data: if collect_raw_data:


@@ -27,6 +27,7 @@ from dbt.utils import (
from dbt.clients._jinja_blocks import BlockIterator, BlockData, BlockTag from dbt.clients._jinja_blocks import BlockIterator, BlockData, BlockTag
from dbt.contracts.graph.compiled import CompiledGenericTestNode from dbt.contracts.graph.compiled import CompiledGenericTestNode
from dbt.contracts.graph.parsed import ParsedGenericTestNode from dbt.contracts.graph.parsed import ParsedGenericTestNode
from dbt.exceptions import ( from dbt.exceptions import (
InternalException, InternalException,
raise_compiler_error, raise_compiler_error,
@@ -37,6 +38,10 @@ from dbt.exceptions import (
UndefinedMacroException, UndefinedMacroException,
) )
from dbt import flags from dbt import flags
from dbt.node_types import ModelLanguage
SUPPORTED_LANG_ARG = jinja2.nodes.Name("supported_languages", "param")
def _linecache_inject(source, write): def _linecache_inject(source, write):
@@ -161,7 +166,7 @@ def quoted_native_concat(nodes):
class NativeSandboxTemplate(jinja2.nativetypes.NativeTemplate): # mypy: ignore class NativeSandboxTemplate(jinja2.nativetypes.NativeTemplate): # mypy: ignore
environment_class = NativeSandboxEnvironment environment_class = NativeSandboxEnvironment # type: ignore
def render(self, *args, **kwargs): def render(self, *args, **kwargs):
"""Render the template to produce a native Python type. If the """Render the template to produce a native Python type. If the
@@ -301,13 +306,13 @@ class MacroGenerator(BaseMacroGenerator):
@contextmanager @contextmanager
def track_call(self): def track_call(self):
# This is only called from __call__ # This is only called from __call__
if self.stack is None or self.node is None: if self.stack is None:
yield yield
else: else:
unique_id = self.macro.unique_id unique_id = self.macro.unique_id
depth = self.stack.depth depth = self.stack.depth
# only mark depth=0 as a dependency # only mark depth=0 as a dependency, when creating this dependency we don't pass in stack
if depth == 0: if depth == 0 and self.node:
self.node.depends_on.add_macro(unique_id) self.node.depends_on.add_macro(unique_id)
self.stack.push(unique_id) self.stack.push(unique_id)
try: try:
@@ -364,9 +369,20 @@ class MaterializationExtension(jinja2.ext.Extension):
value = parser.parse_expression() value = parser.parse_expression()
adapter_name = value.value adapter_name = value.value
elif target.name == "supported_languages":
target.set_ctx("param")
node.args.append(target)
parser.stream.expect("assign")
languages = parser.parse_expression()
node.defaults.append(languages)
else: else:
invalid_materialization_argument(materialization_name, target.name) invalid_materialization_argument(materialization_name, target.name)
if SUPPORTED_LANG_ARG not in node.args:
node.args.append(SUPPORTED_LANG_ARG)
node.defaults.append(jinja2.nodes.List([jinja2.nodes.Const("sql")]))
node.name = get_materialization_macro_name(materialization_name, adapter_name) node.name = get_materialization_macro_name(materialization_name, adapter_name)
node.body = parser.parse_statements(("name:endmaterialization",), drop_needle=True) node.body = parser.parse_statements(("name:endmaterialization",), drop_needle=True)
@@ -632,3 +648,21 @@ def add_rendered_test_kwargs(
# when the test node was created in _parse_generic_test. # when the test node was created in _parse_generic_test.
kwargs = deep_map_render(_convert_function, node.test_metadata.kwargs) kwargs = deep_map_render(_convert_function, node.test_metadata.kwargs)
context[GENERIC_TEST_KWARGS_NAME] = kwargs context[GENERIC_TEST_KWARGS_NAME] = kwargs
def get_supported_languages(node: jinja2.nodes.Macro) -> List[ModelLanguage]:
if "materialization" not in node.name:
raise_compiler_error("Only materialization macros can be used with this function")
no_kwargs = not node.defaults
no_langs_found = SUPPORTED_LANG_ARG not in node.args
if no_kwargs or no_langs_found:
raise_compiler_error(f"No supported_languages found in materialization macro {node.name}")
lang_idx = node.args.index(SUPPORTED_LANG_ARG)
# indexing defaults from the end
# since supported_languages is a kwarg, and kwargs always come after args
return [
ModelLanguage[item.value] for item in node.defaults[-(len(node.args) - lang_idx)].items
]
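To make the defaults-from-the-end indexing concrete, here is a small standalone sketch (illustrative only, not dbt code) of how jinja2 aligns macro defaults with the trailing args:

```python
import jinja2

env = jinja2.Environment()
macro = env.parse("{% macro m(a, b, c=1, d=2) %}{% endmacro %}").body[0]

print([arg.name for arg in macro.args])   # ['a', 'b', 'c', 'd']
print([d.value for d in macro.defaults])  # [1, 2] -> these pair with the last two args
```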


@@ -15,7 +15,7 @@ def statically_extract_macro_calls(string, ctx, db_wrapper=None):
if hasattr(func_call, "node") and hasattr(func_call.node, "name"): if hasattr(func_call, "node") and hasattr(func_call.node, "name"):
func_name = func_call.node.name func_name = func_call.node.name
else: else:
# func_call for dbt_utils.current_timestamp macro # func_call for dbt.current_timestamp macro
# Call( # Call(
# node=Getattr( # node=Getattr(
# node=Name( # node=Name(


@@ -3,9 +3,9 @@ from typing import Any, Dict, List
import requests import requests
from dbt.events.functions import fire_event from dbt.events.functions import fire_event
from dbt.events.types import ( from dbt.events.types import (
RegistryProgressMakingGETRequest, RegistryProgressGETRequest,
RegistryProgressGETResponse, RegistryProgressGETResponse,
RegistryIndexProgressMakingGETRequest, RegistryIndexProgressGETRequest,
RegistryIndexProgressGETResponse, RegistryIndexProgressGETResponse,
RegistryResponseUnexpectedType, RegistryResponseUnexpectedType,
RegistryResponseMissingTopKeys, RegistryResponseMissingTopKeys,
@@ -14,6 +14,7 @@ from dbt.events.types import (
) )
from dbt.utils import memoized, _connection_exception_retry as connection_exception_retry from dbt.utils import memoized, _connection_exception_retry as connection_exception_retry
from dbt import deprecations from dbt import deprecations
from dbt import semver
import os import os
if os.getenv("DBT_PACKAGE_HUB_URL"): if os.getenv("DBT_PACKAGE_HUB_URL"):
@@ -37,7 +38,7 @@ def _get_with_retries(package_name, registry_base_url=None):
def _get(package_name, registry_base_url=None): def _get(package_name, registry_base_url=None):
url = _get_url(package_name, registry_base_url) url = _get_url(package_name, registry_base_url)
fire_event(RegistryProgressMakingGETRequest(url=url)) fire_event(RegistryProgressGETRequest(url=url))
# all exceptions from requests get caught in the retry logic so no need to wrap this here # all exceptions from requests get caught in the retry logic so no need to wrap this here
resp = requests.get(url, timeout=30) resp = requests.get(url, timeout=30)
fire_event(RegistryProgressGETResponse(url=url, resp_code=resp.status_code)) fire_event(RegistryProgressGETResponse(url=url, resp_code=resp.status_code))
@@ -125,16 +126,43 @@ def package_version(package_name, version, registry_base_url=None) -> Dict[str,
return response[version] return response[version]
def get_available_versions(package_name) -> List["str"]: def is_compatible_version(package_spec, dbt_version) -> bool:
require_dbt_version = package_spec.get("require_dbt_version")
if not require_dbt_version:
# if version requirements are missing or empty, assume any version is compatible
return True
else:
# determine whether dbt_version satisfies this package's require-dbt-version config
if not isinstance(require_dbt_version, list):
require_dbt_version = [require_dbt_version]
supported_versions = [
semver.VersionSpecifier.from_version_string(v) for v in require_dbt_version
]
return semver.versions_compatible(dbt_version, *supported_versions)
def get_compatible_versions(package_name, dbt_version, should_version_check) -> List["str"]:
# returns a list of all available versions of a package # returns a list of all available versions of a package
response = package(package_name) response = package(package_name)
# if the user doesn't care about installing compatible versions, just return them all
if not should_version_check:
return list(response) return list(response)
# otherwise, only return versions that are compatible with the installed version of dbt-core
else:
compatible_versions = [
pkg_version
for pkg_version, info in response.items()
if is_compatible_version(info, dbt_version)
]
return compatible_versions
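As a rough sketch of the compatibility check above (the package metadata here is hypothetical; the installed version is assumed to come from dbt.version.get_installed_version(), which returns a VersionSpecifier):

```python
from dbt import semver
from dbt.version import get_installed_version

package_spec = {"require_dbt_version": [">=1.0.0", "<2.0.0"]}  # hypothetical hub metadata
dbt_version = get_installed_version()  # VersionSpecifier for the running dbt-core

supported = [
    semver.VersionSpecifier.from_version_string(v)
    for v in package_spec["require_dbt_version"]
]
print(semver.versions_compatible(dbt_version, *supported))  # True if compatible
```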
def _get_index(registry_base_url=None): def _get_index(registry_base_url=None):
url = _get_url("index", registry_base_url) url = _get_url("index", registry_base_url)
fire_event(RegistryIndexProgressMakingGETRequest(url=url)) fire_event(RegistryIndexProgressGETRequest(url=url))
# all exceptions from requests get caught in the retry logic so no need to wrap this here # all exceptions from requests get caught in the retry logic so no need to wrap this here
resp = requests.get(url, timeout=30) resp = requests.get(url, timeout=30)
fire_event(RegistryIndexProgressGETResponse(url=url, resp_code=resp.status_code)) fire_event(RegistryIndexProgressGETResponse(url=url, resp_code=resp.status_code))


@@ -12,6 +12,7 @@ import tarfile
import requests import requests
import stat import stat
from typing import Type, NoReturn, List, Optional, Dict, Any, Tuple, Callable, Union from typing import Type, NoReturn, List, Optional, Dict, Any, Tuple, Callable, Union
from pathspec import PathSpec # type: ignore
from dbt.events.functions import fire_event from dbt.events.functions import fire_event
from dbt.events.types import ( from dbt.events.types import (
@@ -36,6 +37,7 @@ def find_matching(
root_path: str, root_path: str,
relative_paths_to_search: List[str], relative_paths_to_search: List[str],
file_pattern: str, file_pattern: str,
ignore_spec: Optional[PathSpec] = None,
) -> List[Dict[str, Any]]: ) -> List[Dict[str, Any]]:
""" """
Given an absolute `root_path`, a list of relative paths to that Given an absolute `root_path`, a list of relative paths to that
@@ -57,19 +59,30 @@ def find_matching(
reobj = re.compile(regex, re.IGNORECASE) reobj = re.compile(regex, re.IGNORECASE)
for relative_path_to_search in relative_paths_to_search: for relative_path_to_search in relative_paths_to_search:
# potential speedup for ignore_spec
# if ignore_spec.matches(relative_path_to_search):
# continue
absolute_path_to_search = os.path.join(root_path, relative_path_to_search) absolute_path_to_search = os.path.join(root_path, relative_path_to_search)
walk_results = os.walk(absolute_path_to_search) walk_results = os.walk(absolute_path_to_search)
for current_path, subdirectories, local_files in walk_results: for current_path, subdirectories, local_files in walk_results:
# potential speedup for ignore_spec
# relative_dir = os.path.relpath(current_path, root_path) + os.sep
# if ignore_spec.match(relative_dir):
# continue
for local_file in local_files: for local_file in local_files:
absolute_path = os.path.join(current_path, local_file) absolute_path = os.path.join(current_path, local_file)
relative_path = os.path.relpath(absolute_path, absolute_path_to_search) relative_path = os.path.relpath(absolute_path, absolute_path_to_search)
relative_path_to_root = os.path.join(relative_path_to_search, relative_path)
modification_time = 0.0 modification_time = 0.0
try: try:
modification_time = os.path.getmtime(absolute_path) modification_time = os.path.getmtime(absolute_path)
except OSError: except OSError:
fire_event(SystemErrorRetrievingModTime(path=absolute_path)) fire_event(SystemErrorRetrievingModTime(path=absolute_path))
if reobj.match(local_file): if reobj.match(local_file) and (
not ignore_spec or not ignore_spec.match_file(relative_path_to_root)
):
matching.append( matching.append(
{ {
"searched_path": relative_path_to_search, "searched_path": relative_path_to_search,
@@ -164,7 +177,7 @@ def write_file(path: str, contents: str = "") -> bool:
reason = "Path was possibly too long" reason = "Path was possibly too long"
# all our hard work and the path was still too long. Log and # all our hard work and the path was still too long. Log and
# continue. # continue.
fire_event(SystemCouldNotWrite(path=path, reason=reason, exc=exc)) fire_event(SystemCouldNotWrite(path=path, reason=reason, exc=str(exc)))
else: else:
raise raise
return True return True
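Stepping back to the ignore_spec parameter added to find_matching above: a small standalone sketch of how such a spec might be built with the pathspec library (gitwildmatch mirrors .gitignore syntax; the patterns below are made up):

```python
from pathspec import PathSpec

ignore_spec = PathSpec.from_lines("gitwildmatch", ["target/", "*.tmp"])

print(ignore_spec.match_file("models/staging/stg_orders.sql"))  # False -> kept
print(ignore_spec.match_file("target/run/my_model.sql"))        # True  -> ignored
```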


@@ -29,7 +29,7 @@ from dbt.exceptions import (
from dbt.graph import Graph from dbt.graph import Graph
from dbt.events.functions import fire_event from dbt.events.functions import fire_event
from dbt.events.types import FoundStats, CompilingNode, WritingInjectedSQLForNode from dbt.events.types import FoundStats, CompilingNode, WritingInjectedSQLForNode
from dbt.node_types import NodeType from dbt.node_types import NodeType, ModelLanguage
from dbt.events.format import pluralize from dbt.events.format import pluralize
import dbt.tracking import dbt.tracking
@@ -56,6 +56,7 @@ def print_compile_stats(stats):
NodeType.Source: "source", NodeType.Source: "source",
NodeType.Exposure: "exposure", NodeType.Exposure: "exposure",
NodeType.Metric: "metric", NodeType.Metric: "metric",
NodeType.Entity: "entity",
} }
results = {k: 0 for k in names.keys()} results = {k: 0 for k in names.keys()}
@@ -91,6 +92,8 @@ def _generate_stats(manifest: Manifest):
stats[exposure.resource_type] += 1 stats[exposure.resource_type] += 1
for metric in manifest.metrics.values(): for metric in manifest.metrics.values():
stats[metric.resource_type] += 1 stats[metric.resource_type] += 1
for entity in manifest.entities.values():
stats[entity.resource_type] += 1
for macro in manifest.macros.values(): for macro in manifest.macros.values():
stats[macro.resource_type] += 1 stats[macro.resource_type] += 1
return stats return stats
@@ -183,6 +186,7 @@ class Compiler:
context = generate_runtime_model_context(node, self.config, manifest) context = generate_runtime_model_context(node, self.config, manifest)
context.update(extra_context) context.update(extra_context)
if isinstance(node, CompiledGenericTestNode): if isinstance(node, CompiledGenericTestNode):
# for test nodes, add a special keyword args value to the context # for test nodes, add a special keyword args value to the context
jinja.add_rendered_test_kwargs(context, node) jinja.add_rendered_test_kwargs(context, node)
@@ -271,7 +275,7 @@ class Compiler:
are rolled up into the models that refer to them by are rolled up into the models that refer to them by
inserting CTEs into the SQL. inserting CTEs into the SQL.
""" """
if model.compiled_sql is None: if model.compiled_code is None:
raise RuntimeException("Cannot inject ctes into an unparsed node", model) raise RuntimeException("Cannot inject ctes into an unparsed node", model)
if model.extra_ctes_injected: if model.extra_ctes_injected:
return (model, model.extra_ctes) return (model, model.extra_ctes)
@@ -324,29 +328,28 @@ class Compiler:
_extend_prepended_ctes(prepended_ctes, new_prepended_ctes) _extend_prepended_ctes(prepended_ctes, new_prepended_ctes)
new_cte_name = self.add_ephemeral_prefix(cte_model.name) new_cte_name = self.add_ephemeral_prefix(cte_model.name)
rendered_sql = cte_model._pre_injected_sql or cte_model.compiled_sql rendered_sql = cte_model._pre_injected_sql or cte_model.compiled_code
sql = f" {new_cte_name} as (\n{rendered_sql}\n)" sql = f" {new_cte_name} as (\n{rendered_sql}\n)"
_add_prepended_cte(prepended_ctes, InjectedCTE(id=cte.id, sql=sql)) _add_prepended_cte(prepended_ctes, InjectedCTE(id=cte.id, sql=sql))
injected_sql = self._inject_ctes_into_sql( injected_sql = self._inject_ctes_into_sql(
model.compiled_sql, model.compiled_code,
prepended_ctes, prepended_ctes,
) )
model._pre_injected_sql = model.compiled_sql model._pre_injected_sql = model.compiled_code
model.compiled_sql = injected_sql model.compiled_code = injected_sql
model.extra_ctes_injected = True model.extra_ctes_injected = True
model.extra_ctes = prepended_ctes model.extra_ctes = prepended_ctes
model.validate(model.to_dict(omit_none=True)) model.validate(model.to_dict(omit_none=True))
manifest.update_node(model) manifest.update_node(model)
return model, prepended_ctes return model, prepended_ctes
# creates a compiled_node from the ManifestNode passed in, # creates a compiled_node from the ManifestNode passed in,
# creates a "context" dictionary for jinja rendering, # creates a "context" dictionary for jinja rendering,
# and then renders the "compiled_sql" using the node, the # and then renders the "compiled_code" using the node, the
# raw_sql and the context. # raw_code and the context.
def _compile_node( def _compile_node(
self, self,
node: ManifestNode, node: ManifestNode,
@@ -362,17 +365,37 @@ class Compiler:
data.update( data.update(
{ {
"compiled": False, "compiled": False,
"compiled_sql": None, "compiled_code": None,
"extra_ctes_injected": False, "extra_ctes_injected": False,
"extra_ctes": [], "extra_ctes": [],
} }
) )
compiled_node = _compiled_type_for(node).from_dict(data) compiled_node = _compiled_type_for(node).from_dict(data)
if compiled_node.language == ModelLanguage.python:
# TODO could we also 'minify' this code at all? just aesthetic, not functional
# quoting seems like something very specific to SQL so far;
# for all python implementations we are seeing, there's no quoting.
# TODO try to find a better way to do this, given that
original_quoting = self.config.quoting
self.config.quoting = {key: False for key in original_quoting.keys()}
context = self._create_node_context(compiled_node, manifest, extra_context) context = self._create_node_context(compiled_node, manifest, extra_context)
compiled_node.compiled_sql = jinja.get_rendered( postfix = jinja.get_rendered(
node.raw_sql, "{{ py_script_postfix(model) }}",
context,
node,
)
# we should NOT jinja render the python model's 'raw code'
compiled_node.compiled_code = f"{node.raw_code}\n\n{postfix}"
# restore quoting settings in the end since context is lazy evaluated
self.config.quoting = original_quoting
else:
context = self._create_node_context(compiled_node, manifest, extra_context)
compiled_node.compiled_code = jinja.get_rendered(
node.raw_code,
context, context,
node, node,
) )
@@ -487,15 +510,15 @@ class Compiler:
return Graph(linker.graph) return Graph(linker.graph)
# writes the "compiled_sql" into the target/compiled directory # writes the "compiled_code" into the target/compiled directory
def _write_node(self, node: NonSourceCompiledNode) -> ManifestNode: def _write_node(self, node: NonSourceCompiledNode) -> ManifestNode:
if not node.extra_ctes_injected or node.resource_type == NodeType.Snapshot: if not node.extra_ctes_injected or node.resource_type == NodeType.Snapshot:
return node return node
fire_event(WritingInjectedSQLForNode(unique_id=node.unique_id)) fire_event(WritingInjectedSQLForNode(unique_id=node.unique_id))
if node.compiled_sql: if node.compiled_code:
node.compiled_path = node.write_node( node.compiled_path = node.write_node(
self.config.target_path, "compiled", node.compiled_sql self.config.target_path, "compiled", node.compiled_code
) )
return node return node


@@ -23,8 +23,6 @@ from .renderer import ProfileRenderer
DEFAULT_THREADS = 1 DEFAULT_THREADS = 1
DEFAULT_PROFILES_DIR = os.path.join(os.path.expanduser("~"), ".dbt")
INVALID_PROFILE_MESSAGE = """ INVALID_PROFILE_MESSAGE = """
dbt encountered an error while trying to read your profiles.yml file. dbt encountered an error while trying to read your profiles.yml file.
@@ -44,7 +42,7 @@ defined in your profiles.yml file. You can find profiles.yml here:
{profiles_file}/profiles.yml {profiles_file}/profiles.yml
""".format( """.format(
profiles_file=DEFAULT_PROFILES_DIR profiles_file=flags.DEFAULT_PROFILES_DIR
) )


@@ -248,7 +248,7 @@ class PartialProject(RenderComponents):
project_name: Optional[str] = field( project_name: Optional[str] = field(
metadata=dict( metadata=dict(
description=( description=(
"The name of the project. This should always be set and will not " "be rendered" "The name of the project. This should always be set and will not be rendered"
) )
) )
) )
@@ -380,6 +380,9 @@ class PartialProject(RenderComponents):
snapshots: Dict[str, Any] snapshots: Dict[str, Any]
sources: Dict[str, Any] sources: Dict[str, Any]
tests: Dict[str, Any] tests: Dict[str, Any]
metrics: Dict[str, Any]
entities: Dict[str, Any]
exposures: Dict[str, Any]
vars_value: VarProvider vars_value: VarProvider
dispatch = cfg.dispatch dispatch = cfg.dispatch
@@ -388,6 +391,9 @@ class PartialProject(RenderComponents):
snapshots = cfg.snapshots snapshots = cfg.snapshots
sources = cfg.sources sources = cfg.sources
tests = cfg.tests tests = cfg.tests
metrics = cfg.metrics
entities = cfg.entities
exposures = cfg.exposures
if cfg.vars is None: if cfg.vars is None:
vars_dict: Dict[str, Any] = {} vars_dict: Dict[str, Any] = {}
else: else:
@@ -441,6 +447,9 @@ class PartialProject(RenderComponents):
query_comment=query_comment, query_comment=query_comment,
sources=sources, sources=sources,
tests=tests, tests=tests,
metrics=metrics,
entities=entities,
exposures=exposures,
vars=vars_value, vars=vars_value,
config_version=cfg.config_version, config_version=cfg.config_version,
unrendered=unrendered, unrendered=unrendered,
@@ -543,6 +552,9 @@ class Project:
snapshots: Dict[str, Any] snapshots: Dict[str, Any]
sources: Dict[str, Any] sources: Dict[str, Any]
tests: Dict[str, Any] tests: Dict[str, Any]
metrics: Dict[str, Any]
entities: Dict[str, Any]
exposures: Dict[str, Any]
vars: VarProvider vars: VarProvider
dbt_version: List[VersionSpecifier] dbt_version: List[VersionSpecifier]
packages: Dict[str, Any] packages: Dict[str, Any]
@@ -615,6 +627,9 @@ class Project:
"snapshots": self.snapshots, "snapshots": self.snapshots,
"sources": self.sources, "sources": self.sources,
"tests": self.tests, "tests": self.tests,
"metrics": self.metrics,
"entities": self.entities,
"exposures": self.exposures,
"vars": self.vars.to_dict(), "vars": self.vars.to_dict(),
"require-dbt-version": [v.to_version_string() for v in self.dbt_version], "require-dbt-version": [v.to_version_string() for v in self.dbt_version],
"config-version": self.config_version, "config-version": self.config_version,
@@ -658,7 +673,7 @@ class Project:
def get_selector(self, name: str) -> Union[SelectionSpec, bool]: def get_selector(self, name: str) -> Union[SelectionSpec, bool]:
if name not in self.selectors: if name not in self.selectors:
raise RuntimeException( raise RuntimeException(
f"Could not find selector named {name}, expected one of " f"{list(self.selectors)}" f"Could not find selector named {name}, expected one of {list(self.selectors)}"
) )
return self.selectors[name]["definition"] return self.selectors[name]["definition"]


@@ -3,13 +3,13 @@ import re
import os import os
from dbt.clients.jinja import get_rendered, catch_jinja from dbt.clients.jinja import get_rendered, catch_jinja
from dbt.constants import SECRET_ENV_PREFIX
from dbt.context.target import TargetContext from dbt.context.target import TargetContext
from dbt.context.secret import SecretContext, SECRET_PLACEHOLDER from dbt.context.secret import SecretContext, SECRET_PLACEHOLDER
from dbt.context.base import BaseContext from dbt.context.base import BaseContext
from dbt.contracts.connection import HasCredentials from dbt.contracts.connection import HasCredentials
from dbt.exceptions import DbtProjectError, CompilationException, RecursionException from dbt.exceptions import DbtProjectError, CompilationException, RecursionException
from dbt.utils import deep_map_render from dbt.utils import deep_map_render
from dbt.logger import SECRET_ENV_PREFIX
Keypath = Tuple[Union[str, int], ...] Keypath = Tuple[Union[str, int], ...]


@@ -3,31 +3,41 @@ import os
from copy import deepcopy from copy import deepcopy
from dataclasses import dataclass, field from dataclasses import dataclass, field
from pathlib import Path from pathlib import Path
from typing import Dict, Any, Optional, Mapping, Iterator, Iterable, Tuple, List, MutableSet, Type from typing import (
Any,
Dict,
Iterable,
Iterator,
Mapping,
MutableSet,
Optional,
Tuple,
Type,
Union,
)
from .profile import Profile
from .project import Project
from .renderer import DbtProjectYamlRenderer, ProfileRenderer
from .utils import parse_cli_vars
from dbt import flags from dbt import flags
from dbt.adapters.factory import get_relation_class_by_name, get_include_paths from dbt.adapters.factory import get_include_paths, get_relation_class_by_name
from dbt.helper_types import FQNPath, PathSet, DictDefaultEmptyStr
from dbt.config.profile import read_user_config from dbt.config.profile import read_user_config
from dbt.contracts.connection import AdapterRequiredConfig, Credentials from dbt.contracts.connection import AdapterRequiredConfig, Credentials
from dbt.contracts.graph.manifest import ManifestMetadata from dbt.contracts.graph.manifest import ManifestMetadata
from dbt.contracts.relation import ComponentName
from dbt.ui import warning_tag
from dbt.contracts.project import Configuration, UserConfig from dbt.contracts.project import Configuration, UserConfig
from dbt.exceptions import ( from dbt.contracts.relation import ComponentName
RuntimeException,
DbtProjectError,
validator_error_message,
warn_or_error,
raise_compiler_error,
)
from dbt.dataclass_schema import ValidationError from dbt.dataclass_schema import ValidationError
from dbt.exceptions import (
DbtProjectError,
RuntimeException,
raise_compiler_error,
validator_error_message,
)
from dbt.events.functions import warn_or_error
from dbt.events.types import UnusedResourceConfigPath
from dbt.helper_types import DictDefaultEmptyStr, FQNPath, PathSet
from .profile import Profile
from .project import Project, PartialProject
from .renderer import DbtProjectYamlRenderer, ProfileRenderer
from .utils import parse_cli_vars
def _project_quoting_dict(proj: Project, profile: Profile) -> Dict[ComponentName, bool]: def _project_quoting_dict(proj: Project, profile: Profile) -> Dict[ComponentName, bool]:
@@ -105,6 +115,9 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
query_comment=project.query_comment, query_comment=project.query_comment,
sources=project.sources, sources=project.sources,
tests=project.tests, tests=project.tests,
metrics=project.metrics,
entities=project.entities,
exposures=project.exposures,
vars=project.vars, vars=project.vars,
config_version=project.config_version, config_version=project.config_version,
unrendered=project.unrendered, unrendered=project.unrendered,
@@ -188,28 +201,52 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
@classmethod @classmethod
def collect_parts(cls: Type["RuntimeConfig"], args: Any) -> Tuple[Project, Profile]: def collect_parts(cls: Type["RuntimeConfig"], args: Any) -> Tuple[Project, Profile]:
# profile_name from the project
project_root = args.project_dir if args.project_dir else os.getcwd()
version_check = bool(flags.VERSION_CHECK)
partial = Project.partial_load(project_root, verify_version=version_check)
# build the profile using the base renderer and the one fact we know cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
# Note: only the named profile section is rendered. The rest of the
# profile is ignored. profile = cls.collect_profile(args=args)
project_renderer = DbtProjectYamlRenderer(profile, cli_vars)
project = cls.collect_project(args=args, project_renderer=project_renderer)
assert type(project) is Project
return (project, profile)
@classmethod
def collect_profile(
cls: Type["RuntimeConfig"], args: Any, profile_name: Optional[str] = None
) -> Profile:
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}")) cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
profile_renderer = ProfileRenderer(cli_vars) profile_renderer = ProfileRenderer(cli_vars)
# build the profile using the base renderer and the one fact we know
if profile_name is None:
# Note: only the named profile section is rendered here. The rest of the
# profile is ignored.
partial = cls.collect_project(args)
assert type(partial) is PartialProject
profile_name = partial.render_profile_name(profile_renderer) profile_name = partial.render_profile_name(profile_renderer)
profile = cls._get_rendered_profile(args, profile_renderer, profile_name) profile = cls._get_rendered_profile(args, profile_renderer, profile_name)
# Save env_vars encountered in rendering for partial parsing # Save env_vars encountered in rendering for partial parsing
profile.profile_env_vars = profile_renderer.ctx_obj.env_vars profile.profile_env_vars = profile_renderer.ctx_obj.env_vars
return profile
# get a new renderer using our target information and render the @classmethod
# project def collect_project(
project_renderer = DbtProjectYamlRenderer(profile, cli_vars) cls: Type["RuntimeConfig"],
args: Any,
project_renderer: Optional[DbtProjectYamlRenderer] = None,
) -> Union[Project, PartialProject]:
project_root = args.project_dir if args.project_dir else os.getcwd()
version_check = bool(flags.VERSION_CHECK)
partial = Project.partial_load(project_root, verify_version=version_check)
if project_renderer is None:
return partial
else:
project = partial.render(project_renderer) project = partial.render(project_renderer)
# Save env_vars encountered in rendering for partial parsing
project.project_env_vars = project_renderer.ctx_obj.env_vars project.project_env_vars = project_renderer.ctx_obj.env_vars
return (project, profile) return project
# Called in main.py, lib.py, task/base.py # Called in main.py, lib.py, task/base.py
@classmethod @classmethod
@@ -274,13 +311,16 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
"snapshots": self._get_config_paths(self.snapshots), "snapshots": self._get_config_paths(self.snapshots),
"sources": self._get_config_paths(self.sources), "sources": self._get_config_paths(self.sources),
"tests": self._get_config_paths(self.tests), "tests": self._get_config_paths(self.tests),
"metrics": self._get_config_paths(self.metrics),
"entities": self._get_config_paths(self.entities),
"exposures": self._get_config_paths(self.exposures),
} }
def get_unused_resource_config_paths( def warn_for_unused_resource_config_paths(
self, self,
resource_fqns: Mapping[str, PathSet], resource_fqns: Mapping[str, PathSet],
disabled: PathSet, disabled: PathSet,
) -> List[FQNPath]: ) -> None:
"""Return a list of lists of strings, where each inner list of strings """Return a list of lists of strings, where each inner list of strings
represents a type + FQN path of a resource configuration that is not represents a type + FQN path of a resource configuration that is not
used. used.
@@ -294,23 +334,13 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
for config_path in config_paths: for config_path in config_paths:
if not _is_config_used(config_path, fqns): if not _is_config_used(config_path, fqns):
unused_resource_config_paths.append((resource_type,) + config_path) resource_path = ".".join(i for i in ((resource_type,) + config_path))
return unused_resource_config_paths unused_resource_config_paths.append(resource_path)
def warn_for_unused_resource_config_paths( if len(unused_resource_config_paths) == 0:
self,
resource_fqns: Mapping[str, PathSet],
disabled: PathSet,
) -> None:
unused = self.get_unused_resource_config_paths(resource_fqns, disabled)
if len(unused) == 0:
return return
msg = UNUSED_RESOURCE_CONFIGURATION_PATH_MESSAGE.format( warn_or_error(UnusedResourceConfigPath(unused_config_paths=unused_resource_config_paths))
len(unused), "\n".join("- {}".format(".".join(u)) for u in unused)
)
warn_or_error(msg, log_fmt=warning_tag("{}"))
def load_dependencies(self, base_only=False) -> Mapping[str, "RuntimeConfig"]: def load_dependencies(self, base_only=False) -> Mapping[str, "RuntimeConfig"]:
if self.dependencies is None: if self.dependencies is None:
@@ -477,6 +507,9 @@ class UnsetProfileConfig(RuntimeConfig):
"snapshots": self.snapshots, "snapshots": self.snapshots,
"sources": self.sources, "sources": self.sources,
"tests": self.tests, "tests": self.tests,
"metrics": self.metrics,
"entities": self.entities,
"exposures": self.exposures,
"vars": self.vars.to_dict(), "vars": self.vars.to_dict(),
"require-dbt-version": [v.to_version_string() for v in self.dbt_version], "require-dbt-version": [v.to_version_string() for v in self.dbt_version],
"config-version": self.config_version, "config-version": self.config_version,
@@ -537,6 +570,9 @@ class UnsetProfileConfig(RuntimeConfig):
query_comment=project.query_comment, query_comment=project.query_comment,
sources=project.sources, sources=project.sources,
tests=project.tests, tests=project.tests,
metrics=project.metrics,
entities=project.entities,
exposures=project.exposures,
vars=project.vars, vars=project.vars,
config_version=project.config_version, config_version=project.config_version,
unrendered=project.unrendered, unrendered=project.unrendered,
@@ -583,14 +619,6 @@ class UnsetProfileConfig(RuntimeConfig):
return cls.from_parts(project=project, profile=profile, args=args) return cls.from_parts(project=project, profile=profile, args=args)
UNUSED_RESOURCE_CONFIGURATION_PATH_MESSAGE = """\
Configuration paths exist in your dbt_project.yml file which do not \
apply to any resources.
There are {} unused configuration paths:
{}
"""
def _is_config_used(path, fqns): def _is_config_used(path, fqns):
if fqns: if fqns:
for fqn in fqns: for fqn in fqns:

core/dbt/constants.py (new file, 10 lines)

@@ -0,0 +1,10 @@
SECRET_ENV_PREFIX = "DBT_ENV_SECRET_"
DEFAULT_ENV_PLACEHOLDER = "DBT_DEFAULT_PLACEHOLDER"
METADATA_ENV_PREFIX = "DBT_ENV_CUSTOM_ENV_"
MAXIMUM_SEED_SIZE = 1 * 1024 * 1024
MAXIMUM_SEED_SIZE_NAME = "1MB"
PIN_PACKAGE_URL = (
"https://docs.getdbt.com/docs/package-management#section-specifying-package-versions"
)


@@ -4,8 +4,10 @@ from typing import Any, Dict, NoReturn, Optional, Mapping, Iterable, Set, List
 from dbt import flags
 from dbt import tracking
+from dbt import utils
 from dbt.clients.jinja import get_rendered
 from dbt.clients.yaml_helper import yaml, safe_load, SafeLoader, Loader, Dumper  # noqa: F401
+from dbt.constants import SECRET_ENV_PREFIX, DEFAULT_ENV_PLACEHOLDER
 from dbt.contracts.graph.compiled import CompiledResource
 from dbt.exceptions import (
     CompilationException,
@@ -14,9 +16,8 @@ from dbt.exceptions import (
     raise_parsing_error,
     disallow_secret_env_var,
 )
-from dbt.logger import SECRET_ENV_PREFIX
 from dbt.events.functions import fire_event, get_invocation_id
-from dbt.events.types import MacroEventInfo, MacroEventDebug
+from dbt.events.types import JinjaLogInfo, JinjaLogDebug
 from dbt.version import __version__ as dbt_version
 
 # These modules are added to the context. Consider alternative
@@ -126,7 +127,7 @@ class ContextMeta(type):
 class Var:
-    UndefinedVarError = "Required var '{}' not found in config:\nVars " "supplied to {} = {}"
+    UndefinedVarError = "Required var '{}' not found in config:\nVars supplied to {} = {}"
     _VAR_NOTSET = object()
 
     def __init__(
@@ -305,7 +306,12 @@ class BaseContext(metaclass=ContextMeta):
             return_value = default
 
         if return_value is not None:
-            self.env_vars[var] = return_value
+            # If the environment variable is set from a default, store a string indicating
+            # that so we can skip partial parsing. Otherwise the file will be scheduled for
+            # reparsing. If the default changes, the file will have been updated and therefore
+            # will be scheduled for reparsing anyways.
+            self.env_vars[var] = return_value if var in os.environ else DEFAULT_ENV_PLACEHOLDER
             return return_value
         else:
             msg = f"Env var required but not provided: '{var}'"
@@ -552,9 +558,9 @@ class BaseContext(metaclass=ContextMeta):
{% endmacro %}" {% endmacro %}"
""" """
if info: if info:
fire_event(MacroEventInfo(msg=msg)) fire_event(JinjaLogInfo(msg=msg))
else: else:
fire_event(MacroEventDebug(msg=msg)) fire_event(JinjaLogDebug(msg=msg))
return "" return ""
@contextproperty @contextproperty
@@ -682,6 +688,19 @@ class BaseContext(metaclass=ContextMeta):
             dict_diff.update({k: dict_a[k]})
         return dict_diff
 
+    @contextmember
+    @staticmethod
+    def local_md5(value: str) -> str:
+        """Calculates an MD5 hash of the given string.
+        It's called "local_md5" to emphasize that it runs locally in dbt (in jinja context) and not an MD5 SQL command.
+        :param value: The value to hash
+        Usage:
+            {% set value_hash = local_md5("hello world") %}
+        """
+        return utils.md5(value)
 
 
 def generate_base_context(cli_vars: Dict[str, Any]) -> Dict[str, Any]:
     ctx = BaseContext(cli_vars)
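The new local_md5 context member exposes an MD5 hash to Jinja, as the docstring's {% set value_hash = local_md5("hello world") %} example shows. It delegates to dbt.utils.md5; assuming that helper is a thin wrapper over hashlib, the equivalent plain-Python computation looks roughly like this:

    import hashlib

    def local_md5(value: str) -> str:
        # Assumed equivalent of dbt.utils.md5: hex digest of the UTF-8 encoded string.
        return hashlib.md5(value.encode("utf-8")).hexdigest()

    print(local_md5("hello world"))  # 5eb63bbbe01eeed093cb22bb8f5acdc3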

core/dbt/context/configured.py

@@ -1,8 +1,8 @@
 import os
 from typing import Any, Dict, Optional
 
+from dbt.constants import SECRET_ENV_PREFIX, DEFAULT_ENV_PLACEHOLDER
 from dbt.contracts.connection import AdapterRequiredConfig
-from dbt.logger import SECRET_ENV_PREFIX
 from dbt.node_types import NodeType
 from dbt.utils import MultiDict
@@ -94,7 +94,14 @@ class SchemaYamlContext(ConfiguredContext):
         if return_value is not None:
             if self.schema_yaml_vars:
-                self.schema_yaml_vars.env_vars[var] = return_value
+                # If the environment variable is set from a default, store a string indicating
+                # that so we can skip partial parsing. Otherwise the file will be scheduled for
+                # reparsing. If the default changes, the file will have been updated and therefore
+                # will be scheduled for reparsing anyways.
+                self.schema_yaml_vars.env_vars[var] = (
+                    return_value if var in os.environ else DEFAULT_ENV_PLACEHOLDER
+                )
             return return_value
         else:
             msg = f"Env var required but not provided: '{var}'"

Some files were not shown because too many files have changed in this diff.