Compare commits


165 Commits

Author SHA1 Message Date
github-actions[bot]
27ed2f961b Bumping version to 1.0.3 (#4760)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2022-02-21 14:20:18 -05:00
Gerda Shank
f2dcb6f23c [Backport] Fix bug accessing target in deps and clean commands (#4759)
* Create DictDefaultNone for to_target_dict in deps and clean commands

* Update test case to handle

* update CHANGELOG.md

* Switch to DictDefaultEmptyStr for to_target_dict
2022-02-21 14:09:46 -05:00
github-actions[bot]
77afe63c7c Bumping version to 1.0.2 (#4750)
* Bumping version to 1.0.2

* Update CHANGELOG.md

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2022-02-18 09:28:03 -05:00
Jeremy Cohen
ca7c4c147a Pin MarkupSafe==2.0.1 (#4746) (#4749) 2022-02-18 09:15:27 -05:00
Nathaniel May
4145834c5b fix test to use a secret username (#4683) 2022-02-04 15:07:11 -05:00
github-actions[bot]
aaeb94d683 Bumping version to 1.0.2rc1 manually removed docker requirements update (#4679)
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2022-02-04 14:11:23 -05:00
Chenyu Li
a2662b2f83 Chenyu/backport 4565 (#4677)
* adapter compatibility messaging added. (#4565)

* adapter compatibility messaging added.

* edited plugin version compatibility message

* edited test version for plugin compatibility

* compare using only major and minor

* Add checking PYPI and update changelog

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
Co-authored-by: ChenyuLi <chenyu.li@dbtlabs.com>

* fix changelog

* fix changelog

Co-authored-by: nkyuray <95860273+nkyuray@users.noreply.github.com>
2022-02-04 08:36:40 -05:00
Chenyu Li
056db408cf fix comparison for new model/body (#4631) (#4676)
* fix comparison for new model/body
2022-02-03 17:34:14 -05:00
Chenyu Li
bec6becd18 Validate project names in interactive dbt init (#4536) (#4675)
* Validate project names in interactive dbt init

- workflow: ask the user to provide a valid project name until they do.
- new integration tests
- supported scenarios:
  - dbt init
  - dbt init -s
  - dbt init [name]
  - dbt init [name] -s

* Update Changelog.md

* Add full URLs to CHANGELOG.md

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>

Co-authored-by: Amir Kadivar <amir@amirkdv.ca>
2022-02-03 17:23:35 -05:00
Nathaniel May
3be057b6a4 Avoid saving secrets in SecretContext (#4665) (#4672) 2022-02-03 15:47:46 -05:00
Nathaniel May
e2a6c25a6d Alternative Modified Backport of #4619 (#4660)
* adds new function fire_event_if
2022-02-02 15:20:22 -05:00
leahwicz
92b3fc470d Run check_if_can_write_profile before create_profile_using_project_profile_template [CT-67] [Backport 1.0.latest] (#4447) (#4658)
* Run check_if_can_write_profile before create_profile_using_project_profile_template

* Changelog

Co-authored-by: Ian Knox <81931810+iknox-fa@users.noreply.github.com>

Co-authored-by: Niall Woodward <niall@niallrees.com>
Co-authored-by: Ian Knox <81931810+iknox-fa@users.noreply.github.com>
2022-02-01 17:32:24 -05:00
Jeremy Cohen
1e9fe67393 Change InvalidRefInTestNode level to DEBUG (#4647) (#4655)
* Debug-level test depends on disabled

* Add PR link to Changelog
2022-02-01 18:18:55 +01:00
Gerda Shank
d9361259f4 [#4554] Don't require a profile for dbt deps and clean commands (#4610) (#4651) 2022-01-31 14:52:41 -05:00
Emily Rockman
7990974bd8 Retry after failure to download or failure to open files (#4609) (#4649)
* add retry logic, tests when extracting tarfile fails

* fixed bug with not catching empty responses

* specify compression type

* WIP test

* more testing work

* fixed up unit test

* add changelog

* Add more comments!

* clarify why we do the json() check for None
# Conflicts:
#	CHANGELOG.md
2022-01-31 11:32:29 -06:00
github-actions[bot]
544d3e7a3a Clarify "incompatible package version" error msg (#4587) (#4628)
* Clarify "incompatible package version" error msg

* Clarify error message when they shouldn't fall fwd

Co-authored-by: Joel Labes <joel.labes@dbtlabs.com>
2022-01-27 14:34:15 -05:00
Emily Rockman
31962beb14 Rename data directory to seeds (#4589) (#4592)
* Rename data directory to seeds

* Update CHANGELOG.md
# Conflicts:
#	CHANGELOG.md

Co-authored-by: Joel Labes <joel.labes@dbtlabs.com>
2022-01-20 08:57:26 -06:00
leahwicz
f6a0853901 Bumping version to 1.0.1 (#4543) (#4544)
* Bumping version to 1.0.1

* Update CHANGELOG.md

* Update CHANGELOG.md

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-01-03 13:19:25 -05:00
leahwicz
336a3d4987 Bumping version to 1.0.1rc1 (#4517) (#4518)
* Bumping version to 1.0.1rc1

* Update CHANGELOG.md

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2021-12-20 14:59:54 -05:00
Gerda Shank
74dc5c49ae [#4523] Fix error with env_var in hook (#4526) 2021-12-20 14:49:36 -05:00
leahwicz
29fa687349 Fix bool coercion to 0/1 (#4512) (#4516)
* Fix bool coercion

* Fix unit test

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-12-20 10:02:14 -05:00
Emily Rockman
39d4e729c9 scrub message of secrets (#4507) (#4510)
* scrub message of secrets

* update changelog

* use new scrubbing and scrub more places using git

* fixed small miss of string conv and missing raise

* fix bug with cloning error

* resolving message issues

* better, more specific scrubbing
2021-12-19 10:08:36 -05:00
Gerda Shank
406bdcc89c [#4470 BACKPORT] Improve checking of schema version for pre-1.0.0 manifests (#4497) (#4503) 2021-12-17 15:37:00 -05:00
Emily Rockman
9702aa733f update log message to use adapter name (#4501) (#4502)
* update log message to use adapter name

* add changelog
2021-12-16 12:59:45 -06:00
Emily Rockman
44265716f9 [BACKPORT] compile new index file for docs (#4484) (#4500)
* compile new index file for docs

* Add changelog

* move changelog entries for docs changes
2021-12-16 10:22:55 -06:00
Gerda Shank
20b27fd3b6 [#4464] Check specifically for generic node type for some partial parsing actions (#4465) (#4494)
* [#4464] Check specifically for generic node type for some partial parsing actions

* Add check for existence of macro file in saved_files

* Check for existence of patch file in saved_files
2021-12-15 09:48:00 -05:00
Emily Rockman
76c2e182ba updated DepsStartPackageInstall event to use package name (#4482) (#4485)
* updated event to use package name

* add changelog
2021-12-14 15:27:08 -06:00
Matthew McKnight
791625ddf5 made change to test of str (#4463) (#4478)
* made change to test of str

* changelog update
2021-12-13 16:04:22 -06:00
Emily Rockman
1baa05a764 Fix dbt docs overview to working url (#4442) (#4460)
* Fix to working url

* add fix to changelog

Co-authored-by: Rebekka Moyson <remoyson@gmail.com>
2021-12-08 13:06:31 -06:00
Nathaniel May
1b47b53aff point latest version check to dbt-core package (#4434) (#4435) 2021-12-03 16:19:15 -05:00
leahwicz
ec1f609f3e Bumping version to 1.0.0 (#4431) (#4432)
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2021-12-03 13:34:41 -05:00
Jeremy Cohen
b4ea003559 Changelog entries for rc3 -> final (#4389) (#4430)
* Changelog entries for rc3 -> final

* More updates

* Final entry

* Last fix, and the date

* These few, these happy few
2021-12-03 19:24:43 +01:00
Jeremy Cohen
23e1a9aa4f relax version specifier for dbt-extractor (#4427) (#4429)
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-12-03 19:20:40 +01:00
Jeremy Cohen
9882d08a24 add new interop tests for black-box json log schema testing (#4327) (#4428)
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-12-03 19:15:41 +01:00
leahwicz
79cc811a68 stringify generic exceptions (#4424) (#4425)
Co-authored-by: Ian Knox <81931810+iknox-fa@users.noreply.github.com>
2021-12-03 12:36:14 -05:00
leahwicz
c82572f745 Info vs debug text formatting (#4418) (#4421)
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-12-03 09:22:14 -05:00
leahwicz
42a38e4deb Sources aren't materialized (#4417) (#4420)
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-12-03 09:03:24 -05:00
leahwicz
ecf0ffe68c Add flag to main.py. Reinstantiate after flags (#4416) (#4419)
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-12-03 08:54:48 -05:00
leahwicz
e9f26ef494 add node type codes to more events + more hook log data (#4378) (#4415)
* add node type codes to more events + more hook log

* minor fixes

* renames started/finished keys

* made process more clear

* fixed errors

* Put back report_node_data in freshness.py

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
2021-12-02 19:31:20 -05:00
leahwicz
c77dc59af8 use reference keys instead of relations (#4410) (#4414)
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-12-02 18:41:20 -05:00
leahwicz
a5ebe4ff59 Logging README (#4395) (#4413)
* WIP

* more README cleanup

* readme tweaks

* small tweaks

* wording updates

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2021-12-02 18:12:28 -05:00
leahwicz
5c01f9006c user configurable event buffer size (#4411) (#4412)
Co-authored-by: Ian Knox <81931810+iknox-fa@users.noreply.github.com>
2021-12-02 18:05:57 -05:00
Jeremy Cohen
c92e1ed9f2 [Backport] #4388 + #4405 (#4408)
* A few final logging touch-ups (#4388)

* Rm unused events, per #4104

* More structured ConcurrencyLine

* Replace \n prefixes with EmptyLine

* Reimplement ui.warning_tag to centralize logic

* Use warning_tag for deprecations too

* Rm more unused event types

* Exclude EmptyLine from json logs

* loglines are not always created by events (#4406)

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>

* Rollover + backup for dbt.log (#4405)

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-12-02 17:51:08 -05:00
Emily Rockman
85dee41a9f update file name (#4402) (#4407)
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-12-02 17:08:32 -05:00
leahwicz
a4456feff0 change json override strategy (#4396) (#4403)
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-12-02 17:05:33 -05:00
leahwicz
8d27764b0f allow log_format to be set in profile configs (#4394) (#4401)
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2021-12-02 16:49:41 -05:00
leahwicz
e56256d968 use rfc3339 format for log time stamps (#4384) (#4400)
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-12-02 15:42:46 -05:00
leahwicz
86cb3ba6fa [#4354] Different output for console and file logs (#4379) (#4399)
* [#4354] Different output for console and file logs

* Tweak some log formats

* Change logging of thread names

Co-authored-by: Gerda Shank <gerda@fishtownanalytics.com>
2021-12-02 15:39:53 -05:00
leahwicz
4d0d2d0d6f Add Windows OS error suppressing for temp dir cleanups (#4380) (#4398)
Co-authored-by: Ian Knox <81931810+iknox-fa@users.noreply.github.com>
2021-12-02 15:33:10 -05:00
leahwicz
f8a3c27fb8 move event code up a level (#4381) (#4397)
move event code up a level plus minor fixes

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-12-02 15:27:04 -05:00
leahwicz
30f05b0213 Fix release process (#4385) (#4393) 2021-12-02 12:33:41 -05:00
Jeremy Cohen
f1bebb3629 Tiny touchups for deps, clean (#4366) (#4387)
* Use actual profile name for log msg

* Raise clean dep warning iff configured path missing
2021-12-02 17:35:51 +01:00
Gerda Shank
e7a40345ad Make the stdout logger actually go to stdout (#4368) (#4376) 2021-12-01 11:13:24 -05:00
Emily Rockman
ba94b8212c only log events in cache.py when flag is set (#4371)
flag is --log-cache-events
2021-11-30 16:05:20 -06:00
Nathaniel May
284ac9b138 better dataclass field handling (#4361)
fix serializing dataclass fields so they show up at all
2021-11-30 13:34:57 -05:00
github-actions[bot]
7448ec5adb Bumping version to 1.0.0rc3 (#4363)
* Bumping version to 1.0.0rc3

* Updating Changelog for release

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-11-30 09:35:03 -05:00
Emily Rockman
caa6269bc7 add node_info to relevant logs (#4336)
* WIP

* fixed some merge issues

* WIP

* first pass with node_status logging

* add node details to one more

* another pass at node info

* fixed failures

* convert to classes

* more tweaks to basic implementation

* added in status, organized a bit

* saving broken state

* working state with lots of todos

* formatting

* add start/end timestamps

* adding node_status logging to more events

* adding node_status to more events

* Add RunningStatus and set in node

* Add NodeCompiling and NodeExecuting events, switch to _event_status dict

* add _event_status to SourceDefinition

* small tweaks to NodeInfo

* fixed misnamed attr

* small fix to validation

* rename logging timestamps to minimize name collision

* fixed flake failure

* move str formatting to events

* incorporate serialization changes

* add node_status to event_to_serializable_dict

* convert nodeInfo to dict with dataclass builtin

* Try to fix failing unit, flake8, mypy tests (#4362)

* fixed leftover merge conflict

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
2021-11-30 09:34:28 -05:00
Gerda Shank
31691c3b88 Events with graph_func include actual output of graph_func (#4360) 2021-11-29 20:20:22 -05:00
Ian Knox
3a904a811f Event buffer for structlog (#4358)
Add Internal event buffer

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2021-11-29 20:12:20 -05:00
Nathaniel May
b927a31a53 make json serialization overridable for events (#4326)
* simplify scrubbing

* add overridable serialize method to events

* add imperfect test for json serialization of events

Co-authored-by: Ian Knox <ian.knox@fishtownanalytics.com>
Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
2021-11-29 18:19:34 -05:00
Kyle Wigley
d8dd75320c set invocation id when generating pseudo config (#4359) 2021-11-29 17:29:12 -05:00
Nathaniel May
a613556246 add thread_name to json output (#4353) 2021-11-29 14:01:50 -05:00
Jeremy Cohen
8d2351d541 Logging: restore previous (small) behaviors (#4341)
* Log formatting from flags earlier

* WARN-level stdout for list task

* Readd tracking events to File

* PR feedback, annotate hacks

* Revert "PR feedback, annotate hacks"

This reverts commit 5508fa230b.

* This is maybe better

* Annotate main.py

* One more comment in base.py

* Update changelog
2021-11-29 19:05:39 +01:00
leahwicz
f72b603196 Adding release workflow (#4288) 2021-11-29 10:37:14 -05:00
Gerda Shank
4eb17b57fb Provide function to set the invocation_id (#4351) 2021-11-29 10:15:19 -05:00
Cor
85a4b87267 Use cls in classmethod (#4345)
Instead of calling the class explicitly, use the `cls` variable instead.
2021-11-29 09:57:52 -05:00
jan zens
0d320c58da fix typo in UnparsedSourceDefinition.__post_serialize__ (#4349)
* fix typo in UnparsedSourceDefinition.__post_serialize__

fix typo in UnparsedSourceDefinition.__post_serialize__

* update CHANGELOG.md

update CHANGELOG.md

add #4349

* Update changelog

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-29 11:36:11 +01:00
Emilie Lima Schario
ed1ff2caac Adjust logic when finding approx matches for model or test matching (#4076)
* adjust logic when finding approx matches

* update changelog

* Update core/dbt/adapters/base/relation.py

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>

* Update changelog

Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-29 11:20:01 +01:00
sarah-weatherbee
d80646c258 adds additional augmented assignment statements (#4315) (#4331)
* adds additional augmented assignment statements (#4315)

* Per PR comments, revised CHANGELOG.md to note change and contributor info
2021-11-27 09:04:40 -06:00
Matthew McKnight
a9b4316346 Mc knight 42/test event codes (#4338)
* pushing up to get eye on from Nate

* updating to compare

* latest push

* finished test for duplicate codes with a lot of help from Nate

* resolving suggestions

* removed duplicated code in types.py, made minor changes to test_events.py

* added missing func call
2021-11-24 16:03:43 -06:00
Gerda Shank
36776b96e7 [#4337] Always create an invocation_id, even when not tracking (#4340) 2021-11-24 16:54:17 -05:00
Jeremy Cohen
7f2d3cd24f Fix static parser tracking logic (#4332)
* Fix static parser tracking logic

* Add changelog note
2021-11-24 17:26:56 +01:00
Gerda Shank
d046ae0606 [#4253] Support partial parsing of env_vars in metrics definitions (#4322) 2021-11-23 15:02:47 -05:00
Gerda Shank
e8c267275e [#4254] Change some CompilationExceptions to ParsingException in the parser (#4328) 2021-11-23 13:50:00 -05:00
github-actions[bot]
a4951749a8 Bumping version to 1.0.0rc2 (#4321)
* Bumping version to 1.0.0rc2

* Update changelog

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-22 21:26:15 +01:00
Ian Knox
e1a2e8d9f5 Add codes to all log events (re-work of PR #4268) (#4319)
* re-work of old branch
2021-11-22 13:14:33 -06:00
Emily Rockman
f80c78e3a5 add logic to scrub more than str types (#4317) 2021-11-22 12:58:10 -06:00
Emily Rockman
c541eca592 structured logging: add data attributes to json log output (#4301)
* simplified data construction

* fixed missed scrubbing of secrets

* switched to vars()

* scrub entire log line, update how attributes get pulled

* get ahead of serialization errors

* store if data is serialized and modify values instead of a copy of values

* fixed unused import from merge
2021-11-19 15:43:26 -06:00
Nathaniel May
726aba0586 version logging (#4289)
* start adding version logging, noticed some wrong stuff

* fix bad pid and ts

* remove level format on json logs

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2021-11-19 14:53:50 -06:00
Jeremy Cohen
d300addee1 SecretContext for secret env vars, profiles + packages only (#4311)
* SecretContext for secret env vars

* Cleanup exception. Add + edit tests

* Add changelog entry
2021-11-19 19:36:19 +01:00
Kyle Wigley
d5d16f01f4 Fix flags import (#4307) 2021-11-18 14:59:49 -05:00
Kyle Wigley
2cb26e2699 Add supported dbt tasks (#4200) 2021-11-18 14:05:00 -05:00
Nathaniel May
b4793b4f9b Fix adapter failures due to string formatting issues (#4305)
fix adapter failures due to string formatting issues
2021-11-18 12:54:20 -05:00
Gerda Shank
045e70ccf1 [#4298] Fix 'created_at' in ParsedMetric to allow recalculating metrics depends_on refs (#4299) 2021-11-18 09:29:09 -05:00
Jeremy Cohen
ba23395c8e Fix metrics count in compile stats (#4292)
* Fix metrics count in compile stats

* Add changelog entry
2021-11-18 09:28:13 +01:00
Joel Labes
0aacd99168 Get prerelease packages when specifically requested (#4295)
* Get prerelease packages when specifically required. Add test validating it works

* Update CHANGELOG.md
2021-11-18 09:11:49 +01:00
Nathaniel May
e4b5d73dc4 adjust level length for text only (#4303)
adjust level length for text log lines only
2021-11-17 17:32:15 -05:00
Gerda Shank
bd950f687a [#4252] Serialization error when missing quotes in metrics model ref() call (#4287) 2021-11-17 17:14:32 -05:00
Gerda Shank
aea23a488b [#4272] Move validator keyword argument in jinja 'config.get' to after 'default' (#4297) 2021-11-17 17:12:26 -05:00
Jeremy Cohen
22731df07b Fix: default log formatting (#4302)
* Respect log formatting

* PR feedback
2021-11-17 21:10:14 +01:00
Jeremy Cohen
c55be164e6 Separate warnings. Fix duplication (#4291) 2021-11-17 18:01:28 +01:00
kadero
9d73304c1a Allow snapshot defer (#4296)
* Allow snapshot defer

* Update changelog
2021-11-17 16:56:37 +01:00
Nathaniel May
719b2026ab Minor Cleanup of Structured Logging Module (#4266)
* cleanup structured logging module

* update adapter logger to preserve new-style logging formatting
2021-11-16 20:22:11 -05:00
kadero
22416980d1 Avoid errors when missing column in yaml doc (#4285)
* Update postgres__alter_column_comment

* Update changelog

* Add integration test
2021-11-16 13:22:18 +01:00
Gerda Shank
3d28b6704c [#3689] Fix filesystem searcher and tests that mock it (#4271) 2021-11-15 09:46:17 -05:00
Mila Page
5d1b104e1f Feature/3997 profiles test selection flag (#4270)
* Address 3997. Test selection flag can be in profile.yml.

* Per Jerco's 4104 PR unresolved comments, unify i.s. predicate and add env var.

* Couple of flake8 touchups.

* Classier error handling using enum semantics.

* Cherry-pick in part of Gerda's commit to hopefully avoid a future merge conflict.

* Add 3997 to changelog.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2021-11-15 14:07:22 +01:00
Jeremy Cohen
4a8a68049d Try removing dupe logging during integration tests (#4275) 2021-11-15 11:00:29 +01:00
Jeremy Cohen
4b7fd1d46a Update dbt-postgres readme (#4263)
* Update dbt-postgres readme

* Rm redshift references

* Update plugins/postgres/README.md

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

* Update plugins/postgres/README.md

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>

Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
2021-11-12 17:12:00 +01:00
github-actions[bot]
0722922c03 Bumping version to 1.0.0rc1 (#4234)
* Bumping version to 1.0.0rc1

* Update changelog

* Add Dockerfile to bumpversion, update reqs

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-10 14:24:40 +01:00
kadero
40321d7966 Dbt init with provided project name (#4249)
* Dbt init with provided project name

* Update changelog.md

* Fix changelog.md

* Fix typo in help

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-10 11:58:49 +01:00
Nathaniel May
434f3d2678 Merge pull request #4055 from dbt-labs/feature/structured-logging
Add Structured Logging
2021-11-09 17:42:19 -05:00
Jeremy Cohen
6dd9c2c5ba Env var shim to enable legacy logger (#4255)
* Env var shim to reenable logbook

* Rename to ENABLE_LEGACY_LOGGER
2021-11-09 23:04:47 +01:00
Nathaniel May
5e6be1660e configure event logger for integration tests (#4257)
* apply test fixes

* remove presto test
2021-11-09 16:13:13 -05:00
Nathaniel May
31acb95d7a rebased on main and added new partial parsing event 2021-11-09 11:40:18 -05:00
Nathaniel May
683190b711 fixes 2021-11-09 11:26:01 -05:00
Nathaniel May
ebb84c404f postgres adapter to use new logger 2021-11-09 11:26:01 -05:00
Nathaniel May
2ca6ce688b whitespace change 2021-11-09 11:26:01 -05:00
Nathaniel May
a40550b89d std logger for structured logging (#4231)
structured logging powered by the stdlib logger

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
Co-authored-by: Ian Knox <81931810+iknox-fa@users.noreply.github.com>
2021-11-09 11:26:01 -05:00
Ian Knox
b2aea11cdb Struct log for adapter call sites (#4189)
graph call sites for structured logging

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2021-11-09 11:26:01 -05:00
Emily Rockman
43b39fd1aa removed redundant timestamp (#4239) 2021-11-09 11:26:01 -05:00
Emily Rockman
5cc8626e96 updates associated with merging main
- removed 3 new log call sites and replaced with structured logs
- removed 2 unused struct logs
2021-11-09 11:26:01 -05:00
Nathaniel May
f95e9efbc0 use event types in main even before the logger is set up. (#4219) 2021-11-09 11:26:01 -05:00
Nathaniel May
25c974af8c lazy logging in event module (#4210)
* switches on debug level to guard against expensive messages

* adds memoization to msg construction
2021-11-09 11:26:01 -05:00
Emily Rockman
b5c6f09a9e remove unused import (#4217) 2021-11-09 11:26:01 -05:00
Emily Rockman
bd3e623240 test/integration call sites (#4209)
* added struct logging to base

* fixed merge weirdness

* convert to use single type for integration tests

* converted to 3 reusable test types in sep module

* tweak message

* clean up and making test_types complete for future

* fix missed import
2021-11-09 11:26:01 -05:00
Emily Rockman
63343653a9 trivial logger removal (#4216) 2021-11-09 11:26:01 -05:00
Emily Rockman
d8b97c1077 call sites in core/dbt (excluding main.py) (#4202)
* add struct logging to compilation

* add struct logging to tracking

* add struct logging to utils

* add struct logging to exceptions

* fixed some misc errors

* updated to send raw ex, removed resulting circ dep
2021-11-09 11:26:01 -05:00
Emily Rockman
e0b0edaeed deps call sites (#4199)
* add struct logging to base

* add struct logging to git

* add struct logging to deps

* remove blank line

* fixed stray merge error
2021-11-09 11:26:01 -05:00
Emily Rockman
3cafc9e13f task callsites: part 2 (#4188)
* add struct logging to docs serve

* remove merge fluff

* struct logging to seed command

* converting print to use structured logging

* more structured logging print conversion

* pulling apart formatting more

* added struct logging by dissecting printer.py

* add struct logging to runnable

* add struct logging to task init

* fixed formatting

* more formatting and moving things around
2021-11-09 11:26:01 -05:00
Nathaniel May
13f31aed90 scrub the secrets (#4203)
scrub secrets in event module
2021-11-09 11:26:01 -05:00
Nathaniel May
d513491046 Show Exception should trigger a stack trace (#4190) 2021-11-09 11:26:01 -05:00
Emily Rockman
281d2491a5 task call sites part 1 (#4183)
* add struct logging to base.py

* struct logging in run_operation

* add struct logging to base

* add struct logging to clean

* add struct logging to debug

* add struct logging to deps

* fix errors

* add struct logging to run.py

* fixed flake error

* add struct logging to generate

* added debug level stack trace

* fixed flake error

* added struct logging to compile

* added struct logging to freshness

* cleaned up errors

* resolved bug that broke everything

* removed accidental import

* fixed bug with unused args
2021-11-09 11:26:01 -05:00
Emily Rockman
9857e1dd83 parser call sites (#4177)
* convert generic_test to structured logging

* convert macros to structured logging

* add struct logging to most of manifest.py

* add struct logging to models.py

* added struct logging to partial.py

* finished conversion of manifest

* fixing errors

* fixed 1 todo and added another

* fixed bugs from merge
2021-11-09 11:26:01 -05:00
Emily Rockman
6b36b18029 config call sites (#4169)
* update config use structured logging

* WIP

* minor cleanup

* fixed merge error

* added in ShowException

* added todo to remove defaults after dropping 3.6

* removed todo that is obsolete
2021-11-09 11:26:01 -05:00
Ian Knox
d8868c5197 Dataclass compatibility (#4180)
* use __post_init__() instead of fake dataclass member vars
2021-11-09 11:26:01 -05:00
Emily Rockman
b141620125 contracts call sites (#4166)
* first pass adding structured logging
2021-11-09 11:26:01 -05:00
Emily Rockman
51d8440dd4 Change Graph logger call sites (#4165)
graph call sites for structured logging
2021-11-09 11:26:01 -05:00
Nathaniel May
5b2562a919 Client call sites (#4163)
update log call sites with new event system
2021-11-09 11:26:01 -05:00
Nathaniel May
44a9da621e Handle exec info (#4168)
handle exec info
2021-11-09 11:26:01 -05:00
Emily Rockman
69aa6bf964 context call sites (#4164)
* updated context dir to new structured logging
2021-11-09 11:26:01 -05:00
Nathaniel May
f9ef9da110 Initial structured logging work with fire_event (#4137)
add event type modeling and fire_event calls
2021-11-09 11:26:01 -05:00
Nathaniel May
57ae9180c2 init 2021-11-09 11:26:01 -05:00
Jeremy Cohen
efe926d20c Change user instead of pass (#4250) 2021-11-09 13:10:34 +01:00
Jeremy Cohen
1081b8e720 Improve error msg on pip install dbt (#4244) 2021-11-09 10:40:45 +01:00
Kyle Wigley
8205921c4b Update docs (#4241)
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-08 19:47:28 -05:00
Jeremy Cohen
da6c211611 Wrap get_batch_size() in return() (#4240) 2021-11-09 00:46:15 +01:00
Jeremy Cohen
354c1e0d4d Rm py36 tests, pkg metadata, bump reqs (#4223) 2021-11-09 00:19:09 +01:00
Gerda Shank
855419d698 [#4071] Add metrics feature (#4235)
* first cut at supporting metrics definitions

* teach dbt about metrics

* wip

* support partial parsing for metrics

* working on tests

* Fix some tests

* Add partial parsing metrics test

* Fix some more tests

* Update CHANGELOG.md

* Fix partial parsing yaml file to correct model syntax

Co-authored-by: Drew Banin <drew@fishtownanalytics.com>
2021-11-08 17:44:01 -05:00
Gerda Shank
e94fd61b24 Issue message instead of exception when patch does not have a matching node (#4236) 2021-11-08 15:35:14 -05:00
Kyle Wigley
4cf9b73c3d Raise parsing error instead of compilation when extracting test args (#4237) 2021-11-08 14:51:52 -05:00
Jeremy Cohen
8442fb66a5 Reorganize global project (macros) (#4154)
* Add integration tests

* Reorganize + dispatch more global macros

* Reorg materializations subdir

* Move around + document generic tests

* Fix failing tests

* Fix merge conflict

* Grab fix from #4148

* PR feedback

* Fixup

* Add load_relation back, it was nice

* Last few test fixes

* Rm incremental_upsert, now unused

* Add changelog entry
2021-11-08 19:09:54 +01:00
dependabot[bot]
f8cefa3eff Update agate requirement from <1.6.2,>=1.6 to >=1.6,<1.6.4 in /core (#3585)
Updates the requirements on [agate](https://github.com/wireservice/agate) to permit the latest version.
- [Release notes](https://github.com/wireservice/agate/releases)
- [Changelog](https://github.com/wireservice/agate/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/wireservice/agate/compare/1.6.0...1.6.3)

---
updated-dependencies:
- dependency-name: agate
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-08 18:41:04 +01:00
dependabot[bot]
d83e0fb8d8 Bump mashumaro from 2.5 to 2.9 in /core (#4193)
Bumps [mashumaro](https://github.com/Fatal1ty/mashumaro) from 2.5 to 2.9.
- [Release notes](https://github.com/Fatal1ty/mashumaro/releases)
- [Commits](https://github.com/Fatal1ty/mashumaro/compare/v2.5...v2.9)

---
updated-dependencies:
- dependency-name: mashumaro
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-08 18:40:25 +01:00
Gerda Shank
3e9da06365 [#3885] Skip partial parsing if project env vars change (#4212)
* [#3885] Skip partial parsing if project env vars change

* Support env_vars in the profile
2021-11-08 11:51:38 -05:00
Gerda Shank
bda70c988e [#3885] Partially parse when environment variables in schema files change (#4162)
* [#3885] Partially parse when environment variables in schema files
change

* Add documentation for test kwargs

* Add test and fix for schema configs with env_var
2021-11-08 11:28:43 -05:00
Rachel
229e897070 Clears adapters before registering to fix dbt-server cacheing behavior (#4218) 2021-11-08 10:33:39 -05:00
Benoit Perigaud
f20e83a32b Fix/dbt deps retry none answer (#4225)
* Fix issue #4178
Allow retries when the answer is None

* Include fix for #4178
Allow retries when the answer from dbt deps is None

* Add link to the PR

* Update exception and shorten line size

* Add test when dbt deps returns None
2021-11-08 12:30:38 +01:00
Jeremy Cohen
dd84f9a896 Raise error on pip install dbt (#4133)
* Raise error on pip install dbt

* Fix relative path logic

* Do not build dist for dbt

* Fix long descriptions

* Trigger code checks

* Using root readme more trouble than good

* only fail on install, not build

* Edit dist script. Avoid README duplication

* jk, be less clever

* Ignore 'dbt' source distribution when testing

* Add changelog entry

Co-authored-by: Kyle Wigley <kyle@dbtlabs.com>
2021-11-07 17:55:30 +01:00
Mila Page
e6df4266f6 Parser no longer takes greedy. Accepts indirect selection, a bool. (#4104)
* Parser no longer takes greedy. Accepts indirect selection, a bool.

* Remove references to greedy and supporting functions.

* 1. Set testing flag default to True. 2. Improve arg parsing.

* Update tests and add new case for when flag unset.

* Update names and styling to fit test requirements. Add default value for option.

* Correct several failing tests now that default behavior was flipped.

* Tests expect eager on by default.

* All but selector test passing.

* Get integration tests working, add them, and mix in selector syntax.

* Clean code and correct test.

* Add changelog entry

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-07 17:41:56 +01:00
Christophe Oudar
b591e1a2b7 Use common columns for incremental schema changes (#4170)
* Use common columns for incremental schema changes

* on_schema_change:append_new_columns should gracefully handle column removal

* review changes

* Lean approach for `process_schema_changes` to simplify

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-07 17:31:30 +01:00
Jeremy Cohen
3dab058c73 incorporate_indirect_nodes should pass if not needed (#4214)
* Pass incorporate_indirect_nodes if not needed

* Fix flake8

* Add changelog entry
2021-11-05 16:55:58 +01:00
Robert
c7bc6eb812 Add error surfacing for git cloning errors (#4124)
* Add error surfacing for git cloning errors

* Update CHANGELOG.md

* Fix formatting and remove redundant except: raise

* Turn error handling for duplicate packages back on
2021-11-05 10:12:07 +01:00
Jeremy Cohen
c690ecc1fd Fixup changelog (#4206) 2021-11-04 13:54:23 +01:00
Jeremy Cohen
73e272f06e Add get_where_subquery to test namespace (#4197)
* Add get_where_subquery to test namespace

* Add integration test

* Fix test, add comment, smarter approach

* Fix unit tests

* Add changelog entry
2021-11-04 11:53:28 +01:00
leahwicz
95d087b51b Bumping artifact versions for v1 (#4191)
* Bumping artifact versions for v1

* Adding schema in Changelog

* Update CHANGELOG.md

* Update changelog entry

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-11-04 11:19:36 +01:00
Jeremy Cohen
40ae6b6bc8 Any subset, strict or not (#4160) 2021-11-02 17:59:46 +01:00
Jeremy Cohen
fe20534a98 Add extra graph edges for build only (#4143)
* Resolve extra graph edges for build only

* Fix flake8

* Change test to reflect functional change

* Rename method + args. Add changelog entry
2021-11-02 17:41:14 +01:00
leahwicz
dd7af477ac Perf improvement to subgraph selection (#4155)
Perf improvement to get_subset_graph
Co-authored-by: Ian Knox <ian.knox@fishtownanalytics.com>
2021-10-29 16:06:09 -05:00
Jeremy Cohen
178f74b753 Fix comma if only removing columns in on_schema_change: sync_all_columns (#4148)
* Fix comma if only removing in on_schema_change: sync

* Add changelog entry
2021-10-28 10:19:05 +02:00
Emily Rockman
a14f563ec8 port error scrub from 0.21.latests up for main (#4145) 2021-10-27 14:06:22 -05:00
Kyle Wigley
ff109e1806 Expose lib to run tasks and compile/execute sql (#4111) 2021-10-27 13:30:46 -04:00
Frank Cash
5e46694b68 assertRaisesRegexp => assertRaisesRegex (#4136)
* assertRaisesRegexp => assertRaisesRegex

* Update CHANGELOG.md

* Update CHANGELOG.md
2021-10-27 13:05:24 +02:00
Gerda Shank
73af9a56e5 [#3885] Handle env_vars in partial parsing of SQL files (#4101)
* [#3885] Handle env_vars in partial parsing

* Comment method to build env_vars_to_source_files
2021-10-26 11:16:36 -04:00
kadero
d2aa920275 Feature: nullable error_after in source (#3955)
* Add nullable error after feature

* add merge_error_after method

* Fix FreshnessThreshold merged test

* Fix other tests

* Fix merge error after

* Fix test docs generate integration test

* Fix source integration test

* Typo and fix linting.

* Fix mypy test

* More terse way to express merge_freshness_time_thresholds

* Update Changelog.md

* Add integration test

* Fix conflict

* Fix contributing.md

* Fix integration tests

* Move up changelog entry

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2021-10-26 15:23:57 +02:00
Gerda Shank
c34f3530c8 Use platform agnostic code when searching generic test directory (#4131) 2021-10-25 22:46:36 -04:00
267 changed files with 11813 additions and 3547 deletions


@@ -1,5 +1,5 @@
[bumpversion]
current_version = 1.0.0b2
current_version = 1.0.3
parse = (?P<major>\d+)
\.(?P<minor>\d+)
\.(?P<patch>\d+)
@@ -35,3 +35,5 @@ first_value = 1
[bumpversion:file:plugins/postgres/setup.py]
[bumpversion:file:plugins/postgres/dbt/adapters/postgres/__version__.py]
[bumpversion:file:docker/requirements/requirements.txt]
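For readers unfamiliar with this config: `.bumpversion.cfg` drives the version-bump commits at the top of this compare (e.g. "Bumping version to 1.0.3"). A minimal sketch of a local invocation, assuming the bump2version tooling; the repo's own release automation may call it differently:

```bash
# Reads .bumpversion.cfg, checks current_version against the multiline `parse`
# pattern above, and rewrites every [bumpversion:file:...] listed in the config.
pip install bump2version
bumpversion --new-version 1.0.3 patch
```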


@@ -1,6 +1,6 @@
module.exports = ({ context }) => {
const defaultPythonVersion = "3.8";
const supportedPythonVersions = ["3.6", "3.7", "3.8", "3.9"];
const supportedPythonVersions = ["3.7", "3.8", "3.9"];
const supportedAdapters = ["postgres"];
// if PR, generate matrix based on files changed and PR labels


@@ -77,7 +77,7 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: [3.6, 3.7, 3.8] # TODO: support unit testing for python 3.9 (https://github.com/dbt-labs/dbt/issues/3689)
python-version: [3.7, 3.8, 3.9]
env:
TOXENV: "unit"
@@ -167,7 +167,7 @@ jobs:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.6, 3.7, 3.8, 3.9]
python-version: [3.7, 3.8, 3.9]
steps:
- name: Set up Python ${{ matrix.python-version }}
@@ -198,8 +198,9 @@ jobs:
dbt --version
- name: Install source distributions
# ignore dbt-1.0.0, which intentionally raises an error when installed from source
run: |
find ./dist/*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
find ./dist/dbt-[a-z]*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check source distributions
run: |
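To illustrate the tightened glob in the hunk above: the old pattern matched every tarball in `dist/`, including the intentionally broken `dbt-<version>.tar.gz` shim, while the new one requires a letter after `dbt-`. A sketch with assumed dist contents for a 1.0.x build:

```bash
ls dist/
#   dbt-1.0.3.tar.gz  dbt-core-1.0.3.tar.gz  dbt-postgres-1.0.3.tar.gz
find ./dist/*.gz -maxdepth 1 -type f           # also picks up dbt-1.0.3.tar.gz
find ./dist/dbt-[a-z]*.gz -maxdepth 1 -type f  # skips it: needs a letter after "dbt-"
```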

.github/workflows/release.yml (new file, 200 lines)

@@ -0,0 +1,200 @@
# **what?**
# Take the given commit, run unit tests specifically on that sha, build and
# package it, and then release to GitHub and PyPi with that specific build
# **why?**
# Ensure an automated and tested release process
# **when?**
# This will only run manually with a given sha and version
name: Release to GitHub and PyPi
on:
workflow_dispatch:
inputs:
sha:
description: 'The last commit sha in the release'
required: true
version_number:
description: 'The release version number (i.e. 1.0.0b1)'
required: true
defaults:
run:
shell: bash
jobs:
unit:
name: Unit test
runs-on: ubuntu-latest
env:
TOXENV: "unit"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.inputs.sha }}
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox
run: tox
build:
name: build packages
runs-on: ubuntu-latest
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.inputs.sha }}
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install --upgrade setuptools wheel twine check-wheel-contents
pip --version
- name: Build distributions
run: ./scripts/build-dist.sh
- name: Show distributions
run: ls -lh dist/
- name: Check distribution descriptions
run: |
twine check dist/*
- name: Check wheel contents
run: |
check-wheel-contents dist/*.whl --ignore W007,W008
- uses: actions/upload-artifact@v2
with:
name: dist
path: |
dist/
!dist/dbt-${{github.event.inputs.version_number}}.tar.gz
test-build:
name: verify packages
needs: [build, unit]
runs-on: ubuntu-latest
steps:
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install --upgrade wheel
pip --version
- uses: actions/download-artifact@v2
with:
name: dist
path: dist/
- name: Show distributions
run: ls -lh dist/
- name: Install wheel distributions
run: |
find ./dist/*.whl -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check wheel distributions
run: |
dbt --version
- name: Install source distributions
run: |
find ./dist/*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check source distributions
run: |
dbt --version
github-release:
name: GitHub Release
needs: test-build
runs-on: ubuntu-latest
steps:
- uses: actions/download-artifact@v2
with:
name: dist
path: '.'
# Need to set an output variable because env variables can't be taken as input
# This is needed for the next step with releasing to GitHub
- name: Find release type
id: release_type
env:
IS_PRERELEASE: ${{ contains(github.event.inputs.version_number, 'rc') || contains(github.event.inputs.version_number, 'b') }}
run: |
echo ::set-output name=isPrerelease::$IS_PRERELEASE
- name: Creating GitHub Release
uses: softprops/action-gh-release@v1
with:
name: dbt-core v${{github.event.inputs.version_number}}
tag_name: v${{github.event.inputs.version_number}}
prerelease: ${{ steps.release_type.outputs.isPrerelease }}
target_commitish: ${{github.event.inputs.sha}}
body: |
[Release notes](https://github.com/dbt-labs/dbt-core/blob/main/CHANGELOG.md)
files: |
dbt_postgres-${{github.event.inputs.version_number}}-py3-none-any.whl
dbt_core-${{github.event.inputs.version_number}}-py3-none-any.whl
dbt-postgres-${{github.event.inputs.version_number}}.tar.gz
dbt-core-${{github.event.inputs.version_number}}.tar.gz
pypi-release:
name: Pypi release
runs-on: ubuntu-latest
needs: github-release
environment: PypiProd
steps:
- uses: actions/download-artifact@v2
with:
name: dist
path: 'dist'
- name: Publish distribution to PyPI
uses: pypa/gh-action-pypi-publish@v1.4.2
with:
password: ${{ secrets.PYPI_API_TOKEN }}


@@ -0,0 +1,71 @@
# This Action makes a dbt run to sample json structured logs
# and checks that they conform to the currently documented schema.
#
# If this action fails it either means we have unintentionally deviated
# from our documented structured logging schema, or we need to bump the
# version of our structured logging and add new documentation to
# communicate these changes.
name: Structured Logging Schema Check
on:
push:
branches:
- "main"
- "*.latest"
- "releases/*"
pull_request:
workflow_dispatch:
permissions: read-all
jobs:
# run the performance measurements on the current or default branch
test-schema:
name: Test Log Schema
runs-on: ubuntu-latest
env:
# turns warnings into errors
RUSTFLAGS: "-D warnings"
# points tests to the log file
LOG_DIR: "/home/runner/work/dbt-core/dbt-core/logs"
# tells integration tests to output into json format
DBT_LOG_FORMAT: 'json'
steps:
- name: checkout dev
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Setup Python
uses: actions/setup-python@v2.2.2
with:
python-version: "3.8"
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: install dbt
run: pip install -r dev-requirements.txt -r editable-requirements.txt
- name: Set up postgres
uses: ./.github/actions/setup-postgres-linux
- name: ls
run: ls
# integration tests generate a ton of logs in different files. the next step will find them all.
# we actually care if these pass, because the normal test run doesn't usually include many json log outputs
- name: Run integration tests
run: tox -e py38-postgres -- -nauto
# apply our schema tests to every log event from the previous step
# skips any output that isn't valid json
- uses: actions-rs/cargo@v1
with:
command: run
args: --manifest-path test/interop/log_parsing/Cargo.toml
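As a rough shell analogue of the Rust log-parsing step above (the real check lives in test/interop/log_parsing), assuming `jq` is available:

```bash
# Gather every log file the integration tests wrote, keep only lines that parse
# as JSON (`fromjson?` silently drops the rest), and count the surviving events.
find "$LOG_DIR" -type f -name '*.log' -print0 \
  | xargs -0 cat \
  | jq -cR 'fromjson?' \
  | wc -l
```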


@@ -1,4 +1,172 @@
## dbt-core 1.0.0 (Release TBD)
## dbt-core 1.0.3 (TBD)
### Fixes
- Fix bug accessing target fields in deps and clean commands ([#4752](https://github.com/dbt-labs/dbt-core/issues/4752), [#4758](https://github.com/dbt-labs/dbt-core/issues/4758))
## dbt-core 1.0.2 (February 18, 2022)
### Dependencies
- Pin `MarkupSafe==2.0.1`. Deprecation of `soft_unicode` in `MarkupSafe==2.1.0` is not supported by `Jinja2==2.11`
## dbt-core 1.0.2rc1 (February 4, 2022)
### Fixes
- Projects created using `dbt init` now have the correct `seeds` directory created (instead of `data`) ([#4588](https://github.com/dbt-labs/dbt-core/issues/4588), [#4589](https://github.com/dbt-labs/dbt-core/pull/4589))
- Don't require a profile for dbt deps and clean commands ([#4554](https://github.com/dbt-labs/dbt-core/issues/4554), [#4610](https://github.com/dbt-labs/dbt-core/pull/4610))
- Selecting `modified.body` works correctly when a new model is added ([#4570](https://github.com/dbt-labs/dbt-core/issues/4570), [#4631](https://github.com/dbt-labs/dbt-core/pull/4631))
- Fix bug in retry logic for bad response from hub and when there is a bad git tarball download. ([#4577](https://github.com/dbt-labs/dbt-core/issues/4577), [#4579](https://github.com/dbt-labs/dbt-core/issues/4579), [#4609](https://github.com/dbt-labs/dbt-core/pull/4609))
- Restore previous log level (DEBUG) when a test depends on a disabled resource. Still WARN if the resource is missing ([#4594](https://github.com/dbt-labs/dbt-core/issues/4594), [#4647](https://github.com/dbt-labs/dbt-core/pull/4647))
- User wasn't asked for permission to overwrite a profile entry when running init inside an existing project ([#4375](https://github.com/dbt-labs/dbt-core/issues/4375), [#4447](https://github.com/dbt-labs/dbt-core/pull/4447))
- A change in secret environment variables won't trigger a full reparse ([#4650](https://github.com/dbt-labs/dbt-core/issues/4650), [#4665](https://github.com/dbt-labs/dbt-core/pull/4665))
- Adapter compatibility messaging added ([#4438](https://github.com/dbt-labs/dbt-core/pull/4438), [#4565](https://github.com/dbt-labs/dbt-core/pull/4565))
- Add project name validation to `dbt init` ([#4490](https://github.com/dbt-labs/dbt-core/issues/4490),[#4536](https://github.com/dbt-labs/dbt-core/pull/4536))
Contributors:
- [@NiallRees](https://github.com/NiallRees) ([#4447](https://github.com/dbt-labs/dbt-core/pull/4447))
- [@amirkdv](https://github.com/amirkdv) ([#4536](https://github.com/dbt-labs/dbt-core/pull/4536))
- [@nkyuray](https://github.com/nkyuray) ([#4565](https://github.com/dbt-labs/dbt-core/pull/4565))
## dbt-core 1.0.1 (January 03, 2022)
## dbt-core 1.0.1rc1 (December 20, 2021)
### Fixes
- Fix wrong url in the dbt docs overview homepage ([#4442](https://github.com/dbt-labs/dbt-core/pull/4442))
- Fix redefined status param of SQLQueryStatus to typecheck the string which passes on `._message` value of `AdapterResponse` or the `str` value sent by adapter plugin. ([#4463](https://github.com/dbt-labs/dbt-core/pull/4463#issuecomment-990174166))
- Fix `DepsStartPackageInstall` event to use package name instead of version number. ([#4482](https://github.com/dbt-labs/dbt-core/pull/4482))
- Reimplement log message to use adapter name instead of the object method. ([#4501](https://github.com/dbt-labs/dbt-core/pull/4501))
- Issue better error message for incompatible schemas ([#4470](https://github.com/dbt-labs/dbt-core/issues/4470), [#4497](https://github.com/dbt-labs/dbt-core/pull/4497))
- Remove secrets from error related to packages. ([#4507](https://github.com/dbt-labs/dbt-core/pull/4507))
- Prevent coercion of boolean values (`True`, `False`) to numeric values (`0`, `1`) in query results ([#4511](https://github.com/dbt-labs/dbt-core/issues/4511), [#4512](https://github.com/dbt-labs/dbt-core/pull/4512))
- Fix error with an env_var in a project hook ([#4523](https://github.com/dbt-labs/dbt-core/issues/4523), [#4524](https://github.com/dbt-labs/dbt-core/pull/4524))
### Docs
- Fix missing data on exposures in docs ([#4467](https://github.com/dbt-labs/dbt-core/issues/4467))
Contributors:
- [remoyson](https://github.com/remoyson) ([#4442](https://github.com/dbt-labs/dbt-core/pull/4442))
## dbt-core 1.0.0 (December 3, 2021)
### Fixes
- Configure the CLI logger destination to use stdout instead of stderr ([#4368](https://github.com/dbt-labs/dbt-core/pull/4368))
- Make the size of `EVENT_HISTORY` configurable, via `EVENT_BUFFER_SIZE` global config ([#4411](https://github.com/dbt-labs/dbt-core/pull/4411), [#4416](https://github.com/dbt-labs/dbt-core/pull/4416))
- Change type of `log_format` in `profiles.yml` user config to be string, not boolean ([#4394](https://github.com/dbt-labs/dbt-core/pull/4394))
### Under the hood
- Only log cache events if `LOG_CACHE_EVENTS` is enabled, and disable by default. This restores previous behavior ([#4369](https://github.com/dbt-labs/dbt-core/pull/4369))
- Move event codes to be a top-level attribute of JSON-formatted logs, rather than nested in `data` ([#4381](https://github.com/dbt-labs/dbt-core/pull/4381))
- Fix failing integration test on Windows ([#4380](https://github.com/dbt-labs/dbt-core/pull/4380))
- Clean up warning messages for `clean` + `deps` ([#4366](https://github.com/dbt-labs/dbt-core/pull/4366))
- Use RFC3339 timestamps for log messages ([#4384](https://github.com/dbt-labs/dbt-core/pull/4384))
- Different text output for console (info) and file (debug) logs ([#4379](https://github.com/dbt-labs/dbt-core/pull/4379), [#4418](https://github.com/dbt-labs/dbt-core/pull/4418))
- Remove unused events. More structured `ConcurrencyLine`. Replace `\n` message starts/ends with `EmptyLine` events, and exclude `EmptyLine` from JSON-formatted output ([#4388](https://github.com/dbt-labs/dbt-core/pull/4388))
- Update `events` module README ([#4395](https://github.com/dbt-labs/dbt-core/pull/4395))
- Rework approach to JSON serialization for events with non-standard properties ([#4396](https://github.com/dbt-labs/dbt-core/pull/4396))
- Update legacy logger file name to `dbt.log.legacy` ([#4402](https://github.com/dbt-labs/dbt-core/pull/4402))
- Rollover `dbt.log` at 10 MB, and keep up to 5 backups, restoring previous behavior ([#4405](https://github.com/dbt-labs/dbt-core/pull/4405))
- Use reference keys instead of full relation objects in cache events ([#4410](https://github.com/dbt-labs/dbt-core/pull/4410))
- Add `node_type` contextual info to more events ([#4378](https://github.com/dbt-labs/dbt-core/pull/4378))
- Make `materialized` config optional in `node_type` ([#4417](https://github.com/dbt-labs/dbt-core/pull/4417))
- Stringify exception in `GenericExceptionOnRun` to support JSON serialization ([#4424](https://github.com/dbt-labs/dbt-core/pull/4424))
- Add "interop" tests for machine consumption of structured log output ([#4327](https://github.com/dbt-labs/dbt-core/pull/4327))
- Relax version specifier for `dbt-extractor` to `~=0.4.0`, to support compiled wheels for additional architectures when available ([#4427](https://github.com/dbt-labs/dbt-core/pull/4427))
## dbt-core 1.0.0rc3 (November 30, 2021)
### Fixes
- Support partial parsing of env_vars in metrics ([#4253](https://github.com/dbt-labs/dbt-core/issues/4253), [#4322](https://github.com/dbt-labs/dbt-core/pull/4322))
- Fix typo in `UnparsedSourceDefinition.__post_serialize__` ([#3545](https://github.com/dbt-labs/dbt-core/issues/3545), [#4349](https://github.com/dbt-labs/dbt-core/pull/4349))
### Under the hood
- Change some CompilationExceptions to ParsingExceptions ([#4254](https://github.com/dbt-labs/dbt-core/issues/4254), [#4328](https://github.com/dbt-labs/dbt-core/pull/4328))
- Reorder logic for static parser sampling to speed up model parsing ([#4332](https://github.com/dbt-labs/dbt-core/pull/4332))
- Use more augmented assignment statements ([#4315](https://github.com/dbt-labs/dbt-core/issues/4315), [#4331](https://github.com/dbt-labs/dbt-core/pull/4331))
- Adjust logic when finding approximate matches for models and tests ([#3835](https://github.com/dbt-labs/dbt-core/issues/3835), [#4076](https://github.com/dbt-labs/dbt-core/pull/4076))
- Restore small previous behaviors for logging: JSON formatting for first few events; `WARN`-level stdout for `list` task; include tracking events in `dbt.log` ([#4341](https://github.com/dbt-labs/dbt-core/pull/4341))
Contributors:
- [@sarah-weatherbee](https://github.com/sarah-weatherbee) ([#4331](https://github.com/dbt-labs/dbt-core/pull/4331))
- [@emilieschario](https://github.com/emilieschario) ([#4076](https://github.com/dbt-labs/dbt-core/pull/4076))
- [@sneznaj](https://github.com/sneznaj) ([#4349](https://github.com/dbt-labs/dbt-core/pull/4349))
## dbt-core 1.0.0rc2 (November 22, 2021)
### Breaking changes
- Restrict secret env vars (prefixed `DBT_ENV_SECRET_`) to `profiles.yml` + `packages.yml` _only_. Raise an exception if a secret env var is used elsewhere ([#4310](https://github.com/dbt-labs/dbt-core/issues/4310), [#4311](https://github.com/dbt-labs/dbt-core/pull/4311))
- Reorder arguments to `config.get()` so that `default` is second ([#4273](https://github.com/dbt-labs/dbt-core/issues/4273), [#4297](https://github.com/dbt-labs/dbt-core/pull/4297))
### Features
- Avoid error when missing column in YAML description ([#4151](https://github.com/dbt-labs/dbt-core/issues/4151), [#4285](https://github.com/dbt-labs/dbt-core/pull/4285))
- Allow `--defer` flag to `dbt snapshot` ([#4110](https://github.com/dbt-labs/dbt-core/issues/4110), [#4296](https://github.com/dbt-labs/dbt-core/pull/4296))
- Install prerelease packages when `version` explicitly references a prerelease version, regardless of `install-prerelease` status ([#4243](https://github.com/dbt-labs/dbt-core/issues/4243), [#4295](https://github.com/dbt-labs/dbt-core/pull/4295))
- Add data attributes to json log messages ([#4301](https://github.com/dbt-labs/dbt-core/pull/4301))
- Add event codes to all log events ([#4319](https://github.com/dbt-labs/dbt-core/pull/4319))
### Fixes
- Fix serialization error with missing quotes in metrics model ref ([#4252](https://github.com/dbt-labs/dbt-core/issues/4252), [#4287](https://github.com/dbt-labs/dbt-core/pull/4287))
- Correct definition of 'created_at' in ParsedMetric nodes ([#4298](https://github.com/dbt-labs/dbt-core/issues/4298), [#4299](https://github.com/dbt-labs/dbt-core/pull/4299))
- Allow specifying default in Jinja config.get with default keyword ([#4273](https://github.com/dbt-labs/dbt-core/issues/4273), [#4297](https://github.com/dbt-labs/dbt-core/pull/4297))
### Under the hood
- Add --indirect-selection parameter to profiles.yml and builtin DBT_ env vars; stringified parameter to enable multi-modal use ([#3997](https://github.com/dbt-labs/dbt-core/issues/3997), [#4270](https://github.com/dbt-labs/dbt-core/pull/4270))
- Fix filesystem searcher test failure on Python 3.9 ([#3689](https://github.com/dbt-labs/dbt-core/issues/3689), [#4271](https://github.com/dbt-labs/dbt-core/pull/4271))
- Clean up deprecation warnings shown for `dbt_project.yml` config renames ([#4276](https://github.com/dbt-labs/dbt-core/issues/4276), [#4291](https://github.com/dbt-labs/dbt-core/pull/4291))
- Fix metrics count in compiled project stats ([#4290](https://github.com/dbt-labs/dbt-core/issues/4290), [#4292](https://github.com/dbt-labs/dbt-core/pull/4292))
- First pass at supporting more dbt tasks via python lib ([#4200](https://github.com/dbt-labs/dbt-core/pull/4200))
Contributors:
- [@kadero](https://github.com/kadero) ([#4285](https://github.com/dbt-labs/dbt-core/pull/4285), [#4296](https://github.com/dbt-labs/dbt-core/pull/4296))
- [@joellabes](https://github.com/joellabes) ([#4295](https://github.com/dbt-labs/dbt-core/pull/4295))
## dbt-core 1.0.0rc1 (November 10, 2021)
### Breaking changes
- Replace `greedy` flag/property for test selection with `indirect_selection: eager/cautious` flag/property. Set to `eager` by default. **Note:** This reverts test selection to its pre-v0.20 behavior by default. `dbt test -s my_model` _will_ select multi-parent tests, such as `relationships`, that depend on unselected resources. To achieve the behavior change in v0.20 + v0.21, set `--indirect-selection=cautious` on the CLI or `indirect_selection: cautious` in yaml selectors. ([#4082](https://github.com/dbt-labs/dbt-core/issues/4082), [#4104](https://github.com/dbt-labs/dbt-core/pull/4104))
- In v1.0.0, **`pip install dbt` will raise an explicit error.** Instead, please use `pip install dbt-<adapter>` (to use dbt with that database adapter), or `pip install dbt-core` (for core functionality). For parity with the previous behavior of `pip install dbt`, you can use: `pip install dbt-core dbt-postgres dbt-redshift dbt-snowflake dbt-bigquery` ([#4100](https://github.com/dbt-labs/dbt-core/issues/4100), [#4133](https://github.com/dbt-labs/dbt-core/pull/4133))
- Reorganize the `global_project` (macros) into smaller files with clearer names. Remove unused global macros: `column_list`, `column_list_for_create_table`, `incremental_upsert` ([#4154](https://github.com/dbt-labs/dbt-core/pull/4154))
- Introduce structured event interface, and begin conversion of all legacy logging ([#3359](https://github.com/dbt-labs/dbt-core/issues/3359), [#4055](https://github.com/dbt-labs/dbt-core/pull/4055))
- **This is a breaking change for adapter plugins, requiring a very simple migration.** See [`events` module README](core/dbt/events/README.md#adapter-maintainers) for details.
- If you maintain another kind of dbt-core plugin that makes heavy use of legacy logging, and you need time to cut over to the new event interface, you can re-enable the legacy logger via an environment variable shim, `DBT_ENABLE_LEGACY_LOGGER=True`. Be advised that we will remove this capability in a future version of dbt-core.
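To illustrate the structured-event change above: for adapter code, the migration generally amounts to swapping direct logger calls for typed event objects passed to `fire_event`. A minimal sketch, using the `NewConnection` event that appears in the adapter connection-manager diff further down in this compare view (the connection name and type values here are placeholders):

```python
# Minimal sketch of the new call style; `fire_event` and `NewConnection`
# are the interfaces shown in the diffs below, the argument values are placeholders.
from dbt.events.functions import fire_event
from dbt.events.types import NewConnection

# legacy style (removed):
#   logger.debug('Acquiring new {} connection "{}".'.format(conn_type, conn_name))
# structured-event style (added):
fire_event(NewConnection(conn_name="master", conn_type="postgres"))
```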
### Features
- Allow nullable `error_after` in source freshness ([#3874](https://github.com/dbt-labs/dbt-core/issues/3874), [#3955](https://github.com/dbt-labs/dbt-core/pull/3955))
- Add `metrics` nodes ([#4071](https://github.com/dbt-labs/dbt-core/issues/4071), [#4235](https://github.com/dbt-labs/dbt-core/pull/4235))
- Add support for `dbt init <project_name>`, and support for `skip_profile_setup` argument (`dbt init -s`) ([#4156](https://github.com/dbt-labs/dbt-core/issues/4156), [#4249](https://github.com/dbt-labs/dbt-core/pull/4249))
### Fixes
- Change unit tests to use `assertRaisesRegex` instead of the deprecated `assertRaisesRegexp` ([#4132](https://github.com/dbt-labs/dbt-core/issues/4132), [#4136](https://github.com/dbt-labs/dbt-core/pull/4136))
- Allow retries when the package registry response during `dbt deps` is `None` ([#4178](https://github.com/dbt-labs/dbt-core/issues/4178), [#4225](https://github.com/dbt-labs/dbt-core/pull/4225))
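For context on the retry fix above: the package registry client (see the registry changes shown further down in this compare view) now raises when a response body deserializes to `None`, so that a retry wrapper can attempt the request again. A rough, hypothetical illustration of that pattern; dbt's actual helper (`_connection_exception_retry`) is not reproduced here:

```python
# Hypothetical sketch of the retry-on-None idea; not dbt's actual helper.
import time
import requests

def get_json_with_retries(url, attempts=3, timeout=30):
    for attempt in range(attempts):
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        body = resp.json()
        if body is not None:      # a None body is treated as a transient registry error
            return body
        time.sleep(2 ** attempt)  # simple backoff before the next attempt
    raise requests.exceptions.ContentDecodingError(
        'Request error: The response is None', response=resp
    )
```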
### Docs
- Fix non-alphabetical sort of Source Tables in source overview page ([docs#81](https://github.com/dbt-labs/dbt-docs/issues/81), [docs#218](https://github.com/dbt-labs/dbt-docs/pull/218))
- Add title tag to node elements in tree ([docs#202](https://github.com/dbt-labs/dbt-docs/issues/202), [docs#203](https://github.com/dbt-labs/dbt-docs/pull/203))
- Account for test rename: `schema` &rarr; `generic`, `data` &rarr; `singular`. Use `test_metadata` instead of `schema`/`data` tags to differentiate ([docs#216](https://github.com/dbt-labs/dbt-docs/issues/216), [docs#222](https://github.com/dbt-labs/dbt-docs/pull/222))
- Add `metrics` ([core#4235](https://github.com/dbt-labs/dbt-core/issues/4235), [docs#223](https://github.com/dbt-labs/dbt-docs/pull/223))
### Under the hood
- Bump artifact schema versions for 1.0.0: manifest v4, run results v4, sources v3. Notable changes: added `metrics` nodes; schema test + data test nodes are renamed to generic test + singular test nodes; freshness threshold default values ([#4191](https://github.com/dbt-labs/dbt-core/pull/4191))
- Speed up node selection by skipping `incorporate_indirect_nodes` if not needed ([#4213](https://github.com/dbt-labs/dbt-core/issues/4213), [#4214](https://github.com/dbt-labs/dbt-core/pull/4214))
- When `on_schema_change` is set, pass common columns as `dest_columns` in incremental merge macros ([#4144](https://github.com/dbt-labs/dbt-core/issues/4144), [#4170](https://github.com/dbt-labs/dbt-core/pull/4170))
- Clear adapters before registering in `lib` module config generation ([#4218](https://github.com/dbt-labs/dbt-core/pull/4218))
- Remove official support for python 3.6, which is reaching end of life on December 23, 2021 ([#4134](https://github.com/dbt-labs/dbt-core/issues/4134), [#4223](https://github.com/dbt-labs/dbt-core/pull/4223))
Contributors:
- [@kadero](https://github.com/kadero) ([#3955](https://github.com/dbt-labs/dbt-core/pull/3955), [#4249](https://github.com/dbt-labs/dbt-core/pull/4249))
- [@frankcash](https://github.com/frankcash) ([#4136](https://github.com/dbt-labs/dbt-core/pull/4136))
- [@Kayrnt](https://github.com/Kayrnt) ([#4170](https://github.com/dbt-labs/dbt-core/pull/4170))
- [@VersusFacit](https://github.com/VersusFacit) ([#4104](https://github.com/dbt-labs/dbt-core/pull/4104))
- [@joellabes](https://github.com/joellabes) ([#4104](https://github.com/dbt-labs/dbt-core/pull/4104))
- [@b-per](https://github.com/b-per) ([#4225](https://github.com/dbt-labs/dbt-core/pull/4225))
- [@salmonsd](https://github.com/salmonsd) ([docs#218](https://github.com/dbt-labs/dbt-docs/pull/218))
- [@miike](https://github.com/miike) ([docs#203](https://github.com/dbt-labs/dbt-docs/pull/203))
## dbt-core 1.0.0b2 (October 25, 2021)
@@ -15,12 +183,16 @@
- `dbt init` is now interactive, generating profiles.yml when run inside an existing project ([#3625](https://github.com/dbt-labs/dbt/pull/3625))
### Under the hood
- Fix intermittent errors in partial parsing tests ([#4060](https://github.com/dbt-labs/dbt-core/issues/4060), [#4068](https://github.com/dbt-labs/dbt-core/pull/4068))
- Make finding disabled nodes more consistent ([#4069](https://github.com/dbt-labs/dbt-core/issues/4069), [#4073](https://github.com/dbt-labs/dbt-core/pull/4073))
- Remove connection from `render_with_context` during parsing, thereby removing misleading log message ([#3137](https://github.com/dbt-labs/dbt-core/issues/3137), [#4062](https://github.com/dbt-labs/dbt-core/pull/4062))
- Wait for postgres docker container to be ready in `setup_db.sh`. ([#3876](https://github.com/dbt-labs/dbt-core/issues/3876), [#3908](https://github.com/dbt-labs/dbt-core/pull/3908))
- Prefer macros defined in the project over the ones in a package by default ([#4106](https://github.com/dbt-labs/dbt-core/issues/4106), [#4114](https://github.com/dbt-labs/dbt-core/pull/4114))
- Dependency updates ([#4079](https://github.com/dbt-labs/dbt-core/pull/4079), [#3532](https://github.com/dbt-labs/dbt-core/pull/3532))
- Schedule partial parsing for SQL files with env_var changes ([#3885](https://github.com/dbt-labs/dbt-core/issues/3885), [#4101](https://github.com/dbt-labs/dbt-core/pull/4101))
- Schedule partial parsing for schema files with env_var changes ([#3885](https://github.com/dbt-labs/dbt-core/issues/3885), [#4162](https://github.com/dbt-labs/dbt-core/pull/4162))
- Skip partial parsing when env_vars change in dbt_project or profile ([#3885](https://github.com/dbt-labs/dbt-core/issues/3885), [#4212](https://github.com/dbt-labs/dbt-core/pull/4212))
Contributors:
- [@sungchun12](https://github.com/sungchun12) ([#4017](https://github.com/dbt-labs/dbt/pull/4017))
@@ -54,7 +226,6 @@ Contributors:
- Fixed bug with `error_if` test option ([#4070](https://github.com/dbt-labs/dbt-core/pull/4070))
### Under the hood
- Enact deprecation for `materialization-return` and replace deprecation warning with an exception. ([#3896](https://github.com/dbt-labs/dbt-core/issues/3896))
- Build catalog for only relational, non-ephemeral nodes in the graph ([#3920](https://github.com/dbt-labs/dbt-core/issues/3920))
- Enact deprecation to remove the `release` arg from the `execute_macro` method. ([#3900](https://github.com/dbt-labs/dbt-core/issues/3900))
@@ -67,6 +238,7 @@ Contributors:
- Update the default project paths to be `analysis-paths = ['analyses']` and `test-paths = ['tests']`. Also have starter project set `analysis-paths: ['analyses']` from now on. ([#2659](https://github.com/dbt-labs/dbt-core/issues/2659))
- Define the data type of `sources` as an array of arrays of string in the manifest artifacts. ([#3966](https://github.com/dbt-labs/dbt-core/issues/3966), [#3967](https://github.com/dbt-labs/dbt-core/pull/3967))
- Marked `source-paths` and `data-paths` as deprecated keys in `dbt_project.yml` in favor of `model-paths` and `seed-paths` respectively. ([#1607](https://github.com/dbt-labs/dbt-core/issues/1607))
- Surface git errors to `stdout` when cloning dbt packages from Github. ([#3167](https://github.com/dbt-labs/dbt-core/issues/3167))
Contributors:
@@ -75,14 +247,25 @@ Contributors:
- [@samlader](https://github.com/samlader) ([#3993](https://github.com/dbt-labs/dbt-core/pull/3993))
- [@yu-iskw](https://github.com/yu-iskw) ([#3967](https://github.com/dbt-labs/dbt-core/pull/3967))
- [@laxjesse](https://github.com/laxjesse) ([#4019](https://github.com/dbt-labs/dbt-core/pull/4019))
- [@gitznik](https://github.com/Gitznik) ([#4124](https://github.com/dbt-labs/dbt-core/pull/4124))
## dbt 0.21.1 (November 29, 2021)
### Fixes
- Add `get_where_subquery` to test macro namespace, fixing custom generic tests that rely on introspecting the `model` arg at parse time ([#4195](https://github.com/dbt-labs/dbt/issues/4195), [#4197](https://github.com/dbt-labs/dbt/pull/4197))
## dbt 0.21.1rc1 (November 03, 2021)
### Fixes
- Performance: Use child_map to find tests for nodes in resolve_graph ([#4012](https://github.com/dbt-labs/dbt/issues/4012), [#4022](https://github.com/dbt-labs/dbt/pull/4022))
- Switch `unique_field` from abstractproperty to optional property. Add docstring ([#4025](https://github.com/dbt-labs/dbt/issues/4025), [#4028](https://github.com/dbt-labs/dbt/pull/4028))
- Include only relational nodes in `database_schema_set` ([#4063](https://github.com/dbt-labs/dbt-core/issues/4063), [#4077](https://github.com/dbt-labs/dbt-core/pull/4077))
- Added support for tests on databases that lack real boolean types. ([#4084](https://github.com/dbt-labs/dbt-core/issues/4084))
- Scrub secrets coming from `CommandError`s so they don't get exposed in logs. ([#4138](https://github.com/dbt-labs/dbt-core/pull/4138))
- Syntax fix in `alter_relation_add_remove_columns` if only removing columns in `on_schema_change: sync_all_columns` ([#4147](https://github.com/dbt-labs/dbt-core/issues/4147))
- Increase performance of graph subset selection ([#4135](https://github.com/dbt-labs/dbt-core/issues/4135),[#4155](https://github.com/dbt-labs/dbt-core/pull/4155))
- Add downstream test edges for `build` task _only_. Restore previous graph construction, compilation performance, and node selection behavior (`test+`) for all other tasks ([#4135](https://github.com/dbt-labs/dbt-core/issues/4135), [#4143](https://github.com/dbt-labs/dbt-core/pull/4143))
- Don't require a strict/proper subset when adding testing edges to specialized graph for `build` ([#4135](https://github.com/dbt-labs/dbt-core/issues/4135), [#4160](https://github.com/dbt-labs/dbt-core/pull/4160))
Contributors:
- [@ljhopkins2](https://github.com/ljhopkins2) ([#4077](https://github.com/dbt-labs/dbt-core/pull/4077))
@@ -210,7 +393,7 @@ Contributors:
- [@jmriego](https://github.com/jmriego) ([#3526](https://github.com/dbt-labs/dbt-core/pull/3526))
- [@danielefrigo](https://github.com/danielefrigo) ([#3547](https://github.com/dbt-labs/dbt-core/pull/3547))
## dbt 0.20.2 (September 07, 2021)
### Under the hood

core/README.md Normal file
View File

@@ -0,0 +1,42 @@
<p align="center">
<img src="https://raw.githubusercontent.com/dbt-labs/dbt-core/fa1ea14ddfb1d5ae319d5141844910dd53ab2834/etc/dbt-core.svg" alt="dbt logo" width="750"/>
</p>
<p align="center">
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
</a>
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
</a>
</p>
**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
![architecture](https://raw.githubusercontent.com/dbt-labs/dbt-core/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-arch.png)
## Understanding dbt
Analysts using dbt can transform their data by simply writing select statements, while dbt handles turning these statements into tables and views in a data warehouse.
These select statements, or "models", form a dbt project. Models frequently build on top of one another; dbt makes it easy to [manage relationships](https://docs.getdbt.com/docs/ref) between models and [visualize these relationships](https://docs.getdbt.com/docs/documentation), as well as assure the quality of your transformations through [testing](https://docs.getdbt.com/docs/testing).
![dbt dag](https://raw.githubusercontent.com/dbt-labs/dbt-core/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/etc/dbt-dag.png)
## Getting started
- [Install dbt](https://docs.getdbt.com/docs/installation)
- Read the [introduction](https://docs.getdbt.com/docs/introduction/) and [viewpoint](https://docs.getdbt.com/docs/about/viewpoint/)
## Join the dbt Community
- Be part of the conversation in the [dbt Community Slack](http://community.getdbt.com/)
- Read more on the [dbt Community Discourse](https://discourse.getdbt.com)
## Reporting bugs and contributing code
- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/dbt-labs/dbt-core/issues/new)
- Want to help us build dbt? Check out the [Contributing Guide](https://github.com/dbt-labs/dbt-core/blob/HEAD/CONTRIBUTING.md)
## Code of Conduct
Everyone interacting in the dbt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the [dbt Code of Conduct](https://community.getdbt.com/code-of-conduct).

View File

@@ -18,7 +18,17 @@ from dbt.contracts.graph.manifest import Manifest
from dbt.adapters.base.query_headers import (
MacroQueryStringSetter,
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import (
NewConnection,
ConnectionReused,
ConnectionLeftOpen,
ConnectionLeftOpen2,
ConnectionClosed,
ConnectionClosed2,
Rollback,
RollbackFailed
)
from dbt import flags
@@ -136,14 +146,10 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
if conn.name == conn_name and conn.state == 'open':
return conn
logger.debug(
'Acquiring new {} connection "{}".'.format(self.TYPE, conn_name))
fire_event(NewConnection(conn_name=conn_name, conn_type=self.TYPE))
if conn.state == 'open':
logger.debug(
'Re-using an available connection from the pool (formerly {}).'
.format(conn.name)
)
fire_event(ConnectionReused(conn_name=conn_name))
else:
conn.handle = LazyHandle(self.open)
@@ -190,11 +196,9 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
with self.lock:
for connection in self.thread_connections.values():
if connection.state not in {'closed', 'init'}:
logger.debug("Connection '{}' was left open."
.format(connection.name))
fire_event(ConnectionLeftOpen(conn_name=connection.name))
else:
logger.debug("Connection '{}' was properly closed."
.format(connection.name))
fire_event(ConnectionClosed(conn_name=connection.name))
self.close(connection)
# garbage collect these connections
@@ -220,20 +224,17 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
try:
connection.handle.rollback()
except Exception:
logger.debug(
'Failed to rollback {}'.format(connection.name),
exc_info=True
)
fire_event(RollbackFailed(conn_name=connection.name))
@classmethod
def _close_handle(cls, connection: Connection) -> None:
"""Perform the actual close operation."""
# On windows, sometimes connection handles don't have a close() attr.
if hasattr(connection.handle, 'close'):
logger.debug(f'On {connection.name}: Close')
fire_event(ConnectionClosed2(conn_name=connection.name))
connection.handle.close()
else:
logger.debug(f'On {connection.name}: No close available on handle')
fire_event(ConnectionLeftOpen2(conn_name=connection.name))
@classmethod
def _rollback(cls, connection: Connection) -> None:
@@ -244,7 +245,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
f'"{connection.name}", but it does not have one open!'
)
logger.debug(f'On {connection.name}: ROLLBACK')
fire_event(Rollback(conn_name=connection.name))
cls._rollback_handle(connection)
connection.transaction_open = False
@@ -256,7 +257,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
return connection
if connection.transaction_open and connection.handle:
logger.debug('On {}: ROLLBACK'.format(connection.name))
fire_event(Rollback(conn_name=connection.name))
cls._rollback_handle(connection)
connection.transaction_open = False

View File

@@ -29,7 +29,8 @@ from dbt.contracts.graph.compiled import (
from dbt.contracts.graph.manifest import Manifest, MacroManifest
from dbt.contracts.graph.parsed import ParsedSeedNode
from dbt.exceptions import warn_or_error
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import CacheMiss, ListRelations
from dbt.utils import filter_null_values, executor
from dbt.adapters.base.connections import Connection, AdapterResponse
@@ -38,7 +39,7 @@ from dbt.adapters.base.relation import (
ComponentName, BaseRelation, InformationSchema, SchemaSearchMap
)
from dbt.adapters.base import Column as BaseColumn
from dbt.adapters.cache import RelationsCache
from dbt.adapters.cache import RelationsCache, _make_key
SeedModel = Union[ParsedSeedNode, CompiledSeedNode]
@@ -288,9 +289,12 @@ class BaseAdapter(metaclass=AdapterMeta):
"""Check if the schema is cached, and by default logs if it is not."""
if (database, schema) not in self.cache:
logger.debug(
'On "{}": cache miss for schema "{}.{}", this is inefficient'
.format(self.nice_connection_name(), database, schema)
fire_event(
CacheMiss(
conn_name=self.nice_connection_name(),
database=database,
schema=schema
)
)
return False
else:
@@ -672,9 +676,12 @@ class BaseAdapter(metaclass=AdapterMeta):
relations = self.list_relations_without_caching(
schema_relation
)
fire_event(ListRelations(
database=database,
schema=schema,
relations=[_make_key(x) for x in relations]
))
logger.debug('with database={}, schema={}, relations={}'
.format(database, schema, relations))
return relations
def _make_match_kwargs(
@@ -968,10 +975,10 @@ class BaseAdapter(metaclass=AdapterMeta):
'dbt could not find a macro with the name "{}" in {}'
.format(macro_name, package_name)
)
# This causes a reference cycle, as generate_runtime_macro()
# This causes a reference cycle, as generate_runtime_macro_context()
# ends up calling get_adapter, so the import has to be here.
from dbt.context.providers import generate_runtime_macro
macro_context = generate_runtime_macro(
from dbt.context.providers import generate_runtime_macro_context
macro_context = generate_runtime_macro_context(
macro=macro,
config=self.config,
manifest=manifest,

View File

@@ -89,7 +89,10 @@ class BaseRelation(FakeAPIObject, Hashable):
if not self._is_exactish_match(k, v):
exact_match = False
if self.path.get_lowered_part(k) != v.lower():
if (
self.path.get_lowered_part(k).strip(self.quote_character) !=
v.lower().strip(self.quote_character)
):
approximate_match = False
if approximate_match and not exact_match:

View File

@@ -1,23 +1,27 @@
from collections import namedtuple
from copy import deepcopy
from typing import List, Iterable, Optional, Dict, Set, Tuple, Any
import threading
from copy import deepcopy
from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
from dbt.logger import CACHE_LOGGER as logger
from dbt.utils import lowercase
from dbt.adapters.reference_keys import _make_key, _ReferenceKey
import dbt.exceptions
_ReferenceKey = namedtuple('_ReferenceKey', 'database schema identifier')
def _make_key(relation) -> _ReferenceKey:
"""Make _ReferenceKeys with lowercase values for the cache so we don't have
to keep track of quoting
"""
# databases and schemas can both be None
return _ReferenceKey(lowercase(relation.database),
lowercase(relation.schema),
lowercase(relation.identifier))
from dbt.events.functions import fire_event, fire_event_if
from dbt.events.types import (
AddLink,
AddRelation,
DropCascade,
DropMissingRelation,
DropRelation,
DumpAfterAddGraph,
DumpAfterRenameSchema,
DumpBeforeAddGraph,
DumpBeforeRenameSchema,
RenameSchema,
TemporaryRelation,
UncachedRelation,
UpdateReference
)
import dbt.flags as flags
from dbt.utils import lowercase
def dot_separated(key: _ReferenceKey) -> str:
@@ -157,12 +161,6 @@ class _CachedRelation:
return [dot_separated(r) for r in self.referenced_by]
def lazy_log(msg, func):
if logger.disabled:
return
logger.debug(msg.format(func()))
class RelationsCache:
"""A cache of the relations known to dbt. Keeps track of relationships
declared between tables and handles renames/drops as a real database would.
@@ -278,6 +276,7 @@ class RelationsCache:
referenced.add_reference(dependent)
# TODO: Is this dead code? I can't seem to find it grepping the codebase.
def add_link(self, referenced, dependent):
"""Add a link between two relations to the database. If either relation
does not exist, it will be added as an "external" relation.
@@ -297,11 +296,7 @@ class RelationsCache:
# if we have not cached the referenced schema at all, we must be
# referring to a table outside our control. There's no need to make
# a link - we will never drop the referenced relation during a run.
logger.debug(
'{dep!s} references {ref!s} but {ref.database}.{ref.schema} '
'is not in the cache, skipping assumed external relation'
.format(dep=dependent, ref=ref_key)
)
fire_event(UncachedRelation(dep_key=dependent, ref_key=ref_key))
return
if ref_key not in self.relations:
# Insert a dummy "external" relation.
@@ -317,9 +312,7 @@ class RelationsCache:
type=referenced.External
)
self.add(dependent)
logger.debug(
'adding link, {!s} references {!s}'.format(dep_key, ref_key)
)
fire_event(AddLink(dep_key=dep_key, ref_key=ref_key))
with self.lock:
self._add_link(ref_key, dep_key)
@@ -330,14 +323,12 @@ class RelationsCache:
:param BaseRelation relation: The underlying relation.
"""
cached = _CachedRelation(relation)
logger.debug('Adding relation: {!s}'.format(cached))
lazy_log('before adding: {!s}', self.dump_graph)
fire_event(AddRelation(relation=_make_key(cached)))
fire_event_if(flags.LOG_CACHE_EVENTS, lambda: DumpBeforeAddGraph(dump=self.dump_graph()))
with self.lock:
self._setdefault(cached)
lazy_log('after adding: {!s}', self.dump_graph)
fire_event_if(flags.LOG_CACHE_EVENTS, lambda: DumpAfterAddGraph(dump=self.dump_graph()))
def _remove_refs(self, keys):
"""Removes all references to all entries in keys. This does not
@@ -359,13 +350,10 @@ class RelationsCache:
:param _CachedRelation dropped: An existing _CachedRelation to drop.
"""
if dropped not in self.relations:
logger.debug('dropped a nonexistent relationship: {!s}'
.format(dropped))
fire_event(DropMissingRelation(relation=dropped))
return
consequences = self.relations[dropped].collect_consequences()
logger.debug(
'drop {} is cascading to {}'.format(dropped, consequences)
)
fire_event(DropCascade(dropped=dropped, consequences=consequences))
self._remove_refs(consequences)
def drop(self, relation):
@@ -380,7 +368,7 @@ class RelationsCache:
:param str identifier: The identifier of the relation to drop.
"""
dropped = _make_key(relation)
logger.debug('Dropping relation: {!s}'.format(dropped))
fire_event(DropRelation(dropped=dropped))
with self.lock:
self._drop_cascade_relation(dropped)
@@ -403,9 +391,8 @@ class RelationsCache:
# update all the relations that refer to it
for cached in self.relations.values():
if cached.is_referenced_by(old_key):
logger.debug(
'updated reference from {0} -> {2} to {1} -> {2}'
.format(old_key, new_key, cached.key())
fire_event(
UpdateReference(old_key=old_key, new_key=new_key, cached_key=cached.key())
)
cached.rename_key(old_key, new_key)
@@ -435,10 +422,7 @@ class RelationsCache:
)
if old_key not in self.relations:
logger.debug(
'old key {} not found in self.relations, assuming temporary'
.format(old_key)
)
fire_event(TemporaryRelation(key=old_key))
return False
return True
@@ -456,11 +440,11 @@ class RelationsCache:
"""
old_key = _make_key(old)
new_key = _make_key(new)
logger.debug('Renaming relation {!s} to {!s}'.format(
old_key, new_key
))
lazy_log('before rename: {!s}', self.dump_graph)
fire_event(RenameSchema(old_key=old_key, new_key=new_key))
fire_event_if(
flags.LOG_CACHE_EVENTS,
lambda: DumpBeforeRenameSchema(dump=self.dump_graph())
)
with self.lock:
if self._check_rename_constraints(old_key, new_key):
@@ -468,7 +452,10 @@ class RelationsCache:
else:
self._setdefault(_CachedRelation(new))
lazy_log('after rename: {!s}', self.dump_graph)
fire_event_if(
flags.LOG_CACHE_EVENTS,
lambda: DumpAfterRenameSchema(dump=self.dump_graph())
)
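# The fire_event_if guard above takes a lambda so the expensive dump_graph()
# payload is only built when flags.LOG_CACHE_EVENTS is enabled. Presumed shape
# of that helper, named with a _sketch suffix because it is illustrative only
# and dbt's actual implementation may differ:
def _fire_event_if_sketch(conditional, lazy_event):
    if conditional:
        fire_event(lazy_event())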
def get_relations(
self, database: Optional[str], schema: Optional[str]

View File

@@ -8,10 +8,9 @@ from dbt.include.global_project import (
PACKAGE_PATH as GLOBAL_PROJECT_PATH,
PROJECT_NAME as GLOBAL_PROJECT_NAME,
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import AdapterImportError, PluginLoadError
from dbt.contracts.connection import Credentials, AdapterRequiredConfig
from dbt.adapters.protocol import (
AdapterProtocol,
AdapterConfig,
@@ -67,11 +66,12 @@ class AdapterContainer:
# if we failed to import the target module in particular, inform
# the user about it via a runtime error
if exc.name == 'dbt.adapters.' + name:
fire_event(AdapterImportError(exc=exc))
raise RuntimeException(f'Could not find adapter type {name}!')
logger.info(f'Error importing adapter: {exc}')
# otherwise, the error had to have come from some underlying
# library. Log the stack trace.
logger.debug('', exc_info=True)
fire_event(PluginLoadError())
raise
plugin: AdapterPlugin = mod.Plugin
plugin_type = plugin.adapter.type()

View File

@@ -0,0 +1,24 @@
# this module exists to resolve circular imports with the events module
from collections import namedtuple
from typing import Optional
_ReferenceKey = namedtuple('_ReferenceKey', 'database schema identifier')
def lowercase(value: Optional[str]) -> Optional[str]:
if value is None:
return None
else:
return value.lower()
def _make_key(relation) -> _ReferenceKey:
"""Make _ReferenceKeys with lowercase values for the cache so we don't have
to keep track of quoting
"""
# databases and schemas can both be None
return _ReferenceKey(lowercase(relation.database),
lowercase(relation.schema),
lowercase(relation.identifier))
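A small usage sketch of `_make_key` above; `FakeRelation` is a stand-in, since only the `database`, `schema`, and `identifier` attributes are read:

# Illustrative usage only; FakeRelation is not part of the module.
FakeRelation = namedtuple('FakeRelation', 'database schema identifier')
key = _make_key(FakeRelation('ANALYTICS', 'Staging', 'My_Table'))
# key == _ReferenceKey(database='analytics', schema='staging', identifier='my_table')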

View File

@@ -10,7 +10,8 @@ from dbt.adapters.base import BaseConnectionManager
from dbt.contracts.connection import (
Connection, ConnectionState, AdapterResponse
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import ConnectionUsed, SQLQuery, SQLCommit, SQLQueryStatus
class SQLConnectionManager(BaseConnectionManager):
@@ -58,9 +59,7 @@ class SQLConnectionManager(BaseConnectionManager):
connection = self.get_thread_connection()
if auto_begin and connection.transaction_open is False:
self.begin()
logger.debug('Using {} connection "{}".'
.format(self.TYPE, connection.name))
fire_event(ConnectionUsed(conn_type=self.TYPE, conn_name=connection.name))
with self.exception_handler(sql):
if abridge_sql_log:
@@ -68,19 +67,17 @@ class SQLConnectionManager(BaseConnectionManager):
else:
log_sql = sql
logger.debug(
'On {connection_name}: {sql}',
connection_name=connection.name,
sql=log_sql,
)
fire_event(SQLQuery(conn_name=connection.name, sql=log_sql))
pre = time.time()
cursor = connection.handle.cursor()
cursor.execute(sql, bindings)
logger.debug(
"SQL status: {status} in {elapsed:0.2f} seconds",
status=self.get_response(cursor),
elapsed=(time.time() - pre)
fire_event(
SQLQueryStatus(
status=str(self.get_response(cursor)),
elapsed=round((time.time() - pre), 2)
)
)
return connection, cursor
@@ -160,7 +157,7 @@ class SQLConnectionManager(BaseConnectionManager):
'Tried to commit transaction on connection "{}", but '
'it does not have one open!'.format(connection.name))
logger.debug('On {}: COMMIT'.format(connection.name))
fire_event(SQLCommit(conn_name=connection.name))
self.add_commit_query()
connection.transaction_open = False

View File

@@ -5,8 +5,11 @@ import dbt.clients.agate_helper
from dbt.contracts.connection import Connection
import dbt.exceptions
from dbt.adapters.base import BaseAdapter, available
from dbt.adapters.cache import _make_key
from dbt.adapters.sql import SQLConnectionManager
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import ColTypeChange, SchemaCreation, SchemaDrop
from dbt.adapters.base.relation import BaseRelation
@@ -116,8 +119,13 @@ class SQLAdapter(BaseAdapter):
target_column.can_expand_to(reference_column):
col_string_size = reference_column.string_size()
new_type = self.Column.string_type(col_string_size)
logger.debug("Changing col type from {} to {} in table {}",
target_column.data_type, new_type, current)
fire_event(
ColTypeChange(
orig_type=target_column.data_type,
new_type=new_type,
table=current,
)
)
self.alter_column_type(current, column_name, new_type)
@@ -175,7 +183,7 @@ class SQLAdapter(BaseAdapter):
def create_schema(self, relation: BaseRelation) -> None:
relation = relation.without_identifier()
logger.debug('Creating schema "{}"', relation)
fire_event(SchemaCreation(relation=_make_key(relation)))
kwargs = {
'relation': relation,
}
@@ -186,7 +194,7 @@ class SQLAdapter(BaseAdapter):
def drop_schema(self, relation: BaseRelation) -> None:
relation = relation.without_identifier()
logger.debug('Dropping schema "{}".', relation)
fire_event(SchemaDrop(relation=_make_key(relation)))
kwargs = {
'relation': relation,
}

View File

@@ -13,6 +13,18 @@ from dbt.exceptions import RuntimeException
BOM = BOM_UTF8.decode('utf-8') # '\ufeff'
class Number(agate.data_types.Number):
# undo the change in https://github.com/wireservice/agate/pull/733
# i.e. do not cast True and False to numeric 1 and 0
def cast(self, d):
if type(d) == bool:
raise agate.exceptions.CastError(
'Do not cast True to 1 or False to 0.'
)
else:
return super().cast(d)
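# Illustrative behaviour of the Number override above (example values only):
#   Number(null_values=('null', '')).cast('3.14')  ->  Decimal('3.14')
#   Number(null_values=('null', '')).cast(True)    ->  raises agate CastError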
class ISODateTime(agate.data_types.DateTime):
def cast(self, d):
# this is agate.data_types.DateTime.cast with the "clever" bits removed
@@ -41,7 +53,7 @@ def build_type_tester(
) -> agate.TypeTester:
types = [
agate.data_types.Number(null_values=('null', '')),
Number(null_values=('null', '')),
agate.data_types.Date(null_values=('null', ''),
date_format='%Y-%m-%d'),
agate.data_types.DateTime(null_values=('null', ''),

View File

@@ -2,8 +2,16 @@ import re
import os.path
from dbt.clients.system import run_cmd, rmdir
from dbt.logger import GLOBAL_LOGGER as logger
import dbt.exceptions
from dbt.events.functions import fire_event
from dbt.events.types import (
GitSparseCheckoutSubdirectory, GitProgressCheckoutRevision,
GitProgressUpdatingExistingDependency, GitProgressPullingNewDependency,
GitNothingToDo, GitProgressUpdatedCheckoutRange, GitProgressCheckedOutAt
)
from dbt.exceptions import (
CommandResultError, RuntimeException, bad_package_spec, raise_git_cloning_error,
raise_git_cloning_problem
)
from packaging import version
@@ -12,13 +20,23 @@ def _is_commit(revision: str) -> bool:
return bool(re.match(r"\b[0-9a-f]{40}\b", revision))
def _raise_git_cloning_error(repo, revision, error):
stderr = error.stderr.decode('utf-8').strip()
if 'usage: git' in stderr:
stderr = stderr.split('\nusage: git')[0]
if re.match("fatal: destination path '(.+)' already exists", stderr):
raise_git_cloning_error(error)
bad_package_spec(repo, revision, stderr)
def clone(repo, cwd, dirname=None, remove_git_dir=False, revision=None, subdirectory=None):
has_revision = revision is not None
is_commit = _is_commit(revision or "")
clone_cmd = ['git', 'clone', '--depth', '1']
if subdirectory:
logger.debug(' Subdirectory specified: {}, using sparse checkout.'.format(subdirectory))
fire_event(GitSparseCheckoutSubdirectory(subdir=subdirectory))
out, _ = run_cmd(cwd, ['git', '--version'], env={'LC_ALL': 'C'})
git_version = version.parse(re.search(r"\d+\.\d+\.\d+", out.decode("utf-8")).group(0))
if not git_version >= version.parse("2.25.0"):
@@ -36,10 +54,18 @@ def clone(repo, cwd, dirname=None, remove_git_dir=False, revision=None, subdirec
if dirname is not None:
clone_cmd.append(dirname)
result = run_cmd(cwd, clone_cmd, env={'LC_ALL': 'C'})
try:
result = run_cmd(cwd, clone_cmd, env={'LC_ALL': 'C'})
except CommandResultError as exc:
_raise_git_cloning_error(repo, revision, exc)
if subdirectory:
run_cmd(os.path.join(cwd, dirname or ''), ['git', 'sparse-checkout', 'set', subdirectory])
cwd_subdir = os.path.join(cwd, dirname or '')
clone_cmd_subdir = ['git', 'sparse-checkout', 'set', subdirectory]
try:
run_cmd(cwd_subdir, clone_cmd_subdir)
except CommandResultError as exc:
_raise_git_cloning_error(repo, revision, exc)
if remove_git_dir:
rmdir(os.path.join(dirname, '.git'))
@@ -54,7 +80,7 @@ def list_tags(cwd):
def _checkout(cwd, repo, revision):
logger.debug(' Checking out revision {}.'.format(revision))
fire_event(GitProgressCheckoutRevision(revision=revision))
fetch_cmd = ["git", "fetch", "origin", "--depth", "1"]
@@ -82,9 +108,9 @@ def checkout(cwd, repo, revision=None):
revision = 'HEAD'
try:
return _checkout(cwd, repo, revision)
except dbt.exceptions.CommandResultError as exc:
except CommandResultError as exc:
stderr = exc.stderr.decode('utf-8').strip()
dbt.exceptions.bad_package_spec(repo, revision, stderr)
bad_package_spec(repo, revision, stderr)
def get_current_sha(cwd):
@@ -108,35 +134,36 @@ def clone_and_checkout(repo, cwd, dirname=None, remove_git_dir=False,
remove_git_dir=remove_git_dir,
subdirectory=subdirectory,
)
except dbt.exceptions.CommandResultError as exc:
except CommandResultError as exc:
err = exc.stderr.decode('utf-8')
exists = re.match("fatal: destination path '(.+)' already exists", err)
if not exists: # something else is wrong, raise it
raise
if not exists:
raise_git_cloning_problem(repo)
directory = None
start_sha = None
if exists:
directory = exists.group(1)
logger.debug('Updating existing dependency {}.', directory)
fire_event(GitProgressUpdatingExistingDependency(dir=directory))
else:
matches = re.match("Cloning into '(.+)'", err.decode('utf-8'))
if matches is None:
raise dbt.exceptions.RuntimeException(
raise RuntimeException(
f'Error cloning {repo} - never saw "Cloning into ..." from git'
)
directory = matches.group(1)
logger.debug('Pulling new dependency {}.', directory)
fire_event(GitProgressPullingNewDependency(dir=directory))
full_path = os.path.join(cwd, directory)
start_sha = get_current_sha(full_path)
checkout(full_path, repo, revision)
end_sha = get_current_sha(full_path)
if exists:
if start_sha == end_sha:
logger.debug(' Already at {}, nothing to do.', start_sha[:7])
fire_event(GitNothingToDo(sha=start_sha[:7]))
else:
logger.debug(' Updated checkout from {} to {}.',
start_sha[:7], end_sha[:7])
fire_event(GitProgressUpdatedCheckoutRange(
start_sha=start_sha[:7], end_sha=end_sha[:7]
))
else:
logger.debug(' Checked out at {}.', end_sha[:7])
fire_event(GitProgressCheckedOutAt(end_sha=end_sha[:7]))
return os.path.join(directory, subdirectory or '')
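A hypothetical call into `clone()` above, to show the arguments involved; the repo URL, local paths, and revision are placeholders:

# Hypothetical usage of clone(); values below are placeholders.
clone(
    repo='https://github.com/dbt-labs/dbt-utils.git',  # placeholder package repo
    cwd='/tmp/dbt-packages',                           # placeholder working directory
    dirname='dbt_utils',
    revision='0.8.0',
    subdirectory='macros',  # non-None, so the sparse-checkout branch above runs
)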

View File

@@ -21,7 +21,7 @@ import jinja2.sandbox
from dbt.utils import (
get_dbt_macro_name, get_docs_macro_name, get_materialization_macro_name,
get_test_macro_name, deep_map
get_test_macro_name, deep_map_render
)
from dbt.clients._jinja_blocks import BlockIterator, BlockData, BlockTag
@@ -33,7 +33,6 @@ from dbt.exceptions import (
UndefinedMacroException
)
from dbt import flags
from dbt.logger import GLOBAL_LOGGER as logger # noqa
def _linecache_inject(source, write):
@@ -661,5 +660,7 @@ def add_rendered_test_kwargs(
return value
kwargs = deep_map(_convert_function, node.test_metadata.kwargs)
# The test_metadata.kwargs come from the test builder, and were set
# when the test node was created in _parse_generic_test.
kwargs = deep_map_render(_convert_function, node.test_metadata.kwargs)
context[GENERIC_TEST_KWARGS_NAME] = kwargs

View File

@@ -1,7 +1,11 @@
import functools
import requests
from dbt.events.functions import fire_event
from dbt.events.types import (
RegistryProgressMakingGETRequest,
RegistryProgressGETResponse
)
from dbt.utils import memoized, _connection_exception_retry as connection_exception_retry
from dbt.logger import GLOBAL_LOGGER as logger
from dbt import deprecations
import os
@@ -25,11 +29,19 @@ def _get_with_retries(path, registry_base_url=None):
def _get(path, registry_base_url=None):
url = _get_url(path, registry_base_url)
logger.debug('Making package registry request: GET {}'.format(url))
fire_event(RegistryProgressMakingGETRequest(url=url))
resp = requests.get(url, timeout=30)
logger.debug('Response from registry: GET {} {}'.format(url,
resp.status_code))
fire_event(RegistryProgressGETResponse(url=url, resp_code=resp.status_code))
resp.raise_for_status()
# It is unexpected for the content of the response to be None so if it is, raising this error
# will cause this function to retry (if called within _get_with_retries) and hopefully get
# a response. This seems to happen when there's an issue with the Hub.
# See https://github.com/dbt-labs/dbt-core/issues/4577
if resp.json() is None:
raise requests.exceptions.ContentDecodingError(
'Request error: The response is None', response=resp
)
return resp.json()

View File

@@ -15,8 +15,12 @@ from typing import (
Type, NoReturn, List, Optional, Dict, Any, Tuple, Callable, Union
)
from dbt.events.functions import fire_event
from dbt.events.types import (
SystemErrorRetrievingModTime, SystemCouldNotWrite, SystemExecutingCmd, SystemStdOutMsg,
SystemStdErrMsg, SystemReportReturnCode
)
import dbt.exceptions
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.utils import _connection_exception_retry as connection_exception_retry
if sys.platform == 'win32':
@@ -65,9 +69,7 @@ def find_matching(
try:
modification_time = os.path.getmtime(absolute_path)
except OSError:
logger.exception(
f"Error retrieving modification time for file {absolute_path}"
)
fire_event(SystemErrorRetrievingModTime(path=absolute_path))
if reobj.match(local_file):
matching.append({
'searched_path': relative_path_to_search,
@@ -161,10 +163,7 @@ def write_file(path: str, contents: str = '') -> bool:
reason = 'Path was possibly too long'
# all our hard work and the path was still too long. Log and
# continue.
logger.debug(
f'Could not write to path {path}({len(path)} characters): '
f'{reason}\nexception: {exc}'
)
fire_event(SystemCouldNotWrite(path=path, reason=reason, exc=exc))
else:
raise
return True
@@ -412,7 +411,7 @@ def _interpret_oserror(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
def run_cmd(
cwd: str, cmd: List[str], env: Optional[Dict[str, Any]] = None
) -> Tuple[bytes, bytes]:
logger.debug('Executing "{}"'.format(' '.join(cmd)))
fire_event(SystemExecutingCmd(cmd=cmd))
if len(cmd) == 0:
raise dbt.exceptions.CommandError(cwd, cmd)
@@ -438,11 +437,11 @@ def run_cmd(
except OSError as exc:
_interpret_oserror(exc, cwd, cmd)
logger.debug('STDOUT: "{!s}"'.format(out))
logger.debug('STDERR: "{!s}"'.format(err))
fire_event(SystemStdOutMsg(bmsg=out))
fire_event(SystemStdErrMsg(bmsg=err))
if proc.returncode != 0:
logger.debug('command return code={}'.format(proc.returncode))
fire_event(SystemReportReturnCode(returncode=proc.returncode))
raise dbt.exceptions.CommandResultError(cwd, cmd, proc.returncode,
out, err)
@@ -486,7 +485,7 @@ def untar_package(
) -> None:
tar_path = convert_path(tar_path)
tar_dir_name = None
with tarfile.open(tar_path, 'r') as tarball:
with tarfile.open(tar_path, 'r:gz') as tarball:
tarball.extractall(dest_dir)
tar_dir_name = os.path.commonprefix(tarball.getnames())
if rename_to:

View File

@@ -9,7 +9,7 @@ from dbt import flags
from dbt.adapters.factory import get_adapter
from dbt.clients import jinja
from dbt.clients.system import make_directory
from dbt.context.providers import generate_runtime_model
from dbt.context.providers import generate_runtime_model_context
from dbt.contracts.graph.manifest import Manifest, UniqueID
from dbt.contracts.graph.compiled import (
COMPILED_TYPES,
@@ -26,9 +26,10 @@ from dbt.exceptions import (
RuntimeException,
)
from dbt.graph import Graph
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import FoundStats, CompilingNode, WritingInjectedSQLForNode
from dbt.node_types import NodeType
from dbt.utils import pluralize
from dbt.events.format import pluralize
import dbt.tracking
graph_file_name = 'graph.gpickle'
@@ -53,6 +54,7 @@ def print_compile_stats(stats):
NodeType.Seed: 'seed file',
NodeType.Source: 'source',
NodeType.Exposure: 'exposure',
NodeType.Metric: 'metric'
}
results = {k: 0 for k in names.keys()}
@@ -68,7 +70,7 @@ def print_compile_stats(stats):
if t in names
])
logger.info("Found {}".format(stat_line))
fire_event(FoundStats(stat_line=stat_line))
def _node_enabled(node: ManifestNode):
@@ -89,6 +91,8 @@ def _generate_stats(manifest: Manifest):
stats[source.resource_type] += 1
for exposure in manifest.exposures.values():
stats[exposure.resource_type] += 1
for metric in manifest.metrics.values():
stats[metric.resource_type] += 1
for macro in manifest.macros.values():
stats[macro.resource_type] += 1
return stats
@@ -178,7 +182,7 @@ class Compiler:
extra_context: Dict[str, Any],
) -> Dict[str, Any]:
context = generate_runtime_model(
context = generate_runtime_model_context(
node, self.config, manifest
)
context.update(extra_context)
@@ -366,7 +370,7 @@ class Compiler:
if extra_context is None:
extra_context = {}
logger.debug("Compiling {}".format(node.unique_id))
fire_event(CompilingNode(unique_id=node.unique_id))
data = node.to_dict(omit_none=True)
data.update({
@@ -418,42 +422,44 @@ class Compiler:
else:
dependency_not_found(node, dependency)
def link_graph(self, linker: Linker, manifest: Manifest):
def link_graph(self, linker: Linker, manifest: Manifest, add_test_edges: bool = False):
for source in manifest.sources.values():
linker.add_node(source.unique_id)
for node in manifest.nodes.values():
self.link_node(linker, node, manifest)
for exposure in manifest.exposures.values():
self.link_node(linker, exposure, manifest)
for metric in manifest.metrics.values():
self.link_node(linker, metric, manifest)
cycle = linker.find_cycles()
if cycle:
raise RuntimeError("Found a cycle: {}".format(cycle))
manifest.build_parent_and_child_maps()
if add_test_edges:
manifest.build_parent_and_child_maps()
self.add_test_edges(linker, manifest)
self.resolve_graph(linker, manifest)
def resolve_graph(self, linker: Linker, manifest: Manifest) -> None:
def add_test_edges(self, linker: Linker, manifest: Manifest) -> None:
""" This method adds additional edges to the DAG. For a given non-test
executable node, add an edge from an upstream test to the given node if
the set of nodes the test depends on is a proper/strict subset of the
upstream nodes for the given node. """
the set of nodes the test depends on is a subset of the upstream nodes
for the given node. """
# Given a graph:
# model1 --> model2 --> model3
# | |
# | \/
# \/ test 2
# | |
# | \/
# \/ test 2
# test1
#
# Produce the following graph:
# model1 --> model2 --> model3
# | | /\ /\
# | \/ | |
# \/ test2 ------- |
# test1 -------------------
# | /\ | /\ /\
# | | \/ | |
# \/ | test2 ----| |
# test1 ----|---------------|
for node_id in linker.graph:
# If node is executable (in manifest.nodes) and does _not_
@@ -491,21 +497,19 @@ class Compiler:
)
# If the set of nodes that an upstream test depends on
# is a proper (or strict) subset of all upstream nodes of
# the current node, add an edge from the upstream test
# to the current node. Must be a proper/strict subset to
# avoid adding a circular dependency to the graph.
if (test_depends_on < upstream_nodes):
# is a subset of all upstream nodes of the current node,
# add an edge from the upstream test to the current node.
if (test_depends_on.issubset(upstream_nodes)):
linker.graph.add_edge(
upstream_test,
node_id
)
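# Worked example (editorial illustration, not in the source): if test1 depends
# only on {model1} and model2's upstream set is {model1}, the subset check
# holds and an edge test1 -> model2 is added, as in the diagram above.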
def compile(self, manifest: Manifest, write=True) -> Graph:
def compile(self, manifest: Manifest, write=True, add_test_edges=False) -> Graph:
self.initialize()
linker = Linker()
self.link_graph(linker, manifest)
self.link_graph(linker, manifest, add_test_edges)
stats = _generate_stats(manifest)
@@ -520,7 +524,7 @@ class Compiler:
if (not node.extra_ctes_injected or
node.resource_type == NodeType.Snapshot):
return node
logger.debug(f'Writing injected SQL for node "{node.unique_id}"')
fire_event(WritingInjectedSQLForNode(unique_id=node.unique_id))
if node.compiled_sql:
node.compiled_path = node.write_node(

View File

@@ -15,7 +15,8 @@ from dbt.exceptions import DbtProjectError
from dbt.exceptions import ValidationException
from dbt.exceptions import RuntimeException
from dbt.exceptions import validator_error_message
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.types import MissingProfileTarget
from dbt.events.functions import fire_event
from dbt.utils import coerce_dict_str
from .renderer import ProfileRenderer
@@ -91,6 +92,7 @@ class Profile(HasCredentials):
user_config: UserConfig
threads: int
credentials: Credentials
profile_env_vars: Dict[str, Any]
def __init__(
self,
@@ -108,6 +110,7 @@ class Profile(HasCredentials):
self.user_config = user_config
self.threads = threads
self.credentials = credentials
self.profile_env_vars = {} # never available on init
def to_profile_info(
self, serialize_credentials: bool = False
@@ -291,10 +294,7 @@ class Profile(HasCredentials):
target_name = renderer.render_value(raw_profile['target'])
else:
target_name = 'default'
logger.debug(
"target not specified in profile '{}', using '{}'"
.format(profile_name, target_name)
)
fire_event(MissingProfileTarget(profile_name=profile_name, target_name=target_name))
raw_profile_data = cls._get_profile_data(
raw_profile, profile_name, target_name

View File

@@ -45,7 +45,7 @@ INVALID_VERSION_ERROR = """\
This version of dbt is not supported with the '{package}' package.
Installed version of dbt: {installed}
Required version of dbt for '{package}': {version_spec}
Check the requirements for the '{package}' package, or run dbt again with \
Check for a different version of the '{package}' package, or run dbt again with \
--no-version-check
"""
@@ -54,7 +54,7 @@ IMPOSSIBLE_VERSION_ERROR = """\
The package version requirement can never be satisfied for the '{package}
package.
Required versions of dbt for '{package}': {version_spec}
Check the requirements for the '{package}' package, or run dbt again with \
Check for a different version of the '{package}' package, or run dbt again with \
--no-version-check
"""
@@ -284,6 +284,7 @@ class PartialProject(RenderComponents):
selectors_dict=rendered_selectors,
)
# Called by 'collect_parts' in RuntimeConfig
def render(self, renderer: DbtProjectYamlRenderer) -> 'Project':
try:
rendered = self.get_rendered(renderer)
@@ -304,7 +305,7 @@ class PartialProject(RenderComponents):
)
raise DbtProjectError(msg.format(deprecated_path=deprecated_path,
exp_path=exp_path))
deprecations.warn('project_config_path',
deprecations.warn(f'project-config-{deprecated_path}',
deprecated_path=deprecated_path,
exp_path=exp_path)
@@ -397,6 +398,8 @@ class PartialProject(RenderComponents):
vars_dict = cfg.vars
vars_value = VarProvider(vars_dict)
# There will never be any project_env_vars when it's first created
project_env_vars: Dict[str, Any] = {}
on_run_start: List[str] = value_or(cfg.on_run_start, [])
on_run_end: List[str] = value_or(cfg.on_run_end, [])
@@ -444,6 +447,7 @@ class PartialProject(RenderComponents):
vars=vars_value,
config_version=cfg.config_version,
unrendered=unrendered,
project_env_vars=project_env_vars,
)
# sanity check - this means an internal issue
project.validate()
@@ -556,6 +560,7 @@ class Project:
query_comment: QueryComment
config_version: int
unrendered: RenderComponents
project_env_vars: Dict[str, Any]
@property
def all_source_paths(self) -> List[str]:
@@ -564,6 +569,13 @@ class Project:
self.analysis_paths, self.macro_paths
)
@property
def generic_test_paths(self):
generic_test_paths = []
for test_path in self.test_paths:
generic_test_paths.append(os.path.join(test_path, 'generic'))
return generic_test_paths
def __str__(self):
cfg = self.to_project_config(with_packages=True)
return str(cfg)
@@ -638,26 +650,6 @@ class Project:
verify_version=verify_version,
)
@classmethod
def render_from_dict(
cls,
project_root: str,
project_dict: Dict[str, Any],
packages_dict: Dict[str, Any],
selectors_dict: Dict[str, Any],
renderer: DbtProjectYamlRenderer,
*,
verify_version: bool = False
) -> 'Project':
partial = PartialProject.from_dicts(
project_root=project_root,
project_dict=project_dict,
packages_dict=packages_dict,
selectors_dict=selectors_dict,
verify_version=verify_version,
)
return partial.render(renderer)
@classmethod
def from_project_root(
cls,

View File

@@ -1,12 +1,14 @@
from typing import Dict, Any, Tuple, Optional, Union, Callable
from dbt.clients.jinja import get_rendered, catch_jinja
from dbt.context.target import TargetContext
from dbt.context.secret import SecretContext
from dbt.context.base import BaseContext
from dbt.contracts.connection import HasCredentials
from dbt.exceptions import (
DbtProjectError, CompilationException, RecursionException
)
from dbt.node_types import NodeType
from dbt.utils import deep_map
from dbt.utils import deep_map_render
Keypath = Tuple[Union[str, int], ...]
@@ -47,7 +49,7 @@ class BaseRenderer:
self, data: Dict[str, Any]
) -> Dict[str, Any]:
try:
return deep_map(self.render_entry, data)
return deep_map_render(self.render_entry, data)
except RecursionException:
raise DbtProjectError(
f'Cycle detected: {self.name} input has a reference to itself',
@@ -99,6 +101,23 @@ class ProjectPostprocessor(Dict[Keypath, Callable[[Any], Any]]):
class DbtProjectYamlRenderer(BaseRenderer):
_KEYPATH_HANDLERS = ProjectPostprocessor()
def __init__(
self, profile: Optional[HasCredentials] = None,
cli_vars: Optional[Dict[str, Any]] = None
) -> None:
# Generate contexts here because we want to save the context
# object in order to retrieve the env_vars. This is almost always
# a TargetContext, but in the debug task we want a project
# even when we don't have a profile.
if cli_vars is None:
cli_vars = {}
if profile:
self.ctx_obj = TargetContext(profile, cli_vars)
else:
self.ctx_obj = BaseContext(cli_vars) # type:ignore
context = self.ctx_obj.to_dict()
super().__init__(context)
@property
def name(self):
'Project config'
@@ -157,82 +176,36 @@ class DbtProjectYamlRenderer(BaseRenderer):
return True
class ProfileRenderer(BaseRenderer):
@property
def name(self):
'Profile'
class SchemaYamlRenderer(BaseRenderer):
DOCUMENTABLE_NODES = frozenset(
n.pluralize() for n in NodeType.documentable()
)
@property
def name(self):
return 'Rendering yaml'
def _is_norender_key(self, keypath: Keypath) -> bool:
"""
models:
- name: blah
- description: blah
tests: ...
- columns:
- name:
- description: blah
tests: ...
Return True if it's tests or description - those aren't rendered
"""
if len(keypath) >= 2 and keypath[1] in ('tests', 'description'):
return True
if (
len(keypath) >= 4 and
keypath[1] == 'columns' and
keypath[3] in ('tests', 'description')
):
return True
return False
# don't render descriptions or test keyword arguments
def should_render_keypath(self, keypath: Keypath) -> bool:
if len(keypath) < 2:
return True
if keypath[0] not in self.DOCUMENTABLE_NODES:
return True
if len(keypath) < 3:
return True
if keypath[0] == NodeType.Source.pluralize():
if keypath[2] == 'description':
return False
if keypath[2] == 'tables':
if self._is_norender_key(keypath[3:]):
return False
elif keypath[0] == NodeType.Macro.pluralize():
if keypath[2] == 'arguments':
if self._is_norender_key(keypath[3:]):
return False
elif self._is_norender_key(keypath[1:]):
return False
else: # keypath[0] in self.DOCUMENTABLE_NODES:
if self._is_norender_key(keypath[1:]):
return False
return True
class PackageRenderer(BaseRenderer):
@property
def name(self):
return 'Packages config'
class SelectorRenderer(BaseRenderer):
@property
def name(self):
return 'Selector config'
class SecretRenderer(BaseRenderer):
def __init__(
self, cli_vars: Optional[Dict[str, Any]] = None
) -> None:
# Generate contexts here because we want to save the context
# object in order to retrieve the env_vars.
if cli_vars is None:
cli_vars = {}
self.ctx_obj = SecretContext(cli_vars)
context = self.ctx_obj.to_dict()
super().__init__(context)
@property
def name(self):
return 'Secret'
class ProfileRenderer(SecretRenderer):
@property
def name(self):
return 'Profile'
class PackageRenderer(SecretRenderer):
@property
def name(self):
return 'Packages config'

View File

@@ -1,7 +1,7 @@
import itertools
import os
from copy import deepcopy
from dataclasses import dataclass, fields
from dataclasses import dataclass
from pathlib import Path
from typing import (
Dict, Any, Optional, Mapping, Iterator, Iterable, Tuple, List, MutableSet,
@@ -13,21 +13,17 @@ from .project import Project
from .renderer import DbtProjectYamlRenderer, ProfileRenderer
from .utils import parse_cli_vars
from dbt import flags
from dbt import tracking
from dbt.adapters.factory import get_relation_class_by_name, get_include_paths
from dbt.helper_types import FQNPath, PathSet
from dbt.context.base import generate_base_context
from dbt.context.target import generate_target_context
from dbt.helper_types import FQNPath, PathSet, DictDefaultEmptyStr
from dbt.config.profile import read_user_config
from dbt.contracts.connection import AdapterRequiredConfig, Credentials
from dbt.contracts.graph.manifest import ManifestMetadata
from dbt.contracts.relation import ComponentName
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.ui import warning_tag
from dbt.contracts.project import Configuration, UserConfig
from dbt.exceptions import (
RuntimeException,
DbtProfileError,
DbtProjectError,
validator_error_message,
warn_or_error,
@@ -60,6 +56,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
def __post_init__(self):
self.validate()
# Called by 'new_project' and 'from_args'
@classmethod
def from_parts(
cls,
@@ -116,6 +113,8 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
vars=project.vars,
config_version=project.config_version,
unrendered=project.unrendered,
project_env_vars=project.project_env_vars,
profile_env_vars=profile.profile_env_vars,
profile_name=profile.profile_name,
target_name=profile.target_name,
user_config=profile.user_config,
@@ -126,6 +125,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
dependencies=dependencies,
)
# Called by 'load_projects' in this class
def new_project(self, project_root: str) -> 'RuntimeConfig':
"""Given a new project root, read in its project dictionary, supply the
existing project's profile info, and create a new project file.
@@ -140,7 +140,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
profile.validate()
# load the new project and its packages. Don't pass cli variables.
renderer = DbtProjectYamlRenderer(generate_target_context(profile, {}))
renderer = DbtProjectYamlRenderer(profile)
project = Project.from_project_root(
project_root,
@@ -148,14 +148,14 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
verify_version=bool(flags.VERSION_CHECK),
)
cfg = self.from_parts(
runtime_config = self.from_parts(
project=project,
profile=profile,
args=deepcopy(self.args),
)
# force our quoting back onto the new project.
cfg.quoting = deepcopy(self.quoting)
return cfg
runtime_config.quoting = deepcopy(self.quoting)
return runtime_config
def serialize(self) -> Dict[str, Any]:
"""Serialize the full configuration to a single dictionary. For any
@@ -188,6 +188,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
profile_renderer: ProfileRenderer,
profile_name: Optional[str],
) -> Profile:
return Profile.render_from_args(
args, profile_renderer, profile_name
)
@@ -205,21 +206,26 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
)
# build the profile using the base renderer and the one fact we know
# Note: only the named profile section is rendered. The rest of the
# profile is ignored.
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, 'vars', '{}'))
profile_renderer = ProfileRenderer(generate_base_context(cli_vars))
profile_renderer = ProfileRenderer(cli_vars)
profile_name = partial.render_profile_name(profile_renderer)
profile = cls._get_rendered_profile(
args, profile_renderer, profile_name
)
# Save env_vars encountered in rendering for partial parsing
profile.profile_env_vars = profile_renderer.ctx_obj.env_vars
# get a new renderer using our target information and render the
# project
ctx = generate_target_context(profile, cli_vars)
project_renderer = DbtProjectYamlRenderer(ctx)
project_renderer = DbtProjectYamlRenderer(profile, cli_vars)
project = partial.render(project_renderer)
# Save env_vars encountered in rendering for partial parsing
project.project_env_vars = project_renderer.ctx_obj.env_vars
return (project, profile)
# Called in main.py, lib.py, task/base.py
@classmethod
def from_args(cls, args: Any) -> 'RuntimeConfig':
"""Given arguments, read in dbt_project.yml from the current directory,
@@ -253,7 +259,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
) -> PathSet:
for key, value in config.items():
if isinstance(value, dict) and not key.startswith('+'):
self._get_v2_config_paths(value, path + (key,), paths)
self._get_config_paths(value, path + (key,), paths)
else:
paths.add(path)
return frozenset(paths)
@@ -360,6 +366,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
def clear_dependencies(self):
self.dependencies = None
# Called by 'load_dependencies' in this class
def load_projects(
self, paths: Iterable[Path]
) -> Iterator[Tuple[str, 'RuntimeConfig']]:
@@ -403,27 +410,18 @@ class UnsetCredentials(Credentials):
return ()
class UnsetConfig(UserConfig):
def __getattribute__(self, name):
if name in {f.name for f in fields(UserConfig)}:
raise AttributeError(
f"'UnsetConfig' object has no attribute {name}"
)
def __post_serialize__(self, dct):
return {}
# This is used by UnsetProfileConfig, for commands which do
# not require a profile, i.e. dbt deps and clean
class UnsetProfile(Profile):
def __init__(self):
self.credentials = UnsetCredentials()
self.user_config = UnsetConfig()
self.user_config = UserConfig() # This will be read in _get_rendered_profile
self.profile_name = ''
self.target_name = ''
self.threads = -1
def to_target_dict(self):
return {}
return DictDefaultEmptyStr({})
def __getattribute__(self, name):
if name in {'profile_name', 'target_name', 'threads'}:
@@ -434,6 +432,8 @@ class UnsetProfile(Profile):
return Profile.__getattribute__(self, name)
# This class is used by the dbt deps and clean commands, because they don't
# require a functioning profile.
@dataclass
class UnsetProfileConfig(RuntimeConfig):
"""This class acts a lot _like_ a RuntimeConfig, except if your profile is
@@ -460,7 +460,7 @@ class UnsetProfileConfig(RuntimeConfig):
def to_target_dict(self):
# re-override the poisoned profile behavior
return {}
return DictDefaultEmptyStr({})
@classmethod
def from_parts(
@@ -512,9 +512,11 @@ class UnsetProfileConfig(RuntimeConfig):
vars=project.vars,
config_version=project.config_version,
unrendered=project.unrendered,
project_env_vars=project.project_env_vars,
profile_env_vars=profile.profile_env_vars,
profile_name='',
target_name='',
user_config=UnsetConfig(),
user_config=UserConfig(),
threads=getattr(args, 'threads', 1),
credentials=UnsetCredentials(),
args=args,
@@ -529,22 +531,12 @@ class UnsetProfileConfig(RuntimeConfig):
profile_renderer: ProfileRenderer,
profile_name: Optional[str],
) -> Profile:
try:
profile = Profile.render_from_args(
args, profile_renderer, profile_name
)
except (DbtProjectError, DbtProfileError) as exc:
logger.debug(
'Profile not loaded due to error: {}', exc, exc_info=True
)
logger.info(
'No profile "{}" found, continuing with no target',
profile_name
)
# return the poisoned form
profile = UnsetProfile()
# disable anonymous usage statistics
tracking.disable_tracking()
profile = UnsetProfile()
# The profile (for warehouse connection) is not needed, but we want
# to get the UserConfig, which is also in profiles.yml
user_config = read_user_config(flags.PROFILES_DIR)
profile.user_config = user_config
return profile
@classmethod
@@ -559,9 +551,6 @@ class UnsetProfileConfig(RuntimeConfig):
:raises ValidationException: If the cli variables are invalid.
"""
project, profile = cls.collect_parts(args)
if not isinstance(profile, UnsetProfile):
# if it's a real profile, return a real config
cls = RuntimeConfig
return cls.from_parts(
project=project,

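A minimal sketch of why UnsetProfile.to_target_dict() now returns DictDefaultEmptyStr({}) instead of a plain {}: dbt_project.yml rendered during dbt deps/clean may still reference target, and a default-empty-string dict lets those lookups render as '' rather than raising. The class below is an assumption mirroring dbt.helper_types.DictDefaultEmptyStr, which this diff only imports.

    # Assumed shape of dbt.helper_types.DictDefaultEmptyStr (not shown in this diff):
    # a dict whose missing keys read back as empty strings.
    class DictDefaultEmptyStr(dict):
        def __missing__(self, key):
            return ''

    target = DictDefaultEmptyStr({})
    print(target['name'])                      # '' -- missing key renders empty, no KeyError
    print(target.get('schema', 'analytics'))   # 'analytics' -- .get() defaults still apply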
View File

@@ -1,8 +1,9 @@
from typing import Dict, Any
from dbt.clients import yaml_helper
from dbt.events.functions import fire_event
from dbt.exceptions import raise_compiler_error, ValidationException
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.types import InvalidVarsYAML
def parse_cli_vars(var_string: str) -> Dict[str, Any]:
@@ -17,7 +18,5 @@ def parse_cli_vars(var_string: str) -> Dict[str, Any]:
"The --vars argument must be a YAML dictionary, but was "
"of type '{}'".format(type_name))
except ValidationException:
logger.error(
"The YAML provided in the --vars argument is not valid.\n"
)
fire_event(InvalidVarsYAML())
raise

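A hedged usage example for parse_cli_vars, using the import path implied by the runtime.py diff above; the exception type is an assumption based on raise_compiler_error.

    from dbt.config.utils import parse_cli_vars
    from dbt.exceptions import CompilationException   # assumed: what raise_compiler_error raises

    print(parse_cli_vars('{key: value, threads: 4}'))  # {'key': 'value', 'threads': 4}
    try:
        parse_cli_vars('[not, a, mapping]')            # valid YAML, but not a dictionary
    except CompilationException as exc:
        print(exc)  # "... --vars argument must be a YAML dictionary, but was of type 'list'"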
View File

@@ -6,13 +6,17 @@ from typing import (
from dbt import flags
from dbt import tracking
from dbt.clients.jinja import undefined_error, get_rendered
from dbt.clients.jinja import get_rendered
from dbt.clients.yaml_helper import ( # noqa: F401
yaml, safe_load, SafeLoader, Loader, Dumper
)
from dbt.contracts.graph.compiled import CompiledResource
from dbt.exceptions import raise_compiler_error, MacroReturn
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.exceptions import (
raise_compiler_error, MacroReturn, raise_parsing_error, disallow_secret_env_var
)
from dbt.logger import SECRET_ENV_PREFIX
from dbt.events.functions import fire_event, get_invocation_id
from dbt.events.types import MacroEventInfo, MacroEventDebug
from dbt.version import __version__ as dbt_version
# These modules are added to the context. Consider alternative
@@ -21,6 +25,39 @@ import pytz
import datetime
import re
# Contexts in dbt Core
# Contexts are used for Jinja rendering. They include context methods,
# executable macros, and various settings that are available in Jinja.
#
# Different contexts are used in different places because we allow access
# to different methods and data in different places. Executable SQL, for
# example, includes the available macros and the model, while Jinja in
# yaml files is more limited.
#
# The context that is passed to Jinja is always in a dictionary format,
# not an actual class, so a 'to_dict()' is executed on a context class
# before it is used for rendering.
#
# Each context has a generate_<name>_context function to create the context.
# ProviderContext subclasses have different generate functions for
# parsing and for execution.
#
# Context class hierarchy
#
# BaseContext -- core/dbt/context/base.py
# SecretContext -- core/dbt/context/secret.py
# TargetContext -- core/dbt/context/target.py
# ConfiguredContext -- core/dbt/context/configured.py
# SchemaYamlContext -- core/dbt/context/configured.py
# DocsRuntimeContext -- core/dbt/context/configured.py
# MacroResolvingContext -- core/dbt/context/configured.py
# ManifestContext -- core/dbt/context/manifest.py
# QueryHeaderContext -- core/dbt/context/manifest.py
# ProviderContext -- core/dbt/context/provider.py
# MacroContext -- core/dbt/context/provider.py
# ModelContext -- core/dbt/context/provider.py
# TestContext -- core/dbt/context/provider.py
def get_pytz_module_context() -> Dict[str, Any]:
context_exports = pytz.__all__ # type: ignore
@@ -160,9 +197,11 @@ class Var:
class BaseContext(metaclass=ContextMeta):
# subclass is TargetContext
def __init__(self, cli_vars):
self._ctx = {}
self.cli_vars = cli_vars
self.env_vars = {}
def generate_builtins(self):
builtins: Dict[str, Any] = {}
@@ -271,20 +310,26 @@ class BaseContext(metaclass=ContextMeta):
return Var(self._ctx, self.cli_vars)
@contextmember
@staticmethod
def env_var(var: str, default: Optional[str] = None) -> str:
def env_var(self, var: str, default: Optional[str] = None) -> str:
"""The env_var() function. Return the environment variable named 'var'.
If there is no such environment variable set, return the default.
If the default is None, raise an exception for an undefined variable.
"""
return_value = None
if var.startswith(SECRET_ENV_PREFIX):
disallow_secret_env_var(var)
if var in os.environ:
return os.environ[var]
return_value = os.environ[var]
elif default is not None:
return default
return_value = default
if return_value is not None:
self.env_vars[var] = return_value
return return_value
else:
msg = f"Env var required but not provided: '{var}'"
undefined_error(msg)
raise_parsing_error(msg)
if os.environ.get('DBT_MACRO_DEBUGGING'):
@contextmember
@@ -443,9 +488,9 @@ class BaseContext(metaclass=ContextMeta):
{% endmacro %}"
"""
if info:
logger.info(msg)
fire_event(MacroEventInfo(msg))
else:
logger.debug(msg)
fire_event(MacroEventDebug(msg))
return ''
@contextproperty
@@ -481,10 +526,7 @@ class BaseContext(metaclass=ContextMeta):
"""invocation_id outputs a UUID generated for this dbt run (useful for
auditing)
"""
if tracking.active_user is not None:
return tracking.active_user.invocation_id
else:
return None
return get_invocation_id()
@contextproperty
def modules(self) -> Dict[str, Any]:

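A standalone sketch (not dbt's actual classes) of the env_var behavior introduced above: secret-prefixed variables are rejected outside the secret context, and every value that is returned gets recorded on the context so partial parsing can detect env var changes on a later run. The prefix constant mirrors dbt.logger.SECRET_ENV_PREFIX.

    import os

    SECRET_ENV_PREFIX = 'DBT_ENV_SECRET_'

    class MiniBaseContext:
        def __init__(self):
            self.env_vars = {}   # new in this change: rendered env vars are remembered

        def env_var(self, var, default=None):
            if var.startswith(SECRET_ENV_PREFIX):
                raise RuntimeError(f"Secret env vars are not allowed in this context: {var}")
            if var in os.environ:
                value = os.environ[var]
            elif default is not None:
                value = default
            else:
                raise RuntimeError(f"Env var required but not provided: '{var}'")
            self.env_vars[var] = value   # saved for the partial-parsing state check
            return value

    os.environ['MY_SCHEMA'] = 'analytics'
    ctx = MiniBaseContext()
    print(ctx.env_var('MY_SCHEMA'))            # 'analytics'
    print(ctx.env_var('MISSING_VAR', 'dev'))   # 'dev' -- defaults are recorded too
    print(ctx.env_vars)                        # {'MY_SCHEMA': 'analytics', 'MISSING_VAR': 'dev'}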
View File

@@ -1,14 +1,18 @@
from typing import Any, Dict
import os
from typing import Any, Dict, Optional
from dbt.contracts.connection import AdapterRequiredConfig
from dbt.logger import SECRET_ENV_PREFIX
from dbt.node_types import NodeType
from dbt.utils import MultiDict
from dbt.context.base import contextproperty, Var
from dbt.context.base import contextproperty, contextmember, Var
from dbt.context.target import TargetContext
from dbt.exceptions import raise_parsing_error, disallow_secret_env_var
class ConfiguredContext(TargetContext):
# subclasses are SchemaYamlContext, MacroResolvingContext, ManifestContext
config: AdapterRequiredConfig
def __init__(
@@ -63,10 +67,18 @@ class ConfiguredVar(Var):
return self.get_missing_var(var_name)
class SchemaYamlVars():
def __init__(self):
self.env_vars = {}
self.vars = {}
class SchemaYamlContext(ConfiguredContext):
def __init__(self, config, project_name: str):
# subclass is DocsRuntimeContext
def __init__(self, config, project_name: str, schema_yaml_vars: Optional[SchemaYamlVars]):
super().__init__(config)
self._project_name = project_name
self.schema_yaml_vars = schema_yaml_vars
@contextproperty
def var(self) -> ConfiguredVar:
@@ -74,6 +86,24 @@ class SchemaYamlContext(ConfiguredContext):
self._ctx, self.config, self._project_name
)
@contextmember
def env_var(self, var: str, default: Optional[str] = None) -> str:
return_value = None
if var.startswith(SECRET_ENV_PREFIX):
disallow_secret_env_var(var)
if var in os.environ:
return_value = os.environ[var]
elif default is not None:
return_value = default
if return_value is not None:
if self.schema_yaml_vars:
self.schema_yaml_vars.env_vars[var] = return_value
return return_value
else:
msg = f"Env var required but not provided: '{var}'"
raise_parsing_error(msg)
class MacroResolvingContext(ConfiguredContext):
def __init__(self, config):
@@ -86,10 +116,10 @@ class MacroResolvingContext(ConfiguredContext):
)
def generate_schema_yml(
config: AdapterRequiredConfig, project_name: str
def generate_schema_yml_context(
config: AdapterRequiredConfig, project_name: str, schema_yaml_vars: SchemaYamlVars = None
) -> Dict[str, Any]:
ctx = SchemaYamlContext(config, project_name)
ctx = SchemaYamlContext(config, project_name, schema_yaml_vars)
return ctx.to_dict()

View File

@@ -23,7 +23,7 @@ class DocsRuntimeContext(SchemaYamlContext):
manifest: Manifest,
current_project: str,
) -> None:
super().__init__(config, current_project)
super().__init__(config, current_project, None)
self.node = node
self.manifest = manifest
@@ -75,7 +75,7 @@ class DocsRuntimeContext(SchemaYamlContext):
return target_doc.block_contents
def generate_runtime_docs(
def generate_runtime_docs_context(
config: RuntimeConfig,
target: Any,
manifest: Manifest,

View File

@@ -17,6 +17,7 @@ class ManifestContext(ConfiguredContext):
The given macros can override any previous context values, which will be
available as if they were accessed relative to the package name.
"""
# subclasses are QueryHeaderContext and ProviderContext
def __init__(
self,
config: AdapterRequiredConfig,

View File

@@ -16,6 +16,7 @@ from dbt.config import RuntimeConfig, Project
from .base import contextmember, contextproperty, Var
from .configured import FQNLookup
from .context_config import ContextConfig
from dbt.logger import SECRET_ENV_PREFIX
from dbt.context.macro_resolver import MacroResolver, TestMacroNamespace
from .macros import MacroNamespaceBuilder, MacroNamespace
from .manifest import ManifestContext
@@ -31,11 +32,13 @@ from dbt.contracts.graph.compiled import (
from dbt.contracts.graph.parsed import (
ParsedMacro,
ParsedExposure,
ParsedMetric,
ParsedSeedNode,
ParsedSourceDefinition,
)
from dbt.exceptions import (
CompilationException,
ParsingException,
InternalException,
ValidationException,
RuntimeException,
@@ -47,9 +50,10 @@ from dbt.exceptions import (
ref_bad_context,
source_target_not_found,
wrapped_exports,
raise_parsing_error,
disallow_secret_env_var,
)
from dbt.config import IsFQNResource
from dbt.logger import GLOBAL_LOGGER as logger # noqa
from dbt.node_types import NodeType
from dbt.utils import (
@@ -323,7 +327,7 @@ class ParseConfigObject(Config):
def require(self, name, validator=None):
return ''
def get(self, name, validator=None, default=None):
def get(self, name, default=None, validator=None):
return ''
def persist_relation_docs(self) -> bool:
@@ -367,7 +371,7 @@ class RuntimeConfigObject(Config):
return to_return
def get(self, name, validator=None, default=None):
def get(self, name, default=None, validator=None):
to_return = self._lookup(name, default)
if validator is not None and default is not None:
@@ -636,6 +640,7 @@ T = TypeVar('T')
# Base context collection, used for parsing configs.
class ProviderContext(ManifestContext):
# subclasses are MacroContext, ModelContext, TestContext
def __init__(
self,
model,
@@ -1161,6 +1166,37 @@ class ProviderContext(ManifestContext):
)
raise CompilationException(msg)
@contextmember
def env_var(self, var: str, default: Optional[str] = None) -> str:
"""The env_var() function. Return the environment variable named 'var'.
If there is no such environment variable set, return the default.
If the default is None, raise an exception for an undefined variable.
"""
return_value = None
if var.startswith(SECRET_ENV_PREFIX):
disallow_secret_env_var(var)
if var in os.environ:
return_value = os.environ[var]
elif default is not None:
return_value = default
if return_value is not None:
# Save the env_var value in the manifest and the var name in the source_file.
# If this is compiling, do not save because it's irrelevant to parsing.
if self.model and not hasattr(self.model, 'compiled'):
self.manifest.env_vars[var] = return_value
# hooks come from dbt_project.yml which doesn't have a real file_id
if self.model.file_id in self.manifest.files:
source_file = self.manifest.files[self.model.file_id]
# Schema files should never get here
if source_file.parse_file_type != 'schema':
source_file.env_vars.append(var)
return return_value
else:
msg = f"Env var required but not provided: '{var}'"
raise_parsing_error(msg)
class MacroContext(ProviderContext):
"""Internally, macros can be executed like nodes, with some restrictions:
@@ -1262,7 +1298,7 @@ class ModelContext(ProviderContext):
# This is called by '_context_for', used in 'render_with_context'
def generate_parser_model(
def generate_parser_model_context(
model: ManifestNode,
config: RuntimeConfig,
manifest: Manifest,
@@ -1279,7 +1315,7 @@ def generate_parser_model(
return ctx.to_dict()
def generate_generate_component_name_macro(
def generate_generate_name_macro_context(
macro: ParsedMacro,
config: RuntimeConfig,
manifest: Manifest,
@@ -1290,7 +1326,7 @@ def generate_generate_component_name_macro(
return ctx.to_dict()
def generate_runtime_model(
def generate_runtime_model_context(
model: ManifestNode,
config: RuntimeConfig,
manifest: Manifest,
@@ -1301,7 +1337,7 @@ def generate_runtime_model(
return ctx.to_dict()
def generate_runtime_macro(
def generate_runtime_macro_context(
macro: ParsedMacro,
config: RuntimeConfig,
manifest: Manifest,
@@ -1355,6 +1391,44 @@ def generate_parse_exposure(
}
class MetricRefResolver(BaseResolver):
def __call__(self, *args) -> str:
package = None
if len(args) == 1:
name = args[0]
elif len(args) == 2:
package, name = args
else:
ref_invalid_args(self.model, args)
self.validate_args(name, package)
self.model.refs.append(list(args))
return ''
def validate_args(self, name, package):
if not isinstance(name, str):
raise ParsingException(
f'In a metrics section in {self.model.original_file_path} '
f'the name argument to ref() must be a string'
)
def generate_parse_metrics(
metric: ParsedMetric,
config: RuntimeConfig,
manifest: Manifest,
package_name: str,
) -> Dict[str, Any]:
project = config.load_dependencies()[package_name]
return {
'ref': MetricRefResolver(
None,
metric,
project,
manifest,
),
}
# This class is currently used by the schema parser in order
# to limit the number of macros in the context by using
# the TestMacroNamespace
@@ -1389,8 +1463,13 @@ class TestContext(ProviderContext):
# 'depends_on.macros' by using the TestMacroNamespace
def _build_test_namespace(self):
depends_on_macros = []
# all generic tests use a macro named 'get_where_subquery' to wrap 'model' arg
# see generic_test_builders.build_model_str
get_where_subquery = self.macro_resolver.macros_by_name.get('get_where_subquery')
if get_where_subquery:
depends_on_macros.append(get_where_subquery.unique_id)
if self.model.depends_on and self.model.depends_on.macros:
depends_on_macros = self.model.depends_on.macros
depends_on_macros.extend(self.model.depends_on.macros)
lookup_macros = depends_on_macros.copy()
for macro_unique_id in lookup_macros:
lookup_macro = self.macro_resolver.macros.get(macro_unique_id)
@@ -1403,6 +1482,30 @@ class TestContext(ProviderContext):
)
self.namespace = macro_namespace
@contextmember
def env_var(self, var: str, default: Optional[str] = None) -> str:
return_value = None
if var.startswith(SECRET_ENV_PREFIX):
disallow_secret_env_var(var)
if var in os.environ:
return_value = os.environ[var]
elif default is not None:
return_value = default
if return_value is not None:
# Save the env_var value in the manifest and the var name in the source_file
if self.model:
self.manifest.env_vars[var] = return_value
# the "model" should only be test nodes, but just in case, check
if self.model.resource_type == NodeType.Test and self.model.file_key_name:
source_file = self.manifest.files[self.model.file_id]
(yaml_key, name) = self.model.file_key_name.split('.')
source_file.add_env_var(var, yaml_key, name)
return return_value
else:
msg = f"Env var required but not provided: '{var}'"
raise_parsing_error(msg)
def generate_test_context(
model: ManifestNode,

View File

@@ -0,0 +1,45 @@
import os
from typing import Any, Dict, Optional
from .base import BaseContext, contextmember
from dbt.exceptions import raise_parsing_error
from dbt.logger import SECRET_ENV_PREFIX
class SecretContext(BaseContext):
"""This context is used in profiles.yml + packages.yml. It can render secret
env vars that aren't usable elsewhere"""
@contextmember
def env_var(self, var: str, default: Optional[str] = None) -> str:
"""The env_var() function. Return the environment variable named 'var'.
If there is no such environment variable set, return the default.
If the default is None, raise an exception for an undefined variable.
In this context *only*, env_var will return the actual values of
env vars prefixed with DBT_ENV_SECRET_
"""
return_value = None
if var in os.environ:
return_value = os.environ[var]
elif default is not None:
return_value = default
if return_value is not None:
# do not save secret environment variables
if not var.startswith(SECRET_ENV_PREFIX):
self.env_vars[var] = return_value
# return the value even if it's a secret
return return_value
else:
msg = f"Env var required but not provided: '{var}'"
raise_parsing_error(msg)
def generate_secret_context(cli_vars: Dict[str, Any]) -> Dict[str, Any]:
ctx = SecretContext(cli_vars)
# This is not a Mashumaro to_dict call
return ctx.to_dict()

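By contrast with the other contexts, SecretContext above does resolve DBT_ENV_SECRET_-prefixed variables (profiles.yml and packages.yml legitimately need them) but deliberately skips recording them. A small standalone sketch of that rule:

    import os

    SECRET_ENV_PREFIX = 'DBT_ENV_SECRET_'

    def secret_env_var(recorded, var, default=None):
        # simplified stand-in for SecretContext.env_var above
        value = os.environ.get(var, default)
        if value is None:
            raise RuntimeError(f"Env var required but not provided: '{var}'")
        if not var.startswith(SECRET_ENV_PREFIX):
            recorded[var] = value       # only non-secret values are persisted
        return value

    os.environ['DBT_ENV_SECRET_GIT_TOKEN'] = 'shhh'
    recorded = {}
    print(secret_env_var(recorded, 'DBT_ENV_SECRET_GIT_TOKEN'))  # 'shhh' -- usable here
    print(recorded)                                              # {}     -- but never saved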
View File

@@ -8,6 +8,7 @@ from dbt.context.base import (
class TargetContext(BaseContext):
# subclass is ConfiguredContext
def __init__(self, config: HasCredentials, cli_vars: Dict[str, Any]):
super().__init__(cli_vars=cli_vars)
self.config = config

View File

@@ -7,7 +7,8 @@ from typing import (
)
from dbt.exceptions import InternalException
from dbt.utils import translate_aliases
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import NewConnectionOpening
from typing_extensions import Protocol
from dbt.dataclass_schema import (
dbtClassMixin, StrEnum, ExtensibleDbtClassMixin, HyphenatedDbtClassMixin,
@@ -101,10 +102,7 @@ class LazyHandle:
self.opener = opener
def resolve(self, connection: Connection) -> Connection:
logger.debug(
'Opening a new connection, currently in state {}'
.format(connection.state)
)
fire_event(NewConnectionOpening(connection_state=connection.state))
return self.opener(connection)

View File

@@ -190,6 +190,7 @@ class SourceFile(BaseSourceFile):
nodes: List[str] = field(default_factory=list)
docs: List[str] = field(default_factory=list)
macros: List[str] = field(default_factory=list)
env_vars: List[str] = field(default_factory=list)
@classmethod
def big_seed(cls, path: FilePath) -> 'SourceFile':
@@ -222,6 +223,7 @@ class SchemaSourceFile(BaseSourceFile):
tests: Dict[str, Any] = field(default_factory=dict)
sources: List[str] = field(default_factory=list)
exposures: List[str] = field(default_factory=list)
metrics: List[str] = field(default_factory=list)
# node patches contain models, seeds, snapshots, analyses
ndp: List[str] = field(default_factory=list)
# any macro patches in this file by macro unique_id.
@@ -230,6 +232,7 @@ class SchemaSourceFile(BaseSourceFile):
# Patches are only against external sources. Sources can be
# created too, but those are in 'sources'
sop: List[SourceKey] = field(default_factory=list)
env_vars: Dict[str, Any] = field(default_factory=dict)
pp_dict: Optional[Dict[str, Any]] = None
pp_test_index: Optional[Dict[str, Any]] = None
@@ -252,7 +255,7 @@ class SchemaSourceFile(BaseSourceFile):
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
# Remove partial parsing specific data
for key in ('pp_files', 'pp_test_index', 'pp_dict'):
for key in ('pp_test_index', 'pp_dict'):
if key in dct:
del dct[key]
return dct
@@ -299,5 +302,21 @@ class SchemaSourceFile(BaseSourceFile):
test_ids.extend(self.tests[key][name])
return test_ids
def add_env_var(self, var, yaml_key, name):
if yaml_key not in self.env_vars:
self.env_vars[yaml_key] = {}
if name not in self.env_vars[yaml_key]:
self.env_vars[yaml_key][name] = []
if var not in self.env_vars[yaml_key][name]:
self.env_vars[yaml_key][name].append(var)
def delete_from_env_vars(self, yaml_key, name):
# We delete all vars for this yaml_key/name because the
# entry has been scheduled for reparsing.
if yaml_key in self.env_vars and name in self.env_vars[yaml_key]:
del self.env_vars[yaml_key][name]
if not self.env_vars[yaml_key]:
del self.env_vars[yaml_key]
AnySourceFile = Union[SchemaSourceFile, SourceFile]

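For schema .yml files the tracking is more granular: add_env_var above keys the variables by yaml section and entry name, so a single model or source can be scheduled for reparsing when one of its env vars changes. A sketch of the resulting shape, using a standalone re-implementation of the same logic:

    def add_env_var(env_vars, var, yaml_key, name):
        # mirrors SchemaSourceFile.add_env_var above
        env_vars.setdefault(yaml_key, {}).setdefault(name, [])
        if var not in env_vars[yaml_key][name]:
            env_vars[yaml_key][name].append(var)

    env_vars = {}
    add_env_var(env_vars, 'MY_SCHEMA', 'models', 'my_model')
    add_env_var(env_vars, 'SOURCE_OWNER', 'sources', 'raw_events')
    print(env_vars)
    # {'models': {'my_model': ['MY_SCHEMA']}, 'sources': {'raw_events': ['SOURCE_OWNER']}}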
View File

@@ -6,8 +6,10 @@ from dbt.contracts.graph.parsed import (
ParsedHookNode,
ParsedModelNode,
ParsedExposure,
ParsedMetric,
ParsedResource,
ParsedRPCNode,
ParsedSqlNode,
ParsedGenericTestNode,
ParsedSeedNode,
ParsedSnapshotNode,
@@ -81,11 +83,17 @@ class CompiledModelNode(CompiledNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.Model]})
# TODO: rm?
@dataclass
class CompiledRPCNode(CompiledNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.RPCCall]})
@dataclass
class CompiledSqlNode(CompiledNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.SqlOperation]})
@dataclass
class CompiledSeedNode(CompiledNode):
# keep this in sync with ParsedSeedNode!
@@ -119,6 +127,7 @@ class CompiledGenericTestNode(CompiledNode, HasTestMetadata):
# keep this in sync with ParsedGenericTestNode!
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
column_name: Optional[str] = None
file_key_name: Optional[str] = None
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type:ignore
@@ -142,6 +151,7 @@ PARSED_TYPES: Dict[Type[CompiledNode], Type[ParsedResource]] = {
CompiledModelNode: ParsedModelNode,
CompiledHookNode: ParsedHookNode,
CompiledRPCNode: ParsedRPCNode,
CompiledSqlNode: ParsedSqlNode,
CompiledSeedNode: ParsedSeedNode,
CompiledSnapshotNode: ParsedSnapshotNode,
CompiledSingularTestNode: ParsedSingularTestNode,
@@ -154,6 +164,7 @@ COMPILED_TYPES: Dict[Type[ParsedResource], Type[CompiledNode]] = {
ParsedModelNode: CompiledModelNode,
ParsedHookNode: CompiledHookNode,
ParsedRPCNode: CompiledRPCNode,
ParsedSqlNode: CompiledSqlNode,
ParsedSeedNode: CompiledSeedNode,
ParsedSnapshotNode: CompiledSnapshotNode,
ParsedSingularTestNode: CompiledSingularTestNode,
@@ -189,6 +200,7 @@ NonSourceCompiledNode = Union[
CompiledModelNode,
CompiledHookNode,
CompiledRPCNode,
CompiledSqlNode,
CompiledGenericTestNode,
CompiledSeedNode,
CompiledSnapshotNode,
@@ -200,6 +212,7 @@ NonSourceParsedNode = Union[
ParsedHookNode,
ParsedModelNode,
ParsedRPCNode,
ParsedSqlNode,
ParsedGenericTestNode,
ParsedSeedNode,
ParsedSnapshotNode,
@@ -220,8 +233,10 @@ CompileResultNode = Union[
ParsedSourceDefinition,
]
# anything that participates in the graph: sources, exposures, manifest nodes
# anything that participates in the graph: sources, exposures, metrics,
# or manifest nodes
GraphMemberNode = Union[
CompileResultNode,
ParsedExposure,
ParsedMetric,
]

View File

@@ -15,8 +15,8 @@ from dbt.contracts.graph.compiled import (
)
from dbt.contracts.graph.parsed import (
ParsedMacro, ParsedDocumentation,
ParsedSourceDefinition, ParsedExposure, HasUniqueID,
UnpatchedSourceDefinition, ManifestNodes
ParsedSourceDefinition, ParsedExposure, ParsedMetric,
HasUniqueID, UnpatchedSourceDefinition, ManifestNodes
)
from dbt.contracts.graph.unparsed import SourcePatch
from dbt.contracts.files import SourceFile, SchemaSourceFile, FileHash, AnySourceFile
@@ -29,7 +29,8 @@ from dbt.exceptions import (
raise_duplicate_resource_name, raise_compiler_error,
)
from dbt.helper_types import PathSet
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import MergedFromState
from dbt.node_types import NodeType
from dbt.ui import line_wrap_message
from dbt import flags
@@ -546,6 +547,8 @@ class ParsingInfo:
@dataclass
class ManifestStateCheck(dbtClassMixin):
vars_hash: FileHash = field(default_factory=FileHash.empty)
project_env_vars_hash: FileHash = field(default_factory=FileHash.empty)
profile_env_vars_hash: FileHash = field(default_factory=FileHash.empty)
profile_hash: FileHash = field(default_factory=FileHash.empty)
project_hashes: MutableMapping[str, FileHash] = field(default_factory=dict)
@@ -562,6 +565,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
macros: MutableMapping[str, ParsedMacro] = field(default_factory=dict)
docs: MutableMapping[str, ParsedDocumentation] = field(default_factory=dict)
exposures: MutableMapping[str, ParsedExposure] = field(default_factory=dict)
metrics: MutableMapping[str, ParsedMetric] = field(default_factory=dict)
selectors: MutableMapping[str, Any] = field(default_factory=dict)
files: MutableMapping[str, AnySourceFile] = field(default_factory=dict)
metadata: ManifestMetadata = field(default_factory=ManifestMetadata)
@@ -569,6 +573,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
state_check: ManifestStateCheck = field(default_factory=ManifestStateCheck)
source_patches: MutableMapping[SourceKey, SourcePatch] = field(default_factory=dict)
disabled: MutableMapping[str, List[CompileResultNode]] = field(default_factory=dict)
env_vars: MutableMapping[str, str] = field(default_factory=dict)
_doc_lookup: Optional[DocLookup] = field(
default=None, metadata={'serialize': lambda x: None, 'deserialize': lambda x: None}
@@ -628,6 +633,9 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
def update_exposure(self, new_exposure: ParsedExposure):
_update_into(self.exposures, new_exposure)
def update_metric(self, new_metric: ParsedMetric):
_update_into(self.metrics, new_metric)
def update_node(self, new_node: ManifestNode):
_update_into(self.nodes, new_node)
@@ -645,6 +653,10 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
k: v.to_dict(omit_none=False)
for k, v in self.exposures.items()
},
'metrics': {
k: v.to_dict(omit_none=False)
for k, v in self.metrics.items()
},
'nodes': {
k: v.to_dict(omit_none=False)
for k, v in self.nodes.items()
@@ -697,7 +709,12 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
def get_resource_fqns(self) -> Mapping[str, PathSet]:
resource_fqns: Dict[str, Set[Tuple[str, ...]]] = {}
all_resources = chain(self.exposures.values(), self.nodes.values(), self.sources.values())
all_resources = chain(
self.exposures.values(),
self.nodes.values(),
self.sources.values(),
self.metrics.values()
)
for resource in all_resources:
resource_type_plural = resource.resource_type.pluralize()
if resource_type_plural not in resource_fqns:
@@ -725,6 +742,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
macros={k: _deepcopy(v) for k, v in self.macros.items()},
docs={k: _deepcopy(v) for k, v in self.docs.items()},
exposures={k: _deepcopy(v) for k, v in self.exposures.items()},
metrics={k: _deepcopy(v) for k, v in self.metrics.items()},
selectors={k: _deepcopy(v) for k, v in self.selectors.items()},
metadata=self.metadata,
disabled={k: _deepcopy(v) for k, v in self.disabled.items()},
@@ -737,6 +755,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.nodes.values(),
self.sources.values(),
self.exposures.values(),
self.metrics.values(),
))
forward_edges, backward_edges = build_node_edges(edge_members)
self.child_map = forward_edges
@@ -758,6 +777,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
macros=self.macros,
docs=self.docs,
exposures=self.exposures,
metrics=self.metrics,
selectors=self.selectors,
metadata=self.metadata,
disabled=self.disabled,
@@ -777,6 +797,8 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
return self.sources[unique_id]
elif unique_id in self.exposures:
return self.exposures[unique_id]
elif unique_id in self.metrics:
return self.metrics[unique_id]
else:
# something terrible has happened
raise dbt.exceptions.InternalException(
@@ -937,9 +959,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
# log up to 5 items
sample = list(islice(merged, 5))
logger.debug(
f'Merged {len(merged)} items from state (sample: {sample})'
)
fire_event(MergedFromState(nbr_merged=len(merged), sample=sample))
# Methods that were formerly in ParseResult
@@ -1005,6 +1025,11 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.exposures[exposure.unique_id] = exposure
source_file.exposures.append(exposure.unique_id)
def add_metric(self, source_file: SchemaSourceFile, metric: ParsedMetric):
_check_duplicates(metric, self.metrics)
self.metrics[metric.unique_id] = metric
source_file.metrics.append(metric.unique_id)
def add_disabled_nofile(self, node: CompileResultNode):
# There can be multiple disabled nodes for the same unique_id
if node.unique_id in self.disabled:
@@ -1041,6 +1066,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.macros,
self.docs,
self.exposures,
self.metrics,
self.selectors,
self.files,
self.metadata,
@@ -1048,6 +1074,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.state_check,
self.source_patches,
self.disabled,
self.env_vars,
self._doc_lookup,
self._source_lookup,
self._ref_lookup,
@@ -1070,7 +1097,7 @@ AnyManifest = Union[Manifest, MacroManifest]
@dataclass
@schema_version('manifest', 3)
@schema_version('manifest', 4)
class WritableManifest(ArtifactMixin):
nodes: Mapping[UniqueID, ManifestNode] = field(
metadata=dict(description=(
@@ -1097,6 +1124,11 @@ class WritableManifest(ArtifactMixin):
'The exposures defined in the dbt project and its dependencies'
))
)
metrics: Mapping[UniqueID, ParsedMetric] = field(
metadata=dict(description=(
'The metrics defined in the dbt project and its dependencies'
))
)
selectors: Mapping[UniqueID, Any] = field(
metadata=dict(description=(
'The selectors defined in selectors.yml'

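With the schema bump to v4, the written manifest gains a top-level "metrics" mapping alongside nodes, sources, and exposures. A hedged snippet for inspecting it; the artifact path and metadata key follow dbt's usual conventions rather than anything shown in this diff.

    import json

    with open('target/manifest.json') as f:
        manifest = json.load(f)

    print(manifest['metadata']['dbt_schema_version'])   # ends in manifest/v4.json after this change
    for unique_id, metric in manifest.get('metrics', {}).items():
        print(unique_id, metric.get('label'))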
View File

@@ -26,11 +26,10 @@ from dbt.contracts.graph.unparsed import (
UnparsedBaseNode, FreshnessThreshold, ExternalTable,
HasYamlMetadata, MacroArgument, UnparsedSourceDefinition,
UnparsedSourceTableDefinition, UnparsedColumn, TestDef,
ExposureOwner, ExposureType, MaturityType
ExposureOwner, ExposureType, MaturityType, MetricFilter
)
from dbt.contracts.util import Replaceable, AdditionalPropertiesMixin
from dbt.exceptions import warn_or_error
from dbt.logger import GLOBAL_LOGGER as logger # noqa
from dbt import flags
from dbt.node_types import NodeType
@@ -151,7 +150,7 @@ class ParsedNodeMixins(dbtClassMixin):
# Note: config should already be updated
self.patch_path: Optional[str] = patch.file_id
# update created_at so process_docs will run in partial parsing
self.created_at = int(time.time())
self.created_at = time.time()
self.description = patch.description
self.columns = patch.columns
self.meta = patch.meta
@@ -193,8 +192,9 @@ class ParsedNodeDefaults(ParsedNodeMandatory):
build_path: Optional[str] = None
deferred: bool = False
unrendered_config: Dict[str, Any] = field(default_factory=dict)
created_at: int = field(default_factory=lambda: int(time.time()))
created_at: float = field(default_factory=lambda: time.time())
config_call_dict: Dict[str, Any] = field(default_factory=dict)
_event_status: Dict[str, Any] = field(default_factory=dict)
def write_node(self, target_path: str, subdirectory: str, payload: str):
if (os.path.basename(self.path) ==
@@ -224,6 +224,8 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins, SerializableType):
def __post_serialize__(self, dct):
if 'config_call_dict' in dct:
del dct['config_call_dict']
if '_event_status' in dct:
del dct['_event_status']
return dct
@classmethod
@@ -240,6 +242,8 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins, SerializableType):
return ParsedSeedNode.from_dict(dct)
elif resource_type == 'rpc':
return ParsedRPCNode.from_dict(dct)
elif resource_type == 'sql':
return ParsedSqlNode.from_dict(dct)
elif resource_type == 'test':
if 'test_metadata' in dct:
return ParsedGenericTestNode.from_dict(dct)
@@ -341,11 +345,17 @@ class ParsedModelNode(ParsedNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.Model]})
# TODO: rm?
@dataclass
class ParsedRPCNode(ParsedNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.RPCCall]})
@dataclass
class ParsedSqlNode(ParsedNode):
resource_type: NodeType = field(metadata={'restrict': [NodeType.SqlOperation]})
def same_seeds(first: ParsedNode, second: ParsedNode) -> bool:
# for seeds, we check the hashes. If the hashes are different types,
# no match. If the hashes are both the same 'path', log a warning and
@@ -402,6 +412,9 @@ class ParsedSeedNode(ParsedNode):
@dataclass
class TestMetadata(dbtClassMixin, Replaceable):
name: str
# kwargs are the args that are left in the test builder after
# removing configs. They are set from the test builder when
# the test node is created.
kwargs: Dict[str, Any] = field(default_factory=dict)
namespace: Optional[str] = None
@@ -418,12 +431,17 @@ class ParsedSingularTestNode(ParsedNode):
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type: ignore
@property
def test_node_type(self):
return 'singular'
@dataclass
class ParsedGenericTestNode(ParsedNode, HasTestMetadata):
# keep this in sync with CompiledGenericTestNode!
resource_type: NodeType = field(metadata={'restrict': [NodeType.Test]})
column_name: Optional[str] = None
file_key_name: Optional[str] = None
# Was not able to make mypy happy and keep the code working. We need to
# refactor the various configs.
config: TestConfig = field(default_factory=TestConfig) # type: ignore
@@ -438,6 +456,10 @@ class ParsedGenericTestNode(ParsedNode, HasTestMetadata):
True
)
@property
def test_node_type(self):
return 'generic'
@dataclass
class IntermediateSnapshotNode(ParsedNode):
@@ -493,12 +515,12 @@ class ParsedMacro(UnparsedBaseNode, HasUniqueID):
docs: Docs = field(default_factory=Docs)
patch_path: Optional[str] = None
arguments: List[MacroArgument] = field(default_factory=list)
created_at: int = field(default_factory=lambda: int(time.time()))
created_at: float = field(default_factory=lambda: time.time())
def patch(self, patch: ParsedMacroPatch):
self.patch_path: Optional[str] = patch.file_id
self.description = patch.description
self.created_at = int(time.time())
self.created_at = time.time()
self.meta = patch.meta
self.docs = patch.docs
self.arguments = patch.arguments
@@ -614,7 +636,13 @@ class ParsedSourceDefinition(
patch_path: Optional[Path] = None
unrendered_config: Dict[str, Any] = field(default_factory=dict)
relation_name: Optional[str] = None
created_at: int = field(default_factory=lambda: int(time.time()))
created_at: float = field(default_factory=lambda: time.time())
_event_status: Dict[str, Any] = field(default_factory=dict)
def __post_serialize__(self, dct):
if '_event_status' in dct:
del dct['_event_status']
return dct
def same_database_representation(
self, other: 'ParsedSourceDefinition'
@@ -725,7 +753,7 @@ class ParsedExposure(UnparsedBaseNode, HasUniqueID, HasFqn):
depends_on: DependsOn = field(default_factory=DependsOn)
refs: List[List[str]] = field(default_factory=list)
sources: List[List[str]] = field(default_factory=list)
created_at: int = field(default_factory=lambda: int(time.time()))
created_at: float = field(default_factory=lambda: time.time())
@property
def depends_on_nodes(self):
@@ -771,12 +799,88 @@ class ParsedExposure(UnparsedBaseNode, HasUniqueID, HasFqn):
)
@dataclass
class ParsedMetric(UnparsedBaseNode, HasUniqueID, HasFqn):
model: str
name: str
description: str
label: str
type: str
sql: Optional[str]
timestamp: Optional[str]
filters: List[MetricFilter]
time_grains: List[str]
dimensions: List[str]
resource_type: NodeType = NodeType.Metric
meta: Dict[str, Any] = field(default_factory=dict)
tags: List[str] = field(default_factory=list)
sources: List[List[str]] = field(default_factory=list)
depends_on: DependsOn = field(default_factory=DependsOn)
refs: List[List[str]] = field(default_factory=list)
created_at: float = field(default_factory=lambda: time.time())
@property
def depends_on_nodes(self):
return self.depends_on.nodes
@property
def search_name(self):
return self.name
def same_model(self, old: 'ParsedMetric') -> bool:
return self.model == old.model
def same_dimensions(self, old: 'ParsedMetric') -> bool:
return self.dimensions == old.dimensions
def same_filters(self, old: 'ParsedMetric') -> bool:
return self.filters == old.filters
def same_description(self, old: 'ParsedMetric') -> bool:
return self.description == old.description
def same_label(self, old: 'ParsedMetric') -> bool:
return self.label == old.label
def same_type(self, old: 'ParsedMetric') -> bool:
return self.type == old.type
def same_sql(self, old: 'ParsedMetric') -> bool:
return self.sql == old.sql
def same_timestamp(self, old: 'ParsedMetric') -> bool:
return self.timestamp == old.timestamp
def same_time_grains(self, old: 'ParsedMetric') -> bool:
return self.time_grains == old.time_grains
def same_contents(self, old: Optional['ParsedMetric']) -> bool:
# existing when it didn't before is a change!
# metadata/tags changes are not "changes"
if old is None:
return True
return (
self.same_model(old) and
self.same_dimensions(old) and
self.same_filters(old) and
self.same_description(old) and
self.same_label(old) and
self.same_type(old) and
self.same_sql(old) and
self.same_timestamp(old) and
self.same_time_grains(old) and
True
)
ManifestNodes = Union[
ParsedAnalysisNode,
ParsedSingularTestNode,
ParsedHookNode,
ParsedModelNode,
ParsedRPCNode,
ParsedSqlNode,
ParsedGenericTestNode,
ParsedSeedNode,
ParsedSnapshotNode,
@@ -788,5 +892,6 @@ ParsedResource = Union[
ParsedMacro,
ParsedNode,
ParsedExposure,
ParsedMetric,
ParsedSourceDefinition,
]

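One small but deliberate change running through these contracts: created_at moves from int(time.time()) to time.time(), presumably so that entities patched within the same second still compare as newer for partial parsing. A quick illustration of what integer truncation loses:

    import time

    a, b = int(time.time()), int(time.time())
    print(a == b)          # almost always True -- same-second events collapse together
    x, y = time.time(), time.time()
    print(x <= y, x == y)  # ordering survives with floats; equality is now very unlikely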
View File

@@ -60,6 +60,7 @@ class UnparsedNode(UnparsedBaseNode, HasSQL):
NodeType.Operation,
NodeType.Seed,
NodeType.RPCCall,
NodeType.SqlOperation,
]})
@property
@@ -167,20 +168,25 @@ class TimePeriod(StrEnum):
@dataclass
class Time(dbtClassMixin, Replaceable):
count: int
period: TimePeriod
class Time(dbtClassMixin, Mergeable):
count: Optional[int] = None
period: Optional[TimePeriod] = None
def exceeded(self, actual_age: float) -> bool:
kwargs = {self.period.plural(): self.count}
if self.period is None or self.count is None:
return False
kwargs: Dict[str, int] = {self.period.plural(): self.count}
difference = timedelta(**kwargs).total_seconds()
return actual_age > difference
def __bool__(self):
return self.count is not None and self.period is not None
@dataclass
class FreshnessThreshold(dbtClassMixin, Mergeable):
warn_after: Optional[Time] = None
error_after: Optional[Time] = None
warn_after: Optional[Time] = field(default_factory=Time)
error_after: Optional[Time] = field(default_factory=Time)
filter: Optional[str] = None
def status(self, age: float) -> "dbt.contracts.results.FreshnessStatus":
@@ -193,7 +199,7 @@ class FreshnessThreshold(dbtClassMixin, Mergeable):
return FreshnessStatus.Pass
def __bool__(self):
return self.warn_after is not None or self.error_after is not None
return bool(self.warn_after) or bool(self.error_after)
@dataclass
@@ -279,7 +285,7 @@ class UnparsedSourceDefinition(dbtClassMixin, Replaceable):
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
if 'freshnewss' not in dct and self.freshness is None:
if 'freshness' not in dct and self.freshness is None:
dct['freshness'] = None
return dct
@@ -440,3 +446,27 @@ class UnparsedExposure(dbtClassMixin, Replaceable):
tags: List[str] = field(default_factory=list)
url: Optional[str] = None
depends_on: List[str] = field(default_factory=list)
@dataclass
class MetricFilter(dbtClassMixin, Replaceable):
field: str
operator: str
# TODO : Can we make this Any?
value: str
@dataclass
class UnparsedMetric(dbtClassMixin, Replaceable):
model: str
name: str
label: str
type: str
description: str = ''
sql: Optional[str] = None
timestamp: Optional[str] = None
time_grains: List[str] = field(default_factory=list)
dimensions: List[str] = field(default_factory=list)
filters: List[MetricFilter] = field(default_factory=list)
meta: Dict[str, Any] = field(default_factory=dict)
tags: List[str] = field(default_factory=list)

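A hedged example of feeding the new UnparsedMetric contract from a dict shaped like a metrics: entry in a schema .yml file. The field names come from the dataclass above; the specific metric values are made up for illustration, and from_dict is the usual dbtClassMixin constructor.

    from dbt.contracts.graph.unparsed import UnparsedMetric

    metric = UnparsedMetric.from_dict({
        'name': 'new_customers',              # required
        'label': 'New Customers',             # required
        'model': "ref('dim_customers')",      # required
        'type': 'count',                      # required
        'sql': 'customer_id',
        'timestamp': 'first_order_date',
        'time_grains': ['day', 'week', 'month'],
        'dimensions': ['plan', 'country'],
        'filters': [{'field': 'is_active', 'operator': '=', 'value': 'true'}],
    })
    print(metric.name, metric.time_grains)    # new_customers ['day', 'week', 'month']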
View File

@@ -1,7 +1,6 @@
from dbt.contracts.util import Replaceable, Mergeable, list_str
from dbt.contracts.connection import QueryComment, UserConfigContract
from dbt.helper_types import NoValue
from dbt.logger import GLOBAL_LOGGER as logger # noqa
from dbt.dataclass_schema import (
dbtClassMixin, ValidationError,
HyphenatedDbtClassMixin,
@@ -19,6 +18,18 @@ DEFAULT_SEND_ANONYMOUS_USAGE_STATS = True
class Name(ValidatedStringMixin):
ValidationRegex = r'^[^\d\W]\w*$'
@classmethod
def is_valid(cls, value: Any) -> bool:
if not isinstance(value, str):
return False
try:
cls.validate(value)
except ValidationError:
return False
return True
register_pattern(Name, r'^[^\d\W]\w*$')
@@ -232,12 +243,13 @@ class UserConfig(ExtensibleDbtClassMixin, Replaceable, UserConfigContract):
printer_width: Optional[int] = None
write_json: Optional[bool] = None
warn_error: Optional[bool] = None
log_format: Optional[bool] = None
log_format: Optional[str] = None
debug: Optional[bool] = None
version_check: Optional[bool] = None
fail_fast: Optional[bool] = None
use_experimental_parser: Optional[bool] = None
static_parser: Optional[bool] = None
indirect_selection: Optional[str] = None
@dataclass

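The new Name.is_valid classmethod gives callers (notably the interactive dbt init project-name validation) a non-raising check. Expected behavior based on the ValidationRegex above; the exact call sites are not shown in this diff.

    from dbt.contracts.project import Name

    print(Name.is_valid('my_project'))    # True
    print(Name.is_valid('my-project'))    # False -- '-' is not a word character
    print(Name.is_valid('1st_project'))   # False -- may not start with a digit
    print(Name.is_valid(123))             # False -- non-strings are rejected up front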
View File

@@ -11,10 +11,11 @@ from dbt.contracts.util import (
schema_version,
)
from dbt.exceptions import InternalException
from dbt.events.functions import fire_event
from dbt.events.types import TimingInfoCollected
from dbt.logger import (
TimingProcessor,
JsonOnly,
GLOBAL_LOGGER as logger,
)
from dbt.utils import lowercase
from dbt.dataclass_schema import dbtClassMixin, StrEnum
@@ -54,7 +55,13 @@ class collect_timing_info:
def __exit__(self, exc_type, exc_value, traceback):
self.timing_info.end()
with JsonOnly(), TimingProcessor(self.timing_info):
logger.debug('finished collecting timing info')
fire_event(TimingInfoCollected())
class RunningStatus(StrEnum):
Started = 'started'
Compiling = 'compiling'
Executing = 'executing'
class NodeStatus(StrEnum):
@@ -185,7 +192,7 @@ class RunExecutionResult(
@dataclass
@schema_version('run-results', 3)
@schema_version('run-results', 4)
class RunResultsArtifact(ExecutionResult, ArtifactMixin):
results: Sequence[RunResultOutput]
args: Dict[str, Any] = field(default_factory=dict)
@@ -369,7 +376,7 @@ class FreshnessResult(ExecutionResult):
@dataclass
@schema_version('sources', 2)
@schema_version('sources', 3)
class FreshnessExecutionResultArtifact(
ArtifactMixin,
VersionedSchema,

View File

@@ -1,819 +0,0 @@
import enum
import os
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional, Union, List, Any, Dict, Type, Sequence
from dbt.dataclass_schema import dbtClassMixin, StrEnum
from dbt.contracts.graph.compiled import CompileResultNode
from dbt.contracts.graph.manifest import WritableManifest
from dbt.contracts.results import (
RunResult, RunResultsArtifact, TimingInfo,
CatalogArtifact,
CatalogResults,
ExecutionResult,
FreshnessExecutionResultArtifact,
FreshnessResult,
RunOperationResult,
RunOperationResultsArtifact,
RunExecutionResult,
)
from dbt.contracts.util import VersionedSchema, schema_version
from dbt.exceptions import InternalException
from dbt.logger import LogMessage
from dbt.utils import restrict_to
TaskTags = Optional[Dict[str, Any]]
TaskID = uuid.UUID
# Inputs
@dataclass
class RPCParameters(dbtClassMixin):
task_tags: TaskTags
timeout: Optional[float]
@classmethod
def __pre_deserialize__(cls, data, omit_none=True):
data = super().__pre_deserialize__(data)
if 'timeout' not in data:
data['timeout'] = None
if 'task_tags' not in data:
data['task_tags'] = None
return data
@dataclass
class RPCExecParameters(RPCParameters):
name: str
sql: str
macros: Optional[str] = None
@dataclass
class RPCCompileParameters(RPCParameters):
threads: Optional[int] = None
models: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
@dataclass
class RPCListParameters(RPCParameters):
resource_types: Optional[List[str]] = None
models: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
selector: Optional[str] = None
output: Optional[str] = 'json'
output_keys: Optional[List[str]] = None
@dataclass
class RPCRunParameters(RPCParameters):
threads: Optional[int] = None
models: Union[None, str, List[str]] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
defer: Optional[bool] = None
@dataclass
class RPCSnapshotParameters(RPCParameters):
threads: Optional[int] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
@dataclass
class RPCTestParameters(RPCCompileParameters):
data: bool = False
schema: bool = False
state: Optional[str] = None
defer: Optional[bool] = None
@dataclass
class RPCSeedParameters(RPCParameters):
threads: Optional[int] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
show: bool = False
state: Optional[str] = None
@dataclass
class RPCDocsGenerateParameters(RPCParameters):
compile: bool = True
state: Optional[str] = None
@dataclass
class RPCBuildParameters(RPCParameters):
resource_types: Optional[List[str]] = None
select: Union[None, str, List[str]] = None
threads: Optional[int] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
state: Optional[str] = None
defer: Optional[bool] = None
@dataclass
class RPCCliParameters(RPCParameters):
cli: str
@dataclass
class RPCDepsParameters(RPCParameters):
pass
@dataclass
class KillParameters(RPCParameters):
task_id: TaskID
@dataclass
class PollParameters(RPCParameters):
request_token: TaskID
logs: bool = True
logs_start: int = 0
@dataclass
class PSParameters(RPCParameters):
active: bool = True
completed: bool = False
@dataclass
class StatusParameters(RPCParameters):
pass
@dataclass
class GCSettings(dbtClassMixin):
# start evicting the longest-ago-ended tasks here
maxsize: int
# start evicting all tasks before now - auto_reap_age when we have this
# many tasks in the table
reapsize: int
# a positive timedelta indicating how far back we should go
auto_reap_age: timedelta
@dataclass
class GCParameters(RPCParameters):
"""The gc endpoint takes three arguments, any of which may be present:
- task_ids: An optional list of task ID UUIDs to try to GC
- before: If provided, should be a datetime string. All tasks that finished
before that datetime will be GCed
- settings: If provided, should be a GCSettings object in JSON form. It
will be applied to the task manager before GC starts. By default the
existing gc settings remain.
"""
task_ids: Optional[List[TaskID]] = None
before: Optional[datetime] = None
settings: Optional[GCSettings] = None
@dataclass
class RPCRunOperationParameters(RPCParameters):
macro: str
args: Dict[str, Any] = field(default_factory=dict)
@dataclass
class RPCSourceFreshnessParameters(RPCParameters):
threads: Optional[int] = None
select: Union[None, str, List[str]] = None
exclude: Union[None, str, List[str]] = None
selector: Optional[str] = None
@dataclass
class GetManifestParameters(RPCParameters):
pass
# Outputs
@dataclass
class RemoteResult(VersionedSchema):
logs: List[LogMessage]
@dataclass
@schema_version('remote-list-results', 1)
class RemoteListResults(RemoteResult):
output: List[Any]
generated_at: datetime = field(default_factory=datetime.utcnow)
@dataclass
@schema_version('remote-deps-result', 1)
class RemoteDepsResult(RemoteResult):
generated_at: datetime = field(default_factory=datetime.utcnow)
@dataclass
@schema_version('remote-catalog-result', 1)
class RemoteCatalogResults(CatalogResults, RemoteResult):
generated_at: datetime = field(default_factory=datetime.utcnow)
def write(self, path: str):
artifact = CatalogArtifact.from_results(
generated_at=self.generated_at,
nodes=self.nodes,
sources=self.sources,
compile_results=self._compile_results,
errors=self.errors,
)
artifact.write(path)
@dataclass
class RemoteCompileResultMixin(RemoteResult):
raw_sql: str
compiled_sql: str
node: CompileResultNode
timing: List[TimingInfo]
@dataclass
@schema_version('remote-compile-result', 1)
class RemoteCompileResult(RemoteCompileResultMixin):
generated_at: datetime = field(default_factory=datetime.utcnow)
@property
def error(self):
return None
@dataclass
@schema_version('remote-execution-result', 1)
class RemoteExecutionResult(ExecutionResult, RemoteResult):
results: Sequence[RunResult]
args: Dict[str, Any] = field(default_factory=dict)
generated_at: datetime = field(default_factory=datetime.utcnow)
def write(self, path: str):
writable = RunResultsArtifact.from_execution_results(
generated_at=self.generated_at,
results=self.results,
elapsed_time=self.elapsed_time,
args=self.args,
)
writable.write(path)
@classmethod
def from_local_result(
cls,
base: RunExecutionResult,
logs: List[LogMessage],
) -> 'RemoteExecutionResult':
return cls(
generated_at=base.generated_at,
results=base.results,
elapsed_time=base.elapsed_time,
args=base.args,
logs=logs,
)
@dataclass
class ResultTable(dbtClassMixin):
column_names: List[str]
rows: List[Any]
@dataclass
@schema_version('remote-run-operation-result', 1)
class RemoteRunOperationResult(RunOperationResult, RemoteResult):
generated_at: datetime = field(default_factory=datetime.utcnow)
@classmethod
def from_local_result(
cls,
base: RunOperationResultsArtifact,
logs: List[LogMessage],
) -> 'RemoteRunOperationResult':
return cls(
generated_at=base.metadata.generated_at,
results=base.results,
elapsed_time=base.elapsed_time,
success=base.success,
logs=logs,
)
def write(self, path: str):
writable = RunOperationResultsArtifact.from_success(
success=self.success,
generated_at=self.generated_at,
elapsed_time=self.elapsed_time,
)
writable.write(path)
@dataclass
@schema_version('remote-freshness-result', 1)
class RemoteFreshnessResult(FreshnessResult, RemoteResult):
@classmethod
def from_local_result(
cls,
base: FreshnessResult,
logs: List[LogMessage],
) -> 'RemoteFreshnessResult':
return cls(
metadata=base.metadata,
results=base.results,
elapsed_time=base.elapsed_time,
logs=logs,
)
def write(self, path: str):
writable = FreshnessExecutionResultArtifact.from_result(base=self)
writable.write(path)
@dataclass
@schema_version('remote-run-result', 1)
class RemoteRunResult(RemoteCompileResultMixin):
table: ResultTable
generated_at: datetime = field(default_factory=datetime.utcnow)
RPCResult = Union[
RemoteCompileResult,
RemoteExecutionResult,
RemoteFreshnessResult,
RemoteCatalogResults,
RemoteDepsResult,
RemoteRunOperationResult,
]
# GC types
class GCResultState(StrEnum):
Deleted = 'deleted' # successful GC
Missing = 'missing' # nothing to GC
Running = 'running' # can't GC
@dataclass
@schema_version('remote-gc-result', 1)
class GCResult(RemoteResult):
logs: List[LogMessage] = field(default_factory=list)
deleted: List[TaskID] = field(default_factory=list)
missing: List[TaskID] = field(default_factory=list)
running: List[TaskID] = field(default_factory=list)
def add_result(self, task_id: TaskID, state: GCResultState):
if state == GCResultState.Missing:
self.missing.append(task_id)
elif state == GCResultState.Running:
self.running.append(task_id)
elif state == GCResultState.Deleted:
self.deleted.append(task_id)
else:
raise InternalException(
f'Got invalid state in add_result: {state}'
)
# Task management types
class TaskHandlerState(StrEnum):
NotStarted = 'not started'
Initializing = 'initializing'
Running = 'running'
Success = 'success'
Error = 'error'
Killed = 'killed'
Failed = 'failed'
def __lt__(self, other) -> bool:
"""A logical ordering for TaskHandlerState:
NotStarted < Initializing < Running < (Success, Error, Killed, Failed)
"""
if not isinstance(other, TaskHandlerState):
raise TypeError('cannot compare to non-TaskHandlerState')
order = (self.NotStarted, self.Initializing, self.Running)
smaller = set()
for value in order:
smaller.add(value)
if self == value:
return other not in smaller
return False
def __le__(self, other) -> bool:
# so that ((Success <= Error) is True)
return ((self < other) or
(self == other) or
(self.finished and other.finished))
def __gt__(self, other) -> bool:
if not isinstance(other, TaskHandlerState):
raise TypeError('cannot compare to non-TaskHandlerState')
order = (self.NotStarted, self.Initializing, self.Running)
smaller = set()
for value in order:
smaller.add(value)
if self == value:
return other in smaller
return other in smaller
def __ge__(self, other) -> bool:
# so that ((Success <= Error) is True)
return ((self > other) or
(self == other) or
(self.finished and other.finished))
@property
def finished(self) -> bool:
return self in (self.Error, self.Success, self.Killed, self.Failed)
@dataclass
class TaskTiming(dbtClassMixin):
state: TaskHandlerState
start: Optional[datetime]
end: Optional[datetime]
elapsed: Optional[float]
# These ought to be defaults but superclass order doesn't
# allow that to work
@classmethod
def __pre_deserialize__(cls, data):
data = super().__pre_deserialize__(data)
for field_name in ('start', 'end', 'elapsed'):
if field_name not in data:
data[field_name] = None
return data
@dataclass
class TaskRow(TaskTiming):
task_id: TaskID
request_source: str
method: str
request_id: Union[str, int]
tags: TaskTags = None
timeout: Optional[float] = None
@dataclass
@schema_version('remote-ps-result', 1)
class PSResult(RemoteResult):
rows: List[TaskRow]
class KillResultStatus(StrEnum):
Missing = 'missing'
NotStarted = 'not_started'
Killed = 'killed'
Finished = 'finished'
@dataclass
@schema_version('remote-kill-result', 1)
class KillResult(RemoteResult):
state: KillResultStatus = KillResultStatus.Missing
logs: List[LogMessage] = field(default_factory=list)
@dataclass
@schema_version('remote-manifest-result', 1)
class GetManifestResult(RemoteResult):
manifest: Optional[WritableManifest] = None
# this is kind of carefully structured: BlocksManifestTasks is implied by
# RequiresConfigReloadBefore and RequiresManifestReloadAfter
class RemoteMethodFlags(enum.Flag):
Empty = 0
BlocksManifestTasks = 1
RequiresConfigReloadBefore = 3
RequiresManifestReloadAfter = 5
Builtin = 8
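A standalone sketch of why those particular values encode the implication: 3 is 1|2 and 5 is 1|4, so both composite flags carry the `BlocksManifestTasks` bit.

```python
import enum

class Flags(enum.Flag):
    Empty = 0
    BlocksManifestTasks = 1
    RequiresConfigReloadBefore = 3   # 1 | 2
    RequiresManifestReloadAfter = 5  # 1 | 4
    Builtin = 8

# both composite flags imply BlocksManifestTasks via the shared low bit
assert Flags.RequiresConfigReloadBefore & Flags.BlocksManifestTasks
assert Flags.RequiresManifestReloadAfter & Flags.BlocksManifestTasks
```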
# Polling types
@dataclass
class PollResult(RemoteResult, TaskTiming):
state: TaskHandlerState
tags: TaskTags
start: Optional[datetime]
end: Optional[datetime]
elapsed: Optional[float]
# These ought to be defaults but superclass order doesn't
# allow that to work
@classmethod
def __pre_deserialize__(cls, data):
data = super().__pre_deserialize__(data)
for field_name in ('start', 'end', 'elapsed'):
if field_name not in data:
data[field_name] = None
return data
@dataclass
@schema_version('poll-remote-deps-result', 1)
class PollRemoteEmptyCompleteResult(PollResult, RemoteResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
generated_at: datetime = field(default_factory=datetime.utcnow)
@classmethod
def from_result(
cls: Type['PollRemoteEmptyCompleteResult'],
base: RemoteDepsResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> 'PollRemoteEmptyCompleteResult':
return cls(
logs=logs,
tags=tags,
state=timing.state,
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
generated_at=base.generated_at
)
@dataclass
@schema_version('poll-remote-killed-result', 1)
class PollKilledResult(PollResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Killed),
)
@dataclass
@schema_version('poll-remote-execution-result', 1)
class PollExecuteCompleteResult(
RemoteExecutionResult,
PollResult,
):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type['PollExecuteCompleteResult'],
base: RemoteExecutionResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> 'PollExecuteCompleteResult':
return cls(
results=base.results,
elapsed_time=base.elapsed_time,
logs=logs,
tags=tags,
state=timing.state,
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
generated_at=base.generated_at,
)
@dataclass
@schema_version('poll-remote-compile-result', 1)
class PollCompileCompleteResult(
RemoteCompileResult,
PollResult,
):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type['PollCompileCompleteResult'],
base: RemoteCompileResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> 'PollCompileCompleteResult':
return cls(
raw_sql=base.raw_sql,
compiled_sql=base.compiled_sql,
node=base.node,
timing=base.timing,
logs=logs,
tags=tags,
state=timing.state,
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
generated_at=base.generated_at
)
@dataclass
@schema_version('poll-remote-run-result', 1)
class PollRunCompleteResult(
RemoteRunResult,
PollResult,
):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type['PollRunCompleteResult'],
base: RemoteRunResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> 'PollRunCompleteResult':
return cls(
raw_sql=base.raw_sql,
compiled_sql=base.compiled_sql,
node=base.node,
timing=base.timing,
logs=logs,
table=base.table,
tags=tags,
state=timing.state,
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
generated_at=base.generated_at
)
@dataclass
@schema_version('poll-remote-run-operation-result', 1)
class PollRunOperationCompleteResult(
RemoteRunOperationResult,
PollResult,
):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type['PollRunOperationCompleteResult'],
base: RemoteRunOperationResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> 'PollRunOperationCompleteResult':
return cls(
success=base.success,
results=base.results,
generated_at=base.generated_at,
elapsed_time=base.elapsed_time,
logs=logs,
tags=tags,
state=timing.state,
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
)
@dataclass
@schema_version('poll-remote-catalog-result', 1)
class PollCatalogCompleteResult(RemoteCatalogResults, PollResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type['PollCatalogCompleteResult'],
base: RemoteCatalogResults,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> 'PollCatalogCompleteResult':
return cls(
nodes=base.nodes,
sources=base.sources,
generated_at=base.generated_at,
errors=base.errors,
_compile_results=base._compile_results,
logs=logs,
tags=tags,
state=timing.state,
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
)
@dataclass
@schema_version('poll-remote-in-progress-result', 1)
class PollInProgressResult(PollResult):
pass
@dataclass
@schema_version('poll-remote-get-manifest-result', 1)
class PollGetManifestResult(GetManifestResult, PollResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type['PollGetManifestResult'],
base: GetManifestResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> 'PollGetManifestResult':
return cls(
manifest=base.manifest,
logs=logs,
tags=tags,
state=timing.state,
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
)
@dataclass
@schema_version('poll-remote-freshness-result', 1)
class PollFreshnessResult(RemoteFreshnessResult, PollResult):
state: TaskHandlerState = field(
metadata=restrict_to(TaskHandlerState.Success,
TaskHandlerState.Failed),
)
@classmethod
def from_result(
cls: Type['PollFreshnessResult'],
base: RemoteFreshnessResult,
tags: TaskTags,
timing: TaskTiming,
logs: List[LogMessage],
) -> 'PollFreshnessResult':
return cls(
logs=logs,
tags=tags,
state=timing.state,
start=timing.start,
end=timing.end,
elapsed=timing.elapsed,
metadata=base.metadata,
results=base.results,
elapsed_time=base.elapsed_time,
)
# Manifest parsing types
class ManifestStatus(StrEnum):
Init = 'init'
Compiling = 'compiling'
Ready = 'ready'
Error = 'error'
@dataclass
@schema_version('remote-status-result', 1)
class LastParse(RemoteResult):
state: ManifestStatus = ManifestStatus.Init
logs: List[LogMessage] = field(default_factory=list)
error: Optional[Dict[str, Any]] = None
timestamp: datetime = field(default_factory=datetime.utcnow)
pid: int = field(default_factory=os.getpid)

core/dbt/contracts/sql.py

@@ -0,0 +1,88 @@
import uuid
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, List, Any, Dict, Sequence
from dbt.dataclass_schema import dbtClassMixin
from dbt.contracts.graph.compiled import CompileResultNode
from dbt.contracts.results import (
RunResult, RunResultsArtifact, TimingInfo,
ExecutionResult,
RunExecutionResult,
)
from dbt.contracts.util import VersionedSchema, schema_version
from dbt.logger import LogMessage
TaskTags = Optional[Dict[str, Any]]
TaskID = uuid.UUID
# Outputs
@dataclass
class RemoteResult(VersionedSchema):
logs: List[LogMessage]
@dataclass
class RemoteCompileResultMixin(RemoteResult):
raw_sql: str
compiled_sql: str
node: CompileResultNode
timing: List[TimingInfo]
@dataclass
@schema_version('remote-compile-result', 1)
class RemoteCompileResult(RemoteCompileResultMixin):
generated_at: datetime = field(default_factory=datetime.utcnow)
@property
def error(self):
return None
@dataclass
@schema_version('remote-execution-result', 1)
class RemoteExecutionResult(ExecutionResult, RemoteResult):
results: Sequence[RunResult]
args: Dict[str, Any] = field(default_factory=dict)
generated_at: datetime = field(default_factory=datetime.utcnow)
def write(self, path: str):
writable = RunResultsArtifact.from_execution_results(
generated_at=self.generated_at,
results=self.results,
elapsed_time=self.elapsed_time,
args=self.args,
)
writable.write(path)
@classmethod
def from_local_result(
cls,
base: RunExecutionResult,
logs: List[LogMessage],
) -> 'RemoteExecutionResult':
return cls(
generated_at=base.generated_at,
results=base.results,
elapsed_time=base.elapsed_time,
args=base.args,
logs=logs,
)
@dataclass
class ResultTable(dbtClassMixin):
column_names: List[str]
rows: List[Any]
@dataclass
@schema_version('remote-run-result', 1)
class RemoteRunResult(RemoteCompileResultMixin):
table: ResultTable
generated_at: datetime = field(default_factory=datetime.utcnow)


@@ -14,7 +14,8 @@ class PreviousState:
manifest_path = self.path / 'manifest.json'
if manifest_path.exists() and manifest_path.is_file():
try:
self.manifest = WritableManifest.read(str(manifest_path))
# we want to bail with an error if schema versions don't match
self.manifest = WritableManifest.read_and_check_versions(str(manifest_path))
except IncompatibleSchemaException as exc:
exc.add_filename(str(manifest_path))
raise
@@ -22,7 +23,8 @@ class PreviousState:
results_path = self.path / 'run_results.json'
if results_path.exists() and results_path.is_file():
try:
self.results = RunResultsArtifact.read(str(results_path))
# we want to bail with an error if schema versions don't match
self.results = RunResultsArtifact.read_and_check_versions(str(results_path))
except IncompatibleSchemaException as exc:
exc.add_filename(str(results_path))
raise


@@ -9,9 +9,10 @@ from dbt.clients.system import write_json, read_json
from dbt.exceptions import (
InternalException,
RuntimeException,
IncompatibleSchemaException
)
from dbt.version import __version__
from dbt.tracking import get_invocation_id
from dbt.events.functions import get_invocation_id
from dbt.dataclass_schema import dbtClassMixin
SourceKey = Tuple[str, str]
@@ -158,6 +159,8 @@ def get_metadata_env() -> Dict[str, str]:
}
# This is used in the ManifestMetadata, RunResultsMetadata, RunOperationResultMetadata,
# FreshnessMetadata, and CatalogMetadata classes
@dataclasses.dataclass
class BaseArtifactMetadata(dbtClassMixin):
dbt_schema_version: str
@@ -177,6 +180,17 @@ class BaseArtifactMetadata(dbtClassMixin):
return dct
# This is used as a class decorator to set the schema_version in the
# 'dbt_schema_version' class attribute. (It's copied into the metadata objects.)
# Name attributes of SchemaVersion in classes with the 'schema_version' decorator:
# manifest
# run-results
# run-operation-result
# sources
# catalog
# remote-compile-result
# remote-execution-result
# remote-run-result
def schema_version(name: str, version: int):
def inner(cls: Type[VersionedSchema]):
cls.dbt_schema_version = SchemaVersion(
@@ -187,6 +201,7 @@ def schema_version(name: str, version: int):
return inner
# This is used in the ArtifactMixin and RemoteResult classes
@dataclasses.dataclass
class VersionedSchema(dbtClassMixin):
dbt_schema_version: ClassVar[SchemaVersion]
@@ -198,6 +213,30 @@ class VersionedSchema(dbtClassMixin):
result['$id'] = str(cls.dbt_schema_version)
return result
@classmethod
def read_and_check_versions(cls, path: str):
try:
data = read_json(path)
except (EnvironmentError, ValueError) as exc:
raise RuntimeException(
f'Could not read {cls.__name__} at "{path}" as JSON: {exc}'
) from exc
# Check metadata version. There is a class variable 'dbt_schema_version', but
# that doesn't show up in artifacts, where it only exists in the 'metadata'
# dictionary.
if hasattr(cls, 'dbt_schema_version'):
if 'metadata' in data and 'dbt_schema_version' in data['metadata']:
previous_schema_version = data['metadata']['dbt_schema_version']
# cls.dbt_schema_version is a SchemaVersion object
if str(cls.dbt_schema_version) != previous_schema_version:
raise IncompatibleSchemaException(
expected=str(cls.dbt_schema_version),
found=previous_schema_version
)
return cls.from_dict(data) # type: ignore
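A hedged usage sketch mirroring the `PreviousState` change earlier in this diff: read a previously written artifact and fail fast on a schema-version mismatch. This assumes the dbt internals shown here are importable and that the artifact path exists.

```python
from dbt.contracts.results import RunResultsArtifact
from dbt.exceptions import IncompatibleSchemaException

path = "target/run_results.json"  # hypothetical path
try:
    results = RunResultsArtifact.read_and_check_versions(path)
except IncompatibleSchemaException as exc:
    exc.add_filename(path)
    raise
```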
T = TypeVar('T', bound='ArtifactMixin')
@@ -205,6 +244,8 @@ T = TypeVar('T', bound='ArtifactMixin')
# metadata should really be a Generic[T_M] where T_M is a TypeVar bound to
# BaseArtifactMetadata. Unfortunately this isn't possible due to a mypy issue:
# https://github.com/python/mypy/issues/7520
# This is used in the WritableManifest, RunResultsArtifact, RunOperationResultsArtifact,
# and CatalogArtifact
@dataclasses.dataclass(init=False)
class ArtifactMixin(VersionedSchema, Writable, Readable):
metadata: BaseArtifactMetadata


@@ -22,7 +22,7 @@ class DateTimeSerialization(SerializationStrategy):
out = value.isoformat()
# Assume UTC if timezone is missing
if value.tzinfo is None:
out = out + "Z"
out += "Z"
return out
def deserialize(self, value):


@@ -36,9 +36,9 @@ class DBTDeprecation:
if self.name not in active_deprecations:
desc = self.description.format(**kwargs)
msg = ui.line_wrap_message(
desc, prefix='* Deprecation Warning: '
desc, prefix='Deprecated functionality\n\n'
)
dbt.exceptions.warn_or_error(msg)
dbt.exceptions.warn_or_error(msg, log_fmt=ui.warning_tag('{}'))
self.track_deprecation_warn()
active_deprecations.add(self.name)
@@ -61,13 +61,20 @@ class PackageInstallPathDeprecation(DBTDeprecation):
class ConfigPathDeprecation(DBTDeprecation):
_name = 'project_config_path'
_description = '''\
The `{deprecated_path}` config has been deprecated in favor of `{exp_path}`.
The `{deprecated_path}` config has been renamed to `{exp_path}`.
Please update your `dbt_project.yml` configuration to reflect this change.
'''
class ConfigSourcePathDeprecation(ConfigPathDeprecation):
_name = 'project-config-source-paths'
class ConfigDataPathDeprecation(ConfigPathDeprecation):
_name = 'project-config-data-paths'
_adapter_renamed_description = """\
The adapter function `adapter.{old_name}` is deprecated and will be removed in
a future release of dbt. Please use `adapter.{new_name}` instead.
@@ -106,7 +113,8 @@ def warn(name, *args, **kwargs):
active_deprecations: Set[str] = set()
deprecations_list: List[DBTDeprecation] = [
ConfigPathDeprecation(),
ConfigSourcePathDeprecation(),
ConfigDataPathDeprecation(),
PackageInstallPathDeprecation(),
PackageRedirectDeprecation()
]


@@ -6,7 +6,8 @@ from typing import List, Optional, Generic, TypeVar
from dbt.clients import system
from dbt.contracts.project import ProjectPackageMetadata
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import DepsSetDownloadDirectory
DOWNLOADS_PATH = None
@@ -31,7 +32,7 @@ def downloads_directory():
remove_downloads = True
system.make_directory(DOWNLOADS_PATH)
logger.debug("Set downloads directory='{}'".format(DOWNLOADS_PATH))
fire_event(DepsSetDownloadDirectory(path=DOWNLOADS_PATH))
yield DOWNLOADS_PATH


@@ -12,7 +12,8 @@ from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path
from dbt.exceptions import (
ExecutableError, warn_or_error, raise_dependency_error
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import EnsureGitInstalled
from dbt import ui
PIN_PACKAGE_URL = 'https://docs.getdbt.com/docs/package-management#section-specifying-package-versions' # noqa
@@ -81,11 +82,7 @@ class GitPinnedPackage(GitPackageMixin, PinnedPackage):
)
except ExecutableError as exc:
if exc.cmd and exc.cmd[0] == 'git':
logger.error(
'Make sure git is installed on your machine. More '
'information: '
'https://docs.getdbt.com/docs/package-management'
)
fire_event(EnsureGitInstalled())
raise
return os.path.join(get_downloads_path(), dir_)


@@ -6,7 +6,8 @@ from dbt.contracts.project import (
ProjectPackageMetadata,
LocalPackage,
)
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import DepsCreatingLocalSymlink, DepsSymlinkNotAvailable
class LocalPackageMixin:
@@ -57,12 +58,11 @@ class LocalPinnedPackage(LocalPackageMixin, PinnedPackage):
system.remove_file(dest_path)
if can_create_symlink:
logger.debug(' Creating symlink to local dependency.')
fire_event(DepsCreatingLocalSymlink())
system.make_symlink(src_path, dest_path)
else:
logger.debug(' Symlinks are not available on this '
'OS, copying dependency.')
fire_event(DepsSymlinkNotAvailable())
shutil.copytree(src_path, dest_path)


@@ -1,4 +1,5 @@
import os
import functools
from typing import List
from dbt import semver
@@ -14,6 +15,7 @@ from dbt.exceptions import (
DependencyException,
package_not_found,
)
from dbt.utils import _connection_exception_retry as connection_exception_retry
class RegistryPackageMixin:
@@ -68,9 +70,28 @@ class RegistryPinnedPackage(RegistryPackageMixin, PinnedPackage):
system.make_directory(os.path.dirname(tar_path))
download_url = metadata.downloads.tarball
system.download_with_retries(download_url, tar_path)
deps_path = project.packages_install_path
package_name = self.get_project_name(project, renderer)
download_untar_fn = functools.partial(
self.download_and_untar,
download_url,
tar_path,
deps_path,
package_name
)
connection_exception_retry(download_untar_fn, 5)
def download_and_untar(self, download_url, tar_path, deps_path, package_name):
"""
Sometimes the download of the files fails and we want to retry. Sometimes the
download appears successful but the file did not make it through as expected
(generally due to a github incident). Either way we want to retry downloading
and untarring to see if we can get a success. Call this within
`_connection_exception_retry`
"""
system.download(download_url, tar_path)
system.untar_package(tar_path, deps_path, package_name)
@@ -127,9 +148,12 @@ class RegistryUnpinnedPackage(
raise DependencyException(new_msg) from e
available = registry.get_available_versions(self.package)
prerelease_version_specified = any(
bool(version.prerelease) for version in self.versions
)
installable = semver.filter_installable(
available,
self.install_prerelease
self.install_prerelease or prerelease_version_specified
)
available_latest = installable[-1]


@@ -3,7 +3,6 @@ from typing import Dict, List, NoReturn, Union, Type, Iterator, Set
from dbt.exceptions import raise_dependency_error, InternalException
from dbt.context.target import generate_target_context
from dbt.config import Project, RuntimeConfig
from dbt.config.renderer import DbtProjectYamlRenderer
from dbt.deps.base import BasePackage, PinnedPackage, UnpinnedPackage
@@ -126,8 +125,7 @@ def resolve_packages(
pending = PackageListing.from_contracts(packages)
final = PackageListing()
ctx = generate_target_context(config, config.cli_vars)
renderer = DbtProjectYamlRenderer(ctx)
renderer = DbtProjectYamlRenderer(config, config.cli_vars)
while pending:
next_pending = PackageListing()

core/dbt/events/README.md

@@ -0,0 +1,68 @@
# Events Module
The Events module is the implementation for structured logging. These events represent both a programmatic interface to dbt processes and human-readable messaging in one centralized place. The centralization allows for leveraging mypy to enforce interface invariants across all dbt events, and the distinct type layer allows for decoupling events from libraries such as loggers.
# Using the Events Module
The events module provides, in `events.types`, the types that represent what is happening in dbt. These types are intended to be an exhaustive list of everything happening within dbt that needs to be logged, streamed, or printed. To fire an event, use `events.functions::fire_event`; it is the entry point to the module from everywhere in dbt.
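For example, a call site elsewhere in dbt might look like this (the event type is the one used in the example below):

```python
from dbt.events.functions import fire_event
from dbt.events.types import PartialParsingDeletedExposure

fire_event(PartialParsingDeletedExposure(unique_id="exposure.my_project.my_exposure"))
```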
# Adding a New Event
In `events.types`, add a new class that represents the new event. Every event must be a dataclass with, at minimum, a code. You may also include other values used to construct downstream messaging. Only include the data necessary to construct this message within this class. You must extend all destinations (e.g., if your log message belongs on the CLI, extend `Cli`) as well as the log level this event belongs to. This system has been designed to take full advantage of mypy, so running it will catch anything you may miss.
## Required for Every Event
- a string attribute `code`, that's unique across events
- assign a log level by extending `DebugLevel`, `InfoLevel`, `WarnLevel`, or `ErrorLevel`
- a message()
- extend `File` and/or `Cli` based on where it should output
Example
```
@dataclass
class PartialParsingDeletedExposure(DebugLevel, Cli, File):
unique_id: str
code: str = "I049"
def message(self) -> str:
return f"Partial parsing: deleted exposure {self.unique_id}"
```
## Optional (based on your event)
- Events associated with node status changes must have `report_node_data` passed in and be extended with `NodeInfo`
- define `asdict` if your data is not serializable to json
Example
```
@dataclass
class SuperImportantNodeEvent(InfoLevel, File, NodeInfo):
node_name: str
run_result: RunResult
report_node_data: ParsedModelNode # may vary
code: str = "Q036"
def message(self) -> str:
return f"{self.node_name} had overly verbose result of {run_result}"
@classmethod
def asdict(cls, data: list) -> dict:
return dict((k, str(v)) for k, v in data)
```
All values other than `code` and `report_node_data` will be included in the `data` node of the json log output.
Once your event has been added, add a dummy call to your new event at the bottom of `types.py` and add your new event to the `sample_values` list in `test/unit/test_events.py`.
# Adapter Maintainers
To integrate existing log messages from adapters, you likely have a line of code like this in your adapter already:
```python
from dbt.logger import GLOBAL_LOGGER as logger
```
Simply change it to these two lines with your adapter's database name, and all your existing call sites will now use the new system for v1.0:
```python
from dbt.events import AdapterLogger
logger = AdapterLogger("<database name>")
# e.g. AdapterLogger("Snowflake")
```


@@ -0,0 +1 @@
from .adapter_endpoint import AdapterLogger # noqa: F401


@@ -0,0 +1,65 @@
from dataclasses import dataclass
from dbt.events.functions import fire_event
from dbt.events.types import (
AdapterEventDebug, AdapterEventInfo, AdapterEventWarning, AdapterEventError
)
@dataclass
class AdapterLogger():
name: str
def debug(self, msg, *args, exc_info=None, extra=None, stack_info=False):
event = AdapterEventDebug(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)
def info(self, msg, *args, exc_info=None, extra=None, stack_info=False):
event = AdapterEventInfo(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)
def warning(self, msg, *args, exc_info=None, extra=None, stack_info=False):
event = AdapterEventWarning(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)
def error(self, msg, *args, exc_info=None, extra=None, stack_info=False):
event = AdapterEventError(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)
# The default exc_info=True is what makes this method different
def exception(self, msg, *args, exc_info=True, extra=None, stack_info=False):
event = AdapterEventError(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)
def critical(self, msg, *args, exc_info=False, extra=None, stack_info=False):
event = AdapterEventError(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)


@@ -0,0 +1,173 @@
from abc import ABCMeta, abstractmethod, abstractproperty
from dataclasses import dataclass
from datetime import datetime
import os
import threading
from typing import Any, Optional
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# These base types define the _required structure_ for the concrete event #
# types defined in types.py #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# in preparation for #3977
class TestLevel():
def level_tag(self) -> str:
return "test"
class DebugLevel():
def level_tag(self) -> str:
return "debug"
class InfoLevel():
def level_tag(self) -> str:
return "info"
class WarnLevel():
def level_tag(self) -> str:
return "warn"
class ErrorLevel():
def level_tag(self) -> str:
return "error"
class Cache():
# Events with this class will only be logged when the `--log-cache-events` flag is passed
pass
@dataclass
class Node():
node_path: str
node_name: str
unique_id: str
resource_type: str
materialized: str
node_status: str
node_started_at: datetime
node_finished_at: Optional[datetime]
type: str = 'node_status'
@dataclass
class ShowException():
# N.B.:
# As long as we stick with the current convention of setting the member vars in the
# `message` method of subclasses, this is a safe operation.
# If that ever changes we'll want to reassess.
def __post_init__(self):
self.exc_info: Any = True
self.stack_info: Any = None
self.extra: Any = None
# TODO add exhaustiveness checking for subclasses
# can't use ABCs with @dataclass because of https://github.com/python/mypy/issues/5374
# top-level superclass for all events
class Event(metaclass=ABCMeta):
# fields that should be on all events with their default implementations
log_version: int = 1
ts: Optional[datetime] = None # use getter for non-optional
ts_rfc3339: Optional[str] = None # use getter for non-optional
pid: Optional[int] = None # use getter for non-optional
node_info: Optional[Node]
# four digit string code that uniquely identifies this type of event
# uniqueness and valid characters are enforced by tests
@abstractproperty
@staticmethod
def code() -> str:
raise Exception("code() not implemented for event")
# do not define this yourself. inherit it from one of the above level types.
@abstractmethod
def level_tag(self) -> str:
raise Exception("level_tag not implemented for Event")
# Solely the human readable message. Timestamps and formatting will be added by the logger.
# Must override yourself
@abstractmethod
def message(self) -> str:
raise Exception("msg not implemented for Event")
# exactly one time stamp per concrete event
def get_ts(self) -> datetime:
if not self.ts:
self.ts = datetime.utcnow()
self.ts_rfc3339 = self.ts.strftime('%Y-%m-%dT%H:%M:%S.%fZ')
return self.ts
# preformatted time stamp
def get_ts_rfc3339(self) -> str:
if not self.ts_rfc3339:
# get_ts() creates the formatted string too so all time logic is centralized
self.get_ts()
return self.ts_rfc3339 # type: ignore
# exactly one pid per concrete event
def get_pid(self) -> int:
if not self.pid:
self.pid = os.getpid()
return self.pid
# in theory threads can change so we don't cache them.
def get_thread_name(self) -> str:
return threading.current_thread().getName()
@classmethod
def get_invocation_id(cls) -> str:
from dbt.events.functions import get_invocation_id
return get_invocation_id()
# default dict factory for all events. can override on concrete classes.
@classmethod
def asdict(cls, data: list) -> dict:
d = dict()
for k, v in data:
# stringify all exceptions
if isinstance(v, Exception) or isinstance(v, BaseException):
d[k] = str(v)
# skip all binary data
elif isinstance(v, bytes):
continue
else:
d[k] = v
return d
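A standalone sketch of how a `dict_factory` like this behaves with `dataclasses.asdict`: exceptions are stringified and `bytes` fields are dropped. The names below are illustrative only.

```python
import dataclasses

@dataclasses.dataclass
class Demo:
    err: Exception
    raw: bytes
    name: str

def asdict(data: list) -> dict:
    d = dict()
    for k, v in data:
        if isinstance(v, BaseException):
            d[k] = str(v)          # stringify exceptions
        elif isinstance(v, bytes):
            continue               # skip binary data
        else:
            d[k] = v
    return d

print(dataclasses.asdict(Demo(ValueError("boom"), b"\x00", "x"), dict_factory=asdict))
# -> {'err': 'boom', 'name': 'x'}
```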
@dataclass # type: ignore
class NodeInfo(Event, metaclass=ABCMeta):
report_node_data: Any # Union[ParsedModelNode, ...] TODO: resolve circular imports
def get_node_info(self):
node_info = Node(
node_path=self.report_node_data.path,
node_name=self.report_node_data.name,
unique_id=self.report_node_data.unique_id,
resource_type=self.report_node_data.resource_type.value,
materialized=self.report_node_data.config.get('materialized'),
node_status=str(self.report_node_data._event_status.get('node_status')),
node_started_at=self.report_node_data._event_status.get("started_at"),
node_finished_at=self.report_node_data._event_status.get("finished_at")
)
return node_info
class File(Event, metaclass=ABCMeta):
# Solely the human readable message. Timestamps and formatting will be added by the logger.
def file_msg(self) -> str:
# returns the event msg unless overridden in the concrete class
return self.message()
class Cli(Event, metaclass=ABCMeta):
# Solely the human readable message. Timestamps and formatting will be added by the logger.
def cli_msg(self) -> str:
# returns the event msg unless overridden in the concrete class
return self.message()

core/dbt/events/format.py

@@ -0,0 +1,49 @@
from dbt import ui
from dbt.node_types import NodeType
from typing import Optional, Union
def format_fancy_output_line(
msg: str, status: str, index: Optional[int],
total: Optional[int], execution_time: Optional[float] = None,
truncate: bool = False
) -> str:
if index is None or total is None:
progress = ''
else:
progress = '{} of {} '.format(index, total)
prefix = "{progress}{message}".format(
progress=progress,
message=msg)
truncate_width = ui.printer_width() - 3
justified = prefix.ljust(ui.printer_width(), ".")
if truncate and len(justified) > truncate_width:
justified = justified[:truncate_width] + '...'
if execution_time is None:
status_time = ""
else:
status_time = " in {execution_time:0.2f}s".format(
execution_time=execution_time)
output = "{justified} [{status}{status_time}]".format(
justified=justified, status=status, status_time=status_time)
return output
def _pluralize(string: Union[str, NodeType]) -> str:
try:
convert = NodeType(string)
except ValueError:
return f'{string}s'
else:
return convert.pluralize()
def pluralize(count, string: Union[str, NodeType]):
pluralized: str = str(string)
if count != 1:
pluralized = _pluralize(string)
return f'{count} {pluralized}'
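A quick usage sketch of the two helpers above; the exact dot padding depends on the configured printer width:

```python
print(format_fancy_output_line("OK created view model my_model", "SUCCESS", 1, 5, 0.12))
# roughly: "1 of 5 OK created view model my_model.......... [SUCCESS in 0.12s]"
print(pluralize(1, "model"))   # -> "1 model"
print(pluralize(2, "model"))   # -> "2 models"
```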


@@ -0,0 +1,383 @@
from colorama import Style
from datetime import datetime
import dbt.events.functions as this # don't worry I hate it too.
from dbt.events.base_types import Cli, Event, File, ShowException, NodeInfo, Cache
from dbt.events.types import EventBufferFull, T_Event, MainReportVersion, EmptyLine
import dbt.flags as flags
# TODO this will need to move eventually
from dbt.logger import SECRET_ENV_PREFIX, make_log_dir_if_missing, GLOBAL_LOGGER
import json
import io
from io import StringIO, TextIOWrapper
import logbook
import logging
from logging import Logger
import sys
from logging.handlers import RotatingFileHandler
import os
import uuid
import threading
from typing import Any, Callable, Dict, List, Optional, Union
import dataclasses
from collections import deque
# create the global event history buffer with the default max size (10k)
# python 3.7 doesn't support type hints on globals, but mypy requires them. hence the ignore.
# TODO the flags module has not yet been resolved when this is created
global EVENT_HISTORY
EVENT_HISTORY = deque(maxlen=flags.EVENT_BUFFER_SIZE) # type: ignore
# create the global file logger with no configuration
global FILE_LOG
FILE_LOG = logging.getLogger('default_file')
null_handler = logging.NullHandler()
FILE_LOG.addHandler(null_handler)
# set up logger to go to stdout with defaults
# setup_event_logger will be called once args have been parsed
global STDOUT_LOG
STDOUT_LOG = logging.getLogger('default_stdout')
STDOUT_LOG.setLevel(logging.INFO)
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setLevel(logging.INFO)
STDOUT_LOG.addHandler(stdout_handler)
format_color = True
format_json = False
invocation_id: Optional[str] = None
def setup_event_logger(log_path, level_override=None):
# flags have been resolved, and log_path is known
global EVENT_HISTORY
EVENT_HISTORY = deque(maxlen=flags.EVENT_BUFFER_SIZE) # type: ignore
make_log_dir_if_missing(log_path)
this.format_json = flags.LOG_FORMAT == 'json'
# USE_COLORS can be None if the app just started and the cli flags
# haven't been applied yet
this.format_color = True if flags.USE_COLORS else False
# TODO this default should live somewhere better
log_dest = os.path.join(log_path, 'dbt.log')
level = level_override or (logging.DEBUG if flags.DEBUG else logging.INFO)
# overwrite the STDOUT_LOG logger with the configured one
this.STDOUT_LOG = logging.getLogger('configured_std_out')
this.STDOUT_LOG.setLevel(level)
FORMAT = "%(message)s"
stdout_passthrough_formatter = logging.Formatter(fmt=FORMAT)
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(stdout_passthrough_formatter)
stdout_handler.setLevel(level)
# clear existing stdout TextIOWrapper stream handlers
this.STDOUT_LOG.handlers = [
h for h in this.STDOUT_LOG.handlers
if not (hasattr(h, 'stream') and isinstance(h.stream, TextIOWrapper)) # type: ignore
]
this.STDOUT_LOG.addHandler(stdout_handler)
# overwrite the FILE_LOG logger with the configured one
this.FILE_LOG = logging.getLogger('configured_file')
this.FILE_LOG.setLevel(logging.DEBUG) # always debug regardless of user input
file_passthrough_formatter = logging.Formatter(fmt=FORMAT)
file_handler = RotatingFileHandler(
filename=log_dest,
encoding='utf8',
maxBytes=10 * 1024 * 1024, # 10 mb
backupCount=5
)
file_handler.setFormatter(file_passthrough_formatter)
file_handler.setLevel(logging.DEBUG) # always debug regardless of user input
this.FILE_LOG.handlers.clear()
this.FILE_LOG.addHandler(file_handler)
# used for integration tests
def capture_stdout_logs() -> StringIO:
capture_buf = io.StringIO()
stdout_capture_handler = logging.StreamHandler(capture_buf)
stdout_capture_handler.setLevel(logging.DEBUG)
this.STDOUT_LOG.addHandler(stdout_capture_handler)
return capture_buf
# used for integration tests
def stop_capture_stdout_logs() -> None:
this.STDOUT_LOG.handlers = [
h for h in this.STDOUT_LOG.handlers
if not (hasattr(h, 'stream') and isinstance(h.stream, StringIO)) # type: ignore
]
def env_secrets() -> List[str]:
return [
v for k, v in os.environ.items()
if k.startswith(SECRET_ENV_PREFIX)
]
def scrub_secrets(msg: str, secrets: List[str]) -> str:
scrubbed = msg
for secret in secrets:
scrubbed = scrubbed.replace(secret, "*****")
return scrubbed
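A minimal sketch of the scrubbing behavior, assuming `SECRET_ENV_PREFIX` is the `DBT_ENV_SECRET_` prefix used by the existing logger module:

```python
import os

os.environ["DBT_ENV_SECRET_GIT_TOKEN"] = "abc123"  # hypothetical secret
msg = "cloning https://abc123@github.com/org/repo.git"
print(scrub_secrets(msg, env_secrets()))
# -> "cloning https://*****@github.com/org/repo.git"
```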
# returns a dictionary representation of the event fields. You must specify which of the
# available messages you would like to use (i.e. - e.message, e.cli_msg(), e.file_msg())
# used for constructing json formatted events. includes secrets which must be scrubbed at
# the usage site.
def event_to_serializable_dict(
e: T_Event, ts_fn: Callable[[datetime], str],
msg_fn: Callable[[T_Event], str]
) -> Dict[str, Any]:
data = dict()
node_info = dict()
log_line = dict()
try:
log_line = dataclasses.asdict(e, dict_factory=type(e).asdict)
except AttributeError:
event_type = type(e).__name__
raise Exception( # TODO this may hang async threads
f"type {event_type} is not serializable to json."
f" First make sure that the call sites for {event_type} match the type hints"
f" and if they do, you can override the dataclass method `asdict` in {event_type} in"
" types.py to define your own serialization function to a dictionary of valid json"
" types"
)
if isinstance(e, NodeInfo):
node_info = dataclasses.asdict(e.get_node_info())
for field, value in log_line.items(): # type: ignore[attr-defined]
if field not in ["code", "report_node_data"]:
data[field] = value
event_dict = {
'type': 'log_line',
'log_version': e.log_version,
'ts': ts_fn(e.get_ts()),
'pid': e.get_pid(),
'msg': msg_fn(e),
'level': e.level_tag(),
'data': data,
'invocation_id': e.get_invocation_id(),
'thread_name': e.get_thread_name(),
'node_info': node_info,
'code': e.code
}
return event_dict
# translates an Event to a completely formatted text-based log line
# you have to specify which message you want. (i.e. - e.message, e.cli_msg(), e.file_msg())
# type hinting everything as strings so we don't get any unintentional string conversions via str()
def create_info_text_log_line(e: T_Event, msg_fn: Callable[[T_Event], str]) -> str:
color_tag: str = '' if this.format_color else Style.RESET_ALL
ts: str = e.get_ts().strftime("%H:%M:%S")
scrubbed_msg: str = scrub_secrets(msg_fn(e), env_secrets())
log_line: str = f"{color_tag}{ts} {scrubbed_msg}"
return log_line
def create_debug_text_log_line(e: T_Event, msg_fn: Callable[[T_Event], str]) -> str:
log_line: str = ''
# Create a separator if this is the beginning of an invocation
if type(e) == MainReportVersion:
separator = 30 * '='
log_line = f'\n\n{separator} {e.get_ts()} | {get_invocation_id()} {separator}\n'
color_tag: str = '' if this.format_color else Style.RESET_ALL
ts: str = e.get_ts().strftime("%H:%M:%S.%f")
scrubbed_msg: str = scrub_secrets(msg_fn(e), env_secrets())
level: str = e.level_tag() if len(e.level_tag()) == 5 else f"{e.level_tag()} "
thread = ''
if threading.current_thread().getName():
thread_name = threading.current_thread().getName()
thread_name = thread_name[:10]
thread_name = thread_name.ljust(10, ' ')
thread = f' [{thread_name}]:'
log_line = log_line + f"{color_tag}{ts} [{level}]{thread} {scrubbed_msg}"
return log_line
# translates an Event to a completely formatted json log line
# you have to specify which message you want. (i.e. - e.message(), e.cli_msg(), e.file_msg())
def create_json_log_line(e: T_Event, msg_fn: Callable[[T_Event], str]) -> Optional[str]:
if type(e) == EmptyLine:
return None # will not be sent to logger
# using preformatted string instead of formatting it here to be extra careful about timezone
values = event_to_serializable_dict(e, lambda _: e.get_ts_rfc3339(), lambda x: msg_fn(x))
raw_log_line = json.dumps(values, sort_keys=True)
return scrub_secrets(raw_log_line, env_secrets())
# calls create_info_text_log_line(), create_debug_text_log_line(), or create_json_log_line()
# according to logger config
def create_log_line(
e: T_Event,
msg_fn: Callable[[T_Event], str],
file_output=False
) -> Optional[str]:
if this.format_json:
return create_json_log_line(e, msg_fn) # json output, both console and file
elif file_output is True or flags.DEBUG:
return create_debug_text_log_line(e, msg_fn) # default file output
else:
return create_info_text_log_line(e, msg_fn) # console output
# allows for reuse of this obnoxious if else tree.
# do not use for exceptions; it doesn't pass along exc_info, stack_info, or extra
def send_to_logger(l: Union[Logger, logbook.Logger], level_tag: str, log_line: str):
if not log_line:
return
if level_tag == 'test':
# TODO after implementing #3977 send to new test level
l.debug(log_line)
elif level_tag == 'debug':
l.debug(log_line)
elif level_tag == 'info':
l.info(log_line)
elif level_tag == 'warn':
l.warning(log_line)
elif level_tag == 'error':
l.error(log_line)
else:
raise AssertionError(
f"While attempting to log {log_line}, encountered the unhandled level: {level_tag}"
)
def send_exc_to_logger(
l: Logger,
level_tag: str,
log_line: str,
exc_info=True,
stack_info=False,
extra=False
):
if level_tag == 'test':
# TODO after implementing #3977 send to new test level
l.debug(
log_line,
exc_info=exc_info,
stack_info=stack_info,
extra=extra
)
elif level_tag == 'debug':
l.debug(
log_line,
exc_info=exc_info,
stack_info=stack_info,
extra=extra
)
elif level_tag == 'info':
l.info(
log_line,
exc_info=exc_info,
stack_info=stack_info,
extra=extra
)
elif level_tag == 'warn':
l.warning(
log_line,
exc_info=exc_info,
stack_info=stack_info,
extra=extra
)
elif level_tag == 'error':
l.error(
log_line,
exc_info=exc_info,
stack_info=stack_info,
extra=extra
)
else:
raise AssertionError(
f"While attempting to log {log_line}, encountered the unhandled level: {level_tag}"
)
# an alternative to fire_event which only creates and logs the event value
# if the condition is met. Does nothing otherwise.
def fire_event_if(conditional: bool, lazy_e: Callable[[], Event]) -> None:
if conditional:
fire_event(lazy_e())
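A sketch of the intended use: the lambda defers construction of a potentially expensive event until the condition is known to hold. The call site below is hypothetical and borrows a test event type defined later in this diff; `expensive_debug_dump` is a made-up helper.

```python
fire_event_if(
    flags.LOG_CACHE_EVENTS,
    lambda: IntegrationTestDebug(msg=expensive_debug_dump()),  # expensive_debug_dump is hypothetical
)
```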
# top-level method for accessing the new eventing system
# this is where all the side effects happen branched by event type
# (i.e. - mutating the event history, printing to stdout, logging
# to files, etc.)
def fire_event(e: Event) -> None:
# skip logs when `--log-cache-events` is not passed
if isinstance(e, Cache) and not flags.LOG_CACHE_EVENTS:
return
# if and only if the event history deque will be completely filled by this event
# fire warning that old events are now being dropped
global EVENT_HISTORY
if len(EVENT_HISTORY) == (flags.EVENT_BUFFER_SIZE - 1):
EVENT_HISTORY.append(e)
fire_event(EventBufferFull())
else:
EVENT_HISTORY.append(e)
# backwards compatibility for plugins that require old logger (dbt-rpc)
if flags.ENABLE_LEGACY_LOGGER:
# using Event::message because the legacy logger didn't differentiate messages by
# destination
log_line = create_log_line(e, msg_fn=lambda x: x.message())
if log_line:
send_to_logger(GLOBAL_LOGGER, e.level_tag(), log_line)
return # exit the function to avoid using the current logger as well
# always logs debug level regardless of user input
if isinstance(e, File):
log_line = create_log_line(e, msg_fn=lambda x: x.file_msg(), file_output=True)
# doesn't send exceptions to exception logger
if log_line:
send_to_logger(FILE_LOG, level_tag=e.level_tag(), log_line=log_line)
if isinstance(e, Cli):
# explicitly checking the debug flag here so that potentially expensive-to-construct
# log messages are not constructed if debug messages are never shown.
if e.level_tag() == 'debug' and not flags.DEBUG:
return # eat the message in case it was one of the expensive ones
log_line = create_log_line(e, msg_fn=lambda x: x.cli_msg())
if log_line:
if not isinstance(e, ShowException):
send_to_logger(STDOUT_LOG, level_tag=e.level_tag(), log_line=log_line)
# Cli and ShowException
else:
send_exc_to_logger(
STDOUT_LOG,
level_tag=e.level_tag(),
log_line=log_line,
exc_info=e.exc_info,
stack_info=e.stack_info,
extra=e.extra
)
def get_invocation_id() -> str:
global invocation_id
if invocation_id is None:
invocation_id = str(uuid.uuid4())
return invocation_id
def set_invocation_id() -> None:
# This is primarily for setting the invocation_id for separate
# commands in the dbt servers. It shouldn't be necessary for the CLI.
global invocation_id
invocation_id = str(uuid.uuid4())

core/dbt/events/stubs.py

@@ -0,0 +1,72 @@
from typing import (
Any,
List,
NamedTuple,
Optional,
Dict,
)
# N.B.:
# These stubs were autogenerated by stubgen and then hacked
# to pieces to ensure we had something other than "Any" types
# where using external classes to instantiate event subclasses
# in events/types.py.
#
# This goes away when we turn mypy on for everything.
#
# Don't trust them too much!
class _ReferenceKey(NamedTuple):
database: Any
schema: Any
identifier: Any
class _CachedRelation:
referenced_by: Any
inner: Any
class BaseRelation:
path: Any
type: Optional[Any]
quote_character: str
include_policy: Any
quote_policy: Any
dbt_created: bool
class InformationSchema(BaseRelation):
information_schema_view: Optional[str]
class CompiledNode():
compiled_sql: Optional[str]
extra_ctes_injected: bool
extra_ctes: List[Any]
relation_name: Optional[str]
class CompiledModelNode(CompiledNode):
resource_type: Any
class ParsedModelNode():
resource_type: Any
class ParsedHookNode():
resource_type: Any
index: Optional[int]
class RunResult():
status: str
timing: List[Any]
thread_id: str
execution_time: float
adapter_response: Dict[str, Any]
message: Optional[str]
failures: Optional[int]
node: Any


@@ -0,0 +1,81 @@
from dataclasses import dataclass
from .types import (
InfoLevel,
DebugLevel,
WarnLevel,
ErrorLevel,
ShowException,
Cli
)
# Keeping log messages for testing separate since they are used for debugging.
# Reuse the existing messages when adding logs to tests.
@dataclass
class IntegrationTestInfo(InfoLevel, Cli):
msg: str
code: str = "T001"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class IntegrationTestDebug(DebugLevel, Cli):
msg: str
code: str = "T002"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class IntegrationTestWarn(WarnLevel, Cli):
msg: str
code: str = "T003"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class IntegrationTestError(ErrorLevel, Cli):
msg: str
code: str = "T004"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class IntegrationTestException(ShowException, ErrorLevel, Cli):
msg: str
code: str = "T005"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class UnitTestInfo(InfoLevel, Cli):
msg: str
code: str = "T006"
def message(self) -> str:
return f"Unit Test: {self.msg}"
# since mypy doesn't run on every file we need to suggest to mypy that every
# class gets instantiated. But we don't actually want to run this code.
# making the conditional `if False` causes mypy to skip it as dead code so
# we need to skirt around that by computing something it doesn't check statically.
#
# TODO remove these lines once we run mypy everywhere.
if 1 == 0:
IntegrationTestInfo(msg='')
IntegrationTestDebug(msg='')
IntegrationTestWarn(msg='')
IntegrationTestError(msg='')
IntegrationTestException(msg='')
UnitTestInfo(msg='')

core/dbt/events/types.py

File diff suppressed because it is too large


@@ -2,7 +2,8 @@ import builtins
import functools
from typing import NoReturn, Optional, Mapping, Any
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event, scrub_secrets, env_secrets
from dbt.events.types import GeneralWarningMsg, GeneralWarningException
from dbt.node_types import NodeType
from dbt import flags
from dbt.ui import line_wrap_message, warning_tag
@@ -52,7 +53,7 @@ class RuntimeException(RuntimeError, Exception):
def __init__(self, msg, node=None):
self.stack = []
self.node = node
self.msg = msg
self.msg = scrub_secrets(msg, env_secrets())
def add_node(self, node=None):
if node is not None and node is not self.node:
@@ -242,6 +243,15 @@ class ValidationException(RuntimeException):
MESSAGE = "Validation Error"
class ParsingException(RuntimeException):
CODE = 10015
MESSAGE = "Parsing Error"
@property
def type(self):
return 'Parsing'
class JSONValidationException(ValidationException):
def __init__(self, typename, errors):
self.typename = typename
@@ -444,12 +454,38 @@ def raise_compiler_error(msg, node=None) -> NoReturn:
raise CompilationException(msg, node)
def raise_parsing_error(msg, node=None) -> NoReturn:
raise ParsingException(msg, node)
def raise_database_error(msg, node=None) -> NoReturn:
raise DatabaseException(msg, node)
def raise_dependency_error(msg) -> NoReturn:
raise DependencyException(msg)
raise DependencyException(scrub_secrets(msg, env_secrets()))
def raise_git_cloning_error(error: CommandResultError) -> NoReturn:
error.cmd = scrub_secrets(str(error.cmd), env_secrets())
raise error
def raise_git_cloning_problem(repo) -> NoReturn:
repo = scrub_secrets(repo, env_secrets())
msg = '''\
Something went wrong while cloning {}
Check the debug logs for more information
'''
raise RuntimeException(msg.format(repo))
def disallow_secret_env_var(env_var_name) -> NoReturn:
"""Raise an error when a secret env var is referenced outside allowed
rendering contexts"""
msg = ("Secret env vars are allowed only in profiles.yml or packages.yml. "
"Found '{env_var_name}' referenced elsewhere.")
raise_parsing_error(msg.format(env_var_name=env_var_name))
def invalid_type_error(method_name, arg_name, got_value, expected_type,
@@ -667,9 +703,9 @@ def missing_materialization(model, adapter_type):
def bad_package_spec(repo, spec, error_message):
raise InternalException(
"Error checking out spec='{}' for repo {}\n{}".format(
spec, repo, error_message))
msg = "Error checking out spec='{}' for repo {}\n{}".format(spec, repo, error_message)
raise InternalException(scrub_secrets(msg, env_secrets()))
def raise_cache_inconsistent(message):
@@ -738,6 +774,10 @@ def system_error(operation_name):
class ConnectionException(Exception):
"""
There was a problem with the connection that returned a bad response,
timed out, or resulted in a file that is corrupt.
"""
pass
@@ -974,21 +1014,16 @@ def raise_duplicate_alias(
def warn_or_error(msg, node=None, log_fmt=None):
if flags.WARN_ERROR:
raise_compiler_error(msg, node)
raise_compiler_error(scrub_secrets(msg, env_secrets()), node)
else:
if log_fmt is not None:
msg = log_fmt.format(msg)
logger.warning(msg)
fire_event(GeneralWarningMsg(msg=msg, log_fmt=log_fmt))
def warn_or_raise(exc, log_fmt=None):
if flags.WARN_ERROR:
raise exc
else:
msg = str(exc)
if log_fmt is not None:
msg = log_fmt.format(msg)
logger.warning(msg)
fire_event(GeneralWarningException(exc=exc, log_fmt=log_fmt))
def warn(msg, node=None):


@@ -17,7 +17,6 @@ PROFILES_DIR = os.path.expanduser(
STRICT_MODE = False # Only here for backwards compatibility
FULL_REFRESH = False # subcommand
STORE_FAILURES = False # subcommand
GREEDY = None # subcommand
# Global CLI commands
USE_EXPERIMENTAL_PARSER = None
@@ -33,6 +32,9 @@ FAIL_FAST = None
SEND_ANONYMOUS_USAGE_STATS = None
PRINTER_WIDTH = 80
WHICH = None
INDIRECT_SELECTION = None
LOG_CACHE_EVENTS = None
EVENT_BUFFER_SIZE = 100000
# Global CLI defaults. These flags are set from three places:
# CLI args, environment variables, and user_config (profiles.yml).
@@ -50,7 +52,10 @@ flag_defaults = {
"VERSION_CHECK": True,
"FAIL_FAST": False,
"SEND_ANONYMOUS_USAGE_STATS": True,
"PRINTER_WIDTH": 80
"PRINTER_WIDTH": 80,
"INDIRECT_SELECTION": 'eager',
"LOG_CACHE_EVENTS": False,
"EVENT_BUFFER_SIZE": 100000
}
@@ -81,6 +86,7 @@ def env_set_path(key: str) -> Optional[Path]:
MACRO_DEBUGGING = env_set_truthy('DBT_MACRO_DEBUGGING')
DEFER_MODE = env_set_truthy('DBT_DEFER_TO_STATE')
ARTIFACT_STATE_PATH = env_set_path('DBT_ARTIFACT_STATE_PATH')
ENABLE_LEGACY_LOGGER = env_set_truthy('DBT_ENABLE_LEGACY_LOGGER')
def _get_context():
@@ -95,15 +101,14 @@ MP_CONTEXT = _get_context()
def set_from_args(args, user_config):
global STRICT_MODE, FULL_REFRESH, WARN_ERROR, \
USE_EXPERIMENTAL_PARSER, STATIC_PARSER, WRITE_JSON, PARTIAL_PARSE, \
USE_COLORS, STORE_FAILURES, PROFILES_DIR, DEBUG, LOG_FORMAT, GREEDY, \
USE_COLORS, STORE_FAILURES, PROFILES_DIR, DEBUG, LOG_FORMAT, INDIRECT_SELECTION, \
VERSION_CHECK, FAIL_FAST, SEND_ANONYMOUS_USAGE_STATS, PRINTER_WIDTH, \
WHICH
WHICH, LOG_CACHE_EVENTS, EVENT_BUFFER_SIZE
STRICT_MODE = False # backwards compatibility
# cli args without user_config or env var option
FULL_REFRESH = getattr(args, 'full_refresh', FULL_REFRESH)
STORE_FAILURES = getattr(args, 'store_failures', STORE_FAILURES)
GREEDY = getattr(args, 'greedy', GREEDY)
WHICH = getattr(args, 'which', WHICH)
# global cli flags with env var and user_config alternatives
@@ -120,6 +125,9 @@ def set_from_args(args, user_config):
FAIL_FAST = get_flag_value('FAIL_FAST', args, user_config)
SEND_ANONYMOUS_USAGE_STATS = get_flag_value('SEND_ANONYMOUS_USAGE_STATS', args, user_config)
PRINTER_WIDTH = get_flag_value('PRINTER_WIDTH', args, user_config)
INDIRECT_SELECTION = get_flag_value('INDIRECT_SELECTION', args, user_config)
LOG_CACHE_EVENTS = get_flag_value('LOG_CACHE_EVENTS', args, user_config)
EVENT_BUFFER_SIZE = get_flag_value('EVENT_BUFFER_SIZE', args, user_config)
def get_flag_value(flag, args, user_config):
@@ -132,7 +140,13 @@ def get_flag_value(flag, args, user_config):
if env_value is not None and env_value != '':
env_value = env_value.lower()
# non Boolean values
if flag in ['LOG_FORMAT', 'PRINTER_WIDTH', 'PROFILES_DIR']:
if flag in [
'LOG_FORMAT',
'PRINTER_WIDTH',
'PROFILES_DIR',
'INDIRECT_SELECTION',
'EVENT_BUFFER_SIZE'
]:
flag_value = env_value
else:
flag_value = env_set_bool(env_value)
@@ -140,7 +154,7 @@ def get_flag_value(flag, args, user_config):
flag_value = getattr(user_config, lc_flag)
else:
flag_value = flag_defaults[flag]
if flag == 'PRINTER_WIDTH': # printer_width must be an int or it hangs
if flag in ['PRINTER_WIDTH', 'EVENT_BUFFER_SIZE']: # must be ints
flag_value = int(flag_value)
if flag == 'PROFILES_DIR':
flag_value = os.path.abspath(flag_value)
@@ -163,4 +177,7 @@ def get_flag_dict():
"fail_fast": FAIL_FAST,
"send_anonymous_usage_stats": SEND_ANONYMOUS_USAGE_STATS,
"printer_width": PRINTER_WIDTH,
"indirect_selection": INDIRECT_SELECTION,
"log_cache_events": LOG_CACHE_EVENTS,
"event_buffer_size": EVENT_BUFFER_SIZE
}


@@ -16,16 +16,18 @@ from .selector_spec import (
SelectionIntersection,
SelectionDifference,
SelectionCriteria,
IndirectSelection
)
INTERSECTION_DELIMITER = ','
DEFAULT_INCLUDES: List[str] = ['fqn:*', 'source:*', 'exposure:*']
DEFAULT_INCLUDES: List[str] = ['fqn:*', 'source:*', 'exposure:*', 'metric:*']
DEFAULT_EXCLUDES: List[str] = []
def parse_union(
components: List[str], expect_exists: bool, greedy: bool = False
components: List[str], expect_exists: bool,
indirect_selection: IndirectSelection = IndirectSelection.Eager
) -> SelectionUnion:
# turn ['a b', 'c'] -> ['a', 'b', 'c']
raw_specs = itertools.chain.from_iterable(
@@ -36,7 +38,7 @@ def parse_union(
# ['a', 'b', 'c,d'] -> union('a', 'b', intersection('c', 'd'))
for raw_spec in raw_specs:
intersection_components: List[SelectionSpec] = [
SelectionCriteria.from_single_spec(part, greedy=greedy)
SelectionCriteria.from_single_spec(part, indirect_selection=indirect_selection)
for part in raw_spec.split(INTERSECTION_DELIMITER)
]
union_components.append(SelectionIntersection(
@@ -52,21 +54,36 @@ def parse_union(
def parse_union_from_default(
raw: Optional[List[str]], default: List[str], greedy: bool = False
raw: Optional[List[str]], default: List[str],
indirect_selection: IndirectSelection = IndirectSelection.Eager
) -> SelectionUnion:
components: List[str]
expect_exists: bool
if raw is None:
return parse_union(components=default, expect_exists=False, greedy=greedy)
return parse_union(
components=default,
expect_exists=False,
indirect_selection=indirect_selection)
else:
return parse_union(components=raw, expect_exists=True, greedy=greedy)
return parse_union(
components=raw,
expect_exists=True,
indirect_selection=indirect_selection)
def parse_difference(
include: Optional[List[str]], exclude: Optional[List[str]]
) -> SelectionDifference:
included = parse_union_from_default(include, DEFAULT_INCLUDES, greedy=bool(flags.GREEDY))
excluded = parse_union_from_default(exclude, DEFAULT_EXCLUDES, greedy=True)
included = parse_union_from_default(
include,
DEFAULT_INCLUDES,
indirect_selection=IndirectSelection(flags.INDIRECT_SELECTION)
)
excluded = parse_union_from_default(
exclude,
DEFAULT_EXCLUDES,
indirect_selection=IndirectSelection.Eager)
return SelectionDifference(components=[included, excluded])
@@ -148,7 +165,7 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
union_def_parts = _get_list_dicts(definition, 'union')
include, exclude = _parse_include_exclude_subdefs(union_def_parts)
union = SelectionUnion(components=include, greedy_warning=False)
union = SelectionUnion(components=include)
if exclude is None:
union.raw = definition
@@ -156,8 +173,7 @@ def parse_union_definition(definition: Dict[str, Any]) -> SelectionSpec:
else:
return SelectionDifference(
components=[union, exclude],
raw=definition,
greedy_warning=False
raw=definition
)
@@ -166,7 +182,7 @@ def parse_intersection_definition(
) -> SelectionSpec:
intersection_def_parts = _get_list_dicts(definition, 'intersection')
include, exclude = _parse_include_exclude_subdefs(intersection_def_parts)
intersection = SelectionIntersection(components=include, greedy_warning=False)
intersection = SelectionIntersection(components=include)
if exclude is None:
intersection.raw = definition
@@ -174,8 +190,7 @@ def parse_intersection_definition(
else:
return SelectionDifference(
components=[intersection, exclude],
raw=definition,
greedy_warning=False
raw=definition
)
@@ -209,7 +224,7 @@ def parse_dict_definition(definition: Dict[str, Any]) -> SelectionSpec:
if diff_arg is None:
return base
else:
return SelectionDifference(components=[base, diff_arg], greedy_warning=False)
return SelectionDifference(components=[base, diff_arg])
def parse_from_definition(


@@ -1,6 +1,7 @@
from typing import (
Set, Iterable, Iterator, Optional, NewType
)
from itertools import product
import networkx as nx # type: ignore
from dbt.exceptions import InternalException
@@ -77,17 +78,26 @@ class Graph:
successors.update(self.graph.successors(node))
return successors
def get_subset_graph(self, selected: Iterable[UniqueId]) -> 'Graph':
def get_subset_graph(self, selected: Iterable[UniqueId]) -> "Graph":
"""Create and return a new graph that is a shallow copy of the graph,
but with only the nodes in include_nodes. Transitive edges across
removed nodes are preserved as explicit new edges.
"""
new_graph = nx.algorithms.transitive_closure(self.graph)
new_graph = self.graph.copy()
include_nodes = set(selected)
for node in self:
if node not in include_nodes:
source_nodes = [x for x, _ in new_graph.in_edges(node)]
target_nodes = [x for _, x in new_graph.out_edges(node)]
new_edges = product(source_nodes, target_nodes)
non_cyclic_new_edges = [
(source, target) for source, target in new_edges if source != target
] # removes cyclic refs
new_graph.add_edges_from(non_cyclic_new_edges)
new_graph.remove_node(node)
for node in include_nodes:
@@ -96,6 +106,7 @@ class Graph:
"Couldn't find model '{}' -- does it exist or is "
"it disabled?".format(node)
)
return Graph(new_graph)
def subgraph(self, nodes: Iterable[UniqueId]) -> 'Graph':
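This hunk swaps the old transitive-closure copy for explicit edge rewiring around each removed node, as the docstring and the "removes cyclic refs" comment describe. A minimal stand-alone sketch of that idea with plain networkx, outside of dbt (the three node names are made up):

# Sketch of the edge-rewiring above: when node "b" is dropped from a -> b -> c,
# an explicit a -> c edge keeps the lineage intact.
from itertools import product

import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("a", "b"), ("b", "c")])

subset = g.copy()
node = "b"  # node being excluded from the selection
sources = [s for s, _ in subset.in_edges(node)]
targets = [t for _, t in subset.out_edges(node)]
# skip self-loops, matching the "removes cyclic refs" filter in the hunk above
subset.add_edges_from((s, t) for s, t in product(sources, targets) if s != t)
subset.remove_node(node)

assert list(subset.edges()) == [("a", "c")]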

View File

@@ -5,7 +5,7 @@ from queue import PriorityQueue
from typing import Dict, Set, List, Generator, Optional
from .graph import UniqueId
from dbt.contracts.graph.parsed import ParsedSourceDefinition, ParsedExposure
from dbt.contracts.graph.parsed import ParsedSourceDefinition, ParsedExposure, ParsedMetric
from dbt.contracts.graph.compiled import GraphMemberNode
from dbt.contracts.graph.manifest import Manifest
from dbt.node_types import NodeType
@@ -47,8 +47,8 @@ class GraphQueue:
node = self.manifest.expect(node_id)
if node.resource_type != NodeType.Model:
return False
# must be a Model - tell mypy this won't be a Source or Exposure
assert not isinstance(node, (ParsedSourceDefinition, ParsedExposure))
# must be a Model - tell mypy this won't be a Source or Exposure or Metric
assert not isinstance(node, (ParsedSourceDefinition, ParsedExposure, ParsedMetric))
if node.is_ephemeral:
return False
return True

View File

@@ -3,9 +3,10 @@ from typing import Set, List, Optional, Tuple
from .graph import Graph, UniqueId
from .queue import GraphQueue
from .selector_methods import MethodManager
from .selector_spec import SelectionCriteria, SelectionSpec
from .selector_spec import SelectionCriteria, SelectionSpec, IndirectSelection
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.events.functions import fire_event
from dbt.events.types import SelectorReportInvalidSelector
from dbt.node_types import NodeType
from dbt.exceptions import (
InternalException,
@@ -29,24 +30,6 @@ def alert_non_existence(raw_spec, nodes):
)
def alert_unused_nodes(raw_spec, node_names):
summary_nodes_str = ("\n - ").join(node_names[:3])
debug_nodes_str = ("\n - ").join(node_names)
and_more_str = f"\n - and {len(node_names) - 3} more" if len(node_names) > 4 else ""
summary_msg = (
f"\nSome tests were excluded because at least one parent is not selected. "
f"Use the --greedy flag to include them."
f"\n - {summary_nodes_str}{and_more_str}"
)
logger.info(summary_msg)
if len(node_names) > 4:
debug_msg = (
f"Full list of tests that were excluded:"
f"\n - {debug_nodes_str}"
)
logger.debug(debug_msg)
def can_select_indirectly(node):
"""If a node is not selected itself, but its parent(s) are, it may qualify
for indirect selection.
@@ -103,17 +86,17 @@ class NodeSelector(MethodManager):
try:
collected = self.select_included(nodes, spec)
except InvalidSelectorException:
valid_selectors = ", ".join(self.SELECTOR_METHODS)
logger.info(
f"The '{spec.method}' selector specified in {spec.raw} is "
f"invalid. Must be one of [{valid_selectors}]"
)
fire_event(SelectorReportInvalidSelector(
selector_methods=self.SELECTOR_METHODS,
spec_method=spec.method,
raw_spec=spec.raw
))
return set(), set()
neighbors = self.collect_specified_neighbors(spec, collected)
direct_nodes, indirect_nodes = self.expand_selection(
selected=(collected | neighbors),
greedy=spec.greedy
indirect_selection=spec.indirect_selection
)
return direct_nodes, indirect_nodes
@@ -185,6 +168,8 @@ class NodeSelector(MethodManager):
return source.config.enabled
elif unique_id in self.manifest.exposures:
return True
elif unique_id in self.manifest.metrics:
return True
node = self.manifest.nodes[unique_id]
return not node.empty and node.config.enabled
@@ -202,6 +187,8 @@ class NodeSelector(MethodManager):
node = self.manifest.sources[unique_id]
elif unique_id in self.manifest.exposures:
node = self.manifest.exposures[unique_id]
elif unique_id in self.manifest.metrics:
node = self.manifest.metrics[unique_id]
else:
raise InternalException(
f'Node {unique_id} not found in the manifest!'
@@ -217,21 +204,21 @@ class NodeSelector(MethodManager):
}
def expand_selection(
self, selected: Set[UniqueId], greedy: bool = False
self, selected: Set[UniqueId],
indirect_selection: IndirectSelection = IndirectSelection.Eager
) -> Tuple[Set[UniqueId], Set[UniqueId]]:
# Test selection can expand to include an implicitly/indirectly selected test.
# In this way, `dbt test -m model_a` also includes tests that directly depend on `model_a`.
# Expansion has two modes, GREEDY and NOT GREEDY.
# Test selection by default expands to include implicitly/indirectly selected tests.
# `dbt test -m model_a` also includes tests that directly depend on `model_a`.
# Expansion has two modes, EAGER and CAUTIOUS.
#
# GREEDY mode: If ANY parent is selected, select the test. We use this for EXCLUSION.
# EAGER mode: If ANY parent is selected, select the test.
#
# NOT GREEDY mode:
# CAUTIOUS mode:
# - If ALL parents are selected, select the test.
# - If ANY parent is missing, return it separately. We'll keep it around
# for later and see if its other parents show up.
# We use this for INCLUSION.
# Users can also opt in to inclusive GREEDY mode by passing --greedy flag,
# or by specifying `greedy: true` in a yaml selector
# Users can opt out of inclusive EAGER mode by passing the --indirect-selection cautious
# CLI argument or by specifying `indirect_selection: cautious` in a yaml selector
direct_nodes = set(selected)
indirect_nodes = set()
@@ -241,7 +228,10 @@ class NodeSelector(MethodManager):
node = self.manifest.nodes[unique_id]
if can_select_indirectly(node):
# should we add it in directly?
if greedy or set(node.depends_on.nodes) <= set(selected):
if (
indirect_selection == IndirectSelection.Eager or
set(node.depends_on.nodes) <= set(selected)
):
direct_nodes.add(unique_id)
# if not:
else:
@@ -255,6 +245,10 @@ class NodeSelector(MethodManager):
# Check tests previously selected indirectly to see if ALL their
# parents are now present.
# performance: if identical, skip the processing below
if set(direct_nodes) == set(indirect_nodes):
return direct_nodes
selected = set(direct_nodes)
for unique_id in indirect_nodes:
@@ -278,16 +272,6 @@ class NodeSelector(MethodManager):
selected_nodes, indirect_only = self.select_nodes(spec)
filtered_nodes = self.filter_selection(selected_nodes)
if indirect_only:
filtered_unused_nodes = self.filter_selection(indirect_only)
if filtered_unused_nodes and spec.greedy_warning:
# log anything that didn't make the cut
unused_node_names = []
for unique_id in filtered_unused_nodes:
name = self.manifest.nodes[unique_id].name
unused_node_names.append(name)
alert_unused_nodes(spec, unused_node_names)
return filtered_nodes
def get_graph_queue(self, spec: SelectionSpec) -> GraphQueue:
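As a rough illustration of the EAGER/CAUTIOUS behavior described in the comments above, here is a toy Python sketch; the node names and the dict-based "manifest" are invented stand-ins for dbt's real Manifest and depends_on data:

# Toy illustration of eager vs. cautious indirect test selection (not dbt internals).
def expand_tests(selected, test_parents, cautious=False):
    """selected: set of selected node ids; test_parents: test id -> set of parent ids."""
    direct, indirect = set(selected), set()
    for test, parents in test_parents.items():
        if not parents & selected:               # no selected parent: never picked up indirectly
            continue
        if not cautious or parents <= selected:  # EAGER: any parent; CAUTIOUS: all parents
            direct.add(test)
        else:
            indirect.add(test)
    return direct, indirect

tests = {"relationships_a_to_b": {"model_a", "model_b"}}
print(expand_tests({"model_a"}, tests))                 # eager: the test rides along with model_a
print(expand_tests({"model_a"}, tests, cautious=True))  # cautious: held back until model_b is also selected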

View File

@@ -1,7 +1,7 @@
import abc
from itertools import chain
from pathlib import Path
from typing import Set, List, Dict, Iterator, Tuple, Any, Union, Type, Optional
from typing import Set, List, Dict, Iterator, Tuple, Any, Union, Type, Optional, Callable
from dbt.dataclass_schema import StrEnum
@@ -18,6 +18,7 @@ from dbt.contracts.graph.parsed import (
HasTestMetadata,
ParsedSingularTestNode,
ParsedExposure,
ParsedMetric,
ParsedGenericTestNode,
ParsedSourceDefinition,
)
@@ -45,6 +46,7 @@ class MethodName(StrEnum):
ResourceType = 'resource_type'
State = 'state'
Exposure = 'exposure'
Metric = 'metric'
Result = 'result'
@@ -72,7 +74,7 @@ def is_selected_node(fqn: List[str], node_selector: str):
return True
SelectorTarget = Union[ParsedSourceDefinition, ManifestNode, ParsedExposure]
SelectorTarget = Union[ParsedSourceDefinition, ManifestNode, ParsedExposure, ParsedMetric]
class SelectorMethod(metaclass=abc.ABCMeta):
@@ -119,13 +121,25 @@ class SelectorMethod(metaclass=abc.ABCMeta):
continue
yield unique_id, exposure
def metric_nodes(
self,
included_nodes: Set[UniqueId]
) -> Iterator[Tuple[UniqueId, ParsedMetric]]:
for key, metric in self.manifest.metrics.items():
unique_id = UniqueId(key)
if unique_id not in included_nodes:
continue
yield unique_id, metric
def all_nodes(
self,
included_nodes: Set[UniqueId]
) -> Iterator[Tuple[UniqueId, SelectorTarget]]:
yield from chain(self.parsed_nodes(included_nodes),
self.source_nodes(included_nodes),
self.exposure_nodes(included_nodes))
self.exposure_nodes(included_nodes),
self.metric_nodes(included_nodes))
def configurable_nodes(
self,
@@ -137,9 +151,10 @@ class SelectorMethod(metaclass=abc.ABCMeta):
def non_source_nodes(
self,
included_nodes: Set[UniqueId],
) -> Iterator[Tuple[UniqueId, Union[ParsedExposure, ManifestNode]]]:
) -> Iterator[Tuple[UniqueId, Union[ParsedExposure, ManifestNode, ParsedMetric]]]:
yield from chain(self.parsed_nodes(included_nodes),
self.exposure_nodes(included_nodes))
self.exposure_nodes(included_nodes),
self.metric_nodes(included_nodes))
@abc.abstractmethod
def search(
@@ -251,6 +266,33 @@ class ExposureSelectorMethod(SelectorMethod):
yield node
class MetricSelectorMethod(SelectorMethod):
def search(
self, included_nodes: Set[UniqueId], selector: str
) -> Iterator[UniqueId]:
parts = selector.split('.')
target_package = SELECTOR_GLOB
if len(parts) == 1:
target_name = parts[0]
elif len(parts) == 2:
target_package, target_name = parts
else:
msg = (
'Invalid metric selector value "{}". Metrics must be of '
'the form ${{metric_name}} or '
'${{metric_package.metric_name}}'
).format(selector)
raise RuntimeException(msg)
for node, real_node in self.metric_nodes(included_nodes):
if target_package not in (real_node.package_name, SELECTOR_GLOB):
continue
if target_name not in (real_node.name, SELECTOR_GLOB):
continue
yield node
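The package.name splitting above follows the same glob convention as the other selector methods. A small stand-alone sketch of the matching rule, assuming SELECTOR_GLOB is the usual '*' wildcard (the metric and package names are invented):

# Stand-alone sketch of the metric selector matching above (sample data invented).
SELECTOR_GLOB = "*"

def matches(selector, package_name, metric_name):
    parts = selector.split(".")
    if len(parts) == 1:
        target_package, target_name = SELECTOR_GLOB, parts[0]
    elif len(parts) == 2:
        target_package, target_name = parts
    else:
        raise ValueError(f'Invalid metric selector value "{selector}"')
    return (target_package in (package_name, SELECTOR_GLOB)
            and target_name in (metric_name, SELECTOR_GLOB))

print(matches("new_customers", "my_project", "new_customers"))                # True
print(matches("other_project.new_customers", "my_project", "new_customers"))  # False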
class PathSelectorMethod(SelectorMethod):
def search(
self, included_nodes: Set[UniqueId], selector: str
@@ -436,42 +478,28 @@ class StateSelectorMethod(SelectorMethod):
previous_macros = []
return self.recursively_check_macros_modified(node, previous_macros)
def check_modified(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
# TODO: check_modified_content and check_modified_macros seem a bit redundant
def check_modified_content(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
different_contents = not new.same_contents(old) # type: ignore
upstream_macro_change = self.check_macros_modified(new)
return different_contents or upstream_macro_change
def check_modified_body(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
if hasattr(new, "same_body"):
return not new.same_body(old) # type: ignore
else:
return False
def check_modified_configs(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
if hasattr(new, "same_config"):
return not new.same_config(old) # type: ignore
else:
return False
def check_modified_persisted_descriptions(
self, old: Optional[SelectorTarget], new: SelectorTarget
) -> bool:
if hasattr(new, "same_persisted_description"):
return not new.same_persisted_description(old) # type: ignore
else:
return False
def check_modified_relation(
self, old: Optional[SelectorTarget], new: SelectorTarget
) -> bool:
if hasattr(new, "same_database_representation"):
return not new.same_database_representation(old) # type: ignore
else:
return False
def check_modified_macros(self, _, new: SelectorTarget) -> bool:
return self.check_macros_modified(new)
@staticmethod
def check_modified_factory(
compare_method: str
) -> Callable[[Optional[SelectorTarget], SelectorTarget], bool]:
# get a function that compares two selector targets based on the compare method provided
def check_modified_things(old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
if hasattr(new, compare_method):
# when old body does not exist or old and new are not the same
return not old or not getattr(new, compare_method)(old) # type: ignore
else:
return False
return check_modified_things
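The factory above collapses the five near-identical check_modified_* methods into generated closures. A self-contained sketch of the same pattern (the Node class and its same_body method are illustrative stand-ins, not dbt's contracts):

# Illustrative sketch of the check_modified_factory pattern above.
def check_modified_factory(compare_method):
    def check(old, new):
        if hasattr(new, compare_method):
            # treat "no old version" as modified, same as the hunk above
            return not old or not getattr(new, compare_method)(old)
        return False
    return check

class Node:
    def __init__(self, body):
        self.body = body
    def same_body(self, other):
        return self.body == other.body

check_body = check_modified_factory("same_body")
print(check_body(Node("select 1"), Node("select 2")))  # True: body changed
print(check_body(None, Node("select 1")))              # True: no previous version
print(check_body(Node("select 1"), Node("select 1")))  # False: unchanged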
def check_new(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:
return old is None
@@ -485,14 +513,21 @@ class StateSelectorMethod(SelectorMethod):
state_checks = {
# it's new if there is no old version
'new': lambda old, _: old is None,
'new':
lambda old, _: old is None,
# use methods defined above to compare properties of old + new
'modified': self.check_modified,
'modified.body': self.check_modified_body,
'modified.configs': self.check_modified_configs,
'modified.persisted_descriptions': self.check_modified_persisted_descriptions,
'modified.relation': self.check_modified_relation,
'modified.macros': self.check_modified_macros,
'modified':
self.check_modified_content,
'modified.body':
self.check_modified_factory('same_body'),
'modified.configs':
self.check_modified_factory('same_config'),
'modified.persisted_descriptions':
self.check_modified_factory('same_persisted_description'),
'modified.relation':
self.check_modified_factory('same_database_representation'),
'modified.macros':
self.check_modified_macros,
}
if selector in state_checks:
checker = state_checks[selector]
@@ -512,6 +547,8 @@ class StateSelectorMethod(SelectorMethod):
previous_node = manifest.sources[node]
elif node in manifest.exposures:
previous_node = manifest.exposures[node]
elif node in manifest.metrics:
previous_node = manifest.metrics[node]
if checker(previous_node, real_node):
yield node
@@ -544,8 +581,10 @@ class MethodManager:
MethodName.Config: ConfigSelectorMethod,
MethodName.TestName: TestNameSelectorMethod,
MethodName.TestType: TestTypeSelectorMethod,
MethodName.ResourceType: ResourceTypeSelectorMethod,
MethodName.State: StateSelectorMethod,
MethodName.Exposure: ExposureSelectorMethod,
MethodName.Metric: MetricSelectorMethod,
MethodName.Result: ResultSelectorMethod,
}

View File

@@ -2,6 +2,7 @@ import os
import re
from abc import ABCMeta, abstractmethod
from dataclasses import dataclass
from dbt.dataclass_schema import StrEnum
from typing import (
Set, Iterator, List, Optional, Dict, Union, Any, Iterable, Tuple
@@ -22,6 +23,11 @@ RAW_SELECTOR_PATTERN = re.compile(
SELECTOR_METHOD_SEPARATOR = '.'
class IndirectSelection(StrEnum):
Eager = 'eager'
Cautious = 'cautious'
def _probably_path(value: str):
"""Decide if value is probably a path. Windows has two path separators, so
we should check both sep ('\\') and altsep ('/') there.
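The function body is not shown in this hunk, so the following is only a guess at the separator check the docstring describes, not the actual implementation:

# Minimal sketch of a separator-based "is this probably a path?" check,
# assuming the os.sep / os.altsep idea the docstring mentions.
import os

def probably_path(value: str) -> bool:
    if os.sep in value:
        return True
    return bool(os.altsep) and os.altsep in value

print(probably_path("models/staging"))   # True
print(probably_path("tag:nightly"))      # False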
@@ -66,8 +72,7 @@ class SelectionCriteria:
parents_depth: Optional[int]
children: bool
children_depth: Optional[int]
greedy: bool = False
greedy_warning: bool = False # do not raise warning for yaml selectors
indirect_selection: IndirectSelection = IndirectSelection.Eager
def __post_init__(self):
if self.children and self.childrens_parents:
@@ -105,7 +110,8 @@ class SelectionCriteria:
@classmethod
def selection_criteria_from_dict(
cls, raw: Any, dct: Dict[str, Any], greedy: bool = False
cls, raw: Any, dct: Dict[str, Any],
indirect_selection: IndirectSelection = IndirectSelection.Eager
) -> 'SelectionCriteria':
if 'value' not in dct:
raise RuntimeException(
@@ -115,6 +121,12 @@ class SelectionCriteria:
parents_depth = _match_to_int(dct, 'parents_depth')
children_depth = _match_to_int(dct, 'children_depth')
# If the field is defined in the selector, it overrides the CLI flag
indirect_selection = IndirectSelection(
dct.get('indirect_selection', None) or indirect_selection
)
return cls(
raw=raw,
method=method_name,
@@ -125,7 +137,7 @@ class SelectionCriteria:
parents_depth=parents_depth,
children=bool(dct.get('children')),
children_depth=children_depth,
greedy=(greedy or bool(dct.get('greedy'))),
indirect_selection=indirect_selection
)
@classmethod
@@ -137,7 +149,7 @@ class SelectionCriteria:
method_name, method_arguments = cls.parse_method(dct)
meth_name = str(method_name)
if method_arguments:
meth_name = meth_name + '.' + '.'.join(method_arguments)
meth_name += '.' + '.'.join(method_arguments)
dct['method'] = meth_name
dct = {k: v for k, v in dct.items() if (v is not None and v != '')}
if 'childrens_parents' in dct:
@@ -146,18 +158,23 @@ class SelectionCriteria:
dct['parents'] = bool(dct.get('parents'))
if 'children' in dct:
dct['children'] = bool(dct.get('children'))
if 'greedy' in dct:
dct['greedy'] = bool(dct.get('greedy'))
return dct
@classmethod
def from_single_spec(cls, raw: str, greedy: bool = False) -> 'SelectionCriteria':
def from_single_spec(
cls, raw: str,
indirect_selection: IndirectSelection = IndirectSelection.Eager
) -> 'SelectionCriteria':
result = RAW_SELECTOR_PATTERN.match(raw)
if result is None:
# bad spec!
raise RuntimeException(f'Invalid selector spec "{raw}"')
return cls.selection_criteria_from_dict(raw, result.groupdict(), greedy=greedy)
return cls.selection_criteria_from_dict(
raw,
result.groupdict(),
indirect_selection=indirect_selection
)
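One effect of the indirect_selection plumbing above: a value set inside a yaml selector definition wins over the value passed down from the CLI flag. A toy illustration of that precedence, using a stdlib Enum in place of dbt's StrEnum:

# Toy illustration of the "yaml selector overrides CLI flag" precedence above.
from enum import Enum

class IndirectSelection(str, Enum):
    Eager = "eager"
    Cautious = "cautious"

def resolve(dct, cli_value):
    # same shape as the hunk above: a value in the selector dict wins, else the CLI flag
    return IndirectSelection(dct.get("indirect_selection", None) or cli_value)

print(resolve({}, IndirectSelection.Eager).value)                                  # eager
print(resolve({"indirect_selection": "cautious"}, IndirectSelection.Eager).value)  # cautious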
class BaseSelectionGroup(Iterable[SelectionSpec], metaclass=ABCMeta):
@@ -165,12 +182,10 @@ class BaseSelectionGroup(Iterable[SelectionSpec], metaclass=ABCMeta):
self,
components: Iterable[SelectionSpec],
expect_exists: bool = False,
greedy_warning: bool = True,
raw: Any = None,
):
self.components: List[SelectionSpec] = list(components)
self.expect_exists = expect_exists
self.greedy_warning = greedy_warning
self.raw = raw
def __iter__(self) -> Iterator[SelectionSpec]:

View File

@@ -93,3 +93,10 @@ dbtClassMixin.register_field_encoders({
FQNPath = Tuple[str, ...]
PathSet = AbstractSet[FQNPath]
# This class is used in to_target_dict, so that accesses to missing keys
# will return an empty string instead of Undefined
class DictDefaultEmptyStr(dict):
def __getitem__(self, key):
return dict.get(self, key, "")
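A quick illustration of the lookup behavior the comment describes (the keys here are invented); in a rendered Jinja context the missing key then shows up as an empty string rather than Undefined:

# Sample usage of the DictDefaultEmptyStr behavior described above.
class DictDefaultEmptyStr(dict):
    def __getitem__(self, key):
        return dict.get(self, key, "")

target = DictDefaultEmptyStr({"name": "dev"})
print(target["name"])      # "dev"
print(target["password"])  # "" instead of a KeyError / Undefined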

View File

@@ -2,5 +2,6 @@ config-version: 2
name: dbt
version: 1.0
docs-paths: ['docs']
docs-paths: ["docs"]
macro-paths: ["macros"]
test-paths: ["tests"]

View File

@@ -26,7 +26,7 @@ On model pages, you'll see the immediate parents and children of the model you'r
button at the top-right of this lineage pane, you'll be able to see all of the models that are used to build,
or are built from, the model you're exploring.
Once expanded, you'll be able to use the `--models` and `--exclude` model selection syntax to filter the
Once expanded, you'll be able to use the `--select` and `--exclude` model selection syntax to filter the
models in the graph. For more information on model selection, check out the [dbt docs](https://docs.getdbt.com/docs/model-selection-syntax).
Note that you can also right-click on models to interactively filter and explore the graph.
@@ -35,9 +35,9 @@ Note that you can also right-click on models to interactively filter and explore
### More information
- [What is dbt](https://docs.getdbt.com/docs/overview)?
- [What is dbt](https://docs.getdbt.com/docs/introduction)?
- Read the [dbt viewpoint](https://docs.getdbt.com/docs/viewpoint)
- [Installation](https://docs.getdbt.com/docs/installation)
- Join the [chat](https://community.getdbt.com/) on Slack for live questions and support.
- Join the [dbt Community](https://www.getdbt.com/community/) for questions and discussion
{% enddocs %}

View File

@@ -0,0 +1,89 @@
{% macro get_columns_in_relation(relation) -%}
{{ return(adapter.dispatch('get_columns_in_relation', 'dbt')(relation)) }}
{% endmacro %}
{% macro default__get_columns_in_relation(relation) -%}
{{ exceptions.raise_not_implemented(
'get_columns_in_relation macro not implemented for adapter '+adapter.type()) }}
{% endmacro %}
{# helper for adapter-specific implementations of get_columns_in_relation #}
{% macro sql_convert_columns_in_relation(table) -%}
{% set columns = [] %}
{% for row in table %}
{% do columns.append(api.Column(*row)) %}
{% endfor %}
{{ return(columns) }}
{% endmacro %}
{% macro get_columns_in_query(select_sql) -%}
{{ return(adapter.dispatch('get_columns_in_query', 'dbt')(select_sql)) }}
{% endmacro %}
{% macro default__get_columns_in_query(select_sql) %}
{% call statement('get_columns_in_query', fetch_result=True, auto_begin=False) -%}
select * from (
{{ select_sql }}
) as __dbt_sbq
where false
limit 0
{% endcall %}
{{ return(load_result('get_columns_in_query').table.columns | map(attribute='name') | list) }}
{% endmacro %}
{% macro alter_column_type(relation, column_name, new_column_type) -%}
{{ return(adapter.dispatch('alter_column_type', 'dbt')(relation, column_name, new_column_type)) }}
{% endmacro %}
{% macro default__alter_column_type(relation, column_name, new_column_type) -%}
{#
1. Create a new column (w/ temp name and correct type)
2. Copy data over to it
3. Drop the existing column (cascade!)
4. Rename the new column to existing column
#}
{%- set tmp_column = column_name + "__dbt_alter" -%}
{% call statement('alter_column_type') %}
alter table {{ relation }} add column {{ adapter.quote(tmp_column) }} {{ new_column_type }};
update {{ relation }} set {{ adapter.quote(tmp_column) }} = {{ adapter.quote(column_name) }};
alter table {{ relation }} drop column {{ adapter.quote(column_name) }} cascade;
alter table {{ relation }} rename column {{ adapter.quote(tmp_column) }} to {{ adapter.quote(column_name) }}
{% endcall %}
{% endmacro %}
{% macro alter_relation_add_remove_columns(relation, add_columns = none, remove_columns = none) -%}
{{ return(adapter.dispatch('alter_relation_add_remove_columns', 'dbt')(relation, add_columns, remove_columns)) }}
{% endmacro %}
{% macro default__alter_relation_add_remove_columns(relation, add_columns, remove_columns) %}
{% if add_columns is none %}
{% set add_columns = [] %}
{% endif %}
{% if remove_columns is none %}
{% set remove_columns = [] %}
{% endif %}
{% set sql -%}
alter {{ relation.type }} {{ relation }}
{% for column in add_columns %}
add column {{ column.name }} {{ column.data_type }}{{ ',' if not loop.last }}
{% endfor %}{{ ',' if add_columns and remove_columns }}
{% for column in remove_columns %}
drop column {{ column.name }}{{ ',' if not loop.last }}
{% endfor %}
{%- endset -%}
{% do run_query(sql) %}
{% endmacro %}

View File

@@ -1,344 +0,0 @@
{% macro get_columns_in_query(select_sql) -%}
{{ return(adapter.dispatch('get_columns_in_query', 'dbt')(select_sql)) }}
{% endmacro %}
{% macro default__get_columns_in_query(select_sql) %}
{% call statement('get_columns_in_query', fetch_result=True, auto_begin=False) -%}
select * from (
{{ select_sql }}
) as __dbt_sbq
where false
limit 0
{% endcall %}
{{ return(load_result('get_columns_in_query').table.columns | map(attribute='name') | list) }}
{% endmacro %}
{% macro create_schema(relation) -%}
{{ adapter.dispatch('create_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__create_schema(relation) -%}
{%- call statement('create_schema') -%}
create schema if not exists {{ relation.without_identifier() }}
{% endcall %}
{% endmacro %}
{% macro drop_schema(relation) -%}
{{ adapter.dispatch('drop_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__drop_schema(relation) -%}
{%- call statement('drop_schema') -%}
drop schema if exists {{ relation.without_identifier() }} cascade
{% endcall %}
{% endmacro %}
{% macro create_table_as(temporary, relation, sql) -%}
{{ adapter.dispatch('create_table_as', 'dbt')(temporary, relation, sql) }}
{%- endmacro %}
{% macro default__create_table_as(temporary, relation, sql) -%}
{%- set sql_header = config.get('sql_header', none) -%}
{{ sql_header if sql_header is not none }}
create {% if temporary: -%}temporary{%- endif %} table
{{ relation.include(database=(not temporary), schema=(not temporary)) }}
as (
{{ sql }}
);
{% endmacro %}
{% macro get_create_index_sql(relation, index_dict) -%}
{{ return(adapter.dispatch('get_create_index_sql', 'dbt')(relation, index_dict)) }}
{% endmacro %}
{% macro default__get_create_index_sql(relation, index_dict) -%}
{% do return(None) %}
{% endmacro %}
{% macro create_indexes(relation) -%}
{{ adapter.dispatch('create_indexes', 'dbt')(relation) }}
{%- endmacro %}
{% macro default__create_indexes(relation) -%}
{%- set _indexes = config.get('indexes', default=[]) -%}
{% for _index_dict in _indexes %}
{% set create_index_sql = get_create_index_sql(relation, _index_dict) %}
{% if create_index_sql %}
{% do run_query(create_index_sql) %}
{% endif %}
{% endfor %}
{% endmacro %}
{% macro create_view_as(relation, sql) -%}
{{ adapter.dispatch('create_view_as', 'dbt')(relation, sql) }}
{%- endmacro %}
{% macro default__create_view_as(relation, sql) -%}
{%- set sql_header = config.get('sql_header', none) -%}
{{ sql_header if sql_header is not none }}
create view {{ relation }} as (
{{ sql }}
);
{% endmacro %}
{% macro get_catalog(information_schema, schemas) -%}
{{ return(adapter.dispatch('get_catalog', 'dbt')(information_schema, schemas)) }}
{%- endmacro %}
{% macro default__get_catalog(information_schema, schemas) -%}
{% set typename = adapter.type() %}
{% set msg -%}
get_catalog not implemented for {{ typename }}
{%- endset %}
{{ exceptions.raise_compiler_error(msg) }}
{% endmacro %}
{% macro get_columns_in_relation(relation) -%}
{{ return(adapter.dispatch('get_columns_in_relation', 'dbt')(relation)) }}
{% endmacro %}
{% macro sql_convert_columns_in_relation(table) -%}
{% set columns = [] %}
{% for row in table %}
{% do columns.append(api.Column(*row)) %}
{% endfor %}
{{ return(columns) }}
{% endmacro %}
{% macro default__get_columns_in_relation(relation) -%}
{{ exceptions.raise_not_implemented(
'get_columns_in_relation macro not implemented for adapter '+adapter.type()) }}
{% endmacro %}
{% macro alter_column_type(relation, column_name, new_column_type) -%}
{{ return(adapter.dispatch('alter_column_type', 'dbt')(relation, column_name, new_column_type)) }}
{% endmacro %}
{% macro alter_column_comment(relation, column_dict) -%}
{{ return(adapter.dispatch('alter_column_comment', 'dbt')(relation, column_dict)) }}
{% endmacro %}
{% macro default__alter_column_comment(relation, column_dict) -%}
{{ exceptions.raise_not_implemented(
'alter_column_comment macro not implemented for adapter '+adapter.type()) }}
{% endmacro %}
{% macro alter_relation_comment(relation, relation_comment) -%}
{{ return(adapter.dispatch('alter_relation_comment', 'dbt')(relation, relation_comment)) }}
{% endmacro %}
{% macro default__alter_relation_comment(relation, relation_comment) -%}
{{ exceptions.raise_not_implemented(
'alter_relation_comment macro not implemented for adapter '+adapter.type()) }}
{% endmacro %}
{% macro persist_docs(relation, model, for_relation=true, for_columns=true) -%}
{{ return(adapter.dispatch('persist_docs', 'dbt')(relation, model, for_relation, for_columns)) }}
{% endmacro %}
{% macro default__persist_docs(relation, model, for_relation, for_columns) -%}
{% if for_relation and config.persist_relation_docs() and model.description %}
{% do run_query(alter_relation_comment(relation, model.description)) %}
{% endif %}
{% if for_columns and config.persist_column_docs() and model.columns %}
{% do run_query(alter_column_comment(relation, model.columns)) %}
{% endif %}
{% endmacro %}
{% macro default__alter_column_type(relation, column_name, new_column_type) -%}
{#
1. Create a new column (w/ temp name and correct type)
2. Copy data over to it
3. Drop the existing column (cascade!)
4. Rename the new column to existing column
#}
{%- set tmp_column = column_name + "__dbt_alter" -%}
{% call statement('alter_column_type') %}
alter table {{ relation }} add column {{ adapter.quote(tmp_column) }} {{ new_column_type }};
update {{ relation }} set {{ adapter.quote(tmp_column) }} = {{ adapter.quote(column_name) }};
alter table {{ relation }} drop column {{ adapter.quote(column_name) }} cascade;
alter table {{ relation }} rename column {{ adapter.quote(tmp_column) }} to {{ adapter.quote(column_name) }}
{% endcall %}
{% endmacro %}
{% macro drop_relation(relation) -%}
{{ return(adapter.dispatch('drop_relation', 'dbt')(relation)) }}
{% endmacro %}
{% macro default__drop_relation(relation) -%}
{% call statement('drop_relation', auto_begin=False) -%}
drop {{ relation.type }} if exists {{ relation }} cascade
{%- endcall %}
{% endmacro %}
{% macro truncate_relation(relation) -%}
{{ return(adapter.dispatch('truncate_relation', 'dbt')(relation)) }}
{% endmacro %}
{% macro default__truncate_relation(relation) -%}
{% call statement('truncate_relation') -%}
truncate table {{ relation }}
{%- endcall %}
{% endmacro %}
{% macro rename_relation(from_relation, to_relation) -%}
{{ return(adapter.dispatch('rename_relation', 'dbt')(from_relation, to_relation)) }}
{% endmacro %}
{% macro default__rename_relation(from_relation, to_relation) -%}
{% set target_name = adapter.quote_as_configured(to_relation.identifier, 'identifier') %}
{% call statement('rename_relation') -%}
alter table {{ from_relation }} rename to {{ target_name }}
{%- endcall %}
{% endmacro %}
{% macro information_schema_name(database) %}
{{ return(adapter.dispatch('information_schema_name', 'dbt')(database)) }}
{% endmacro %}
{% macro default__information_schema_name(database) -%}
{%- if database -%}
{{ database }}.INFORMATION_SCHEMA
{%- else -%}
INFORMATION_SCHEMA
{%- endif -%}
{%- endmacro %}
{% macro list_schemas(database) -%}
{{ return(adapter.dispatch('list_schemas', 'dbt')(database)) }}
{% endmacro %}
{% macro default__list_schemas(database) -%}
{% set sql %}
select distinct schema_name
from {{ information_schema_name(database) }}.SCHEMATA
where catalog_name ilike '{{ database }}'
{% endset %}
{{ return(run_query(sql)) }}
{% endmacro %}
{% macro check_schema_exists(information_schema, schema) -%}
{{ return(adapter.dispatch('check_schema_exists', 'dbt')(information_schema, schema)) }}
{% endmacro %}
{% macro default__check_schema_exists(information_schema, schema) -%}
{% set sql -%}
select count(*)
from {{ information_schema.replace(information_schema_view='SCHEMATA') }}
where catalog_name='{{ information_schema.database }}'
and schema_name='{{ schema }}'
{%- endset %}
{{ return(run_query(sql)) }}
{% endmacro %}
{% macro list_relations_without_caching(schema_relation) %}
{{ return(adapter.dispatch('list_relations_without_caching', 'dbt')(schema_relation)) }}
{% endmacro %}
{% macro default__list_relations_without_caching(schema_relation) %}
{{ exceptions.raise_not_implemented(
'list_relations_without_caching macro not implemented for adapter '+adapter.type()) }}
{% endmacro %}
{% macro current_timestamp() -%}
{{ adapter.dispatch('current_timestamp', 'dbt')() }}
{%- endmacro %}
{% macro default__current_timestamp() -%}
{{ exceptions.raise_not_implemented(
'current_timestamp macro not implemented for adapter '+adapter.type()) }}
{%- endmacro %}
{% macro collect_freshness(source, loaded_at_field, filter) %}
{{ return(adapter.dispatch('collect_freshness', 'dbt')(source, loaded_at_field, filter))}}
{% endmacro %}
{% macro default__collect_freshness(source, loaded_at_field, filter) %}
{% call statement('collect_freshness', fetch_result=True, auto_begin=False) -%}
select
max({{ loaded_at_field }}) as max_loaded_at,
{{ current_timestamp() }} as snapshotted_at
from {{ source }}
{% if filter %}
where {{ filter }}
{% endif %}
{% endcall %}
{{ return(load_result('collect_freshness').table) }}
{% endmacro %}
{% macro make_temp_relation(base_relation, suffix='__dbt_tmp') %}
{{ return(adapter.dispatch('make_temp_relation', 'dbt')(base_relation, suffix))}}
{% endmacro %}
{% macro default__make_temp_relation(base_relation, suffix) %}
{% set tmp_identifier = base_relation.identifier ~ suffix %}
{% set tmp_relation = base_relation.incorporate(
path={"identifier": tmp_identifier}) -%}
{% do return(tmp_relation) %}
{% endmacro %}
{% macro set_sql_header(config) -%}
{{ config.set('sql_header', caller()) }}
{%- endmacro %}
{% macro alter_relation_add_remove_columns(relation, add_columns = none, remove_columns = none) -%}
{{ return(adapter.dispatch('alter_relation_add_remove_columns', 'dbt')(relation, add_columns, remove_columns)) }}
{% endmacro %}
{% macro default__alter_relation_add_remove_columns(relation, add_columns, remove_columns) %}
{% if add_columns is none %}
{% set add_columns = [] %}
{% endif %}
{% if remove_columns is none %}
{% set remove_columns = [] %}
{% endif %}
{% set sql -%}
alter {{ relation.type }} {{ relation }}
{% for column in add_columns %}
add column {{ column.name }} {{ column.data_type }}{{ ',' if not loop.last }}
{% endfor %}{{ ',' if remove_columns | length > 0 }}
{% for column in remove_columns %}
drop column {{ column.name }}{{ ',' if not loop.last }}
{% endfor %}
{%- endset -%}
{% do run_query(sql) %}
{% endmacro %}

View File

@@ -0,0 +1,26 @@
{% macro current_timestamp() -%}
{{ adapter.dispatch('current_timestamp', 'dbt')() }}
{%- endmacro %}
{% macro default__current_timestamp() -%}
{{ exceptions.raise_not_implemented(
'current_timestamp macro not implemented for adapter '+adapter.type()) }}
{%- endmacro %}
{% macro collect_freshness(source, loaded_at_field, filter) %}
{{ return(adapter.dispatch('collect_freshness', 'dbt')(source, loaded_at_field, filter))}}
{% endmacro %}
{% macro default__collect_freshness(source, loaded_at_field, filter) %}
{% call statement('collect_freshness', fetch_result=True, auto_begin=False) -%}
select
max({{ loaded_at_field }}) as max_loaded_at,
{{ current_timestamp() }} as snapshotted_at
from {{ source }}
{% if filter %}
where {{ filter }}
{% endif %}
{% endcall %}
{{ return(load_result('collect_freshness').table) }}
{% endmacro %}

View File

@@ -0,0 +1,23 @@
{% macro get_create_index_sql(relation, index_dict) -%}
{{ return(adapter.dispatch('get_create_index_sql', 'dbt')(relation, index_dict)) }}
{% endmacro %}
{% macro default__get_create_index_sql(relation, index_dict) -%}
{% do return(None) %}
{% endmacro %}
{% macro create_indexes(relation) -%}
{{ adapter.dispatch('create_indexes', 'dbt')(relation) }}
{%- endmacro %}
{% macro default__create_indexes(relation) -%}
{%- set _indexes = config.get('indexes', default=[]) -%}
{% for _index_dict in _indexes %}
{% set create_index_sql = get_create_index_sql(relation, _index_dict) %}
{% if create_index_sql %}
{% do run_query(create_index_sql) %}
{% endif %}
{% endfor %}
{% endmacro %}

View File

@@ -0,0 +1,65 @@
{% macro get_catalog(information_schema, schemas) -%}
{{ return(adapter.dispatch('get_catalog', 'dbt')(information_schema, schemas)) }}
{%- endmacro %}
{% macro default__get_catalog(information_schema, schemas) -%}
{% set typename = adapter.type() %}
{% set msg -%}
get_catalog not implemented for {{ typename }}
{%- endset %}
{{ exceptions.raise_compiler_error(msg) }}
{% endmacro %}
{% macro information_schema_name(database) %}
{{ return(adapter.dispatch('information_schema_name', 'dbt')(database)) }}
{% endmacro %}
{% macro default__information_schema_name(database) -%}
{%- if database -%}
{{ database }}.INFORMATION_SCHEMA
{%- else -%}
INFORMATION_SCHEMA
{%- endif -%}
{%- endmacro %}
{% macro list_schemas(database) -%}
{{ return(adapter.dispatch('list_schemas', 'dbt')(database)) }}
{% endmacro %}
{% macro default__list_schemas(database) -%}
{% set sql %}
select distinct schema_name
from {{ information_schema_name(database) }}.SCHEMATA
where catalog_name ilike '{{ database }}'
{% endset %}
{{ return(run_query(sql)) }}
{% endmacro %}
{% macro check_schema_exists(information_schema, schema) -%}
{{ return(adapter.dispatch('check_schema_exists', 'dbt')(information_schema, schema)) }}
{% endmacro %}
{% macro default__check_schema_exists(information_schema, schema) -%}
{% set sql -%}
select count(*)
from {{ information_schema.replace(information_schema_view='SCHEMATA') }}
where catalog_name='{{ information_schema.database }}'
and schema_name='{{ schema }}'
{%- endset %}
{{ return(run_query(sql)) }}
{% endmacro %}
{% macro list_relations_without_caching(schema_relation) %}
{{ return(adapter.dispatch('list_relations_without_caching', 'dbt')(schema_relation)) }}
{% endmacro %}
{% macro default__list_relations_without_caching(schema_relation) %}
{{ exceptions.raise_not_implemented(
'list_relations_without_caching macro not implemented for adapter '+adapter.type()) }}
{% endmacro %}

View File

@@ -0,0 +1,33 @@
{% macro alter_column_comment(relation, column_dict) -%}
{{ return(adapter.dispatch('alter_column_comment', 'dbt')(relation, column_dict)) }}
{% endmacro %}
{% macro default__alter_column_comment(relation, column_dict) -%}
{{ exceptions.raise_not_implemented(
'alter_column_comment macro not implemented for adapter '+adapter.type()) }}
{% endmacro %}
{% macro alter_relation_comment(relation, relation_comment) -%}
{{ return(adapter.dispatch('alter_relation_comment', 'dbt')(relation, relation_comment)) }}
{% endmacro %}
{% macro default__alter_relation_comment(relation, relation_comment) -%}
{{ exceptions.raise_not_implemented(
'alter_relation_comment macro not implemented for adapter '+adapter.type()) }}
{% endmacro %}
{% macro persist_docs(relation, model, for_relation=true, for_columns=true) -%}
{{ return(adapter.dispatch('persist_docs', 'dbt')(relation, model, for_relation, for_columns)) }}
{% endmacro %}
{% macro default__persist_docs(relation, model, for_relation, for_columns) -%}
{% if for_relation and config.persist_relation_docs() and model.description %}
{% do run_query(alter_relation_comment(relation, model.description)) %}
{% endif %}
{% if for_columns and config.persist_column_docs() and model.columns %}
{% do run_query(alter_column_comment(relation, model.columns)) %}
{% endif %}
{% endmacro %}

View File

@@ -0,0 +1,84 @@
{% macro make_temp_relation(base_relation, suffix='__dbt_tmp') %}
{{ return(adapter.dispatch('make_temp_relation', 'dbt')(base_relation, suffix))}}
{% endmacro %}
{% macro default__make_temp_relation(base_relation, suffix) %}
{% set tmp_identifier = base_relation.identifier ~ suffix %}
{% set tmp_relation = base_relation.incorporate(
path={"identifier": tmp_identifier}) -%}
{% do return(tmp_relation) %}
{% endmacro %}
{% macro drop_relation(relation) -%}
{{ return(adapter.dispatch('drop_relation', 'dbt')(relation)) }}
{% endmacro %}
{% macro default__drop_relation(relation) -%}
{% call statement('drop_relation', auto_begin=False) -%}
drop {{ relation.type }} if exists {{ relation }} cascade
{%- endcall %}
{% endmacro %}
{% macro truncate_relation(relation) -%}
{{ return(adapter.dispatch('truncate_relation', 'dbt')(relation)) }}
{% endmacro %}
{% macro default__truncate_relation(relation) -%}
{% call statement('truncate_relation') -%}
truncate table {{ relation }}
{%- endcall %}
{% endmacro %}
{% macro rename_relation(from_relation, to_relation) -%}
{{ return(adapter.dispatch('rename_relation', 'dbt')(from_relation, to_relation)) }}
{% endmacro %}
{% macro default__rename_relation(from_relation, to_relation) -%}
{% set target_name = adapter.quote_as_configured(to_relation.identifier, 'identifier') %}
{% call statement('rename_relation') -%}
alter table {{ from_relation }} rename to {{ target_name }}
{%- endcall %}
{% endmacro %}
{% macro get_or_create_relation(database, schema, identifier, type) -%}
{{ return(adapter.dispatch('get_or_create_relation', 'dbt')(database, schema, identifier, type)) }}
{% endmacro %}
{% macro default__get_or_create_relation(database, schema, identifier, type) %}
{%- set target_relation = adapter.get_relation(database=database, schema=schema, identifier=identifier) %}
{% if target_relation %}
{% do return([true, target_relation]) %}
{% endif %}
{%- set new_relation = api.Relation.create(
database=database,
schema=schema,
identifier=identifier,
type=type
) -%}
{% do return([false, new_relation]) %}
{% endmacro %}
{# a user-friendly interface into adapter.get_relation #}
{% macro load_relation(relation) %}
{% do return(adapter.get_relation(
database=relation.database,
schema=relation.schema,
identifier=relation.identifier
)) -%}
{% endmacro %}
{# not used much, here for backwards compatibility #}
{% macro drop_relation_if_exists(relation) %}
{% if relation is not none %}
{{ adapter.drop_relation(relation) }}
{% endif %}
{% endmacro %}

View File

@@ -0,0 +1,20 @@
{% macro create_schema(relation) -%}
{{ adapter.dispatch('create_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__create_schema(relation) -%}
{%- call statement('create_schema') -%}
create schema if not exists {{ relation.without_identifier() }}
{% endcall %}
{% endmacro %}
{% macro drop_schema(relation) -%}
{{ adapter.dispatch('drop_schema', 'dbt')(relation) }}
{% endmacro %}
{% macro default__drop_schema(relation) -%}
{%- call statement('drop_schema') -%}
drop schema if exists {{ relation.without_identifier() }} cascade
{% endcall %}
{% endmacro %}

View File

@@ -1,4 +1,3 @@
{% macro convert_datetime(date_str, date_fmt) %}
{% set error_msg -%}
@@ -10,6 +9,7 @@
{% endmacro %}
{% macro dates_in_range(start_date_str, end_date_str=none, in_fmt="%Y%m%d", out_fmt="%Y%m%d") %}
{% set end_date_str = start_date_str if end_date_str is none else end_date_str %}
@@ -38,6 +38,7 @@
{{ return(date_list) }}
{% endmacro %}
{% macro partition_range(raw_partition_date, date_fmt='%Y%m%d') %}
{% set partition_range = (raw_partition_date | string).split(",") %}
@@ -54,6 +55,7 @@
{{ return(dates_in_range(start_date, end_date, in_fmt=date_fmt)) }}
{% endmacro %}
{% macro py_current_timestring() %}
{% set dt = modules.datetime.datetime.now() %}
{% do return(dt.strftime("%Y%m%d%H%M%S%f")) %}

View File

@@ -1,8 +0,0 @@
{% macro run_query(sql) %}
{% call statement("run_query_statement", fetch_result=true, auto_begin=false) %}
{{ sql }}
{% endcall %}
{% do return(load_result("run_query_statement").table) %}
{% endmacro %}

View File

@@ -15,6 +15,7 @@
{%- endif -%}
{%- endmacro %}
{% macro noop_statement(name=None, message=None, code=None, rows_affected=None, res=None) -%}
{%- set sql = caller() -%}
@@ -28,3 +29,13 @@
{%- endif -%}
{%- endmacro %}
{# a user-friendly interface into statements #}
{% macro run_query(sql) %}
{% call statement("run_query_statement", fetch_result=true, auto_begin=false) %}
{{ sql }}
{% endcall %}
{% do return(load_result("run_query_statement").table) %}
{% endmacro %}

View File

@@ -25,9 +25,3 @@ where value_field not in (
)
{% endmacro %}
{% test accepted_values(model, column_name, values, quote=True) %}
{% set macro = adapter.dispatch('test_accepted_values', 'dbt') %}
{{ macro(model, column_name, values, quote) }}
{% endtest %}

View File

@@ -0,0 +1,8 @@
{% macro default__test_not_null(model, column_name) %}
select *
from {{ model }}
where {{ column_name }} is null
{% endmacro %}

View File

@@ -1,4 +1,3 @@
{% macro default__test_relationships(model, column_name, to, field) %}
with child as (
@@ -22,9 +21,3 @@ left join parent
where parent.to_field is null
{% endmacro %}
{% test relationships(model, column_name, to, field) %}
{% set macro = adapter.dispatch('test_relationships', 'dbt') %}
{{ macro(model, column_name, to, field) }}
{% endtest %}

View File

@@ -11,8 +11,3 @@ having count(*) > 1
{% endmacro %}
{% test unique(model, column_name) %}
{% set macro = adapter.dispatch('test_unique', 'dbt') %}
{{ macro(model, column_name) }}
{% endtest %}

View File

@@ -1,13 +0,0 @@
{% macro default__test_not_null(model, column_name) %}
select *
from {{ model }}
where {{ column_name }} is null
{% endmacro %}
{% test not_null(model, column_name) %}
{% set macro = adapter.dispatch('test_not_null', 'dbt') %}
{{ macro(model, column_name) }}
{% endtest %}

View File

@@ -12,6 +12,7 @@
node: The available node that an alias is being generated for, or none
#}
{% macro generate_alias_name(custom_alias_name=none, node=none) -%}
{% do return(adapter.dispatch('generate_alias_name', 'dbt')(custom_alias_name, node)) %}
{%- endmacro %}

View File

@@ -14,7 +14,7 @@
node: The node the schema is being generated for
#}
{% macro generate_schema_name(custom_schema_name, node) -%}
{% macro generate_schema_name(custom_schema_name=none, node=none) -%}
{{ return(adapter.dispatch('generate_schema_name', 'dbt')(custom_schema_name, node)) }}
{% endmacro %}

View File

@@ -0,0 +1,21 @@
{% macro set_sql_header(config) -%}
{{ config.set('sql_header', caller()) }}
{%- endmacro %}
{% macro should_full_refresh() %}
{% set config_full_refresh = config.get('full_refresh') %}
{% if config_full_refresh is none %}
{% set config_full_refresh = flags.FULL_REFRESH %}
{% endif %}
{% do return(config_full_refresh) %}
{% endmacro %}
{% macro should_store_failures() %}
{% set config_store_failures = config.get('store_failures') %}
{% if config_store_failures is none %}
{% set config_store_failures = flags.STORE_FAILURES %}
{% endif %}
{% do return(config_store_failures) %}
{% endmacro %}

View File

@@ -1,83 +0,0 @@
{% macro run_hooks(hooks, inside_transaction=True) %}
{% for hook in hooks | selectattr('transaction', 'equalto', inside_transaction) %}
{% if not inside_transaction and loop.first %}
{% call statement(auto_begin=inside_transaction) %}
commit;
{% endcall %}
{% endif %}
{% set rendered = render(hook.get('sql')) | trim %}
{% if (rendered | length) > 0 %}
{% call statement(auto_begin=inside_transaction) %}
{{ rendered }}
{% endcall %}
{% endif %}
{% endfor %}
{% endmacro %}
{% macro column_list(columns) %}
{%- for col in columns %}
{{ col.name }} {% if not loop.last %},{% endif %}
{% endfor -%}
{% endmacro %}
{% macro column_list_for_create_table(columns) %}
{%- for col in columns %}
{{ col.name }} {{ col.data_type }} {%- if not loop.last %},{% endif %}
{% endfor -%}
{% endmacro %}
{% macro make_hook_config(sql, inside_transaction) %}
{{ tojson({"sql": sql, "transaction": inside_transaction}) }}
{% endmacro %}
{% macro before_begin(sql) %}
{{ make_hook_config(sql, inside_transaction=False) }}
{% endmacro %}
{% macro in_transaction(sql) %}
{{ make_hook_config(sql, inside_transaction=True) }}
{% endmacro %}
{% macro after_commit(sql) %}
{{ make_hook_config(sql, inside_transaction=False) }}
{% endmacro %}
{% macro drop_relation_if_exists(relation) %}
{% if relation is not none %}
{{ adapter.drop_relation(relation) }}
{% endif %}
{% endmacro %}
{% macro load_relation(relation) %}
{% do return(adapter.get_relation(
database=relation.database,
schema=relation.schema,
identifier=relation.identifier
)) -%}
{% endmacro %}
{% macro should_full_refresh() %}
{% set config_full_refresh = config.get('full_refresh') %}
{% if config_full_refresh is none %}
{% set config_full_refresh = flags.FULL_REFRESH %}
{% endif %}
{% do return(config_full_refresh) %}
{% endmacro %}
{% macro should_store_failures() %}
{% set config_store_failures = config.get('store_failures') %}
{% if config_store_failures is none %}
{% set config_store_failures = flags.STORE_FAILURES %}
{% endif %}
{% do return(config_store_failures) %}
{% endmacro %}

View File

@@ -0,0 +1,35 @@
{% macro run_hooks(hooks, inside_transaction=True) %}
{% for hook in hooks | selectattr('transaction', 'equalto', inside_transaction) %}
{% if not inside_transaction and loop.first %}
{% call statement(auto_begin=inside_transaction) %}
commit;
{% endcall %}
{% endif %}
{% set rendered = render(hook.get('sql')) | trim %}
{% if (rendered | length) > 0 %}
{% call statement(auto_begin=inside_transaction) %}
{{ rendered }}
{% endcall %}
{% endif %}
{% endfor %}
{% endmacro %}
{% macro make_hook_config(sql, inside_transaction) %}
{{ tojson({"sql": sql, "transaction": inside_transaction}) }}
{% endmacro %}
{% macro before_begin(sql) %}
{{ make_hook_config(sql, inside_transaction=False) }}
{% endmacro %}
{% macro in_transaction(sql) %}
{{ make_hook_config(sql, inside_transaction=True) }}
{% endmacro %}
{% macro after_commit(sql) %}
{{ make_hook_config(sql, inside_transaction=False) }}
{% endmacro %}

View File

@@ -1,21 +0,0 @@
{% macro incremental_upsert(tmp_relation, target_relation, unique_key=none, statement_name="main") %}
{%- set dest_columns = adapter.get_columns_in_relation(target_relation) -%}
{%- set dest_cols_csv = dest_columns | map(attribute='quoted') | join(', ') -%}
{%- if unique_key is not none -%}
delete
from {{ target_relation }}
where ({{ unique_key }}) in (
select ({{ unique_key }})
from {{ tmp_relation }}
);
{%- endif %}
insert into {{ target_relation }} ({{ dest_cols_csv }})
(
select {{ dest_cols_csv }}
from {{ tmp_relation }}
);
{%- endmacro %}

View File

@@ -0,0 +1,52 @@
/* {#
Helper macros for internal use with incremental materializations.
Use with care if calling elsewhere.
#} */
{% macro get_quoted_csv(column_names) %}
{% set quoted = [] %}
{% for col in column_names -%}
{%- do quoted.append(adapter.quote(col)) -%}
{%- endfor %}
{%- set dest_cols_csv = quoted | join(', ') -%}
{{ return(dest_cols_csv) }}
{% endmacro %}
{% macro diff_columns(source_columns, target_columns) %}
{% set result = [] %}
{% set source_names = source_columns | map(attribute = 'column') | list %}
{% set target_names = target_columns | map(attribute = 'column') | list %}
{# --check whether the name attribute exists in the target - this does not perform a data type check #}
{% for sc in source_columns %}
{% if sc.name not in target_names %}
{{ result.append(sc) }}
{% endif %}
{% endfor %}
{{ return(result) }}
{% endmacro %}
{% macro diff_column_data_types(source_columns, target_columns) %}
{% set result = [] %}
{% for sc in source_columns %}
{% set tc = target_columns | selectattr("name", "equalto", sc.name) | list | first %}
{% if tc %}
{% if sc.data_type != tc.data_type %}
{{ result.append( { 'column_name': tc.name, 'new_type': sc.data_type } ) }}
{% endif %}
{% endif %}
{% endfor %}
{{ return(result) }}
{% endmacro %}

View File

@@ -53,8 +53,12 @@
{% do adapter.expand_target_column_types(
from_relation=tmp_relation,
to_relation=target_relation) %}
{% do process_schema_changes(on_schema_change, tmp_relation, existing_relation) %}
{% set build_sql = incremental_upsert(tmp_relation, target_relation, unique_key=unique_key) %}
{#-- Process schema changes. Returns dict of changes if successful. Use source columns for upserting/merging --#}
{% set dest_columns = process_schema_changes(on_schema_change, tmp_relation, existing_relation) %}
{% if not dest_columns %}
{% set dest_columns = adapter.get_columns_in_relation(existing_relation) %}
{% endif %}
{% set build_sql = get_delete_insert_merge_sql(target_relation, tmp_relation, unique_key, dest_columns) %}
{% endif %}

View File

@@ -1,20 +1,7 @@
{% macro get_merge_sql(target, source, unique_key, dest_columns, predicates=none) -%}
{{ adapter.dispatch('get_merge_sql', 'dbt')(target, source, unique_key, dest_columns, predicates) }}
{%- endmacro %}
{% macro get_delete_insert_merge_sql(target, source, unique_key, dest_columns) -%}
{{ adapter.dispatch('get_delete_insert_merge_sql', 'dbt')(target, source, unique_key, dest_columns) }}
{%- endmacro %}
{% macro get_insert_overwrite_merge_sql(target, source, dest_columns, predicates, include_sql_header=false) -%}
{{ adapter.dispatch('get_insert_overwrite_merge_sql', 'dbt')(target, source, dest_columns, predicates, include_sql_header) }}
{%- endmacro %}
{% macro default__get_merge_sql(target, source, unique_key, dest_columns, predicates) -%}
{%- set predicates = [] if predicates is none else [] + predicates -%}
{%- set dest_cols_csv = get_quoted_csv(dest_columns | map(attribute="name")) -%}
@@ -52,18 +39,11 @@
{% endmacro %}
{% macro get_quoted_csv(column_names) %}
{% set quoted = [] %}
{% for col in column_names -%}
{%- do quoted.append(adapter.quote(col)) -%}
{%- endfor %}
{% macro get_delete_insert_merge_sql(target, source, unique_key, dest_columns) -%}
{{ adapter.dispatch('get_delete_insert_merge_sql', 'dbt')(target, source, unique_key, dest_columns) }}
{%- endmacro %}
{%- set dest_cols_csv = quoted | join(', ') -%}
{{ return(dest_cols_csv) }}
{% endmacro %}
{% macro common_get_delete_insert_merge_sql(target, source, unique_key, dest_columns) -%}
{% macro default__get_delete_insert_merge_sql(target, source, unique_key, dest_columns) -%}
{%- set dest_cols_csv = get_quoted_csv(dest_columns | map(attribute="name")) -%}
@@ -83,10 +63,10 @@
{%- endmacro %}
{% macro default__get_delete_insert_merge_sql(target, source, unique_key, dest_columns) -%}
{{ common_get_delete_insert_merge_sql(target, source, unique_key, dest_columns) }}
{% endmacro %}
{% macro get_insert_overwrite_merge_sql(target, source, dest_columns, predicates, include_sql_header=false) -%}
{{ adapter.dispatch('get_insert_overwrite_merge_sql', 'dbt')(target, source, dest_columns, predicates, include_sql_header) }}
{%- endmacro %}
{% macro default__get_insert_overwrite_merge_sql(target, source, dest_columns, predicates, include_sql_header) -%}
{%- set predicates = [] if predicates is none else [] + predicates -%}

View File

@@ -15,39 +15,6 @@
{% endmacro %}
{% macro diff_columns(source_columns, target_columns) %}
{% set result = [] %}
{% set source_names = source_columns | map(attribute = 'column') | list %}
{% set target_names = target_columns | map(attribute = 'column') | list %}
{# --check whether the name attribute exists in the target - this does not perform a data type check #}
{% for sc in source_columns %}
{% if sc.name not in target_names %}
{{ result.append(sc) }}
{% endif %}
{% endfor %}
{{ return(result) }}
{% endmacro %}
{% macro diff_column_data_types(source_columns, target_columns) %}
{% set result = [] %}
{% for sc in source_columns %}
{% set tc = target_columns | selectattr("name", "equalto", sc.name) | list | first %}
{% if tc %}
{% if sc.data_type != tc.data_type %}
{{ result.append( { 'column_name': tc.name, 'new_type': sc.data_type } ) }}
{% endif %}
{% endif %}
{% endfor %}
{{ return(result) }}
{% endmacro %}
{% macro check_for_schema_changes(source_relation, target_relation) %}
@@ -57,7 +24,7 @@
{%- set target_columns = adapter.get_columns_in_relation(target_relation) -%}
{%- set source_not_in_target = diff_columns(source_columns, target_columns) -%}
{%- set target_not_in_source = diff_columns(target_columns, source_columns) -%}
{% set new_target_types = diff_column_data_types(source_columns, target_columns) %}
{% if source_not_in_target != [] %}
@@ -72,6 +39,8 @@
'schema_changed': schema_changed,
'source_not_in_target': source_not_in_target,
'target_not_in_source': target_not_in_source,
'source_columns': source_columns,
'target_columns': target_columns,
'new_target_types': new_target_types
} %}
@@ -132,7 +101,11 @@
{% macro process_schema_changes(on_schema_change, source_relation, target_relation) %}
{% if on_schema_change != 'ignore' %}
{% if on_schema_change == 'ignore' %}
{{ return({}) }}
{% else %}
{% set schema_changes_dict = check_for_schema_changes(source_relation, target_relation) %}
@@ -158,6 +131,8 @@
{% endif %}
{% endif %}
{{ return(schema_changes_dict['source_columns']) }}
{% endif %}

Some files were not shown because too many files have changed in this diff.