Compare commits


54 Commits

Author SHA1 Message Date
Callum McCann
fb8b161351 adding entity inheritance 2022-12-06 16:20:12 -06:00
Callum McCann
7ecb431278 making dimensions possible! 2022-12-06 13:54:28 -06:00
Callum McCann
792150ff6a let there be entity 2022-12-05 16:02:14 -06:00
leahwicz
85d0b5afc7 Reverting back to older ubuntu image (#6363)
* Reverting back to older ubuntu image

* Updating the structured logging workflow as well
2022-12-02 12:09:46 -05:00
Matthew McKnight
1fbcaa4484 reformatting of test after some spike investigation (#6314)
* reformatting of test after some spike investigation

* reformat code to pull tests back into base class definition, move a test to more appropriate spot
2022-12-01 16:54:58 -06:00
justbldwn
481235a943 clarify error log for number of allowed models in a Python file (#6251) 2022-12-01 14:43:36 -05:00
Michelle Ark
2289e45571 Exposures support metrics (#6342)
* exposures support metrics
2022-12-01 11:01:16 -05:00
dependabot[bot]
b5d303f12a Bump mashumaro[msgpack] from 3.0.4 to 3.1 in /core (#6108)
* Bump mashumaro[msgpack] from 3.0.4 to 3.1.1 in /core

Bumps [mashumaro[msgpack]](https://github.com/Fatal1ty/mashumaro) from 3.0.4 to 3.1.
- [Release notes](https://github.com/Fatal1ty/mashumaro/releases)
- [Commits](https://github.com/Fatal1ty/mashumaro/compare/v3.0.4...v3.1)

---
updated-dependencies:
- dependency-name: mashumaro[msgpack]
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
2022-11-30 17:32:47 -05:00
Mila Page
c3be975783 Ct 288/convert 070 incremental test (#6330)
* Convert incremental schema tests.

* Drop the old test.

* Bad git add. My disappointment is immeasurable and my day has been ruined.

* Adjustments for flake8.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-11-29 12:47:20 -08:00
Mila Page
47c2edb42a Ct 1518/convert 063 relation names tests (#6304)
* Convert old test.

Add documentation. Adapt and reenable previously skipped test.

* Convert test and adapt and comment for current standards.

* Remove old versions of tests.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-11-29 12:25:36 -08:00
Stu Kilgore
b3440417ad Add GHA workflow to build CLI API docs (#6187) 2022-11-29 13:30:47 -06:00
leahwicz
020f639c7a Update stale.yml (#6258) 2022-11-29 09:40:59 -05:00
Mila Page
55db15aba8 Convert test 067. (#6305)
* Convert test 067. One bug outstanding.

* Test now working! Schema needed renaming to avoid 63 char max problems

* Remove old test.

* Add some docs and rewrite.

* Add exception for when audit tables' schema runs over the db limit.

* Code cleanup.

* Revert exception.

* Round out comments.

* Rename what shouldn't be a base class.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-11-29 00:06:07 -08:00
Itamar Hartstein
bce0e7c096 BaseContext: expose md5 function in context (#6247)
* BaseContext: expose md5 function in context

* BaseContext: add return value type

* Add changie entry

* rename "md5" to "local_md5"

* fix test_context.py
2022-11-28 10:23:40 -05:00
Gerda Shank
7d7066466d CT 1537 fix event test and rename a couple of fields (#6293)
* Rename MacroEvent to JinjaLog

* Rename ConnectionClosed/2

* Fix LogSeedResult

* Rename ConnectionLeftOpen events, fix test_events.py

* Update events README.md, add "category" to EventInfo

* Rename GeneralMacroWarning to JinjaLogWarning
2022-11-22 14:54:20 -05:00
Emily Rockman
517576c088 add back in conditional node length check (#6298) 2022-11-21 21:20:55 -08:00
leahwicz
987764858b Revert "Bump python from 3.10.7-slim-bullseye to 3.11.0-slim-bullseye in /docker (#6180)" (#6281)
This reverts commit 8e28f5906e.
2022-11-17 09:14:22 -05:00
FishtownBuildBot
a235abd176 Add new index.html and changelog yaml files from dbt-docs (#6265) 2022-11-16 17:00:33 +01:00
dependabot[bot]
9297e4d55c Update pathspec requirement from ~=0.9.0 to >=0.9,<0.11 in /core (#5917)
* Update pathspec requirement from ~=0.9.0 to >=0.9,<0.11 in /core

Updates the requirements on [pathspec](https://github.com/cpburnz/python-pathspec) to permit the latest version.
- [Release notes](https://github.com/cpburnz/python-pathspec/releases)
- [Changelog](https://github.com/cpburnz/python-pathspec/blob/master/CHANGES.rst)
- [Commits](https://github.com/cpburnz/python-pathspec/compare/v0.9.0...v0.10.1)

---
updated-dependencies:
- dependency-name: pathspec
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-11-15 22:02:37 -05:00
Michelle Ark
eae98677b9 s/gitlab/github for flake8 precommit repo (#6252) 2022-11-15 10:30:00 -05:00
Matthew McKnight
66ac107409 [CT-1262] Convert dbt_debug (#6125)
* init pr for dbt_debug test conversion

* removal of old test

* minor test format change

* add new Base class and Test classes

* reformatting test, new method for capsys and error message to check; TODO: fix badproject

* reformatting tests, ready for review

* checking yaml file, and small reformat

* modifying since update wasn't working in ci/cd
2022-11-14 14:22:48 -06:00
Michelle Ark
39c5c42215 converting 044_test_run_operations (#6122)
* converting 044_test_run_operations
2022-11-14 10:39:57 -05:00
dependabot[bot]
9f280a8469 Update colorama requirement from <0.4.6,>=0.3.9 to >=0.3.9,<0.4.7 in /core (#6144)
* Update colorama requirement in /core

Updates the requirements on [colorama](https://github.com/tartley/colorama) to permit the latest version.
- [Release notes](https://github.com/tartley/colorama/releases)
- [Changelog](https://github.com/tartley/colorama/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/tartley/colorama/compare/0.3.9...0.4.6)

---
updated-dependencies:
- dependency-name: colorama
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-11-13 09:57:33 -05:00
Joe Berni
73116fb816 feature/favor-state-node (#5859) 2022-11-09 10:58:01 -06:00
Stu Kilgore
f02243506d Convert postgres index tests (#6228) 2022-11-08 15:30:29 -06:00
Stu Kilgore
d5e9ce1797 Convert color tests to pytest (#6230) 2022-11-08 15:25:57 -06:00
Stu Kilgore
4e786184d2 Convert threading tests to pytest (#6226) 2022-11-08 08:56:10 -06:00
Chenyu Li
930bd3541e properly track hook running (#6059) 2022-11-07 10:44:29 -06:00
Gerda Shank
6c76137da4 CT 1443 remove root path (#6172)
* Remove root_path

* Bump manifest schema to 8

* Update tests and compatibility utility for v8, root_path removal
2022-11-04 16:38:26 -04:00
Gerda Shank
68d06d8a9c Combine various print result log events with different levels (#6174)
* Combine various print result log events with different levels

* Changie

* more merge cleanup

* Specify DynamicLevel for event classes that must specify level
2022-11-04 14:26:37 -04:00
Rachel
d0543c9242 Updates lib to use new profile name functionality (#6202)
* Updates lib to use new profile name functionality

* Adds changie entry

* Fixes formatting
2022-11-04 10:05:24 -07:00
Michelle Ark
cfad27f963 add typing to DepsTask.run (#6192) 2022-11-03 17:35:16 -04:00
Emily Rockman
c3ccbe3357 add python version and upgrade action (#6204) 2022-11-03 09:13:00 -05:00
dependabot[bot]
8e28f5906e Bump python from 3.10.7-slim-bullseye to 3.11.0-slim-bullseye in /docker (#6180)
* Bump python from 3.10.7-slim-bullseye to 3.11.0-slim-bullseye in /docker

Bumps python from 3.10.7-slim-bullseye to 3.11.0-slim-bullseye.

---
updated-dependencies:
- dependency-name: python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2022-11-02 08:40:51 -07:00
FishtownBuildBot
d23285b4ba Add new index.html and changelog yaml files from dbt-docs (#6112) 2022-11-02 08:36:56 -07:00
Michelle Ark
a42748433d converting 023_exit_codes_tests (#6105)
* converting 023_exit_codes_tests

* use packages fixture, clean up test names
2022-11-01 16:26:12 -04:00
Emily Rockman
be4a91a0fe Convert messages to struct logs (#6064)
* Initial structured logging changes

* remove "this" from core/dbt/events/functions.py

* CT-1047: Fix execution_time definitions to use float

* CT-1047: Revert unintended checking of changes to functions.py

* WIP

* first pass to resolve circular deps

* more circular dep resolution

* remove a bunch of duplication

* move message into log line

* update comments

* fix field that went missing during rebase

* remove double import

* remove some comments and extra code

* fix pre-commit

* rework deprecations

* WIP converting messages

* WIP converting messages

* remove stray comment

* WIP more message conversion

* WIP more message conversion

* tweak the messages

* convert last message

* rename

* remove warn_or_raise as never used

* add fake calls to all new events

* fix some tests

* put back deprecation

* restore deprecation fully

* fix unit test

* fix log levels

* remove some skipped ids

* fix macro log function

* fix how messages are built to match expected outcome

* fix expected test message

* small fixes from reviews

* fix conflict resolution in UI

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
Co-authored-by: Peter Allen Webb <peter.webb@dbtlabs.com>
2022-10-31 12:04:56 -05:00
Emily Rockman
8145eed603 revert to community action (#6163) 2022-10-27 16:10:58 -05:00
Emily Rockman
fc00239f36 point to correct workflow (#6161)
* point to correct workflow

* add inputs
2022-10-27 14:05:09 -05:00
Ian Knox
77dfec7214 more ergonomic profile name handling (#6157) 2022-10-27 10:49:27 -05:00
Emily Rockman
7b73264ec8 switch out to use internal action for triage labels (#6120)
* switch out to use our action

* point to main
2022-10-27 08:33:15 -05:00
Mila Page
1916784287 Ct 1167/030 statement tests conversion (#6109)
* Convert test to functional set.

* Remove old statement tests from integration test set.

* Nix whitespace

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-10-26 03:37:44 -07:00
Ian Knox
c2856017a1 [BUGFIX] Force tox to update pip (fixes psycopg2-binary @ 2.9.5) (#6134) 2022-10-25 13:01:38 -05:00
Michelle Ark
17b82661d2 convert 027 cycle test (#6094)
* convert 027 cycle test

* remove no-op expect_pass=False

* remove postgres from test names
2022-10-21 11:41:51 -04:00
Michelle Ark
6c8609499a Add 'michelleark' to changie's core_team list (#6084) 2022-10-20 14:41:41 -04:00
Peter Webb
53ae325576 CT-1099: Migrate test 071_commented_yaml_regression_3568_tests (#6106) 2022-10-20 12:43:30 -04:00
Mila Page
a7670a3ab9 Add unit tests for recent stringifier functors added to events library. (#6095)
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-10-19 22:52:32 -07:00
Mila Page
ff2f1f42c3 Working solution serialization bug. (#5874)
* Create functors to initialize event types with str-type member attributes. Before this change, the spec of various classes expected base_msg and msg params to be strs; this assumption did not always hold true. post_init hooks ensure the spec is obeyed.
* Add new changelog.
* Add msg type change functor to a few other events that could use it.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2022-10-18 12:20:30 -07:00
Luke Bassett
35f7975d8f Updated string formatting on non-f-strings. (#6086)
* Updated string formatting on non-f-strings.

Found all cases of string literals separated by whitespace on a single line and
removed the whitespace separation. EX: "hello " "world" -> "hello world".

* add changelog entry
2022-10-17 15:58:31 -05:00
Eve Johns
a9c8bc0e0a f-string cleanup #6068 (#6082)
* fix f string issue

* removed one space

* Add changelog

* fixed return format

Co-authored-by: Leah Antkiewicz <leah.antkiewicz@fishtownanalytics.com>
2022-10-17 16:58:04 -04:00
Peter Webb
73aebd8159 CT-625: Fail with clear message for invalid materialized vals (#6025)
* CT-625: Fail with clear message for invalid materialized vals

* CT-625: Increase test coverage, run pre-commit checks

* CT-625: run black on problem file

* CT-625: Add changelog entry

* CT-625: Remove test that didn't make sense
2022-10-14 16:14:32 -04:00
Gerda Shank
9b84b6e2e8 Initial structured logging changes (#5954)
Co-authored-by: Peter Allen Webb <peter.webb@dbtlabs.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2022-10-14 13:57:46 -04:00
colin-rogers-dbt
095997913e migrate 034 integration test to functional (#6054)
* migrate 034 integration test to functional

* formatting fixes

* move to adapter directory

* add extra args
2022-10-12 15:24:51 -07:00
pgoslatara
6de1d29cf9 Allowing partitions in external table to be a list (#5930)
* Allowing partitions in external table to be a list

* Adding changelog entry
2022-10-12 15:23:58 -07:00
311 changed files with 34226 additions and 5905 deletions

View File

@@ -0,0 +1,7 @@
kind: "Dependency"
body: "Update pathspec requirement from ~=0.9.0 to >=0.9,<0.11 in /core"
time: 2022-09-23T00:06:46.00000Z
custom:
Author: dependabot[bot]
Issue: 4904
PR: 5917

View File

@@ -0,0 +1,7 @@
kind: "Dependency"
body: "Bump mashumaro[msgpack] from 3.0.4 to 3.1.1 in /core"
time: 2022-10-20T00:07:53.00000Z
custom:
Author: dependabot[bot]
Issue: 4904
PR: 6108

View File

@@ -0,0 +1,7 @@
kind: "Dependency"
body: "Update colorama requirement from <0.4.6,>=0.3.9 to >=0.3.9,<0.4.7 in /core"
time: 2022-10-26T00:09:10.00000Z
custom:
Author: dependabot[bot]
Issue: 4904
PR: 6144

View File

@@ -0,0 +1,6 @@
kind: Docs
time: 2022-10-17T17:14:11.715348-05:00
custom:
Author: paulbenschmidt
Issue: "5880"
PR: "324"

View File

@@ -0,0 +1,7 @@
kind: Docs
body: Fix rendering of sample code for metrics
time: 2022-11-16T15:57:43.204201+01:00
custom:
Author: jtcohen6
Issue: "323"
PR: "346"

View File

@@ -0,0 +1,8 @@
kind: Features
body: Added favor-state flag to optionally favor state nodes even if unselected node
exists
time: 2022-04-08T16:54:59.696564+01:00
custom:
Author: daniel-murray josephberni
Issue: "2968"
PR: "5859"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Proto logging messages
time: 2022-08-17T15:48:57.225267-04:00
custom:
Author: gshank
Issue: "5610"
PR: "5643"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Allow partitions in external tables to be supplied as a list
time: 2022-09-25T21:16:51.051239654+02:00
custom:
Author: pgoslatara
Issue: "5929"
PR: "5930"

View File

@@ -0,0 +1,8 @@
kind: Features
body: This pulls the profile name from args when constructing a RuntimeConfig in lib.py,
enabling the dbt-server to override the value that's in the dbt_project.yml
time: 2022-11-02T15:00:03.000805-05:00
custom:
Author: racheldaniel
Issue: "6201"
PR: "6202"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Added an md5 function to the base context
time: 2022-11-14T18:52:07.788593+02:00
custom:
Author: haritamar
Issue: "6246"
PR: "6247"

View File

@@ -0,0 +1,7 @@
kind: Features
body: Exposures support metrics in lineage
time: 2022-11-30T11:29:13.256034-05:00
custom:
Author: michelleark
Issue: "6057"
PR: "6342"

View File

@@ -0,0 +1,8 @@
kind: Fixes
body: Add functors to ensure event types with str-type attributes are initialized
to spec, even when provided non-str type params.
time: 2022-10-16T17:37:42.846683-07:00
custom:
Author: versusfacit
Issue: "5436"
PR: "5874"

View File

@@ -0,0 +1,7 @@
kind: Fixes
body: Allow hooks to fail without halting execution flow
time: 2022-11-07T09:53:14.340257-06:00
custom:
Author: ChenyuLInx
Issue: "5625"
PR: "6059"

View File

@@ -0,0 +1,7 @@
kind: Fixes
body: Clarify Error Message for how many models are allowed in a Python file
time: 2022-11-15T08:10:21.527884-05:00
custom:
Author: justbldwn
Issue: "6245"
PR: "6251"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Provide useful errors when the value of 'materialized' is invalid
time: 2022-10-13T18:19:12.167548-04:00
custom:
Author: peterallenwebb
Issue: "5229"
PR: "6025"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Fixed extra whitespace in strings introduced by black.
time: 2022-10-17T15:15:11.499246-05:00
custom:
Author: luke-bassett
Issue: "1350"
PR: "6086"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Clean up string formatting
time: 2022-10-17T15:58:44.676549-04:00
custom:
Author: eve-johns
Issue: "6068"
PR: "6082"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Remove the 'root_path' field from most nodes
time: 2022-10-28T10:48:37.687886-04:00
custom:
Author: gshank
Issue: "6171"
PR: "6172"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Combine certain logging events with different levels
time: 2022-10-28T11:03:44.887836-04:00
custom:
Author: gshank
Issue: "6173"
PR: "6174"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Convert threading tests to pytest
time: 2022-11-08T07:45:50.589147-06:00
custom:
Author: stu-k
Issue: "5942"
PR: "6226"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Convert postgres index tests to pytest
time: 2022-11-08T11:56:33.743042-06:00
custom:
Author: stu-k
Issue: "5770"
PR: "6228"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Convert use color tests to pytest
time: 2022-11-08T13:31:04.788547-06:00
custom:
Author: stu-k
Issue: "5771"
PR: "6230"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Add github actions workflow to generate high level CLI API docs
time: 2022-11-16T13:00:37.916202-06:00
custom:
Author: stu-k
Issue: "5942"
PR: "6187"

View File

@@ -44,7 +44,7 @@ custom:
footerFormat: |
{{- $contributorDict := dict }}
{{- /* any names added to this list should be all lowercase for later matching purposes */}}
{{- $core_team := list "peterallenwebb" "emmyoop" "nathaniel-may" "gshank" "leahwicz" "chenyulinx" "stu-k" "iknox-fa" "versusfacit" "mcknight-42" "jtcohen6" "dependabot[bot]" "snyk-bot" "colin-rogers-dbt" }}
{{- $core_team := list "michelleark" "peterallenwebb" "emmyoop" "nathaniel-may" "gshank" "leahwicz" "chenyulinx" "stu-k" "iknox-fa" "versusfacit" "mcknight-42" "jtcohen6" "dependabot[bot]" "snyk-bot" "colin-rogers-dbt" }}
{{- range $change := .Changes }}
{{- $authorList := splitList " " $change.Custom.Author }}
{{- /* loop through all authors for a PR */}}

View File

@@ -0,0 +1,166 @@
# **what?**
# On push, if anything in core/dbt/docs or core/dbt/cli has been
# created or modified, regenerate the CLI API docs using sphinx.
# **why?**
# We watch for changes in core/dbt/cli because the CLI API docs rely on click
# and all supporting flags/params to be generated. We watch for changes in
# core/dbt/docs since any changes to sphinx configuration or any of the
# .rst files there could result in a differently built final index.html file.
# **when?**
# Whenever a change has been pushed to a branch, and only if there is a diff
# between the PR branch and main's core/dbt/cli and/or core/dbt/docs dirs.
# TODO: add bot comment to PR informing contributor that the docs have been committed
# TODO: figure out why github action triggered pushes cause github to fail to report
# the status of jobs
name: Generate CLI API docs
on:
pull_request:
permissions:
contents: write
pull-requests: write
env:
CLI_DIR: ${{ github.workspace }}/core/dbt/cli
DOCS_DIR: ${{ github.workspace }}/core/dbt/docs
DOCS_BUILD_DIR: ${{ github.workspace }}/core/dbt/docs/build
jobs:
check_gen:
name: check if generation needed
runs-on: ubuntu-latest
outputs:
cli_dir_changed: ${{ steps.check_cli.outputs.cli_dir_changed }}
docs_dir_changed: ${{ steps.check_docs.outputs.docs_dir_changed }}
steps:
- name: "[DEBUG] print variables"
run: |
echo "env.CLI_DIR: ${{ env.CLI_DIR }}"
echo "env.DOCS_BUILD_DIR: ${{ env.DOCS_BUILD_DIR }}"
echo "env.DOCS_DIR: ${{ env.DOCS_DIR }}"
echo ">>>>> git log"
git log --pretty=oneline | head -5
- name: git checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.head_ref }}
- name: set shas
id: set_shas
run: |
THIS_SHA=$(git rev-parse @)
LAST_SHA=$(git rev-parse @~1)
echo "this sha: $THIS_SHA"
echo "last sha: $LAST_SHA"
echo "this_sha=$THIS_SHA" >> $GITHUB_OUTPUT
echo "last_sha=$LAST_SHA" >> $GITHUB_OUTPUT
- name: check for changes in core/dbt/cli
id: check_cli
run: |
CLI_DIR_CHANGES=$(git diff \
${{ steps.set_shas.outputs.last_sha }} \
${{ steps.set_shas.outputs.this_sha }} \
-- ${{ env.CLI_DIR }})
if [ -n "$CLI_DIR_CHANGES" ]; then
echo "changes found"
echo $CLI_DIR_CHANGES
echo "cli_dir_changed=true" >> $GITHUB_OUTPUT
exit 0
fi
echo "cli_dir_changed=false" >> $GITHUB_OUTPUT
echo "no changes found"
- name: check for changes in core/dbt/docs
id: check_docs
if: steps.check_cli.outputs.cli_dir_changed == 'false'
run: |
DOCS_DIR_CHANGES=$(git diff --name-only \
${{ steps.set_shas.outputs.last_sha }} \
${{ steps.set_shas.outputs.this_sha }} \
-- ${{ env.DOCS_DIR }} ':!${{ env.DOCS_BUILD_DIR }}')
DOCS_BUILD_DIR_CHANGES=$(git diff --name-only \
${{ steps.set_shas.outputs.last_sha }} \
${{ steps.set_shas.outputs.this_sha }} \
-- ${{ env.DOCS_BUILD_DIR }})
if [ -n "$DOCS_DIR_CHANGES" ] && [ -z "$DOCS_BUILD_DIR_CHANGES" ]; then
echo "changes found"
echo $DOCS_DIR_CHANGES
echo "docs_dir_changed=true" >> $GITHUB_OUTPUT
exit 0
fi
echo "docs_dir_changed=false" >> $GITHUB_OUTPUT
echo "no changes found"
gen_docs:
name: generate docs
runs-on: ubuntu-latest
needs: [check_gen]
if: |
needs.check_gen.outputs.cli_dir_changed == 'true'
|| needs.check_gen.outputs.docs_dir_changed == 'true'
steps:
- name: "[DEBUG] print variables"
run: |
echo "env.DOCS_DIR: ${{ env.DOCS_DIR }}"
echo "github head_ref: ${{ github.head_ref }}"
- name: git checkout
uses: actions/checkout@v3
with:
ref: ${{ github.head_ref }}
- name: install python
uses: actions/setup-python@v4.3.0
with:
python-version: 3.8
- name: install dev requirements
run: |
python3 -m venv env
source env/bin/activate
python -m pip install --upgrade pip
pip install -r requirements.txt -r dev-requirements.txt
- name: generate docs
run: |
source env/bin/activate
cd ${{ env.DOCS_DIR }}
echo "cleaning existing docs"
make clean
echo "creating docs"
make html
- name: debug
run: |
echo ">>>>> status"
git status
echo ">>>>> remotes"
git remote -v
echo ">>>>> branch"
git branch -v
echo ">>>>> log"
git log --pretty=oneline | head -5
- name: commit docs
run: |
git config user.name 'Github Build Bot'
git config user.email 'buildbot@fishtownanalytics.com'
git commit -am "Add generated CLI API docs"
git push -u origin ${{ github.head_ref }}
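
The check_gen job above boils down to one question: did the last push touch a watched directory? A minimal Python sketch of that same check (hypothetical, not part of the workflow; it assumes a git checkout with at least two commits, mirroring the workflow's fetch-depth and rev-parse steps):

import subprocess

def dir_changed(path: str) -> bool:
    # Compare HEAD against its parent, restricted to `path`,
    # just as the workflow diffs this_sha against last_sha.
    result = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD", "--", path],
        capture_output=True, text=True, check=True,
    )
    return bool(result.stdout.strip())

print(dir_changed("core/dbt/cli"))   # cli_dir_changed
print(dir_changed("core/dbt/docs"))  # docs_dir_changed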

View File

@@ -45,7 +45,9 @@ jobs:
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
uses: actions/setup-python@v4.3.0
with:
python-version: '3.8'
- name: Install python dependencies
run: |
@@ -82,7 +84,7 @@ jobs:
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4.3.0
with:
python-version: ${{ matrix.python-version }}
@@ -117,7 +119,7 @@ jobs:
fail-fast: false
matrix:
python-version: ["3.7", "3.8", "3.9", "3.10"]
os: [ubuntu-latest]
os: [ubuntu-20.04]
include:
- python-version: 3.8
os: windows-latest
@@ -137,7 +139,7 @@ jobs:
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4.3.0
with:
python-version: ${{ matrix.python-version }}
@@ -190,9 +192,9 @@ jobs:
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
uses: actions/setup-python@v4.3.0
with:
python-version: 3.8
python-version: '3.8'
- name: Install python dependencies
run: |

View File

@@ -9,13 +9,4 @@ permissions:
jobs:
stale:
runs-on: ubuntu-latest
steps:
# pinned at v4 (https://github.com/actions/stale/releases/tag/v4.0.0)
- uses: actions/stale@cdf15f641adb27a71842045a94023bef6945e3aa
with:
stale-issue-message: "This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please remove the stale label or comment on the issue, or it will be closed in 7 days."
stale-pr-message: "This PR has been marked as Stale because it has been open for 180 days with no activity. If you would like the PR to remain open, please remove the stale label or comment on the PR, or it will be closed in 7 days."
close-issue-message: "Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest; add a comment to notify the maintainers."
# mark issues/PRs stale when they haven't seen activity in 180 days
days-before-stale: 180
uses: dbt-labs/actions/.github/workflows/stale-bot-matrix.yml@main

View File

@@ -22,7 +22,7 @@ jobs:
# run the performance measurements on the current or default branch
test-schema:
name: Test Log Schema
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
env:
# turns warnings into errors
RUSTFLAGS: "-D warnings"
@@ -46,12 +46,6 @@ jobs:
with:
python-version: "3.8"
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Install python dependencies
run: |
pip install --user --upgrade pip
@@ -69,10 +63,3 @@ jobs:
# we actually care if these pass, because the normal test run doesn't usually include many json log outputs
- name: Run integration tests
run: tox -e integration -- -nauto
# apply our schema tests to every log event from the previous step
# skips any output that isn't valid json
- uses: actions-rs/cargo@v1
with:
command: run
args: --manifest-path test/interop/log_parsing/Cargo.toml

.gitignore
View File

@@ -11,6 +11,7 @@ __pycache__/
env*/
dbt_env/
build/
!core/dbt/docs/build
develop-eggs/
dist/
downloads/

View File

@@ -2,7 +2,7 @@
# Eventually the hooks described here will be run as tests before merging each PR.
# TODO: remove global exclusion of tests when testing overhaul is complete
exclude: ^test/
exclude: ^(test/|core/dbt/docs/build/)
# Force all unspecified python hooks to run python 3.8
default_language_version:
@@ -30,7 +30,7 @@ repos:
args:
- "--check"
- "--diff"
- repo: https://gitlab.com/pycqa/flake8
- repo: https://github.com/pycqa/flake8
rev: 4.0.1
hooks:
- id: flake8

View File

@@ -2,6 +2,7 @@ import abc
import os
from time import sleep
import sys
import traceback
# multiprocessing.RLock is a function returning this type
from multiprocessing.synchronize import RLock
@@ -40,14 +41,15 @@ from dbt.events.functions import fire_event
from dbt.events.types import (
NewConnection,
ConnectionReused,
ConnectionLeftOpenInCleanup,
ConnectionLeftOpen,
ConnectionLeftOpen2,
ConnectionClosedInCleanup,
ConnectionClosed,
ConnectionClosed2,
Rollback,
RollbackFailed,
)
from dbt import flags
from dbt.utils import cast_to_str
SleepTime = Union[int, float] # As taken by time.sleep.
AdapterHandle = Any # Adapter connection handle objects can be any class.
@@ -304,9 +306,9 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
with self.lock:
for connection in self.thread_connections.values():
if connection.state not in {"closed", "init"}:
fire_event(ConnectionLeftOpen(conn_name=connection.name))
fire_event(ConnectionLeftOpenInCleanup(conn_name=cast_to_str(connection.name)))
else:
fire_event(ConnectionClosed(conn_name=connection.name))
fire_event(ConnectionClosedInCleanup(conn_name=cast_to_str(connection.name)))
self.close(connection)
# garbage collect these connections
@@ -332,17 +334,21 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
try:
connection.handle.rollback()
except Exception:
fire_event(RollbackFailed(conn_name=connection.name))
fire_event(
RollbackFailed(
conn_name=cast_to_str(connection.name), exc_info=traceback.format_exc()
)
)
@classmethod
def _close_handle(cls, connection: Connection) -> None:
"""Perform the actual close operation."""
# On windows, sometimes connection handles don't have a close() attr.
if hasattr(connection.handle, "close"):
fire_event(ConnectionClosed2(conn_name=connection.name))
fire_event(ConnectionClosed(conn_name=cast_to_str(connection.name)))
connection.handle.close()
else:
fire_event(ConnectionLeftOpen2(conn_name=connection.name))
fire_event(ConnectionLeftOpen(conn_name=cast_to_str(connection.name)))
@classmethod
def _rollback(cls, connection: Connection) -> None:
@@ -353,7 +359,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
f'"{connection.name}", but it does not have one open!'
)
fire_event(Rollback(conn_name=connection.name))
fire_event(Rollback(conn_name=cast_to_str(connection.name)))
cls._rollback_handle(connection)
connection.transaction_open = False
@@ -365,7 +371,7 @@ class BaseConnectionManager(metaclass=abc.ABCMeta):
return connection
if connection.transaction_open and connection.handle:
fire_event(Rollback(conn_name=connection.name))
fire_event(Rollback(conn_name=cast_to_str(connection.name)))
cls._rollback_handle(connection)
connection.transaction_open = False
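
Several call sites above wrap connection.name in cast_to_str because connection names can be None, while the new proto-backed event fields require concrete strings. A minimal sketch of what such a helper has to do (the real one lives in dbt.utils and may differ in detail):

from typing import Optional

def cast_to_str(string: Optional[str]) -> str:
    # Proto message fields cannot carry None, so map it to "".
    if string is None:
        return ""
    return str(string)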

View File

@@ -41,15 +41,15 @@ from dbt.clients.jinja import MacroGenerator
from dbt.contracts.graph.compiled import CompileResultNode, CompiledSeedNode
from dbt.contracts.graph.manifest import Manifest, MacroManifest
from dbt.contracts.graph.parsed import ParsedSeedNode
from dbt.exceptions import warn_or_error
from dbt.events.functions import fire_event
from dbt.events.functions import fire_event, warn_or_error
from dbt.events.types import (
CacheMiss,
ListRelations,
CodeExecution,
CodeExecutionStatus,
CatalogGenerationError,
)
from dbt.utils import filter_null_values, executor
from dbt.utils import filter_null_values, executor, cast_to_str
from dbt.adapters.base.connections import Connection, AdapterResponse
from dbt.adapters.base.meta import AdapterMeta, available
@@ -61,7 +61,7 @@ from dbt.adapters.base.relation import (
)
from dbt.adapters.base import Column as BaseColumn
from dbt.adapters.base import Credentials
from dbt.adapters.cache import RelationsCache, _make_key
from dbt.adapters.cache import RelationsCache, _make_ref_key_msg
SeedModel = Union[ParsedSeedNode, CompiledSeedNode]
@@ -343,7 +343,7 @@ class BaseAdapter(metaclass=AdapterMeta):
fire_event(
CacheMiss(
conn_name=self.nice_connection_name(),
database=database,
database=cast_to_str(database),
schema=schema,
)
)
@@ -581,7 +581,7 @@ class BaseAdapter(metaclass=AdapterMeta):
:rtype: List[self.Relation]
"""
raise NotImplementedException(
"`list_relations_without_caching` is not implemented for this " "adapter!"
"`list_relations_without_caching` is not implemented for this adapter!"
)
###
@@ -726,9 +726,9 @@ class BaseAdapter(metaclass=AdapterMeta):
relations = self.list_relations_without_caching(schema_relation)
fire_event(
ListRelations(
database=database,
database=cast_to_str(database),
schema=schema,
relations=[_make_key(x) for x in relations],
relations=[_make_ref_key_msg(x) for x in relations],
)
)
@@ -1327,7 +1327,7 @@ def catch_as_completed(
elif isinstance(exc, KeyboardInterrupt) or not isinstance(exc, Exception):
raise exc
else:
warn_or_error(f"Encountered an error while generating catalog: {str(exc)}")
warn_or_error(CatalogGenerationError(exc=str(exc)))
# exc is not None, derives from Exception, and isn't ctrl+c
exceptions.append(exc)
return merge_tables(tables), exceptions
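
Note the shift in warn_or_error above: it now comes from dbt.events.functions and takes a structured event (CatalogGenerationError) rather than a formatted string. A rough sketch of the expected behavior, assuming events render their text via a message()-style method; the exception type here is purely illustrative:

from dbt import flags
from dbt.events.functions import fire_event

def warn_or_error(event) -> None:
    if flags.WARN_ERROR:
        # Under --warn-error, escalate the warning into a failure.
        # Concrete exception type and message() are assumptions of this sketch.
        raise RuntimeError(event.message())
    fire_event(event)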

View File

@@ -3,9 +3,14 @@ import threading
from copy import deepcopy
from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
from dbt.adapters.reference_keys import _make_key, _ReferenceKey
from dbt.adapters.reference_keys import (
_make_ref_key,
_make_ref_key_msg,
_make_msg_from_ref_key,
_ReferenceKey,
)
import dbt.exceptions
from dbt.events.functions import fire_event
from dbt.events.functions import fire_event, fire_event_if
from dbt.events.types import (
AddLink,
AddRelation,
@@ -21,8 +26,8 @@ from dbt.events.types import (
UncachedRelation,
UpdateReference,
)
import dbt.flags as flags
from dbt.utils import lowercase
from dbt.helper_types import Lazy
def dot_separated(key: _ReferenceKey) -> str:
@@ -82,7 +87,7 @@ class _CachedRelation:
:return _ReferenceKey: A key for this relation.
"""
return _make_key(self)
return _make_ref_key(self)
def add_reference(self, referrer: "_CachedRelation"):
"""Add a reference from referrer to self, indicating that if this node
@@ -294,13 +299,18 @@ class RelationsCache:
:param BaseRelation dependent: The dependent model.
:raises InternalError: If either entry does not exist.
"""
ref_key = _make_key(referenced)
dep_key = _make_key(dependent)
ref_key = _make_ref_key(referenced)
dep_key = _make_ref_key(dependent)
if (ref_key.database, ref_key.schema) not in self:
# if we have not cached the referenced schema at all, we must be
# referring to a table outside our control. There's no need to make
# a link - we will never drop the referenced relation during a run.
fire_event(UncachedRelation(dep_key=dep_key, ref_key=ref_key))
fire_event(
UncachedRelation(
dep_key=_make_msg_from_ref_key(dep_key),
ref_key=_make_msg_from_ref_key(ref_key),
)
)
return
if ref_key not in self.relations:
# Insert a dummy "external" relation.
@@ -310,7 +320,11 @@ class RelationsCache:
# Insert a dummy "external" relation.
dependent = dependent.replace(type=referenced.External)
self.add(dependent)
fire_event(AddLink(dep_key=dep_key, ref_key=ref_key))
fire_event(
AddLink(
dep_key=_make_msg_from_ref_key(dep_key), ref_key=_make_msg_from_ref_key(ref_key)
)
)
with self.lock:
self._add_link(ref_key, dep_key)
@@ -321,12 +335,12 @@ class RelationsCache:
:param BaseRelation relation: The underlying relation.
"""
cached = _CachedRelation(relation)
fire_event(AddRelation(relation=_make_key(cached)))
fire_event(DumpBeforeAddGraph(dump=Lazy.defer(lambda: self.dump_graph())))
fire_event(AddRelation(relation=_make_ref_key_msg(cached)))
fire_event_if(flags.LOG_CACHE_EVENTS, lambda: DumpBeforeAddGraph(dump=self.dump_graph()))
with self.lock:
self._setdefault(cached)
fire_event(DumpAfterAddGraph(dump=Lazy.defer(lambda: self.dump_graph())))
fire_event_if(flags.LOG_CACHE_EVENTS, lambda: DumpAfterAddGraph(dump=self.dump_graph()))
def _remove_refs(self, keys):
"""Removes all references to all entries in keys. This does not
@@ -341,19 +355,6 @@ class RelationsCache:
for cached in self.relations.values():
cached.release_references(keys)
def _drop_cascade_relation(self, dropped_key):
"""Drop the given relation and cascade it appropriately to all
dependent relations.
:param _CachedRelation dropped: An existing _CachedRelation to drop.
"""
if dropped_key not in self.relations:
fire_event(DropMissingRelation(relation=dropped_key))
return
consequences = self.relations[dropped_key].collect_consequences()
fire_event(DropCascade(dropped=dropped_key, consequences=consequences))
self._remove_refs(consequences)
def drop(self, relation):
"""Drop the named relation and cascade it appropriately to all
dependent relations.
@@ -365,10 +366,19 @@ class RelationsCache:
:param str schema: The schema of the relation to drop.
:param str identifier: The identifier of the relation to drop.
"""
dropped_key = _make_key(relation)
fire_event(DropRelation(dropped=dropped_key))
dropped_key = _make_ref_key(relation)
dropped_key_msg = _make_ref_key_msg(relation)
fire_event(DropRelation(dropped=dropped_key_msg))
with self.lock:
self._drop_cascade_relation(dropped_key)
if dropped_key not in self.relations:
fire_event(DropMissingRelation(relation=dropped_key_msg))
return
consequences = self.relations[dropped_key].collect_consequences()
# convert from a list of _ReferenceKeys to a list of ReferenceKeyMsgs
consequence_msgs = [_make_msg_from_ref_key(key) for key in consequences]
fire_event(DropCascade(dropped=dropped_key_msg, consequences=consequence_msgs))
self._remove_refs(consequences)
def _rename_relation(self, old_key, new_relation):
"""Rename a relation named old_key to new_key, updating references.
@@ -390,7 +400,11 @@ class RelationsCache:
for cached in self.relations.values():
if cached.is_referenced_by(old_key):
fire_event(
UpdateReference(old_key=old_key, new_key=new_key, cached_key=cached.key())
UpdateReference(
old_key=_make_ref_key_msg(old_key),
new_key=_make_ref_key_msg(new_key),
cached_key=_make_ref_key_msg(cached.key()),
)
)
cached.rename_key(old_key, new_key)
@@ -436,7 +450,7 @@ class RelationsCache:
)
if old_key not in self.relations:
fire_event(TemporaryRelation(key=old_key))
fire_event(TemporaryRelation(key=_make_msg_from_ref_key(old_key)))
return False
return True
@@ -452,11 +466,17 @@ class RelationsCache:
:param BaseRelation new: The new relation name information.
:raises InternalError: If the new key is already present.
"""
old_key = _make_key(old)
new_key = _make_key(new)
fire_event(RenameSchema(old_key=old_key, new_key=new_key))
old_key = _make_ref_key(old)
new_key = _make_ref_key(new)
fire_event(
RenameSchema(
old_key=_make_msg_from_ref_key(old_key), new_key=_make_msg_from_ref_key(new)
)
)
fire_event(DumpBeforeRenameSchema(dump=Lazy.defer(lambda: self.dump_graph())))
fire_event_if(
flags.LOG_CACHE_EVENTS, lambda: DumpBeforeRenameSchema(dump=self.dump_graph())
)
with self.lock:
if self._check_rename_constraints(old_key, new_key):
@@ -464,7 +484,9 @@ class RelationsCache:
else:
self._setdefault(_CachedRelation(new))
fire_event(DumpAfterRenameSchema(dump=Lazy.defer(lambda: self.dump_graph())))
fire_event_if(
flags.LOG_CACHE_EVENTS, lambda: DumpAfterRenameSchema(dump=self.dump_graph())
)
def get_relations(self, database: Optional[str], schema: Optional[str]) -> List[Any]:
"""Case-insensitively yield all relations matching the given schema.
@@ -512,6 +534,6 @@ class RelationsCache:
"""
for relation in to_remove:
# it may have been cascaded out already
drop_key = _make_key(relation)
drop_key = _make_ref_key(relation)
if drop_key in self.relations:
self.drop(drop_key)
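
The DumpBeforeAddGraph/DumpAfterAddGraph calls above replace the earlier Lazy.defer pattern with fire_event_if plus a lambda: the event, and its expensive dump_graph() payload, is only constructed when LOG_CACHE_EVENTS is set. A minimal sketch, assuming that laziness is the helper's whole contract:

from typing import Callable
from dbt.events.functions import fire_event  # sketch; the real helper lives beside fire_event

def fire_event_if(conditional: bool, lazy_e: Callable[[], object]) -> None:
    # Build the event only when it will actually be fired, so callers can
    # hide costly payloads (e.g. dump_graph()) behind the flag check.
    if conditional:
        fire_event(lazy_e())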

View File

@@ -1,4 +1,5 @@
import threading
import traceback
from contextlib import contextmanager
from importlib import import_module
from pathlib import Path
@@ -63,7 +64,7 @@ class AdapterContainer:
# otherwise, the error had to have come from some underlying
# library. Log the stack trace.
fire_event(PluginLoadError())
fire_event(PluginLoadError(exc_info=traceback.format_exc()))
raise
plugin: AdapterPlugin = mod.Plugin
plugin_type = plugin.adapter.type()

View File

@@ -2,6 +2,7 @@
from collections import namedtuple
from typing import Any, Optional
from dbt.events.proto_types import ReferenceKeyMsg
_ReferenceKey = namedtuple("_ReferenceKey", "database schema identifier")
@@ -14,7 +15,12 @@ def lowercase(value: Optional[str]) -> Optional[str]:
return value.lower()
# For backwards compatibility. New code should use _make_ref_key
def _make_key(relation: Any) -> _ReferenceKey:
return _make_ref_key(relation)
def _make_ref_key(relation: Any) -> _ReferenceKey:
"""Make _ReferenceKeys with lowercase values for the cache so we don't have
to keep track of quoting
"""
@@ -22,3 +28,13 @@ def _make_key(relation: Any) -> _ReferenceKey:
return _ReferenceKey(
lowercase(relation.database), lowercase(relation.schema), lowercase(relation.identifier)
)
def _make_ref_key_msg(relation: Any):
return _make_msg_from_ref_key(_make_ref_key(relation))
def _make_msg_from_ref_key(ref_key: _ReferenceKey) -> ReferenceKeyMsg:
return ReferenceKeyMsg(
database=ref_key.database, schema=ref_key.schema, identifier=ref_key.identifier
)
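
Taken together, the helpers above give two shapes for the same key: a lowercased _ReferenceKey for cache lookups, and a ReferenceKeyMsg for structured log payloads. An illustrative use with a stand-in relation (any object exposing database/schema/identifier attributes works; FakeRelation is hypothetical):

from collections import namedtuple

FakeRelation = namedtuple("FakeRelation", "database schema identifier")

rel = FakeRelation("Analytics", "Jaffle_Shop", "Customers")
key = _make_ref_key(rel)
# _ReferenceKey(database='analytics', schema='jaffle_shop', identifier='customers')
msg = _make_msg_from_ref_key(key)  # ReferenceKeyMsg for event payloads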

View File

@@ -10,6 +10,7 @@ from dbt.adapters.base import BaseConnectionManager
from dbt.contracts.connection import Connection, ConnectionState, AdapterResponse
from dbt.events.functions import fire_event
from dbt.events.types import ConnectionUsed, SQLQuery, SQLCommit, SQLQueryStatus
from dbt.utils import cast_to_str
class SQLConnectionManager(BaseConnectionManager):
@@ -55,7 +56,7 @@ class SQLConnectionManager(BaseConnectionManager):
connection = self.get_thread_connection()
if auto_begin and connection.transaction_open is False:
self.begin()
fire_event(ConnectionUsed(conn_type=self.TYPE, conn_name=connection.name))
fire_event(ConnectionUsed(conn_type=self.TYPE, conn_name=cast_to_str(connection.name)))
with self.exception_handler(sql):
if abridge_sql_log:
@@ -63,7 +64,7 @@ class SQLConnectionManager(BaseConnectionManager):
else:
log_sql = sql
fire_event(SQLQuery(conn_name=connection.name, sql=log_sql))
fire_event(SQLQuery(conn_name=cast_to_str(connection.name), sql=log_sql))
pre = time.time()
cursor = connection.handle.cursor()

View File

@@ -5,7 +5,7 @@ import dbt.clients.agate_helper
from dbt.contracts.connection import Connection
import dbt.exceptions
from dbt.adapters.base import BaseAdapter, available
from dbt.adapters.cache import _make_key
from dbt.adapters.cache import _make_ref_key_msg
from dbt.adapters.sql import SQLConnectionManager
from dbt.events.functions import fire_event
from dbt.events.types import ColTypeChange, SchemaCreation, SchemaDrop
@@ -110,7 +110,7 @@ class SQLAdapter(BaseAdapter):
ColTypeChange(
orig_type=target_column.data_type,
new_type=new_type,
table=_make_key(current),
table=_make_ref_key_msg(current),
)
)
@@ -155,7 +155,7 @@ class SQLAdapter(BaseAdapter):
def create_schema(self, relation: BaseRelation) -> None:
relation = relation.without_identifier()
fire_event(SchemaCreation(relation=_make_key(relation)))
fire_event(SchemaCreation(relation=_make_ref_key_msg(relation)))
kwargs = {
"relation": relation,
}
@@ -166,7 +166,7 @@ class SQLAdapter(BaseAdapter):
def drop_schema(self, relation: BaseRelation) -> None:
relation = relation.without_identifier()
fire_event(SchemaDrop(relation=_make_key(relation)))
fire_event(SchemaDrop(relation=_make_ref_key_msg(relation)))
kwargs = {
"relation": relation,
}

View File

@@ -367,9 +367,9 @@ class BlockIterator:
if self.current:
linecount = self.data[: self.current.end].count("\n") + 1
dbt.exceptions.raise_compiler_error(
(
"Reached EOF without finding a close tag for " "{} (searched from line {})"
).format(self.current.block_type_name, linecount)
("Reached EOF without finding a close tag for {} (searched from line {})").format(
self.current.block_type_name, linecount
)
)
if collect_raw_data:

View File

@@ -3,9 +3,9 @@ from typing import Any, Dict, List
import requests
from dbt.events.functions import fire_event
from dbt.events.types import (
RegistryProgressMakingGETRequest,
RegistryProgressGETRequest,
RegistryProgressGETResponse,
RegistryIndexProgressMakingGETRequest,
RegistryIndexProgressGETRequest,
RegistryIndexProgressGETResponse,
RegistryResponseUnexpectedType,
RegistryResponseMissingTopKeys,
@@ -38,7 +38,7 @@ def _get_with_retries(package_name, registry_base_url=None):
def _get(package_name, registry_base_url=None):
url = _get_url(package_name, registry_base_url)
fire_event(RegistryProgressMakingGETRequest(url=url))
fire_event(RegistryProgressGETRequest(url=url))
# all exceptions from requests get caught in the retry logic so no need to wrap this here
resp = requests.get(url, timeout=30)
fire_event(RegistryProgressGETResponse(url=url, resp_code=resp.status_code))
@@ -162,7 +162,7 @@ def get_compatible_versions(package_name, dbt_version, should_version_check) ->
def _get_index(registry_base_url=None):
url = _get_url("index", registry_base_url)
fire_event(RegistryIndexProgressMakingGETRequest(url=url))
fire_event(RegistryIndexProgressGETRequest(url=url))
# all exceptions from requests get caught in the retry logic so no need to wrap this here
resp = requests.get(url, timeout=30)
fire_event(RegistryIndexProgressGETResponse(url=url, resp_code=resp.status_code))

View File

@@ -56,6 +56,7 @@ def print_compile_stats(stats):
NodeType.Source: "source",
NodeType.Exposure: "exposure",
NodeType.Metric: "metric",
NodeType.Entity: "entity",
}
results = {k: 0 for k in names.keys()}
@@ -91,6 +92,8 @@ def _generate_stats(manifest: Manifest):
stats[exposure.resource_type] += 1
for metric in manifest.metrics.values():
stats[metric.resource_type] += 1
for entity in manifest.entities.values():
stats[entity.resource_type] += 1
for macro in manifest.macros.values():
stats[macro.resource_type] += 1
return stats

View File

@@ -248,7 +248,7 @@ class PartialProject(RenderComponents):
project_name: Optional[str] = field(
metadata=dict(
description=(
"The name of the project. This should always be set and will not " "be rendered"
"The name of the project. This should always be set and will not be rendered"
)
)
)
@@ -381,6 +381,7 @@ class PartialProject(RenderComponents):
sources: Dict[str, Any]
tests: Dict[str, Any]
metrics: Dict[str, Any]
entities: Dict[str, Any]
exposures: Dict[str, Any]
vars_value: VarProvider
@@ -391,6 +392,7 @@ class PartialProject(RenderComponents):
sources = cfg.sources
tests = cfg.tests
metrics = cfg.metrics
entities = cfg.entities
exposures = cfg.exposures
if cfg.vars is None:
vars_dict: Dict[str, Any] = {}
@@ -446,6 +448,7 @@ class PartialProject(RenderComponents):
sources=sources,
tests=tests,
metrics=metrics,
entities=entities,
exposures=exposures,
vars=vars_value,
config_version=cfg.config_version,
@@ -550,6 +553,7 @@ class Project:
sources: Dict[str, Any]
tests: Dict[str, Any]
metrics: Dict[str, Any]
entities: Dict[str, Any]
exposures: Dict[str, Any]
vars: VarProvider
dbt_version: List[VersionSpecifier]
@@ -624,6 +628,7 @@ class Project:
"sources": self.sources,
"tests": self.tests,
"metrics": self.metrics,
"entities": self.entities,
"exposures": self.exposures,
"vars": self.vars.to_dict(),
"require-dbt-version": [v.to_version_string() for v in self.dbt_version],
@@ -668,7 +673,7 @@ class Project:
def get_selector(self, name: str) -> Union[SelectionSpec, bool]:
if name not in self.selectors:
raise RuntimeException(
f"Could not find selector named {name}, expected one of " f"{list(self.selectors)}"
f"Could not find selector named {name}, expected one of {list(self.selectors)}"
)
return self.selectors[name]["definition"]

View File

@@ -3,31 +3,41 @@ import os
from copy import deepcopy
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, Any, Optional, Mapping, Iterator, Iterable, Tuple, List, MutableSet, Type
from typing import (
Any,
Dict,
Iterable,
Iterator,
Mapping,
MutableSet,
Optional,
Tuple,
Type,
Union,
)
from .profile import Profile
from .project import Project
from .renderer import DbtProjectYamlRenderer, ProfileRenderer
from .utils import parse_cli_vars
from dbt import flags
from dbt.adapters.factory import get_relation_class_by_name, get_include_paths
from dbt.helper_types import FQNPath, PathSet, DictDefaultEmptyStr
from dbt.adapters.factory import get_include_paths, get_relation_class_by_name
from dbt.config.profile import read_user_config
from dbt.contracts.connection import AdapterRequiredConfig, Credentials
from dbt.contracts.graph.manifest import ManifestMetadata
from dbt.contracts.relation import ComponentName
from dbt.ui import warning_tag
from dbt.contracts.project import Configuration, UserConfig
from dbt.exceptions import (
RuntimeException,
DbtProjectError,
validator_error_message,
warn_or_error,
raise_compiler_error,
)
from dbt.contracts.relation import ComponentName
from dbt.dataclass_schema import ValidationError
from dbt.exceptions import (
DbtProjectError,
RuntimeException,
raise_compiler_error,
validator_error_message,
)
from dbt.events.functions import warn_or_error
from dbt.events.types import UnusedResourceConfigPath
from dbt.helper_types import DictDefaultEmptyStr, FQNPath, PathSet
from .profile import Profile
from .project import Project, PartialProject
from .renderer import DbtProjectYamlRenderer, ProfileRenderer
from .utils import parse_cli_vars
def _project_quoting_dict(proj: Project, profile: Profile) -> Dict[ComponentName, bool]:
@@ -106,6 +116,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
sources=project.sources,
tests=project.tests,
metrics=project.metrics,
entities=project.entities,
exposures=project.exposures,
vars=project.vars,
config_version=project.config_version,
@@ -190,28 +201,52 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
@classmethod
def collect_parts(cls: Type["RuntimeConfig"], args: Any) -> Tuple[Project, Profile]:
# profile_name from the project
project_root = args.project_dir if args.project_dir else os.getcwd()
version_check = bool(flags.VERSION_CHECK)
partial = Project.partial_load(project_root, verify_version=version_check)
# build the profile using the base renderer and the one fact we know
# Note: only the named profile section is rendered. The rest of the
# profile is ignored.
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
profile = cls.collect_profile(args=args)
project_renderer = DbtProjectYamlRenderer(profile, cli_vars)
project = cls.collect_project(args=args, project_renderer=project_renderer)
assert type(project) is Project
return (project, profile)
@classmethod
def collect_profile(
cls: Type["RuntimeConfig"], args: Any, profile_name: Optional[str] = None
) -> Profile:
cli_vars: Dict[str, Any] = parse_cli_vars(getattr(args, "vars", "{}"))
profile_renderer = ProfileRenderer(cli_vars)
profile_name = partial.render_profile_name(profile_renderer)
# build the profile using the base renderer and the one fact we know
if profile_name is None:
# Note: only the named profile section is rendered here. The rest of the
# profile is ignored.
partial = cls.collect_project(args)
assert type(partial) is PartialProject
profile_name = partial.render_profile_name(profile_renderer)
profile = cls._get_rendered_profile(args, profile_renderer, profile_name)
# Save env_vars encountered in rendering for partial parsing
profile.profile_env_vars = profile_renderer.ctx_obj.env_vars
return profile
# get a new renderer using our target information and render the
# project
project_renderer = DbtProjectYamlRenderer(profile, cli_vars)
project = partial.render(project_renderer)
# Save env_vars encountered in rendering for partial parsing
project.project_env_vars = project_renderer.ctx_obj.env_vars
return (project, profile)
@classmethod
def collect_project(
cls: Type["RuntimeConfig"],
args: Any,
project_renderer: Optional[DbtProjectYamlRenderer] = None,
) -> Union[Project, PartialProject]:
project_root = args.project_dir if args.project_dir else os.getcwd()
version_check = bool(flags.VERSION_CHECK)
partial = Project.partial_load(project_root, verify_version=version_check)
if project_renderer is None:
return partial
else:
project = partial.render(project_renderer)
project.project_env_vars = project_renderer.ctx_obj.env_vars
return project
# Called in main.py, lib.py, task/base.py
@classmethod
@@ -277,14 +312,15 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
"sources": self._get_config_paths(self.sources),
"tests": self._get_config_paths(self.tests),
"metrics": self._get_config_paths(self.metrics),
"entities": self._get_config_paths(self.entities),
"exposures": self._get_config_paths(self.exposures),
}
def get_unused_resource_config_paths(
def warn_for_unused_resource_config_paths(
self,
resource_fqns: Mapping[str, PathSet],
disabled: PathSet,
) -> List[FQNPath]:
) -> None:
"""Return a list of lists of strings, where each inner list of strings
represents a type + FQN path of a resource configuration that is not
used.
@@ -298,23 +334,13 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
for config_path in config_paths:
if not _is_config_used(config_path, fqns):
unused_resource_config_paths.append((resource_type,) + config_path)
return unused_resource_config_paths
resource_path = ".".join(i for i in ((resource_type,) + config_path))
unused_resource_config_paths.append(resource_path)
def warn_for_unused_resource_config_paths(
self,
resource_fqns: Mapping[str, PathSet],
disabled: PathSet,
) -> None:
unused = self.get_unused_resource_config_paths(resource_fqns, disabled)
if len(unused) == 0:
if len(unused_resource_config_paths) == 0:
return
msg = UNUSED_RESOURCE_CONFIGURATION_PATH_MESSAGE.format(
len(unused), "\n".join("- {}".format(".".join(u)) for u in unused)
)
warn_or_error(msg, log_fmt=warning_tag("{}"))
warn_or_error(UnusedResourceConfigPath(unused_config_paths=unused_resource_config_paths))
def load_dependencies(self, base_only=False) -> Mapping[str, "RuntimeConfig"]:
if self.dependencies is None:
@@ -482,6 +508,7 @@ class UnsetProfileConfig(RuntimeConfig):
"sources": self.sources,
"tests": self.tests,
"metrics": self.metrics,
"entities": self.entities,
"exposures": self.exposures,
"vars": self.vars.to_dict(),
"require-dbt-version": [v.to_version_string() for v in self.dbt_version],
@@ -544,6 +571,7 @@ class UnsetProfileConfig(RuntimeConfig):
sources=project.sources,
tests=project.tests,
metrics=project.metrics,
entities=project.entities,
exposures=project.exposures,
vars=project.vars,
config_version=project.config_version,
@@ -591,14 +619,6 @@ class UnsetProfileConfig(RuntimeConfig):
return cls.from_parts(project=project, profile=profile, args=args)
UNUSED_RESOURCE_CONFIGURATION_PATH_MESSAGE = """\
Configuration paths exist in your dbt_project.yml file which do not \
apply to any resources.
There are {} unused configuration paths:
{}
"""
def _is_config_used(path, fqns):
if fqns:
for fqn in fqns:
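
The collect_parts refactor above splits profile and project collection into separately callable classmethods, which is what lets lib.py pass an explicit profile name. Per the new code, the equivalent of collect_parts is roughly:

cli_vars = parse_cli_vars(getattr(args, "vars", "{}"))
profile = RuntimeConfig.collect_profile(args=args)  # optionally takes profile_name
project_renderer = DbtProjectYamlRenderer(profile, cli_vars)
project = RuntimeConfig.collect_project(args=args, project_renderer=project_renderer)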

View File

@@ -1,2 +1,10 @@
SECRET_ENV_PREFIX = "DBT_ENV_SECRET_"
DEFAULT_ENV_PLACEHOLDER = "DBT_DEFAULT_PLACEHOLDER"
METADATA_ENV_PREFIX = "DBT_ENV_CUSTOM_ENV_"
MAXIMUM_SEED_SIZE = 1 * 1024 * 1024
MAXIMUM_SEED_SIZE_NAME = "1MB"
PIN_PACKAGE_URL = (
"https://docs.getdbt.com/docs/package-management#section-specifying-package-versions"
)

View File

@@ -4,6 +4,7 @@ from typing import Any, Dict, NoReturn, Optional, Mapping, Iterable, Set, List
from dbt import flags
from dbt import tracking
from dbt import utils
from dbt.clients.jinja import get_rendered
from dbt.clients.yaml_helper import yaml, safe_load, SafeLoader, Loader, Dumper # noqa: F401
from dbt.constants import SECRET_ENV_PREFIX, DEFAULT_ENV_PLACEHOLDER
@@ -16,7 +17,7 @@ from dbt.exceptions import (
disallow_secret_env_var,
)
from dbt.events.functions import fire_event, get_invocation_id
from dbt.events.types import MacroEventInfo, MacroEventDebug
from dbt.events.types import JinjaLogInfo, JinjaLogDebug
from dbt.version import __version__ as dbt_version
# These modules are added to the context. Consider alternative
@@ -126,7 +127,7 @@ class ContextMeta(type):
class Var:
UndefinedVarError = "Required var '{}' not found in config:\nVars " "supplied to {} = {}"
UndefinedVarError = "Required var '{}' not found in config:\nVars supplied to {} = {}"
_VAR_NOTSET = object()
def __init__(
@@ -557,9 +558,9 @@ class BaseContext(metaclass=ContextMeta):
{% endmacro %}"
"""
if info:
fire_event(MacroEventInfo(msg=msg))
fire_event(JinjaLogInfo(msg=msg))
else:
fire_event(MacroEventDebug(msg=msg))
fire_event(JinjaLogDebug(msg=msg))
return ""
@contextproperty
@@ -687,6 +688,19 @@ class BaseContext(metaclass=ContextMeta):
dict_diff.update({k: dict_a[k]})
return dict_diff
@contextmember
@staticmethod
def local_md5(value: str) -> str:
"""Calculates an MD5 hash of the given string.
It's called "local_md5" to emphasize that it runs locally in dbt (in jinja context) and not an MD5 SQL command.
:param value: The value to hash
Usage:
{% set value_hash = local_md5("hello world") %}
"""
return utils.md5(value)
def generate_base_context(cli_vars: Dict[str, Any]) -> Dict[str, Any]:
ctx = BaseContext(cli_vars)
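
local_md5 above delegates to utils.md5. A hedged guess at that helper, plus the documented usage (hashlib is an assumption; the real dbt.utils.md5 may differ):

import hashlib

def md5(value: str) -> str:
    # Assumed implementation: hex digest of the UTF-8 bytes.
    return hashlib.md5(value.encode("utf-8")).hexdigest()

# In a model or macro, per the docstring above:
#   {% set value_hash = local_md5("hello world") %}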

View File

@@ -45,6 +45,8 @@ class UnrenderedConfig(ConfigSource):
model_configs = unrendered.get("tests")
elif resource_type == NodeType.Metric:
model_configs = unrendered.get("metrics")
elif resource_type == NodeType.Entity:
model_configs = unrendered.get("entities")
elif resource_type == NodeType.Exposure:
model_configs = unrendered.get("exposures")
else:
@@ -70,6 +72,8 @@ class RenderedConfig(ConfigSource):
model_configs = self.project.tests
elif resource_type == NodeType.Metric:
model_configs = self.project.metrics
elif resource_type == NodeType.Entity:
model_configs = self.project.entities
elif resource_type == NodeType.Exposure:
model_configs = self.project.exposures
else:

View File

@@ -37,11 +37,12 @@ from dbt.contracts.graph.parsed import (
ParsedMacro,
ParsedExposure,
ParsedMetric,
ParsedEntity,
ParsedSeedNode,
ParsedSourceDefinition,
)
from dbt.contracts.graph.metrics import MetricReference, ResolvedMetricReference
from dbt.contracts.util import get_metadata_env
from dbt.events.functions import get_metadata_vars
from dbt.exceptions import (
CompilationException,
ParsingException,
@@ -53,7 +54,6 @@ from dbt.exceptions import (
raise_compiler_error,
ref_invalid_args,
metric_invalid_args,
ref_target_not_found,
target_not_found,
ref_bad_context,
wrapped_exports,
@@ -182,7 +182,7 @@ class BaseDatabaseWrapper:
return macro
searched = ", ".join(repr(a) for a in attempts)
msg = f"In dispatch: No macro named '{macro_name}' found\n" f" Searched for: {searched}"
msg = f"In dispatch: No macro named '{macro_name}' found\n Searched for: {searched}"
raise CompilationException(msg)
@@ -220,12 +220,12 @@ class BaseRefResolver(BaseResolver):
def validate_args(self, name: str, package: Optional[str]):
if not isinstance(name, str):
raise CompilationException(
f"The name argument to ref() must be a string, got " f"{type(name)}"
f"The name argument to ref() must be a string, got {type(name)}"
)
if package is not None and not isinstance(package, str):
raise CompilationException(
f"The package argument to ref() must be a string or None, got " f"{type(package)}"
f"The package argument to ref() must be a string or None, got {type(package)}"
)
def __call__(self, *args: str) -> RelationProxy:
@@ -302,12 +302,10 @@ class BaseMetricResolver(BaseResolver):
self.validate_args(name, package)
return self.resolve(name, package)
class Config(Protocol):
def __init__(self, model, context_config: Optional[ContextConfig]):
...
# Implementation of "config(..)" calls in models
class ParseConfigObject(Config):
def __init__(self, model, context_config: Optional[ContextConfig]):
@@ -476,10 +474,11 @@ class RuntimeRefResolver(BaseRefResolver):
)
if target_model is None or isinstance(target_model, Disabled):
ref_target_not_found(
self.model,
target_name,
target_package,
target_not_found(
node=self.model,
target_name=target_name,
target_kind="node",
target_package=target_package,
disabled=isinstance(target_model, Disabled),
)
self.validate(target_model, target_name, target_package)
@@ -713,7 +712,7 @@ class ProviderContext(ManifestContext):
@contextproperty
def dbt_metadata_envs(self) -> Dict[str, str]:
return get_metadata_env()
return get_metadata_vars()
@contextproperty
def invocation_args_dict(self):
@@ -803,6 +802,7 @@ class ProviderContext(ManifestContext):
raise_compiler_error(
"can only load_agate_table for seeds (got a {})".format(self.model.resource_type)
)
assert self.model.root_path
path = os.path.join(self.model.root_path, self.model.original_file_path)
column_types = self.model.config.column_types
try:
@@ -1434,6 +1434,14 @@ class ExposureSourceResolver(BaseResolver):
return ""
class ExposureMetricResolver(BaseResolver):
def __call__(self, *args) -> str:
if len(args) not in (1, 2):
metric_invalid_args(self.model, args)
self.model.metrics.append(list(args))
return ""
def generate_parse_exposure(
exposure: ParsedExposure,
config: RuntimeConfig,
@@ -1454,6 +1462,12 @@ def generate_parse_exposure(
project,
manifest,
),
"metric": ExposureMetricResolver(
None,
exposure,
project,
manifest,
),
}
@@ -1477,7 +1491,6 @@ class MetricRefResolver(BaseResolver):
"the name argument to ref() must be a string"
)
def generate_parse_metrics(
metric: ParsedMetric,
config: RuntimeConfig,
@@ -1500,6 +1513,41 @@ def generate_parse_metrics(
),
}
class EntityRefResolver(BaseResolver):
def __call__(self, *args) -> str:
package = None
if len(args) == 1:
name = args[0]
elif len(args) == 2:
package, name = args
else:
ref_invalid_args(self.model, args)
self.validate_args(name, package)
self.model.refs.append(list(args))
return ""
def validate_args(self, name, package):
if not isinstance(name, str):
raise ParsingException(
f"In the entity associated with {self.model.original_file_path} "
"the name argument to ref() must be a string"
)
def generate_parse_entities(
entity: ParsedEntity,
config: RuntimeConfig,
manifest: Manifest,
package_name: str,
) -> Dict[str, Any]:
project = config.load_dependencies()[package_name]
return {
"ref": EntityRefResolver(
None,
entity,
project,
manifest,
),
}
# This class is currently used by the schema parser in order
# to limit the number of macros in the context by using

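The resolvers above share a one-or-two positional argument convention, ref("name") or ref("package", "name"); a standalone sketch of that argument handling (split_ref_args is illustrative, with a generic exception standing in for dbt's):

from typing import Optional, Tuple

def split_ref_args(*args: str) -> Tuple[Optional[str], str]:
    # ref("name") -> (None, "name"); ref("package", "name") -> ("package", "name")
    if len(args) == 1:
        return None, args[0]
    if len(args) == 2:
        return args[0], args[1]
    raise ValueError(f"ref() expected 1 or 2 arguments, got {len(args)}")

assert split_ref_args("orders") == (None, "orders")
assert split_ref_args("jaffle_shop", "orders") == ("jaffle_shop", "orders")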
View File

@@ -94,7 +94,7 @@ class Connection(ExtensibleDbtClassMixin, Replaceable):
self._handle.resolve(self)
except RecursionError as exc:
raise InternalException(
"A connection's open() method attempted to read the " "handle value"
"A connection's open() method attempted to read the handle value"
) from exc
return self._handle

View File

@@ -1,18 +1,16 @@
import hashlib
import os
from dataclasses import dataclass, field
from mashumaro.types import SerializableType
from typing import List, Optional, Union, Dict, Any
from dbt.constants import MAXIMUM_SEED_SIZE
from dbt.dataclass_schema import dbtClassMixin, StrEnum
from .util import SourceKey
MAXIMUM_SEED_SIZE = 1 * 1024 * 1024
MAXIMUM_SEED_SIZE_NAME = "1MB"
class ParseFileType(StrEnum):
Macro = "macro"
Model = "model"
@@ -229,6 +227,7 @@ class SchemaSourceFile(BaseSourceFile):
sources: List[str] = field(default_factory=list)
exposures: List[str] = field(default_factory=list)
metrics: List[str] = field(default_factory=list)
entities: List[str] = field(default_factory=list)
# node patches contain models, seeds, snapshots, analyses
ndp: List[str] = field(default_factory=list)
# any macro patches in this file by macro unique_id.

View File

@@ -7,6 +7,7 @@ from dbt.contracts.graph.parsed import (
ParsedModelNode,
ParsedExposure,
ParsedMetric,
ParsedEntity,
ParsedResource,
ParsedRPCNode,
ParsedSqlNode,
@@ -97,6 +98,7 @@ class CompiledSeedNode(CompiledNode):
# keep this in sync with ParsedSeedNode!
resource_type: NodeType = field(metadata={"restrict": [NodeType.Seed]})
config: SeedConfig = field(default_factory=SeedConfig)
root_path: Optional[str] = None
@property
def empty(self):
@@ -232,4 +234,5 @@ GraphMemberNode = Union[
CompileResultNode,
ParsedExposure,
ParsedMetric,
ParsedEntity,
]

View File

@@ -36,6 +36,7 @@ from dbt.contracts.graph.parsed import (
ParsedGenericTestNode,
ParsedExposure,
ParsedMetric,
ParsedEntity,
HasUniqueID,
UnpatchedSourceDefinition,
ManifestNodes,
@@ -216,8 +217,39 @@ class MetricLookup(dbtClassMixin):
)
return manifest.metrics[unique_id]
class EntityLookup(dbtClassMixin):
def __init__(self, manifest: "Manifest"):
self.storage: Dict[str, Dict[PackageName, UniqueID]] = {}
self.populate(manifest)
# Given a search name and an optional package, find the unique_id of an entity
def get_unique_id(self, search_name, package: Optional[PackageName]):
return find_unique_id_for_package(self.storage, search_name, package)
def find(self, search_name, package: Optional[PackageName], manifest: "Manifest"):
unique_id = self.get_unique_id(search_name, package)
if unique_id is not None:
return self.perform_lookup(unique_id, manifest)
return None
def add_entity(self, entity: ParsedEntity):
if entity.search_name not in self.storage:
self.storage[entity.search_name] = {}
self.storage[entity.search_name][entity.package_name] = entity.unique_id
def populate(self, manifest):
for entity in manifest.entities.values():
if hasattr(entity, "name"):
self.add_entity(entity)
def perform_lookup(self, unique_id: UniqueID, manifest: "Manifest") -> ParsedEntity:
if unique_id not in manifest.entities:
raise dbt.exceptions.InternalException(
f"Entity {unique_id} found in cache but not found in manifest"
)
return manifest.entities[unique_id]
# This handles both models/seeds/snapshots and sources/metrics/entities/exposures
class DisabledLookup(dbtClassMixin):
def __init__(self, manifest: "Manifest"):
self.storage: Dict[str, Dict[PackageName, List[Any]]] = {}
@@ -467,6 +499,7 @@ class Disabled(Generic[D]):
MaybeMetricNode = Optional[Union[ParsedMetric, Disabled[ParsedMetric]]]
MaybeEntityNode = Optional[Union[ParsedEntity, Disabled[ParsedEntity]]]
MaybeDocumentation = Optional[ParsedDocumentation]
@@ -499,7 +532,7 @@ def _update_into(dest: MutableMapping[str, T], new_item: T):
existing = dest[unique_id]
if new_item.original_file_path != existing.original_file_path:
raise dbt.exceptions.RuntimeException(
f"cannot update a {new_item.resource_type} to have a new file " f"path!"
f"cannot update a {new_item.resource_type} to have a new file path!"
)
dest[unique_id] = new_item
@@ -611,6 +644,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
docs: MutableMapping[str, ParsedDocumentation] = field(default_factory=dict)
exposures: MutableMapping[str, ParsedExposure] = field(default_factory=dict)
metrics: MutableMapping[str, ParsedMetric] = field(default_factory=dict)
entities: MutableMapping[str, ParsedEntity] = field(default_factory=dict)
selectors: MutableMapping[str, Any] = field(default_factory=dict)
files: MutableMapping[str, AnySourceFile] = field(default_factory=dict)
metadata: ManifestMetadata = field(default_factory=ManifestMetadata)
@@ -632,6 +666,9 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
_metric_lookup: Optional[MetricLookup] = field(
default=None, metadata={"serialize": lambda x: None, "deserialize": lambda x: None}
)
_entity_lookup: Optional[EntityLookup] = field(
default=None, metadata={"serialize": lambda x: None, "deserialize": lambda x: None}
)
_disabled_lookup: Optional[DisabledLookup] = field(
default=None, metadata={"serialize": lambda x: None, "deserialize": lambda x: None}
)
@@ -682,6 +719,9 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
def update_metric(self, new_metric: ParsedMetric):
_update_into(self.metrics, new_metric)
def update_entity(self, new_entity: ParsedEntity):
_update_into(self.entities, new_entity)
def update_node(self, new_node: ManifestNode):
_update_into(self.nodes, new_node)
@@ -697,6 +737,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.flat_graph = {
"exposures": {k: v.to_dict(omit_none=False) for k, v in self.exposures.items()},
"metrics": {k: v.to_dict(omit_none=False) for k, v in self.metrics.items()},
"entities": {k: v.to_dict(omit_none=False) for k, v in self.entities.items()},
"nodes": {k: v.to_dict(omit_none=False) for k, v in self.nodes.items()},
"sources": {k: v.to_dict(omit_none=False) for k, v in self.sources.items()},
}
@@ -759,6 +800,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.nodes.values(),
self.sources.values(),
self.metrics.values(),
self.entities.values(),
)
for resource in all_resources:
resource_type_plural = resource.resource_type.pluralize()
@@ -787,6 +829,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
docs={k: _deepcopy(v) for k, v in self.docs.items()},
exposures={k: _deepcopy(v) for k, v in self.exposures.items()},
metrics={k: _deepcopy(v) for k, v in self.metrics.items()},
entities={k: _deepcopy(v) for k, v in self.entities.items()},
selectors={k: _deepcopy(v) for k, v in self.selectors.items()},
metadata=self.metadata,
disabled={k: _deepcopy(v) for k, v in self.disabled.items()},
@@ -803,6 +846,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.sources.values(),
self.exposures.values(),
self.metrics.values(),
self.entities.values(),
)
)
forward_edges, backward_edges = build_node_edges(edge_members)
@@ -828,6 +872,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
docs=self.docs,
exposures=self.exposures,
metrics=self.metrics,
entities=self.entities,
selectors=self.selectors,
metadata=self.metadata,
disabled=self.disabled,
@@ -849,6 +894,8 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
return self.exposures[unique_id]
elif unique_id in self.metrics:
return self.metrics[unique_id]
elif unique_id in self.entities:
return self.entities[unique_id]
else:
# something terrible has happened
raise dbt.exceptions.InternalException(
@@ -885,6 +932,12 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self._metric_lookup = MetricLookup(self)
return self._metric_lookup
@property
def entity_lookup(self) -> EntityLookup:
if self._entity_lookup is None:
self._entity_lookup = EntityLookup(self)
return self._entity_lookup
def rebuild_ref_lookup(self):
self._ref_lookup = RefableLookup(self)
@@ -985,6 +1038,32 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
return Disabled(disabled[0])
return None
def resolve_entity(
self,
target_entity_name: str,
target_entity_package: Optional[str],
current_project: str,
node_package: str,
) -> MaybeEntityNode:
entity: Optional[ParsedEntity] = None
disabled: Optional[List[ParsedEntity]] = None
candidates = _search_packages(current_project, node_package, target_entity_package)
for pkg in candidates:
entity = self.entity_lookup.find(target_entity_name, pkg, self)
if entity is not None and entity.config.enabled:
return entity
# it's possible that the node is disabled
if disabled is None:
disabled = self.disabled_lookup.find(f"{target_entity_name}", pkg)
if disabled:
return Disabled(disabled[0])
return None
# Called by DocsRuntimeContext.doc
def resolve_doc(
self,
@@ -1011,6 +1090,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
adapter,
other: "WritableManifest",
selected: AbstractSet[UniqueID],
favor_state: bool = False,
) -> None:
"""Given the selected unique IDs and a writable manifest, update this
manifest by replacing any unselected nodes with their counterpart.
@@ -1025,7 +1105,10 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
node.resource_type in refables
and not node.is_ephemeral
and unique_id not in selected
and not adapter.get_relation(current.database, current.schema, current.identifier)
and (
not adapter.get_relation(current.database, current.schema, current.identifier)
or favor_state
)
):
merged.add(unique_id)
self.nodes[unique_id] = node.replace(deferred=True)
@@ -1036,7 +1119,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
# log up to 5 items
sample = list(islice(merged, 5))
fire_event(MergedFromState(nbr_merged=len(merged), sample=sample))
fire_event(MergedFromState(num_merged=len(merged), sample=sample))
# Methods that were formerly in ParseResult
@@ -1095,6 +1178,8 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
source_file.add_test(node.unique_id, test_from)
if isinstance(node, ParsedMetric):
source_file.metrics.append(node.unique_id)
if isinstance(node, ParsedEntity):
source_file.entities.append(node.unique_id)
if isinstance(node, ParsedExposure):
source_file.exposures.append(node.unique_id)
else:
@@ -1110,6 +1195,11 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.metrics[metric.unique_id] = metric
source_file.metrics.append(metric.unique_id)
def add_entity(self, source_file: SchemaSourceFile, entity: ParsedEntity):
_check_duplicates(entity, self.entities)
self.entities[entity.unique_id] = entity
source_file.entities.append(entity.unique_id)
def add_disabled_nofile(self, node: GraphMemberNode):
# There can be multiple disabled nodes for the same unique_id
if node.unique_id in self.disabled:
@@ -1125,6 +1215,8 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
source_file.add_test(node.unique_id, test_from)
if isinstance(node, ParsedMetric):
source_file.metrics.append(node.unique_id)
if isinstance(node, ParsedEntity):
source_file.entities.append(node.unique_id)
if isinstance(node, ParsedExposure):
source_file.exposures.append(node.unique_id)
else:
@@ -1152,6 +1244,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self.docs,
self.exposures,
self.metrics,
self.entities,
self.selectors,
self.files,
self.metadata,
@@ -1164,6 +1257,7 @@ class Manifest(MacroMethods, DataClassMessagePackMixin, dbtClassMixin):
self._source_lookup,
self._ref_lookup,
self._metric_lookup,
self._entity_lookup,
self._disabled_lookup,
self._analysis_lookup,
)
@@ -1183,7 +1277,7 @@ AnyManifest = Union[Manifest, MacroManifest]
@dataclass
@schema_version("manifest", 7)
@schema_version("manifest", 8)
class WritableManifest(ArtifactMixin):
nodes: Mapping[UniqueID, ManifestNode] = field(
metadata=dict(description=("The nodes defined in the dbt project and its dependencies"))
@@ -1205,6 +1299,9 @@ class WritableManifest(ArtifactMixin):
metrics: Mapping[UniqueID, ParsedMetric] = field(
metadata=dict(description=("The metrics defined in the dbt project and its dependencies"))
)
entities: Mapping[UniqueID, ParsedEntity] = field(
metadata=dict(description=("The entities defined in the dbt project and its dependencies"))
)
selectors: Mapping[UniqueID, Any] = field(
metadata=dict(description=("The selectors defined in selectors.yml"))
)
@@ -1229,7 +1326,7 @@ class WritableManifest(ArtifactMixin):
@classmethod
def compatible_previous_versions(self):
return [("manifest", 4), ("manifest", 5), ("manifest", 6)]
return [("manifest", 4), ("manifest", 5), ("manifest", 6), ("manifest", 7)]
def __post_serialize__(self, dct):
for unique_id, node in dct["nodes"].items():

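A toy illustration (not dbt code) of the two-level mapping EntityLookup builds above, search_name -> {package_name -> unique_id}, and the package-aware lookup it supports:

from typing import Dict, Optional

storage: Dict[str, Dict[str, str]] = {}

def add_entity(search_name: str, package_name: str, unique_id: str) -> None:
    storage.setdefault(search_name, {})[package_name] = unique_id

def find_unique_id(search_name: str, package: Optional[str]) -> Optional[str]:
    candidates = storage.get(search_name, {})
    if package is None:
        # With no package given, any candidate may match; dbt resolves
        # ambiguity by searching packages in a defined order.
        return next(iter(candidates.values()), None)
    return candidates.get(package)

add_entity("customers", "my_project", "entity.my_project.customers")
assert find_unique_id("customers", None) == "entity.my_project.customers"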
View File

@@ -367,6 +367,9 @@ class BaseConfig(AdditionalPropertiesAllowed, Replaceable):
class MetricConfig(BaseConfig):
enabled: bool = True
@dataclass
class EntityConfig(BaseConfig):
enabled: bool = True
@dataclass
class ExposureConfig(BaseConfig):
@@ -495,6 +498,12 @@ class SeedConfig(NodeConfig):
materialized: str = "seed"
quote_columns: Optional[bool] = None
@classmethod
def validate(cls, data):
super().validate(data)
if data.get("materialized") and data.get("materialized") != "seed":
raise ValidationError("A seed must have a materialized value of 'seed'")
@dataclass
class TestConfig(NodeAndTestConfig):
@@ -534,6 +543,12 @@ class TestConfig(NodeAndTestConfig):
return False
return True
@classmethod
def validate(cls, data):
super().validate(data)
if data.get("materialized") and data.get("materialized") != "test":
raise ValidationError("A test must have a materialized value of 'test'")
@dataclass
class EmptySnapshotConfig(NodeConfig):
@@ -570,7 +585,6 @@ class SnapshotConfig(EmptySnapshotConfig):
f"Invalid value for 'check_cols': {data['check_cols']}. "
"Expected 'all' or a list of strings."
)
elif data.get("strategy") == "timestamp":
if not data.get("updated_at"):
raise ValidationError(
@@ -582,6 +596,9 @@ class SnapshotConfig(EmptySnapshotConfig):
# If the strategy is not 'check' or 'timestamp' it's a custom strategy,
# formerly supported with GenericSnapshotConfig
if data.get("materialized") and data.get("materialized") != "snapshot":
raise ValidationError("A snapshot must have a materialized value of 'snapshot'")
def finalize_and_validate(self):
data = self.to_dict(omit_none=True)
self.validate(data)
@@ -590,6 +607,7 @@ class SnapshotConfig(EmptySnapshotConfig):
RESOURCE_TYPES: Dict[NodeType, Type[BaseConfig]] = {
NodeType.Metric: MetricConfig,
NodeType.Entity: EntityConfig,
NodeType.Exposure: ExposureConfig,
NodeType.Source: SourceConfig,
NodeType.Seed: SeedConfig,

View File

@@ -18,7 +18,7 @@ from typing import (
from dbt.dataclass_schema import dbtClassMixin, ExtensibleDbtClassMixin
from dbt.clients.system import write_file
from dbt.contracts.files import FileHash, MAXIMUM_SEED_SIZE_NAME
from dbt.contracts.files import FileHash
from dbt.contracts.graph.unparsed import (
UnparsedNode,
UnparsedDocumentation,
@@ -38,10 +38,17 @@ from dbt.contracts.graph.unparsed import (
MaturityType,
MetricFilter,
MetricTime,
EntityRelationship,
EntityDimension
)
from dbt.contracts.util import Replaceable, AdditionalPropertiesMixin
from dbt.exceptions import warn_or_error
from dbt.events.proto_types import NodeInfo
from dbt.events.functions import warn_or_error
from dbt.events.types import (
SeedIncreased,
SeedExceedsLimitSamePath,
SeedExceedsLimitAndPathChanged,
SeedExceedsLimitChecksumChanged,
)
from dbt import flags
from dbt.node_types import ModelLanguage, NodeType
@@ -52,6 +59,7 @@ from .model_config import (
TestConfig,
SourceConfig,
MetricConfig,
EntityConfig,
ExposureConfig,
EmptySnapshotConfig,
SnapshotConfig,
@@ -65,9 +73,6 @@ class ColumnInfo(AdditionalPropertiesMixin, ExtensibleDbtClassMixin, Replaceable
meta: Dict[str, Any] = field(default_factory=dict)
data_type: Optional[str] = None
quote: Optional[bool] = None
is_dimension: Optional[bool] = False
is_primary_key: Optional[bool] = False
time_grains: Optional[List[str]] = field(default_factory=list)
tags: List[str] = field(default_factory=list)
_extra: Dict[str, Any] = field(default_factory=dict)
@@ -164,8 +169,6 @@ class ParsedNodeMixins(dbtClassMixin):
self.created_at = time.time()
self.description = patch.description
self.columns = patch.columns
self.is_public = patch.is_public
self.relationships = patch.relationships
def get_materialization(self):
return self.config.materialized
@@ -198,7 +201,8 @@ class NodeInfoMixin:
"node_started_at": self._event_status.get("started_at"),
"node_finished_at": self._event_status.get("finished_at"),
}
return node_info
node_info_msg = NodeInfo(**node_info)
return node_info_msg
@dataclass
@@ -207,6 +211,7 @@ class ParsedNodeDefaults(NodeInfoMixin, ParsedNodeMandatory):
refs: List[List[str]] = field(default_factory=list)
sources: List[List[str]] = field(default_factory=list)
metrics: List[List[str]] = field(default_factory=list)
entities: List[List[str]] = field(default_factory=list)
depends_on: DependsOn = field(default_factory=DependsOn)
description: str = field(default="")
columns: Dict[str, ColumnInfo] = field(default_factory=dict)
@@ -216,9 +221,6 @@ class ParsedNodeDefaults(NodeInfoMixin, ParsedNodeMandatory):
compiled_path: Optional[str] = None
build_path: Optional[str] = None
deferred: bool = False
is_public: Optional[bool] = False
relationships: List[EntityRelationship] = field(default_factory=list)
primary_keys: List[str] = field(default_factory=list)
unrendered_config: Dict[str, Any] = field(default_factory=dict)
created_at: float = field(default_factory=lambda: time.time())
config_call_dict: Dict[str, Any] = field(default_factory=dict)
@@ -253,7 +255,7 @@ class ParsedNode(ParsedNodeDefaults, ParsedNodeMixins, SerializableType):
@classmethod
def _deserialize(cls, dct: Dict[str, int]):
# The serialized ParsedNodes do not differ from each other
# in fields that would allow 'from_dict' to distinguish
# between them.
resource_type = dct["resource_type"]
if resource_type == "model":
@@ -382,30 +384,28 @@ def same_seeds(first: ParsedNode, second: ParsedNode) -> bool:
if first.checksum.name == "path":
msg: str
if second.checksum.name != "path":
msg = (
f"Found a seed ({first.package_name}.{first.name}) "
f">{MAXIMUM_SEED_SIZE_NAME} in size. The previous file was "
f"<={MAXIMUM_SEED_SIZE_NAME}, so it has changed"
warn_or_error(
SeedIncreased(package_name=first.package_name, name=first.name), node=first
)
elif result:
msg = (
f"Found a seed ({first.package_name}.{first.name}) "
f">{MAXIMUM_SEED_SIZE_NAME} in size at the same path, dbt "
f"cannot tell if it has changed: assuming they are the same"
warn_or_error(
SeedExceedsLimitSamePath(package_name=first.package_name, name=first.name),
node=first,
)
elif not result:
msg = (
f"Found a seed ({first.package_name}.{first.name}) "
f">{MAXIMUM_SEED_SIZE_NAME} in size. The previous file was in "
f"a different location, assuming it has changed"
warn_or_error(
SeedExceedsLimitAndPathChanged(package_name=first.package_name, name=first.name),
node=first,
)
else:
msg = (
f"Found a seed ({first.package_name}.{first.name}) "
f">{MAXIMUM_SEED_SIZE_NAME} in size. The previous file had a "
f"checksum type of {second.checksum.name}, so it has changed"
warn_or_error(
SeedExceedsLimitChecksumChanged(
package_name=first.package_name,
name=first.name,
checksum_name=second.checksum.name,
),
node=first,
)
warn_or_error(msg, node=first)
return result
@@ -415,6 +415,9 @@ class ParsedSeedNode(ParsedNode):
# keep this in sync with CompiledSeedNode!
resource_type: NodeType = field(metadata={"restrict": [NodeType.Seed]})
config: SeedConfig = field(default_factory=SeedConfig)
# seeds need the root_path because the contents are not loaded initially
# and we need the root_path to load the seed later
root_path: Optional[str] = None
@property
def empty(self):
@@ -506,8 +509,6 @@ class ParsedPatch(HasYamlMetadata, Replaceable):
@dataclass
class ParsedNodePatch(ParsedPatch):
columns: Dict[str, ColumnInfo]
is_public: Optional[bool]
relationships: List[EntityRelationship]
@dataclass
@@ -766,6 +767,7 @@ class ParsedExposure(UnparsedBaseNode, HasUniqueID, HasFqn):
depends_on: DependsOn = field(default_factory=DependsOn)
refs: List[List[str]] = field(default_factory=list)
sources: List[List[str]] = field(default_factory=list)
metrics: List[List[str]] = field(default_factory=list)
created_at: float = field(default_factory=lambda: time.time())
@property
@@ -839,15 +841,14 @@ class ParsedMetric(UnparsedBaseNode, HasUniqueID, HasFqn):
expression: str
filters: List[MetricFilter]
time_grains: List[str]
dimensions: Dict[str, Any] = field(default_factory=dict)
dimensions: List[str]
window: Optional[MetricTime] = None
model: Optional[str] = None
model_unique_id: Optional[str] = None
allow_joins: Optional[bool] = True
resource_type: NodeType = NodeType.Metric
meta: Dict[str, Any] = field(default_factory=dict)
tags: List[str] = field(default_factory=list)
config: MetricConfig = field(default_factory=MetricConfig)
unrendered_config: Dict[str, Any] = field(default_factory=dict)
sources: List[List[str]] = field(default_factory=list)
depends_on: DependsOn = field(default_factory=DependsOn)
@@ -920,6 +921,60 @@ class ParsedMetric(UnparsedBaseNode, HasUniqueID, HasFqn):
and True
)
@dataclass
class ParsedEntity(UnparsedBaseNode, HasUniqueID, HasFqn):
name: str
model: str
description: str
dimensions: Dict[str, EntityDimension] = field(default_factory=dict)
model_unique_id: Optional[str] = None
resource_type: NodeType = NodeType.Entity
meta: Dict[str, Any] = field(default_factory=dict)
tags: List[str] = field(default_factory=list)
config: EntityConfig = field(default_factory=EntityConfig)
unrendered_config: Dict[str, Any] = field(default_factory=dict)
sources: List[List[str]] = field(default_factory=list)
depends_on: DependsOn = field(default_factory=DependsOn)
refs: List[List[str]] = field(default_factory=list)
entities: List[List[str]] = field(default_factory=list)
created_at: float = field(default_factory=lambda: time.time())
@property
def depends_on_nodes(self):
return self.depends_on.nodes
@property
def search_name(self):
return self.name
def same_model(self, old: "ParsedEntity") -> bool:
return self.model == old.model
def same_description(self, old: "ParsedEntity") -> bool:
return self.description == old.description
def same_dimensions(self, old: "ParsedEntity") -> bool:
return self.dimensions == old.dimensions
def same_config(self, old: "ParsedEntity") -> bool:
return self.config.same_contents(
self.unrendered_config,
old.unrendered_config,
)
def same_contents(self, old: Optional["ParsedEntity"]) -> bool:
# existing when it didn't before is a change!
# metadata/tags changes are not "changes"
if old is None:
return True
return (
self.same_model(old)
and self.same_description(old)
and self.same_dimensions(old)
and self.same_config(old)
and True
)
ManifestNodes = Union[
ParsedAnalysisNode,
@@ -940,5 +995,6 @@ ParsedResource = Union[
ParsedNode,
ParsedExposure,
ParsedMetric,
ParsedEntity,
ParsedSourceDefinition,
]

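Illustrative only: how the same_* checks above can compose for state comparison (entity_is_modified is a hypothetical helper; meta and tags are deliberately left out of the comparison):

from typing import Optional

def entity_is_modified(new: "ParsedEntity", old: Optional["ParsedEntity"]) -> bool:
    if old is None:
        return True  # existing when it didn't before is a change
    return not (
        new.same_model(old)
        and new.same_description(old)
        and new.same_dimensions(old)
        and new.same_config(old)
    )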
View File

@@ -24,7 +24,6 @@ from typing import Optional, List, Union, Dict, Any, Sequence
@dataclass
class UnparsedBaseNode(dbtClassMixin, Replaceable):
package_name: str
root_path: str
path: str
original_file_path: str
@@ -88,36 +87,12 @@ class Docs(dbtClassMixin, Replaceable):
node_color: Optional[str] = None
@dataclass
class EntityRelationshipType(StrEnum):
many_to_one = "many_to_one"
one_to_many = "one_to_many"
one_to_one = "one_to_one"
def inverse(self) -> str:
if self == "many_to_one":
return "one_to_many"
elif self == "one_to_many":
return "many_to_one"
else:
return self
@dataclass
class EntityRelationship(dbtClassMixin, Replaceable):
to: str
join_keys: Union[str, List[str]]
relationship_type: EntityRelationshipType
@dataclass
class HasDocs(AdditionalPropertiesMixin, ExtensibleDbtClassMixin, Replaceable):
name: str
description: str = ""
meta: Dict[str, Any] = field(default_factory=dict)
is_public: Optional[bool] = False
data_type: Optional[str] = None
relationships: List[EntityRelationship] = field(default_factory=list)
docs: Docs = field(default_factory=Docs)
_extra: Dict[str, Any] = field(default_factory=dict)
@@ -138,10 +113,6 @@ class HasTests(HasDocs):
class UnparsedColumn(HasTests):
quote: Optional[bool] = None
tags: List[str] = field(default_factory=list)
is_dimension: Optional[bool] = False
is_primary_key: Optional[bool] = False
time_grains: Optional[List[str]] = field(default_factory=list)
data_type: Optional[str] = None
@dataclass
@@ -260,7 +231,7 @@ class ExternalTable(AdditionalPropertiesAllowed, Mergeable):
file_format: Optional[str] = None
row_format: Optional[str] = None
tbl_properties: Optional[str] = None
partitions: Optional[List[ExternalPartition]] = None
partitions: Optional[Union[List[str], List[ExternalPartition]]] = None
def __bool__(self):
return self.location is not None
@@ -392,7 +363,6 @@ class SourcePatch(dbtClassMixin, Replaceable):
@dataclass
class UnparsedDocumentation(dbtClassMixin, Replaceable):
package_name: str
root_path: str
path: str
original_file_path: str
@@ -517,11 +487,10 @@ class UnparsedMetric(dbtClassMixin, Replaceable):
timestamp: str
expression: str
description: str = ""
time_grains: Optional[List[str]] = field(default_factory=list)
dimensions: Union[Dict[str, Any], List[str]] = field(default_factory=dict)
time_grains: List[str] = field(default_factory=list)
dimensions: List[str] = field(default_factory=list)
window: Optional[MetricTime] = None
model: Optional[str] = None
allow_joins: Optional[bool] = True
filters: List[MetricFilter] = field(default_factory=list)
meta: Dict[str, Any] = field(default_factory=dict)
tags: List[str] = field(default_factory=list)
@@ -554,3 +523,47 @@ class UnparsedMetric(dbtClassMixin, Replaceable):
if data.get("model") is not None and data.get("calculation_method") == "derived":
raise ValidationError("Derived metrics cannot have a 'model' property")
@dataclass
class EntityDimension(dbtClassMixin, Mergeable):
"""This class is used for the dimension information at the entity level. It
closely matches the implementation of columns for models."""
name: str
description: str = ""
column_name: Optional[str] = None
date_type: Optional[str] = None
default_timestamp: Optional[bool] = None
primary_key: Optional[bool] = None
time_grains: Optional[List[str]] = field(default_factory=list)
tags: List[str] = field(default_factory=list)
meta: Dict[str, Any] = field(default_factory=dict)
@dataclass
class EntityInheritence(EntityDimension):
"""This class is used for entity dimension inheritence. This class is optional
but if it is present then include needs to be present. Exclude cannot be present
without some idea of what is being included, whereas exclude is fully optional.
The acceptable inputs for include are either a list of columns/dimensions or *
to represent all fields. The acceptable inputs for exclude are a list of columns/
dimensions
"""
include: Union[List[str], str] = field(default_factory=list)
exclude: Optional[List[str]] = field(default_factory=list)
@dataclass
class UnparsedEntity(dbtClassMixin, Replaceable):
"""This class is used for entity information"""
name: str
model: str
description: str = ""
dimensions: Optional[Union[Sequence[EntityDimension], EntityInheritence]] = None
meta: Dict[str, Any] = field(default_factory=dict)
tags: List[str] = field(default_factory=list)
config: Dict[str, Any] = field(default_factory=dict)
@classmethod
def validate(cls, data):
super(UnparsedEntity, cls).validate(data)
errors = []
## TODO: Add validation here around include/exclude and others

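Hypothetical usage of the dataclasses above, following the include/exclude rules in the docstring (all field values here are invented for illustration):

inheritance = EntityInheritence(
    name="from_customers",      # name is inherited from EntityDimension
    include="*",                # "*" selects all upstream columns/dimensions
    exclude=["ssn", "email"],   # exclude is only meaningful alongside include
)

entity = UnparsedEntity(
    name="customers",
    model="ref('dim_customers')",
    dimensions=inheritance,
)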
View File

@@ -12,9 +12,7 @@ from dataclasses import dataclass, field
from typing import Optional, List, Dict, Union, Any
from mashumaro.types import SerializableType
PIN_PACKAGE_URL = (
"https://docs.getdbt.com/docs/package-management#section-specifying-package-versions" # noqa
)
DEFAULT_SEND_ANONYMOUS_USAGE_STATS = True
@@ -210,6 +208,7 @@ class Project(HyphenatedDbtClassMixin, Replaceable):
sources: Dict[str, Any] = field(default_factory=dict)
tests: Dict[str, Any] = field(default_factory=dict)
metrics: Dict[str, Any] = field(default_factory=dict)
entities: Dict[str, Any] = field(default_factory=dict)
exposures: Dict[str, Any] = field(default_factory=dict)
vars: Optional[Dict[str, Any]] = field(
default=None,

View File

@@ -11,11 +11,12 @@ from dbt.contracts.util import (
from dbt.exceptions import InternalException
from dbt.events.functions import fire_event
from dbt.events.types import TimingInfoCollected
from dbt.events.proto_types import RunResultMsg
from dbt.logger import (
TimingProcessor,
JsonOnly,
)
from dbt.utils import lowercase
from dbt.utils import lowercase, cast_to_str, cast_to_int
from dbt.dataclass_schema import dbtClassMixin, StrEnum
import agate
@@ -119,6 +120,17 @@ class BaseResult(dbtClassMixin):
data["failures"] = None
return data
def to_msg(self):
# TODO: add more fields
msg = RunResultMsg()
msg.status = str(self.status)
msg.message = cast_to_str(self.message)
msg.thread = self.thread_id
msg.execution_time = self.execution_time
msg.num_failures = cast_to_int(self.failures)
# timing_info, adapter_response, message
return msg
@dataclass
class NodeResult(BaseResult):
@@ -208,7 +220,9 @@ class RunResultsArtifact(ExecutionResult, ArtifactMixin):
generated_at: datetime,
args: Dict,
):
processed_results = [process_run_result(result) for result in results]
processed_results = [
process_run_result(result) for result in results if isinstance(result, RunResult)
]
meta = RunResultsMetadata(
dbt_schema_version=str(cls.dbt_schema_version),
generated_at=generated_at,
@@ -327,7 +341,7 @@ def process_freshness_result(result: FreshnessNodeResult) -> FreshnessNodeOutput
criteria = result.node.freshness
if criteria is None:
raise InternalException(
"Somehow evaluated a freshness result for a source " "that has no freshness criteria!"
"Somehow evaluated a freshness result for a source that has no freshness criteria!"
)
return SourceFreshnessOutput(
unique_id=unique_id,
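A plausible sketch of the None-safe cast helpers imported above (cast_to_str, cast_to_int); the real dbt.utils implementations may differ in detail:

from typing import Optional

def cast_to_str(value: Optional[str]) -> str:
    # Proto message fields cannot hold None, so fall back to "".
    return "" if value is None else str(value)

def cast_to_int(value: Optional[int]) -> int:
    # Likewise fall back to 0 for missing counts such as `failures`.
    return 0 if value is None else int(value)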

View File

@@ -1,5 +1,4 @@
import dataclasses
import os
from datetime import datetime
from typing import List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional
@@ -11,7 +10,7 @@ from dbt.exceptions import (
IncompatibleSchemaException,
)
from dbt.version import __version__
from dbt.events.functions import get_invocation_id
from dbt.events.functions import get_invocation_id, get_metadata_vars
from dbt.dataclass_schema import dbtClassMixin
from dbt.dataclass_schema import (
@@ -148,20 +147,6 @@ class SchemaVersion:
return BASE_SCHEMAS_URL + self.path
SCHEMA_VERSION_KEY = "dbt_schema_version"
METADATA_ENV_PREFIX = "DBT_ENV_CUSTOM_ENV_"
def get_metadata_env() -> Dict[str, str]:
return {
k[len(METADATA_ENV_PREFIX) :]: v
for k, v in os.environ.items()
if k.startswith(METADATA_ENV_PREFIX)
}
# This is used in the ManifestMetadata, RunResultsMetadata, RunOperationResultMetadata,
# FreshnessMetadata, and CatalogMetadata classes
@dataclasses.dataclass
@@ -170,7 +155,7 @@ class BaseArtifactMetadata(dbtClassMixin):
dbt_version: str = __version__
generated_at: datetime = dataclasses.field(default_factory=datetime.utcnow)
invocation_id: Optional[str] = dataclasses.field(default_factory=get_invocation_id)
env: Dict[str, str] = dataclasses.field(default_factory=get_metadata_env)
env: Dict[str, str] = dataclasses.field(default_factory=get_metadata_vars)
def __post_serialize__(self, dct):
dct = super().__post_serialize__(dct)
@@ -255,13 +240,32 @@ def rename_sql_attr(node_content: dict) -> dict:
def upgrade_manifest_json(manifest: dict) -> dict:
for node_content in manifest.get("nodes", {}).values():
node_content = rename_sql_attr(node_content)
if node_content["resource_type"] != "seed" and "root_path" in node_content:
del node_content["root_path"]
for disabled in manifest.get("disabled", {}).values():
# There can be multiple disabled nodes for the same unique_id
# so make sure all the nodes get the attr renamed
disabled = [rename_sql_attr(n) for n in disabled]
for node_content in disabled:
rename_sql_attr(node_content)
if node_content["resource_type"] != "seed" and "root_path" in node_content:
del node_content["root_path"]
for metric_content in manifest.get("metrics", {}).values():
# handle attr renames + value translation ("expression" -> "derived")
metric_content = rename_metric_attr(metric_content)
if "root_path" in metric_content:
del metric_content["root_path"]
for exposure_content in manifest.get("exposures", {}).values():
if "root_path" in exposure_content:
del exposure_content["root_path"]
for source_content in manifest.get("sources", {}).values():
if "root_path" in exposure_content:
del source_content["root_path"]
for macro_content in manifest.get("macros", {}).values():
if "root_path" in macro_content:
del macro_content["root_path"]
for doc_content in manifest.get("docs", {}).values():
if "root_path" in doc_content:
del doc_content["root_path"]
return manifest
@@ -306,7 +310,7 @@ class VersionedSchema(dbtClassMixin):
expected=str(cls.dbt_schema_version),
found=previous_schema_version,
)
if get_manifest_schema_version(data) <= 6:
if get_manifest_schema_version(data) <= 7:
data = upgrade_manifest_json(data)
return cls.from_dict(data) # type: ignore

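A worked example of the DBT_ENV_CUSTOM_ENV_ filtering removed above (the same behavior is now expected from get_metadata_vars in dbt.events.functions):

import os

METADATA_ENV_PREFIX = "DBT_ENV_CUSTOM_ENV_"

os.environ[METADATA_ENV_PREFIX + "RUN_SOURCE"] = "ci"
metadata = {
    k[len(METADATA_ENV_PREFIX):]: v
    for k, v in os.environ.items()
    if k.startswith(METADATA_ENV_PREFIX)
}
assert metadata["RUN_SOURCE"] == "ci"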
View File

@@ -1,14 +1,14 @@
import abc
from typing import Optional, Set, List, Dict, ClassVar
import dbt.exceptions
from dbt import ui
import dbt.tracking
class DBTDeprecation:
_name: ClassVar[Optional[str]] = None
_description: ClassVar[Optional[str]] = None
_event: ClassVar[Optional[str]] = None
@property
def name(self) -> str:
@@ -21,66 +21,50 @@ class DBTDeprecation:
dbt.tracking.track_deprecation_warn({"deprecation_name": self.name})
@property
def description(self) -> str:
if self._description is not None:
return self._description
raise NotImplementedError("description not implemented for {}".format(self))
def event(self) -> abc.ABCMeta:
if self._event is not None:
module_path = dbt.events.types
class_name = self._event
try:
return getattr(module_path, class_name)
except AttributeError:
msg = f"Event Class `{class_name}` is not defined in `{module_path}`"
raise NameError(msg)
raise NotImplementedError("event not implemented for {}".format(self._event))
def show(self, *args, **kwargs) -> None:
if self.name not in active_deprecations:
desc = self.description.format(**kwargs)
msg = ui.line_wrap_message(desc, prefix="Deprecated functionality\n\n")
dbt.exceptions.warn_or_error(msg, log_fmt=ui.warning_tag("{}"))
event = self.event(**kwargs)
dbt.events.functions.warn_or_error(event)
self.track_deprecation_warn()
active_deprecations.add(self.name)
class PackageRedirectDeprecation(DBTDeprecation):
_name = "package-redirect"
_description = """\
The `{old_name}` package is deprecated in favor of `{new_name}`. Please update
your `packages.yml` configuration to use `{new_name}` instead.
"""
_event = "PackageRedirectDeprecation"
class PackageInstallPathDeprecation(DBTDeprecation):
_name = "install-packages-path"
_description = """\
The default package install path has changed from `dbt_modules` to `dbt_packages`.
Please update `clean-targets` in `dbt_project.yml` and check `.gitignore` as well.
Or, set `packages-install-path: dbt_modules` if you'd like to keep the current value.
"""
_event = "PackageInstallPathDeprecation"
class ConfigPathDeprecation(DBTDeprecation):
_description = """\
The `{deprecated_path}` config has been renamed to `{exp_path}`.
Please update your `dbt_project.yml` configuration to reflect this change.
"""
class ConfigSourcePathDeprecation(ConfigPathDeprecation):
class ConfigSourcePathDeprecation(DBTDeprecation):
_name = "project-config-source-paths"
_event = "ConfigSourcePathDeprecation"
class ConfigDataPathDeprecation(ConfigPathDeprecation):
class ConfigDataPathDeprecation(DBTDeprecation):
_name = "project-config-data-paths"
_adapter_renamed_description = """\
The adapter function `adapter.{old_name}` is deprecated and will be removed in
a future release of dbt. Please use `adapter.{new_name}` instead.
Documentation for {new_name} can be found here:
https://docs.getdbt.com/docs/adapter
"""
_event = "ConfigDataPathDeprecation"
def renamed_method(old_name: str, new_name: str):
class AdapterDeprecationWarning(DBTDeprecation):
_name = "adapter:{}".format(old_name)
_description = _adapter_renamed_description.format(old_name=old_name, new_name=new_name)
_event = "AdapterDeprecationWarning"
dep = AdapterDeprecationWarning()
deprecations_list.append(dep)
@@ -89,26 +73,12 @@ def renamed_method(old_name: str, new_name: str):
class MetricAttributesRenamed(DBTDeprecation):
_name = "metric-attr-renamed"
_description = """\
dbt-core v1.3 renamed attributes for metrics:
\n 'sql' -> 'expression'
\n 'type' -> 'calculation_method'
\n 'type: expression' -> 'calculation_method: derived'
\nThe old metric parameter names will be fully deprecated in v1.4.
\nPlease remove them from the metric definition of metric '{metric_name}'
\nRelevant issue here: https://github.com/dbt-labs/dbt-core/issues/5849
"""
_event = "MetricAttributesRenamed"
class ExposureNameDeprecation(DBTDeprecation):
_name = "exposure-name"
_description = """\
Starting in v1.3, the 'name' of an exposure should contain only letters, numbers, and underscores.
Exposures support a new property, 'label', which may contain spaces, capital letters, and special characters.
{exposure} does not follow this pattern.
Please update the 'name', and use the 'label' property for a human-friendly title.
This will raise an error in a future version of dbt-core.
"""
_event = "ExposureNameDeprecation"
def warn(name, *args, **kwargs):
@@ -125,12 +95,12 @@ def warn(name, *args, **kwargs):
active_deprecations: Set[str] = set()
deprecations_list: List[DBTDeprecation] = [
ExposureNameDeprecation(),
PackageRedirectDeprecation(),
PackageInstallPathDeprecation(),
ConfigSourcePathDeprecation(),
ConfigDataPathDeprecation(),
PackageInstallPathDeprecation(),
PackageRedirectDeprecation(),
MetricAttributesRenamed(),
ExposureNameDeprecation(),
]
deprecations: Dict[str, DBTDeprecation] = {d.name: d for d in deprecations_list}

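A condensed, standalone sketch of the string-to-class dispatch performed by DBTDeprecation.event() above, using a stand-in namespace instead of dbt.events.types:

from types import SimpleNamespace

# Stand-in for dbt.events.types.
event_types = SimpleNamespace(
    PackageRedirectDeprecation=type("PackageRedirectDeprecation", (), {})
)

def resolve_event(class_name: str) -> type:
    # Look the event class up by name; fail loudly if it was never defined.
    try:
        return getattr(event_types, class_name)
    except AttributeError:
        raise NameError(f"Event Class `{class_name}` is not defined in `event_types`")

assert resolve_event("PackageRedirectDeprecation").__name__ == "PackageRedirectDeprecation"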
View File

@@ -74,7 +74,7 @@ class PinnedPackage(BasePackage):
raise NotImplementedError
@abc.abstractmethod
def install(self, project):
def install(self, project, renderer):
raise NotImplementedError
@abc.abstractmethod

View File

@@ -9,14 +9,9 @@ from dbt.contracts.project import (
GitPackage,
)
from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path
from dbt.exceptions import ExecutableError, warn_or_error, raise_dependency_error
from dbt.events.functions import fire_event
from dbt.events.types import EnsureGitInstalled
from dbt import ui
PIN_PACKAGE_URL = (
"https://docs.getdbt.com/docs/package-management#section-specifying-package-versions" # noqa
)
from dbt.exceptions import ExecutableError, raise_dependency_error
from dbt.events.functions import fire_event, warn_or_error
from dbt.events.types import EnsureGitInstalled, DepsUnpinned
def md5sum(s: str):
@@ -62,14 +57,6 @@ class GitPinnedPackage(GitPackageMixin, PinnedPackage):
else:
return "revision {}".format(self.revision)
def unpinned_msg(self):
if self.revision == "HEAD":
return "not pinned, using HEAD (default branch)"
elif self.revision in ("main", "master"):
return f'pinned to the "{self.revision}" branch'
else:
return None
def _checkout(self):
"""Performs a shallow clone of the repository into the downloads
directory. This function can be called repeatedly. If the project has
@@ -92,14 +79,8 @@ class GitPinnedPackage(GitPackageMixin, PinnedPackage):
def _fetch_metadata(self, project, renderer) -> ProjectPackageMetadata:
path = self._checkout()
if self.unpinned_msg() and self.warn_unpinned:
warn_or_error(
'The git package "{}" \n\tis {}.\n\tThis can introduce '
"breaking changes into your project without warning!\n\nSee {}".format(
self.git, self.unpinned_msg(), PIN_PACKAGE_URL
),
log_fmt=ui.yellow("WARNING: {}"),
)
if (self.revision == "HEAD" or self.revision in ("main", "master")) and self.warn_unpinned:
warn_or_error(DepsUnpinned(git=self.git))
loaded = Project.from_project_root(path, renderer)
return ProjectPackageMetadata.from_project(loaded)

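The rewritten warning condition above reduces to a small predicate; an illustrative restatement (is_unpinned is not a dbt helper):

def is_unpinned(revision: str) -> bool:
    # "HEAD", "main", and "master" all count as unpinned for warning purposes.
    return revision == "HEAD" or revision in ("main", "master")

assert is_unpinned("HEAD") and is_unpinned("main") and not is_unpinned("1.2.0")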
Binary file not shown.

Binary file not shown.

core/dbt/docs/build/html/.buildinfo vendored Normal file
View File

@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 1ee31fc16e025fb98598189ba2cb5fcb
tags: 645f666f9bcd5a90fca523b33c5a78b7

View File

@@ -0,0 +1,4 @@
dbt-core's API documentation
============================
.. dbt_click:: dbt.cli.main:cli

View File

@@ -0,0 +1,134 @@
(vendored Sphinx asset: _sphinx_javascript_frameworks_compat.js, a jQuery/underscore.js compatibility shim; 134 lines of generated JavaScript omitted)
View File

@@ -0,0 +1,701 @@
@import url("basic.css");
/* -- page layout ----------------------------------------------------------- */
body {
font-family: Georgia, serif;
font-size: 17px;
background-color: #fff;
color: #000;
margin: 0;
padding: 0;
}
div.document {
width: 940px;
margin: 30px auto 0 auto;
}
div.documentwrapper {
float: left;
width: 100%;
}
div.bodywrapper {
margin: 0 0 0 220px;
}
div.sphinxsidebar {
width: 220px;
font-size: 14px;
line-height: 1.5;
}
hr {
border: 1px solid #B1B4B6;
}
div.body {
background-color: #fff;
color: #3E4349;
padding: 0 30px 0 30px;
}
div.body > .section {
text-align: left;
}
div.footer {
width: 940px;
margin: 20px auto 30px auto;
font-size: 14px;
color: #888;
text-align: right;
}
div.footer a {
color: #888;
}
p.caption {
font-family: inherit;
font-size: inherit;
}
div.relations {
display: none;
}
div.sphinxsidebar a {
color: #444;
text-decoration: none;
border-bottom: 1px dotted #999;
}
div.sphinxsidebar a:hover {
border-bottom: 1px solid #999;
}
div.sphinxsidebarwrapper {
padding: 18px 10px;
}
div.sphinxsidebarwrapper p.logo {
padding: 0;
margin: -10px 0 0 0px;
text-align: center;
}
div.sphinxsidebarwrapper h1.logo {
margin-top: -10px;
text-align: center;
margin-bottom: 5px;
text-align: left;
}
div.sphinxsidebarwrapper h1.logo-name {
margin-top: 0px;
}
div.sphinxsidebarwrapper p.blurb {
margin-top: 0;
font-style: normal;
}
div.sphinxsidebar h3,
div.sphinxsidebar h4 {
font-family: Georgia, serif;
color: #444;
font-size: 24px;
font-weight: normal;
margin: 0 0 5px 0;
padding: 0;
}
div.sphinxsidebar h4 {
font-size: 20px;
}
div.sphinxsidebar h3 a {
color: #444;
}
div.sphinxsidebar p.logo a,
div.sphinxsidebar h3 a,
div.sphinxsidebar p.logo a:hover,
div.sphinxsidebar h3 a:hover {
border: none;
}
div.sphinxsidebar p {
color: #555;
margin: 10px 0;
}
div.sphinxsidebar ul {
margin: 10px 0;
padding: 0;
color: #000;
}
div.sphinxsidebar ul li.toctree-l1 > a {
font-size: 120%;
}
div.sphinxsidebar ul li.toctree-l2 > a {
font-size: 110%;
}
div.sphinxsidebar input {
border: 1px solid #CCC;
font-family: Georgia, serif;
font-size: 1em;
}
div.sphinxsidebar hr {
border: none;
height: 1px;
color: #AAA;
background: #AAA;
text-align: left;
margin-left: 0;
width: 50%;
}
div.sphinxsidebar .badge {
border-bottom: none;
}
div.sphinxsidebar .badge:hover {
border-bottom: none;
}
/* To address an issue with donation coming after search */
div.sphinxsidebar h3.donation {
margin-top: 10px;
}
/* -- body styles ----------------------------------------------------------- */
a {
color: #004B6B;
text-decoration: underline;
}
a:hover {
color: #6D4100;
text-decoration: underline;
}
div.body h1,
div.body h2,
div.body h3,
div.body h4,
div.body h5,
div.body h6 {
font-family: Georgia, serif;
font-weight: normal;
margin: 30px 0px 10px 0px;
padding: 0;
}
div.body h1 { margin-top: 0; padding-top: 0; font-size: 240%; }
div.body h2 { font-size: 180%; }
div.body h3 { font-size: 150%; }
div.body h4 { font-size: 130%; }
div.body h5 { font-size: 100%; }
div.body h6 { font-size: 100%; }
a.headerlink {
color: #DDD;
padding: 0 4px;
text-decoration: none;
}
a.headerlink:hover {
color: #444;
background: #EAEAEA;
}
div.body p, div.body dd, div.body li {
line-height: 1.4em;
}
div.admonition {
margin: 20px 0px;
padding: 10px 30px;
background-color: #EEE;
border: 1px solid #CCC;
}
div.admonition tt.xref, div.admonition code.xref, div.admonition a tt {
background-color: #FBFBFB;
border-bottom: 1px solid #fafafa;
}
div.admonition p.admonition-title {
font-family: Georgia, serif;
font-weight: normal;
font-size: 24px;
margin: 0 0 10px 0;
padding: 0;
line-height: 1;
}
div.admonition p.last {
margin-bottom: 0;
}
div.highlight {
background-color: #fff;
}
dt:target, .highlight {
background: #FAF3E8;
}
div.warning {
background-color: #FCC;
border: 1px solid #FAA;
}
div.danger {
background-color: #FCC;
border: 1px solid #FAA;
-moz-box-shadow: 2px 2px 4px #D52C2C;
-webkit-box-shadow: 2px 2px 4px #D52C2C;
box-shadow: 2px 2px 4px #D52C2C;
}
div.error {
background-color: #FCC;
border: 1px solid #FAA;
-moz-box-shadow: 2px 2px 4px #D52C2C;
-webkit-box-shadow: 2px 2px 4px #D52C2C;
box-shadow: 2px 2px 4px #D52C2C;
}
div.caution {
background-color: #FCC;
border: 1px solid #FAA;
}
div.attention {
background-color: #FCC;
border: 1px solid #FAA;
}
div.important {
background-color: #EEE;
border: 1px solid #CCC;
}
div.note {
background-color: #EEE;
border: 1px solid #CCC;
}
div.tip {
background-color: #EEE;
border: 1px solid #CCC;
}
div.hint {
background-color: #EEE;
border: 1px solid #CCC;
}
div.seealso {
background-color: #EEE;
border: 1px solid #CCC;
}
div.topic {
background-color: #EEE;
}
p.admonition-title {
display: inline;
}
p.admonition-title:after {
content: ":";
}
pre, tt, code {
font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace;
font-size: 0.9em;
}
.hll {
background-color: #FFC;
margin: 0 -12px;
padding: 0 12px;
display: block;
}
img.screenshot {
}
tt.descname, tt.descclassname, code.descname, code.descclassname {
font-size: 0.95em;
}
tt.descname, code.descname {
padding-right: 0.08em;
}
img.screenshot {
-moz-box-shadow: 2px 2px 4px #EEE;
-webkit-box-shadow: 2px 2px 4px #EEE;
box-shadow: 2px 2px 4px #EEE;
}
table.docutils {
border: 1px solid #888;
-moz-box-shadow: 2px 2px 4px #EEE;
-webkit-box-shadow: 2px 2px 4px #EEE;
box-shadow: 2px 2px 4px #EEE;
}
table.docutils td, table.docutils th {
border: 1px solid #888;
padding: 0.25em 0.7em;
}
table.field-list, table.footnote {
border: none;
-moz-box-shadow: none;
-webkit-box-shadow: none;
box-shadow: none;
}
table.footnote {
margin: 15px 0;
width: 100%;
border: 1px solid #EEE;
background: #FDFDFD;
font-size: 0.9em;
}
table.footnote + table.footnote {
margin-top: -15px;
border-top: none;
}
table.field-list th {
padding: 0 0.8em 0 0;
}
table.field-list td {
padding: 0;
}
table.field-list p {
margin-bottom: 0.8em;
}
/* Cloned from
* https://github.com/sphinx-doc/sphinx/commit/ef60dbfce09286b20b7385333d63a60321784e68
*/
.field-name {
-moz-hyphens: manual;
-ms-hyphens: manual;
-webkit-hyphens: manual;
hyphens: manual;
}
table.footnote td.label {
width: .1px;
padding: 0.3em 0 0.3em 0.5em;
}
table.footnote td {
padding: 0.3em 0.5em;
}
dl {
margin: 0;
padding: 0;
}
dl dd {
margin-left: 30px;
}
blockquote {
margin: 0 0 0 30px;
padding: 0;
}
ul, ol {
/* Matches the 30px from the narrow-screen "li > ul" selector below */
margin: 10px 0 10px 30px;
padding: 0;
}
pre {
background: #EEE;
padding: 7px 30px;
margin: 15px 0px;
line-height: 1.3em;
}
div.viewcode-block:target {
background: #ffd;
}
dl pre, blockquote pre, li pre {
margin-left: 0;
padding-left: 30px;
}
tt, code {
background-color: #ecf0f3;
color: #222;
/* padding: 1px 2px; */
}
tt.xref, code.xref, a tt {
background-color: #FBFBFB;
border-bottom: 1px solid #fff;
}
a.reference {
text-decoration: none;
border-bottom: 1px dotted #004B6B;
}
/* Don't put an underline on images */
a.image-reference, a.image-reference:hover {
border-bottom: none;
}
a.reference:hover {
border-bottom: 1px solid #6D4100;
}
a.footnote-reference {
text-decoration: none;
font-size: 0.7em;
vertical-align: top;
border-bottom: 1px dotted #004B6B;
}
a.footnote-reference:hover {
border-bottom: 1px solid #6D4100;
}
a:hover tt, a:hover code {
background: #EEE;
}
@media screen and (max-width: 870px) {
div.sphinxsidebar {
display: none;
}
div.document {
width: 100%;
}
div.documentwrapper {
margin-left: 0;
margin-top: 0;
margin-right: 0;
margin-bottom: 0;
}
div.bodywrapper {
margin-top: 0;
margin-right: 0;
margin-bottom: 0;
margin-left: 0;
}
ul {
margin-left: 0;
}
li > ul {
/* Matches the 30px from the "ul, ol" selector above */
margin-left: 30px;
}
.document {
width: auto;
}
.footer {
width: auto;
}
.bodywrapper {
margin: 0;
}
.footer {
width: auto;
}
.github {
display: none;
}
}
@media screen and (max-width: 875px) {
body {
margin: 0;
padding: 20px 30px;
}
div.documentwrapper {
float: none;
background: #fff;
}
div.sphinxsidebar {
display: block;
float: none;
width: 102.5%;
margin: 50px -30px -20px -30px;
padding: 10px 20px;
background: #333;
color: #FFF;
}
div.sphinxsidebar h3, div.sphinxsidebar h4, div.sphinxsidebar p,
div.sphinxsidebar h3 a {
color: #fff;
}
div.sphinxsidebar a {
color: #AAA;
}
div.sphinxsidebar p.logo {
display: none;
}
div.document {
width: 100%;
margin: 0;
}
div.footer {
display: none;
}
div.bodywrapper {
margin: 0;
}
div.body {
min-height: 0;
padding: 0;
}
.rtd_doc_footer {
display: none;
}
.document {
width: auto;
}
.footer {
width: auto;
}
.footer {
width: auto;
}
.github {
display: none;
}
}
/* misc. */
.revsys-inline {
display: none!important;
}
/* Make nested-list/multi-paragraph items look better in Releases changelog
 * pages. Without this, docutils' quirky list handling causes inconsistent
 * formatting between different release sub-lists.
 */
div#changelog > div.section > ul > li > p:only-child {
margin-bottom: 0;
}
/* Hide unsightly table cell borders in .. bibliography:: directive output */
table.docutils.citation, table.docutils.citation td, table.docutils.citation th {
border: none;
/* Below needed in some edge cases; if not applied, bottom shadows appear */
-moz-box-shadow: none;
-webkit-box-shadow: none;
box-shadow: none;
}
/* relbar */
.related {
line-height: 30px;
width: 100%;
font-size: 0.9rem;
}
.related.top {
border-bottom: 1px solid #EEE;
margin-bottom: 20px;
}
.related.bottom {
border-top: 1px solid #EEE;
}
.related ul {
padding: 0;
margin: 0;
list-style: none;
}
.related li {
display: inline;
}
nav#rellinks {
float: right;
}
nav#rellinks li+li:before {
content: "|";
}
nav#breadcrumbs li+li:before {
content: "\00BB";
}
/* Hide certain items when printing */
@media print {
div.related {
display: none;
}
}


@@ -0,0 +1,900 @@
/*
* basic.css
* ~~~~~~~~~
*
* Sphinx stylesheet -- basic theme.
*
* :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
/* -- main layout ----------------------------------------------------------- */
div.clearer {
clear: both;
}
div.section::after {
display: block;
content: '';
clear: left;
}
/* -- relbar ---------------------------------------------------------------- */
div.related {
width: 100%;
font-size: 90%;
}
div.related h3 {
display: none;
}
div.related ul {
margin: 0;
padding: 0 0 0 10px;
list-style: none;
}
div.related li {
display: inline;
}
div.related li.right {
float: right;
margin-right: 5px;
}
/* -- sidebar --------------------------------------------------------------- */
div.sphinxsidebarwrapper {
padding: 10px 5px 0 10px;
}
div.sphinxsidebar {
float: left;
width: 230px;
margin-left: -100%;
font-size: 90%;
word-wrap: break-word;
overflow-wrap: break-word;
}
div.sphinxsidebar ul {
list-style: none;
}
div.sphinxsidebar ul ul,
div.sphinxsidebar ul.want-points {
margin-left: 20px;
list-style: square;
}
div.sphinxsidebar ul ul {
margin-top: 0;
margin-bottom: 0;
}
div.sphinxsidebar form {
margin-top: 10px;
}
div.sphinxsidebar input {
border: 1px solid #98dbcc;
font-family: sans-serif;
font-size: 1em;
}
div.sphinxsidebar #searchbox form.search {
overflow: hidden;
}
div.sphinxsidebar #searchbox input[type="text"] {
float: left;
width: 80%;
padding: 0.25em;
box-sizing: border-box;
}
div.sphinxsidebar #searchbox input[type="submit"] {
float: left;
width: 20%;
border-left: none;
padding: 0.25em;
box-sizing: border-box;
}
img {
border: 0;
max-width: 100%;
}
/* -- search page ----------------------------------------------------------- */
ul.search {
margin: 10px 0 0 20px;
padding: 0;
}
ul.search li {
padding: 5px 0 5px 20px;
background-image: url(file.png);
background-repeat: no-repeat;
background-position: 0 7px;
}
ul.search li a {
font-weight: bold;
}
ul.search li p.context {
color: #888;
margin: 2px 0 0 30px;
text-align: left;
}
ul.keywordmatches li.goodmatch a {
font-weight: bold;
}
/* -- index page ------------------------------------------------------------ */
table.contentstable {
width: 90%;
margin-left: auto;
margin-right: auto;
}
table.contentstable p.biglink {
line-height: 150%;
}
a.biglink {
font-size: 1.3em;
}
span.linkdescr {
font-style: italic;
padding-top: 5px;
font-size: 90%;
}
/* -- general index --------------------------------------------------------- */
table.indextable {
width: 100%;
}
table.indextable td {
text-align: left;
vertical-align: top;
}
table.indextable ul {
margin-top: 0;
margin-bottom: 0;
list-style-type: none;
}
table.indextable > tbody > tr > td > ul {
padding-left: 0em;
}
table.indextable tr.pcap {
height: 10px;
}
table.indextable tr.cap {
margin-top: 10px;
background-color: #f2f2f2;
}
img.toggler {
margin-right: 3px;
margin-top: 3px;
cursor: pointer;
}
div.modindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
div.genindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
/* -- domain module index --------------------------------------------------- */
table.modindextable td {
padding: 2px;
border-collapse: collapse;
}
/* -- general body styles --------------------------------------------------- */
div.body {
min-width: 360px;
max-width: 800px;
}
div.body p, div.body dd, div.body li, div.body blockquote {
-moz-hyphens: auto;
-ms-hyphens: auto;
-webkit-hyphens: auto;
hyphens: auto;
}
a.headerlink {
visibility: hidden;
}
h1:hover > a.headerlink,
h2:hover > a.headerlink,
h3:hover > a.headerlink,
h4:hover > a.headerlink,
h5:hover > a.headerlink,
h6:hover > a.headerlink,
dt:hover > a.headerlink,
caption:hover > a.headerlink,
p.caption:hover > a.headerlink,
div.code-block-caption:hover > a.headerlink {
visibility: visible;
}
div.body p.caption {
text-align: inherit;
}
div.body td {
text-align: left;
}
.first {
margin-top: 0 !important;
}
p.rubric {
margin-top: 30px;
font-weight: bold;
}
img.align-left, figure.align-left, .figure.align-left, object.align-left {
clear: left;
float: left;
margin-right: 1em;
}
img.align-right, figure.align-right, .figure.align-right, object.align-right {
clear: right;
float: right;
margin-left: 1em;
}
img.align-center, figure.align-center, .figure.align-center, object.align-center {
display: block;
margin-left: auto;
margin-right: auto;
}
img.align-default, figure.align-default, .figure.align-default {
display: block;
margin-left: auto;
margin-right: auto;
}
.align-left {
text-align: left;
}
.align-center {
text-align: center;
}
.align-default {
text-align: center;
}
.align-right {
text-align: right;
}
/* -- sidebars -------------------------------------------------------------- */
div.sidebar,
aside.sidebar {
margin: 0 0 0.5em 1em;
border: 1px solid #ddb;
padding: 7px;
background-color: #ffe;
width: 40%;
float: right;
clear: right;
overflow-x: auto;
}
p.sidebar-title {
font-weight: bold;
}
nav.contents,
aside.topic,
div.admonition, div.topic, blockquote {
clear: left;
}
/* -- topics ---------------------------------------------------------------- */
nav.contents,
aside.topic,
div.topic {
border: 1px solid #ccc;
padding: 7px;
margin: 10px 0 10px 0;
}
p.topic-title {
font-size: 1.1em;
font-weight: bold;
margin-top: 10px;
}
/* -- admonitions ----------------------------------------------------------- */
div.admonition {
margin-top: 10px;
margin-bottom: 10px;
padding: 7px;
}
div.admonition dt {
font-weight: bold;
}
p.admonition-title {
margin: 0px 10px 5px 0px;
font-weight: bold;
}
div.body p.centered {
text-align: center;
margin-top: 25px;
}
/* -- content of sidebars/topics/admonitions -------------------------------- */
div.sidebar > :last-child,
aside.sidebar > :last-child,
nav.contents > :last-child,
aside.topic > :last-child,
div.topic > :last-child,
div.admonition > :last-child {
margin-bottom: 0;
}
div.sidebar::after,
aside.sidebar::after,
nav.contents::after,
aside.topic::after,
div.topic::after,
div.admonition::after,
blockquote::after {
display: block;
content: '';
clear: both;
}
/* -- tables ---------------------------------------------------------------- */
table.docutils {
margin-top: 10px;
margin-bottom: 10px;
border: 0;
border-collapse: collapse;
}
table.align-center {
margin-left: auto;
margin-right: auto;
}
table.align-default {
margin-left: auto;
margin-right: auto;
}
table caption span.caption-number {
font-style: italic;
}
table caption span.caption-text {
}
table.docutils td, table.docutils th {
padding: 1px 8px 1px 5px;
border-top: 0;
border-left: 0;
border-right: 0;
border-bottom: 1px solid #aaa;
}
th {
text-align: left;
padding-right: 5px;
}
table.citation {
border-left: solid 1px gray;
margin-left: 1px;
}
table.citation td {
border-bottom: none;
}
th > :first-child,
td > :first-child {
margin-top: 0px;
}
th > :last-child,
td > :last-child {
margin-bottom: 0px;
}
/* -- figures --------------------------------------------------------------- */
div.figure, figure {
margin: 0.5em;
padding: 0.5em;
}
div.figure p.caption, figcaption {
padding: 0.3em;
}
div.figure p.caption span.caption-number,
figcaption span.caption-number {
font-style: italic;
}
div.figure p.caption span.caption-text,
figcaption span.caption-text {
}
/* -- field list styles ----------------------------------------------------- */
table.field-list td, table.field-list th {
border: 0 !important;
}
.field-list ul {
margin: 0;
padding-left: 1em;
}
.field-list p {
margin: 0;
}
.field-name {
-moz-hyphens: manual;
-ms-hyphens: manual;
-webkit-hyphens: manual;
hyphens: manual;
}
/* -- hlist styles ---------------------------------------------------------- */
table.hlist {
margin: 1em 0;
}
table.hlist td {
vertical-align: top;
}
/* -- object description styles --------------------------------------------- */
.sig {
font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace;
}
.sig-name, code.descname {
background-color: transparent;
font-weight: bold;
}
.sig-name {
font-size: 1.1em;
}
code.descname {
font-size: 1.2em;
}
.sig-prename, code.descclassname {
background-color: transparent;
}
.optional {
font-size: 1.3em;
}
.sig-paren {
font-size: larger;
}
.sig-param.n {
font-style: italic;
}
/* C++ specific styling */
.sig-inline.c-texpr,
.sig-inline.cpp-texpr {
font-family: unset;
}
.sig.c .k, .sig.c .kt,
.sig.cpp .k, .sig.cpp .kt {
color: #0033B3;
}
.sig.c .m,
.sig.cpp .m {
color: #1750EB;
}
.sig.c .s, .sig.c .sc,
.sig.cpp .s, .sig.cpp .sc {
color: #067D17;
}
/* -- other body styles ----------------------------------------------------- */
ol.arabic {
list-style: decimal;
}
ol.loweralpha {
list-style: lower-alpha;
}
ol.upperalpha {
list-style: upper-alpha;
}
ol.lowerroman {
list-style: lower-roman;
}
ol.upperroman {
list-style: upper-roman;
}
:not(li) > ol > li:first-child > :first-child,
:not(li) > ul > li:first-child > :first-child {
margin-top: 0px;
}
:not(li) > ol > li:last-child > :last-child,
:not(li) > ul > li:last-child > :last-child {
margin-bottom: 0px;
}
ol.simple ol p,
ol.simple ul p,
ul.simple ol p,
ul.simple ul p {
margin-top: 0;
}
ol.simple > li:not(:first-child) > p,
ul.simple > li:not(:first-child) > p {
margin-top: 0;
}
ol.simple p,
ul.simple p {
margin-bottom: 0;
}
aside.footnote > span,
div.citation > span {
float: left;
}
aside.footnote > span:last-of-type,
div.citation > span:last-of-type {
padding-right: 0.5em;
}
aside.footnote > p {
margin-left: 2em;
}
div.citation > p {
margin-left: 4em;
}
aside.footnote > p:last-of-type,
div.citation > p:last-of-type {
margin-bottom: 0em;
}
aside.footnote > p:last-of-type:after,
div.citation > p:last-of-type:after {
content: "";
clear: both;
}
dl.field-list {
display: grid;
grid-template-columns: fit-content(30%) auto;
}
dl.field-list > dt {
font-weight: bold;
word-break: break-word;
padding-left: 0.5em;
padding-right: 5px;
}
dl.field-list > dd {
padding-left: 0.5em;
margin-top: 0em;
margin-left: 0em;
margin-bottom: 0em;
}
dl {
margin-bottom: 15px;
}
dd > :first-child {
margin-top: 0px;
}
dd ul, dd table {
margin-bottom: 10px;
}
dd {
margin-top: 3px;
margin-bottom: 10px;
margin-left: 30px;
}
dl > dd:last-child,
dl > dd:last-child > :last-child {
margin-bottom: 0;
}
dt:target, span.highlighted {
background-color: #fbe54e;
}
rect.highlighted {
fill: #fbe54e;
}
dl.glossary dt {
font-weight: bold;
font-size: 1.1em;
}
.versionmodified {
font-style: italic;
}
.system-message {
background-color: #fda;
padding: 5px;
border: 3px solid red;
}
.footnote:target {
background-color: #ffa;
}
.line-block {
display: block;
margin-top: 1em;
margin-bottom: 1em;
}
.line-block .line-block {
margin-top: 0;
margin-bottom: 0;
margin-left: 1.5em;
}
.guilabel, .menuselection {
font-family: sans-serif;
}
.accelerator {
text-decoration: underline;
}
.classifier {
font-style: oblique;
}
.classifier:before {
font-style: normal;
margin: 0 0.5em;
content: ":";
display: inline-block;
}
abbr, acronym {
border-bottom: dotted 1px;
cursor: help;
}
/* -- code displays --------------------------------------------------------- */
pre {
overflow: auto;
overflow-y: hidden; /* fixes display issues on Chrome browsers */
}
pre, div[class*="highlight-"] {
clear: both;
}
span.pre {
-moz-hyphens: none;
-ms-hyphens: none;
-webkit-hyphens: none;
hyphens: none;
white-space: nowrap;
}
div[class*="highlight-"] {
margin: 1em 0;
}
td.linenos pre {
border: 0;
background-color: transparent;
color: #aaa;
}
table.highlighttable {
display: block;
}
table.highlighttable tbody {
display: block;
}
table.highlighttable tr {
display: flex;
}
table.highlighttable td {
margin: 0;
padding: 0;
}
table.highlighttable td.linenos {
padding-right: 0.5em;
}
table.highlighttable td.code {
flex: 1;
overflow: hidden;
}
.highlight .hll {
display: block;
}
div.highlight pre,
table.highlighttable pre {
margin: 0;
}
div.code-block-caption + div {
margin-top: 0;
}
div.code-block-caption {
margin-top: 1em;
padding: 2px 5px;
font-size: small;
}
div.code-block-caption code {
background-color: transparent;
}
table.highlighttable td.linenos,
span.linenos,
div.highlight span.gp { /* gp: Generic.Prompt */
user-select: none;
-webkit-user-select: text; /* Safari fallback only */
-webkit-user-select: none; /* Chrome/Safari */
-moz-user-select: none; /* Firefox */
-ms-user-select: none; /* IE10+ */
}
div.code-block-caption span.caption-number {
padding: 0.1em 0.3em;
font-style: italic;
}
div.code-block-caption span.caption-text {
}
div.literal-block-wrapper {
margin: 1em 0;
}
code.xref, a code {
background-color: transparent;
font-weight: bold;
}
h1 code, h2 code, h3 code, h4 code, h5 code, h6 code {
background-color: transparent;
}
.viewcode-link {
float: right;
}
.viewcode-back {
float: right;
font-family: sans-serif;
}
div.viewcode-block:target {
margin: -1px -10px;
padding: 0 10px;
}
/* -- math display ---------------------------------------------------------- */
img.math {
vertical-align: middle;
}
div.body div.math p {
text-align: center;
}
span.eqno {
float: right;
}
span.eqno a.headerlink {
position: absolute;
z-index: 1;
}
div.math:hover a.headerlink {
visibility: visible;
}
/* -- printout stylesheet --------------------------------------------------- */
@media print {
div.document,
div.documentwrapper,
div.bodywrapper {
margin: 0 !important;
width: 100%;
}
div.sphinxsidebar,
div.related,
div.footer,
#top-link {
display: none;
}
}


@@ -0,0 +1 @@
/* This file intentionally left blank. */


@@ -0,0 +1,156 @@
/*
* doctools.js
* ~~~~~~~~~~~
*
* Base JavaScript utilities for all Sphinx HTML documentation.
*
* :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
"use strict";
const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([
"TEXTAREA",
"INPUT",
"SELECT",
"BUTTON",
]);
const _ready = (callback) => {
if (document.readyState !== "loading") {
callback();
} else {
document.addEventListener("DOMContentLoaded", callback);
}
};
/**
* Small JavaScript module for the documentation.
*/
const Documentation = {
init: () => {
Documentation.initDomainIndexTable();
Documentation.initOnKeyListeners();
},
/**
* i18n support
*/
TRANSLATIONS: {},
PLURAL_EXPR: (n) => (n === 1 ? 0 : 1),
LOCALE: "unknown",
// gettext and ngettext don't access this so that the functions
// can safely be bound to a different name (_ = Documentation.gettext)
gettext: (string) => {
const translated = Documentation.TRANSLATIONS[string];
switch (typeof translated) {
case "undefined":
return string; // no translation
case "string":
return translated; // translation exists
default:
return translated[0]; // (singular, plural) translation tuple exists
}
},
ngettext: (singular, plural, n) => {
const translated = Documentation.TRANSLATIONS[singular];
if (typeof translated !== "undefined")
return translated[Documentation.PLURAL_EXPR(n)];
return n === 1 ? singular : plural;
},
addTranslations: (catalog) => {
Object.assign(Documentation.TRANSLATIONS, catalog.messages);
Documentation.PLURAL_EXPR = new Function(
"n",
`return (${catalog.plural_expr})`
);
Documentation.LOCALE = catalog.locale;
},
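// Illustrative sketch (hypothetical catalog, not part of this file) of how
// the pieces above fit together: after
//   Documentation.addTranslations({
//     messages: { "Search Results": "Résultats de recherche" },
//     plural_expr: "n > 1 ? 1 : 0",
//     locale: "fr",
//   });
// a call such as _("Search Results") returns "Résultats de recherche",
// while ngettext("page", "pages", 2) falls back to "pages" because no
// translation tuple is registered for "page".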
/**
* helper function to focus on search bar
*/
focusSearchBar: () => {
document.querySelectorAll("input[name=q]")[0]?.focus();
},
/**
* Initialise the domain index toggle buttons
*/
initDomainIndexTable: () => {
const toggler = (el) => {
const idNumber = el.id.substr(7);
const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`);
if (el.src.substr(-9) === "minus.png") {
el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`;
toggledRows.forEach((el) => (el.style.display = "none"));
} else {
el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`;
toggledRows.forEach((el) => (el.style.display = ""));
}
};
const togglerElements = document.querySelectorAll("img.toggler");
togglerElements.forEach((el) =>
el.addEventListener("click", (event) => toggler(event.currentTarget))
);
togglerElements.forEach((el) => (el.style.display = ""));
if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler);
},
initOnKeyListeners: () => {
// only install a listener if it is really needed
if (
!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS &&
!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS
)
return;
document.addEventListener("keydown", (event) => {
// bail for input elements
if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return;
// bail with special keys
if (event.altKey || event.ctrlKey || event.metaKey) return;
if (!event.shiftKey) {
switch (event.key) {
case "ArrowLeft":
if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break;
const prevLink = document.querySelector('link[rel="prev"]');
if (prevLink && prevLink.href) {
window.location.href = prevLink.href;
event.preventDefault();
}
break;
case "ArrowRight":
if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break;
const nextLink = document.querySelector('link[rel="next"]');
if (nextLink && nextLink.href) {
window.location.href = nextLink.href;
event.preventDefault();
}
break;
}
}
// some keyboard layouts may need Shift to get /
switch (event.key) {
case "/":
if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break;
Documentation.focusSearchBar();
event.preventDefault();
}
});
},
};
// quick alias for translations
const _ = Documentation.gettext;
_ready(Documentation.init);


@@ -0,0 +1,14 @@
var DOCUMENTATION_OPTIONS = {
URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'),
VERSION: '',
LANGUAGE: 'en',
COLLAPSE_INDEX: false,
BUILDER: 'html',
FILE_SUFFIX: '.html',
LINK_SUFFIX: '.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: '.txt',
NAVIGATION_WITH_KEYS: false,
SHOW_SEARCH_SUMMARY: true,
ENABLE_SEARCH_SHORTCUTS: true,
};

Binary file not shown (286 B).

File diff suppressed because it is too large.

File diff suppressed because one or more lines are too long


@@ -0,0 +1,199 @@
/*
* language_data.js
* ~~~~~~~~~~~~~~~~
*
* This script contains the language-specific data used by searchtools.js,
* namely the list of stopwords, stemmer, scorer and splitter.
*
* :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
var stopwords = ["a", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "near", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with"];
/* Non-minified version is copied as a separate JS file, if available */
/**
* Porter Stemmer
*/
var Stemmer = function() {
var step2list = {
ational: 'ate',
tional: 'tion',
enci: 'ence',
anci: 'ance',
izer: 'ize',
bli: 'ble',
alli: 'al',
entli: 'ent',
eli: 'e',
ousli: 'ous',
ization: 'ize',
ation: 'ate',
ator: 'ate',
alism: 'al',
iveness: 'ive',
fulness: 'ful',
ousness: 'ous',
aliti: 'al',
iviti: 'ive',
biliti: 'ble',
logi: 'log'
};
var step3list = {
icate: 'ic',
ative: '',
alize: 'al',
iciti: 'ic',
ical: 'ic',
ful: '',
ness: ''
};
var c = "[^aeiou]"; // consonant
var v = "[aeiouy]"; // vowel
var C = c + "[^aeiouy]*"; // consonant sequence
var V = v + "[aeiou]*"; // vowel sequence
var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0
var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1
var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1
var s_v = "^(" + C + ")?" + v; // vowel in stem
this.stemWord = function (w) {
var stem;
var suffix;
var firstch;
var origword = w;
if (w.length < 3)
return w;
var re;
var re2;
var re3;
var re4;
firstch = w.substr(0,1);
if (firstch == "y")
w = firstch.toUpperCase() + w.substr(1);
// Step 1a
re = /^(.+?)(ss|i)es$/;
re2 = /^(.+?)([^s])s$/;
if (re.test(w))
w = w.replace(re,"$1$2");
else if (re2.test(w))
w = w.replace(re2,"$1$2");
// Step 1b
re = /^(.+?)eed$/;
re2 = /^(.+?)(ed|ing)$/;
if (re.test(w)) {
var fp = re.exec(w);
re = new RegExp(mgr0);
if (re.test(fp[1])) {
re = /.$/;
w = w.replace(re,"");
}
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1];
re2 = new RegExp(s_v);
if (re2.test(stem)) {
w = stem;
re2 = /(at|bl|iz)$/;
re3 = new RegExp("([^aeiouylsz])\\1$");
re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re2.test(w))
w = w + "e";
else if (re3.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
else if (re4.test(w))
w = w + "e";
}
}
// Step 1c
re = /^(.+?)y$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(s_v);
if (re.test(stem))
w = stem + "i";
}
// Step 2
re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step2list[suffix];
}
// Step 3
re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step3list[suffix];
}
// Step 4
re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
re2 = /^(.+?)(s|t)(ion)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
if (re.test(stem))
w = stem;
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1] + fp[2];
re2 = new RegExp(mgr1);
if (re2.test(stem))
w = stem;
}
// Step 5
re = /^(.+?)e$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
re2 = new RegExp(meq1);
re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re.test(stem) || (re2.test(stem) && !(re3.test(stem))))
w = stem;
}
re = /ll$/;
re2 = new RegExp(mgr1);
if (re.test(w) && re2.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
// and turn initial Y back to y
if (firstch == "y")
w = firstch.toLowerCase() + w.substr(1);
return w;
}
}
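// Illustrative behaviour (computed from the rules above): stemWord maps
// inflected forms onto a shared stem, e.g.
//   new Stemmer().stemWord("relational")  // -> "relate"
//   new Stemmer().stemWord("relations")   // -> "relate"
// so a query for "relations" also matches pages indexed under "relational".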

Binary file not shown (90 B).

Binary file not shown (90 B).


@@ -0,0 +1,83 @@
pre { line-height: 125%; }
td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; }
span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; }
td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; }
span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; }
.highlight .hll { background-color: #ffffcc }
.highlight { background: #f8f8f8; }
.highlight .c { color: #8f5902; font-style: italic } /* Comment */
.highlight .err { color: #a40000; border: 1px solid #ef2929 } /* Error */
.highlight .g { color: #000000 } /* Generic */
.highlight .k { color: #004461; font-weight: bold } /* Keyword */
.highlight .l { color: #000000 } /* Literal */
.highlight .n { color: #000000 } /* Name */
.highlight .o { color: #582800 } /* Operator */
.highlight .x { color: #000000 } /* Other */
.highlight .p { color: #000000; font-weight: bold } /* Punctuation */
.highlight .ch { color: #8f5902; font-style: italic } /* Comment.Hashbang */
.highlight .cm { color: #8f5902; font-style: italic } /* Comment.Multiline */
.highlight .cp { color: #8f5902 } /* Comment.Preproc */
.highlight .cpf { color: #8f5902; font-style: italic } /* Comment.PreprocFile */
.highlight .c1 { color: #8f5902; font-style: italic } /* Comment.Single */
.highlight .cs { color: #8f5902; font-style: italic } /* Comment.Special */
.highlight .gd { color: #a40000 } /* Generic.Deleted */
.highlight .ge { color: #000000; font-style: italic } /* Generic.Emph */
.highlight .gr { color: #ef2929 } /* Generic.Error */
.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */
.highlight .gi { color: #00A000 } /* Generic.Inserted */
.highlight .go { color: #888888 } /* Generic.Output */
.highlight .gp { color: #745334 } /* Generic.Prompt */
.highlight .gs { color: #000000; font-weight: bold } /* Generic.Strong */
.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */
.highlight .gt { color: #a40000; font-weight: bold } /* Generic.Traceback */
.highlight .kc { color: #004461; font-weight: bold } /* Keyword.Constant */
.highlight .kd { color: #004461; font-weight: bold } /* Keyword.Declaration */
.highlight .kn { color: #004461; font-weight: bold } /* Keyword.Namespace */
.highlight .kp { color: #004461; font-weight: bold } /* Keyword.Pseudo */
.highlight .kr { color: #004461; font-weight: bold } /* Keyword.Reserved */
.highlight .kt { color: #004461; font-weight: bold } /* Keyword.Type */
.highlight .ld { color: #000000 } /* Literal.Date */
.highlight .m { color: #990000 } /* Literal.Number */
.highlight .s { color: #4e9a06 } /* Literal.String */
.highlight .na { color: #c4a000 } /* Name.Attribute */
.highlight .nb { color: #004461 } /* Name.Builtin */
.highlight .nc { color: #000000 } /* Name.Class */
.highlight .no { color: #000000 } /* Name.Constant */
.highlight .nd { color: #888888 } /* Name.Decorator */
.highlight .ni { color: #ce5c00 } /* Name.Entity */
.highlight .ne { color: #cc0000; font-weight: bold } /* Name.Exception */
.highlight .nf { color: #000000 } /* Name.Function */
.highlight .nl { color: #f57900 } /* Name.Label */
.highlight .nn { color: #000000 } /* Name.Namespace */
.highlight .nx { color: #000000 } /* Name.Other */
.highlight .py { color: #000000 } /* Name.Property */
.highlight .nt { color: #004461; font-weight: bold } /* Name.Tag */
.highlight .nv { color: #000000 } /* Name.Variable */
.highlight .ow { color: #004461; font-weight: bold } /* Operator.Word */
.highlight .pm { color: #000000; font-weight: bold } /* Punctuation.Marker */
.highlight .w { color: #f8f8f8; text-decoration: underline } /* Text.Whitespace */
.highlight .mb { color: #990000 } /* Literal.Number.Bin */
.highlight .mf { color: #990000 } /* Literal.Number.Float */
.highlight .mh { color: #990000 } /* Literal.Number.Hex */
.highlight .mi { color: #990000 } /* Literal.Number.Integer */
.highlight .mo { color: #990000 } /* Literal.Number.Oct */
.highlight .sa { color: #4e9a06 } /* Literal.String.Affix */
.highlight .sb { color: #4e9a06 } /* Literal.String.Backtick */
.highlight .sc { color: #4e9a06 } /* Literal.String.Char */
.highlight .dl { color: #4e9a06 } /* Literal.String.Delimiter */
.highlight .sd { color: #8f5902; font-style: italic } /* Literal.String.Doc */
.highlight .s2 { color: #4e9a06 } /* Literal.String.Double */
.highlight .se { color: #4e9a06 } /* Literal.String.Escape */
.highlight .sh { color: #4e9a06 } /* Literal.String.Heredoc */
.highlight .si { color: #4e9a06 } /* Literal.String.Interpol */
.highlight .sx { color: #4e9a06 } /* Literal.String.Other */
.highlight .sr { color: #4e9a06 } /* Literal.String.Regex */
.highlight .s1 { color: #4e9a06 } /* Literal.String.Single */
.highlight .ss { color: #4e9a06 } /* Literal.String.Symbol */
.highlight .bp { color: #3465a4 } /* Name.Builtin.Pseudo */
.highlight .fm { color: #000000 } /* Name.Function.Magic */
.highlight .vc { color: #000000 } /* Name.Variable.Class */
.highlight .vg { color: #000000 } /* Name.Variable.Global */
.highlight .vi { color: #000000 } /* Name.Variable.Instance */
.highlight .vm { color: #000000 } /* Name.Variable.Magic */
.highlight .il { color: #990000 } /* Literal.Number.Integer.Long */


@@ -0,0 +1,566 @@
/*
* searchtools.js
* ~~~~~~~~~~~~~~~~
*
* Sphinx JavaScript utilities for the full-text search.
*
* :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
"use strict";
/**
* Simple result scoring code.
*/
if (typeof Scorer === "undefined") {
var Scorer = {
// Implement the following function to further tweak the score for each result
// The function takes a result array [docname, title, anchor, descr, score, filename]
// and returns the new score.
/*
score: result => {
const [docname, title, anchor, descr, score, filename] = result
return score
},
*/
// query matches the full name of an object
objNameMatch: 11,
// or matches in the last dotted part of the object name
objPartialMatch: 6,
// Additive scores depending on the priority of the object
objPrio: {
0: 15, // used to be importantResults
1: 5, // used to be objectResults
2: -5, // used to be unimportantResults
},
// Used when the priority is not in the mapping.
objPrioDefault: 0,
// query found in title
title: 15,
partialTitle: 7,
// query found in terms
term: 5,
partialTerm: 2,
};
}
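// A minimal sketch (hypothetical override, not part of this file): a theme
// can predefine Scorer before this script loads, and the typeof guard above
// keeps these defaults from replacing it. An override must supply every
// field used below (term, title, objNameMatch, objPrio, ...), e.g.:
//   var Scorer = {
//     score: (result) => result[4],  // identity: keep the computed score
//     objNameMatch: 11, objPartialMatch: 6,
//     objPrio: { 0: 15, 1: 5, 2: -5 }, objPrioDefault: 0,
//     title: 15, partialTitle: 7, term: 5, partialTerm: 2,
//   };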
const _removeChildren = (element) => {
while (element && element.lastChild) element.removeChild(element.lastChild);
};
/**
* See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping
*/
const _escapeRegExp = (string) =>
string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string
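// For example, _escapeRegExp("c++ (api)") yields the string c\+\+ \(api\),
// so user-typed query text can be embedded in a RegExp literally.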
const _displayItem = (item, searchTerms) => {
const docBuilder = DOCUMENTATION_OPTIONS.BUILDER;
const docUrlRoot = DOCUMENTATION_OPTIONS.URL_ROOT;
const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX;
const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX;
const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY;
const [docName, title, anchor, descr, score, _filename] = item;
let listItem = document.createElement("li");
let requestUrl;
let linkUrl;
if (docBuilder === "dirhtml") {
// dirhtml builder
let dirname = docName + "/";
if (dirname.match(/\/index\/$/))
dirname = dirname.substring(0, dirname.length - 6);
else if (dirname === "index/") dirname = "";
requestUrl = docUrlRoot + dirname;
linkUrl = requestUrl;
} else {
// normal html builders
requestUrl = docUrlRoot + docName + docFileSuffix;
linkUrl = docName + docLinkSuffix;
}
let linkEl = listItem.appendChild(document.createElement("a"));
linkEl.href = linkUrl + anchor;
linkEl.dataset.score = score;
linkEl.innerHTML = title;
if (descr)
listItem.appendChild(document.createElement("span")).innerHTML =
" (" + descr + ")";
else if (showSearchSummary)
fetch(requestUrl)
.then((responseData) => responseData.text())
.then((data) => {
if (data)
listItem.appendChild(
Search.makeSearchSummary(data, searchTerms)
);
});
Search.output.appendChild(listItem);
};
const _finishSearch = (resultCount) => {
Search.stopPulse();
Search.title.innerText = _("Search Results");
if (!resultCount)
Search.status.innerText = Documentation.gettext(
"Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories."
);
else
Search.status.innerText = _(
`Search finished, found ${resultCount} page(s) matching the search query.`
);
};
const _displayNextItem = (
results,
resultCount,
searchTerms
) => {
// results left, load the summary and display it
// this is intended to be dynamic (don't sub resultsCount)
if (results.length) {
_displayItem(results.pop(), searchTerms);
setTimeout(
() => _displayNextItem(results, resultCount, searchTerms),
5
);
}
// search finished, update title and status message
else _finishSearch(resultCount);
};
/**
* Default splitQuery function. Can be overridden in ``sphinx.search`` with a
* custom function per language.
*
* The regular expression works by splitting the string on consecutive characters
* that are not Unicode letters, numbers, underscores, or emoji characters.
* This is the same as ``\W+`` in Python, preserving the surrogate pair area.
*/
if (typeof splitQuery === "undefined") {
var splitQuery = (query) => query
.split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu)
.filter(term => term) // remove remaining empty strings
}
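// For example, splitQuery("dbt-core run_results v1.3") yields
// ["dbt", "core", "run_results", "v1", "3"]: hyphens, spaces, and dots
// split terms, while underscores are kept.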
/**
* Search Module
*/
const Search = {
_index: null,
_queued_query: null,
_pulse_status: -1,
htmlToText: (htmlString) => {
const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html');
htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() });
const docContent = htmlElement.querySelector('[role="main"]');
if (docContent !== null) return docContent.textContent;
console.warn(
"Content block not found. Sphinx search tries to obtain it via '[role=main]'. Check your theme or template."
);
return "";
},
init: () => {
const query = new URLSearchParams(window.location.search).get("q");
document
.querySelectorAll('input[name="q"]')
.forEach((el) => (el.value = query));
if (query) Search.performSearch(query);
},
loadIndex: (url) =>
(document.body.appendChild(document.createElement("script")).src = url),
setIndex: (index) => {
Search._index = index;
if (Search._queued_query !== null) {
const query = Search._queued_query;
Search._queued_query = null;
Search.query(query);
}
},
hasIndex: () => Search._index !== null,
deferQuery: (query) => (Search._queued_query = query),
stopPulse: () => (Search._pulse_status = -1),
startPulse: () => {
if (Search._pulse_status >= 0) return;
const pulse = () => {
Search._pulse_status = (Search._pulse_status + 1) % 4;
Search.dots.innerText = ".".repeat(Search._pulse_status);
if (Search._pulse_status >= 0) window.setTimeout(pulse, 500);
};
pulse();
},
/**
* perform a search for something (or wait until index is loaded)
*/
performSearch: (query) => {
// create the required interface elements
const searchText = document.createElement("h2");
searchText.textContent = _("Searching");
const searchSummary = document.createElement("p");
searchSummary.classList.add("search-summary");
searchSummary.innerText = "";
const searchList = document.createElement("ul");
searchList.classList.add("search");
const out = document.getElementById("search-results");
Search.title = out.appendChild(searchText);
Search.dots = Search.title.appendChild(document.createElement("span"));
Search.status = out.appendChild(searchSummary);
Search.output = out.appendChild(searchList);
const searchProgress = document.getElementById("search-progress");
// Some themes don't use the search progress node
if (searchProgress) {
searchProgress.innerText = _("Preparing search...");
}
Search.startPulse();
// index already loaded, the browser was quick!
if (Search.hasIndex()) Search.query(query);
else Search.deferQuery(query);
},
/**
* execute search (requires search index to be loaded)
*/
query: (query) => {
const filenames = Search._index.filenames;
const docNames = Search._index.docnames;
const titles = Search._index.titles;
const allTitles = Search._index.alltitles;
const indexEntries = Search._index.indexentries;
// stem the search terms and add them to the correct list
const stemmer = new Stemmer();
const searchTerms = new Set();
const excludedTerms = new Set();
const highlightTerms = new Set();
const objectTerms = new Set(splitQuery(query.toLowerCase().trim()));
splitQuery(query.trim()).forEach((queryTerm) => {
const queryTermLower = queryTerm.toLowerCase();
// maybe skip this "word"
// stopwords array is from language_data.js
if (
stopwords.indexOf(queryTermLower) !== -1 ||
queryTerm.match(/^\d+$/)
)
return;
// stem the word
let word = stemmer.stemWord(queryTermLower);
// select the correct list
if (word[0] === "-") excludedTerms.add(word.substr(1));
else {
searchTerms.add(word);
highlightTerms.add(queryTermLower);
}
});
if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js
localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" "))
}
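// The terms stored above are picked up (and cleared) by sphinx_highlight.js
// on the result page the user opens, which wraps each match in a
// <span class="highlighted"> element.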
// console.debug("SEARCH: searching for:");
// console.info("required: ", [...searchTerms]);
// console.info("excluded: ", [...excludedTerms]);
// array of [docname, title, anchor, descr, score, filename]
let results = [];
_removeChildren(document.getElementById("search-progress"));
const queryLower = query.toLowerCase();
for (const [title, foundTitles] of Object.entries(allTitles)) {
if (title.toLowerCase().includes(queryLower) && (queryLower.length >= title.length/2)) {
for (const [file, id] of foundTitles) {
let score = Math.round(100 * queryLower.length / title.length)
results.push([
docNames[file],
titles[file] !== title ? `${titles[file]} > ${title}` : title,
id !== null ? "#" + id : "",
null,
score,
filenames[file],
]);
}
}
}
// search for explicit entries in index directives
for (const [entry, foundEntries] of Object.entries(indexEntries)) {
if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) {
for (const [file, id] of foundEntries) {
let score = Math.round(100 * queryLower.length / entry.length)
results.push([
docNames[file],
titles[file],
id ? "#" + id : "",
null,
score,
filenames[file],
]);
}
}
}
// lookup as object
objectTerms.forEach((term) =>
results.push(...Search.performObjectSearch(term, objectTerms))
);
// lookup as search terms in fulltext
results.push(...Search.performTermsSearch(searchTerms, excludedTerms));
// let the scorer override scores with a custom scoring function
if (Scorer.score) results.forEach((item) => (item[4] = Scorer.score(item)));
// now sort the results by score (in opposite order of appearance, since the
// display function below uses pop() to retrieve items) and then
// alphabetically
results.sort((a, b) => {
const leftScore = a[4];
const rightScore = b[4];
if (leftScore === rightScore) {
// same score: sort alphabetically
const leftTitle = a[1].toLowerCase();
const rightTitle = b[1].toLowerCase();
if (leftTitle === rightTitle) return 0;
return leftTitle > rightTitle ? -1 : 1; // inverted is intentional
}
return leftScore > rightScore ? 1 : -1;
});
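// Example of the ordering above: scores [5, 9, 1] sort ascending to
// [1, 5, 9]; _displayNextItem() pops from the end of the array, so the
// score-9 result is rendered first.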
// remove duplicate search results
// note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept
let seen = new Set();
results = results.reverse().reduce((acc, result) => {
let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(',');
if (!seen.has(resultStr)) {
acc.push(result);
seen.add(resultStr);
}
return acc;
}, []);
results = results.reverse();
// for debugging
//Search.lastresults = results.slice(); // a copy
// console.info("search results:", Search.lastresults);
// print the results
_displayNextItem(results, results.length, searchTerms);
},
/**
* search for object names
*/
performObjectSearch: (object, objectTerms) => {
const filenames = Search._index.filenames;
const docNames = Search._index.docnames;
const objects = Search._index.objects;
const objNames = Search._index.objnames;
const titles = Search._index.titles;
const results = [];
const objectSearchCallback = (prefix, match) => {
const name = match[4]
const fullname = (prefix ? prefix + "." : "") + name;
const fullnameLower = fullname.toLowerCase();
if (fullnameLower.indexOf(object) < 0) return;
let score = 0;
const parts = fullnameLower.split(".");
// check for different match types: exact matches of full name or
// "last name" (i.e. last dotted part)
if (fullnameLower === object || parts.slice(-1)[0] === object)
score += Scorer.objNameMatch;
else if (parts.slice(-1)[0].indexOf(object) > -1)
score += Scorer.objPartialMatch; // matches in last name
const objName = objNames[match[1]][2];
const title = titles[match[0]];
// If more than one term searched for, we require other words to be
// found in the name/title/description
const otherTerms = new Set(objectTerms);
otherTerms.delete(object);
if (otherTerms.size > 0) {
const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase();
if (
[...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0)
)
return;
}
let anchor = match[3];
if (anchor === "") anchor = fullname;
else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname;
const descr = objName + _(", in ") + title;
// add custom score for some objects according to scorer
if (Scorer.objPrio.hasOwnProperty(match[2]))
score += Scorer.objPrio[match[2]];
else score += Scorer.objPrioDefault;
results.push([
docNames[match[0]],
fullname,
"#" + anchor,
descr,
score,
filenames[match[0]],
]);
};
Object.keys(objects).forEach((prefix) =>
objects[prefix].forEach((array) =>
objectSearchCallback(prefix, array)
)
);
return results;
},
/**
* search for full-text terms in the index
*/
performTermsSearch: (searchTerms, excludedTerms) => {
// prepare search
const terms = Search._index.terms;
const titleTerms = Search._index.titleterms;
const filenames = Search._index.filenames;
const docNames = Search._index.docnames;
const titles = Search._index.titles;
const scoreMap = new Map();
const fileMap = new Map();
// perform the search on the required terms
searchTerms.forEach((word) => {
const files = [];
const arr = [
{ files: terms[word], score: Scorer.term },
{ files: titleTerms[word], score: Scorer.title },
];
// add support for partial matches
if (word.length > 2) {
const escapedWord = _escapeRegExp(word);
Object.keys(terms).forEach((term) => {
if (term.match(escapedWord) && !terms[word])
arr.push({ files: terms[term], score: Scorer.partialTerm });
});
Object.keys(titleTerms).forEach((term) => {
if (term.match(escapedWord) && !titleTerms[word])
arr.push({ files: titleTerms[term], score: Scorer.partialTitle });
});
}
// no match but word was a required one
if (arr.every((record) => record.files === undefined)) return;
// found search word in contents
arr.forEach((record) => {
if (record.files === undefined) return;
let recordFiles = record.files;
if (recordFiles.length === undefined) recordFiles = [recordFiles];
files.push(...recordFiles);
// set score for the word in each file
recordFiles.forEach((file) => {
if (!scoreMap.has(file)) scoreMap.set(file, {});
scoreMap.get(file)[word] = record.score;
});
});
// create the mapping
files.forEach((file) => {
if (fileMap.has(file) && fileMap.get(file).indexOf(word) === -1)
fileMap.get(file).push(word);
else fileMap.set(file, [word]);
});
});
// now check if the files don't contain excluded terms
const results = [];
for (const [file, wordList] of fileMap) {
// check if all requirements are matched
// as search terms with length < 3 are discarded
const filteredTermCount = [...searchTerms].filter(
(term) => term.length > 2
).length;
if (
wordList.length !== searchTerms.size &&
wordList.length !== filteredTermCount
)
continue;
// ensure that none of the excluded terms is in the search result
if (
[...excludedTerms].some(
(term) =>
terms[term] === file ||
titleTerms[term] === file ||
(terms[term] || []).includes(file) ||
(titleTerms[term] || []).includes(file)
)
)
break;
// select one (max) score for the file.
const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w]));
// add result to the result list
results.push([
docNames[file],
titles[file],
"",
null,
score,
filenames[file],
]);
}
return results;
},
/**
* helper function to return a node containing the
* search summary for a given text. keywords is a list
* of stemmed words.
*/
makeSearchSummary: (htmlText, keywords) => {
const text = Search.htmlToText(htmlText);
if (text === "") return null;
const textLower = text.toLowerCase();
const actualStartPosition = [...keywords]
.map((k) => textLower.indexOf(k.toLowerCase()))
.filter((i) => i > -1)
.slice(-1)[0];
const startWithContext = Math.max(actualStartPosition - 120, 0);
const top = startWithContext === 0 ? "" : "...";
const tail = startWithContext + 240 < text.length ? "..." : "";
let summary = document.createElement("p");
summary.classList.add("context");
summary.textContent = top + text.substr(startWithContext, 240).trim() + tail;
return summary;
},
};
_ready(Search.init);


@@ -0,0 +1,144 @@
/* Highlighting utilities for Sphinx HTML documentation. */
"use strict";
const SPHINX_HIGHLIGHT_ENABLED = true;
/**
* highlight a given string on a node by wrapping it in
* span elements with the given class name.
*/
const _highlight = (node, addItems, text, className) => {
if (node.nodeType === Node.TEXT_NODE) {
const val = node.nodeValue;
const parent = node.parentNode;
const pos = val.toLowerCase().indexOf(text);
if (
pos >= 0 &&
!parent.classList.contains(className) &&
!parent.classList.contains("nohighlight")
) {
let span;
const closestNode = parent.closest("body, svg, foreignObject");
const isInSVG = closestNode && closestNode.matches("svg");
if (isInSVG) {
span = document.createElementNS("http://www.w3.org/2000/svg", "tspan");
} else {
span = document.createElement("span");
span.classList.add(className);
}
span.appendChild(document.createTextNode(val.substr(pos, text.length)));
parent.insertBefore(
span,
parent.insertBefore(
document.createTextNode(val.substr(pos + text.length)),
node.nextSibling
)
);
node.nodeValue = val.substr(0, pos);
if (isInSVG) {
const rect = document.createElementNS(
"http://www.w3.org/2000/svg",
"rect"
);
const bbox = parent.getBBox();
rect.x.baseVal.value = bbox.x;
rect.y.baseVal.value = bbox.y;
rect.width.baseVal.value = bbox.width;
rect.height.baseVal.value = bbox.height;
rect.setAttribute("class", className);
addItems.push({ parent: parent, target: rect });
}
}
} else if (node.matches && !node.matches("button, select, textarea")) {
node.childNodes.forEach((el) => _highlight(el, addItems, text, className));
}
};
const _highlightText = (thisNode, text, className) => {
let addItems = [];
_highlight(thisNode, addItems, text, className);
addItems.forEach((obj) =>
obj.parent.insertAdjacentElement("beforebegin", obj.target)
);
};
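// For example, _highlightText(document.body, "dbt", "highlighted") wraps
// matches of "dbt" in <span class="highlighted">dbt</span> (or, inside an
// SVG, in a <tspan> plus a backing <rect> carrying the same class).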
/**
* Small JavaScript module for the documentation.
*/
const SphinxHighlight = {
/**
* highlight the search words provided in localstorage in the text
*/
highlightSearchWords: () => {
if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight
// get and clear terms from localstorage
const url = new URL(window.location);
const highlight =
localStorage.getItem("sphinx_highlight_terms")
|| url.searchParams.get("highlight")
|| "";
localStorage.removeItem("sphinx_highlight_terms")
url.searchParams.delete("highlight");
window.history.replaceState({}, "", url);
// get individual terms from highlight string
const terms = highlight.toLowerCase().split(/\s+/).filter(x => x);
if (terms.length === 0) return; // nothing to do
// There should never be more than one element matching "div.body"
const divBody = document.querySelectorAll("div.body");
const body = divBody.length ? divBody[0] : document.querySelector("body");
window.setTimeout(() => {
terms.forEach((term) => _highlightText(body, term, "highlighted"));
}, 10);
const searchBox = document.getElementById("searchbox");
if (searchBox === null) return;
searchBox.appendChild(
document
.createRange()
.createContextualFragment(
'<p class="highlight-link">' +
'<a href="javascript:SphinxHighlight.hideSearchWords()">' +
_("Hide Search Matches") +
"</a></p>"
)
);
},
/**
* helper function to hide the search marks again
*/
hideSearchWords: () => {
document
.querySelectorAll("#searchbox .highlight-link")
.forEach((el) => el.remove());
document
.querySelectorAll("span.highlighted")
.forEach((el) => el.classList.remove("highlighted"));
localStorage.removeItem("sphinx_highlight_terms")
},
initEscapeListener: () => {
// only install a listener if it is really needed
if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return;
document.addEventListener("keydown", (event) => {
// bail for input elements
if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return;
// bail with special keys
if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return;
if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) {
SphinxHighlight.hideSearchWords();
event.preventDefault();
}
});
},
};
_ready(SphinxHighlight.highlightSearchWords);
_ready(SphinxHighlight.initEscapeListener);

File diff suppressed because it is too large.

File diff suppressed because one or more lines are too long

core/dbt/docs/build/html/genindex.html (vendored new file, 102 lines)

@@ -0,0 +1,102 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Index &#8212; dbt-core documentation</title>
<link rel="stylesheet" type="text/css" href="_static/pygments.css" />
<link rel="stylesheet" type="text/css" href="_static/alabaster.css" />
<script data-url_root="./" id="documentation_options" src="_static/documentation_options.js"></script>
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/_sphinx_javascript_frameworks_compat.js"></script>
<script src="_static/doctools.js"></script>
<script src="_static/sphinx_highlight.js"></script>
<link rel="index" title="Index" href="#" />
<link rel="search" title="Search" href="search.html" />
<link rel="stylesheet" href="_static/custom.css" type="text/css" />
<meta name="viewport" content="width=device-width, initial-scale=0.9, maximum-scale=0.9" />
</head><body>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body" role="main">
<h1 id="index">Index</h1>
<div class="genindex-jumpbox">
</div>
</div>
</div>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<h1 class="logo"><a href="index.html">dbt-core</a></h1>
<h3>Navigation</h3>
<div class="relations">
<h3>Related Topics</h3>
<ul>
<li><a href="index.html">Documentation overview</a><ul>
</ul></li>
</ul>
</div>
<div id="searchbox" style="display: none" role="search">
<h3 id="searchlabel">Quick search</h3>
<div class="searchformwrapper">
<form class="search" action="search.html" method="get">
<input type="text" name="q" aria-labelledby="searchlabel" autocomplete="off" autocorrect="off" autocapitalize="off" spellcheck="false"/>
<input type="submit" value="Go" />
</form>
</div>
</div>
<script>document.getElementById('searchbox').style.display = "block"</script>
</div>
</div>
<div class="clearer"></div>
</div>
<div class="footer">
&copy;2022, dbt Labs.
|
Powered by <a href="http://sphinx-doc.org/">Sphinx 5.3.0</a>
&amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.12</a>
</div>
</body>
</html>

core/dbt/docs/build/html/index.html (vendored new file, 855 lines)

@@ -0,0 +1,855 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="generator" content="Docutils 0.19: https://docutils.sourceforge.io/" />
<title>dbt-core's API documentation &#8212; dbt-core documentation</title>
<link rel="stylesheet" type="text/css" href="_static/pygments.css" />
<link rel="stylesheet" type="text/css" href="_static/alabaster.css" />
<script data-url_root="./" id="documentation_options" src="_static/documentation_options.js"></script>
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/_sphinx_javascript_frameworks_compat.js"></script>
<script src="_static/doctools.js"></script>
<script src="_static/sphinx_highlight.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
<link rel="stylesheet" href="_static/custom.css" type="text/css" />
<meta name="viewport" content="width=device-width, initial-scale=0.9, maximum-scale=0.9" />
</head><body>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body" role="main">
<section id="dbt-core-s-api-documentation">
<h1>dbt-core's API documentation<a class="headerlink" href="#dbt-core-s-api-documentation" title="Permalink to this heading"></a></h1>
<section id="dbt-section">
<h2>Command: build<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="build|defer">
<h3>defer<a class="headerlink" href="#build|defer" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>If set, defer to the state variable for resolving unselected nodes.</p>
</section>
<section id="build|exclude">
<h3>exclude<a class="headerlink" href="#build|exclude" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to exclude.</p>
</section>
<section id="build|fail_fast">
<h3>fail_fast<a class="headerlink" href="#build|fail_fast" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Stop execution on first failure.</p>
</section>
<section id="build|full_refresh">
<h3>full_refresh<a class="headerlink" href="#build|full_refresh" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>If specified, dbt will drop incremental models and fully-recalculate the incremental table from the model definition.</p>
</section>
<section id="build|indirect_selection">
<h3>indirect_selection<a class="headerlink" href="#build|indirect_selection" title="Permalink to this heading"></a></h3>
<p>Type: choice: [eager, cautious]</p>
<p>Select all tests that are adjacent to selected resources, even if they themselves are not explicitly selected.</p>
</section>
<section id="build|log_path">
<h3>log_path<a class="headerlink" href="#build|log_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the log-path. Only applies this setting for the current run. Overrides the DBT_LOG_PATH if it is set.</p>
</section>
<section id="build|models">
<h3>models<a class="headerlink" href="#build|models" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to include.</p>
</section>
<section id="build|profile">
<h3>profile<a class="headerlink" href="#build|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="build|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#build|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="build|project_dir">
<h3>project_dir<a class="headerlink" href="#build|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="build|selector">
<h3>selector<a class="headerlink" href="#build|selector" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>The selector name to use, as defined in selectors.yml</p>
</section>
<section id="build|show">
<h3>show<a class="headerlink" href="#build|show" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Show a sample of the loaded data in the terminal</p>
</section>
<section id="build|state">
<h3>state<a class="headerlink" href="#build|state" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>If set, use the given directory as the source for json files to compare with this project.</p>
</section>
<section id="build|store_failures">
<h3>store_failures<a class="headerlink" href="#build|store_failures" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Store test results (failing rows) in the database</p>
</section>
<section id="build|target">
<h3>target<a class="headerlink" href="#build|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="build|target_path">
<h3>target_path<a class="headerlink" href="#build|target_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the target-path. Only applies this setting for the current run. Overrides the DBT_TARGET_PATH if it is set.</p>
</section>
<section id="build|threads">
<h3>threads<a class="headerlink" href="#build|threads" title="Permalink to this heading"></a></h3>
<p>Type: int</p>
<p>Specify number of threads to use while executing models. Overrides settings in profiles.yml.</p>
</section>
<section id="build|vars">
<h3>vars<a class="headerlink" href="#build|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. This argument should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<section id="build|version_check">
<h3>version_check<a class="headerlink" href="#build|version_check" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Ensure dbt's version matches the one specified in the dbt_project.yml file (require-dbt-version)</p>
</section>
<h2>Command: clean<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="clean|profile">
<h3>profile<a class="headerlink" href="#clean|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="clean|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#clean|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="clean|project_dir">
<h3>project_dir<a class="headerlink" href="#clean|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="clean|target">
<h3>target<a class="headerlink" href="#clean|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="clean|vars">
<h3>vars<a class="headerlink" href="#clean|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<h2>Command: compile<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="compile|defer">
<h3>defer<a class="headerlink" href="#compile|defer" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>If set, defer to the state variable for resolving unselected nodes.</p>
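<p>A sketch of deferral, assuming artifacts from a prior run live in a hypothetical <code>./prod-artifacts</code> directory: <code>dbt compile --defer --state ./prod-artifacts</code></p>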
</section>
<section id="compile|exclude">
<h3>exclude<a class="headerlink" href="#compile|exclude" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to exclude.</p>
</section>
<section id="compile|full_refresh">
<h3>full_refresh<a class="headerlink" href="#compile|full_refresh" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>If specified, dbt will drop incremental models and fully-recalculate the incremental table from the model definition.</p>
</section>
<section id="compile|log_path">
<h3>log_path<a class="headerlink" href="#compile|log_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the log-path. Applies only to the current run, overriding DBT_LOG_PATH if it is set.</p>
</section>
<section id="compile|models">
<h3>models<a class="headerlink" href="#compile|models" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to include.</p>
</section>
<section id="compile|parse_only">
<h3>parse_only<a class="headerlink" href="#compile|parse_only" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>TODO: No help text currently available</p>
</section>
<section id="compile|profile">
<h3>profile<a class="headerlink" href="#compile|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="compile|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#compile|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="compile|project_dir">
<h3>project_dir<a class="headerlink" href="#compile|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="compile|selector">
<h3>selector<a class="headerlink" href="#compile|selector" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>The selector name to use, as defined in selectors.yml</p>
</section>
<section id="compile|state">
<h3>state<a class="headerlink" href="#compile|state" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>If set, use the given directory as the source for json files to compare with this project.</p>
</section>
<section id="compile|target">
<h3>target<a class="headerlink" href="#compile|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="compile|target_path">
<h3>target_path<a class="headerlink" href="#compile|target_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the target-path. Applies only to the current run, overriding DBT_TARGET_PATH if it is set.</p>
</section>
<section id="compile|threads">
<h3>threads<a class="headerlink" href="#compile|threads" title="Permalink to this heading"></a></h3>
<p>Type: int</p>
<p>Specify the number of threads to use while executing models. Overrides settings in profiles.yml.</p>
</section>
<section id="compile|vars">
<h3>vars<a class="headerlink" href="#compile|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<section id="compile|version_check">
<h3>version_check<a class="headerlink" href="#compile|version_check" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Ensure dbt's version matches the one specified in the dbt_project.yml file (require-dbt-version)</p>
</section>
<h2>Command: debug<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="debug|config_dir">
<h3>config_dir<a class="headerlink" href="#debug|config_dir" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>If specified, dbt will show path information for this project.</p>
</section>
<section id="debug|profile">
<h3>profile<a class="headerlink" href="#debug|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="debug|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#debug|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="debug|project_dir">
<h3>project_dir<a class="headerlink" href="#debug|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="debug|target">
<h3>target<a class="headerlink" href="#debug|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="debug|vars">
<h3>vars<a class="headerlink" href="#debug|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<section id="debug|version_check">
<h3>version_check<a class="headerlink" href="#debug|version_check" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Ensure dbt's version matches the one specified in the dbt_project.yml file (require-dbt-version)</p>
</section>
<h2>Command: deps<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="deps|profile">
<h3>profile<a class="headerlink" href="#deps|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="deps|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#deps|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="deps|project_dir">
<h3>project_dir<a class="headerlink" href="#deps|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="deps|target">
<h3>target<a class="headerlink" href="#deps|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="deps|vars">
<h3>vars<a class="headerlink" href="#deps|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<h2>Command: docs<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<h2>Command: init<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="init|profile">
<h3>profile<a class="headerlink" href="#init|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="init|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#init|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="init|project_dir">
<h3>project_dir<a class="headerlink" href="#init|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="init|skip_profile_setup">
<h3>skip_profile_setup<a class="headerlink" href="#init|skip_profile_setup" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Skip interactive profile setup.</p>
</section>
<section id="init|target">
<h3>target<a class="headerlink" href="#init|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="init|vars">
<h3>vars<a class="headerlink" href="#init|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<h2>Command: list<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="list|exclude">
<h3>exclude<a class="headerlink" href="#list|exclude" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to exclude.</p>
</section>
<section id="list|indirect_selection">
<h3>indirect_selection<a class="headerlink" href="#list|indirect_selection" title="Permalink to this heading"></a></h3>
<p>Type: choice: [eager, cautious]</p>
<p>Select all tests that are adjacent to selected resources, even if those tests have not been explicitly selected.</p>
</section>
<section id="list|models">
<h3>models<a class="headerlink" href="#list|models" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to include.</p>
</section>
<section id="list|output">
<h3>output<a class="headerlink" href="#list|output" title="Permalink to this heading"></a></h3>
<p>Type: choice: [json, name, path, selector]</p>
<p>TODO: No current help text</p>
</section>
<section id="list|output_keys">
<h3>output_keys<a class="headerlink" href="#list|output_keys" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>TODO: No current help text</p>
</section>
<section id="list|profile">
<h3>profile<a class="headerlink" href="#list|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="list|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#list|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="list|project_dir">
<h3>project_dir<a class="headerlink" href="#list|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="list|resource_type">
<h3>resource_type<a class="headerlink" href="#list|resource_type" title="Permalink to this heading"></a></h3>
<p>Type: choice: [metric, source, analysis, model, test, exposure, snapshot, seed, default, all]</p>
<p>TODO: No current help text</p>
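<p>Illustrative invocation combining this option with <code>output</code>: <code>dbt list --resource-type model --output json</code></p>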
</section>
<section id="list|selector">
<h3>selector<a class="headerlink" href="#list|selector" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>The selector name to use, as defined in selectors.yml</p>
</section>
<section id="list|state">
<h3>state<a class="headerlink" href="#list|state" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>If set, use the given directory as the source for json files to compare with this project.</p>
</section>
<section id="list|target">
<h3>target<a class="headerlink" href="#list|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="list|vars">
<h3>vars<a class="headerlink" href="#list|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<h2>Command: parse<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="parse|compile">
<h3>compile<a class="headerlink" href="#parse|compile" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>TODO: No help text currently available</p>
</section>
<section id="parse|log_path">
<h3>log_path<a class="headerlink" href="#parse|log_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the log-path. Applies only to the current run, overriding DBT_LOG_PATH if it is set.</p>
</section>
<section id="parse|profile">
<h3>profile<a class="headerlink" href="#parse|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="parse|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#parse|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="parse|project_dir">
<h3>project_dir<a class="headerlink" href="#parse|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="parse|target">
<h3>target<a class="headerlink" href="#parse|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="parse|target_path">
<h3>target_path<a class="headerlink" href="#parse|target_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the target-path. Applies only to the current run, overriding DBT_TARGET_PATH if it is set.</p>
</section>
<section id="parse|threads">
<h3>threads<a class="headerlink" href="#parse|threads" title="Permalink to this heading"></a></h3>
<p>Type: int</p>
<p>Specify the number of threads to use while executing models. Overrides settings in profiles.yml.</p>
</section>
<section id="parse|vars">
<h3>vars<a class="headerlink" href="#parse|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<section id="parse|version_check">
<h3>version_check<a class="headerlink" href="#parse|version_check" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Ensure dbt's version matches the one specified in the dbt_project.yml file (require-dbt-version)</p>
</section>
<section id="parse|write_manifest">
<h3>write_manifest<a class="headerlink" href="#parse|write_manifest" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>TODO: No help text currently available</p>
</section>
<h2>Command: run<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="run|defer">
<h3>defer<a class="headerlink" href="#run|defer" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>If set, defer to the state variable for resolving unselected nodes.</p>
</section>
<section id="run|exclude">
<h3>exclude<a class="headerlink" href="#run|exclude" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to exclude.</p>
</section>
<section id="run|fail_fast">
<h3>fail_fast<a class="headerlink" href="#run|fail_fast" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Stop execution on first failure.</p>
</section>
<section id="run|full_refresh">
<h3>full_refresh<a class="headerlink" href="#run|full_refresh" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>If specified, dbt will drop incremental models and fully-recalculate the incremental table from the model definition.</p>
</section>
<section id="run|log_path">
<h3>log_path<a class="headerlink" href="#run|log_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the log-path. Applies only to the current run, overriding DBT_LOG_PATH if it is set.</p>
</section>
<section id="run|models">
<h3>models<a class="headerlink" href="#run|models" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to include.</p>
</section>
<section id="run|profile">
<h3>profile<a class="headerlink" href="#run|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="run|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#run|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="run|project_dir">
<h3>project_dir<a class="headerlink" href="#run|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="run|selector">
<h3>selector<a class="headerlink" href="#run|selector" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>The selector name to use, as defined in selectors.yml</p>
</section>
<section id="run|state">
<h3>state<a class="headerlink" href="#run|state" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>If set, use the given directory as the source for json files to compare with this project.</p>
</section>
<section id="run|target">
<h3>target<a class="headerlink" href="#run|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="run|target_path">
<h3>target_path<a class="headerlink" href="#run|target_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the target-path. Applies only to the current run, overriding DBT_TARGET_PATH if it is set.</p>
</section>
<section id="run|threads">
<h3>threads<a class="headerlink" href="#run|threads" title="Permalink to this heading"></a></h3>
<p>Type: int</p>
<p>Specify the number of threads to use while executing models. Overrides settings in profiles.yml.</p>
</section>
<section id="run|vars">
<h3>vars<a class="headerlink" href="#run|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<section id="run|version_check">
<h3>version_check<a class="headerlink" href="#run|version_check" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Ensure dbt's version matches the one specified in the dbt_project.yml file (require-dbt-version)</p>
</section>
<h2>Command: run_operation<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="run-operation|args">
<h3>args<a class="headerlink" href="#run-operation|args" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply arguments to the macro. This dictionary will be mapped to the keyword arguments defined in the selected macro. The value should be a YAML string, e.g. {my_variable: my_value}</p>
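<p>For example, with a hypothetical macro named <code>grant_select</code>: <code>dbt run-operation grant_select --args '{role: reporter}'</code></p>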
</section>
<section id="run-operation|profile">
<h3>profile<a class="headerlink" href="#run-operation|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="run-operation|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#run-operation|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="run-operation|project_dir">
<h3>project_dir<a class="headerlink" href="#run-operation|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="run-operation|target">
<h3>target<a class="headerlink" href="#run-operation|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="run-operation|vars">
<h3>vars<a class="headerlink" href="#run-operation|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<h2>Command: seed<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="seed|exclude">
<h3>exclude<a class="headerlink" href="#seed|exclude" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to exclude.</p>
</section>
<section id="seed|full_refresh">
<h3>full_refresh<a class="headerlink" href="#seed|full_refresh" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>If specified, dbt will drop incremental models and fully-recalculate the incremental table from the model definition.</p>
</section>
<section id="seed|log_path">
<h3>log_path<a class="headerlink" href="#seed|log_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the log-path. Applies only to the current run, overriding DBT_LOG_PATH if it is set.</p>
</section>
<section id="seed|models">
<h3>models<a class="headerlink" href="#seed|models" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to include.</p>
</section>
<section id="seed|profile">
<h3>profile<a class="headerlink" href="#seed|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="seed|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#seed|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="seed|project_dir">
<h3>project_dir<a class="headerlink" href="#seed|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="seed|selector">
<h3>selector<a class="headerlink" href="#seed|selector" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>The selector name to use, as defined in selectors.yml</p>
</section>
<section id="seed|show">
<h3>show<a class="headerlink" href="#seed|show" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Show a sample of the loaded data in the terminal</p>
</section>
<section id="seed|state">
<h3>state<a class="headerlink" href="#seed|state" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>If set, use the given directory as the source for json files to compare with this project.</p>
</section>
<section id="seed|target">
<h3>target<a class="headerlink" href="#seed|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="seed|target_path">
<h3>target_path<a class="headerlink" href="#seed|target_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the target-path. Applies only to the current run, overriding DBT_TARGET_PATH if it is set.</p>
</section>
<section id="seed|threads">
<h3>threads<a class="headerlink" href="#seed|threads" title="Permalink to this heading"></a></h3>
<p>Type: int</p>
<p>Specify the number of threads to use while executing models. Overrides settings in profiles.yml.</p>
</section>
<section id="seed|vars">
<h3>vars<a class="headerlink" href="#seed|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<section id="seed|version_check">
<h3>version_check<a class="headerlink" href="#seed|version_check" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Ensure dbt's version matches the one specified in the dbt_project.yml file (require-dbt-version)</p>
</section>
<h2>Command: snapshot<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="snapshot|defer">
<h3>defer<a class="headerlink" href="#snapshot|defer" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>If set, defer to the state variable for resolving unselected nodes.</p>
</section>
<section id="snapshot|exclude">
<h3>exclude<a class="headerlink" href="#snapshot|exclude" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to exclude.</p>
</section>
<section id="snapshot|models">
<h3>models<a class="headerlink" href="#snapshot|models" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to include.</p>
</section>
<section id="snapshot|profile">
<h3>profile<a class="headerlink" href="#snapshot|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="snapshot|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#snapshot|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="snapshot|project_dir">
<h3>project_dir<a class="headerlink" href="#snapshot|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="snapshot|selector">
<h3>selector<a class="headerlink" href="#snapshot|selector" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>The selector name to use, as defined in selectors.yml</p>
</section>
<section id="snapshot|state">
<h3>state<a class="headerlink" href="#snapshot|state" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>If set, use the given directory as the source for json files to compare with this project.</p>
</section>
<section id="snapshot|target">
<h3>target<a class="headerlink" href="#snapshot|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="snapshot|threads">
<h3>threads<a class="headerlink" href="#snapshot|threads" title="Permalink to this heading"></a></h3>
<p>Type: int</p>
<p>Specify the number of threads to use while executing models. Overrides settings in profiles.yml.</p>
</section>
<section id="snapshot|vars">
<h3>vars<a class="headerlink" href="#snapshot|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<h2>Command: source<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<h2>Command: test<a class="headerlink" href="#dbt-section" title="Permalink to this heading"></a></h2>
<section id="test|defer">
<h3>defer<a class="headerlink" href="#test|defer" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>If set, defer to the state variable for resolving unselected nodes.</p>
</section>
<section id="test|exclude">
<h3>exclude<a class="headerlink" href="#test|exclude" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to exclude.</p>
</section>
<section id="test|fail_fast">
<h3>fail_fast<a class="headerlink" href="#test|fail_fast" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Stop execution on first failure.</p>
</section>
<section id="test|indirect_selection">
<h3>indirect_selection<a class="headerlink" href="#test|indirect_selection" title="Permalink to this heading"></a></h3>
<p>Type: choice: [eager, cautious]</p>
<p>Select all tests that are adjacent to selected resources, even if those tests have not been explicitly selected.</p>
</section>
<section id="test|log_path">
<h3>log_path<a class="headerlink" href="#test|log_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the log-path. Applies only to the current run, overriding DBT_LOG_PATH if it is set.</p>
</section>
<section id="test|models">
<h3>models<a class="headerlink" href="#test|models" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Specify the nodes to include.</p>
</section>
<section id="test|profile">
<h3>profile<a class="headerlink" href="#test|profile" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which profile to load. Overrides setting in dbt_project.yml.</p>
</section>
<section id="test|profiles_dir">
<h3>profiles_dir<a class="headerlink" href="#test|profiles_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the profiles.yml file. If not set, dbt will look in the current working directory first, then HOME/.dbt/</p>
</section>
<section id="test|project_dir">
<h3>project_dir<a class="headerlink" href="#test|project_dir" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Which directory to look in for the dbt_project.yml file. Default is the current working directory and its parents.</p>
</section>
<section id="test|selector">
<h3>selector<a class="headerlink" href="#test|selector" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>The selector name to use, as defined in selectors.yml</p>
</section>
<section id="test|state">
<h3>state<a class="headerlink" href="#test|state" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>If set, use the given directory as the source for json files to compare with this project.</p>
</section>
<section id="test|store_failures">
<h3>store_failures<a class="headerlink" href="#test|store_failures" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Store test results (failing rows) in the database</p>
</section>
<section id="test|target">
<h3>target<a class="headerlink" href="#test|target" title="Permalink to this heading"></a></h3>
<p>Type: string</p>
<p>Which target to load for the given profile</p>
</section>
<section id="test|target_path">
<h3>target_path<a class="headerlink" href="#test|target_path" title="Permalink to this heading"></a></h3>
<p>Type: path</p>
<p>Configure the target-path. Applies only to the current run, overriding DBT_TARGET_PATH if it is set.</p>
</section>
<section id="test|threads">
<h3>threads<a class="headerlink" href="#test|threads" title="Permalink to this heading"></a></h3>
<p>Type: int</p>
<p>Specify the number of threads to use while executing models. Overrides settings in profiles.yml.</p>
</section>
<section id="test|vars">
<h3>vars<a class="headerlink" href="#test|vars" title="Permalink to this heading"></a></h3>
<p>Type: YAML</p>
<p>Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. The value should be a YAML string, e.g. {my_variable: my_value}</p>
</section>
<section id="test|version_check">
<h3>version_check<a class="headerlink" href="#test|version_check" title="Permalink to this heading"></a></h3>
<p>Type: boolean</p>
<p>Ensure dbt's version matches the one specified in the dbt_project.yml file (require-dbt-version)</p>
</section>
</section>
</section>
</div>
</div>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<h1 class="logo"><a href="#">dbt-core</a></h1>
<h3>Navigation</h3>
<div class="relations">
<h3>Related Topics</h3>
<ul>
<li><a href="#">Documentation overview</a><ul>
</ul></li>
</ul>
</div>
<div id="searchbox" style="display: none" role="search">
<h3 id="searchlabel">Quick search</h3>
<div class="searchformwrapper">
<form class="search" action="search.html" method="get">
<input type="text" name="q" aria-labelledby="searchlabel" autocomplete="off" autocorrect="off" autocapitalize="off" spellcheck="false"/>
<input type="submit" value="Go" />
</form>
</div>
</div>
<script>document.getElementById('searchbox').style.display = "block"</script>
</div>
</div>
<div class="clearer"></div>
</div>
<div class="footer">
&copy;2022, dbt Labs.
|
Powered by <a href="http://sphinx-doc.org/">Sphinx 5.3.0</a>
&amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.12</a>
|
<a href="_sources/index.rst.txt"
rel="nofollow">Page source</a>
</div>
</body>
</html>

BIN core/dbt/docs/build/html/objects.inv vendored Normal file

Binary file not shown.

121 core/dbt/docs/build/html/search.html vendored Normal file

@@ -0,0 +1,121 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Search &#8212; dbt-core documentation</title>
<link rel="stylesheet" type="text/css" href="_static/pygments.css" />
<link rel="stylesheet" type="text/css" href="_static/alabaster.css" />
<script data-url_root="./" id="documentation_options" src="_static/documentation_options.js"></script>
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/_sphinx_javascript_frameworks_compat.js"></script>
<script src="_static/doctools.js"></script>
<script src="_static/sphinx_highlight.js"></script>
<script src="_static/searchtools.js"></script>
<script src="_static/language_data.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="#" />
<script src="searchindex.js" defer></script>
<link rel="stylesheet" href="_static/custom.css" type="text/css" />
<meta name="viewport" content="width=device-width, initial-scale=0.9, maximum-scale=0.9" />
</head><body>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body" role="main">
<h1 id="search-documentation">Search</h1>
<noscript>
<div class="admonition warning">
<p>
Please activate JavaScript to enable the search
functionality.
</p>
</div>
</noscript>
<p>
Searching for multiple words only shows matches that contain
all words.
</p>
<form action="" method="get">
<input type="text" name="q" aria-labelledby="search-documentation" value="" autocomplete="off" autocorrect="off" autocapitalize="off" spellcheck="false"/>
<input type="submit" value="search" />
<span id="search-progress" style="padding-left: 10px"></span>
</form>
<div id="search-results">
</div>
</div>
</div>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<h1 class="logo"><a href="index.html">dbt-core</a></h1>
<h3>Navigation</h3>
<div class="relations">
<h3>Related Topics</h3>
<ul>
<li><a href="index.html">Documentation overview</a><ul>
</ul></li>
</ul>
</div>
</div>
</div>
<div class="clearer"></div>
</div>
<div class="footer">
&copy;2022, dbt Labs.
|
Powered by <a href="http://sphinx-doc.org/">Sphinx 5.3.0</a>
&amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.12</a>
</div>
</body>
</html>

File diff suppressed because one or more lines are too long


@@ -7,7 +7,7 @@ import typing as t
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
sys.path.insert(0, os.path.abspath("../.."))
sys.path.insert(0, os.path.abspath("../../.."))
sys.path.insert(0, os.path.abspath("./_ext"))
# -- Project information -----------------------------------------------------


@@ -1,55 +1,39 @@
# Events Module
The Events module is responsible for communicating internal dbt structures into a consumable interface. Right now, the events module is exclusively used for structured logging, but in the future could grow to include other user-facing components such as exceptions. These events represent both a programmatic interface to dbt processes as well as human-readable messaging in one centralized place. The centralization allows for leveraging mypy to enforce interface invariants across all dbt events, and the distinct type layer allows for decoupling events and libraries such as loggers.
The Events module is responsible for communicating internal dbt structures into a consumable interface. Because the "event" classes are based entirely on protobuf definitions, the interface is clearly defined, whether or not protobufs are used to consume it. We use Betterproto for compiling the protobuf message definitions into Python classes.
# Using the Events Module
The event module provides types that represent what is happening in dbt in `events.types`. These types are intended to represent an exhaustive list of all things happening within dbt that will need to be logged, streamed, or printed. To fire an event, `events.functions::fire_event` is the entry point to the module from everywhere in dbt.
# Logging
When events are processed via `fire_event`, nearly everything is logged. Whether or not the user has enabled the debug flag, all debug messages are still logged to the file. However, some events are particularly time consuming to construct because they return a huge amount of data. Today, the only messages in this category are cache events, and they are logged only if the `--log-cache-events` flag is on. This matters because these messages should not be created unless they are going to be logged, since they cause a noticeable performance degradation. We achieve this by making the event class explicitly use lazy values for the expensive fields, so they are not computed until the moment they are required. This is done with the data type `core/dbt/helper_types.py::Lazy`, which includes usage documentation.
Example:
```
@dataclass
class DumpBeforeAddGraph(DebugLevel, Cache):
dump: Lazy[Dict[str, List[str]]]
code: str = "E031"
def message(self) -> str:
return f"before adding : {self.dump.force()}"
```
When events are processed via `fire_event`, nearly everything is logged. Whether or not the user has enabled the debug flag, all debug messages are still logged to the file. However, some events are particularly time consuming to construct because they return a huge amount of data. Today, the only messages in this category are cache events, and they are logged only if the `--log-cache-events` flag is on. This matters because these messages should not be created unless they are going to be logged, since they cause a noticeable performance degradation. These events use the `fire_event_if` function.
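A minimal sketch of the call pattern (assuming `fire_event_if(conditional, lazy_event)`, where the second argument is a zero-argument callable that builds the event; the event class below is a placeholder):
```
from dbt.events.functions import fire_event_if
import dbt.flags as flags

# The lambda defers constructing the expensive event until we know
# it will actually be logged.
fire_event_if(flags.LOG_CACHE_EVENTS, lambda: SomeCacheEvent(dump=dump))
```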
# Adding a New Event
In `events.types`, add a new class that represents the new event. Every event must be a dataclass with, at minimum, a code. You may also include other values used to construct downstream messaging. Only include the data necessary to construct this message within this class. You must extend all destinations (e.g., if your log message belongs on the CLI, extend `Cli`) as well as the log level this event belongs to. This system has been designed to take full advantage of mypy, so running it will catch anything you may miss.
* Add a new message in types.proto with an EventInfo field first
* run the protoc compiler to update proto_types.py: ```protoc --python_betterproto_out . types.proto```
* Add a wrapping class in core/dbt/event/types.py with a Level superclass and the superclass from proto_types.py, plus code and message methods
* Add the class to tests/unit/test_events.py
Note that no attributes can exist in these event classes except for fields defined in the protobuf definitions, because the betterproto metaclass will throw an error. Betterproto provides a `to_dict()` method to convert the generated classes to a dictionary, and from that to JSON. However, some attributes will successfully convert to dictionaries but not to serialized protobufs, so we need to test both output formats.
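A sketch exercising both output formats (assuming `event` is an instance of a generated event class; the `to_dict` call mirrors the one used for JSON logging):
```
import json
import betterproto

# Dictionary form, used to build the JSON log line
event_dict = event.to_dict(casing=betterproto.Casing.SNAKE, include_default_values=True)
json_line = json.dumps(event_dict, sort_keys=True)

# Serialized protobuf form; betterproto messages serialize via bytes()
wire_bytes = bytes(event)
```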
## Required for Every Event
- a string attribute `code`, that's unique across events
- assign a log level by extending `DebugLevel`, `InfoLevel`, `WarnLevel`, or `ErrorLevel`
- a method `code`, that's unique across events
- assign a log level by using the Level mixin: `DebugLevel`, `InfoLevel`, `WarnLevel`, or `ErrorLevel`
- a `message()` method
- extend `File` and/or `Cli` based on where it should output
Example
```
@dataclass
class PartialParsingDeletedExposure(DebugLevel, Cli, File):
unique_id: str
code: str = "I049"
class PartialParsingDeletedExposure(DebugLevel, pt.PartialParsingDeletedExposure):
def code(self):
return "I049"
def message(self) -> str:
return f"Partial parsing: deleted exposure {self.unique_id}"
```
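Firing the event then looks like this (a sketch; the constructor keyword matches the `unique_id` field above):
```
from dbt.events.functions import fire_event
from dbt.events.types import PartialParsingDeletedExposure

fire_event(PartialParsingDeletedExposure(unique_id="exposure.my_project.my_exposure"))
```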
## Optional (based on your event)
- Events associated with node status changes must be extended with `NodeInfo` which contains a node_info attribute
All values other than `code` and `node_info` will be included in the `data` node of the json log output.
Once your event has been added, add a dummy call to your new event at the bottom of `types.py` and also add your new event to the list `sample_values` in `test/unit/test_events.py`.
# Adapter Maintainers
To integrate existing log messages from adapters, you likely have a line of code like this in your adapter already:
@@ -63,3 +47,7 @@ from dbt.events import AdapterLogger
logger = AdapterLogger("<database name>")
# e.g. AdapterLogger("Snowflake")
```
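Typical usage then mirrors a stdlib logger (a sketch; `run_query` is hypothetical):
```
logger = AdapterLogger("Snowflake")
logger.debug("Opening a new connection")

try:
    run_query()  # hypothetical adapter call
except Exception:
    # exception() attaches the current traceback to the event
    logger.exception("Query failed")
```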
## Compiling types.proto
After adding a new message in types.proto, in the core/dbt/events directory: ```protoc --python_betterproto_out . types.proto```


@@ -1,3 +1,4 @@
import traceback
from dataclasses import dataclass
from dbt.events.functions import fire_event
from dbt.events.types import (
@@ -8,61 +9,33 @@ from dbt.events.types import (
)
# N.B. No guarantees for what type param msg is.
@dataclass
class AdapterLogger:
name: str
def debug(self, msg, *args, exc_info=None, extra=None, stack_info=False):
def debug(self, msg, *args):
event = AdapterEventDebug(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)
def info(self, msg, *args, exc_info=None, extra=None, stack_info=False):
def info(self, msg, *args):
event = AdapterEventInfo(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)
def warning(self, msg, *args, exc_info=None, extra=None, stack_info=False):
def warning(self, msg, *args):
event = AdapterEventWarning(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)
def error(self, msg, *args, exc_info=None, extra=None, stack_info=False):
def error(self, msg, *args):
event = AdapterEventError(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)
# The default exc_info=True is what makes this method different
def exception(self, msg, *args, exc_info=True, extra=None, stack_info=False):
def exception(self, msg, *args):
event = AdapterEventError(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
event.exc_info = traceback.format_exc()
fire_event(event)
def critical(self, msg, *args, exc_info=False, extra=None, stack_info=False):
def critical(self, msg, *args):
event = AdapterEventError(name=self.name, base_msg=msg, args=args)
event.exc_info = exc_info
event.extra = extra
event.stack_info = stack_info
fire_event(event)


@@ -1,10 +1,7 @@
from abc import ABCMeta, abstractproperty, abstractmethod
from dataclasses import dataclass
from dbt.events.serialization import EventSerialization
import os
import threading
from typing import Any, Dict
from datetime import datetime
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# These base types define the _required structure_ for the concrete event #
@@ -17,93 +14,122 @@ class Cache:
pass
def get_global_metadata_vars() -> dict:
from dbt.events.functions import get_metadata_vars
return get_metadata_vars()
def get_invocation_id() -> str:
from dbt.events.functions import get_invocation_id
return get_invocation_id()
# exactly one pid per concrete event
def get_pid() -> int:
return os.getpid()
# preformatted time stamp
def get_ts_rfc3339() -> str:
ts = datetime.utcnow()
ts_rfc3339 = ts.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
return ts_rfc3339
# in theory threads can change so we don't cache them.
def get_thread_name() -> str:
return threading.current_thread().name
@dataclass
class ShowException:
# N.B.:
# As long as we stick with the current convention of setting the member vars in the
# `message` method of subclasses, this is a safe operation.
# If that ever changes we'll want to reassess.
class BaseEvent:
"""BaseEvent for proto message generated python events"""
def __post_init__(self):
self.exc_info: Any = True
self.stack_info: Any = None
self.extra: Any = None
super().__post_init__()
if not self.info.level:
self.info.level = self.level_tag()
assert self.info.level in ["info", "warn", "error", "debug", "test"]
if not hasattr(self.info, "msg") or not self.info.msg:
self.info.msg = self.message()
self.info.invocation_id = get_invocation_id()
self.info.extra = get_global_metadata_vars()
self.info.ts = datetime.utcnow()
self.info.pid = get_pid()
self.info.thread = get_thread_name()
self.info.code = self.code()
self.info.name = type(self).__name__
# TODO add exhaustiveness checking for subclasses
# top-level superclass for all events
class Event(metaclass=ABCMeta):
# Do not define fields with defaults here
# four digit string code that uniquely identifies this type of event
# uniqueness and valid characters are enforced by tests
@abstractproperty
@staticmethod
def code() -> str:
raise Exception("code() not implemented for event")
# The 'to_dict' method is added by mashumaro via the EventSerialization.
# It should be in all subclasses that are to record actual events.
@abstractmethod
def to_dict(self):
raise Exception("to_dict not implemented for Event")
# do not define this yourself. inherit it from one of the above level types.
@abstractmethod
def level_tag(self) -> str:
raise Exception("level_tag not implemented for Event")
return "debug"
# Solely the human readable message. Timestamps and formatting will be added by the logger.
# Must override yourself
@abstractmethod
def message(self) -> str:
raise Exception("msg not implemented for Event")
# This is here because although we know that info should always
# exist, mypy doesn't.
def log_level(self) -> str:
return self.info.level # type: ignore
# exactly one pid per concrete event
def get_pid(self) -> int:
return os.getpid()
# in theory threads can change so we don't cache them.
def get_thread_name(self) -> str:
return threading.current_thread().name
@classmethod
def get_invocation_id(cls) -> str:
from dbt.events.functions import get_invocation_id
return get_invocation_id()
def message(self):
raise Exception("message() not implemented for event")
# in preparation for #3977
# DynamicLevel requires that the level be supplied on the
# event construction call using the "info" function from functions.py
@dataclass # type: ignore[misc]
class TestLevel(EventSerialization, Event):
class DynamicLevel(BaseEvent):
pass
@dataclass
class TestLevel(BaseEvent):
__test__ = False
def level_tag(self) -> str:
return "test"
@dataclass # type: ignore[misc]
class DebugLevel(EventSerialization, Event):
class DebugLevel(BaseEvent):
def level_tag(self) -> str:
return "debug"
@dataclass # type: ignore[misc]
class InfoLevel(EventSerialization, Event):
class InfoLevel(BaseEvent):
def level_tag(self) -> str:
return "info"
@dataclass # type: ignore[misc]
class WarnLevel(EventSerialization, Event):
class WarnLevel(BaseEvent):
def level_tag(self) -> str:
return "warn"
@dataclass # type: ignore[misc]
class ErrorLevel(EventSerialization, Event):
class ErrorLevel(BaseEvent):
def level_tag(self) -> str:
return "error"
# Included to ensure classes with str-type message members are initialized correctly.
@dataclass # type: ignore[misc]
class AdapterEventStringFunctor:
def __post_init__(self):
super().__post_init__()
if not isinstance(self.base_msg, str):
self.base_msg = str(self.base_msg)
@dataclass # type: ignore[misc]
class EventStringFunctor:
def __post_init__(self):
super().__post_init__()
if not isinstance(self.msg, str):
self.msg = str(self.msg)
# prevents an event from going to the file
# This should rarely be used in core code. It is currently
# only used in integration tests and for the 'clean' command.
@@ -114,10 +140,3 @@ class NoFile:
# prevents an event from going to stdout
class NoStdOut:
pass
# This class represents the node_info which is generated
# by the NodeInfoMixin class in dbt.contracts.graph.parsed
@dataclass
class NodeInfo:
node_info: Dict[str, Any]


@@ -1,11 +1,14 @@
import betterproto
from colorama import Style
import dbt.events.functions as this # don't worry I hate it too.
from dbt.events.base_types import NoStdOut, Event, NoFile, ShowException, Cache
from dbt.events.types import T_Event, MainReportVersion, EmptyLine, EventBufferFull
import dbt.flags as flags
from dbt.constants import SECRET_ENV_PREFIX
# TODO this will need to move eventually
from dbt.events.base_types import NoStdOut, BaseEvent, NoFile, Cache
from dbt.events.types import EventBufferFull, MainReportVersion, EmptyLine
from dbt.events.proto_types import EventInfo
from dbt.events.helpers import env_secrets, scrub_secrets
import dbt.flags as flags
from dbt.constants import METADATA_ENV_PREFIX
from dbt.logger import make_log_dir_if_missing, GLOBAL_LOGGER
from datetime import datetime
import json
@@ -19,10 +22,11 @@ from logging.handlers import RotatingFileHandler
import os
import uuid
import threading
from typing import Any, Dict, List, Optional, Union
from typing import Optional, Union, Callable, Dict
from collections import deque
LOG_VERSION = 2
LOG_VERSION = 3
EVENT_HISTORY = None
# create the global file logger with no configuration
@@ -41,22 +45,24 @@ STDOUT_LOG.addHandler(stdout_handler)
format_color = True
format_json = False
invocation_id: Optional[str] = None
metadata_vars: Optional[Dict[str, str]] = None
def setup_event_logger(log_path, level_override=None):
global format_json, format_color, STDOUT_LOG, FILE_LOG
make_log_dir_if_missing(log_path)
this.format_json = flags.LOG_FORMAT == "json"
format_json = flags.LOG_FORMAT == "json"
# USE_COLORS can be None if the app just started and the cli flags
# havent been applied yet
this.format_color = True if flags.USE_COLORS else False
format_color = True if flags.USE_COLORS else False
# TODO this default should live somewhere better
log_dest = os.path.join(log_path, "dbt.log")
level = level_override or (logging.DEBUG if flags.DEBUG else logging.INFO)
# overwrite the STDOUT_LOG logger with the configured one
this.STDOUT_LOG = logging.getLogger("configured_std_out")
this.STDOUT_LOG.setLevel(level)
STDOUT_LOG = logging.getLogger("configured_std_out")
STDOUT_LOG.setLevel(level)
FORMAT = "%(message)s"
stdout_passthrough_formatter = logging.Formatter(fmt=FORMAT)
@@ -65,16 +71,16 @@ def setup_event_logger(log_path, level_override=None):
stdout_handler.setFormatter(stdout_passthrough_formatter)
stdout_handler.setLevel(level)
# clear existing stdout TextIOWrapper stream handlers
this.STDOUT_LOG.handlers = [
STDOUT_LOG.handlers = [
h
for h in this.STDOUT_LOG.handlers
for h in STDOUT_LOG.handlers
if not (hasattr(h, "stream") and isinstance(h.stream, TextIOWrapper)) # type: ignore
]
this.STDOUT_LOG.addHandler(stdout_handler)
STDOUT_LOG.addHandler(stdout_handler)
# overwrite the FILE_LOG logger with the configured one
this.FILE_LOG = logging.getLogger("configured_file")
this.FILE_LOG.setLevel(logging.DEBUG) # always debug regardless of user input
FILE_LOG = logging.getLogger("configured_file")
FILE_LOG.setLevel(logging.DEBUG) # always debug regardless of user input
file_passthrough_formatter = logging.Formatter(fmt=FORMAT)
@@ -83,92 +89,68 @@ def setup_event_logger(log_path, level_override=None):
)
file_handler.setFormatter(file_passthrough_formatter)
file_handler.setLevel(logging.DEBUG) # always debug regardless of user input
this.FILE_LOG.handlers.clear()
this.FILE_LOG.addHandler(file_handler)
FILE_LOG.handlers.clear()
FILE_LOG.addHandler(file_handler)
# used for integration tests
def capture_stdout_logs() -> StringIO:
global STDOUT_LOG
capture_buf = io.StringIO()
stdout_capture_handler = logging.StreamHandler(capture_buf)
stdout_handler.setLevel(logging.DEBUG)
this.STDOUT_LOG.addHandler(stdout_capture_handler)
STDOUT_LOG.addHandler(stdout_capture_handler)
return capture_buf
# used for integration tests
def stop_capture_stdout_logs() -> None:
this.STDOUT_LOG.handlers = [
global STDOUT_LOG
STDOUT_LOG.handlers = [
h
for h in this.STDOUT_LOG.handlers
for h in STDOUT_LOG.handlers
if not (hasattr(h, "stream") and isinstance(h.stream, StringIO)) # type: ignore
]
def env_secrets() -> List[str]:
return [v for k, v in os.environ.items() if k.startswith(SECRET_ENV_PREFIX) and v.strip()]
def scrub_secrets(msg: str, secrets: List[str]) -> str:
scrubbed = msg
for secret in secrets:
scrubbed = scrubbed.replace(secret, "*****")
return scrubbed
# returns a dictionary representation of the event fields.
# the message may contain secrets which must be scrubbed at the usage site.
def event_to_serializable_dict(
e: T_Event,
) -> Dict[str, Any]:
def event_to_json(
event: BaseEvent,
) -> str:
event_dict = event_to_dict(event)
raw_log_line = json.dumps(event_dict, sort_keys=True)
return raw_log_line
log_line = dict()
code: str
def event_to_dict(event: BaseEvent) -> dict:
event_dict = dict()
try:
log_line = e.to_dict()
# We could use to_json here, but it wouldn't sort the keys.
# The 'to_json' method just does json.dumps on the dict anyway.
event_dict = event.to_dict(casing=betterproto.Casing.SNAKE, include_default_values=True) # type: ignore
except AttributeError as exc:
event_type = type(e).__name__
raise Exception( # TODO this may hang async threads
f"type {event_type} is not serializable. {str(exc)}"
)
# We get the code from the event object, so we don't need it in the data
if "code" in log_line:
del log_line["code"]
event_dict = {
"type": "log_line",
"log_version": LOG_VERSION,
"ts": get_ts_rfc3339(),
"pid": e.get_pid(),
"msg": e.message(),
"level": e.level_tag(),
"data": log_line,
"invocation_id": e.get_invocation_id(),
"thread_name": e.get_thread_name(),
"code": e.code,
}
event_type = type(event).__name__
raise Exception(f"type {event_type} is not serializable. {str(exc)}")
return event_dict
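# A hedged sketch of the new serialization path, using one of the
# betterproto-backed test event types defined later in this diff:
from dbt.events.test_types import UnitTestInfo
from dbt.events.functions import event_to_dict, event_to_json

e = UnitTestInfo(msg="hello")
print(event_to_dict(e))  # snake_cased dict, default values included
print(event_to_json(e))  # the same dict dumped with sorted keys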
# translates an Event to a completely formatted text-based log line
# type hinting everything as strings so we don't get any unintentional string conversions via str()
def reset_color() -> str:
return "" if not this.format_color else Style.RESET_ALL
global format_color
return "" if not format_color else Style.RESET_ALL
def create_info_text_log_line(e: T_Event) -> str:
def create_info_text_log_line(e: BaseEvent) -> str:
color_tag: str = reset_color()
ts: str = get_ts().strftime("%H:%M:%S")
ts: str = get_ts().strftime("%H:%M:%S") # TODO: get this from the event.ts?
scrubbed_msg: str = scrub_secrets(e.message(), env_secrets())
log_line: str = f"{color_tag}{ts} {scrubbed_msg}"
return log_line
def create_debug_text_log_line(e: T_Event) -> str:
def create_debug_text_log_line(e: BaseEvent) -> str:
log_line: str = ""
# Create a separator if this is the beginning of an invocation
if type(e) == MainReportVersion:
@@ -177,7 +159,8 @@ def create_debug_text_log_line(e: T_Event) -> str:
color_tag: str = reset_color()
ts: str = get_ts().strftime("%H:%M:%S.%f")
scrubbed_msg: str = scrub_secrets(e.message(), env_secrets())
level: str = e.level_tag() if len(e.level_tag()) == 5 else f"{e.level_tag()} "
# Make the levels all 5 characters so they line up
level: str = f"{e.log_level():<5}"
thread = ""
if threading.current_thread().name:
thread_name = threading.current_thread().name
@@ -189,18 +172,17 @@ def create_debug_text_log_line(e: T_Event) -> str:
# translates an Event to a completely formatted json log line
def create_json_log_line(e: T_Event) -> Optional[str]:
def create_json_log_line(e: BaseEvent) -> Optional[str]:
if type(e) == EmptyLine:
return None # will not be sent to logger
# using preformatted ts string instead of formatting it here to be extra careful about timezone
values = event_to_serializable_dict(e)
raw_log_line = json.dumps(values, sort_keys=True)
raw_log_line = event_to_json(e)
return scrub_secrets(raw_log_line, env_secrets())
# calls create_stdout_text_log_line() or create_json_log_line() according to logger config
def create_log_line(e: T_Event, file_output=False) -> Optional[str]:
if this.format_json:
def create_log_line(e: BaseEvent, file_output=False) -> Optional[str]:
global format_json
if format_json:
return create_json_log_line(e) # json output, both console and file
elif file_output is True or flags.DEBUG:
return create_debug_text_log_line(e) # default file output
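# The visible branches above (plus the info-text fallback) reduce to a
# three-way choice; a self-contained sketch of that rule, with plain
# booleans standing in for dbt.flags:
def _choose_formatter(format_json: bool, file_output: bool, debug: bool) -> str:
    if format_json:
        return "json"        # json output, both console and file
    if file_output or debug:
        return "debug text"  # timestamped, level-tagged line
    return "info text"       # short console line

assert _choose_formatter(True, False, False) == "json"
assert _choose_formatter(False, True, False) == "debug text"
assert _choose_formatter(False, False, False) == "info text"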
@@ -210,51 +192,47 @@ def create_log_line(e: T_Event, file_output=False) -> Optional[str]:
# allows for reuse of this obnoxious if else tree.
# do not use for exceptions, it doesn't pass along exc_info, stack_info, or extra
def send_to_logger(l: Union[Logger, logbook.Logger], level_tag: str, log_line: str):
def send_to_logger(l: Union[Logger, logbook.Logger], level: str, log_line: str):
if not log_line:
return
if level_tag == "test":
if level == "test":
# TODO after implementing #3977 send to new test level
l.debug(log_line)
elif level_tag == "debug":
elif level == "debug":
l.debug(log_line)
elif level_tag == "info":
elif level == "info":
l.info(log_line)
elif level_tag == "warn":
elif level == "warn":
l.warning(log_line)
elif level_tag == "error":
elif level == "error":
l.error(log_line)
else:
raise AssertionError(
f"While attempting to log {log_line}, encountered the unhandled level: {level_tag}"
f"While attempting to log {log_line}, encountered the unhandled level: {level}"
)
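# Usage sketch with a plain stdlib logger (in dbt this is called with the
# configured STDOUT_LOG and FILE_LOG):
import logging
from dbt.events.functions import send_to_logger

logging.basicConfig(format="%(message)s", level=logging.DEBUG)
send_to_logger(logging.getLogger("example"), level="info",
               log_line="12:00:00  Found 3 models")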
def send_exc_to_logger(
l: Logger, level_tag: str, log_line: str, exc_info=True, stack_info=False, extra=False
):
if level_tag == "test":
# TODO after implementing #3977 send to new test level
l.debug(log_line, exc_info=exc_info, stack_info=stack_info, extra=extra)
elif level_tag == "debug":
l.debug(log_line, exc_info=exc_info, stack_info=stack_info, extra=extra)
elif level_tag == "info":
l.info(log_line, exc_info=exc_info, stack_info=stack_info, extra=extra)
elif level_tag == "warn":
l.warning(log_line, exc_info=exc_info, stack_info=stack_info, extra=extra)
elif level_tag == "error":
l.error(log_line, exc_info=exc_info, stack_info=stack_info, extra=extra)
def warn_or_error(event, node=None):
if flags.WARN_ERROR:
from dbt.exceptions import raise_compiler_error
raise_compiler_error(scrub_secrets(event.info.msg, env_secrets()), node)
else:
raise AssertionError(
f"While attempting to log {log_line}, encountered the unhandled level: {level_tag}"
)
fire_event(event)
# an alternative to fire_event which only creates and logs the event value
# if the condition is met. Does nothing otherwise.
def fire_event_if(conditional: bool, lazy_e: Callable[[], BaseEvent]) -> None:
if conditional:
fire_event(lazy_e())
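# The Callable is the point: the event, and any expensive message it carries,
# is only constructed when the condition holds. A sketch with a hypothetical
# condition:
from dbt.events.functions import fire_event_if
from dbt.events.test_types import UnitTestInfo

verbose = False  # hypothetical condition
fire_event_if(verbose, lambda: UnitTestInfo(msg="expensive diagnostics"))
# verbose is False, so the lambda never runs and no event is built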
# top-level method for accessing the new eventing system
# this is where all the side effects happen branched by event type
# (i.e. - mutating the event history, printing to stdout, logging
# to files, etc.)
def fire_event(e: Event) -> None:
def fire_event(e: BaseEvent) -> None:
# skip logs when `--log-cache-events` is not passed
if isinstance(e, Cache) and not flags.LOG_CACHE_EVENTS:
return
@@ -267,7 +245,7 @@ def fire_event(e: Event) -> None:
# destination
log_line = create_log_line(e)
if log_line:
send_to_logger(GLOBAL_LOGGER, e.level_tag(), log_line)
send_to_logger(GLOBAL_LOGGER, level=e.log_level(), log_line=log_line)
return # exit the function to avoid using the current logger as well
# always logs debug level regardless of user input
@@ -275,29 +253,35 @@ def fire_event(e: Event) -> None:
log_line = create_log_line(e, file_output=True)
# doesn't send exceptions to exception logger
if log_line:
send_to_logger(FILE_LOG, level_tag=e.level_tag(), log_line=log_line)
send_to_logger(FILE_LOG, level=e.log_level(), log_line=log_line)
if not isinstance(e, NoStdOut):
# explicitly checking the debug flag here so that potentially expensive-to-construct
# log messages are not constructed if debug messages are never shown.
if e.level_tag() == "debug" and not flags.DEBUG:
if e.log_level() == "debug" and not flags.DEBUG:
return # eat the message in case it was one of the expensive ones
if e.level_tag() != "error" and flags.QUIET:
if e.log_level() != "error" and flags.QUIET:
return # eat all non-exception messages in quiet mode
log_line = create_log_line(e)
if log_line:
if not isinstance(e, ShowException):
send_to_logger(STDOUT_LOG, level_tag=e.level_tag(), log_line=log_line)
else:
send_exc_to_logger(
STDOUT_LOG,
level_tag=e.level_tag(),
log_line=log_line,
exc_info=e.exc_info,
stack_info=e.stack_info,
extra=e.extra,
)
send_to_logger(STDOUT_LOG, level=e.log_level(), log_line=log_line)
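# End-to-end sketch: constructing an event and firing it routes it through the
# file logger and, unless filtered by the rules above, the stdout logger:
from dbt.events.functions import fire_event
from dbt.events.test_types import UnitTestInfo

fire_event(UnitTestInfo(msg="hello from the eventing system"))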
def get_metadata_vars() -> Dict[str, str]:
global metadata_vars
if metadata_vars is None:
metadata_vars = {
k[len(METADATA_ENV_PREFIX) :]: v
for k, v in os.environ.items()
if k.startswith(METADATA_ENV_PREFIX)
}
return metadata_vars
def reset_metadata_vars() -> None:
global metadata_vars
metadata_vars = None
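# A sketch of the metadata round-trip (METADATA_ENV_PREFIX is assumed to be
# dbt's "DBT_ENV_CUSTOM_ENV_" prefix; the variable name is hypothetical):
import os
from dbt.events.functions import get_metadata_vars, reset_metadata_vars

os.environ["DBT_ENV_CUSTOM_ENV_RUN_ID"] = "abc123"  # hypothetical
reset_metadata_vars()                                # drop the cached dict
print(get_metadata_vars()["RUN_ID"])                 # prefix stripped -> abc123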
def get_invocation_id() -> str:
@@ -342,3 +326,11 @@ def add_to_event_history(event):
def reset_event_history():
global EVENT_HISTORY
EVENT_HISTORY = deque(maxlen=flags.EVENT_BUFFER_SIZE)
# Currently used to set the level in EventInfo, so logging events can
# provide more than one "level". Might be used in the future to set
# more fields in EventInfo, once some of that information is no longer global
def info(level="info"):
info = EventInfo(level=level)
return info


@@ -0,0 +1,16 @@
import os
from typing import List
from dbt.constants import SECRET_ENV_PREFIX
def env_secrets() -> List[str]:
return [v for k, v in os.environ.items() if k.startswith(SECRET_ENV_PREFIX) and v.strip()]
def scrub_secrets(msg: str, secrets: List[str]) -> str:
scrubbed = msg
for secret in secrets:
scrubbed = scrubbed.replace(secret, "*****")
return scrubbed
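# A small sketch of how the two helpers in this new module compose; the
# environment variable is hypothetical, and SECRET_ENV_PREFIX is assumed to be
# dbt's "DBT_ENV_SECRET_" prefix:
import os
from dbt.events.helpers import env_secrets, scrub_secrets

os.environ["DBT_ENV_SECRET_TOKEN"] = "s3cr3t"  # hypothetical secret
print(scrub_secrets("auth failed for token s3cr3t", env_secrets()))
# -> auth failed for token *****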

File diff suppressed because it is too large


@@ -1,55 +0,0 @@
from dbt.helper_types import Lazy
from mashumaro import DataClassDictMixin
from mashumaro.config import BaseConfig as MashBaseConfig
from mashumaro.types import SerializationStrategy
from typing import Dict, List
# The dbtClassMixin serialization class has a DateTime serialization strategy
# class. If a datetime ends up in an event class, we could use a similar class
# here to serialize it in our preferred format.
class ExceptionSerialization(SerializationStrategy):
def serialize(self, value):
out = str(value)
return out
def deserialize(self, value):
return Exception(value)
class BaseExceptionSerialization(SerializationStrategy):
def serialize(self, value):
return str(value)
def deserialize(self, value):
return BaseException(value)
# This is an explicit deserializer for the type Lazy[Dict[str, List[str]]]
# mashumaro does not support composing serialization strategies, so all
# future uses of Lazy will need to register a unique serialization class like this one.
class LazySerialization1(SerializationStrategy):
def serialize(self, value) -> Dict[str, List[str]]:
return value.force()
# we _can_ deserialize into a lazy value, but that defers running the deserialization
# function until the value is used, which can raise errors at very unexpected times.
# It's best practice to do strict deserialization unless you're in a very special case.
def deserialize(self, value):
raise Exception("Don't deserialize into a Lazy value. Try just using the value itself.")
# This class is the equivalent of dbtClassMixin that's used for serialization
# in other parts of the code. That class did extra things which we didn't want
# to use for events, so this class is a simpler version of dbtClassMixin.
class EventSerialization(DataClassDictMixin):
# This is where we register serialization strategies per type.
class Config(MashBaseConfig):
serialization_strategy = {
Exception: ExceptionSerialization(),
BaseException: ExceptionSerialization(),
Lazy[Dict[str, List[str]]]: LazySerialization1(),
}
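# A self-contained sketch of the mashumaro pattern this removed file used: a
# per-type SerializationStrategy registered on an inner Config class
# (mashumaro 3.x API; the class and field names here are made up):
from dataclasses import dataclass
from mashumaro import DataClassDictMixin
from mashumaro.config import BaseConfig
from mashumaro.types import SerializationStrategy

class ExcAsStr(SerializationStrategy):
    def serialize(self, value):
        return str(value)        # Exception -> message string
    def deserialize(self, value):
        return Exception(value)  # message string -> Exception

@dataclass
class Crash(DataClassDictMixin):
    exc: Exception
    class Config(BaseConfig):
        serialization_strategy = {Exception: ExcAsStr()}

print(Crash(exc=Exception("boom")).to_dict())  # {'exc': 'boom'}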


@@ -1,6 +1,8 @@
from dataclasses import dataclass
from dbt.events.types import InfoLevel, DebugLevel, WarnLevel, ErrorLevel, ShowException
from dbt.events.types import InfoLevel, DebugLevel, WarnLevel, ErrorLevel
from dbt.events.base_types import NoFile
from dbt.events import proto_types as pl
from dbt.events.proto_types import EventInfo # noqa
# Keeping log messages for testing separate since they are used for debugging.
@@ -8,69 +10,54 @@ from dbt.events.base_types import NoFile
@dataclass
class IntegrationTestInfo(InfoLevel, NoFile):
msg: str
code: str = "T001"
class IntegrationTestInfo(InfoLevel, NoFile, pl.IntegrationTestInfo):
def code(self):
return "T001"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class IntegrationTestDebug(DebugLevel, NoFile):
msg: str
code: str = "T002"
class IntegrationTestDebug(DebugLevel, NoFile, pl.IntegrationTestDebug):
def code(self):
return "T002"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class IntegrationTestWarn(WarnLevel, NoFile):
msg: str
code: str = "T003"
class IntegrationTestWarn(WarnLevel, NoFile, pl.IntegrationTestWarn):
def code(self):
return "T003"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class IntegrationTestError(ErrorLevel, NoFile):
msg: str
code: str = "T004"
class IntegrationTestError(ErrorLevel, NoFile, pl.IntegrationTestError):
def code(self):
return "T004"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class IntegrationTestException(ShowException, ErrorLevel, NoFile):
msg: str
code: str = "T005"
class IntegrationTestException(ErrorLevel, NoFile, pl.IntegrationTestException):
def code(self):
return "T005"
def message(self) -> str:
return f"Integration Test: {self.msg}"
@dataclass
class UnitTestInfo(InfoLevel, NoFile):
msg: str
code: str = "T006"
class UnitTestInfo(InfoLevel, NoFile, pl.UnitTestInfo):
def code(self):
return "T006"
def message(self) -> str:
return f"Unit Test: {self.msg}"
# since mypy doesn't run on every file we need to suggest to mypy that every
# class gets instantiated. But we don't actually want to run this code.
# making the conditional `if False` causes mypy to skip it as dead code so
# we need to skirt around that by computing something it doesn't check statically.
#
# TODO remove these lines once we run mypy everywhere.
if 1 == 0:
IntegrationTestInfo(msg="")
IntegrationTestDebug(msg="")
IntegrationTestWarn(msg="")
IntegrationTestError(msg="")
IntegrationTestException(msg="")
UnitTestInfo(msg="")
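# Usage sketch of the new shape: events are now plain constructors whose fields
# (like msg) come from the betterproto message base, with code() and message()
# as the two overridden hooks:
e = UnitTestInfo(msg="selector matched 3 nodes")
print(e.code(), "-", e.message())  # T006 - Unit Test: selector matched 3 nodes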

core/dbt/events/types.proto — new file, 1711 lines (diff too large to display)

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -2,11 +2,9 @@ import builtins
import functools
from typing import NoReturn, Optional, Mapping, Any
from dbt.events.functions import fire_event, scrub_secrets, env_secrets
from dbt.events.types import GeneralWarningMsg, GeneralWarningException
from dbt.events.helpers import env_secrets, scrub_secrets
from dbt.events.types import JinjaLogWarning
from dbt.node_types import NodeType
from dbt import flags
from dbt.ui import line_wrap_message, warning_tag
import dbt.dataclass_schema
@@ -570,74 +568,11 @@ def doc_target_not_found(
raise_compiler_error(msg, model)
def _get_target_failure_msg(
def get_not_found_or_disabled_msg(
original_file_path,
unique_id,
resource_type_title,
target_name: str,
target_model_package: Optional[str],
include_path: bool,
reason: str,
target_kind: str,
) -> str:
target_package_string = ""
if target_model_package is not None:
target_package_string = "in package '{}' ".format(target_model_package)
source_path_string = ""
if include_path:
source_path_string = " ({})".format(original_file_path)
return "{} '{}'{} depends on a {} named '{}' {}which {}".format(
resource_type_title,
unique_id,
source_path_string,
target_kind,
target_name,
target_package_string,
reason,
)
def get_target_not_found_or_disabled_msg(
node,
target_name: str,
target_package: Optional[str],
disabled: Optional[bool] = None,
) -> str:
if disabled is None:
reason = "was not found or is disabled"
elif disabled is True:
reason = "is disabled"
else:
reason = "was not found"
return _get_target_failure_msg(
node.original_file_path,
node.unique_id,
node.resource_type.title(),
target_name,
target_package,
include_path=True,
reason=reason,
target_kind="node",
)
def ref_target_not_found(
model,
target_model_name: str,
target_model_package: Optional[str],
disabled: Optional[bool] = None,
) -> NoReturn:
msg = get_target_not_found_or_disabled_msg(
model, target_model_name, target_model_package, disabled
)
raise_compiler_error(msg, model)
def get_not_found_or_disabled_msg(
node,
target_name: str,
target_kind: str,
target_package: Optional[str] = None,
disabled: Optional[bool] = None,
@@ -648,15 +583,19 @@ def get_not_found_or_disabled_msg(
reason = "is disabled"
else:
reason = "was not found"
return _get_target_failure_msg(
node.original_file_path,
node.unique_id,
node.resource_type.title(),
target_package_string = ""
if target_package is not None:
target_package_string = "in package '{}' ".format(target_package)
return "{} '{}' ({}) depends on a {} named '{}' {}which {}".format(
resource_type_title,
unique_id,
original_file_path,
target_kind,
target_name,
target_package,
include_path=True,
reason=reason,
target_kind=target_kind,
target_package_string,
reason,
)
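# A worked example of the flattened builder (all names hypothetical;
# disabled=False selects the "was not found" reason):
msg = get_not_found_or_disabled_msg(
    original_file_path="models/orders.sql",
    unique_id="model.my_project.orders",
    resource_type_title="Model",
    target_name="stg_orders",
    target_kind="node",
    disabled=False,
)
print(msg)
# Model 'model.my_project.orders' (models/orders.sql) depends on a
# node named 'stg_orders' which was not found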
@@ -668,7 +607,9 @@ def target_not_found(
disabled: Optional[bool] = None,
) -> NoReturn:
msg = get_not_found_or_disabled_msg(
node=node,
original_file_path=node.original_file_path,
unique_id=node.unique_id,
resource_type_title=node.resource_type.title(),
target_name=target_name,
target_kind=target_kind,
target_package=target_package,
@@ -976,9 +917,7 @@ def raise_patch_targets_not_found(patches):
def _fix_dupe_msg(path_1: str, path_2: str, name: str, type_name: str) -> str:
if path_1 == path_2:
return (
f"remove one of the {type_name} entries for {name} in this file:\n" f" - {path_1!s}\n"
)
return f"remove one of the {type_name} entries for {name} in this file:\n - {path_1!s}\n"
else:
return (
f"remove the {type_name} entry for {name} in one of these files:\n"
@@ -1043,19 +982,6 @@ def raise_unrecognized_credentials_type(typename, supported_types):
)
def warn_invalid_patch(patch, resource_type):
msg = line_wrap_message(
f"""\
'{patch.name}' is a {resource_type} node, but it is
specified in the {patch.yaml_key} section of
{patch.original_file_path}.
To fix this error, place the `{patch.name}`
specification under the {resource_type.pluralize()} key instead.
"""
)
warn_or_error(msg, log_fmt=warning_tag("{}"))
def raise_not_implemented(msg):
raise NotImplementedException("ERROR: {}".format(msg))
@@ -1069,24 +995,8 @@ def raise_duplicate_alias(
raise AliasException(f'Got duplicate keys: ({key_names}) all map to "{canonical_key}"')
def warn_or_error(msg, node=None, log_fmt=None):
if flags.WARN_ERROR:
raise_compiler_error(scrub_secrets(msg, env_secrets()), node)
else:
fire_event(GeneralWarningMsg(msg=msg, log_fmt=log_fmt))
def warn_or_raise(exc, log_fmt=None):
if flags.WARN_ERROR:
raise exc
else:
fire_event(GeneralWarningException(exc=str(exc), log_fmt=log_fmt))
def warn(msg, node=None):
# there's no reason to expose log_fmt to macros - it's only useful for
# handling colors
warn_or_error(msg, node=node)
dbt.events.functions.warn_or_error(JinjaLogWarning(msg=msg), node=node)
return ""


@@ -113,6 +113,7 @@ def env_set_path(key: str) -> Optional[Path]:
MACRO_DEBUGGING = env_set_truthy("DBT_MACRO_DEBUGGING")
DEFER_MODE = env_set_truthy("DBT_DEFER_TO_STATE")
FAVOR_STATE_MODE = env_set_truthy("DBT_FAVOR_STATE_STATE")
ARTIFACT_STATE_PATH = env_set_path("DBT_ARTIFACT_STATE_PATH")
ENABLE_LEGACY_LOGGER = env_set_truthy("DBT_ENABLE_LEGACY_LOGGER")


@@ -20,7 +20,7 @@ from .selector_spec import (
INTERSECTION_DELIMITER = ","
DEFAULT_INCLUDES: List[str] = ["fqn:*", "source:*", "exposure:*", "metric:*"]
DEFAULT_INCLUDES: List[str] = ["fqn:*", "source:*", "exposure:*", "metric:*", "entity:*"]
DEFAULT_EXCLUDES: List[str] = []


@@ -90,7 +90,7 @@ class Graph:
for node in include_nodes:
if node not in new_graph:
raise ValueError(
"Couldn't find model '{}' -- does it exist or is " "it disabled?".format(node)
"Couldn't find model '{}' -- does it exist or is it disabled?".format(node)
)
return Graph(new_graph)

Some files were not shown because too many files have changed in this diff