Compare commits

...

44 Commits

Author SHA1 Message Date
dependabot[bot]
83c5a8c24b Bump ubuntu from 22.04 to 23.04 (#6865)
* Bump ubuntu from 22.04 to 23.04

Bumps ubuntu from 22.04 to 23.04.

---
updated-dependencies:
- dependency-name: ubuntu
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Add automated changelog yaml from template for bot PR

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2023-02-09 16:59:25 -05:00
FishtownBuildBot
6d78e5e640 Add most recent dbt-docs changes (#6923)
* Add new index.html and changelog yaml files from dbt-docs

* Update .changes/unreleased/Docs-20230209-082901.yaml

---------

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2023-02-09 13:23:12 -06:00
Ryan Harris
f54a876f65 Update link to dbt install docs (#6883)
* update dbt install link

* add changelog entry
2023-02-09 09:35:33 -08:00
Peter Webb
8b2c9bf39d Ensure flush() after logging write() (#6909)
* ct-2063: Ensure flush after logging, by using Python's logging subsystem directly

* ct-2063: Add changelog entry
2023-02-09 09:37:37 -05:00
Jeremy Cohen
298bf8a1d4 Add back depends_on for seeds - only macros, never nodes (#6851)
* Extend functional tests for seeds w hooks

* Add MacroDependsOn to seeds, raise exception for other deps

* Add changelog entry

* Fix unit tests

* Update upgrade_seed_content

* Cleanup

* Regen manifest v8 schema. Fix tests

* Be less magical

* PR feedback
2023-02-09 10:56:12 +01:00
Emily Rockman
abbece8876 1.4 regression: Check if status has node attribute (#6899)
* check for node

* add changelog

* add test for regression
2023-02-08 13:49:24 -06:00
dave-connors-3
3ad40372e6 add base class for merge exclude tests (#6700)
* add base class for merge exclude tests

* changie <33

* remove comments

* add comments to sql, remove and clarify contents of resultholder
2023-02-08 10:20:13 -08:00
Emily Rockman
df64511feb Dynamically list all .latest branches for scheduled testing (#6682)
* first pass at automating latest branches

* checkout repo first

* fetch all history

* reorg

* debugging

* update test id

* swap lines

* incorporate new branch action

* tweak vars
2023-02-08 08:02:21 -06:00
Peter Webb
ccb4fa26cd CT-1917: Fix a regression in the behavior of the -q/--quiet cli parameter (#6886) 2023-02-07 16:15:42 -05:00
Neelesh Salian
4c63b630de [CT-1959]: moving simple_seed tests to adapter zone (#6859)
* Formatting

* Changelog entry

* Rename to BaseSimpleSeedColumnOverride

* Better error handling

* Update test to include the BOM test

* Cleanup and formatting

* Unused import remove

* nit line

* PR comments
2023-02-06 19:51:32 -08:00
colin-rogers-dbt
b2ea2b8b25 move test_store_test_failures.py to adapter zone (#6816) 2023-02-02 10:55:09 -08:00
Emily Rockman
2245d8d710 update regex to match all iterations (#6839)
* update regex to match all iterations

* convert to num to match all adapters

* add comments, remove extra .

* clarify with more comments

* Update .bumpversion.cfg

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>

---------

Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
2023-02-02 12:16:06 -06:00
Gerda Shank
d9424cc710 CT 2000 fix semver prerelease comparisons (#6838)
* Modify semver.py to not use packaging.version.parse

* Changie
2023-02-02 12:22:48 -05:00
Mila Page
1a6e4a00c7 Add clearer directions for custom test suite vars in Makefile. (#6764)
* Add clearer directions for custom test suite vars in Makefile.

* Fix up PR for review

* Fix erroneous whitespace.

* Fix a spelling error.

* Add documentation to discourage makefile edits but provide override tooling.

* Fix quotation marks. Very strange behavior

* Compact code and verify quotations happy inside bash and python.

* Fold comments into Makefile.

---------

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2023-01-31 13:53:55 -08:00
Mila Page
42b7caae19 Ct 1827/064 column comments tests conversion (#6766)
* Convert test and make it a bit more pytest-onic

* Ax old integration test.

* Run black on test conversion

* I didn't like how pytest was running the fixture, so I wrapped it in a closure.

* Merge converted test into persist docs.

* Move persist docs tests to the adapter zone. Prep for adapter tests.

* Fix up test names

* Fix name to be less confusing.

---------

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2023-01-31 12:58:19 -08:00
Emily Rockman
622e5fd71d fix contributor list generation (#6799) 2023-01-31 13:05:13 -06:00
Kshitij Aranke
d2f3cdd6de [CT-1841] Convert custom target test to Pytest (#6765) 2023-01-30 07:55:28 -08:00
Alexander Smolyakov
92d1ef8482 Update release workflow (#6778)
- Update AWS secrets
- Rework condition for Slack notification
2023-01-30 09:19:10 -06:00
Neelesh Salian
a8abc49632 [CT-1940] Stand-alone Python module for PostgresColumn (#6773) 2023-01-27 17:00:39 -08:00
Neelesh Salian
c653330911 Adding nssalian to committers list (#6769) 2023-01-27 11:16:59 -08:00
Alexander Smolyakov
82d9b2fa87 [CI/CD] Update release workflow and introduce workflow for nightly releases (#6602)
* Add release workflows

* Update nightly-release.yml

* Set default `test_run` value to `true`

* Update .bumpversion.cfg

* Resolve review comment

- Update workflow docs
- Change workflow name
- Set `test_run` default value to `true`

* Update Slack secret

* PyPI
2023-01-27 09:04:31 -06:00
Mila Page
3f96fad4f9 Ct 1629/052 column quoting tests conversion (#6652)
* Test converted and reformatted for pytest.

* Ax old versions of 052 test

* Nix the 'os' import and black format

* Change names of models to be more PEP like

* cleanup code

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2023-01-26 12:23:02 -08:00
Peter Webb
c2c4757a2b Graph Analysis Optimization for Large Dags (#6720)
* Optimization to remove graph analysis bottleneck in large dags.

* Add changelog entry.
2023-01-26 14:27:42 -05:00
Mila Page
c65ba11ae6 Ct 1827/064 column comments tests conversion (#6654)
* Convert test and make it a bit more pytest-onic

* Ax old integration test.

* Run black on test conversion

* I didn't like how pytest was running the fixture, so I wrapped it in a closure.

* Merge converted test into persist docs.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2023-01-26 00:54:00 -08:00
Matthew Beall
b0651b13b5 change exposure_content to source_content (#6739)
* change `exposure_content` to `source_content`

* Adding changelog

Co-authored-by: Leah Antkiewicz <leah.antkiewicz@fishtownanalytics.com>
2023-01-25 19:51:34 -05:00
Gerda Shank
a34521ec07 CT 1894 log partial parsing var changes and sort cli vars before hashing (#6713)
* Log information about vars_hash, normalize cli_vars before hashing

* Changie

* Add to test_events.py

* Update core/dbt/events/types.py

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
2023-01-25 17:47:45 -05:00
Matthew McKnight
da47b90503 [CT-1630] Convert Column_types tests (#6690)
* init commit for column_types test conversion

* init start of test_column_types.py

* pass test macros into both tests

* remove alt tests, remove old tests, push up working conversion

* rename base class, move to adapter zone so adapters can use

* typo fix
2023-01-25 14:57:16 -06:00
Peter Webb
db99e2f68d Event Clean-Up (#6716)
* CT-1857: Event cleanup

* Add changelog entry.
2023-01-25 13:51:52 -05:00
Michelle Ark
cbb9117ab9 test_init conversion (#6610)
* convert 044_init_tests
2023-01-24 16:22:26 -05:00
Gerda Shank
e2ccf011d9 CT 1886 include adapter_response in NodeFinished log message (#6709)
* Include adapter_response in run_result in NodeFinished log event

* Changie
2023-01-24 14:25:32 -05:00
Aezo
17014bfad3 add adapter_response for test (#6645)
resolves https://github.com/dbt-labs/dbt-core/issues/2964
2023-01-24 09:58:08 -08:00
Peter Webb
7b464b8a49 CT-1718: Add Note and Formatting event types (#6691)
* CT-1718: Add Note and Formatting event types

* CT-1718: Add changelog entry
2023-01-23 16:39:29 -05:00
Sean McIntyre
5c765bf3e2 Cheeky performance improvement on big DAGs (#6694)
* Short-circuit set operations for nice speed boost

* Add changelog

* Fix issue

* Update .changes/unreleased/Under the Hood-20230122-215235.yaml

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>

Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
2023-01-23 09:09:09 -07:00
Mila Page
93619a9a37 Ct 738/dbt debug log fix (#6541)
* Code cleanup and adding stderr to capture dbt

* Debug with --log-format json now prints structured logs.

* Add changelog.

* Move logs into miscellaneous and add values to test.

* nix whitespace and fix log levels

* List will now do structured logging when log format is set to json.

* Add a quick None check.

* Add a get guard to class check.

* Better null checking

* The boolean doesn't reflect the original logic but a try-catch does.

* Address some code review comments and get us working again.

* Simplify logic now that we have a namespace object for self.config.args.

* Simplify logic for json log format checking.

* Simplify code to allow our GraphTest cases to pass while also hiding compile stats from dbt ls/list.

* Simplify structured logging types.

* Fix up boolean logic and simplify via De Morgan's laws.

* Nix unneeded fixture.

Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
2023-01-20 16:37:54 -08:00
Doug Beatty
a181cee6ae Improve error message for packages missing dbt_project.yml (#6685)
* Improve error message for packages missing `dbt_project.yml`

* Use black formatting

* Update capitalization of expected error message
2023-01-20 13:46:36 -07:00
Michelle Ark
3aeab73740 convert 069_build_tests (#6678) 2023-01-20 14:27:02 -05:00
Jeremy Cohen
9801eebc58 Consolidate changie entries from #6620 (#6684) 2023-01-20 19:58:40 +01:00
Peter Webb
6954c4df1b CT-1786: Port docs tests to pytest (#6608)
* CT-1786: Port docs tests to pytest

* Add generated CLI API docs

* CT-1786: Comply with the new style requirements

Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
2023-01-19 11:11:17 -05:00
dave-connors-3
f841a7ca76 add backwards compatibility and default argument for incremental_predicates (#6628)
* add backwards compatibility and default argument

* changie <3

* Update .changes/unreleased/Fixes-20230117-101342.yaml

Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
2023-01-19 15:20:19 +01:00
Jeremy Cohen
07a004b301 convert 062_defer_state_tests (#6616)
* Fix --favor-state flag

* Convert 062_defer_state_tests

* Revert "Fix --favor-state flag"

This reverts commit ccbdcbad98b26822629364e6fdbd2780db0c20d3.

* Reformat

* Revert "Revert "Fix --favor-state flag""

This reverts commit fa9d2a09d6.
2023-01-19 11:00:09 +01:00
Jeremy Cohen
b05582de39 mv on_schema_change tests -> "adapter zone" (#6618)
* Mv incremental on_schema_change tests to 'adapter zone'

* Use type_string()

* Cleanup
2023-01-19 10:12:59 +01:00
Jeremy Cohen
fa7c4d19f0 Respect quoting config in dbt-py models (#6620)
* Respect quoting for 'this' in dbt-py models #6619

* Respect quoting for ref/source in dbt-py models #6103

* Add changelog entries
2023-01-19 09:34:08 +01:00
Jeremy Cohen
066346faa2 convert 038_caching_tests (#6612)
* convert 038_caching_tests

* Adapt for dbt-snowflake

* PR feedback

* Reformat
2023-01-18 22:37:50 +01:00
Emily Rockman
0a03355ceb update test matrix (#6604) 2023-01-18 14:16:34 -06:00
232 changed files with 5349 additions and 5914 deletions

View File

@@ -1,13 +1,21 @@
[bumpversion]
current_version = 1.5.0a1
parse = (?P<major>\d+)
\.(?P<minor>\d+)
\.(?P<patch>\d+)
((?P<prekind>a|b|rc)
(?P<pre>\d+) # pre-release version num
# `parse` allows parsing the version into the parts we need to check. There are some
# unnamed groups, and that's okay because they do not need to be audited. If any part
# of the version passed in does not match the regex, parsing will fail.
# expected matches: `1.5.0`, `1.5.0a1`, `1.5.0a1.dev123457+nightly`
# expected failures: `1`, `1.5`, `1.5.2-a1`, `text1.5.0`
parse = (?P<major>[\d]+) # major version number
\.(?P<minor>[\d]+) # minor version number
\.(?P<patch>[\d]+) # patch version number
(((?P<prekind>a|b|rc) # optional pre-release type
?(?P<num>[\d]+?)) # optional pre-release version number
\.?(?P<nightly>[a-z0-9]+\+[a-z]+)? # optional nightly release indicator
)?
serialize =
{major}.{minor}.{patch}{prekind}{pre}
{major}.{minor}.{patch}{prekind}{num}.{nightly}
{major}.{minor}.{patch}{prekind}{num}
{major}.{minor}.{patch}
commit = False
tag = False
@@ -21,9 +29,11 @@ values =
rc
final
[bumpversion:part:pre]
[bumpversion:part:num]
first_value = 1
[bumpversion:part:nightly]
[bumpversion:file:core/setup.py]
[bumpversion:file:core/dbt/version.py]
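The comments in this hunk spell out which version strings the new `parse` pattern must accept and reject. As a quick sanity check, here is a small Python sketch exercising the pattern transcribed from the diff (inline comments and line breaks removed; `fullmatch` approximates the audit, which may differ from bumpversion's own matching machinery):

import re

# Pattern transcribed from the .bumpversion.cfg hunk above.
VERSION_RE = re.compile(
    r"(?P<major>[\d]+)\.(?P<minor>[\d]+)\.(?P<patch>[\d]+)"
    r"(((?P<prekind>a|b|rc)?(?P<num>[\d]+?))"
    r"\.?(?P<nightly>[a-z0-9]+\+[a-z]+)?)?"
)

should_match = ["1.5.0", "1.5.0a1", "1.5.0a1.dev123457+nightly"]
should_fail = ["1", "1.5", "1.5.2-a1", "text1.5.0"]

for version in should_match:
    assert VERSION_RE.fullmatch(version), version
for version in should_fail:
    assert not VERSION_RE.fullmatch(version), version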

View File

@@ -0,0 +1,6 @@
kind: "Dependencies"
body: "Bump ubuntu from 22.04 to 23.04"
time: 2023-02-06T00:09:26.00000Z
custom:
Author: dependabot[bot]
PR: 6865

View File

@@ -0,0 +1,6 @@
kind: Docs
body: update link to installation instructions
time: 2023-02-07T12:38:07.336783-05:00
custom:
Author: ryancharris
Issue: None

View File

@@ -0,0 +1,6 @@
kind: Docs
body: Fix JSON path to overview docs
time: 2023-02-09T08:29:01.432616-07:00
custom:
Author: halvorlu
Issue: "366"

View File

@@ -0,0 +1,6 @@
kind: Features
body: Have dbt debug spit out structured json logs with flags enabled.
time: 2023-01-07T00:31:57.516063-08:00
custom:
Author: versusfacit
Issue: "5353"

View File

@@ -0,0 +1,6 @@
kind: Features
body: add adapter_response to dbt test and freshness result
time: 2023-01-18T23:38:01.857342+08:00
custom:
Author: aezomz
Issue: "2964"

View File

@@ -0,0 +1,6 @@
kind: Features
body: Improve error message for packages missing `dbt_project.yml`
time: 2023-01-20T11:29:21.509967-07:00
custom:
Author: dbeatty10
Issue: "6663"

View File

@@ -0,0 +1,6 @@
kind: Features
body: Adjust makefile to have clearer instructions for CI env var changes.
time: 2023-01-26T15:47:16.887327-08:00
custom:
Author: versusfacit
Issue: "6689"

View File

@@ -0,0 +1,6 @@
kind: Features
body: Stand-alone Python module for PostgresColumn
time: 2023-01-27T16:28:12.212427-08:00
custom:
Author: nssalian
Issue: "6772"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Respect quoting config for dbt.ref(), dbt.source(), and dbt.this() in dbt-py models
time: 2023-01-16T12:36:45.63092+01:00
custom:
Author: jtcohen6
Issue: 6103 6619

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Provide backward compatibility for `get_merge_sql` arguments
time: 2023-01-17T10:13:42.118336-06:00
custom:
Author: dave-connors-3
Issue: "6625"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: add merge_exclude_columns adapter tests
time: 2023-01-23T13:28:14.808748-06:00
custom:
Author: dave-connors-3
Issue: "6699"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Include adapter_response in NodeFinished run_result log event
time: 2023-01-24T11:58:37.74179-05:00
custom:
Author: gshank
Issue: "6703"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Sort cli vars before hashing for partial parsing
time: 2023-01-24T14:19:43.333628-05:00
custom:
Author: gshank
Issue: "6710"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: '[Regression] exposure_content referenced incorrectly'
time: 2023-01-25T19:17:39.942081-05:00
custom:
Author: Mathyoub
Issue: "6738"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Remove pin on packaging and stop using it for prerelease comparisons
time: 2023-02-01T15:44:18.279158-05:00
custom:
Author: gshank
Issue: "6834"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Readd depends_on.macros to SeedNode, to support seeds with hooks calling macros
time: 2023-02-03T13:55:57.853715+01:00
custom:
Author: jtcohen6
Issue: "6806"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Fix regression of --quiet cli parameter behavior
time: 2023-02-07T14:35:44.160163-05:00
custom:
Author: peterallenwebb
Issue: "6749"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Ensure results from hooks contain nodes when processing them
time: 2023-02-08T11:05:51.952494-06:00
custom:
Author: emmyoop
Issue: "6796"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Always flush stdout after logging
time: 2023-02-08T15:49:35.175874-05:00
custom:
Author: peterallenwebb
Issue: "6901"

View File

@@ -0,0 +1,6 @@
kind: Under the Hood
body: Port docs tests to pytest
time: 2023-01-13T15:07:00.477038-05:00
custom:
Author: peterallenwebb
Issue: "6573"

View File

@@ -0,0 +1,7 @@
kind: Under the Hood
body: Replaced the EmptyLine event with a more general Formatting event, and added
a Note event.
time: 2023-01-20T17:22:54.45828-05:00
custom:
Author: peterallenwebb
Issue: "6481"

View File

@@ -0,0 +1,6 @@
kind: Under the Hood
body: Small optimization on manifest parsing benefitting large DAGs
time: 2023-01-22T21:52:35.549814+01:00
custom:
Author: boxysean
Issue: "6697"

View File

@@ -0,0 +1,6 @@
kind: Under the Hood
body: Revised and simplified various structured logging events
time: 2023-01-24T15:35:53.065356-05:00
custom:
Author: peterallenwebb
Issue: 6664 6665 6666

View File

@@ -0,0 +1,6 @@
kind: Under the Hood
body: ' Optimized GraphQueue to remove graph analysis bottleneck in large dags.'
time: 2023-01-26T13:59:39.518345-05:00
custom:
Author: peterallenwebb
Issue: "6759"

View File

@@ -0,0 +1,6 @@
kind: Under the Hood
body: '[CT-1841] Convert custom target test to Pytest'
time: 2023-01-26T16:47:41.198714-08:00
custom:
Author: aranke
Issue: "6638"

View File

@@ -0,0 +1,6 @@
kind: Under the Hood
body: Moving simple_seed to adapter zone to help adapter test conversions
time: 2023-02-03T14:35:51.481856-08:00
custom:
Author: nssalian
Issue: CT-1959

View File

@@ -97,22 +97,28 @@ footerFormat: |
{{- /* we only want to include non-core team contributors */}}
{{- if not (has $authorLower $core_team)}}
{{- $changeList := splitList " " $change.Custom.Author }}
{{- /* Docs kind link back to dbt-docs instead of dbt-core issues */}}
{{- $IssueList := list }}
{{- $changeLink := $change.Kind }}
{{- if or (eq $change.Kind "Dependencies") (eq $change.Kind "Security") }}
{{- $changeLink = "[#nbr](https://github.com/dbt-labs/dbt-core/pull/nbr)" | replace "nbr" $change.Custom.PR }}
{{- else if eq $change.Kind "Docs"}}
{{- $changeLink = "[dbt-docs/#nbr](https://github.com/dbt-labs/dbt-docs/issues/nbr)" | replace "nbr" $change.Custom.Issue }}
{{- $changes := splitList " " $change.Custom.PR }}
{{- range $issueNbr := $changes }}
{{- $changeLink := "[#nbr](https://github.com/dbt-labs/dbt-core/pull/nbr)" | replace "nbr" $issueNbr }}
{{- $IssueList = append $IssueList $changeLink }}
{{- end -}}
{{- else }}
{{- $changeLink = "[#nbr](https://github.com/dbt-labs/dbt-core/issues/nbr)" | replace "nbr" $change.Custom.Issue }}
{{- $changes := splitList " " $change.Custom.Issue }}
{{- range $issueNbr := $changes }}
{{- $changeLink := "[#nbr](https://github.com/dbt-labs/dbt-core/issues/nbr)" | replace "nbr" $issueNbr }}
{{- $IssueList = append $IssueList $changeLink }}
{{- end -}}
{{- end }}
{{- /* check if this contributor has other changes associated with them already */}}
{{- if hasKey $contributorDict $author }}
{{- $contributionList := get $contributorDict $author }}
{{- $contributionList = append $contributionList $changeLink }}
{{- $contributionList = concat $contributionList $IssueList }}
{{- $contributorDict := set $contributorDict $author $contributionList }}
{{- else }}
{{- $contributionList := list $changeLink }}
{{- $contributionList := $IssueList }}
{{- $contributorDict := set $contributorDict $author $contributionList }}
{{- end }}
{{- end}}
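Read as code, the new footer logic splits the `Custom.PR` / `Custom.Issue` field on spaces so a single changelog entry can credit several PRs or issues, then concatenates those links onto the contributor's running list. A rough Python equivalent of the link-building (the data shapes here are illustrative stand-ins, not changie's actual model):

from collections import defaultdict

CORE_PULL = "[#{n}](https://github.com/dbt-labs/dbt-core/pull/{n})"
CORE_ISSUE = "[#{n}](https://github.com/dbt-labs/dbt-core/issues/{n})"
DOCS_ISSUE = "[dbt-docs/#{n}](https://github.com/dbt-labs/dbt-docs/issues/{n})"

def issue_links(kind, pr, issue):
    # Dependencies/Security entries link core PRs; Docs entries link
    # dbt-docs issues; everything else links core issues.
    if kind in ("Dependencies", "Security"):
        return [CORE_PULL.format(n=n) for n in pr.split()]
    if kind == "Docs":
        return [DOCS_ISSUE.format(n=n) for n in issue.split()]
    return [CORE_ISSUE.format(n=n) for n in issue.split()]

contributions = defaultdict(list)
# e.g. the "Respect quoting config" changelog entry lists two issues: "6103 6619"
contributions["jtcohen6"] += issue_links("Fixes", "", "6103 6619")
print(contributions["jtcohen6"])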

.github/workflows/nightly-release.yml (vendored, new file, 109 lines)
View File

@@ -0,0 +1,109 @@
# **what?**
# Nightly releases to GitHub and PyPI. This workflow produces the following outcomes:
# - generate and validate data for the nightly release (commit SHA, version number, release branch);
# - pass data to the release workflow;
# - the nightly release is pushed to GitHub as a draft release;
# - the nightly build is pushed to test PyPI;
#
# **why?**
# Ensure an automated and tested release process for nightly builds
#
# **when?**
# This workflow runs on schedule or can be run manually on demand.
name: Nightly Test Release to GitHub and PyPI
on:
workflow_dispatch: # for manual triggering
schedule:
- cron: 0 9 * * *
permissions:
contents: write # this is the permission that allows creating a new release
defaults:
run:
shell: bash
env:
RELEASE_BRANCH: "main"
jobs:
aggregate-release-data:
runs-on: ubuntu-latest
outputs:
commit_sha: ${{ steps.resolve-commit-sha.outputs.release_commit }}
version_number: ${{ steps.nightly-release-version.outputs.number }}
release_branch: ${{ steps.release-branch.outputs.name }}
steps:
- name: "Checkout ${{ github.repository }} Branch ${{ env.RELEASE_BRANCH }}"
uses: actions/checkout@v3
with:
ref: ${{ env.RELEASE_BRANCH }}
- name: "Resolve Commit To Release"
id: resolve-commit-sha
run: |
commit_sha=$(git rev-parse HEAD)
echo "release_commit=$commit_sha" >> $GITHUB_OUTPUT
- name: "Get Current Version Number"
id: version-number-sources
run: |
current_version=`awk -F"current_version = " '{print $2}' .bumpversion.cfg | tr '\n' ' '`
echo "current_version=$current_version" >> $GITHUB_OUTPUT
- name: "Audit Version And Parse Into Parts"
id: semver
uses: dbt-labs/actions/parse-semver@v1.1.0
with:
version: ${{ steps.version-number-sources.outputs.current_version }}
- name: "Get Current Date"
id: current-date
run: echo "date=$(date +'%m%d%Y')" >> $GITHUB_OUTPUT
- name: "Generate Nightly Release Version Number"
id: nightly-release-version
run: |
number="${{ steps.semver.outputs.version }}.dev${{ steps.current-date.outputs.date }}+nightly"
echo "number=$number" >> $GITHUB_OUTPUT
- name: "Audit Nightly Release Version And Parse Into Parts"
uses: dbt-labs/actions/parse-semver@v1.1.0
with:
version: ${{ steps.nightly-release-version.outputs.number }}
- name: "Set Release Branch"
id: release-branch
run: |
echo "name=${{ env.RELEASE_BRANCH }}" >> $GITHUB_OUTPUT
log-outputs-aggregate-release-data:
runs-on: ubuntu-latest
needs: [aggregate-release-data]
steps:
- name: "[DEBUG] Log Outputs"
run: |
echo commit_sha : ${{ needs.aggregate-release-data.outputs.commit_sha }}
echo version_number: ${{ needs.aggregate-release-data.outputs.version_number }}
echo release_branch: ${{ needs.aggregate-release-data.outputs.release_branch }}
release-github-pypi:
needs: [aggregate-release-data]
uses: ./.github/workflows/release.yml
with:
sha: ${{ needs.aggregate-release-data.outputs.commit_sha }}
target_branch: ${{ needs.aggregate-release-data.outputs.release-branch }}
version_number: ${{ needs.aggregate-release-data.outputs.version_number }}
build_script_path: "scripts/build-dist.sh"
env_setup_script_path: "scripts/env-setup.sh"
s3_bucket_name: "core-team-artifacts"
package_test_command: "dbt --version"
test_run: true
nightly_release: true
secrets: inherit
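The version-number step is plain string assembly: take the parsed semver, append `.dev<MMDDYYYY>`, then the `+nightly` local-version tag. A minimal Python sketch of the same construction (the function name is mine):

from datetime import date

def nightly_version(base_version, today):
    # Mirrors: number="<semver>.dev<MMDDYYYY>+nightly" with date +'%m%d%Y'
    return "{}.dev{}+nightly".format(base_version, today.strftime("%m%d%Y"))

print(nightly_version("1.5.0a1", date(2023, 2, 9)))  # 1.5.0a1.dev02092023+nightly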

View File

@@ -28,7 +28,33 @@ on:
permissions: read-all
jobs:
fetch-latest-branches:
runs-on: ubuntu-latest
outputs:
latest-branches: ${{ steps.get-latest-branches.outputs.repo-branches }}
steps:
- name: "Fetch dbt-core Latest Branches"
uses: dbt-labs/actions/fetch-repo-branches@v1.1.1
id: get-latest-branches
with:
repo_name: ${{ github.event.repository.name }}
organization: "dbt-labs"
pat: ${{ secrets.GITHUB_TOKEN }}
fetch_protected_branches_only: true
regex: "^1.[0-9]+.latest$"
perform_match_method: "match"
retries: 3
- name: "[ANNOTATION] ${{ github.event.repository.name }} - branches to test"
run: |
title="${{ github.event.repository.name }} - branches to test"
message="The workflow will run tests for the following branches of the ${{ github.event.repository.name }} repo: ${{ steps.get-latest-branches.outputs.repo-branches }}"
echo "::notice $title::$message"
kick-off-ci:
needs: [fetch-latest-branches]
name: Kick-off CI
runs-on: ubuntu-latest
@@ -39,7 +65,9 @@ jobs:
max-parallel: 1
fail-fast: false
matrix:
branch: [1.0.latest, 1.1.latest, 1.2.latest, 1.3.latest, main]
branch: ${{ fromJSON(needs.fetch-latest-branches.outputs.latest-branches) }}
include:
- branch: 'main'
steps:
- name: Call CI workflow for ${{ matrix.branch }} branch
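The hardcoded branch list is replaced by whatever protected branches match `^1.[0-9]+.latest$`, with `main` added back via the matrix `include`. A small Python sketch of that selection (note the workflow's regex leaves the dots unescaped, which this sketch reproduces as written):

import re

LATEST = re.compile(r"^1.[0-9]+.latest$")  # pattern copied from the workflow

branches = ["main", "1.0.latest", "1.3.latest", "feature/foo", "1.4.latest"]
matrix = [b for b in branches if LATEST.match(b)] + ["main"]  # include: main
print(matrix)  # ['1.0.latest', '1.3.latest', '1.4.latest', 'main']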

View File

@@ -1,24 +1,110 @@
# **what?**
# Take the given commit, run unit tests specifically on that sha, build and
# package it, and then release to GitHub and PyPi with that specific build
# Release workflow provides the following steps:
# - checkout the given commit;
# - validate version in sources and changelog file for given version;
# - bump the version and generate a changelog if needed;
# - merge all changes to the target branch if needed;
# - run unit and integration tests against given commit;
# - build and package that SHA;
# - release it to GitHub and PyPI with that specific build;
#
# **why?**
# Ensure an automated and tested release process
#
# **when?**
# This will only run manually with a given sha and version
# This workflow can be run manually on demand or can be called by other workflows
name: Release to GitHub and PyPi
name: Release to GitHub and PyPI
on:
workflow_dispatch:
inputs:
sha:
description: 'The last commit sha in the release'
required: true
description: "The last commit sha in the release"
type: string
required: true
target_branch:
description: "The branch to release from"
type: string
required: true
version_number:
description: 'The release version number (i.e. 1.0.0b1)'
required: true
description: "The release version number (i.e. 1.0.0b1)"
type: string
required: true
build_script_path:
description: "Build script path"
type: string
default: "scripts/build-dist.sh"
required: true
env_setup_script_path:
description: "Environment setup script path"
type: string
default: "scripts/env-setup.sh"
required: false
s3_bucket_name:
description: "AWS S3 bucket name"
type: string
default: "core-team-artifacts"
required: true
package_test_command:
description: "Package test command"
type: string
default: "dbt --version"
required: true
test_run:
description: "Test run (Publish release as draft)"
type: boolean
default: true
required: false
nightly_release:
description: "Nightly release to dev environment"
type: boolean
default: false
required: false
workflow_call:
inputs:
sha:
description: "The last commit sha in the release"
type: string
required: true
target_branch:
description: "The branch to release from"
type: string
required: true
version_number:
description: "The release version number (i.e. 1.0.0b1)"
type: string
required: true
build_script_path:
description: "Build script path"
type: string
default: "scripts/build-dist.sh"
required: true
env_setup_script_path:
description: "Environment setup script path"
type: string
default: "scripts/env-setup.sh"
required: false
s3_bucket_name:
description: "AWS S3 bucket name"
type: string
default: "core-team-artifacts"
required: true
package_test_command:
description: "Package test command"
type: string
default: "dbt --version"
required: true
test_run:
description: "Test run (Publish release as draft)"
type: boolean
default: true
required: false
nightly_release:
description: "Nightly release to dev environment"
type: boolean
default: false
required: false
permissions:
contents: write # this is the permission that allows creating a new release
@@ -28,175 +114,117 @@ defaults:
shell: bash
jobs:
unit:
name: Unit test
log-inputs:
name: Log Inputs
runs-on: ubuntu-latest
env:
TOXENV: "unit"
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.inputs.sha }}
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
- name: "[DEBUG] Print Variables"
run: |
pip install --user --upgrade pip
pip install tox
pip --version
tox --version
echo The last commit sha in the release: ${{ inputs.sha }}
echo The branch to release from: ${{ inputs.target_branch }}
echo The release version number: ${{ inputs.version_number }}
echo Build script path: ${{ inputs.build_script_path }}
echo Environment setup script path: ${{ inputs.env_setup_script_path }}
echo AWS S3 bucket name: ${{ inputs.s3_bucket_name }}
echo Package test command: ${{ inputs.package_test_command }}
echo Test run: ${{ inputs.test_run }}
echo Nightly release: ${{ inputs.nightly_release }}
- name: Run tox
run: tox
bump-version-generate-changelog:
name: Bump package version, Generate changelog
build:
name: build packages
uses: dbt-labs/dbt-release/.github/workflows/release-prep.yml@main
with:
sha: ${{ inputs.sha }}
version_number: ${{ inputs.version_number }}
target_branch: ${{ inputs.target_branch }}
env_setup_script_path: ${{ inputs.env_setup_script_path }}
test_run: ${{ inputs.test_run }}
nightly_release: ${{ inputs.nightly_release }}
secrets:
FISHTOWN_BOT_PAT: ${{ secrets.FISHTOWN_BOT_PAT }}
log-outputs-bump-version-generate-changelog:
name: "[Log output] Bump package version, Generate changelog"
if: ${{ !failure() && !cancelled() }}
needs: [bump-version-generate-changelog]
runs-on: ubuntu-latest
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.inputs.sha }}
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
- name: Print variables
run: |
pip install --user --upgrade pip
pip install --upgrade setuptools wheel twine check-wheel-contents
pip --version
echo Final SHA : ${{ needs.bump-version-generate-changelog.outputs.final_sha }}
echo Changelog path: ${{ needs.bump-version-generate-changelog.outputs.changelog_path }}
- name: Build distributions
run: ./scripts/build-dist.sh
build-test-package:
name: Build, Test, Package
if: ${{ !failure() && !cancelled() }}
needs: [bump-version-generate-changelog]
- name: Show distributions
run: ls -lh dist/
uses: dbt-labs/dbt-release/.github/workflows/build.yml@main
- name: Check distribution descriptions
run: |
twine check dist/*
with:
sha: ${{ needs.bump-version-generate-changelog.outputs.final_sha }}
version_number: ${{ inputs.version_number }}
changelog_path: ${{ needs.bump-version-generate-changelog.outputs.changelog_path }}
build_script_path: ${{ inputs.build_script_path }}
s3_bucket_name: ${{ inputs.s3_bucket_name }}
package_test_command: ${{ inputs.package_test_command }}
test_run: ${{ inputs.test_run }}
nightly_release: ${{ inputs.nightly_release }}
- name: Check wheel contents
run: |
check-wheel-contents dist/*.whl --ignore W007,W008
- uses: actions/upload-artifact@v2
with:
name: dist
path: |
dist/
!dist/dbt-${{github.event.inputs.version_number}}.tar.gz
test-build:
name: verify packages
needs: [build, unit]
runs-on: ubuntu-latest
steps:
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install --upgrade wheel
pip --version
- uses: actions/download-artifact@v2
with:
name: dist
path: dist/
- name: Show distributions
run: ls -lh dist/
- name: Install wheel distributions
run: |
find ./dist/*.whl -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check wheel distributions
run: |
dbt --version
- name: Install source distributions
run: |
find ./dist/*.gz -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/
- name: Check source distributions
run: |
dbt --version
secrets:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
github-release:
name: GitHub Release
if: ${{ !failure() && !cancelled() }}
needs: test-build
needs: [bump-version-generate-changelog, build-test-package]
runs-on: ubuntu-latest
uses: dbt-labs/dbt-release/.github/workflows/github-release.yml@main
steps:
- uses: actions/download-artifact@v2
with:
name: dist
path: '.'
# Need to set an output variable because env variables can't be taken as input
# This is needed for the next step with releasing to GitHub
- name: Find release type
id: release_type
env:
IS_PRERELEASE: ${{ contains(github.event.inputs.version_number, 'rc') || contains(github.event.inputs.version_number, 'b') }}
run: |
echo "isPrerelease=$IS_PRERELEASE" >> $GITHUB_OUTPUT
- name: Creating GitHub Release
uses: softprops/action-gh-release@v1
with:
name: dbt-core v${{github.event.inputs.version_number}}
tag_name: v${{github.event.inputs.version_number}}
prerelease: ${{ steps.release_type.outputs.isPrerelease }}
target_commitish: ${{github.event.inputs.sha}}
body: |
[Release notes](https://github.com/dbt-labs/dbt-core/blob/main/CHANGELOG.md)
files: |
dbt_postgres-${{github.event.inputs.version_number}}-py3-none-any.whl
dbt_core-${{github.event.inputs.version_number}}-py3-none-any.whl
dbt-postgres-${{github.event.inputs.version_number}}.tar.gz
dbt-core-${{github.event.inputs.version_number}}.tar.gz
with:
sha: ${{ needs.bump-version-generate-changelog.outputs.final_sha }}
version_number: ${{ inputs.version_number }}
changelog_path: ${{ needs.bump-version-generate-changelog.outputs.changelog_path }}
test_run: ${{ inputs.test_run }}
pypi-release:
name: Pypi release
name: PyPI Release
runs-on: ubuntu-latest
needs: [github-release]
needs: github-release
uses: dbt-labs/dbt-release/.github/workflows/pypi-release.yml@main
environment: PypiProd
steps:
- uses: actions/download-artifact@v2
with:
name: dist
path: 'dist'
with:
version_number: ${{ inputs.version_number }}
test_run: ${{ inputs.test_run }}
- name: Publish distribution to PyPI
uses: pypa/gh-action-pypi-publish@v1.4.2
with:
password: ${{ secrets.PYPI_API_TOKEN }}
secrets:
PYPI_API_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
TEST_PYPI_API_TOKEN: ${{ secrets.TEST_PYPI_API_TOKEN }}
slack-notification:
name: Slack Notification
if: ${{ failure() && (!inputs.test_run || inputs.nightly_release) }}
needs:
[
bump-version-generate-changelog,
build-test-package,
github-release,
pypi-release,
]
uses: dbt-labs/dbt-release/.github/workflows/slack-post-notification.yml@main
with:
status: "failure"
secrets:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_DEV_CORE_ALERTS }}

.gitignore (vendored, 1 line changed)
View File

@@ -51,6 +51,7 @@ coverage.xml
*,cover
.hypothesis/
test.env
makefile.test.env
*.pytest_cache/

View File

@@ -3,7 +3,7 @@
# See `/docker` for a generic and production-ready docker file
##
FROM ubuntu:22.04
FROM ubuntu:23.04
ENV DEBIAN_FRONTEND noninteractive

View File

@@ -6,18 +6,26 @@ ifeq ($(USE_DOCKER),true)
DOCKER_CMD := docker-compose run --rm test
endif
LOGS_DIR := ./logs
#
# To override CI_FLAGS, create a file at this repo's root dir named `makefile.test.env`. Fill it
# with any ENV_VAR overrides required by your test environment, e.g.
# DBT_TEST_USER_1=user
# LOG_DIR="dir with a space in it"
#
# Warning: restrict each line to one variable only.
#
ifeq (./makefile.test.env,$(wildcard ./makefile.test.env))
include ./makefile.test.env
endif
# Optional flag to invoke tests using our CI env.
# But we always want these active for structured
# log testing.
CI_FLAGS =\
DBT_TEST_USER_1=dbt_test_user_1\
DBT_TEST_USER_2=dbt_test_user_2\
DBT_TEST_USER_3=dbt_test_user_3\
RUSTFLAGS="-D warnings"\
LOG_DIR=./logs\
DBT_LOG_FORMAT=json
DBT_TEST_USER_1=$(if $(DBT_TEST_USER_1),$(DBT_TEST_USER_1),dbt_test_user_1)\
DBT_TEST_USER_2=$(if $(DBT_TEST_USER_2),$(DBT_TEST_USER_2),dbt_test_user_2)\
DBT_TEST_USER_3=$(if $(DBT_TEST_USER_3),$(DBT_TEST_USER_3),dbt_test_user_3)\
RUSTFLAGS=$(if $(RUSTFLAGS),$(RUSTFLAGS),"-D warnings")\
LOG_DIR=$(if $(LOG_DIR),$(LOG_DIR),./logs)\
DBT_LOG_FORMAT=$(if $(DBT_LOG_FORMAT),$(DBT_LOG_FORMAT),json)
.PHONY: dev_req
dev_req: ## Installs dbt-* packages in develop mode along with only development dependencies.
@@ -66,7 +74,7 @@ test: .env ## Runs unit tests with py and code checks against staged changes.
.PHONY: integration
integration: .env ## Runs postgres integration tests with py-integration
@\
$(if $(USE_CI_FLAGS), $(CI_FLAGS)) $(DOCKER_CMD) tox -e py-integration -- -nauto
$(CI_FLAGS) $(DOCKER_CMD) tox -e py-integration -- -nauto
.PHONY: integration-fail-fast
integration-fail-fast: .env ## Runs postgres integration tests with py-integration in "fail fast" mode.
@@ -76,9 +84,9 @@ integration-fail-fast: .env ## Runs postgres integration tests with py-integrati
.PHONY: interop
interop: clean
@\
mkdir $(LOGS_DIR) && \
mkdir $(LOG_DIR) && \
$(CI_FLAGS) $(DOCKER_CMD) tox -e py-integration -- -nauto && \
LOG_DIR=$(LOGS_DIR) cargo run --manifest-path test/interop/log_parsing/Cargo.toml
LOG_DIR=$(LOG_DIR) cargo run --manifest-path test/interop/log_parsing/Cargo.toml
.PHONY: setup-db
setup-db: ## Setup Postgres database with docker-compose for system testing.
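Each `$(if $(VAR),$(VAR),default)` clause keeps a value supplied by `makefile.test.env` (or the environment) and falls back to the old hardcoded CI value otherwise; the same idea in Python is `os.environ.get` with a default:

import os

# Same fallback behavior as the Makefile's $(if $(VAR),$(VAR),default) clauses.
CI_DEFAULTS = {
    "DBT_TEST_USER_1": "dbt_test_user_1",
    "DBT_TEST_USER_2": "dbt_test_user_2",
    "DBT_TEST_USER_3": "dbt_test_user_3",
    "RUSTFLAGS": "-D warnings",
    "LOG_DIR": "./logs",
    "DBT_LOG_FORMAT": "json",
}
ci_flags = {name: os.environ.get(name, default) for name, default in CI_DEFAULTS.items()}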

View File

@@ -21,7 +21,7 @@ These select statements, or "models", form a dbt project. Models frequently buil
## Getting started
- [Install dbt](https://docs.getdbt.com/docs/installation)
- [Install dbt](https://docs.getdbt.com/docs/get-started/installation)
- Read the [introduction](https://docs.getdbt.com/docs/introduction/) and [viewpoint](https://docs.getdbt.com/docs/about/viewpoint/)
## Join the dbt Community

View File

@@ -17,7 +17,6 @@ from typing import (
Iterator,
Set,
)
import agate
import pytz
@@ -54,7 +53,7 @@ from dbt.events.types import (
CodeExecutionStatus,
CatalogGenerationError,
)
from dbt.utils import filter_null_values, executor, cast_to_str
from dbt.utils import filter_null_values, executor, cast_to_str, AttrDict
from dbt.adapters.base.connections import Connection, AdapterResponse
from dbt.adapters.base.meta import AdapterMeta, available
@@ -943,7 +942,7 @@ class BaseAdapter(metaclass=AdapterMeta):
context_override: Optional[Dict[str, Any]] = None,
kwargs: Dict[str, Any] = None,
text_only_columns: Optional[Iterable[str]] = None,
) -> agate.Table:
) -> AttrDict:
"""Look macro_name up in the manifest and execute its results.
:param macro_name: The name of the macro to execute.
@@ -1028,7 +1027,7 @@ class BaseAdapter(metaclass=AdapterMeta):
manifest=manifest,
)
results = self._catalog_filter_table(table, manifest)
results = self._catalog_filter_table(table, manifest) # type: ignore[arg-type]
return results
def get_catalog(self, manifest: Manifest) -> Tuple[agate.Table, List[Exception]]:
@@ -1060,7 +1059,7 @@ class BaseAdapter(metaclass=AdapterMeta):
loaded_at_field: str,
filter: Optional[str],
manifest: Optional[Manifest] = None,
) -> Dict[str, Any]:
) -> Tuple[AdapterResponse, Dict[str, Any]]:
"""Calculate the freshness of sources in dbt, and return it"""
kwargs: Dict[str, Any] = {
"source": source,
@@ -1069,7 +1068,8 @@ class BaseAdapter(metaclass=AdapterMeta):
}
# run the macro
table = self.execute_macro(FRESHNESS_MACRO_NAME, kwargs=kwargs, manifest=manifest)
result = self.execute_macro(FRESHNESS_MACRO_NAME, kwargs=kwargs, manifest=manifest)
adapter_response, table = result.response, result.table # type: ignore[attr-defined]
# now we have a 1-row table of the maximum `loaded_at_field` value and
# the current time according to the db.
if len(table) != 1 or len(table[0]) != 2:
@@ -1083,11 +1083,12 @@ class BaseAdapter(metaclass=AdapterMeta):
snapshotted_at = _utc(table[0][1], source, loaded_at_field)
age = (snapshotted_at - max_loaded_at).total_seconds()
return {
freshness = {
"max_loaded_at": max_loaded_at,
"snapshotted_at": snapshotted_at,
"age": age,
}
return adapter_response, freshness
def pre_model_hook(self, config: Mapping[str, Any]) -> Any:
"""A hook for running some operation before the model materialization

View File

@@ -1,11 +1,12 @@
import os
from collections import defaultdict
from typing import List, Dict, Any, Tuple, Optional
import argparse
import networkx as nx # type: ignore
import os
import pickle
import sqlparse
from collections import defaultdict
from typing import List, Dict, Any, Tuple, Optional
from dbt import flags
from dbt.adapters.factory import get_adapter
from dbt.clients import jinja
@@ -32,6 +33,7 @@ from dbt.events.contextvars import get_node_info
from dbt.node_types import NodeType, ModelLanguage
from dbt.events.format import pluralize
import dbt.tracking
import dbt.task.list as list_task
graph_file_name = "graph.gpickle"
@@ -351,13 +353,6 @@ class Compiler:
)
if node.language == ModelLanguage.python:
# TODO could we also 'minify' this code at all? just aesthetic, not functional
# quoating seems like something very specific to sql so far
# for all python implementations we are seeing there's no quating.
# TODO try to find better way to do this, given that
original_quoting = self.config.quoting
self.config.quoting = {key: False for key in original_quoting.keys()}
context = self._create_node_context(node, manifest, extra_context)
postfix = jinja.get_rendered(
@@ -367,8 +362,6 @@ class Compiler:
)
# we should NOT jinja render the python model's 'raw code'
node.compiled_code = f"{node.raw_code}\n\n{postfix}"
# restore quoting settings in the end since context is lazy evaluated
self.config.quoting = original_quoting
else:
context = self._create_node_context(node, manifest, extra_context)
@@ -482,7 +475,13 @@ class Compiler:
if write:
self.write_graph_file(linker, manifest)
print_compile_stats(stats)
# Do not print these for ListTasks
if not (
self.config.args.__class__ == argparse.Namespace
and self.config.args.cls == list_task.ListTask
):
print_compile_stats(stats)
return Graph(linker.graph)

View File

@@ -75,6 +75,11 @@ Validator Error:
{error}
"""
MISSING_DBT_PROJECT_ERROR = """\
No dbt_project.yml found at expected path {path}
Verify that each entry within packages.yml (and their transitive dependencies) contains a file named dbt_project.yml
"""
@runtime_checkable
class IsFQNResource(Protocol):
@@ -163,9 +168,7 @@ def _raw_project_from(project_root: str) -> Dict[str, Any]:
# get the project.yml contents
if not path_exists(project_yaml_filepath):
raise DbtProjectError(
"no dbt_project.yml found at expected path {}".format(project_yaml_filepath)
)
raise DbtProjectError(MISSING_DBT_PROJECT_ERROR.format(path=project_yaml_filepath))
project_dict = _load_yaml(project_yaml_filepath)

View File

@@ -37,6 +37,7 @@ from dbt.contracts.graph.unparsed import (
from dbt.contracts.util import Replaceable, AdditionalPropertiesMixin
from dbt.events.proto_types import NodeInfo
from dbt.events.functions import warn_or_error
from dbt.exceptions import ParsingError
from dbt.events.types import (
SeedIncreased,
SeedExceedsLimitSamePath,
@@ -482,6 +483,7 @@ class SeedNode(ParsedNode): # No SQLDefaults!
# seeds need the root_path because the contents are not loaded initially
# and we need the root_path to load the seed later
root_path: Optional[str] = None
depends_on: MacroDependsOn = field(default_factory=MacroDependsOn)
def same_seeds(self, other: "SeedNode") -> bool:
# for seeds, we check the hashes. If the hashes are different types,
@@ -523,6 +525,39 @@ class SeedNode(ParsedNode): # No SQLDefaults!
"""Seeds are never empty"""
return False
def _disallow_implicit_dependencies(self):
"""Disallow seeds to take implicit upstream dependencies via pre/post hooks"""
# Seeds are root nodes in the DAG. They cannot depend on other nodes.
# However, it's possible to define pre- and post-hooks on seeds, and for those
# hooks to include {{ ref(...) }}. This worked in previous versions, but it
# was never officially documented or supported behavior. Let's raise an explicit error,
# which will surface during parsing if the user has written code such that we attempt
# to capture & record a ref/source/metric call on the SeedNode.
# For more details: https://github.com/dbt-labs/dbt-core/issues/6806
hooks = [f'- pre_hook: "{hook.sql}"' for hook in self.config.pre_hook] + [
f'- post_hook: "{hook.sql}"' for hook in self.config.post_hook
]
hook_list = "\n".join(hooks)
message = f"""
Seeds cannot depend on other nodes. dbt detected a seed with a pre- or post-hook
that calls 'ref', 'source', or 'metric', either directly or indirectly via other macros.
Error raised for '{self.unique_id}', which has these hooks defined: \n{hook_list}
"""
raise ParsingError(message)
@property
def refs(self):
self._disallow_implicit_dependencies()
@property
def sources(self):
self._disallow_implicit_dependencies()
@property
def metrics(self):
self._disallow_implicit_dependencies()
def same_body(self, other) -> bool:
return self.same_seeds(other)
@@ -531,8 +566,8 @@ class SeedNode(ParsedNode): # No SQLDefaults!
return []
@property
def depends_on_macros(self):
return []
def depends_on_macros(self) -> List[str]:
return self.depends_on.macros
@property
def extra_ctes(self):
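The `refs`/`sources`/`metrics` properties all funnel into `_disallow_implicit_dependencies`, so any attempt to record a ref/source/metric call against a seed fails loudly at parse time. A stripped-down sketch of the pattern (the classes here are toy stand-ins for dbt's SeedNode and ParsingError):

class ParsingError(Exception):  # stand-in for dbt.exceptions.ParsingError
    pass

class SeedNodeSketch:
    unique_id = "seed.my_project.my_seed"
    pre_hook_sql = ['select 1 from {{ ref("some_model") }}']  # hypothetical hook

    def _disallow_implicit_dependencies(self):
        # Seeds are root nodes; a hook calling ref/source/metric would give
        # them an upstream dependency, so raise with the offending hooks listed.
        hook_list = "\n".join('- pre_hook: "{}"'.format(sql) for sql in self.pre_hook_sql)
        raise ParsingError(
            "Seeds cannot depend on other nodes. Error raised for "
            "'{}', which has these hooks defined:\n{}".format(self.unique_id, hook_list)
        )

    @property
    def refs(self):
        self._disallow_implicit_dependencies()

try:
    SeedNodeSketch().refs
except ParsingError as exc:
    print(exc)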

View File

@@ -13,7 +13,7 @@ from dbt.events.types import TimingInfoCollected
from dbt.events.proto_types import RunResultMsg, TimingInfoMsg
from dbt.events.contextvars import get_node_info
from dbt.logger import TimingProcessor
from dbt.utils import lowercase, cast_to_str, cast_to_int
from dbt.utils import lowercase, cast_to_str, cast_to_int, cast_dict_to_dict_of_strings
from dbt.dataclass_schema import dbtClassMixin, StrEnum
import agate
@@ -130,7 +130,6 @@ class BaseResult(dbtClassMixin):
return data
def to_msg(self):
# TODO: add more fields
msg = RunResultMsg()
msg.status = str(self.status)
msg.message = cast_to_str(self.message)
@@ -138,7 +137,7 @@ class BaseResult(dbtClassMixin):
msg.execution_time = self.execution_time
msg.num_failures = cast_to_int(self.failures)
msg.timing_info = [ti.to_msg() for ti in self.timing]
# adapter_response
msg.adapter_response = cast_dict_to_dict_of_strings(self.adapter_response)
return msg
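`cast_dict_to_dict_of_strings` is imported but not shown in this diff; presumably it coerces the heterogeneous `adapter_response` dict into the string-to-string map the proto field expects. A guess at that behavior, for illustration only:

def cast_dict_to_dict_of_strings(dct):
    # Hypothetical sketch of the helper: proto map<string, string> fields
    # can only hold strings, so stringify every key and value.
    return {str(key): str(value) for key, value in dct.items()}

print(cast_dict_to_dict_of_strings({"rows_affected": 3, "code": "SELECT"}))
# {'rows_affected': '3', 'code': 'SELECT'}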

View File

@@ -250,7 +250,6 @@ def upgrade_seed_content(node_content):
"refs",
"sources",
"metrics",
"depends_on",
"compiled_path",
"compiled",
"compiled_code",
@@ -260,6 +259,8 @@ def upgrade_seed_content(node_content):
):
if attr_name in node_content:
del node_content[attr_name]
# In v1.4, we switched SeedNode.depends_on from DependsOn to MacroDependsOn
node_content.get("depends_on", {}).pop("nodes", None)
def upgrade_manifest_json(manifest: dict) -> dict:
@@ -283,7 +284,7 @@ def upgrade_manifest_json(manifest: dict) -> dict:
if "root_path" in exposure_content:
del exposure_content["root_path"]
for source_content in manifest.get("sources", {}).values():
if "root_path" in exposure_content:
if "root_path" in source_content:
del source_content["root_path"]
for macro_content in manifest.get("macros", {}).values():
if "root_path" in macro_content:

Binary file not shown.

Binary file not shown.

View File

@@ -419,7 +419,9 @@ table.footnote td {
}
dl {
margin: 0;
margin-left: 0;
margin-right: 0;
margin-top: 0;
padding: 0;
}

View File

@@ -4,7 +4,7 @@
*
* Sphinx stylesheet -- basic theme.
*
* :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
* :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/

View File

@@ -4,7 +4,7 @@
*
* Base JavaScript utilities for all Sphinx HTML documentation.
*
* :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
* :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/

View File

@@ -5,7 +5,7 @@
* This script contains the language-specific data used by searchtools.js,
* namely the list of stopwords, stemmer, scorer and splitter.
*
* :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
* :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/

View File

@@ -4,7 +4,7 @@
*
* Sphinx JavaScript utilities for the full-text search.
*
* :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
* :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/

View File

@@ -87,8 +87,8 @@
&copy;2022, dbt Labs.
|
Powered by <a href="http://sphinx-doc.org/">Sphinx 6.0.0</a>
&amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.12</a>
Powered by <a href="http://sphinx-doc.org/">Sphinx 6.1.3</a>
&amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.13</a>
</div>

View File

@@ -837,8 +837,8 @@
&copy;2022, dbt Labs.
|
Powered by <a href="http://sphinx-doc.org/">Sphinx 6.0.0</a>
&amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.12</a>
Powered by <a href="http://sphinx-doc.org/">Sphinx 6.1.3</a>
&amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.13</a>
|
<a href="_sources/index.rst.txt"
@@ -849,4 +849,4 @@
</body>
</html>
</html>

View File

@@ -106,8 +106,8 @@
&copy;2022, dbt Labs.
|
Powered by <a href="http://sphinx-doc.org/">Sphinx 6.0.0</a>
&amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.12</a>
Powered by <a href="http://sphinx-doc.org/">Sphinx 6.1.3</a>
&amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 0.7.13</a>
</div>

File diff suppressed because one or more lines are too long

View File

@@ -88,41 +88,40 @@ class _Logger:
self.level: EventLevel = config.level
self.event_manager: EventManager = event_manager
self._python_logger: Optional[logging.Logger] = config.logger
self._stream: Optional[TextIO] = config.output_stream
if config.output_stream is not None:
stream_handler = logging.StreamHandler(config.output_stream)
self._python_logger = self._get_python_log_for_handler(stream_handler)
if config.output_file_name:
log = logging.getLogger(config.name)
log.setLevel(_log_level_map[config.level])
handler = RotatingFileHandler(
file_handler = RotatingFileHandler(
filename=str(config.output_file_name),
encoding="utf8",
maxBytes=10 * 1024 * 1024, # 10 mb
backupCount=5,
)
self._python_logger = self._get_python_log_for_handler(file_handler)
handler.setFormatter(logging.Formatter(fmt="%(message)s"))
log.handlers.clear()
log.addHandler(handler)
self._python_logger = log
def _get_python_log_for_handler(self, handler: logging.Handler):
log = logging.getLogger(self.name)
log.setLevel(_log_level_map[self.level])
handler.setFormatter(logging.Formatter(fmt="%(message)s"))
log.handlers.clear()
log.addHandler(handler)
return log
def create_line(self, msg: EventMsg) -> str:
raise NotImplementedError()
def write_line(self, msg: EventMsg):
line = self.create_line(msg)
python_level = _log_level_map[EventLevel(msg.info.level)]
if self._python_logger is not None:
send_to_logger(self._python_logger, msg.info.level, line)
elif self._stream is not None and _log_level_map[self.level] <= python_level:
self._stream.write(line + "\n")
def flush(self):
if self._python_logger is not None:
for handler in self._python_logger.handlers:
handler.flush()
elif self._stream is not None:
self._stream.flush()
class _TextLogger(_Logger):
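Routing console output through a `logging.StreamHandler` (rather than writing to the stream directly) is what makes the new `flush()` work: it can simply flush every handler on the underlying Python logger. A tiny self-contained sketch of that pattern, not dbt's actual classes:

import logging
import sys

log = logging.getLogger("sketch")
log.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(fmt="%(message)s"))
log.handlers.clear()
log.addHandler(handler)

log.info("hello")        # write_line now goes through the logging subsystem...
for h in log.handlers:   # ...so flush() just flushes each handler
    h.flush()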

View File

@@ -3,7 +3,7 @@ from dbt.constants import METADATA_ENV_PREFIX
from dbt.events.base_types import BaseEvent, Cache, EventLevel, NoFile, NoStdOut, EventMsg
from dbt.events.eventmgr import EventManager, LoggerConfig, LineFormat, NoFilter
from dbt.events.helpers import env_secrets, scrub_secrets
from dbt.events.types import EmptyLine
from dbt.events.types import Formatting
import dbt.flags as flags
from dbt.logger import GLOBAL_LOGGER, make_log_dir_if_missing
from functools import partial
@@ -18,6 +18,14 @@ LOG_VERSION = 3
metadata_vars: Optional[Dict[str, str]] = None
# The "fallback" logger is used as a stop-gap so that console logging works before the logging
# configuration is fully loaded.
def setup_fallback_logger(use_legacy: bool, level: EventLevel) -> None:
cleanup_event_logger()
config = _get_logbook_log_config(level) if use_legacy else _get_stdout_config(level)
EVENT_MANAGER.add_logger(config)
def setup_event_logger(log_path: str, level_override: Optional[EventLevel] = None):
cleanup_event_logger()
make_log_dir_if_missing(log_path)
@@ -65,7 +73,7 @@ def _stdout_filter(
and (not isinstance(msg.data, Cache) or log_cache_events)
and (EventLevel(msg.info.level) != EventLevel.DEBUG or debug_mode)
and (EventLevel(msg.info.level) == EventLevel.ERROR or not quiet_mode)
and not (flags.LOG_FORMAT == "json" and type(msg.data) == EmptyLine)
and not (flags.LOG_FORMAT == "json" and type(msg.data) == Formatting)
)
@@ -85,7 +93,7 @@ def _logfile_filter(log_cache_events: bool, msg: EventMsg) -> bool:
return (
not isinstance(msg.data, NoFile)
and not (isinstance(msg.data, Cache) and not log_cache_events)
and not (flags.LOG_FORMAT == "json" and type(msg.data) == EmptyLine)
and not (flags.LOG_FORMAT == "json" and type(msg.data) == Formatting)
)
@@ -113,9 +121,7 @@ def cleanup_event_logger():
# currently fire before logs can be configured by setup_event_logger(), we
# create a default configuration with default settings and no file output.
EVENT_MANAGER: EventManager = EventManager()
EVENT_MANAGER.add_logger(
_get_logbook_log_config() if flags.ENABLE_LEGACY_LOGGER else _get_stdout_config()
)
setup_fallback_logger(bool(flags.ENABLE_LEGACY_LOGGER), EventLevel.INFO)
# This global, and the following two functions for capturing stdout logs are

View File

@@ -1055,6 +1055,23 @@ class UnableToPartialParseMsg(betterproto.Message):
data: "UnableToPartialParse" = betterproto.message_field(2)
@dataclass
class StateCheckVarsHash(betterproto.Message):
"""I025"""
checksum: str = betterproto.string_field(1)
vars: str = betterproto.string_field(2)
profile: str = betterproto.string_field(3)
target: str = betterproto.string_field(4)
version: str = betterproto.string_field(5)
@dataclass
class StateCheckVarsHashMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "StateCheckVarsHash" = betterproto.message_field(2)
@dataclass
class PartialParsingNotEnabled(betterproto.Message):
"""I028"""
@@ -1083,123 +1100,6 @@ class ParsedFileLoadFailedMsg(betterproto.Message):
data: "ParsedFileLoadFailed" = betterproto.message_field(2)
@dataclass
class StaticParserCausedJinjaRendering(betterproto.Message):
"""I031"""
path: str = betterproto.string_field(1)
@dataclass
class StaticParserCausedJinjaRenderingMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "StaticParserCausedJinjaRendering" = betterproto.message_field(2)
@dataclass
class UsingExperimentalParser(betterproto.Message):
"""I032"""
path: str = betterproto.string_field(1)
@dataclass
class UsingExperimentalParserMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "UsingExperimentalParser" = betterproto.message_field(2)
@dataclass
class SampleFullJinjaRendering(betterproto.Message):
"""I033"""
path: str = betterproto.string_field(1)
@dataclass
class SampleFullJinjaRenderingMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "SampleFullJinjaRendering" = betterproto.message_field(2)
@dataclass
class StaticParserFallbackJinjaRendering(betterproto.Message):
"""I034"""
path: str = betterproto.string_field(1)
@dataclass
class StaticParserFallbackJinjaRenderingMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "StaticParserFallbackJinjaRendering" = betterproto.message_field(2)
@dataclass
class StaticParsingMacroOverrideDetected(betterproto.Message):
"""I035"""
path: str = betterproto.string_field(1)
@dataclass
class StaticParsingMacroOverrideDetectedMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "StaticParsingMacroOverrideDetected" = betterproto.message_field(2)
@dataclass
class StaticParserSuccess(betterproto.Message):
"""I036"""
path: str = betterproto.string_field(1)
@dataclass
class StaticParserSuccessMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "StaticParserSuccess" = betterproto.message_field(2)
@dataclass
class StaticParserFailure(betterproto.Message):
"""I037"""
path: str = betterproto.string_field(1)
@dataclass
class StaticParserFailureMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "StaticParserFailure" = betterproto.message_field(2)
@dataclass
class ExperimentalParserSuccess(betterproto.Message):
"""I038"""
path: str = betterproto.string_field(1)
@dataclass
class ExperimentalParserSuccessMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "ExperimentalParserSuccess" = betterproto.message_field(2)
@dataclass
class ExperimentalParserFailure(betterproto.Message):
"""I039"""
path: str = betterproto.string_field(1)
@dataclass
class ExperimentalParserFailureMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "ExperimentalParserFailure" = betterproto.message_field(2)
@dataclass
class PartialParsingEnabled(betterproto.Message):
"""I040"""
@@ -1408,6 +1308,34 @@ class JinjaLogWarningMsg(betterproto.Message):
data: "JinjaLogWarning" = betterproto.message_field(2)
@dataclass
class JinjaLogInfo(betterproto.Message):
"""I062"""
node_info: "NodeInfo" = betterproto.message_field(1)
msg: str = betterproto.string_field(2)
@dataclass
class JinjaLogInfoMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "JinjaLogInfo" = betterproto.message_field(2)
@dataclass
class JinjaLogDebug(betterproto.Message):
"""I063"""
node_info: "NodeInfo" = betterproto.message_field(1)
msg: str = betterproto.string_field(2)
@dataclass
class JinjaLogDebugMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "JinjaLogDebug" = betterproto.message_field(2)
@dataclass
class GitSparseCheckoutSubdirectory(betterproto.Message):
"""M001"""
@@ -1542,34 +1470,6 @@ class SelectorReportInvalidSelectorMsg(betterproto.Message):
data: "SelectorReportInvalidSelector" = betterproto.message_field(2)
@dataclass
class JinjaLogInfo(betterproto.Message):
"""M011"""
node_info: "NodeInfo" = betterproto.message_field(1)
msg: str = betterproto.string_field(2)
@dataclass
class JinjaLogInfoMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "JinjaLogInfo" = betterproto.message_field(2)
@dataclass
class JinjaLogDebug(betterproto.Message):
"""M012"""
node_info: "NodeInfo" = betterproto.message_field(1)
msg: str = betterproto.string_field(2)
@dataclass
class JinjaLogDebugMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "JinjaLogDebug" = betterproto.message_field(2)
@dataclass
class DepsNoPackagesFound(betterproto.Message):
"""M013"""
@@ -1859,19 +1759,6 @@ class SeedHeaderMsg(betterproto.Message):
data: "SeedHeader" = betterproto.message_field(2)
@dataclass
class SeedHeaderSeparator(betterproto.Message):
"""Q005"""
len_header: int = betterproto.int32_field(1)
@dataclass
class SeedHeaderSeparatorMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "SeedHeaderSeparator" = betterproto.message_field(2)
@dataclass
class SQLRunnerException(betterproto.Message):
"""Q006"""
@@ -2511,16 +2398,16 @@ class OpenCommandMsg(betterproto.Message):
@dataclass
class EmptyLine(betterproto.Message):
class Formatting(betterproto.Message):
"""Z017"""
pass
msg: str = betterproto.string_field(1)
@dataclass
class EmptyLineMsg(betterproto.Message):
class FormattingMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "EmptyLine" = betterproto.message_field(2)
data: "Formatting" = betterproto.message_field(2)
@dataclass
@@ -2847,6 +2734,58 @@ class RunResultWarningMessageMsg(betterproto.Message):
data: "RunResultWarningMessage" = betterproto.message_field(2)
@dataclass
class DebugCmdOut(betterproto.Message):
"""Z047"""
msg: str = betterproto.string_field(1)
@dataclass
class DebugCmdOutMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "DebugCmdOut" = betterproto.message_field(2)
@dataclass
class DebugCmdResult(betterproto.Message):
"""Z048"""
msg: str = betterproto.string_field(1)
@dataclass
class DebugCmdResultMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "DebugCmdResult" = betterproto.message_field(2)
@dataclass
class ListCmdOut(betterproto.Message):
"""Z049"""
msg: str = betterproto.string_field(1)
@dataclass
class ListCmdOutMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "ListCmdOut" = betterproto.message_field(2)
@dataclass
class Note(betterproto.Message):
"""Z050"""
msg: str = betterproto.string_field(1)
@dataclass
class NoteMsg(betterproto.Message):
info: "EventInfo" = betterproto.message_field(1)
data: "Note" = betterproto.message_field(2)
@dataclass
class IntegrationTestInfo(betterproto.Message):
"""T001"""

View File

@@ -839,7 +839,21 @@ message UnableToPartialParseMsg {
UnableToPartialParse data = 2;
}
// Skipped I025, I026, I027
// I025
message StateCheckVarsHash {
string checksum = 1;
string vars = 2;
string profile = 3;
string target = 4;
string version = 5;
}
message StateCheckVarsHashMsg {
EventInfo info = 1;
StateCheckVarsHash data = 2;
}
// Skipped I026, I027
// I028
@@ -863,98 +877,7 @@ message ParsedFileLoadFailedMsg {
ParsedFileLoadFailed data = 2;
}
// Skipping I030
// I031
message StaticParserCausedJinjaRendering {
string path = 1;
}
message StaticParserCausedJinjaRenderingMsg {
EventInfo info = 1;
StaticParserCausedJinjaRendering data = 2;
}
// I032
message UsingExperimentalParser {
string path = 1;
}
message UsingExperimentalParserMsg {
EventInfo info = 1;
UsingExperimentalParser data = 2;
}
// I033
message SampleFullJinjaRendering {
string path = 1;
}
message SampleFullJinjaRenderingMsg {
EventInfo info = 1;
SampleFullJinjaRendering data = 2;
}
// I034
message StaticParserFallbackJinjaRendering {
string path = 1;
}
message StaticParserFallbackJinjaRenderingMsg {
EventInfo info = 1;
StaticParserFallbackJinjaRendering data = 2;
}
// I035
message StaticParsingMacroOverrideDetected {
string path = 1;
}
message StaticParsingMacroOverrideDetectedMsg {
EventInfo info = 1;
StaticParsingMacroOverrideDetected data = 2;
}
// I036
message StaticParserSuccess {
string path = 1;
}
message StaticParserSuccessMsg {
EventInfo info = 1;
StaticParserSuccess data = 2;
}
// I037
message StaticParserFailure {
string path = 1;
}
message StaticParserFailureMsg {
EventInfo info = 1;
StaticParserFailure data = 2;
}
// I038
message ExperimentalParserSuccess {
string path = 1;
}
message ExperimentalParserSuccessMsg {
EventInfo info = 1;
ExperimentalParserSuccess data = 2;
}
// I039
message ExperimentalParserFailure {
string path = 1;
}
message ExperimentalParserFailureMsg {
EventInfo info = 1;
ExperimentalParserFailure data = 2;
}
// Skipping I030 - I039
// I040
message PartialParsingEnabled {
@@ -1124,6 +1047,28 @@ message JinjaLogWarningMsg {
JinjaLogWarning data = 2;
}
// I062
message JinjaLogInfo {
NodeInfo node_info = 1;
string msg = 2;
}
message JinjaLogInfoMsg {
EventInfo info = 1;
JinjaLogInfo data = 2;
}
// I063
message JinjaLogDebug {
NodeInfo node_info = 1;
string msg = 2;
}
message JinjaLogDebugMsg {
EventInfo info = 1;
JinjaLogDebug data = 2;
}
// M - Deps generation
// M001
@@ -1230,27 +1175,7 @@ message SelectorReportInvalidSelectorMsg {
SelectorReportInvalidSelector data = 2;
}
// M011
message JinjaLogInfo {
NodeInfo node_info = 1;
string msg = 2;
}
message JinjaLogInfoMsg {
EventInfo info = 1;
JinjaLogInfo data = 2;
}
// M012
message JinjaLogDebug {
NodeInfo node_info = 1;
string msg = 2;
}
message JinjaLogDebugMsg {
EventInfo info = 1;
JinjaLogDebug data = 2;
}
// Skipped M011 and M012
// M013
message DepsNoPackagesFound {
@@ -1473,15 +1398,7 @@ message SeedHeaderMsg {
SeedHeader data = 2;
}
// Q005
message SeedHeaderSeparator {
int32 len_header = 1;
}
message SeedHeaderSeparatorMsg {
EventInfo info = 1;
SeedHeaderSeparator data = 2;
}
// Skipped Q005
// Q006
message SQLRunnerException {
@@ -2004,12 +1921,13 @@ message OpenCommandMsg {
}
// Z017
message EmptyLine {
message Formatting {
string msg = 1;
}
message EmptyLineMsg {
message FormattingMsg {
EventInfo info = 1;
EmptyLine data = 2;
Formatting data = 2;
}
// Z018
@@ -2258,6 +2176,46 @@ message RunResultWarningMessageMsg {
RunResultWarningMessage data = 2;
}
// Z047
message DebugCmdOut {
string msg = 1;
}
message DebugCmdOutMsg {
EventInfo info = 1;
DebugCmdOut data = 2;
}
// Z048
message DebugCmdResult {
string msg = 1;
}
message DebugCmdResultMsg {
EventInfo info = 1;
DebugCmdResult data = 2;
}
// Z049
message ListCmdOut {
string msg = 1;
}
message ListCmdOutMsg {
EventInfo info = 1;
ListCmdOut data = 2;
}
// Z050
message Note {
string msg = 1;
}
message NoteMsg {
EventInfo info = 1;
Note data = 2;
}
// T - Integration tests
// T001

View File

@@ -843,6 +843,15 @@ class UnableToPartialParse(InfoLevel, pt.UnableToPartialParse):
return f"Unable to do partial parsing because {self.reason}"
@dataclass
class StateCheckVarsHash(DebugLevel, pt.StateCheckVarsHash):
def code(self):
return "I025"
def message(self) -> str:
return f"checksum: {self.checksum}, vars: {self.vars}, profile: {self.profile}, target: {self.target}, version: {self.version}"
# Skipped I026, I027
@@ -864,90 +873,7 @@ class ParsedFileLoadFailed(DebugLevel, pt.ParsedFileLoadFailed): # noqa
return f"Failed to load parsed file from disk at {self.path}: {self.exc}"
# Skipped I030
@dataclass
class StaticParserCausedJinjaRendering(DebugLevel, pt.StaticParserCausedJinjaRendering):
def code(self):
return "I031"
def message(self) -> str:
return f"1605: jinja rendering because of STATIC_PARSER flag. file: {self.path}"
# TODO: Experimental/static parser uses these for testing and some may be a good use case for
# the `TestLevel` logger once we implement it. Some will probably stay `DebugLevel`.
@dataclass
class UsingExperimentalParser(DebugLevel, pt.UsingExperimentalParser):
def code(self):
return "I032"
def message(self) -> str:
return f"1610: conducting experimental parser sample on {self.path}"
@dataclass
class SampleFullJinjaRendering(DebugLevel, pt.SampleFullJinjaRendering):
def code(self):
return "I033"
def message(self) -> str:
return f"1611: conducting full jinja rendering sample on {self.path}"
@dataclass
class StaticParserFallbackJinjaRendering(DebugLevel, pt.StaticParserFallbackJinjaRendering):
def code(self):
return "I034"
def message(self) -> str:
return f"1602: parser fallback to jinja rendering on {self.path}"
@dataclass
class StaticParsingMacroOverrideDetected(DebugLevel, pt.StaticParsingMacroOverrideDetected):
def code(self):
return "I035"
def message(self) -> str:
return f"1601: detected macro override of ref/source/config in the scope of {self.path}"
@dataclass
class StaticParserSuccess(DebugLevel, pt.StaticParserSuccess):
def code(self):
return "I036"
def message(self) -> str:
return f"1699: static parser successfully parsed {self.path}"
@dataclass
class StaticParserFailure(DebugLevel, pt.StaticParserFailure):
def code(self):
return "I037"
def message(self) -> str:
return f"1603: static parser failed on {self.path}"
@dataclass
class ExperimentalParserSuccess(DebugLevel, pt.ExperimentalParserSuccess):
def code(self):
return "I038"
def message(self) -> str:
return f"1698: experimental parser successfully parsed {self.path}"
@dataclass
class ExperimentalParserFailure(DebugLevel, pt.ExperimentalParserFailure):
def code(self):
return "I039"
def message(self) -> str:
return f"1604: experimental parser failed on {self.path}"
# Skipped I030-I039
@dataclass
@@ -1162,6 +1088,26 @@ class JinjaLogWarning(WarnLevel, pt.JinjaLogWarning):
return self.msg
@dataclass
class JinjaLogInfo(InfoLevel, EventStringFunctor, pt.JinjaLogInfo):
def code(self):
return "I062"
def message(self) -> str:
# This is for the log method used in macros so msg cannot be built here
return self.msg
@dataclass
class JinjaLogDebug(DebugLevel, EventStringFunctor, pt.JinjaLogDebug):
def code(self):
return "I063"
def message(self) -> str:
# This is for the log method used in macros so msg cannot be built here
return self.msg
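# A minimal sketch (an assumption, not part of this diff; assumes fire_event from
# dbt.events.functions) of how these two events back the Jinja `log()` context function:
# info=True surfaces on the console via JinjaLogInfo (I062), while the default routes to
# the debug log via JinjaLogDebug (I063).
#   {{ log("building snapshot", info=True) }}  -> JinjaLogInfo
#   {{ log("row count: 42") }}                 -> JinjaLogDebug
def jinja_log(msg: str, info: bool = False) -> None:
    fire_event(JinjaLogInfo(msg=msg) if info else JinjaLogDebug(msg=msg))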
# =======================================================
# M - Deps generation
# =======================================================
@@ -1173,7 +1119,7 @@ class GitSparseCheckoutSubdirectory(DebugLevel, pt.GitSparseCheckoutSubdirectory
return "M001"
def message(self) -> str:
return f" Subdirectory specified: {self.subdir}, using sparse checkout."
return f"Subdirectory specified: {self.subdir}, using sparse checkout."
@dataclass
@@ -1182,7 +1128,7 @@ class GitProgressCheckoutRevision(DebugLevel, pt.GitProgressCheckoutRevision):
return "M002"
def message(self) -> str:
return f" Checking out revision {self.revision}."
return f"Checking out revision {self.revision}."
@dataclass
@@ -1218,7 +1164,7 @@ class GitProgressUpdatedCheckoutRange(DebugLevel, pt.GitProgressUpdatedCheckoutR
return "M006"
def message(self) -> str:
return f" Updated checkout from {self.start_sha} to {self.end_sha}."
return f"Updated checkout from {self.start_sha} to {self.end_sha}."
@dataclass
@@ -1227,7 +1173,7 @@ class GitProgressCheckedOutAt(DebugLevel, pt.GitProgressCheckedOutAt):
return "M007"
def message(self) -> str:
return f" Checked out at {self.end_sha}."
return f"Checked out at {self.end_sha}."
@dataclass
@@ -1260,26 +1206,6 @@ class SelectorReportInvalidSelector(InfoLevel, pt.SelectorReportInvalidSelector)
)
@dataclass
class JinjaLogInfo(InfoLevel, EventStringFunctor, pt.JinjaLogInfo):
def code(self):
return "M011"
def message(self) -> str:
# This is for the log method used in macros so msg cannot be built here
return self.msg
@dataclass
class JinjaLogDebug(DebugLevel, EventStringFunctor, pt.JinjaLogDebug):
def code(self):
return "M012"
def message(self) -> str:
# This is for the log method used in macros so msg cannot be built here
return self.msg
@dataclass
class DepsNoPackagesFound(InfoLevel, pt.DepsNoPackagesFound):
def code(self):
@@ -1304,7 +1230,7 @@ class DepsInstallInfo(InfoLevel, pt.DepsInstallInfo):
return "M015"
def message(self) -> str:
return f" Installed from {self.version_name}"
return f"Installed from {self.version_name}"
@dataclass
@@ -1313,7 +1239,7 @@ class DepsUpdateAvailable(InfoLevel, pt.DepsUpdateAvailable):
return "M016"
def message(self) -> str:
return f" Updated version available: {self.version_latest}"
return f"Updated version available: {self.version_latest}"
@dataclass
@@ -1322,7 +1248,7 @@ class DepsUpToDate(InfoLevel, pt.DepsUpToDate):
return "M017"
def message(self) -> str:
return " Up to date!"
return "Up to date!"
@dataclass
@@ -1331,7 +1257,7 @@ class DepsListSubdirectory(InfoLevel, pt.DepsListSubdirectory):
return "M018"
def message(self) -> str:
return f" and subdirectory {self.subdirectory}"
return f"and subdirectory {self.subdirectory}"
@dataclass
@@ -1498,15 +1424,6 @@ class SeedHeader(InfoLevel, pt.SeedHeader):
return self.header
@dataclass
class SeedHeaderSeparator(InfoLevel, pt.SeedHeaderSeparator):
def code(self):
return "Q005"
def message(self) -> str:
return "-" * self.len_header
@dataclass
class SQLRunnerException(DebugLevel, pt.SQLRunnerException): # noqa
def code(self):
@@ -2084,13 +2001,18 @@ class OpenCommand(InfoLevel, pt.OpenCommand):
return msg
# We use events to create console output, but also think of them as a sequence of important and
# meaningful occurrences to be used for debugging and monitoring. The Formatting event helps ease
# the tension between these two goals by allowing empty lines, heading separators, and other
# formatting to be written to the console, while they can be ignored for other purposes. For
# general information that isn't simple formatting, the Note event should be used instead.
@dataclass
class EmptyLine(InfoLevel, pt.EmptyLine):
class Formatting(InfoLevel, pt.Formatting):
def code(self):
return "Z017"
def message(self) -> str:
return ""
return self.msg
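# A short usage sketch (assumes fire_event from dbt.events.functions): Formatting carries
# console layout that structured-log consumers can safely drop.
fire_event(Formatting(""))            # spacer line, replacing the old EmptyLine event
fire_event(Formatting("-" * 20))      # separator bar, e.g. under a seed header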
@dataclass
@@ -2266,7 +2188,7 @@ class DepsCreatingLocalSymlink(DebugLevel, pt.DepsCreatingLocalSymlink):
return "Z037"
def message(self) -> str:
return " Creating symlink to local dependency."
return "Creating symlink to local dependency."
@dataclass
@@ -2275,7 +2197,7 @@ class DepsSymlinkNotAvailable(DebugLevel, pt.DepsSymlinkNotAvailable):
return "Z038"
def message(self) -> str:
return " Symlinks are not available on this OS, copying dependency."
return "Symlinks are not available on this OS, copying dependency."
@dataclass
@@ -2345,3 +2267,41 @@ class RunResultWarningMessage(WarnLevel, EventStringFunctor, pt.RunResultWarning
def message(self) -> str:
# This is the message on the result object, cannot be formatted in event
return self.msg
@dataclass
class DebugCmdOut(InfoLevel, pt.DebugCmdOut):
def code(self):
return "Z047"
def message(self) -> str:
return self.msg
@dataclass
class DebugCmdResult(InfoLevel, pt.DebugCmdResult):
def code(self):
return "Z048"
def message(self) -> str:
return self.msg
@dataclass
class ListCmdOut(InfoLevel, pt.ListCmdOut):
def code(self):
return "Z049"
def message(self) -> str:
return self.msg
# The Note event provides a way to log messages which aren't likely to be useful as more structured events.
# For console formatting text like empty lines and separator bars, use the Formatting event instead.
@dataclass
class Note(InfoLevel, pt.Note):
def code(self):
return "Z050"
def message(self) -> str:
return self.msg
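# A usage sketch (illustrative): Note carries one-off informational text that doesn't
# warrant its own structured event type, and can be demoted to debug level at the call site.
fire_event(Note(msg="previous checksum differs from current checksum"))
fire_event(Note(msg="static parser failed on models/a.sql"), level=EventLevel.DEBUG)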

View File

@@ -40,7 +40,7 @@ class GraphQueue:
# store the 'score' of each node as a number. Lower is higher priority.
self._scores = self._get_scores(self.graph)
# populate the initial queue
self._find_new_additions()
self._find_new_additions(list(self.graph.nodes()))
# awaits after task end
self.some_task_done = threading.Condition(self.lock)
@@ -156,12 +156,12 @@ class GraphQueue:
"""
return node in self.in_progress or node in self.queued
def _find_new_additions(self) -> None:
def _find_new_additions(self, candidates) -> None:
"""Find any nodes in the graph that need to be added to the internal
queue and add them.
"""
for node, in_degree in self.graph.in_degree():
if not self._already_known(node) and in_degree == 0:
for node in candidates:
if self.graph.in_degree(node) == 0 and not self._already_known(node):
self.inner.put((self._scores[node], node))
self.queued.add(node)
@@ -174,8 +174,9 @@ class GraphQueue:
"""
with self.lock:
self.in_progress.remove(node_id)
successors = list(self.graph.successors(node_id))
self.graph.remove_node(node_id)
self._find_new_additions()
self._find_new_additions(successors)
self.inner.task_done()
self.some_task_done.notify_all()
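# A standalone sketch (assumes networkx is installed; simplified from the code above) of
# the optimization: after a node finishes, only its former successors can newly reach
# in-degree zero, so only those candidates are re-checked instead of every node.
import networkx as nx

graph = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
ready = {n for n in graph.nodes() if graph.in_degree(n) == 0}   # initially {"a"}

def mark_done(node):
    ready.discard(node)
    successors = list(graph.successors(node))
    graph.remove_node(node)
    # re-check only the successors, not every node in the graph
    ready.update(n for n in successors if graph.in_degree(n) == 0)

mark_done("a")   # ready is now {"b", "c"}; "d" still waits on both parents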

View File

@@ -12,5 +12,5 @@
where {{ filter }}
{% endif %}
{% endcall %}
{{ return(load_result('collect_freshness').table) }}
{{ return(load_result('collect_freshness')) }}
{% endmacro %}

View File

@@ -1,8 +1,10 @@
{% macro get_merge_sql(target, source, unique_key, dest_columns, incremental_predicates) -%}
{% macro get_merge_sql(target, source, unique_key, dest_columns, incremental_predicates=none) -%}
-- back compat for old kwarg name
{% set incremental_predicates = kwargs.get('predicates', incremental_predicates) %}
{{ adapter.dispatch('get_merge_sql', 'dbt')(target, source, unique_key, dest_columns, incremental_predicates) }}
{%- endmacro %}
{% macro default__get_merge_sql(target, source, unique_key, dest_columns, incremental_predicates) -%}
{% macro default__get_merge_sql(target, source, unique_key, dest_columns, incremental_predicates=none) -%}
{%- set predicates = [] if incremental_predicates is none else [] + incremental_predicates -%}
{%- set dest_cols_csv = get_quoted_csv(dest_columns | map(attribute="name")) -%}
{%- set merge_update_columns = config.get('merge_update_columns') -%}
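# An illustrative Python rendering (not dbt code) of the back-compat pattern in the macro
# above: honor the legacy kwarg name `predicates` when a caller still passes it.
def get_merge_sql(target, source, unique_key, dest_columns,
                  incremental_predicates=None, **kwargs):
    incremental_predicates = kwargs.get("predicates", incremental_predicates)
    predicates = [] if incremental_predicates is None else list(incremental_predicates)
    return predicates  # stand-in for dispatching to the adapter's merge SQL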

View File

@@ -3,7 +3,7 @@
{%- set ref_dict = {} -%}
{%- for _ref in model.refs -%}
{%- set resolved = ref(*_ref) -%}
{%- do ref_dict.update({_ref | join("."): resolved.quote(database=False, schema=False, identifier=False) | string}) -%}
{%- do ref_dict.update({_ref | join("."): resolved | string | replace('"', '\"')}) -%}
{%- endfor -%}
def ref(*args,dbt_load_df_function):
@@ -18,7 +18,7 @@ def ref(*args,dbt_load_df_function):
{%- set source_dict = {} -%}
{%- for _source in model.sources -%}
{%- set resolved = source(*_source) -%}
{%- do source_dict.update({_source | join("."): resolved.quote(database=False, schema=False, identifier=False) | string}) -%}
{%- do source_dict.update({_source | join("."): resolved | string | replace('"', '\"')}) -%}
{%- endfor -%}
def source(*args, dbt_load_df_function):
@@ -33,8 +33,8 @@ def source(*args, dbt_load_df_function):
{% set config_dbt_used = zip(model.config.config_keys_used, model.config.config_keys_defaults) | list %}
{%- for key, default in config_dbt_used -%}
{# weird type testing with enum, would be much easier to write this logic in Python! #}
{%- if key == 'language' -%}
{%- set value = 'python' -%}
{%- if key == "language" -%}
{%- set value = "python" -%}
{%- endif -%}
{%- set value = model.config.get(key, default) -%}
{%- do config_dict.update({key: value}) -%}
@@ -62,11 +62,12 @@ class config:
class this:
"""dbt.this() or dbt.this.identifier"""
database = '{{ this.database }}'
schema = '{{ this.schema }}'
identifier = '{{ this.identifier }}'
database = "{{ this.database }}"
schema = "{{ this.schema }}"
identifier = "{{ this.identifier }}"
{% set this_relation_name = this | string | replace('"', '\\"') %}
def __repr__(self):
return '{{ this }}'
return "{{ this_relation_name }}"
class dbtObj:
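# A standalone sketch (illustrative) of why the quote-escaping above matters when
# code-generating a Python string literal that itself contains double quotes:
relation = '"analytics"."dbt"."my_model"'
broken = 'return "{}"'.format(relation)                      # generates invalid Python source
valid = 'return "{}"'.format(relation.replace('"', '\\"'))   # escapes survive the codegen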

File diff suppressed because one or more lines are too long

View File

@@ -11,8 +11,9 @@ from contextlib import contextmanager
from pathlib import Path
import dbt.version
from dbt.events.functions import fire_event, setup_event_logger, LOG_VERSION
from dbt.events.functions import fire_event, setup_event_logger, setup_fallback_logger, LOG_VERSION
from dbt.events.types import (
EventLevel,
MainEncounteredError,
MainKeyboardInterrupt,
MainReportVersion,
@@ -178,6 +179,13 @@ def handle_and_check(args):
# Set flags from args, user config, and env vars
user_config = read_user_config(flags.PROFILES_DIR) # This is read again later
flags.set_from_args(parsed, user_config)
# If the user has asked to suppress non-error logging on the cli, we want to respect that as soon as possible,
# so that any non-error logging done before full log config is loaded and ready is filtered accordingly.
setup_fallback_logger(
bool(flags.ENABLE_LEGACY_LOGGER), EventLevel.ERROR if flags.QUIET else EventLevel.INFO
)
dbt.tracking.initialize_from_flags()
# Set log_format from flags
parsed.cls.set_log_format()
@@ -229,15 +237,15 @@ def run_from_args(parsed):
if task.config is not None:
log_path = getattr(task.config, "log_path", None)
log_manager.set_path(log_path)
# if 'list' task: set stdout to WARN instead of INFO
level_override = parsed.cls.pre_init_hook(parsed)
setup_event_logger(log_path or "logs", level_override)
setup_event_logger(log_path or "logs")
fire_event(MainReportVersion(version=str(dbt.version.installed), log_version=LOG_VERSION))
fire_event(MainReportArgs(args=args_to_dict(parsed)))
# For the ListTask, filter out system report logs to allow piping ls output to jq, etc
if not list_task.ListTask == parsed.cls:
fire_event(MainReportVersion(version=str(dbt.version.installed), log_version=LOG_VERSION))
fire_event(MainReportArgs(args=args_to_dict(parsed)))
if dbt.tracking.active_user is not None: # mypy appeasement, always true
fire_event(MainTrackingUserState(user_state=dbt.tracking.active_user.state()))
if dbt.tracking.active_user is not None: # mypy appeasement, always true
fire_event(MainTrackingUserState(user_state=dbt.tracking.active_user.state()))
results = None
@@ -486,7 +494,7 @@ def _build_snapshot_subparser(subparsers, base_subparser):
return sub
def _add_defer_argument(*subparsers):
def _add_defer_arguments(*subparsers):
for sub in subparsers:
sub.add_optional_argument_inverse(
"--defer",
@@ -499,10 +507,6 @@ def _add_defer_argument(*subparsers):
""",
default=flags.DEFER_MODE,
)
def _add_favor_state_argument(*subparsers):
for sub in subparsers:
sub.add_optional_argument_inverse(
"--favor-state",
enable_help="""
@@ -580,7 +584,7 @@ def _build_docs_generate_subparser(subparsers, base_subparser):
Do not run "dbt compile" as part of docs generation
""",
)
_add_defer_argument(generate_sub)
_add_defer_arguments(generate_sub)
return generate_sub
@@ -1192,9 +1196,7 @@ def parse_args(args, cls=DBTArgumentParser):
# list_sub sets up its own arguments.
_add_selection_arguments(run_sub, compile_sub, generate_sub, test_sub, snapshot_sub, seed_sub)
# --defer
_add_defer_argument(run_sub, test_sub, build_sub, snapshot_sub, compile_sub)
# --favor-state
_add_favor_state_argument(run_sub, test_sub, build_sub, snapshot_sub)
_add_defer_arguments(run_sub, test_sub, build_sub, snapshot_sub, compile_sub)
# --full-refresh
_add_table_mutability_arguments(run_sub, compile_sub, build_sub)

View File

@@ -8,6 +8,7 @@ from typing import Dict, Optional, Mapping, Callable, Any, List, Type, Union, Tu
from itertools import chain
import time
from dbt.events.base_types import EventLevel
import pprint
import dbt.exceptions
import dbt.tracking
@@ -29,6 +30,8 @@ from dbt.events.types import (
ParsedFileLoadFailed,
InvalidDisabledTargetInTestNode,
NodeNotFoundOrDisabled,
StateCheckVarsHash,
Note,
)
from dbt.logger import DbtProcessState
from dbt.node_types import NodeType
@@ -569,6 +572,12 @@ class ManifestLoader:
reason="config vars, config profile, or config target have changed"
)
)
fire_event(
Note(
msg=f"previous checksum: {self.manifest.state_check.vars_hash.checksum}, current checksum: {manifest.state_check.vars_hash.checksum}"
),
level=EventLevel.DEBUG,
)
valid = False
reparse_reason = ReparseReason.vars_changed
if self.manifest.state_check.profile_hash != manifest.state_check.profile_hash:
@@ -702,16 +711,28 @@ class ManifestLoader:
# arg vars, but since any changes to that file will cause state_check
# to not pass, it doesn't matter. If we move to more granular checking
# of env_vars, that would need to change.
# We are using the parsed cli_vars instead of config.args.vars, in order
# to sort them and avoid reparsing because of ordering issues.
stringified_cli_vars = pprint.pformat(config.cli_vars)
vars_hash = FileHash.from_contents(
"\x00".join(
[
getattr(config.args, "vars", "{}") or "{}",
stringified_cli_vars,
getattr(config.args, "profile", "") or "",
getattr(config.args, "target", "") or "",
__version__,
]
)
)
fire_event(
StateCheckVarsHash(
checksum=vars_hash.checksum,
vars=stringified_cli_vars,
profile=config.args.profile,
target=config.args.target,
version=__version__,
)
)
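# A standalone sketch (illustrative; the real FileHash may use a different digest) of why
# pprint.pformat is used here: it sorts dict keys, so the same --vars passed in any order
# produce the same checksum and do not trigger a reparse.
import hashlib
import pprint

a = pprint.pformat({"alpha": 1, "beta": 2})
b = pprint.pformat({"beta": 2, "alpha": 1})
assert a == b
print(hashlib.sha256(a.encode()).hexdigest())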
# Create a FileHash of the env_vars in the project
key_list = list(config.project_env_vars.keys())

View File

@@ -1,19 +1,10 @@
from copy import deepcopy
from dbt.context.context_config import ContextConfig
from dbt.contracts.graph.nodes import ModelNode
import dbt.flags as flags
from dbt.events.base_types import EventLevel
from dbt.events.types import Note
from dbt.events.functions import fire_event
from dbt.events.types import (
StaticParserCausedJinjaRendering,
UsingExperimentalParser,
SampleFullJinjaRendering,
StaticParserFallbackJinjaRendering,
StaticParsingMacroOverrideDetected,
StaticParserSuccess,
StaticParserFailure,
ExperimentalParserSuccess,
ExperimentalParserFailure,
)
import dbt.flags as flags
from dbt.node_types import NodeType, ModelLanguage
from dbt.parser.base import SimpleSQLParser
from dbt.parser.search import FileBlock
@@ -261,7 +252,10 @@ class ModelParser(SimpleSQLParser[ModelNode]):
elif not flags.STATIC_PARSER:
# jinja rendering
super().render_update(node, config)
fire_event(StaticParserCausedJinjaRendering(path=node.path))
fire_event(
Note(f"1605: jinja rendering because of STATIC_PARSER flag. file: {node.path}"),
EventLevel.DEBUG,
)
return
# only sample for experimental parser correctness on normal runs,
@@ -295,7 +289,10 @@ class ModelParser(SimpleSQLParser[ModelNode]):
# sample the experimental parser only during a normal run
if exp_sample and not flags.USE_EXPERIMENTAL_PARSER:
fire_event(UsingExperimentalParser(path=node.path))
fire_event(
Note(f"1610: conducting experimental parser sample on {node.path}"),
EventLevel.DEBUG,
)
experimental_sample = self.run_experimental_parser(node)
# if the experimental parser succeeded, make a full copy of model parser
# and populate _everything_ into it so it can be compared apples-to-apples
@@ -325,7 +322,10 @@ class ModelParser(SimpleSQLParser[ModelNode]):
# sampling rng here, but the effect would be the same since we would only roll
# it 40% of the time. So I've opted to keep all the rng code colocated above.
if stable_sample and not flags.USE_EXPERIMENTAL_PARSER:
fire_event(SampleFullJinjaRendering(path=node.path))
fire_event(
Note(f"1611: conducting full jinja rendering sample on {node.path}"),
EventLevel.DEBUG,
)
# if this will _never_ mutate anything `self` we could avoid these deep copies,
# but we can't really guarantee that going forward.
model_parser_copy = self.partial_deepcopy()
@@ -360,7 +360,9 @@ class ModelParser(SimpleSQLParser[ModelNode]):
else:
# jinja rendering
super().render_update(node, config)
fire_event(StaticParserFallbackJinjaRendering(path=node.path))
fire_event(
Note(f"1602: parser fallback to jinja rendering on {node.path}"), EventLevel.DEBUG
)
# if sampling, add the correct messages for tracking
if exp_sample and isinstance(experimental_sample, str):
@@ -396,19 +398,26 @@ class ModelParser(SimpleSQLParser[ModelNode]):
# this log line is used for integration testing. If you change
# the code at the beginning of the line, change the tests in
# test/integration/072_experimental_parser_tests/test_all_experimental_parser.py
fire_event(StaticParsingMacroOverrideDetected(path=node.path))
fire_event(
Note(
f"1601: detected macro override of ref/source/config in the scope of {node.path}"
),
EventLevel.DEBUG,
)
return "has_banned_macro"
# run the stable static parser and return the results
try:
statically_parsed = py_extract_from_source(node.raw_code)
fire_event(StaticParserSuccess(path=node.path))
fire_event(
Note(f"1699: static parser successfully parsed {node.path}"), EventLevel.DEBUG
)
return _shift_sources(statically_parsed)
# if we want information on what features are barring the static
# parser from reading model files, this is where we would add that
# since that information is stored in the `ExtractionError`.
except ExtractionError:
fire_event(StaticParserFailure(path=node.path))
fire_event(Note(f"1603: static parser failed on {node.path}"), EventLevel.DEBUG)
return "cannot_parse"
def run_experimental_parser(
@@ -419,7 +428,12 @@ class ModelParser(SimpleSQLParser[ModelNode]):
# this log line is used for integration testing. If you change
# the code at the beginning of the line, change the tests in
# test/integration/072_experimental_parser_tests/test_all_experimental_parser.py
fire_event(StaticParsingMacroOverrideDetected(path=node.path))
fire_event(
Note(
f"1601: detected macro override of ref/source/config in the scope of {node.path}"
),
EventLevel.DEBUG,
)
return "has_banned_macro"
# run the experimental parser and return the results
@@ -428,13 +442,16 @@ class ModelParser(SimpleSQLParser[ModelNode]):
# experimental features. Change `py_extract_from_source` to the new
# experimental call when we add additional features.
experimentally_parsed = py_extract_from_source(node.raw_code)
fire_event(ExperimentalParserSuccess(path=node.path))
fire_event(
Note(f"1698: experimental parser successfully parsed {node.path}"),
EventLevel.DEBUG,
)
return _shift_sources(experimentally_parsed)
# if we want information on what features are barring the experimental
# parser from reading model files, this is where we would add that
# since that information is stored in the `ExtractionError`.
except ExtractionError:
fire_event(ExperimentalParserFailure(path=node.path))
fire_event(Note(f"1604: experimental parser failed on {node.path}"), EventLevel.DEBUG)
return "cannot_parse"
# checks for banned macros

View File

@@ -1,10 +1,7 @@
from dataclasses import dataclass
import re
import warnings
from typing import List
from packaging import version as packaging_version
from dbt.exceptions import VersionsNotCompatibleError
import dbt.utils
@@ -70,6 +67,11 @@ $
_VERSION_REGEX = re.compile(_VERSION_REGEX_PAT_STR, re.VERBOSE)
def _cmp(a, b):
"""Return negative if a<b, zero if a==b, positive if a>b."""
return (a > b) - (a < b)
@dataclass
class VersionSpecifier(VersionSpecification):
def to_version_string(self, skip_matcher=False):
@@ -142,13 +144,19 @@ class VersionSpecifier(VersionSpecification):
return 1
if b is None:
return -1
# This suppresses the LegacyVersion deprecation warning
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=DeprecationWarning)
if packaging_version.parse(a) > packaging_version.parse(b):
# Check the prerelease component only
prcmp = self._nat_cmp(a, b)
if prcmp != 0: # either -1 or 1
return prcmp
# else is equal and will fall through
else: # major/minor/patch, should all be numbers
if a > b:
return 1
elif packaging_version.parse(a) < packaging_version.parse(b):
elif a < b:
return -1
# else is equal and will fall through
equal = (
self.matcher == Matchers.GREATER_THAN_OR_EQUAL
@@ -212,6 +220,29 @@ class VersionSpecifier(VersionSpecification):
def is_exact(self):
return self.matcher == Matchers.EXACT
@classmethod
def _nat_cmp(cls, a, b):
def cmp_prerelease_tag(a, b):
if isinstance(a, int) and isinstance(b, int):
return _cmp(a, b)
elif isinstance(a, int):
return -1
elif isinstance(b, int):
return 1
else:
return _cmp(a, b)
a, b = a or "", b or ""
a_parts, b_parts = a.split("."), b.split(".")
a_parts = [int(x) if re.match(r"^\d+$", x) else x for x in a_parts]
b_parts = [int(x) if re.match(r"^\d+$", x) else x for x in b_parts]
for sub_a, sub_b in zip(a_parts, b_parts):
cmp_result = cmp_prerelease_tag(sub_a, sub_b)
if cmp_result != 0:
return cmp_result
else:
return _cmp(len(a), len(b))
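# A standalone sketch (assumption: mirrors _nat_cmp's rules) of the SemVer prerelease
# ordering implemented above: numeric identifiers compare numerically and rank below
# alphanumeric ones, and an equal shorter prerelease ranks below a longer one.
def cmp_pre(a: str, b: str) -> int:
    def _cmp(x, y):
        return (x > y) - (x < y)
    a, b = a or "", b or ""
    a_parts = [int(x) if x.isdigit() else x for x in a.split(".")]
    b_parts = [int(x) if x.isdigit() else x for x in b.split(".")]
    for sub_a, sub_b in zip(a_parts, b_parts):
        if isinstance(sub_a, int) != isinstance(sub_b, int):
            return -1 if isinstance(sub_a, int) else 1  # ints sort before strings
        c = _cmp(sub_a, sub_b)
        if c != 0:
            return c
    return _cmp(len(a), len(b))

assert cmp_pre("2", "10") == -1        # numeric, not lexicographic
assert cmp_pre("rc.1", "rc.1.1") == -1 # equal prefix, shorter ranks lower
assert cmp_pre("1", "rc") == -1        # numeric ranks below alphanumeric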
@dataclass
class VersionRange:

View File

@@ -83,6 +83,7 @@ class CompileTask(GraphRunnableTask):
adapter=adapter,
other=deferred_manifest,
selected=selected_uids,
favor_state=bool(self.args.favor_state),
)
# TODO: is it wrong to write the manifest here? I think it's right...
self.write_manifest()

View File

@@ -5,7 +5,11 @@ import sys
from typing import Optional, Dict, Any, List
from dbt.events.functions import fire_event
from dbt.events.types import OpenCommand
from dbt.events.types import (
OpenCommand,
DebugCmdOut,
DebugCmdResult,
)
from dbt import flags
import dbt.clients.system
import dbt.exceptions
@@ -99,25 +103,25 @@ class DebugTask(BaseTask):
return not self.any_failure
version = get_installed_version().to_version_string(skip_matcher=True)
print("dbt version: {}".format(version))
print("python version: {}".format(sys.version.split()[0]))
print("python path: {}".format(sys.executable))
print("os info: {}".format(platform.platform()))
print("Using profiles.yml file at {}".format(self.profile_path))
print("Using dbt_project.yml file at {}".format(self.project_path))
print("")
fire_event(DebugCmdOut(msg="dbt version: {}".format(version)))
fire_event(DebugCmdOut(msg="python version: {}".format(sys.version.split()[0])))
fire_event(DebugCmdOut(msg="python path: {}".format(sys.executable)))
fire_event(DebugCmdOut(msg="os info: {}".format(platform.platform())))
fire_event(DebugCmdOut(msg="Using profiles.yml file at {}".format(self.profile_path)))
fire_event(DebugCmdOut(msg="Using dbt_project.yml file at {}".format(self.project_path)))
self.test_configuration()
self.test_dependencies()
self.test_connection()
if self.any_failure:
print(red(f"{(pluralize(len(self.messages), 'check'))} failed:"))
fire_event(
DebugCmdResult(msg=red(f"{(pluralize(len(self.messages), 'check'))} failed:"))
)
else:
print(green("All checks passed!"))
fire_event(DebugCmdResult(msg=green("All checks passed!")))
for message in self.messages:
print(message)
print("")
fire_event(DebugCmdResult(msg=f"{message}\n"))
return not self.any_failure
@@ -273,21 +277,33 @@ class DebugTask(BaseTask):
return green("OK found")
def test_dependencies(self):
print("Required dependencies:")
print(" - git [{}]".format(self.test_git()))
print("")
fire_event(DebugCmdOut(msg="Required dependencies:"))
logline_msg = self.test_git()
fire_event(DebugCmdResult(msg=f" - git [{logline_msg}]\n"))
def test_configuration(self):
fire_event(DebugCmdOut(msg="Configuration:"))
profile_status = self._load_profile()
fire_event(DebugCmdOut(msg=f" profiles.yml file [{profile_status}]"))
project_status = self._load_project()
print("Configuration:")
print(" profiles.yml file [{}]".format(profile_status))
print(" dbt_project.yml file [{}]".format(project_status))
fire_event(DebugCmdOut(msg=f" dbt_project.yml file [{project_status}]"))
# skip profile stuff if we can't find a profile name
if self.profile_name is not None:
print(" profile: {} [{}]".format(self.profile_name, self._profile_found()))
print(" target: {} [{}]".format(self.target_name, self._target_found()))
print("")
fire_event(
DebugCmdOut(
msg=" profile: {} [{}]\n".format(self.profile_name, self._profile_found())
)
)
fire_event(
DebugCmdOut(
msg=" target: {} [{}]\n".format(self.target_name, self._target_found())
)
)
self._log_project_fail()
self._log_profile_fail()
@@ -348,11 +364,12 @@ class DebugTask(BaseTask):
def test_connection(self):
if not self.profile:
return
print("Connection:")
fire_event(DebugCmdOut(msg="Connection:"))
for k, v in self.profile.credentials.connection_info():
print(" {}: {}".format(k, v))
print(" Connection test: [{}]".format(self._connection_result()))
print("")
fire_event(DebugCmdOut(msg=f" {k}: {v}"))
res = self._connection_result()
fire_event(DebugCmdOut(msg=f" Connection test: [{res}]\n"))
@classmethod
def validate_connection(cls, target_dict):

View File

@@ -20,7 +20,7 @@ from dbt.events.types import (
DepsInstallInfo,
DepsListSubdirectory,
DepsNotifyUpdatesAvailable,
EmptyLine,
Formatting,
)
from dbt.clients import system
@@ -88,7 +88,7 @@ class DepsTask(BaseTask):
package_name=package_name, source_type=source_type, version=version
)
if packages_to_upgrade:
fire_event(EmptyLine())
fire_event(Formatting(""))
fire_event(DepsNotifyUpdatesAvailable(packages=ListOfStrings(packages_to_upgrade)))
@classmethod

View File

@@ -105,10 +105,10 @@ class FreshnessRunner(BaseRunner):
)
relation = self.adapter.Relation.create_from_source(compiled_node)
# given a Source, calculate its fresnhess.
# given a Source, calculate its freshness.
with self.adapter.connection_for(compiled_node):
self.adapter.clear_transaction()
freshness = self.adapter.calculate_freshness(
adapter_response, freshness = self.adapter.calculate_freshness(
relation,
compiled_node.loaded_at_field,
compiled_node.freshness.filter,
@@ -124,7 +124,7 @@ class FreshnessRunner(BaseRunner):
timing=[],
execution_time=0,
message=None,
adapter_response={},
adapter_response=adapter_response.to_dict(omit_none=True),
failures=None,
**freshness,
)

View File

@@ -1,15 +1,21 @@
import json
import dbt.flags
from dbt.contracts.graph.nodes import Exposure, SourceDefinition, Metric
from dbt.graph import ResourceTypeSelector
from dbt.task.runnable import GraphRunnableTask, ManifestTask
from dbt.task.test import TestSelector
from dbt.node_types import NodeType
from dbt.events.functions import warn_or_error
from dbt.events.types import NoNodesSelected
from dbt.events.functions import (
fire_event,
warn_or_error,
)
from dbt.events.types import (
NoNodesSelected,
ListCmdOut,
)
from dbt.exceptions import DbtRuntimeError, DbtInternalError
from dbt.logger import log_manager
from dbt.events.eventmgr import EventLevel
class ListTask(GraphRunnableTask):
@@ -50,20 +56,6 @@ class ListTask(GraphRunnableTask):
'"models" and "resource_type" are mutually exclusive ' "arguments"
)
@classmethod
def pre_init_hook(cls, args):
"""A hook called before the task is initialized."""
# Filter out all INFO-level logging to allow piping ls output to jq, etc
# WARN level will still include all warnings + errors
# Do this by:
# - returning the log level so that we can pass it into the 'level_override'
# arg of events.functions.setup_event_logger() -- good!
# - mutating the initialized, not-yet-configured STDOUT event logger
# because it's being configured too late -- bad! TODO refactor!
log_manager.stderr_console()
super().pre_init_hook(args)
return EventLevel.WARN
def _iterate_selected_nodes(self):
selector = self.get_node_selector()
spec = self.get_selection_spec()
@@ -148,9 +140,14 @@ class ListTask(GraphRunnableTask):
return self.output_results(generator())
def output_results(self, results):
"""Log, or output a plain, newline-delimited, and ready-to-pipe list of nodes found."""
for result in results:
self.node_results.append(result)
print(result)
if dbt.flags.LOG_FORMAT == "json":
fire_event(ListCmdOut(msg=result))
else:
# Cleaner to leave as print than to mutate the logger not to print timestamps.
print(result)
return self.node_results
@property

View File

@@ -5,7 +5,7 @@ from dbt.logger import (
)
from dbt.events.functions import fire_event
from dbt.events.types import (
EmptyLine,
Formatting,
RunResultWarning,
RunResultWarningMessage,
RunResultFailure,
@@ -72,14 +72,14 @@ def print_run_status_line(results) -> None:
stats["total"] += 1
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
fire_event(StatsLine(stats=stats))
def print_run_result_error(result, newline: bool = True, is_warning: bool = False) -> None:
if newline:
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
if result.status == NodeStatus.Fail or (is_warning and result.status == NodeStatus.Warn):
if is_warning:
@@ -109,12 +109,12 @@ def print_run_result_error(result, newline: bool = True, is_warning: bool = Fals
if result.node.build_path is not None:
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
fire_event(SQLCompiledPath(path=result.node.compiled_path))
if result.node.should_store_failures:
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
fire_event(CheckNodeTestFailure(relation_name=result.node.relation_name))
elif result.message is not None:
@@ -143,7 +143,7 @@ def print_run_end_messages(results, keyboard_interrupt: bool = False) -> None:
with DbtStatusMessage(), InvocationProcessor():
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
fire_event(
EndOfRunSummary(
num_errors=len(errors),

View File

@@ -30,7 +30,7 @@ from dbt.exceptions import (
from dbt.events.functions import fire_event, get_invocation_id
from dbt.events.types import (
DatabaseErrorRunningHook,
EmptyLine,
Formatting,
HooksRunning,
FinishedRunningStats,
LogModelResult,
@@ -335,7 +335,7 @@ class RunTask(CompileTask):
num_hooks = len(ordered_hooks)
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
fire_event(HooksRunning(num_hooks=num_hooks, hook_type=hook_type))
startctx = TimestampNamed("node_started_at")
@@ -388,7 +388,7 @@ class RunTask(CompileTask):
self._total_executed += len(ordered_hooks)
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
def safe_run_hooks(
self, adapter, hook_type: RunHookType, extra_context: Dict[str, Any]
@@ -419,7 +419,7 @@ class RunTask(CompileTask):
execution = utils.humanize_execution_time(execution_time=execution_time)
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
fire_event(
FinishedRunningStats(
stat_line=stat_line, execution=execution, execution_time=execution_time
@@ -443,7 +443,7 @@ class RunTask(CompileTask):
database_schema_set: Set[Tuple[Optional[str], str]] = {
(r.node.database, r.node.schema)
for r in results
if r.node.is_relational
if (hasattr(r, "node") and r.node.is_relational)
and r.status not in (NodeStatus.Error, NodeStatus.Fail, NodeStatus.Skipped)
}

View File

@@ -28,7 +28,7 @@ from dbt.logger import (
)
from dbt.events.functions import fire_event, warn_or_error
from dbt.events.types import (
EmptyLine,
Formatting,
LogCancelLine,
DefaultSelector,
NodeStart,
@@ -377,7 +377,7 @@ class GraphRunnableTask(ManifestTask):
)
)
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
pool = ThreadPool(num_threads)
try:
@@ -458,7 +458,7 @@ class GraphRunnableTask(ManifestTask):
if len(self._flattened_nodes) == 0:
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
warn_or_error(NothingToDo())
result = self.get_result(
results=[],
@@ -467,7 +467,7 @@ class GraphRunnableTask(ManifestTask):
)
else:
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
selected_uids = frozenset(n.unique_id for n in self._flattened_nodes)
result = self.execute_with_hooks(selected_uids)

View File

@@ -12,8 +12,7 @@ from dbt.logger import TextOnly
from dbt.events.functions import fire_event
from dbt.events.types import (
SeedHeader,
SeedHeaderSeparator,
EmptyLine,
Formatting,
LogSeedResult,
LogStartLine,
)
@@ -99,13 +98,13 @@ class SeedTask(RunTask):
header = "Random sample of table: {}.{}".format(schema, alias)
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
fire_event(SeedHeader(header=header))
fire_event(SeedHeaderSeparator(len_header=len(header)))
fire_event(Formatting("-" * len(header)))
rand_table.print_table(max_rows=10, max_columns=None)
with TextOnly():
fire_event(EmptyLine())
fire_event(Formatting(""))
def show_tables(self, results):
for result in results:

View File

@@ -6,7 +6,12 @@ from dbt.include.global_project import DOCS_INDEX_FILE_PATH
from http.server import SimpleHTTPRequestHandler
from socketserver import TCPServer
from dbt.events.functions import fire_event
from dbt.events.types import ServingDocsPort, ServingDocsAccessInfo, ServingDocsExitInfo, EmptyLine
from dbt.events.types import (
ServingDocsPort,
ServingDocsAccessInfo,
ServingDocsExitInfo,
Formatting,
)
from dbt.task.base import ConfiguredTask
@@ -22,8 +27,8 @@ class ServeTask(ConfiguredTask):
fire_event(ServingDocsPort(address=address, port=port))
fire_event(ServingDocsAccessInfo(port=port))
fire_event(EmptyLine())
fire_event(EmptyLine())
fire_event(Formatting(""))
fire_event(Formatting(""))
fire_event(ServingDocsExitInfo())
# mypy doesn't think SimpleHTTPRequestHandler is ok here, but it is

View File

@@ -5,6 +5,7 @@ from dbt.utils import _coerce_decimal
from dbt.events.format import pluralize
from dbt.dataclass_schema import dbtClassMixin
import threading
from typing import Dict, Any
from .compile import CompileRunner
from .run import RunTask
@@ -38,6 +39,7 @@ class TestResultData(dbtClassMixin):
failures: int
should_warn: bool
should_error: bool
adapter_response: Dict[str, Any]
@classmethod
def validate(cls, data):
@@ -137,6 +139,7 @@ class TestRunner(CompileRunner):
map(_coerce_decimal, table.rows[0]),
)
)
test_result_dct["adapter_response"] = result["response"].to_dict(omit_none=True)
TestResultData.validate(test_result_dct)
return TestResultData.from_dict(test_result_dct)
@@ -171,7 +174,7 @@ class TestRunner(CompileRunner):
thread_id=thread_id,
execution_time=0,
message=message,
adapter_response={},
adapter_response=result.adapter_response,
failures=failures,
)

View File

@@ -29,6 +29,8 @@ from dbt.events.test_types import IntegrationTestDebug
# rm_file
# write_file
# read_file
# mkdir
# rm_dir
# get_artifact
# update_config_file
# write_config_file
@@ -156,6 +158,22 @@ def read_file(*paths):
return contents
# To create a directory
def mkdir(directory_path):
try:
os.makedirs(directory_path)
except FileExistsError:
raise FileExistsError(f"{directory_path} already exists.")
# To remove a directory
def rm_dir(directory_path):
try:
shutil.rmtree(directory_path)
except FileNotFoundError:
raise FileNotFoundError(f"{directory_path} does not exist.")
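# A usage sketch (hypothetical test; assumes write_file/read_file from this module):
def test_scratch_dir(project):
    mkdir("scratch")
    try:
        write_file("hello", "scratch", "note.txt")
        assert read_file("scratch", "note.txt") == "hello"
    finally:
        rm_dir("scratch")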
# Get an artifact (usually from the target directory) such as
# manifest.json or catalog.json to use in a test
def get_artifact(*paths):

View File

@@ -58,7 +58,7 @@ setup(
"minimal-snowplow-tracker==0.0.2",
"networkx>=2.3,<2.8.1;python_version<'3.8'",
"networkx>=2.3,<3;python_version>='3.8'",
"packaging>=20.9,<22.0",
"packaging>20.9",
"sqlparse>=0.2.3,<0.5",
"dbt-extractor~=0.4.1",
"typing-extensions>=3.7.4",

View File

@@ -1,7 +1,7 @@
# these are mostly just exports, #noqa them so flake8 will be happy
from dbt.adapters.postgres.connections import PostgresConnectionManager # noqa
from dbt.adapters.postgres.connections import PostgresCredentials
from dbt.adapters.postgres.relation import PostgresColumn # noqa
from dbt.adapters.postgres.column import PostgresColumn # noqa
from dbt.adapters.postgres.relation import PostgresRelation # noqa: F401
from dbt.adapters.postgres.impl import PostgresAdapter

View File

@@ -0,0 +1,12 @@
from dbt.adapters.base import Column
class PostgresColumn(Column):
@property
def data_type(self):
# on postgres, do not convert 'text' or 'varchar' to 'varchar()'
if self.dtype.lower() == "text" or (
self.dtype.lower() == "character varying" and self.char_size is None
):
return self.dtype
return super().data_type
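# A quick sketch (illustrative; assumes the base Column constructor signature) of the behavior:
col = PostgresColumn(column="note", dtype="text", char_size=None)
print(col.data_type)  # -> "text", never a sized varchar rendering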

View File

@@ -5,7 +5,7 @@ from dbt.adapters.base.meta import available
from dbt.adapters.base.impl import AdapterConfig
from dbt.adapters.sql import SQLAdapter
from dbt.adapters.postgres import PostgresConnectionManager
from dbt.adapters.postgres import PostgresColumn
from dbt.adapters.postgres.column import PostgresColumn
from dbt.adapters.postgres import PostgresRelation
from dbt.dataclass_schema import dbtClassMixin, ValidationError
from dbt.exceptions import (

View File

@@ -1,4 +1,3 @@
from dbt.adapters.base import Column
from dataclasses import dataclass
from dbt.adapters.base.relation import BaseRelation
from dbt.exceptions import DbtRuntimeError
@@ -21,14 +20,3 @@ class PostgresRelation(BaseRelation):
def relation_max_name_length(self):
return 63
class PostgresColumn(Column):
@property
def data_type(self):
# on postgres, do not convert 'text' or 'varchar' to 'varchar()'
if self.dtype.lower() == "text" or (
self.dtype.lower() == "character varying" and self.char_size is None
):
return self.dtype
return super().data_type

View File

@@ -6,5 +6,4 @@ env_files =
test.env
testpaths =
test/unit
test/integration
tests/functional

View File

@@ -185,12 +185,12 @@
},
"dbt_version": {
"type": "string",
"default": "1.4.0a1"
"default": "1.5.0a1"
},
"generated_at": {
"type": "string",
"format": "date-time",
"default": "2022-12-13T03:30:15.966964Z"
"default": "2023-02-06T14:19:58.594083Z"
},
"invocation_id": {
"oneOf": [
@@ -201,7 +201,7 @@
"type": "null"
}
],
"default": "4f2b967b-7e02-46de-a7ea-268a05e3fab1"
"default": "1dc2da24-242b-43e7-a6c8-0385fcd8ff50"
},
"env": {
"type": "object",
@@ -262,7 +262,6 @@
"AnalysisNode": {
"type": "object",
"required": [
"database",
"schema",
"name",
"resource_type",
@@ -276,7 +275,14 @@
],
"properties": {
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -400,7 +406,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.970579
"default": 1675693198.596908
},
"config_call_dict": {
"type": "object",
@@ -498,7 +504,7 @@
}
},
"additionalProperties": false,
"description": "AnalysisNode(database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
"description": "AnalysisNode(database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
},
"FileHash": {
"type": "object",
@@ -811,7 +817,6 @@
"SingularTestNode": {
"type": "object",
"required": [
"database",
"schema",
"name",
"resource_type",
@@ -825,7 +830,14 @@
],
"properties": {
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -941,7 +953,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.973521
"default": 1675693198.598933
},
"config_call_dict": {
"type": "object",
@@ -1039,7 +1051,7 @@
}
},
"additionalProperties": false,
"description": "SingularTestNode(database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.TestConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
"description": "SingularTestNode(database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.TestConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
},
"TestConfig": {
"type": "object",
@@ -1156,7 +1168,6 @@
"HookNode": {
"type": "object",
"required": [
"database",
"schema",
"name",
"resource_type",
@@ -1170,7 +1181,14 @@
],
"properties": {
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -1294,7 +1312,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.975156
"default": 1675693198.60002
},
"config_call_dict": {
"type": "object",
@@ -1402,12 +1420,11 @@
}
},
"additionalProperties": false,
"description": "HookNode(database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None, index: Optional[int] = None)"
"description": "HookNode(database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None, index: Optional[int] = None)"
},
"ModelNode": {
"type": "object",
"required": [
"database",
"schema",
"name",
"resource_type",
@@ -1421,7 +1438,14 @@
],
"properties": {
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -1545,7 +1569,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.976732
"default": 1675693198.601195
},
"config_call_dict": {
"type": "object",
@@ -1643,12 +1667,11 @@
}
},
"additionalProperties": false,
"description": "ModelNode(database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
"description": "ModelNode(database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
},
"RPCNode": {
"type": "object",
"required": [
"database",
"schema",
"name",
"resource_type",
@@ -1662,7 +1685,14 @@
],
"properties": {
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -1786,7 +1816,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.978195
"default": 1675693198.6022851
},
"config_call_dict": {
"type": "object",
@@ -1884,12 +1914,11 @@
}
},
"additionalProperties": false,
"description": "RPCNode(database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
"description": "RPCNode(database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
},
"SqlNode": {
"type": "object",
"required": [
"database",
"schema",
"name",
"resource_type",
@@ -1903,7 +1932,14 @@
],
"properties": {
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -2027,7 +2063,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.979718
"default": 1675693198.603358
},
"config_call_dict": {
"type": "object",
@@ -2125,13 +2161,12 @@
}
},
"additionalProperties": false,
"description": "SqlNode(database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
"description": "SqlNode(database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.NodeConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
},
"GenericTestNode": {
"type": "object",
"required": [
"test_metadata",
"database",
"schema",
"name",
"resource_type",
@@ -2148,7 +2183,14 @@
"$ref": "#/definitions/TestMetadata"
},
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -2264,7 +2306,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.981434
"default": 1675693198.60465
},
"config_call_dict": {
"type": "object",
@@ -2382,7 +2424,7 @@
}
},
"additionalProperties": false,
"description": "GenericTestNode(test_metadata: dbt.contracts.graph.nodes.TestMetadata, database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.TestConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None, column_name: Optional[str] = None, file_key_name: Optional[str] = None)"
"description": "GenericTestNode(test_metadata: dbt.contracts.graph.nodes.TestMetadata, database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.TestConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None, column_name: Optional[str] = None, file_key_name: Optional[str] = None)"
},
"TestMetadata": {
"type": "object",
@@ -2414,7 +2456,6 @@
"SnapshotNode": {
"type": "object",
"required": [
"database",
"schema",
"name",
"resource_type",
@@ -2429,7 +2470,14 @@
],
"properties": {
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -2529,7 +2577,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.984685
"default": 1675693198.606718
},
"config_call_dict": {
"type": "object",
@@ -2627,7 +2675,7 @@
}
},
"additionalProperties": false,
"description": "SnapshotNode(database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.SnapshotConfig, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
"description": "SnapshotNode(database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.SnapshotConfig, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', language: str = 'sql', refs: List[List[str]] = <factory>, sources: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, compiled_path: Optional[str] = None, compiled: bool = False, compiled_code: Optional[str] = None, extra_ctes_injected: bool = False, extra_ctes: List[dbt.contracts.graph.nodes.InjectedCTE] = <factory>, _pre_injected_sql: Optional[str] = None)"
},
"SnapshotConfig": {
"type": "object",
@@ -2837,7 +2885,6 @@
"SeedNode": {
"type": "object",
"required": [
"database",
"schema",
"name",
"resource_type",
@@ -2851,7 +2898,14 @@
],
"properties": {
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -2976,7 +3030,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.987447
"default": 1675693198.608646
},
"config_call_dict": {
"type": "object",
@@ -3005,10 +3059,16 @@
"type": "null"
}
]
},
"depends_on": {
"$ref": "#/definitions/MacroDependsOn",
"default": {
"macros": []
}
}
},
"additionalProperties": false,
"description": "SeedNode(database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.SeedConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', root_path: Optional[str] = None)"
"description": "SeedNode(database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], alias: str, checksum: dbt.contracts.files.FileHash, config: dbt.contracts.graph.model_config.SeedConfig = <factory>, _event_status: Dict[str, Any] = <factory>, tags: List[str] = <factory>, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, build_path: Optional[str] = None, deferred: bool = False, unrendered_config: Dict[str, Any] = <factory>, created_at: float = <factory>, config_call_dict: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, raw_code: str = '', root_path: Optional[str] = None, depends_on: dbt.contracts.graph.nodes.MacroDependsOn = <factory>)"
},
"SeedConfig": {
"type": "object",
@@ -3175,10 +3235,24 @@
"additionalProperties": true,
"description": "SeedConfig(_extra: Dict[str, Any] = <factory>, enabled: bool = True, alias: Optional[str] = None, schema: Optional[str] = None, database: Optional[str] = None, tags: Union[List[str], str] = <factory>, meta: Dict[str, Any] = <factory>, materialized: str = 'seed', incremental_strategy: Optional[str] = None, persist_docs: Dict[str, Any] = <factory>, post_hook: List[dbt.contracts.graph.model_config.Hook] = <factory>, pre_hook: List[dbt.contracts.graph.model_config.Hook] = <factory>, quoting: Dict[str, Any] = <factory>, column_types: Dict[str, Any] = <factory>, full_refresh: Optional[bool] = None, unique_key: Union[str, List[str], NoneType] = None, on_schema_change: Optional[str] = 'ignore', grants: Dict[str, Any] = <factory>, packages: List[str] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, quote_columns: Optional[bool] = None)"
},
"MacroDependsOn": {
"type": "object",
"required": [],
"properties": {
"macros": {
"type": "array",
"items": {
"type": "string"
},
"default": []
}
},
"additionalProperties": false,
"description": "Used only in the Macro class"
},
"SourceDefinition": {
"type": "object",
"required": [
"database",
"schema",
"name",
"resource_type",
@@ -3194,7 +3268,14 @@
],
"properties": {
"database": {
"type": "string"
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"schema": {
"type": "string"
@@ -3335,11 +3416,11 @@
},
"created_at": {
"type": "number",
"default": 1670902215.989922
"default": 1675693198.610521
}
},
"additionalProperties": false,
"description": "SourceDefinition(database: str, schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], source_name: str, source_description: str, loader: str, identifier: str, _event_status: Dict[str, Any] = <factory>, quoting: dbt.contracts.graph.unparsed.Quoting = <factory>, loaded_at_field: Optional[str] = None, freshness: Optional[dbt.contracts.graph.unparsed.FreshnessThreshold] = None, external: Optional[dbt.contracts.graph.unparsed.ExternalTable] = None, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, source_meta: Dict[str, Any] = <factory>, tags: List[str] = <factory>, config: dbt.contracts.graph.model_config.SourceConfig = <factory>, patch_path: Optional[str] = None, unrendered_config: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, created_at: float = <factory>)"
"description": "SourceDefinition(database: Optional[str], schema: str, name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], source_name: str, source_description: str, loader: str, identifier: str, _event_status: Dict[str, Any] = <factory>, quoting: dbt.contracts.graph.unparsed.Quoting = <factory>, loaded_at_field: Optional[str] = None, freshness: Optional[dbt.contracts.graph.unparsed.FreshnessThreshold] = None, external: Optional[dbt.contracts.graph.unparsed.ExternalTable] = None, description: str = '', columns: Dict[str, dbt.contracts.graph.nodes.ColumnInfo] = <factory>, meta: Dict[str, Any] = <factory>, source_meta: Dict[str, Any] = <factory>, tags: List[str] = <factory>, config: dbt.contracts.graph.model_config.SourceConfig = <factory>, patch_path: Optional[str] = None, unrendered_config: Dict[str, Any] = <factory>, relation_name: Optional[str] = None, created_at: float = <factory>)"
},
"Quoting": {
"type": "object",
@@ -3445,12 +3526,12 @@
},
"dbt_version": {
"type": "string",
"default": "1.4.0a1"
"default": "1.5.0a1"
},
"generated_at": {
"type": "string",
"format": "date-time",
"default": "2022-12-13T03:30:15.961825Z"
"default": "2023-02-06T14:19:58.590211Z"
},
"invocation_id": {
"oneOf": [
@@ -3461,7 +3542,7 @@
"type": "null"
}
],
"default": "4f2b967b-7e02-46de-a7ea-268a05e3fab1"
"default": "1dc2da24-242b-43e7-a6c8-0385fcd8ff50"
},
"env": {
"type": "object",
@@ -3472,7 +3553,7 @@
}
},
"additionalProperties": false,
"description": "FreshnessMetadata(dbt_schema_version: str = <factory>, dbt_version: str = '1.4.0a1', generated_at: datetime.datetime = <factory>, invocation_id: Optional[str] = <factory>, env: Dict[str, str] = <factory>)"
"description": "FreshnessMetadata(dbt_schema_version: str = <factory>, dbt_version: str = '1.5.0a1', generated_at: datetime.datetime = <factory>, invocation_id: Optional[str] = <factory>, env: Dict[str, str] = <factory>)"
},
"SourceFreshnessRuntimeError": {
"type": "object",
@@ -3814,7 +3895,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.990816
"default": 1675693198.611094
},
"supported_languages": {
"oneOf": [
@@ -3837,21 +3918,6 @@
"additionalProperties": false,
"description": "Macro(name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, macro_sql: str, depends_on: dbt.contracts.graph.nodes.MacroDependsOn = <factory>, description: str = '', meta: Dict[str, Any] = <factory>, docs: dbt.contracts.graph.unparsed.Docs = <factory>, patch_path: Optional[str] = None, arguments: List[dbt.contracts.graph.unparsed.MacroArgument] = <factory>, created_at: float = <factory>, supported_languages: Optional[List[dbt.node_types.ModelLanguage]] = None)"
},
"MacroDependsOn": {
"type": "object",
"required": [],
"properties": {
"macros": {
"type": "array",
"items": {
"type": "string"
},
"default": []
}
},
"additionalProperties": false,
"description": "Used only in the Macro class"
},
"MacroArgument": {
"type": "object",
"required": [
@@ -4072,7 +4138,7 @@
},
"created_at": {
"type": "number",
"default": 1670902215.993354
"default": 1675693198.6123588
}
},
"additionalProperties": false,
@@ -4126,7 +4192,6 @@
"description",
"label",
"calculation_method",
"timestamp",
"expression",
"filters",
"time_grains",
@@ -4169,9 +4234,6 @@
"calculation_method": {
"type": "string"
},
"timestamp": {
"type": "string"
},
"expression": {
"type": "string"
},
@@ -4193,6 +4255,16 @@
"type": "string"
}
},
"timestamp": {
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"window": {
"oneOf": [
{
@@ -4283,11 +4355,11 @@
},
"created_at": {
"type": "number",
"default": 1670902215.995033
"default": 1675693198.613676
}
},
"additionalProperties": false,
"description": "Metric(name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], description: str, label: str, calculation_method: str, timestamp: str, expression: str, filters: List[dbt.contracts.graph.unparsed.MetricFilter], time_grains: List[str], dimensions: List[str], window: Optional[dbt.contracts.graph.unparsed.MetricTime] = None, model: Optional[str] = None, model_unique_id: Optional[str] = None, meta: Dict[str, Any] = <factory>, tags: List[str] = <factory>, config: dbt.contracts.graph.model_config.MetricConfig = <factory>, unrendered_config: Dict[str, Any] = <factory>, sources: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, refs: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, created_at: float = <factory>)"
"description": "Metric(name: str, resource_type: dbt.node_types.NodeType, package_name: str, path: str, original_file_path: str, unique_id: str, fqn: List[str], description: str, label: str, calculation_method: str, expression: str, filters: List[dbt.contracts.graph.unparsed.MetricFilter], time_grains: List[str], dimensions: List[str], timestamp: Optional[str] = None, window: Optional[dbt.contracts.graph.unparsed.MetricTime] = None, model: Optional[str] = None, model_unique_id: Optional[str] = None, meta: Dict[str, Any] = <factory>, tags: List[str] = <factory>, config: dbt.contracts.graph.model_config.MetricConfig = <factory>, unrendered_config: Dict[str, Any] = <factory>, sources: List[List[str]] = <factory>, depends_on: dbt.contracts.graph.nodes.DependsOn = <factory>, refs: List[List[str]] = <factory>, metrics: List[List[str]] = <factory>, created_at: float = <factory>)"
},
"MetricFilter": {
"type": "object",

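Read together, the hunks above make one recurring change: database is dropped from every node type's required list and retyped from a plain string to string-or-null, matching the Optional[str] signatures in the regenerated descriptions; SeedNode additionally gains a macro-only depends_on (defaulting to {"macros": []}), and Metric.timestamp is relaxed to optional in the same way. A minimal sketch of what the new database fragment accepts, assuming the jsonschema package (the fragment below is lifted from the hunks, not a full node definition):

from jsonschema import ValidationError, validate

# The string-or-null pattern that now types `database` on every node.
DATABASE_FRAGMENT = {
    "oneOf": [
        {"type": "string"},
        {"type": "null"},
    ]
}

for candidate in ("analytics", None):
    validate(instance=candidate, schema=DATABASE_FRAGMENT)  # both pass under v8

try:
    validate(instance=123, schema=DATABASE_FRAGMENT)
except ValidationError:
    print("non-string, non-null values are rejected, as before")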
scripts/env-setup.sh Normal file

@@ -0,0 +1,6 @@
#!/bin/bash
# Set environment variables required for integration tests
echo "DBT_INVOCATION_ENV=github-actions" >> $GITHUB_ENV
echo "DBT_TEST_USER_1=dbt_test_user_1" >> $GITHUB_ENV
echo "DBT_TEST_USER_2=dbt_test_user_2" >> $GITHUB_ENV
echo "DBT_TEST_USER_3=dbt_test_user_3" >> $GITHUB_ENV

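Because the script appends to $GITHUB_ENV, the three DBT_TEST_USER_* variables become visible only to subsequent steps of the same workflow job. A hypothetical sketch of how a functional test might consume them; the fixture name and skip logic are illustrative, not dbt's actual test plumbing:

import os

import pytest


# Hypothetical fixture reading the users provisioned by scripts/env-setup.sh;
# skipping when they are absent keeps the suite runnable outside CI.
@pytest.fixture
def dbt_test_users():
    users = [os.getenv(f"DBT_TEST_USER_{i}") for i in (1, 2, 3)]
    if not all(users):
        pytest.skip("DBT_TEST_USER_* not set; see scripts/env-setup.sh")
    return users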

@@ -1,4 +0,0 @@
create table {schema}.incremental__dbt_tmp as (
select 1 as id
);

@@ -1,17 +0,0 @@
{% docs my_model_doc %}
Alt text about the model
{% enddocs %}
{% docs my_model_doc__id %}
The user ID number with alternative text
{% enddocs %}
The following doc is never used, which should be fine.
{% docs my_model_doc__first_name %}
The user's first name - don't show this text!
{% enddocs %}
This doc is referenced by its full name
{% docs my_model_doc__last_name %}
The user's last name in this other file
{% enddocs %}

@@ -1,7 +0,0 @@
{% docs my_model_doc %}
a doc string
{% enddocs %}
{% docs my_model_doc %}
duplicate doc string
{% enddocs %}

@@ -1 +0,0 @@
select 1 as id, 'joe' as first_name

@@ -1,5 +0,0 @@
version: 2
models:
- name: model
description: "{{ doc('my_model_doc') }}"

@@ -1,12 +0,0 @@
{% docs my_model_doc %}
My model is just a copy of the seed
{% enddocs %}
{% docs my_model_doc__id %}
The user ID number
{% enddocs %}
The following doc is never used, which should be fine.
{% docs my_model_doc__first_name %}
The user's first name
{% enddocs %}

@@ -1 +0,0 @@
select 1 as id, 'joe' as first_name

@@ -1,10 +0,0 @@
version: 2
models:
- name: model
description: "{{ doc('my_model_doc') }}"
columns:
- name: id
description: "{{ doc('my_model_doc__id') }}"
- name: first_name
description: "{{ doc('foo.bar.my_model_doc__id') }}"

@@ -1,7 +0,0 @@
{% docs my_model_doc %}
My model is just a copy of the seed
{% enddocs %}
{% docs my_model_doc__id %}
The user ID number
{% enddocs %}

@@ -1 +0,0 @@
select 1 as id, 'joe' as first_name

@@ -1,11 +0,0 @@
version: 2
models:
- name: model
description: "{{ doc('my_model_doc') }}"
columns:
- name: id
description: "{{ doc('my_model_doc__id') }}"
- name: first_name
# invalid reference
description: "{{ doc('my_model_doc__first_name') }}"

@@ -1,17 +0,0 @@
{% docs my_model_doc %}
My model is just a copy of the seed
{% enddocs %}
{% docs my_model_doc__id %}
The user ID number
{% enddocs %}
The following doc is never used, which should be fine.
{% docs my_model_doc__first_name %}
The user's first name (should not be shown!)
{% enddocs %}
This doc is referenced by its full name
{% docs my_model_doc__last_name %}
The user's last name
{% enddocs %}

@@ -1 +0,0 @@
select 1 as id, 'joe' as first_name, 'smith' as last_name

@@ -1,12 +0,0 @@
version: 2
models:
- name: model
description: "{{ doc('my_model_doc') }}"
columns:
- name: id
description: "{{ doc('my_model_doc__id') }}"
- name: first_name
description: The user's first name
- name: last_name
description: "{{ doc('test', 'my_model_doc__last_name') }}"

Some files were not shown because too many files have changed in this diff.