Compare commits

...

19 Commits

Author SHA1 Message Date
Jeremy Cohen
c4b75a22e0 Print cache timing 2022-03-10 12:34:20 -05:00
Emily Rockman
9a0abc1bfc Automate changelog (#4743)
* initial setup to use changie

* added `dbt-core` to version line

* fix formatting

* rename to be more accurate

* remove extra file

* add stub for contributing section

* updated docs for contributing and changelog

* first pass at changelog check

* Fix workflow name

* comment on handling failure

* add automatic contributors section via footer

* removed unused initialization

* add script to automate entire changelog creation and handle prereleases

* stub out README

* add changelog entry!

* no longer need to add contributors ourselves

* fixed formatting and excluded core team

* fix typo and collapse if statement

* updated to reflect automatic pre-release handling

Removed custom script in favor of built-in pre-release functionality in the new version of changie.

* update contributing doc

* pass at GHA

* fix path

* all changed files

* more GHA work

* continued GHA work

* try another approach

* testing

* adding comment via GHA

* added uses for GHA

* more debugging

* fixed formatting

* another comment attempt

* remove read permission

* add label check

* fix quotes

* checking label logic

* test forcing failure

* remove extra script tag

* removed logic for having changelog

* Revert "removed logic for having changelog"

This reverts commit 490bda8256.

* remove unused workflow section

* update header and readme

* update with current version of changelog

* add step failure for missing changelog file

* fix typos and formatting

* small tweaks per feedback

* Update so changelog ends up only with current version, not past

* update changelog to recent contents

* added the rest of our releases to previous release list

* clarifying the readme

* updated to reflect current changelog state

* updated so only 1.1 changes are on main
2022-03-07 20:12:33 -06:00
Gerda Shank
490d68e076 Switch to using class scope fixtures (#4835)
* Switch to using class scope fixtures

* Reorganize some graph selection tests because of ci errors
2022-03-07 14:38:36 -05:00
Stu Kilgore
c45147fe6d Fix macro modified from previous state (#4820)
* Fix macro modified from previous state

Previously, if the first node selected by state:modified had multiple
dependencies, the first of which had not been changed, the rest of the
macro dependencies of the node would not be checked for changes. This
commit fixes this behavior, so the remainder of the macro dependencies
of the node will be checked as well.
2022-03-07 08:23:59 -06:00
Gerda Shank
bc3468e649 Convert tests in dbt-adapter-tests to use new pytest framework (#4815)
* Convert tests in dbt-adapter-tests to use new pytest framework

* Filter out ResourceWarning for log file

* Move run_sql to dbt.tests.util, fix check_cols definition

* Convert jaffle_shop fixture and test to use classes

* Tweak run_sql methods, rename some adapter file pieces, add comment
to dbt.tests.adapter.

* Add some more comments
2022-03-03 16:53:41 -05:00
Kyle Wigley
8fff6729a2 simplify and cleanup gha workflow (#4803) 2022-03-02 10:21:39 -05:00
varun-dc
08f50acb9e Fix stdout piped colored output on MacOS and Linux (#4792)
* Fix stdout pipe output coloring

* Update CHANGELOG.md

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>

Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
2022-03-01 17:23:51 -05:00
Chenyu Li
436a5f5cd4 add coverage (#4791) 2022-02-28 09:17:33 -05:00
Emily Rockman
aca710048f ct-237 test conversion 002_varchar_widening_tests (#4795)
* convert 002 integration test

* remove original test

* moved varchar test under basic folder
2022-02-25 14:25:22 -06:00
Emily Rockman
673ad50e21 updated index file to fix DAG errors for operations & work around null columns (#4763)
* updated index file to fix DAG errors for operations

* update index file to reflect dbt-docs fixes

* add changelog
2022-02-25 13:02:26 -06:00
Chenyu Li
8ee86a61a0 rewrite graph selection (#4783)
* rewrite graph selection
2022-02-25 12:09:11 -05:00
Gerda Shank
0dda0a90cf Fix errors on Windows tests in new tests/functional (#4767)
* [#4781] Convert reads and writes in project fixture to text/utf-8 encoding

* Switch to using write_file and read_file functions

* Add comment
2022-02-25 11:13:15 -05:00
Gerda Shank
220d8b888c Fix "dbt found two resources" error with multiple snapshot blocks in one file (#4773)
* Fix handling of multiple snapshot blocks in partial parsing

* Update tests for partial parsing snapshots
2022-02-25 10:54:07 -05:00
dependabot[bot]
42d5812577 Bump black from 21.12b0 to 22.1.0 (#4718)
Bumps [black](https://github.com/psf/black) from 21.12b0 to 22.1.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/commits/22.1.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-24 13:28:23 -05:00
Ian Knox
dea4f5f8ff update flake8 to remove line length req (#4779) 2022-02-24 11:22:25 -06:00
Dmytro Kazanzhy
8f50eee330 Fixed misspellings, typos, and duplicated words (#4545) 2022-02-22 18:05:43 -05:00
Gerda Shank
8fd8dfcf74 Initial pass at switching integration tests to pytest (#4691)
Author: Emily Rockman <emily.rockman@dbtlabs.com>
    route logs to dbt-core/logs instead of each test folder (#4711)

 * Initial pass at switching integration tests to pytest

* Reorganize dbt.tests.tables. Cleanup adapter handling

* Move run_sql to TestProjInfo and TableComparison.
Add comments, cleanup adapter schema setup

* Tweak unique_schema name generation

* Update CHANGELOG.md
2022-02-22 15:34:14 -05:00
Hein Bekker
10b27b9633 Deduplicate postgres relations (#3058) (#4521)
* Deduplicate postgres relations (#3058)

* Add changelog entry for #3058, #4521
2022-02-21 16:48:15 -06:00
Gerda Shank
5808ee6dd7 Fix bug accessing target in deps and clean commands (#4758)
* Create DictDefaultNone for to_target_dict in deps and clean commands

* Update test case to handle

* update CHANGELOG.md

* Switch to DictDefaultEmptyStr for to_target_dict
2022-02-21 13:26:29 -05:00
167 changed files with 5196 additions and 5865 deletions

.changes/0.0.0.md Normal file

@@ -0,0 +1,16 @@
## Previous Releases
For information on prior major and minor releases, see their changelogs:
* [1.0](https://github.com/dbt-labs/dbt-core/blob/1.0.latest/CHANGELOG.md)
* [0.21](https://github.com/dbt-labs/dbt-core/blob/0.21.latest/CHANGELOG.md)
* [0.20](https://github.com/dbt-labs/dbt-core/blob/0.20.latest/CHANGELOG.md)
* [0.19](https://github.com/dbt-labs/dbt-core/blob/0.19.latest/CHANGELOG.md)
* [0.18](https://github.com/dbt-labs/dbt-core/blob/0.18.latest/CHANGELOG.md)
* [0.17](https://github.com/dbt-labs/dbt-core/blob/0.17.latest/CHANGELOG.md)
* [0.16](https://github.com/dbt-labs/dbt-core/blob/0.16.latest/CHANGELOG.md)
* [0.15](https://github.com/dbt-labs/dbt-core/blob/0.15.latest/CHANGELOG.md)
* [0.14](https://github.com/dbt-labs/dbt-core/blob/0.14.latest/CHANGELOG.md)
* [0.13](https://github.com/dbt-labs/dbt-core/blob/0.13.latest/CHANGELOG.md)
* [0.12](https://github.com/dbt-labs/dbt-core/blob/0.12.latest/CHANGELOG.md)
* [0.11 and earlier](https://github.com/dbt-labs/dbt-core/blob/0.11.latest/CHANGELOG.md)

.changes/1.0.1.md Normal file

@@ -0,0 +1,31 @@
## dbt-core 1.1.0 (TBD)
### Features
- Added Support for Semantic Versioning ([#4644](https://github.com/dbt-labs/dbt-core/pull/4644))
- New Dockerfile to support specific db adapters and platforms. See docker/README.md for details ([#4495](https://github.com/dbt-labs/dbt-core/issues/4495), [#4487](https://github.com/dbt-labs/dbt-core/pull/4487))
- Allow unique_key to take a list ([#2479](https://github.com/dbt-labs/dbt-core/issues/2479), [#4618](https://github.com/dbt-labs/dbt-core/pull/4618))
- Add `--quiet` global flag and `print` Jinja function ([#3451](https://github.com/dbt-labs/dbt-core/issues/3451), [#4701](https://github.com/dbt-labs/dbt-core/pull/4701))
### Fixes
- User wasn't asked for permission to overwrite a profile entry when running init inside an existing project ([#4375](https://github.com/dbt-labs/dbt-core/issues/4375), [#4447](https://github.com/dbt-labs/dbt-core/pull/4447))
- Add project name validation to `dbt init` ([#4490](https://github.com/dbt-labs/dbt-core/issues/4490),[#4536](https://github.com/dbt-labs/dbt-core/pull/4536))
- Allow override of string and numeric types for adapters. ([#4603](https://github.com/dbt-labs/dbt-core/issues/4603))
- A change in secret environment variables won't trigger a full reparse ([#4650](https://github.com/dbt-labs/dbt-core/issues/4650), [#4665](https://github.com/dbt-labs/dbt-core/pull/4665))
- Fix misspellings and typos in docstrings ([#4545](https://github.com/dbt-labs/dbt-core/pull/4545))
### Under the hood
- Testing cleanup ([#4496](https://github.com/dbt-labs/dbt-core/pull/4496), [#4509](https://github.com/dbt-labs/dbt-core/pull/4509))
- Clean up test deprecation warnings ([#3988](https://github.com/dbt-labs/dbt-core/issues/3988), [#4556](https://github.com/dbt-labs/dbt-core/pull/4556))
- Use mashumaro for serialization in event logging ([#4504](https://github.com/dbt-labs/dbt-core/issues/4504), [#4505](https://github.com/dbt-labs/dbt-core/pull/4505))
- Drop support for Python 3.7.0 + 3.7.1 ([#4584](https://github.com/dbt-labs/dbt-core/issues/4584), [#4585](https://github.com/dbt-labs/dbt-core/pull/4585), [#4643](https://github.com/dbt-labs/dbt-core/pull/4643))
- Re-format codebase (except tests) using pre-commit hooks ([#3195](https://github.com/dbt-labs/dbt-core/issues/3195), [#4697](https://github.com/dbt-labs/dbt-core/pull/4697))
- Add deps module README ([#4686](https://github.com/dbt-labs/dbt-core/pull/4686/))
- Initial conversion of tests to pytest ([#4690](https://github.com/dbt-labs/dbt-core/issues/4690), [#4691](https://github.com/dbt-labs/dbt-core/pull/4691))
- Fix errors in Windows for tests/functional ([#4781](https://github.com/dbt-labs/dbt-core/issues/4781), [#4767](https://github.com/dbt-labs/dbt-core/pull/4767))
Contributors:
- [@NiallRees](https://github.com/NiallRees) ([#4447](https://github.com/dbt-labs/dbt-core/pull/4447))
- [@alswang18](https://github.com/alswang18) ([#4644](https://github.com/dbt-labs/dbt-core/pull/4644))
- [@emartens](https://github.com/ehmartens) ([#4701](https://github.com/dbt-labs/dbt-core/pull/4701))
- [@mdesmet](https://github.com/mdesmet) ([#4604](https://github.com/dbt-labs/dbt-core/pull/4604))
- [@kazanzhy](https://github.com/kazanzhy) ([#4545](https://github.com/dbt-labs/dbt-core/pull/4545))

.changes/README.md Normal file

@@ -0,0 +1,40 @@
# CHANGELOG Automation
We use [changie](https://changie.dev/) to automate `CHANGELOG` generation. For installation and format/command specifics, see the documentation.
### Quick Tour
- All new change entries get generated under `/.changes/unreleased` as a yaml file
- `header.tpl.md` contains the contents of the entire CHANGELOG file
- `0.0.0.md` contains the footer for the entire CHANGELOG file. changie appears to be adding support for a dedicated footer file, analogous to its header file; switch to that once it is available. For now, the 0.0.0 in the file name forces this content to the bottom of the changelog no matter which version we are releasing.
- `.changie.yaml` contains the fields in a change, the format of a single change, as well as the format of the Contributors section for each version.
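For orientation, the layout these files produce (based on what is added in this PR; the entry file name is illustrative) looks roughly like:
```
.changie.yaml                       # changie configuration: kinds, formats, footer template
.changes/
├── header.tpl.md                   # changelog header
├── 0.0.0.md                        # "Previous Releases" footer content
├── 1.0.1.md                        # batched changes for a (pre)release
├── README.md                       # this automation guide
└── unreleased/
    └── <kind>-<timestamp>.yaml     # one generated entry per change
```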
### Workflow
#### Daily workflow
Almost every code change we make that is associated with an issue requires a `CHANGELOG` entry. After you have created the PR in GitHub, run `changie new` and follow the prompts to generate a yaml file with your change details. This only needs to be done once per PR.
The `changie new` command ensures the correct file format and file name. There is a one-to-one mapping of issues to changes; multiple issues cannot be lumped into a single entry. If you make a mistake, the yaml file may be edited directly and saved, as long as the format is preserved.
Note: If your PR has been cleared by the Core Team as not needing a changelog entry, the `Skip Changelog` label may be put on the PR to bypass the GitHub action that blocks PRs from being merged when they are missing a `CHANGELOG` entry.
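As an illustration, the entry added by this PR (shown later in this diff) looks like:
```
kind: Under the Hood
body: Automate changelog generation with changie
time: 2022-02-18T16:13:19.882436-06:00
custom:
  Author: emmyoop
  Issue: "4652"
  PR: "4743"
```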
#### Prerelease Workflow
These commands batch up the changes in `/.changes/unreleased` to be included in this prerelease and move those files to a directory named for the release version. The directory given to `--move-dir` is created under `/.changes` if it does not already exist.
```
changie batch <version> --move-dir '<version>' --prerelease 'rc1'
changie merge
```
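For example, to cut a first release candidate for a hypothetical 1.1.0 release:
```
changie batch 1.1.0 --move-dir '1.1.0' --prerelease 'rc1'
changie merge
```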
#### Final Release Workflow
These commands batch up changes in `/.changes/unreleased` as well as `/.changes/<version>` to be included in this final release and delete all prereleases. This rolls all prereleases up into a single final release. All `yaml` files in `/unreleased` and `<version>` will be deleted at this point.
```
changie batch <version> --include '<version>' --remove-prereleases
changie merge
```
### A Note on Manual Edits & Gotchas
- Changie generates markdown files in the `.changes` directory that are parsed together with the `changie merge` command. Every time `changie merge` is run, it regenerates the entire file. For this reason, any changes made directly to `CHANGELOG.md` will be overwritten on the next run of `changie merge`.
- If changes need to be made to the `CHANGELOG.md`, make the changes to the relevant `<version>.md` file located in the `/.changes` directory. You will then run `changie merge` to regenerate the `CHANGELOG.md`.
- Do not run `changie batch` again on released versions. Our final release workflow deletes all of the yaml files associated with individual changes. If for some reason modifications to the `CHANGELOG.md` are required after we've generated the final release `CHANGELOG.md`, the modifications need to be done manually to the `<version>.md` file in the `/.changes` directory.

.changes/header.tpl.md Executable file

@@ -0,0 +1,6 @@
# dbt Core Changelog
- This file provides a full account of all changes to `dbt-core` and `dbt-postgres`
- Changes are listed under the (pre)release in which they first appear. Subsequent releases include changes from previous releases.
- "Breaking changes" listed under a version may require action from end users or external maintainers when upgrading to that version.
- Do not edit this file directly. This file is auto-generated using [changie](https://github.com/miniscruff/changie). For details on how to document a change, see [the contributing guide](CONTRIBUTING.md)


@@ -0,0 +1,7 @@
kind: Under the Hood
body: Automate changelog generation with changie
time: 2022-02-18T16:13:19.882436-06:00
custom:
Author: emmyoop
Issue: "4652"
PR: "4743"

.changie.yaml Executable file

@@ -0,0 +1,50 @@
changesDir: .changes
unreleasedDir: unreleased
headerPath: header.tpl.md
versionHeaderPath: ""
changelogPath: CHANGELOG.md
versionExt: md
versionFormat: '## dbt-core {{.Version}} - {{.Time.Format "January 02, 2006"}}'
kindFormat: '### {{.Kind}}'
changeFormat: '- {{.Body}} ([#{{.Custom.Issue}}](https://github.com/dbt-labs/dbt-core/issues/{{.Custom.Issue}}), [#{{.Custom.PR}}](https://github.com/dbt-labs/dbt-core/pull/{{.Custom.PR}}))'
kinds:
- label: Fixes
- label: Features
- label: Under the Hood
- label: Breaking Changes
- label: Docs
- label: Dependencies
custom:
- key: Author
label: GitHub Name
type: string
minLength: 3
- key: Issue
label: GitHub Issue Number
type: int
minLength: 4
- key: PR
label: GitHub Pull Request Number
type: int
minLength: 4
footerFormat: |
Contributors:
{{- $contributorDict := dict }}
{{- $core_team := list "emmyoop" "nathaniel-may" "gshank" "leahwicz" "ChenyuLInx" "stu-k" "iknox-fa" "VersusFacit" "McKnight-42" "jtcohen6" }}
{{- range $change := .Changes }}
{{- $author := $change.Custom.Author }}
{{- if not (has $author $core_team)}}
{{- $pr := $change.Custom.PR }}
{{- if hasKey $contributorDict $author }}
{{- $prList := get $contributorDict $author }}
{{- $prList = append $prList $pr }}
{{- $contributorDict := set $contributorDict $author $prList }}
{{- else }}
{{- $prList := list $change.Custom.PR }}
{{- $contributorDict := set $contributorDict $author $prList }}
{{- end }}
{{- end}}
{{- end }}
{{- range $k,$v := $contributorDict }}
- [{{$k}}](https://github.com/{{$k}}) ({{ range $index, $element := $v }}{{if $index}}, {{end}}[#{{$element}}](https://github.com/dbt-labs/dbt-core/pull/{{$element}}){{end}})
{{- end }}
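# For illustration only (not part of this config): given the entries in this PR,
# the footer template above renders a section like the following in the changelog:
#   Contributors:
#   - [@kazanzhy](https://github.com/kazanzhy) ([#4545](https://github.com/dbt-labs/dbt-core/pull/4545))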


@@ -8,5 +8,5 @@ ignore =
W504
E203 # makes Flake8 work like black
E741
max-line-length = 99
E501 # long line checking is done in black
exclude = test


@@ -18,4 +18,4 @@ resolves #
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have updated the `CHANGELOG.md` and added information about my change
- [ ] I have added information about my change to be included in the [CHANGELOG](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#Adding-CHANGELOG-Entry).


@@ -1,95 +0,0 @@
module.exports = ({ context }) => {
const defaultPythonVersion = "3.8";
const supportedPythonVersions = ["3.7", "3.8", "3.9"];
const supportedAdapters = ["postgres"];
// if PR, generate matrix based on files changed and PR labels
if (context.eventName.includes("pull_request")) {
// `changes` is a list of adapter names that have related
// file changes in the PR
// ex: ['postgres', 'snowflake']
const changes = JSON.parse(process.env.CHANGES);
const labels = context.payload.pull_request.labels.map(({ name }) => name);
console.log("labels", labels);
console.log("changes", changes);
const testAllLabel = labels.includes("test all");
const include = [];
for (const adapter of supportedAdapters) {
if (
changes.includes(adapter) ||
testAllLabel ||
labels.includes(`test ${adapter}`)
) {
for (const pythonVersion of supportedPythonVersions) {
if (
pythonVersion === defaultPythonVersion ||
labels.includes(`test python${pythonVersion}`) ||
testAllLabel
) {
// always run tests on ubuntu by default
include.push({
os: "ubuntu-latest",
adapter,
"python-version": pythonVersion,
});
if (labels.includes("test windows") || testAllLabel) {
include.push({
os: "windows-latest",
adapter,
"python-version": pythonVersion,
});
}
if (labels.includes("test macos") || testAllLabel) {
include.push({
os: "macos-latest",
adapter,
"python-version": pythonVersion,
});
}
}
}
}
}
console.log("matrix", { include });
return {
include,
};
}
// if not PR, generate matrix of python version, adapter, and operating
// system to run integration tests on
const include = [];
// run for all adapters and python versions on ubuntu
for (const adapter of supportedAdapters) {
for (const pythonVersion of supportedPythonVersions) {
include.push({
os: 'ubuntu-latest',
adapter: adapter,
"python-version": pythonVersion,
});
}
}
// additionally include runs for all adapters, on macos and windows,
// but only for the default python version
for (const adapter of supportedAdapters) {
for (const operatingSystem of ["windows-latest", "macos-latest"]) {
include.push({
os: operatingSystem,
adapter: adapter,
"python-version": defaultPythonVersion,
});
}
}
console.log("matrix", { include });
return {
include,
};
};

.github/workflows/changelog-check.yml vendored Normal file

@@ -0,0 +1,62 @@
# **what?**
# Checks that a file has been committed under the /.changes directory
# as a new CHANGELOG entry. Cannot check for a specific filename as
# it is dynamically generated by change type and timestamp.
# This workflow should not require any secrets since it runs for PRs
# from forked repos.
# By default, secrets are not passed to workflows running from
# a forked repo.
# **why?**
# Ensure code change gets reflected in the CHANGELOG.
# **when?**
# This will run for all PRs going into main and *.latest.
name: Check Changelog Entry
on:
pull_request:
workflow_dispatch:
defaults:
run:
shell: bash
permissions:
contents: read
pull-requests: write
jobs:
changelog:
name: changelog
runs-on: ubuntu-latest
steps:
- name: Check if changelog file was added
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: filter
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
changelog:
- added: '.changes/unreleased/**.yaml'
- name: Check a file has been added to .changes/unreleased if required
uses: actions/github-script@v6
if: steps.filter.outputs.changelog == 'false' && !contains( github.event.pull_request.labels.*.name, 'Skip Changelog')
with:
script: |
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: "Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see [the contributing guide](CONTRIBUTING.md)."
})
core.setFailed('Changelog entry required to merge.')


@@ -1,222 +0,0 @@
# **what?**
# This workflow runs all integration tests for supported OS
# and python versions and core adapters. If triggered by PR,
# the workflow will only run tests for adapters related
# to code changes. Use the `test all` and `test ${adapter}`
# label to run all or additional tests. Use `ok to test`
# label to mark PRs from forked repositories that are safe
# to run integration tests for. Requires secrets to run
# against different warehouses.
# **why?**
# This checks the functionality of dbt from a user's perspective
# and attempts to catch functional regressions.
# **when?**
# This workflow will run on every push to a protected branch
# and when manually triggered. It will also run for all PRs, including
# PRs from forks. The workflow will be skipped until there is a label
# to mark the PR as safe to run.
name: Adapter Integration Tests
on:
# pushes to release branches
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
# all PRs, important to note that `pull_request_target` workflows
# will run in the context of the target branch of a PR
pull_request_target:
# manual trigger
workflow_dispatch:
# explicitly turn off permissions for `GITHUB_TOKEN`
permissions: read-all
# will cancel previous workflows triggered by the same event and for the same ref for PRs or same SHA otherwise
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ contains(github.event_name, 'pull_request') && github.event.pull_request.head.ref || github.sha }}
cancel-in-progress: true
# sets default shell to bash, for all operating systems
defaults:
run:
shell: bash
jobs:
# generate test metadata about what files changed and the testing matrix to use
test-metadata:
# run if not a PR from a forked repository or has a label to mark as safe to test
if: >-
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.generate-matrix.outputs.result }}
steps:
- name: Check out the repository (non-PR)
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Check if relevant files changed
# https://github.com/marketplace/actions/paths-changes-filter
# For each filter, it sets output variable named by the filter to the text:
# 'true' - if any of changed files matches any of filter rules
# 'false' - if none of changed files matches any of filter rules
# also, returns:
# `changes` - JSON array with names of all filters matching any of the changed files
uses: dorny/paths-filter@v2
id: get-changes
with:
token: ${{ secrets.GITHUB_TOKEN }}
filters: |
postgres:
- 'core/**'
- 'plugins/postgres/**'
- 'dev-requirements.txt'
- name: Generate integration test matrix
id: generate-matrix
uses: actions/github-script@v4
env:
CHANGES: ${{ steps.get-changes.outputs.changes }}
with:
script: |
const script = require('./.github/scripts/integration-test-matrix.js')
const matrix = script({ context })
console.log(matrix)
return matrix
test:
name: ${{ matrix.adapter }} / python ${{ matrix.python-version }} / ${{ matrix.os }}
# run if not a PR from a forked repository or has a label to mark as safe to test
# also checks that the matrix generated is not empty
if: >-
needs.test-metadata.outputs.matrix &&
fromJSON( needs.test-metadata.outputs.matrix ).include[0] &&
(
github.event_name != 'pull_request_target' ||
github.event.pull_request.head.repo.full_name == github.repository ||
contains(github.event.pull_request.labels.*.name, 'ok to test')
)
runs-on: ${{ matrix.os }}
needs: test-metadata
strategy:
fail-fast: false
matrix: ${{ fromJSON(needs.test-metadata.outputs.matrix) }}
env:
TOXENV: integration-${{ matrix.adapter }}
PYTEST_ADDOPTS: "-v --color=yes -n4 --csv integration_results.csv"
DBT_INVOCATION_ENV: github-actions
steps:
- name: Check out the repository
if: github.event_name != 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
# explicity checkout the branch for the PR,
# this is necessary for the `pull_request_target` event
- name: Check out the repository (PR)
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
with:
persist-credentials: false
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Set up postgres (linux)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Linux'
uses: ./.github/actions/setup-postgres-linux
- name: Set up postgres (macos)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'macOS'
uses: ./.github/actions/setup-postgres-macos
- name: Set up postgres (windows)
if: |
matrix.adapter == 'postgres' &&
runner.os == 'Windows'
uses: ./.github/actions/setup-postgres-windows
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install tox
pip --version
tox --version
- name: Run tox (postgres)
if: matrix.adapter == 'postgres'
run: tox
- uses: actions/upload-artifact@v2
if: always()
with:
name: logs
path: ./logs
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y-%m-%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: integration_results_${{ matrix.python-version }}_${{ matrix.os }}_${{ matrix.adapter }}-${{ steps.date.outputs.date }}.csv
path: integration_results.csv
require-label-comment:
runs-on: ubuntu-latest
needs: test
permissions:
pull-requests: write
steps:
- name: Needs permission PR comment
if: >-
needs.test.result == 'skipped' &&
github.event_name == 'pull_request_target' &&
github.event.pull_request.head.repo.full_name != github.repository
uses: unsplash/comment-on-pr@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
msg: |
"You do not have permissions to run integration tests, @dbt-labs/core "\
"needs to label this PR with `ok to test` in order to run integration tests!"
check_for_duplicate_msg: true


@@ -1,9 +1,8 @@
# **what?**
# Runs code quality checks, unit tests, and verifies python build on
# all code commited to the repository. This workflow should not
# require any secrets since it runs for PRs from forked repos.
# By default, secrets are not passed to workflows running from
# a forked repo.
# Runs code quality checks, unit tests, integration tests and
# verifies python build on all code committed to the repository. This workflow
# should not require any secrets since it runs for PRs from forked repos. By
# default, secrets are not passed to workflows running from forked repos.
# **why?**
# Ensure code for dbt meets a certain quality standard.
@@ -18,7 +17,6 @@ on:
push:
branches:
- "main"
- "develop"
- "*.latest"
- "releases/*"
pull_request:
@@ -44,8 +42,6 @@ jobs:
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
@@ -53,12 +49,12 @@ jobs:
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install pre-commit
pip install mypy==0.782
pip install -r editable-requirements.txt
pip --version
pip install pre-commit
pre-commit --version
pip install mypy==0.782
mypy --version
pip install -r editable-requirements.txt
dbt --version
- name: Run pre-commit hooks
@@ -81,8 +77,6 @@ jobs:
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
@@ -92,8 +86,8 @@ jobs:
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install tox
pip --version
pip install tox
tox --version
- name: Run tox
@@ -110,6 +104,75 @@ jobs:
name: unit_results_${{ matrix.python-version }}-${{ steps.date.outputs.date }}.csv
path: unit_results.csv
integration:
name: integration test / python ${{ matrix.python-version }} / ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: [3.7, 3.8, 3.9]
os: [ubuntu-latest]
include:
- python-version: 3.8
os: windows-latest
- python-version: 3.8
os: macos-latest
env:
TOXENV: integration
PYTEST_ADDOPTS: "-v --color=yes -n4 --csv integration_results.csv"
DBT_INVOCATION_ENV: github-actions
steps:
- name: Check out the repository
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Set up postgres (linux)
if: runner.os == 'Linux'
uses: ./.github/actions/setup-postgres-linux
- name: Set up postgres (macos)
if: runner.os == 'macOS'
uses: ./.github/actions/setup-postgres-macos
- name: Set up postgres (windows)
if: runner.os == 'Windows'
uses: ./.github/actions/setup-postgres-windows
- name: Install python tools
run: |
pip install --user --upgrade pip
pip --version
pip install tox
tox --version
- name: Run tests
run: tox
- name: Get current date
if: always()
id: date
run: echo "::set-output name=date::$(date +'%Y_%m_%dT%H_%M_%S')" #no colons allowed for artifacts
- uses: actions/upload-artifact@v2
if: always()
with:
name: logs_${{ matrix.python-version }}_${{ matrix.os }}_${{ steps.date.outputs.date }}
path: ./logs
- uses: actions/upload-artifact@v2
if: always()
with:
name: integration_results_${{ matrix.python-version }}_${{ matrix.os }}_${{ steps.date.outputs.date }}.csv
path: integration_results.csv
build:
name: build packages
@@ -118,8 +181,6 @@ jobs:
steps:
- name: Check out the repository
uses: actions/checkout@v2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v2
@@ -146,44 +207,6 @@ jobs:
run: |
check-wheel-contents dist/*.whl --ignore W007,W008
- uses: actions/upload-artifact@v2
with:
name: dist
path: dist/
test-build:
name: verify packages / python ${{ matrix.python-version }} / ${{ matrix.os }}
needs: build
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.7, 3.8, 3.9]
steps:
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip install --upgrade wheel
pip --version
- uses: actions/download-artifact@v2
with:
name: dist
path: dist/
- name: Show distributions
run: ls -lh dist/
- name: Install wheel distributions
run: |
find ./dist/*.whl -maxdepth 1 -type f | xargs pip install --force-reinstall --find-links=dist/


@@ -6,7 +6,6 @@
# version of our structured logging and add new documentation to
# communicate these changes.
name: Structured Logging Schema Check
on:
push:
@@ -30,9 +29,8 @@ jobs:
# points tests to the log file
LOG_DIR: "/home/runner/work/dbt-core/dbt-core/logs"
# tells integration tests to output into json format
DBT_LOG_FORMAT: 'json'
DBT_LOG_FORMAT: "json"
steps:
- name: checkout dev
uses: actions/checkout@v2
with:
@@ -49,8 +47,12 @@ jobs:
toolchain: stable
override: true
- name: install dbt
run: pip install -r dev-requirements.txt -r editable-requirements.txt
- name: Install python dependencies
run: |
pip install --user --upgrade pip
pip --version
pip install tox
tox --version
- name: Set up postgres
uses: ./.github/actions/setup-postgres-linux
@@ -61,7 +63,7 @@ jobs:
# integration tests generate a ton of logs in different files. the next step will find them all.
# we actually care if these pass, because the normal test run doesn't usually include many json log outputs
- name: Run integration tests
run: tox -e py38-postgres -- -nauto
run: tox -e integration -- -nauto
# apply our schema tests to every log event from the previous step
# skips any output that isn't valid json

CHANGELOG.md Normal file → Executable file

File diff suppressed because it is too large


@@ -219,6 +219,15 @@ python -m pytest test/unit/test_graph.py::GraphTest::test__dependency_list
```
> [Here](https://docs.pytest.org/en/reorganize-docs/new-docs/user/commandlineuseful.html)
> is a list of useful command-line options for `pytest` to use while developing.
## Adding CHANGELOG Entry
We use [changie](https://changie.dev) to generate `CHANGELOG` entries. Do not edit the `CHANGELOG.md` directly. Your modifications will be lost.
Follow the steps to [install `changie`](https://changie.dev/guide/installation/) for your system.
Once changie is installed and your PR is created, simply run `changie new` and changie will walk you through the process of creating a changelog entry. Commit the file that's created and your changelog entry is complete!
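For example (the generated file name will differ, since it is built from the change kind and a timestamp):
```
changie new
git add .changes/unreleased/<kind>-<timestamp>.yaml
git commit -m "add changelog entry"
```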
## Submitting a Pull Request
dbt Labs provides a CI environment to test changes to specific adapters, and periodic maintenance checks of `dbt-core` through GitHub Actions. For example, if you submit a pull request to the `dbt-redshift` repo, GitHub will trigger automated code checks and tests against Redshift.


@@ -41,26 +41,20 @@ unit: .env ## Runs unit tests with py38.
.PHONY: test
test: .env ## Runs unit tests with py38 and code checks against staged changes.
@\
$(DOCKER_CMD) tox -p -e py38; \
$(DOCKER_CMD) tox -e py38; \
$(DOCKER_CMD) pre-commit run black-check --hook-stage manual | grep -v "INFO"; \
$(DOCKER_CMD) pre-commit run flake8-check --hook-stage manual | grep -v "INFO"; \
$(DOCKER_CMD) pre-commit run mypy-check --hook-stage manual | grep -v "INFO"
.PHONY: integration
integration: .env integration-postgres ## Alias for integration-postgres.
integration: .env ## Runs postgres integration tests with py38.
@\
$(DOCKER_CMD) tox -e py38-integration -- -nauto
.PHONY: integration-fail-fast
integration-fail-fast: .env integration-postgres-fail-fast ## Alias for integration-postgres-fail-fast.
.PHONY: integration-postgres
integration-postgres: .env ## Runs postgres integration tests with py38.
integration-fail-fast: .env ## Runs postgres integration tests with py38 in "fail fast" mode.
@\
$(DOCKER_CMD) tox -e py38-postgres -- -nauto
.PHONY: integration-postgres-fail-fast
integration-postgres-fail-fast: .env ## Runs postgres integration tests with py38 in "fail fast" mode.
@\
$(DOCKER_CMD) tox -e py38-postgres -- -x -nauto
$(DOCKER_CMD) tox -e py38-integration -- -x -nauto
.PHONY: setup-db
setup-db: ## Setup Postgres database with docker-compose for system testing.


@@ -3,10 +3,7 @@
</p>
<p align="center">
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
</a>
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="CI Badge"/>
</a>
</p>


@@ -3,10 +3,7 @@
</p>
<p align="center">
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
</a>
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="CI Badge"/>
</a>
</p>


@@ -177,6 +177,10 @@ def get_adapter(config: AdapterRequiredConfig):
return FACTORY.lookup_adapter(config.credentials.type)
def get_adapter_by_type(adapter_type):
return FACTORY.lookup_adapter(adapter_type)
def reset_adapters():
"""Clear the adapters. This is useful for tests, which change configs."""
FACTORY.reset_adapters()


@@ -27,7 +27,7 @@ ALTER_COLUMN_TYPE_MACRO_NAME = "alter_column_type"
class SQLAdapter(BaseAdapter):
"""The default adapter with the common agate conversions and some SQL
methods implemented. This adapter has a different much shorter list of
methods was implemented. This adapter has a different much shorter list of
methods to implement, but some more macros that must be implemented.
To implement a macro, implement "${adapter_type}__${macro_name}". in the


@@ -80,7 +80,7 @@ def table_from_rows(
def table_from_data(data, column_names: Iterable[str]) -> agate.Table:
"Convert list of dictionaries into an Agate table"
"Convert a list of dictionaries into an Agate table"
# The agate table is generated from a list of dicts, so the column order
# from `data` is not preserved. We can use `select` to reorder the columns


@@ -103,7 +103,7 @@ class NativeSandboxEnvironment(MacroFuzzEnvironment):
class TextMarker(str):
"""A special native-env marker that indicates that a value is text and is
"""A special native-env marker that indicates a value is text and is
not to be evaluated. Use this to prevent your numbery-strings from becoming
numbers!
"""
@@ -580,7 +580,7 @@ def extract_toplevel_blocks(
allowed_blocks: Optional[Set[str]] = None,
collect_raw_data: bool = True,
) -> List[Union[BlockData, BlockTag]]:
"""Extract the top level blocks with matching block types from a jinja
"""Extract the top-level blocks with matching block types from a jinja
file, with some special handling for block nesting.
:param data: The data to extract blocks from.


@@ -335,7 +335,7 @@ def _handle_posix_cmd_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
def _handle_posix_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
"""OSError handling for posix systems.
"""OSError handling for POSIX systems.
Some things that could happen to trigger an OSError:
- cwd could not exist
@@ -386,7 +386,7 @@ def _handle_windows_error(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
def _interpret_oserror(exc: OSError, cwd: str, cmd: List[str]) -> NoReturn:
"""Interpret an OSError exc and raise the appropriate dbt exception."""
"""Interpret an OSError exception and raise the appropriate dbt exception."""
if len(cmd) == 0:
raise dbt.exceptions.CommandError(cwd, cmd)
@@ -501,7 +501,7 @@ def move(src, dst):
directory on windows when it has read-only files in it and the move is
between two drives.
This is almost identical to the real shutil.move, except it uses our rmtree
This is almost identical to the real shutil.move, except it, uses our rmtree
and skips handling non-windows OSes since the existing one works ok there.
"""
src = convert_path(src)
@@ -536,7 +536,7 @@ def move(src, dst):
def rmtree(path):
"""Recursively remove path. On permissions errors on windows, try to remove
"""Recursively remove the path. On permissions errors on windows, try to remove
the read-only flag and try again.
"""
path = convert_path(path)


@@ -11,7 +11,7 @@ from .renderer import DbtProjectYamlRenderer, ProfileRenderer
from .utils import parse_cli_vars
from dbt import flags
from dbt.adapters.factory import get_relation_class_by_name, get_include_paths
from dbt.helper_types import FQNPath, PathSet
from dbt.helper_types import FQNPath, PathSet, DictDefaultEmptyStr
from dbt.config.profile import read_user_config
from dbt.contracts.connection import AdapterRequiredConfig, Credentials
from dbt.contracts.graph.manifest import ManifestMetadata
@@ -396,7 +396,7 @@ class UnsetProfile(Profile):
self.threads = -1
def to_target_dict(self):
return {}
return DictDefaultEmptyStr({})
def __getattribute__(self, name):
if name in {"profile_name", "target_name", "threads"}:
@@ -431,7 +431,7 @@ class UnsetProfileConfig(RuntimeConfig):
def to_target_dict(self):
# re-override the poisoned profile behavior
return {}
return DictDefaultEmptyStr({})
@classmethod
def from_parts(


@@ -1147,7 +1147,7 @@ class ProviderContext(ManifestContext):
class MacroContext(ProviderContext):
"""Internally, macros can be executed like nodes, with some restrictions:
- they don't have have all values available that nodes do:
- they don't have all values available that nodes do:
- 'this', 'pre_hooks', 'post_hooks', and 'sql' are missing
- 'schema' does not use any 'model' information
- they can't be configured with config() directives


@@ -104,7 +104,7 @@ class Connection(ExtensibleDbtClassMixin, Replaceable):
class LazyHandle:
"""Opener must be a callable that takes a Connection object and opens the
"""The opener must be a callable that takes a Connection object and opens the
connection, updating the handle on the Connection.
"""


@@ -453,7 +453,7 @@ T = TypeVar("T", bound=GraphMemberNode)
def _update_into(dest: MutableMapping[str, T], new_item: T):
"""Update dest to overwrite whatever is at dest[new_item.unique_id] with
new_itme. There must be an existing value to overwrite, and they two nodes
new_itme. There must be an existing value to overwrite, and the two nodes
must have the same original file path.
"""
unique_id = new_item.unique_id


@@ -389,7 +389,9 @@ class NodeConfig(NodeAndTestConfig):
metadata=MergeBehavior.Update.meta(),
)
full_refresh: Optional[bool] = None
unique_key: Optional[Union[str, List[str]]] = None
# 'unique_key' doesn't use 'Optional' because typing.get_type_hints was
# sometimes getting the Union order wrong, causing serialization failures.
unique_key: Union[str, List[str], None] = None
on_schema_change: Optional[str] = "ignore"
@classmethod
@@ -483,7 +485,8 @@ class SnapshotConfig(EmptySnapshotConfig):
target_schema: Optional[str] = None
target_database: Optional[str] = None
updated_at: Optional[str] = None
check_cols: Optional[Union[str, List[str]]] = None
# Not using Optional because of serialization issues with a Union of str and List[str]
check_cols: Union[str, List[str], None] = None
@classmethod
def validate(cls, data):


@@ -85,6 +85,7 @@ class RunStatus(StrEnum):
class TestStatus(StrEnum):
__test__ = False
Pass = NodeStatus.Pass
Error = NodeStatus.Error
Fail = NodeStatus.Fail


@@ -35,7 +35,7 @@ class DateTimeSerialization(SerializationStrategy):
# jsonschemas for every class and the 'validate' method
# come from Hologram.
class dbtClassMixin(DataClassDictMixin, JsonSchemaMixin):
"""Mixin which adds methods to generate a JSON schema and
"""The Mixin adds methods to generate a JSON schema and
convert to and from JSON encodable dicts with validation
against the schema
"""


@@ -55,19 +55,8 @@ invocation_id: Optional[str] = None
# then we should override the logging stream to use the colorama
# converter. If the TERM var is set (as with Git Bash), then it's safe
# to send escape characters and no log handler injection is needed.
colorama_stdout = sys.stdout
colorama_wrap = True
colorama.init(wrap=colorama_wrap)
if sys.platform == "win32" and not os.getenv("TERM"):
colorama_wrap = False
colorama_stdout = colorama.AnsiToWin32(sys.stdout).stream
elif sys.platform == "win32":
colorama_wrap = False
colorama.init(wrap=colorama_wrap)
if sys.platform == "win32":
colorama.init(wrap=False)
def setup_event_logger(log_path, level_override=None):


@@ -501,7 +501,7 @@ def invalid_type_error(
def invalid_bool_error(got_value, macro_name) -> NoReturn:
"""Raise a CompilationException when an macro expects a boolean but gets some
"""Raise a CompilationException when a macro expects a boolean but gets some
other value.
"""
msg = (

View File

@@ -61,7 +61,7 @@ flag_defaults = {
def env_set_truthy(key: str) -> Optional[str]:
"""Return the value if it was set to a "truthy" string value, or None
"""Return the value if it was set to a "truthy" string value or None
otherwise.
"""
value = os.getenv(key)


@@ -414,20 +414,24 @@ class StateSelectorMethod(SelectorMethod):
return modified
def recursively_check_macros_modified(self, node, previous_macros):
def recursively_check_macros_modified(self, node, visited_macros):
# loop through all macros that this node depends on
for macro_uid in node.depends_on.macros:
# avoid infinite recursion if we've already seen this macro
if macro_uid in previous_macros:
if macro_uid in visited_macros:
continue
previous_macros.append(macro_uid)
visited_macros.append(macro_uid)
# is this macro one of the modified macros?
if macro_uid in self.modified_macros:
return True
# if not, and this macro depends on other macros, keep looping
macro_node = self.manifest.macros[macro_uid]
if len(macro_node.depends_on.macros) > 0:
return self.recursively_check_macros_modified(macro_node, previous_macros)
return self.recursively_check_macros_modified(macro_node, visited_macros)
# this macro hasn't been modified, but we haven't checked
# the other macros the node depends on, so keep looking
elif len(node.depends_on.macros) > len(visited_macros):
continue
else:
return False
@@ -440,8 +444,8 @@ class StateSelectorMethod(SelectorMethod):
return False
# recursively loop through upstream macros to see if any is modified
else:
previous_macros = []
return self.recursively_check_macros_modified(node, previous_macros)
visited_macros = []
return self.recursively_check_macros_modified(node, visited_macros)
# TODO check modifed_content and check_modified macro seems a bit redundent
def check_modified_content(self, old: Optional[SelectorTarget], new: SelectorTarget) -> bool:


@@ -27,7 +27,7 @@ class IndirectSelection(StrEnum):
def _probably_path(value: str):
"""Decide if value is probably a path. Windows has two path separators, so
"""Decide if the value is probably a path. Windows has two path separators, so
we should check both sep ('\\') and altsep ('/') there.
"""
if os.path.sep in value:


@@ -131,3 +131,10 @@ class Lazy(Generic[T]):
if self.memo is None:
self.memo = self._typed_eval_f()
return self.memo
# This class is used in to_target_dict, so that accesses to missing keys
# will return an empty string instead of Undefined
class DictDefaultEmptyStr(dict):
def __getitem__(self, key):
return dict.get(self, key, "")
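# Illustration (not part of this diff): missing keys fall back to an empty string
# instead of raising KeyError.
#
#   d = DictDefaultEmptyStr({"schema": "analytics"})
#   d["schema"]    # -> "analytics"
#   d["database"]  # -> ""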

File diff suppressed because one or more lines are too long


@@ -20,20 +20,11 @@ from dbt.dataclass_schema import dbtClassMixin
# then we should override the logging stream to use the colorama
# converter. If the TERM var is set (as with Git Bash), then it's safe
# to send escape characters and no log handler injection is needed.
colorama_stdout = sys.stdout
colorama_wrap = True
colorama.init(wrap=colorama_wrap)
if sys.platform == "win32" and not os.getenv("TERM"):
colorama_wrap = False
colorama_stdout = colorama.AnsiToWin32(sys.stdout).stream
elif sys.platform == "win32":
colorama_wrap = False
colorama.init(wrap=colorama_wrap)
logging_stdout = sys.stdout
if sys.platform == "win32":
if not os.getenv("TERM"):
logging_stdout = colorama.AnsiToWin32(sys.stdout).stream
colorama.init(wrap=False)
STDOUT_LOG_FORMAT = "{record.message}"
@@ -464,7 +455,7 @@ class DelayedFileHandler(logbook.RotatingFileHandler, FormatterMixin):
class LogManager(logbook.NestedSetup):
def __init__(self, stdout=colorama_stdout, stderr=sys.stderr):
def __init__(self, stdout=logging_stdout, stderr=sys.stderr):
self.stdout = stdout
self.stderr = stderr
self._null_handler = logbook.NullHandler()
@@ -632,7 +623,7 @@ def list_handler(
lst: Optional[List[LogMessage]],
level=logbook.NOTSET,
) -> ContextManager:
"""Return a context manager that temporarly attaches a list to the logger."""
"""Return a context manager that temporarily attaches a list to the logger."""
return ListLogHandler(lst=lst, level=level, bubble=True)


@@ -46,7 +46,7 @@ from dbt.exceptions import InternalException, NotImplementedException, FailedToC
class DBTVersion(argparse.Action):
"""This is very very similar to the builtin argparse._Version action,
"""This is very similar to the built-in argparse._Version action,
except it just calls dbt.version.get_version_information().
"""


@@ -1035,7 +1035,7 @@ def _process_docs_for_metrics(context: Dict[str, Any], metric: ParsedMetric) ->
def _process_refs_for_exposure(manifest: Manifest, current_project: str, exposure: ParsedExposure):
"""Given a manifest and a exposure in that manifest, process its refs"""
"""Given a manifest and exposure in that manifest, process its refs"""
for ref in exposure.refs:
target_model: Optional[Union[Disabled, ManifestNode]] = None
target_model_name: str


@@ -291,10 +291,10 @@ class PartialParsing:
if self.already_scheduled_for_parsing(old_source_file):
return
# These files only have one node.
unique_id = None
# These files only have one node except for snapshots
unique_ids = []
if old_source_file.nodes:
unique_id = old_source_file.nodes[0]
unique_ids = old_source_file.nodes
else:
# It's not clear when this would actually happen.
# Logging in case there are other associated errors.
@@ -305,7 +305,7 @@ class PartialParsing:
self.deleted_manifest.files[file_id] = old_source_file
self.saved_files[file_id] = deepcopy(new_source_file)
self.add_to_pp_files(new_source_file)
if unique_id:
for unique_id in unique_ids:
self.remove_node_in_saved(new_source_file, unique_id)
def remove_node_in_saved(self, source_file, unique_id):
@@ -379,7 +379,7 @@ class PartialParsing:
if not source_file.nodes:
fire_event(PartialParsingMissingNodes(file_id=source_file.file_id))
return
# There is generally only 1 node for SQL files, except for macros
# There is generally only 1 node for SQL files, except for macros and snapshots
for unique_id in source_file.nodes:
self.remove_node_in_saved(source_file, unique_id)
self.schedule_referencing_nodes_for_parsing(unique_id)


@@ -41,7 +41,7 @@ CATALOG_FILENAME = "catalog.json"
def get_stripped_prefix(source: Dict[str, Any], prefix: str) -> Dict[str, Any]:
"""Go through source, extracting every key/value pair where the key starts
"""Go through the source, extracting every key/value pair where the key starts
with the given prefix.
"""
cut = len(prefix)


@@ -54,7 +54,7 @@ from dbt.parser.manifest import ManifestLoader
import dbt.exceptions
from dbt import flags
import dbt.utils
from dbt.ui import warning_tag
from dbt.ui import warning_tag, green
RESULT_FILE_NAME = "run_results.json"
MANIFEST_FILE_NAME = "manifest.json"
@@ -391,7 +391,13 @@ class GraphRunnableTask(ManifestTask):
self._skipped_children[dep_node_id] = cause
def populate_adapter_cache(self, adapter):
import time
start = time.time()
print("Starting cache population")
adapter.set_relations_cache(self.manifest)
end = time.time()
print("Finished cache population")
print(green(end - start))
def before_hooks(self, adapter):
pass


@@ -230,6 +230,8 @@ class TestTask(RunTask):
constraints are satisfied.
"""
__test__ = False
def raise_on_first_error(self):
return False


@@ -0,0 +1 @@
# dbt.tests directory

core/dbt/tests/adapter.py Normal file

@@ -0,0 +1,212 @@
from itertools import chain, repeat
from dbt.context import providers
from unittest.mock import patch
# These functions were extracted from the dbt-adapter-tests spec_file.py.
# They are used in the 'adapter' tests directory. At some point they
# might be moved to dbt.tests.util if they are of general purpose use,
# but leaving here for now to keep the adapter work more contained.
# We may want to consolidate in the future since some of this is kind
# of duplicative of the functionality in dbt.tests.tables.
class TestProcessingException(Exception):
pass
def relation_from_name(adapter, name: str):
"""reverse-engineer a relation (including quoting) from a given name and
the adapter. Assumes that relations are split by the '.' character.
"""
# Different adapters have different Relation classes
cls = adapter.Relation
credentials = adapter.config.credentials
quote_policy = cls.get_default_quote_policy().to_dict()
include_policy = cls.get_default_include_policy().to_dict()
kwargs = {} # This will contain database, schema, identifier
parts = name.split(".")
names = ["database", "schema", "identifier"]
defaults = [credentials.database, credentials.schema, None]
values = chain(repeat(None, 3 - len(parts)), parts)
for name, value, default in zip(names, values, defaults):
# no quote policy -> use the default
if value is None:
if default is None:
include_policy[name] = False
value = default
else:
include_policy[name] = True
# if we have a value, we can figure out the quote policy.
trimmed = value[1:-1]
if adapter.quote(trimmed) == value:
quote_policy[name] = True
value = trimmed
else:
quote_policy[name] = False
kwargs[name] = value
relation = cls.create(
include_policy=include_policy,
quote_policy=quote_policy,
**kwargs,
)
return relation
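# Illustrative usage (schema and table names hypothetical, not part of this diff):
#
#   rel = relation_from_name(adapter, "test_schema.base")
#   adapter.execute(f"select count(*) from {rel}", fetch=True)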
def check_relation_types(adapter, relation_to_type):
"""
Relation name to table/view
{
"base": "table",
"other": "view",
}
"""
expected_relation_values = {}
found_relations = []
schemas = set()
for key, value in relation_to_type.items():
relation = relation_from_name(adapter, key)
expected_relation_values[relation] = value
schemas.add(relation.without_identifier())
with patch.object(providers, "get_adapter", return_value=adapter):
with adapter.connection_named("__test"):
for schema in schemas:
found_relations.extend(adapter.list_relations_without_caching(schema))
for key, value in relation_to_type.items():
for relation in found_relations:
# this might be too broad
if relation.identifier == key:
assert relation.type == value, (
f"Got an unexpected relation type of {relation.type} "
f"for relation {key}, expected {value}"
)
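# Illustrative usage (relation names hypothetical, not part of this diff):
#
#   check_relation_types(adapter, {"base": "table", "view_model": "view"})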
def check_relations_equal(adapter, relation_names):
if len(relation_names) < 2:
raise TestProcessingException(
"Not enough relations to compare",
)
relations = [relation_from_name(adapter, name) for name in relation_names]
with patch.object(providers, "get_adapter", return_value=adapter):
with adapter.connection_named("_test"):
basis, compares = relations[0], relations[1:]
columns = [c.name for c in adapter.get_columns_in_relation(basis)]
for relation in compares:
sql = adapter.get_rows_different_sql(basis, relation, column_names=columns)
_, tbl = adapter.execute(sql, fetch=True)
num_rows = len(tbl)
assert (
num_rows == 1
), f"Invalid sql query from get_rows_different_sql: incorrect number of rows ({num_rows})"
num_cols = len(tbl[0])
assert (
num_cols == 2
), f"Invalid sql query from get_rows_different_sql: incorrect number of cols ({num_cols})"
row_count_difference = tbl[0][0]
assert (
row_count_difference == 0
), f"Got {row_count_difference} difference in row count between {basis} and {relation}"
rows_mismatched = tbl[0][1]
assert (
rows_mismatched == 0
), f"Got {rows_mismatched} different rows between {basis} and {relation}"
def get_unique_ids_in_results(results):
unique_ids = []
for result in results:
unique_ids.append(result.node.unique_id)
return unique_ids
def check_result_nodes_by_name(results, names):
result_names = []
for result in results:
result_names.append(result.node.name)
assert set(names) == set(result_names)
def check_result_nodes_by_unique_id(results, unique_ids):
result_unique_ids = []
for result in results:
result_unique_ids.append(result.node.unique_id)
assert set(unique_ids) == set(result_unique_ids)
def update_rows(adapter, update_rows_config):
"""
{
    "name": "base",
    "dst_col": "some_date",
    "clause": {
        "type": "add_timestamp",
        "src_col": "some_date"
    },
    "where": "id > 10"
}
"""
for key in ["name", "dst_col", "clause"]:
if key not in update_rows_config:
raise TestProcessingException(f"Invalid update_rows: no {key}")
clause = update_rows_config["clause"]
clause = generate_update_clause(adapter, clause)
where = None
if "where" in update_rows_config:
where = update_rows_config["where"]
name = update_rows_config["name"]
dst_col = update_rows_config["dst_col"]
relation = relation_from_name(adapter, name)
with patch.object(providers, "get_adapter", return_value=adapter):
with adapter.connection_named("_test"):
sql = adapter.update_column_sql(
dst_name=str(relation),
dst_column=dst_col,
clause=clause,
where_clause=where,
)
print(f"--- update_rows sql: {sql}")
adapter.execute(sql, auto_begin=True)
adapter.commit_if_has_connection()
def generate_update_clause(adapter, clause) -> str:
"""
Called by the update_rows function. Expects the "clause" dictionary
documented in 'update_rows'.
"""
if "type" not in clause or clause["type"] not in ["add_timestamp", "add_string"]:
raise TestProcessingException("invalid update_rows clause: type missing or incorrect")
clause_type = clause["type"]
if clause_type == "add_timestamp":
if "src_col" not in clause:
raise TestProcessingException("Invalid update_rows clause: no src_col")
add_to = clause["src_col"]
kwargs = {k: v for k, v in clause.items() if k in ("interval", "number")}
with patch.object(providers, "get_adapter", return_value=adapter):
with adapter.connection_named("_test"):
return adapter.timestamp_add_sql(add_to=add_to, **kwargs)
elif clause_type == "add_string":
for key in ["src_col", "value"]:
if key not in clause:
raise TestProcessingException(f"Invalid update_rows clause: no {key}")
src_col = clause["src_col"]
value = clause["value"]
location = clause.get("location", "append")
with patch.object(providers, "get_adapter", return_value=adapter):
with adapter.connection_named("_test"):
return adapter.string_add_sql(src_col, value, location)
return ""

1
core/dbt/tests/fixtures/__init__.py vendored Normal file
View File

@@ -0,0 +1 @@
# dbt.tests.fixtures directory

344
core/dbt/tests/fixtures/project.py vendored Normal file
View File

@@ -0,0 +1,344 @@
import os
import pytest # type: ignore
import random
from argparse import Namespace
from datetime import datetime
import yaml
import dbt.flags as flags
from dbt.config.runtime import RuntimeConfig
from dbt.adapters.factory import get_adapter, register_adapter, reset_adapters
from dbt.events.functions import setup_event_logger
from dbt.tests.util import write_file, run_sql_with_adapter
# These are the fixtures that are used in dbt core functional tests
# Used in constructing the unique_schema and logs_dir
@pytest.fixture(scope="class")
def prefix():
# create a directory name that will be unique per test session
_randint = random.randint(0, 9999)
_runtime_timedelta = datetime.utcnow() - datetime(1970, 1, 1, 0, 0, 0)
_runtime = (int(_runtime_timedelta.total_seconds() * 1e6)) + _runtime_timedelta.microseconds
prefix = f"test{_runtime}{_randint:04}"
return prefix
# Every test has a unique schema
@pytest.fixture(scope="class")
def unique_schema(request, prefix) -> str:
test_file = request.module.__name__
# We only want the last part of the name
test_file = test_file.split(".")[-1]
unique_schema = f"{prefix}_{test_file}"
return unique_schema
# Create a directory for the profile using tmpdir fixture
@pytest.fixture(scope="class")
def profiles_root(tmpdir_factory):
return tmpdir_factory.mktemp("profile")
# Create a directory for the project using tmpdir fixture
@pytest.fixture(scope="class")
def project_root(tmpdir_factory):
# tmpdir docs - https://docs.pytest.org/en/6.2.x/tmpdir.html
project_root = tmpdir_factory.mktemp("project")
print(f"\n=== Test project_root: {project_root}")
return project_root
# This is for data used by multiple tests, in the 'tests/data' directory
@pytest.fixture(scope="session")
def shared_data_dir(request):
return os.path.join(request.config.rootdir, "tests", "data")
# This is for data for a specific test directory, i.e. tests/basic/data
@pytest.fixture(scope="module")
def test_data_dir(request):
return os.path.join(request.fspath.dirname, "data")
# The profile dictionary, used to write out profiles.yml
@pytest.fixture(scope="class")
def dbt_profile_data(unique_schema):
profile = {
"config": {"send_anonymous_usage_stats": False},
"test": {
"outputs": {
"default": {
"type": "postgres",
"threads": 4,
"host": "localhost",
"port": int(os.getenv("POSTGRES_TEST_PORT", 5432)),
"user": os.getenv("POSTGRES_TEST_USER", "root"),
"pass": os.getenv("POSTGRES_TEST_PASS", "password"),
"dbname": os.getenv("POSTGRES_TEST_DATABASE", "dbt"),
"schema": unique_schema,
},
"other_schema": {
"type": "postgres",
"threads": 4,
"host": "localhost",
"port": int(os.getenv("POSTGRES_TEST_PORT", 5432)),
"user": "noaccess",
"pass": "password",
"dbname": os.getenv("POSTGRES_TEST_DATABASE", "dbt"),
"schema": unique_schema + "_alt", # Should this be the same unique_schema?
},
},
"target": "default",
},
}
return profile
# Write out the profile data as a yaml file
@pytest.fixture(scope="class")
def profiles_yml(profiles_root, dbt_profile_data):
os.environ["DBT_PROFILES_DIR"] = str(profiles_root)
write_file(yaml.safe_dump(dbt_profile_data), profiles_root, "profiles.yml")
yield dbt_profile_data
del os.environ["DBT_PROFILES_DIR"]
# This fixture can be overridden in a project
@pytest.fixture(scope="class")
def project_config_update():
return {}
# Combines the project_config_update dictionary with defaults to
# produce a project_yml config and write it out as dbt_project.yml
@pytest.fixture(scope="class")
def dbt_project_yml(project_root, project_config_update, logs_dir):
project_config = {
"config-version": 2,
"name": "test",
"version": "0.1.0",
"profile": "test",
"log-path": logs_dir,
}
if project_config_update:
project_config.update(project_config_update)
write_file(yaml.safe_dump(project_config), project_root, "dbt_project.yml")
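A minimal sketch of how a test class might override this fixture; the materialization override is only an example, and the project name "test" matches the default dbt_project.yml written above:

import pytest

class TestTableMaterialization:  # hypothetical test class
    @pytest.fixture(scope="class")
    def project_config_update(self):
        return {"models": {"test": {"+materialized": "table"}}}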
# Fixture to provide packages as either yaml or dictionary
@pytest.fixture(scope="class")
def packages():
return {}
# Write out the packages.yml file
@pytest.fixture(scope="class")
def packages_yml(project_root, packages):
if packages:
if isinstance(packages, str):
data = packages
else:
data = yaml.safe_dump(packages)
write_file(data, project_root, "packages.yml")
# Fixture to provide selectors as either yaml or dictionary
@pytest.fixture(scope="class")
def selectors():
return {}
# Write out the selectors.yml file
@pytest.fixture(scope="class")
def selectors_yml(project_root, selectors):
if selectors:
if isinstance(selectors, str):
data = selectors
else:
data = yaml.safe_dump(selectors)
write_file(data, project_root, "selectors.yml")
# This creates an adapter that is used for running test setup and teardown,
# and 'run_sql' commands. The 'run_dbt' commands will create their own adapter
# so this one needs some special patching to run after dbt commands have been
# executed
@pytest.fixture(scope="class")
def adapter(unique_schema, project_root, profiles_root, profiles_yml, dbt_project_yml):
# The profiles.yml and dbt_project.yml should already be written out
args = Namespace(
profiles_dir=str(profiles_root), project_dir=str(project_root), target=None, profile=None
)
flags.set_from_args(args, {})
runtime_config = RuntimeConfig.from_args(args)
register_adapter(runtime_config)
adapter = get_adapter(runtime_config)
yield adapter
adapter.cleanup_connections()
reset_adapters()
# Create the named top-level directory under the project root and write out its files.
def write_project_files(project_root, dir_name, file_dict):
path = project_root.mkdir(dir_name)
if file_dict:
write_project_files_recursively(path, file_dict)
# Write files out from file_dict. Can be nested directories...
def write_project_files_recursively(path, file_dict):
for name, value in file_dict.items():
if name.endswith(".sql") or name.endswith(".csv") or name.endswith(".md"):
write_file(value, path, name)
elif name.endswith(".yml") or name.endswith(".yaml"):
if isinstance(value, str):
data = value
else:
data = yaml.safe_dump(value)
write_file(data, path, name)
else:
write_project_files_recursively(path.mkdir(name), value)
# models, macros, seeds, snapshots, tests, analysis
# Provide a dictionary of file names to contents. Nested directories
# are handled by nested dictionaries.
@pytest.fixture(scope="class")
def models():
return {}
@pytest.fixture(scope="class")
def macros():
return {}
@pytest.fixture(scope="class")
def seeds():
return {}
@pytest.fixture(scope="class")
def snapshots():
return {}
@pytest.fixture(scope="class")
def tests():
return {}
@pytest.fixture(scope="class")
def analysis():
return {}
# Write out the files provided by models, macros, snapshots, seeds, tests, analysis
@pytest.fixture(scope="class")
def project_files(project_root, models, macros, snapshots, seeds, tests, analysis):
write_project_files(project_root, "models", models)
write_project_files(project_root, "macros", macros)
write_project_files(project_root, "snapshots", snapshots)
write_project_files(project_root, "seeds", seeds)
write_project_files(project_root, "tests", tests)
write_project_files(project_root, "analysis", analysis)
# We have a separate logs dir for every test
@pytest.fixture(scope="class")
def logs_dir(request, prefix):
return os.path.join(request.config.rootdir, "logs", prefix)
# This class is returned from the 'project' fixture, and contains information
# from the pytest fixtures that may be needed in the test functions, including
# a 'run_sql' method.
class TestProjInfo:
def __init__(
self,
project_root,
profiles_dir,
adapter,
test_dir,
shared_data_dir,
test_data_dir,
test_schema,
database,
):
self.project_root = project_root
self.profiles_dir = profiles_dir
self.adapter = adapter
self.test_dir = test_dir
self.shared_data_dir = shared_data_dir
self.test_data_dir = test_data_dir
self.test_schema = test_schema
self.database = database
# Run sql from a path
def run_sql_file(self, sql_path, fetch=None):
with open(sql_path, "r") as f:
statements = f.read().split(";")
for statement in statements:
self.run_sql(statement, fetch)
# run sql from a string, using adapter saved at test startup
def run_sql(self, sql, fetch=None):
return run_sql_with_adapter(self.adapter, sql, fetch=fetch)
def get_tables_in_schema(self):
sql = """
select table_name,
case when table_type = 'BASE TABLE' then 'table'
when table_type = 'VIEW' then 'view'
else table_type
end as materialization
from information_schema.tables
where {}
order by table_name
"""
sql = sql.format("{} ilike '{}'".format("table_schema", self.test_schema))
result = self.run_sql(sql, fetch="all")
return {model_name: materialization for (model_name, materialization) in result}
@pytest.fixture(scope="class")
def project(
project_root,
profiles_root,
request,
unique_schema,
profiles_yml,
dbt_project_yml,
packages_yml,
selectors_yml,
adapter,
project_files,
shared_data_dir,
test_data_dir,
logs_dir,
):
setup_event_logger(logs_dir)
orig_cwd = os.getcwd()
os.chdir(project_root)
# Return whatever is needed later in tests but can only come from fixtures, so we can keep
# the fixture arguments in test signatures to a minimum.
project = TestProjInfo(
project_root=project_root,
profiles_dir=profiles_root,
adapter=adapter,
test_dir=request.fspath.dirname,
shared_data_dir=shared_data_dir,
test_data_dir=test_data_dir,
test_schema=unique_schema,
# the following feels kind of fragile. TODO: better way of getting database
database=profiles_yml["test"]["outputs"]["default"]["dbname"],
)
project.run_sql("drop schema if exists {schema} cascade")
project.run_sql("create schema {schema}")
yield project
project.run_sql("drop schema if exists {schema} cascade")
os.chdir(orig_cwd)
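Tying the fixtures together, a hedged sketch of a functional test built on the `project` fixture; `run_dbt` comes from dbt.tests.util, and the model name is invented:

import pytest
from dbt.tests.util import run_dbt

class TestProjectFixture:  # hypothetical test class
    @pytest.fixture(scope="class")
    def models(self):
        return {"my_model.sql": "select 1 as id"}

    def test_run(self, project):
        results = run_dbt(["run"])
        assert len(results) == 1
        # views are dbt's default materialization
        assert project.get_tables_in_schema().get("my_model") == "view"
        # run_sql substitutes {schema} and {database} before executing
        count = project.run_sql("select count(*) from {schema}.my_model", fetch="one")
        assert count[0] == 1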

365
core/dbt/tests/tables.py Normal file
View File

@@ -0,0 +1,365 @@
from dbt.context import providers
from unittest.mock import patch
from contextlib import contextmanager
from dbt.events.functions import fire_event
from dbt.events.test_types import IntegrationTestDebug
# This code was copied from the earlier test framework in test/integration/base.py
# The goal is to vastly simplify this and replace it with calls to macros.
# For now, we use this to get the tests converted in a more straightforward way.
# Assertions:
# assert_tables_equal (old: assertTablesEqual)
# assert_many_relations_equal (old: assertManyRelationsEqual)
# assert_many_tables_equal (old: assertManyTablesEqual)
# assert_table_does_not_exist (old: assertTableDoesNotExist)
# assert_table_does_exist (old: assertTableDoesExist)
class TableComparison:
def __init__(self, adapter, unique_schema, database):
self.adapter = adapter
self.unique_schema = unique_schema
self.default_database = database
# TODO: We need to get this from somewhere reasonable
if database == "dbtMixedCase":
self.quoting = {"database": True, "schema": True, "identifier": True}
else:
self.quoting = {"database": False, "schema": False, "identifier": False}
# assertion used in tests
def assert_tables_equal(
self,
table_a,
table_b,
table_a_schema=None,
table_b_schema=None,
table_a_db=None,
table_b_db=None,
):
if table_a_schema is None:
table_a_schema = self.unique_schema
if table_b_schema is None:
table_b_schema = self.unique_schema
if table_a_db is None:
table_a_db = self.default_database
if table_b_db is None:
table_b_db = self.default_database
relation_a = self._make_relation(table_a, table_a_schema, table_a_db)
relation_b = self._make_relation(table_b, table_b_schema, table_b_db)
self._assert_table_columns_equal(relation_a, relation_b)
sql = self._assert_tables_equal_sql(relation_a, relation_b)
result = self.run_sql(sql, fetch="one")
assert result[0] == 0, "row_count_difference nonzero: " + sql
assert result[1] == 0, "num_mismatched nonzero: " + sql
# assertion used in tests
def assert_many_relations_equal(self, relations, default_schema=None, default_database=None):
if default_schema is None:
default_schema = self.unique_schema
if default_database is None:
default_database = self.default_database
specs = []
for relation in relations:
if not isinstance(relation, (tuple, list)):
relation = [relation]
assert len(relation) <= 3
if len(relation) == 3:
relation = self._make_relation(*relation)
elif len(relation) == 2:
relation = self._make_relation(relation[0], relation[1], default_database)
elif len(relation) == 1:
relation = self._make_relation(relation[0], default_schema, default_database)
else:
raise ValueError("relation must be a sequence of 1, 2, or 3 values")
specs.append(relation)
with self.get_connection():
column_specs = self.get_many_relation_columns(specs)
# make sure everyone has equal column definitions
first_columns = None
for relation in specs:
key = (relation.database, relation.schema, relation.identifier)
# get a good error here instead of a hard-to-diagnose KeyError
assert key in column_specs, f"No columns found for {key}"
columns = column_specs[key]
if first_columns is None:
first_columns = columns
else:
assert first_columns == columns, f"{str(specs[0])} did not match {str(relation)}"
# make sure everyone has the same data. if we got here, everyone had
# the same column specs!
first_relation = None
for relation in specs:
if first_relation is None:
first_relation = relation
else:
sql = self._assert_tables_equal_sql(
first_relation, relation, columns=first_columns
)
result = self.run_sql(sql, fetch="one")
assert result[0] == 0, "row_count_difference nonzero: " + sql
assert result[1] == 0, "num_mismatched nonzero: " + sql
# assertion used in tests
def assert_many_tables_equal(self, *args):
schema = self.unique_schema
all_tables = []
for table_equivalencies in args:
all_tables += list(table_equivalencies)
all_cols = self.get_table_columns_as_dict(all_tables, schema)
for table_equivalencies in args:
first_table = table_equivalencies[0]
first_relation = self._make_relation(first_table)
# assert that all tables have the same columns
base_result = all_cols[first_table]
assert len(base_result) > 0
for other_table in table_equivalencies[1:]:
other_result = all_cols[other_table]
assert len(other_result) > 0
assert base_result == other_result
other_relation = self._make_relation(other_table)
sql = self._assert_tables_equal_sql(
first_relation, other_relation, columns=base_result
)
result = self.run_sql(sql, fetch="one")
assert result[0] == 0, "row_count_difference nonzero: " + sql
assert result[1] == 0, "num_mismatched nonzero: " + sql
# assertion used in tests
def assert_table_does_not_exist(self, table, schema=None, database=None):
columns = self.get_table_columns(table, schema, database)
assert len(columns) == 0
# assertion used in tests
def assert_table_does_exist(self, table, schema=None, database=None):
columns = self.get_table_columns(table, schema, database)
assert len(columns) > 0
# called by assert_tables_equal
def _assert_table_columns_equal(self, relation_a, relation_b):
table_a_result = self.get_relation_columns(relation_a)
table_b_result = self.get_relation_columns(relation_b)
assert len(table_a_result) == len(table_b_result)
for a_column, b_column in zip(table_a_result, table_b_result):
a_name, a_type, a_size = a_column
b_name, b_type, b_size = b_column
assert a_name == b_name, "{} vs {}: column '{}' != '{}'".format(
relation_a, relation_b, a_name, b_name
)
assert a_type == b_type, "{} vs {}: column '{}' has type '{}' != '{}'".format(
relation_a, relation_b, a_name, a_type, b_type
)
assert a_size == b_size, "{} vs {}: column '{}' has size '{}' != '{}'".format(
relation_a, relation_b, a_name, a_size, b_size
)
def get_relation_columns(self, relation):
with self.get_connection():
columns = self.adapter.get_columns_in_relation(relation)
return sorted(((c.name, c.dtype, c.char_size) for c in columns), key=lambda x: x[0])
def get_table_columns(self, table, schema=None, database=None):
schema = self.unique_schema if schema is None else schema
database = self.default_database if database is None else database
relation = self.adapter.Relation.create(
database=database,
schema=schema,
identifier=table,
type="table",
quote_policy=self.quoting,
)
return self.get_relation_columns(relation)
# called by assert_many_tables_equal
def get_table_columns_as_dict(self, tables, schema=None):
col_matrix = self.get_many_table_columns(tables, schema)
res = {}
for row in col_matrix:
table_name = row[0]
col_def = row[1:]
if table_name not in res:
res[table_name] = []
res[table_name].append(col_def)
return res
# override for presto
@property
def column_schema(self):
return "table_name, column_name, data_type, character_maximum_length"
# This should be overridden for Snowflake. Called by get_many_table_columns.
def get_many_table_columns_information_schema(self, tables, schema, database=None):
columns = self.column_schema
sql = """
select {columns}
from {db_string}information_schema.columns
where {schema_filter}
and ({table_filter})
order by column_name asc"""
db_string = ""
if database:
db_string = self.quote_as_configured(database, "database") + "."
table_filters_s = " OR ".join(
_ilike("table_name", table.replace('"', "")) for table in tables
)
schema_filter = _ilike("table_schema", schema)
sql = sql.format(
columns=columns,
schema_filter=schema_filter,
table_filter=table_filters_s,
db_string=db_string,
)
columns = self.run_sql(sql, fetch="all")
return list(map(self.filter_many_columns, columns))
# Snowflake needs a static char_size
def filter_many_columns(self, column):
if len(column) == 3:
table_name, column_name, data_type = column
char_size = None
else:
table_name, column_name, data_type, char_size = column
return (table_name, column_name, data_type, char_size)
@contextmanager
def get_connection(self, name="_test"):
"""Create a test connection context where all executed macros, etc will
use the adapter created in the schema fixture.
This allows tests to run normal adapter macros as if reset_adapters()
were not called by handle_and_check (for asserts, etc)
"""
with patch.object(providers, "get_adapter", return_value=self.adapter):
with self.adapter.connection_named(name):
conn = self.adapter.connections.get_thread_connection()
yield conn
def _make_relation(self, identifier, schema=None, database=None):
if schema is None:
schema = self.unique_schema
if database is None:
database = self.default_database
return self.adapter.Relation.create(
database=database, schema=schema, identifier=identifier, quote_policy=self.quoting
)
# called by get_many_relation_columns
def get_many_table_columns(self, tables, schema, database=None):
result = self.get_many_table_columns_information_schema(tables, schema, database)
result.sort(key=lambda x: "{}.{}".format(x[0], x[1]))
return result
# called by assert_many_relations_equal
def get_many_relation_columns(self, relations):
"""Returns a dict of (datbase, schema) -> (dict of (table_name -> list of columns))"""
schema_fqns = {}
for rel in relations:
this_schema = schema_fqns.setdefault((rel.database, rel.schema), [])
this_schema.append(rel.identifier)
column_specs = {}
for key, tables in schema_fqns.items():
database, schema = key
columns = self.get_many_table_columns(tables, schema, database=database)
table_columns = {}
for col in columns:
table_columns.setdefault(col[0], []).append(col[1:])
for rel_name, columns in table_columns.items():
key = (database, schema, rel_name)
column_specs[key] = columns
return column_specs
def _assert_tables_equal_sql(self, relation_a, relation_b, columns=None):
if columns is None:
columns = self.get_relation_columns(relation_a)
column_names = [c[0] for c in columns]
sql = self.adapter.get_rows_different_sql(relation_a, relation_b, column_names)
return sql
# This duplicates code in the TestProjInfo class.
def run_sql(self, sql, fetch=None):
if sql.strip() == "":
return
# substitute schema and database in sql
adapter = self.adapter
kwargs = {
"schema": self.unique_schema,
"database": adapter.quote(self.default_database),
}
sql = sql.format(**kwargs)
with self.get_connection("__test") as conn:
msg = f'test connection "{conn.name}" executing: {sql}'
fire_event(IntegrationTestDebug(msg=msg))
with conn.handle.cursor() as cursor:
try:
cursor.execute(sql)
conn.handle.commit()
if fetch == "one":
return cursor.fetchone()
elif fetch == "all":
return cursor.fetchall()
else:
return
except BaseException as e:
if conn.handle and not getattr(conn.handle, "closed", True):
conn.handle.rollback()
print(sql)
print(e)
raise
finally:
conn.transaction_open = False
def get_tables_in_schema(self):
sql = """
select table_name,
case when table_type = 'BASE TABLE' then 'table'
when table_type = 'VIEW' then 'view'
else table_type
end as materialization
from information_schema.tables
where {}
order by table_name
"""
sql = sql.format(_ilike("table_schema", self.unique_schema))
result = self.run_sql(sql, fetch="all")
return {model_name: materialization for (model_name, materialization) in result}
# needs overriding for presto
def _ilike(target, value):
return "{} ilike '{}'".format(target, value)

148
core/dbt/tests/util.py Normal file
View File

@@ -0,0 +1,148 @@
import os
import shutil
import yaml
import json
from typing import List
from dbt.main import handle_and_check
from dbt.logger import log_manager
from dbt.contracts.graph.manifest import Manifest
from dbt.events.functions import fire_event, capture_stdout_logs, stop_capture_stdout_logs
from dbt.events.test_types import IntegrationTestDebug
from dbt.context import providers
from unittest.mock import patch
# This is used in pytest tests to run dbt
def run_dbt(args: List[str] = None, expect_pass=True):
# The logger will complain about already being initialized if
# we don't do this.
log_manager.reset_handlers()
if args is None:
args = ["run"]
print("\n\nInvoking dbt with {}".format(args))
res, success = handle_and_check(args)
# assert success == expect_pass, "dbt exit state did not match expected"
return res
def run_dbt_and_capture(args: List[str] = None, expect_pass=True):
try:
stringbuf = capture_stdout_logs()
res = run_dbt(args, expect_pass=expect_pass)
stdout = stringbuf.getvalue()
finally:
stop_capture_stdout_logs()
return res, stdout
# Used in test cases to get the manifest from the partial parsing file
def get_manifest(project_root):
path = project_root.join("target", "partial_parse.msgpack")
if os.path.exists(path):
with open(path, "rb") as fp:
manifest_mp = fp.read()
manifest: Manifest = Manifest.from_msgpack(manifest_mp)
return manifest
else:
return None
def normalize(path):
"""On windows, neither is enough on its own:
>>> normcase('C:\\documents/ALL CAPS/subdir\\..')
'c:\\documents\\all caps\\subdir\\..'
>>> normpath('C:\\documents/ALL CAPS/subdir\\..')
'C:\\documents\\ALL CAPS'
>>> normpath(normcase('C:\\documents/ALL CAPS/subdir\\..'))
'c:\\documents\\all caps'
"""
return os.path.normcase(os.path.normpath(path))
def copy_file(src_path, src, dest_path, dest) -> None:
# dest is a list, so that we can provide nested directories, like 'models' etc.
# copy files from the data_dir to appropriate project directory
shutil.copyfile(
os.path.join(src_path, src),
os.path.join(dest_path, *dest),
)
def rm_file(src_path, src) -> None:
# remove files from proj_path
os.remove(os.path.join(src_path, src))
# We need to explicitly use encoding="utf-8" because otherwise on
# Windows we'll get codepage 1252 and things might break
def write_file(contents, *paths):
with open(os.path.join(*paths), "w", encoding="utf-8") as fp:
fp.write(contents)
def read_file(*paths):
contents = ""
with open(os.path.join(*paths), "r") as fp:
contents = fp.read()
return contents
def get_artifact(*paths):
contents = read_file(*paths)
dct = json.loads(contents)
return dct
# For updating yaml config files
def update_config_file(updates, *paths):
current_yaml = read_file(*paths)
config = yaml.safe_load(current_yaml)
config.update(updates)
new_yaml = yaml.safe_dump(config)
write_file(new_yaml, *paths)
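For example (a hedged sketch; the particular config key is only an illustration, and `project` is assumed to be the class-scoped fixture from dbt.tests.fixtures.project):

# Flip the test project's models to tables and rewrite dbt_project.yml in place.
update_config_file(
    {"models": {"test": {"+materialized": "table"}}},
    project.project_root,
    "dbt_project.yml",
)

Note that the loaded yaml is merged with a shallow dict.update, so each top-level key in the updates dict replaces the existing key wholesale.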
def run_sql_with_adapter(adapter, sql, fetch=None):
if sql.strip() == "":
return
# substitute schema and database in sql
kwargs = {
"schema": adapter.config.credentials.schema,
"database": adapter.quote(adapter.config.credentials.database),
}
sql = sql.format(**kwargs)
# Since the 'adapter' in dbt.adapters.factory may have been replaced by execution
# of dbt commands since the test 'adapter' was created, we patch the 'get_adapter' call in
# dbt.context.providers, so that macros that are called refer to this test adapter.
# This allows tests to run normal adapter macros as if reset_adapters() were not
# called by handle_and_check (for asserts, etc).
with patch.object(providers, "get_adapter", return_value=adapter):
with adapter.connection_named("__test"):
conn = adapter.connections.get_thread_connection()
msg = f'test connection "{conn.name}" executing: {sql}'
fire_event(IntegrationTestDebug(msg=msg))
with conn.handle.cursor() as cursor:
try:
cursor.execute(sql)
conn.handle.commit()
if fetch == "one":
return cursor.fetchone()
elif fetch == "all":
return cursor.fetchall()
else:
return
except BaseException as e:
if conn.handle and not getattr(conn.handle, "closed", True):
conn.handle.rollback()
print(sql)
print(e)
raise
finally:
conn.transaction_open = False
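A short, hedged sketch of how these helpers typically combine in a converted functional test; the selector, model name, and unique_id are invented, and the manifest assertion assumes partial_parse.msgpack was written by the run:

def test_run_and_inspect(project):  # hypothetical test
    results = run_dbt(["run", "--select", "my_model"])
    assert len(results) == 1

    _, log_output = run_dbt_and_capture(["--log-format", "json", "run"])
    assert "my_model" in log_output

    manifest = get_manifest(project.project_root)
    assert "model.test.my_model" in manifest.nodes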

View File

@@ -211,7 +211,7 @@ def deep_map_render(func: Callable[[Any, Tuple[Union[str, int], ...]], Any], val
It maps the function func() onto each non-container value in 'value'
recursively, returning a new value. As long as func does not manipulate
value, then deep_map_render will also not manipulate it.
the value, then deep_map_render will also not manipulate it.
value should be a value returned by `yaml.safe_load` or `json.load` - the
only expected types are list, dict, native python number, str, NoneType,
@@ -319,7 +319,7 @@ def timestring() -> str:
class JSONEncoder(json.JSONEncoder):
"""A 'custom' json encoder that does normal json encoder things, but also
handles `Decimal`s. and `Undefined`s. Decimals can lose precision because
handles `Decimal`s and `Undefined`s. Decimals can lose precision because
they get converted to floats. Undefined's are serialized to an empty string
"""
@@ -394,7 +394,7 @@ def translate_aliases(
If recurse is True, perform this operation recursively.
:return: A dict containing all the values in kwargs referenced by their
:returns: A dict containing all the values in kwargs referenced by their
canonical key.
:raises: `AliasException`, if a canonical key is defined more than once.
"""

View File

@@ -1,4 +1,4 @@
black==21.12b0
black==22.1.0
bumpversion
flake8
flaky
@@ -8,6 +8,7 @@ mypy==0.782
pip-tools
pre-commit
pytest
pytest-cov
pytest-dotenv
pytest-logbook
pytest-csv

View File

@@ -3,10 +3,7 @@
</p>
<p align="center">
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
</a>
<a href="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml">
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
<img src="https://github.com/dbt-labs/dbt-core/actions/workflows/main.yml/badge.svg?event=push" alt="CI Badge"/>
</a>
</p>

View File

@@ -22,7 +22,7 @@
from pg_class
),
dependency as (
select
select distinct
pg_depend.objid as id,
pg_depend.refobjid as ref
from pg_depend

View File

@@ -1,3 +1,10 @@
[pytest]
filterwarnings =
ignore:.*'soft_unicode' has been renamed to 'soft_str'*:DeprecationWarning
ignore:unclosed file .*:ResourceWarning
env_files =
test.env
testpaths =
test/unit
test/integration
tests/functional

View File

@@ -1,17 +0,0 @@
{{
config(
materialized = "incremental",
unique_key = "id",
persist_docs = {"relation": true}
)
}}
select *
from {{ ref('seed') }}
{% if is_incremental() %}
where id > (select max(id) from {{this}})
{% endif %}

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = "table",
sort = 'first_name',
sort_type = 'compound'
)
}}
select * from {{ ref('seed') }}

View File

@@ -1,8 +0,0 @@
{{
config(
materialized = "view",
enabled = False
)
}}
select * from {{ ref('seed') }}

View File

@@ -1,3 +0,0 @@
{%- do adapter.get_relation(database=target.database, schema=target.schema, identifier='MATERIALIZED') -%}
select * from {{ ref('MATERIALIZED') }}

View File

@@ -1,11 +0,0 @@
{{
config(
materialized = "incremental"
)
}}
select * from {{ ref('seed') }}
{% if is_incremental() %}
where id > (select max(id) from {{this}})
{% endif %}

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = "table",
sort = ['first_name', 'last_name'],
sort_type = 'interleaved'
)
}}
select * from {{ ref('seed') }}

View File

@@ -1,8 +0,0 @@
{{
config(
materialized = "table"
)
}}
-- this is a unicode character: å
select * from {{ ref('seed') }}

View File

@@ -1,7 +0,0 @@
version: 2
models:
- name: DISABLED
columns:
- name: id
tests:
- unique

View File

@@ -1,7 +0,0 @@
{{
config(
materialized = "view"
)
}}
select * from {{ ref('seed') }}

View File

@@ -1 +0,0 @@
select 1 as id

View File

@@ -1,17 +0,0 @@
{{
config(
materialized = "incremental",
unique_key = "id",
persist_docs = {"relation": true}
)
}}
select *
from {{ ref('seed') }}
{% if is_incremental() %}
where id > (select max(id) from {{this}})
{% endif %}

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = "table",
sort = 'first_name',
sort_type = 'compound'
)
}}
select * from {{ ref('seed') }}

View File

@@ -1,8 +0,0 @@
{{
config(
materialized = "view",
enabled = False
)
}}
select * from {{ ref('seed') }}

View File

@@ -1,3 +0,0 @@
{%- do adapter.get_relation(database=target.database, schema=target.schema, identifier='materialized') -%}
select * from {{ ref('materialized') }}

View File

@@ -1,11 +0,0 @@
{{
config(
materialized = "incremental"
)
}}
select * from {{ ref('seed') }}
{% if is_incremental() %}
where id > (select max(id) from {{this}})
{% endif %}

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = "table",
sort = ['first_name', 'last_name'],
sort_type = 'interleaved'
)
}}
select * from {{ ref('seed') }}

View File

@@ -1,12 +0,0 @@
{{
config(
materialized = "table"
)
}}
-- ensure that dbt_utils' relation check will work
{% set relation = ref('seed') %}
{%- if not (relation is mapping and relation.get('metadata', {}).get('type', '').endswith('Relation')) -%}
{%- do exceptions.raise_compiler_error("Macro " ~ macro ~ " expected a Relation but received the value: " ~ relation) -%}
{%- endif -%}
-- this is a unicode character: å
select * from {{ relation }}

View File

@@ -1,7 +0,0 @@
version: 2
models:
- name: disabled
columns:
- name: id
tests:
- unique

View File

@@ -1,7 +0,0 @@
{{
config(
materialized = "view"
)
}}
select * from {{ ref('seed') }}

View File

@@ -1,202 +0,0 @@
import json
import os
from pytest import mark
from test.integration.base import DBTIntegrationTest, use_profile
class BaseTestSimpleCopy(DBTIntegrationTest):
@property
def schema(self):
return "simple_copy_001"
@staticmethod
def dir(path):
return path.lstrip('/')
@property
def models(self):
return self.dir("models")
@property
def project_config(self):
return self.seed_quote_cfg_with({
'profile': '{{ "tes" ~ "t" }}'
})
def seed_quote_cfg_with(self, extra):
cfg = {
'config-version': 2,
'seeds': {
'quote_columns': False,
}
}
cfg.update(extra)
return cfg
class TestSimpleCopy(BaseTestSimpleCopy):
@property
def project_config(self):
return self.seed_quote_cfg_with({"seed-paths": [self.dir("seed-initial")]})
@use_profile("postgres")
def test__postgres__simple_copy(self):
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 1)
results = self.run_dbt()
self.assertEqual(len(results), 7)
self.assertManyTablesEqual(["seed", "view_model", "incremental", "materialized", "get_and_ref"])
self.use_default_project({"seed-paths": [self.dir("seed-update")]})
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 1)
results = self.run_dbt()
self.assertEqual(len(results), 7)
self.assertManyTablesEqual(["seed", "view_model", "incremental", "materialized", "get_and_ref"])
@use_profile('postgres')
def test__postgres__simple_copy_with_materialized_views(self):
self.run_sql('''
create table {schema}.unrelated_table (id int)
'''.format(schema=self.unique_schema())
)
self.run_sql('''
create materialized view {schema}.unrelated_materialized_view as (
select * from {schema}.unrelated_table
)
'''.format(schema=self.unique_schema()))
self.run_sql('''
create view {schema}.unrelated_view as (
select * from {schema}.unrelated_materialized_view
)
'''.format(schema=self.unique_schema()))
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 1)
results = self.run_dbt()
self.assertEqual(len(results), 7)
@use_profile("postgres")
def test__postgres__dbt_doesnt_run_empty_models(self):
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 1)
results = self.run_dbt()
self.assertEqual(len(results), 7)
models = self.get_models_in_schema()
self.assertFalse("empty" in models.keys())
self.assertFalse("disabled" in models.keys())
class TestShouting(BaseTestSimpleCopy):
@property
def models(self):
return self.dir('models-shouting')
@property
def project_config(self):
return self.seed_quote_cfg_with({"seed-paths": [self.dir("seed-initial")]})
@use_profile("postgres")
def test__postgres__simple_copy_loud(self):
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 1)
results = self.run_dbt()
self.assertEqual(len(results), 7)
self.assertManyTablesEqual(["seed", "VIEW_MODEL", "INCREMENTAL", "MATERIALIZED", "GET_AND_REF"])
self.use_default_project({"seed-paths": [self.dir("seed-update")]})
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 1)
results = self.run_dbt()
self.assertEqual(len(results), 7)
self.assertManyTablesEqual(["seed", "VIEW_MODEL", "INCREMENTAL", "MATERIALIZED", "GET_AND_REF"])
# I give up on getting this working for Windows.
@mark.skipif(os.name == 'nt', reason='mixed-case postgres database tests are not supported on Windows')
class TestMixedCaseDatabase(BaseTestSimpleCopy):
@property
def models(self):
return self.dir('models-trivial')
def postgres_profile(self):
return {
'config': {
'send_anonymous_usage_stats': False
},
'test': {
'outputs': {
'default2': {
'type': 'postgres',
'threads': 4,
'host': self.database_host,
'port': 5432,
'user': 'root',
'pass': 'password',
'dbname': 'dbtMixedCase',
'schema': self.unique_schema()
},
},
'target': 'default2'
}
}
@property
def project_config(self):
return {'config-version': 2}
@use_profile('postgres')
def test_postgres_run_mixed_case(self):
self.run_dbt()
self.run_dbt()
class TestQuotedDatabase(BaseTestSimpleCopy):
@property
def project_config(self):
return self.seed_quote_cfg_with({
'quoting': {
'database': True,
},
"seed-paths": [self.dir("seed-initial")],
})
def seed_get_json(self, expect_pass=True):
results, output = self.run_dbt_and_capture(
['--debug', '--log-format=json', '--single-threaded', 'seed'],
expect_pass=expect_pass
)
logs = []
for line in output.split('\n'):
try:
log = json.loads(line)
except ValueError:
continue
# TODO structured logging does not put out run_state yet
# if log['extra'].get('run_state') != 'internal':
# continue
logs.append(log)
# empty lists evaluate as False
self.assertTrue(logs)
return logs
@use_profile('postgres')
def test_postgres_no_create_schemas(self):
logs = self.seed_get_json()
for log in logs:
msg = log['msg']
self.assertFalse(
'create schema if not exists' in msg,
f'did not expect schema creation: {msg}'
)

View File

@@ -1,9 +0,0 @@
{%- set tgt = ref('seed') -%}
{%- set got = adapter.get_relation(database=tgt.database, schema=tgt.schema, identifier=tgt.identifier) | string -%}
{% set replaced = got.replace('"', '-') %}
{% set expected = "-" + tgt.database.upper() + '-.-' + tgt.schema.upper() + '-.-' + tgt.identifier.upper() + '-' %}
with cte as (
select '{{ replaced }}' as name
)
select * from cte where name not like '{{ expected }}'

View File

@@ -1,13 +0,0 @@
{{
config(
materialized = "incremental"
)
}}
select * from {{ this.schema }}.seed
{% if is_incremental() %}
where id > (select max(id) from {{this}})
{% endif %}

View File

@@ -1,7 +0,0 @@
{{
config(
materialized = "table"
)
}}
select * from {{ this.schema }}.seed

View File

@@ -1,29 +0,0 @@
from test.integration.base import DBTIntegrationTest, use_profile
class TestVarcharWidening(DBTIntegrationTest):
@property
def schema(self):
return "varchar_widening_002"
@property
def models(self):
return "models"
@use_profile('postgres')
def test__postgres__varchar_widening(self):
self.run_sql_file("seed.sql")
results = self.run_dbt()
self.assertEqual(len(results), 2)
self.assertTablesEqual("seed","incremental")
self.assertTablesEqual("seed","materialized")
self.run_sql_file("update.sql")
results = self.run_dbt()
self.assertEqual(len(results), 2)
self.assertTablesEqual("seed","incremental")
self.assertTablesEqual("seed","materialized")

View File

@@ -1,2 +0,0 @@
-- should be ref('model')
select * from {{ ref(model) }}

View File

@@ -1,7 +0,0 @@
{{
config(
materialized = "ephemeral"
)
}}
select * from {{ this.schema }}.seed

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = "table"
)
}}
select gender, count(*) as ct from {{ref('ephemeral_copy')}}
group by gender
order by gender asc

View File

@@ -1,13 +0,0 @@
{{
config(
materialized = "incremental"
)
}}
select * from {{ this.schema }}.seed
{% if is_incremental() %}
where id > (select max(id) from {{this}})
{% endif %}

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = "table",
)
}}
select gender, count(*) as ct from {{ref('incremental_copy')}}
group by gender
order by gender asc

View File

@@ -1,7 +0,0 @@
{{
config(
materialized = "table"
)
}}
select * from {{ this.schema }}.seed

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = "table"
)
}}
select gender, count(*) as ct from {{ref('materialized_copy')}}
group by gender
order by gender asc

View File

@@ -1,7 +0,0 @@
{{
config(
materialized = "view"
)
}}
select * from {{ this.schema }}.seed

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = "view"
)
}}
select gender, count(*) as ct from {{ref('view_copy')}}
group by gender
order by gender asc

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = "view"
)
}}
select gender, count(*) as ct from {{ var('var_ref') }}
group by gender
order by gender asc

View File

@@ -1,119 +0,0 @@
create table {schema}.summary_expected (
gender VARCHAR(10),
ct BIGINT
);
insert into {schema}.summary_expected (gender, ct) values
('Female', 40),
('Male', 60);
create table {schema}.seed (
id BIGSERIAL PRIMARY KEY,
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(50),
gender VARCHAR(10),
ip_address VARCHAR(20)
);
insert into {schema}.seed (first_name, last_name, email, gender, ip_address) values
('Jack', 'Hunter', 'jhunter0@pbs.org', 'Male', '59.80.20.168'),
('Kathryn', 'Walker', 'kwalker1@ezinearticles.com', 'Female', '194.121.179.35'),
('Gerald', 'Ryan', 'gryan2@com.com', 'Male', '11.3.212.243'),
('Bonnie', 'Spencer', 'bspencer3@ameblo.jp', 'Female', '216.32.196.175'),
('Harold', 'Taylor', 'htaylor4@people.com.cn', 'Male', '253.10.246.136'),
('Jacqueline', 'Griffin', 'jgriffin5@t.co', 'Female', '16.13.192.220'),
('Wanda', 'Arnold', 'warnold6@google.nl', 'Female', '232.116.150.64'),
('Craig', 'Ortiz', 'cortiz7@sciencedaily.com', 'Male', '199.126.106.13'),
('Gary', 'Day', 'gday8@nih.gov', 'Male', '35.81.68.186'),
('Rose', 'Wright', 'rwright9@yahoo.co.jp', 'Female', '236.82.178.100'),
('Raymond', 'Kelley', 'rkelleya@fc2.com', 'Male', '213.65.166.67'),
('Gerald', 'Robinson', 'grobinsonb@disqus.com', 'Male', '72.232.194.193'),
('Mildred', 'Martinez', 'mmartinezc@samsung.com', 'Female', '198.29.112.5'),
('Dennis', 'Arnold', 'darnoldd@google.com', 'Male', '86.96.3.250'),
('Judy', 'Gray', 'jgraye@opensource.org', 'Female', '79.218.162.245'),
('Theresa', 'Garza', 'tgarzaf@epa.gov', 'Female', '21.59.100.54'),
('Gerald', 'Robertson', 'grobertsong@csmonitor.com', 'Male', '131.134.82.96'),
('Philip', 'Hernandez', 'phernandezh@adobe.com', 'Male', '254.196.137.72'),
('Julia', 'Gonzalez', 'jgonzalezi@cam.ac.uk', 'Female', '84.240.227.174'),
('Andrew', 'Davis', 'adavisj@patch.com', 'Male', '9.255.67.25'),
('Kimberly', 'Harper', 'kharperk@foxnews.com', 'Female', '198.208.120.253'),
('Mark', 'Martin', 'mmartinl@marketwatch.com', 'Male', '233.138.182.153'),
('Cynthia', 'Ruiz', 'cruizm@google.fr', 'Female', '18.178.187.201'),
('Samuel', 'Carroll', 'scarrolln@youtu.be', 'Male', '128.113.96.122'),
('Jennifer', 'Larson', 'jlarsono@vinaora.com', 'Female', '98.234.85.95'),
('Ashley', 'Perry', 'aperryp@rakuten.co.jp', 'Female', '247.173.114.52'),
('Howard', 'Rodriguez', 'hrodriguezq@shutterfly.com', 'Male', '231.188.95.26'),
('Amy', 'Brooks', 'abrooksr@theatlantic.com', 'Female', '141.199.174.118'),
('Louise', 'Warren', 'lwarrens@adobe.com', 'Female', '96.105.158.28'),
('Tina', 'Watson', 'twatsont@myspace.com', 'Female', '251.142.118.177'),
('Janice', 'Kelley', 'jkelleyu@creativecommons.org', 'Female', '239.167.34.233'),
('Terry', 'Mccoy', 'tmccoyv@bravesites.com', 'Male', '117.201.183.203'),
('Jeffrey', 'Morgan', 'jmorganw@surveymonkey.com', 'Male', '78.101.78.149'),
('Louis', 'Harvey', 'lharveyx@sina.com.cn', 'Male', '51.50.0.167'),
('Philip', 'Miller', 'pmillery@samsung.com', 'Male', '103.255.222.110'),
('Willie', 'Marshall', 'wmarshallz@ow.ly', 'Male', '149.219.91.68'),
('Patrick', 'Lopez', 'plopez10@redcross.org', 'Male', '250.136.229.89'),
('Adam', 'Jenkins', 'ajenkins11@harvard.edu', 'Male', '7.36.112.81'),
('Benjamin', 'Cruz', 'bcruz12@linkedin.com', 'Male', '32.38.98.15'),
('Ruby', 'Hawkins', 'rhawkins13@gmpg.org', 'Female', '135.171.129.255'),
('Carlos', 'Barnes', 'cbarnes14@a8.net', 'Male', '240.197.85.140'),
('Ruby', 'Griffin', 'rgriffin15@bravesites.com', 'Female', '19.29.135.24'),
('Sean', 'Mason', 'smason16@icq.com', 'Male', '159.219.155.249'),
('Anthony', 'Payne', 'apayne17@utexas.edu', 'Male', '235.168.199.218'),
('Steve', 'Cruz', 'scruz18@pcworld.com', 'Male', '238.201.81.198'),
('Anthony', 'Garcia', 'agarcia19@flavors.me', 'Male', '25.85.10.18'),
('Doris', 'Lopez', 'dlopez1a@sphinn.com', 'Female', '245.218.51.238'),
('Susan', 'Nichols', 'snichols1b@freewebs.com', 'Female', '199.99.9.61'),
('Wanda', 'Ferguson', 'wferguson1c@yahoo.co.jp', 'Female', '236.241.135.21'),
('Andrea', 'Pierce', 'apierce1d@google.co.uk', 'Female', '132.40.10.209'),
('Lawrence', 'Phillips', 'lphillips1e@jugem.jp', 'Male', '72.226.82.87'),
('Judy', 'Gilbert', 'jgilbert1f@multiply.com', 'Female', '196.250.15.142'),
('Eric', 'Williams', 'ewilliams1g@joomla.org', 'Male', '222.202.73.126'),
('Ralph', 'Romero', 'rromero1h@sogou.com', 'Male', '123.184.125.212'),
('Jean', 'Wilson', 'jwilson1i@ocn.ne.jp', 'Female', '176.106.32.194'),
('Lori', 'Reynolds', 'lreynolds1j@illinois.edu', 'Female', '114.181.203.22'),
('Donald', 'Moreno', 'dmoreno1k@bbc.co.uk', 'Male', '233.249.97.60'),
('Steven', 'Berry', 'sberry1l@eepurl.com', 'Male', '186.193.50.50'),
('Theresa', 'Shaw', 'tshaw1m@people.com.cn', 'Female', '120.37.71.222'),
('John', 'Stephens', 'jstephens1n@nationalgeographic.com', 'Male', '191.87.127.115'),
('Richard', 'Jacobs', 'rjacobs1o@state.tx.us', 'Male', '66.210.83.155'),
('Andrew', 'Lawson', 'alawson1p@over-blog.com', 'Male', '54.98.36.94'),
('Peter', 'Morgan', 'pmorgan1q@rambler.ru', 'Male', '14.77.29.106'),
('Nicole', 'Garrett', 'ngarrett1r@zimbio.com', 'Female', '21.127.74.68'),
('Joshua', 'Kim', 'jkim1s@edublogs.org', 'Male', '57.255.207.41'),
('Ralph', 'Roberts', 'rroberts1t@people.com.cn', 'Male', '222.143.131.109'),
('George', 'Montgomery', 'gmontgomery1u@smugmug.com', 'Male', '76.75.111.77'),
('Gerald', 'Alvarez', 'galvarez1v@flavors.me', 'Male', '58.157.186.194'),
('Donald', 'Olson', 'dolson1w@whitehouse.gov', 'Male', '69.65.74.135'),
('Carlos', 'Morgan', 'cmorgan1x@pbs.org', 'Male', '96.20.140.87'),
('Aaron', 'Stanley', 'astanley1y@webnode.com', 'Male', '163.119.217.44'),
('Virginia', 'Long', 'vlong1z@spiegel.de', 'Female', '204.150.194.182'),
('Robert', 'Berry', 'rberry20@tripadvisor.com', 'Male', '104.19.48.241'),
('Antonio', 'Brooks', 'abrooks21@unesco.org', 'Male', '210.31.7.24'),
('Ruby', 'Garcia', 'rgarcia22@ovh.net', 'Female', '233.218.162.214'),
('Jack', 'Hanson', 'jhanson23@blogtalkradio.com', 'Male', '31.55.46.199'),
('Kathryn', 'Nelson', 'knelson24@walmart.com', 'Female', '14.189.146.41'),
('Jason', 'Reed', 'jreed25@printfriendly.com', 'Male', '141.189.89.255'),
('George', 'Coleman', 'gcoleman26@people.com.cn', 'Male', '81.189.221.144'),
('Rose', 'King', 'rking27@ucoz.com', 'Female', '212.123.168.231'),
('Johnny', 'Holmes', 'jholmes28@boston.com', 'Male', '177.3.93.188'),
('Katherine', 'Gilbert', 'kgilbert29@altervista.org', 'Female', '199.215.169.61'),
('Joshua', 'Thomas', 'jthomas2a@ustream.tv', 'Male', '0.8.205.30'),
('Julie', 'Perry', 'jperry2b@opensource.org', 'Female', '60.116.114.192'),
('Richard', 'Perry', 'rperry2c@oracle.com', 'Male', '181.125.70.232'),
('Kenneth', 'Ruiz', 'kruiz2d@wikimedia.org', 'Male', '189.105.137.109'),
('Jose', 'Morgan', 'jmorgan2e@webnode.com', 'Male', '101.134.215.156'),
('Donald', 'Campbell', 'dcampbell2f@goo.ne.jp', 'Male', '102.120.215.84'),
('Debra', 'Collins', 'dcollins2g@uol.com.br', 'Female', '90.13.153.235'),
('Jesse', 'Johnson', 'jjohnson2h@stumbleupon.com', 'Male', '225.178.125.53'),
('Elizabeth', 'Stone', 'estone2i@histats.com', 'Female', '123.184.126.221'),
('Angela', 'Rogers', 'arogers2j@goodreads.com', 'Female', '98.104.132.187'),
('Emily', 'Dixon', 'edixon2k@mlb.com', 'Female', '39.190.75.57'),
('Albert', 'Scott', 'ascott2l@tinypic.com', 'Male', '40.209.13.189'),
('Barbara', 'Peterson', 'bpeterson2m@ow.ly', 'Female', '75.249.136.180'),
('Adam', 'Greene', 'agreene2n@fastcompany.com', 'Male', '184.173.109.144'),
('Earl', 'Sanders', 'esanders2o@hc360.com', 'Male', '247.34.90.117'),
('Angela', 'Brooks', 'abrooks2p@mtv.com', 'Female', '10.63.249.126'),
('Harold', 'Foster', 'hfoster2q@privacy.gov.au', 'Male', '139.214.40.244'),
('Carl', 'Meyer', 'cmeyer2r@disqus.com', 'Male', '204.117.7.88');

View File

@@ -1,136 +0,0 @@
import os
from dbt.exceptions import CompilationException
from test.integration.base import DBTIntegrationTest, use_profile
class TestSimpleReference(DBTIntegrationTest):
@property
def schema(self):
return "simple_reference_003"
@property
def models(self):
return "models"
@property
def project_config(self):
return {
'config-version': 2,
'vars': {
'test': {
'var_ref': '{{ ref("view_copy") }}',
},
},
}
def setUp(self):
super().setUp()
# self.use_default_config()
self.run_sql_file("seed.sql")
@use_profile('postgres')
def test__postgres__simple_reference(self):
results = self.run_dbt()
# ephemeral_copy doesn't show up in results
self.assertEqual(len(results), 8)
# Copies should match
self.assertTablesEqual("seed","incremental_copy")
self.assertTablesEqual("seed","materialized_copy")
self.assertTablesEqual("seed","view_copy")
# Summaries should match
self.assertTablesEqual("summary_expected","incremental_summary")
self.assertTablesEqual("summary_expected","materialized_summary")
self.assertTablesEqual("summary_expected","view_summary")
self.assertTablesEqual("summary_expected","ephemeral_summary")
self.assertTablesEqual("summary_expected","view_using_ref")
self.run_sql_file("update.sql")
results = self.run_dbt()
self.assertEqual(len(results), 8)
# Copies should match
self.assertTablesEqual("seed","incremental_copy")
self.assertTablesEqual("seed","materialized_copy")
self.assertTablesEqual("seed","view_copy")
# Summaries should match
self.assertTablesEqual("summary_expected","incremental_summary")
self.assertTablesEqual("summary_expected","materialized_summary")
self.assertTablesEqual("summary_expected","view_summary")
self.assertTablesEqual("summary_expected","ephemeral_summary")
self.assertTablesEqual("summary_expected","view_using_ref")
@use_profile('postgres')
def test__postgres__simple_reference_with_models(self):
# Run materialized_copy, ephemeral_copy, and their dependents
# ephemeral_copy should not actually be materialized b/c it is ephemeral
results = self.run_dbt(
['run', '--models', 'materialized_copy', 'ephemeral_copy']
)
self.assertEqual(len(results), 1)
# Copies should match
self.assertTablesEqual("seed","materialized_copy")
created_models = self.get_models_in_schema()
self.assertTrue('materialized_copy' in created_models)
@use_profile('postgres')
def test__postgres__simple_reference_with_models_and_children(self):
# Run materialized_copy, ephemeral_copy, and their dependents
# ephemeral_copy should not actually be materialized b/c it is ephemeral
# the dependent ephemeral_summary, however, should be materialized as a table
results = self.run_dbt(
['run', '--models', 'materialized_copy+', 'ephemeral_copy+']
)
self.assertEqual(len(results), 3)
# Copies should match
self.assertTablesEqual("seed","materialized_copy")
# Summaries should match
self.assertTablesEqual("summary_expected","materialized_summary")
self.assertTablesEqual("summary_expected","ephemeral_summary")
created_models = self.get_models_in_schema()
self.assertFalse('incremental_copy' in created_models)
self.assertFalse('incremental_summary' in created_models)
self.assertFalse('view_copy' in created_models)
self.assertFalse('view_summary' in created_models)
# make sure this wasn't errantly materialized
self.assertFalse('ephemeral_copy' in created_models)
self.assertTrue('materialized_copy' in created_models)
self.assertTrue('materialized_summary' in created_models)
self.assertEqual(created_models['materialized_copy'], 'table')
self.assertEqual(created_models['materialized_summary'], 'table')
self.assertTrue('ephemeral_summary' in created_models)
self.assertEqual(created_models['ephemeral_summary'], 'table')
class TestErrorReference(DBTIntegrationTest):
@property
def schema(self):
return "simple_reference_003"
@property
def models(self):
return "invalid-models"
@use_profile('postgres')
def test_postgres_undefined_value(self):
with self.assertRaises(CompilationException) as exc:
self.run_dbt(['compile'])
path = os.path.join('invalid-models', 'descendant.sql')
self.assertIn(path, str(exc.exception))

View File

@@ -1,107 +0,0 @@
truncate table {schema}.summary_expected;
insert into {schema}.summary_expected (gender, ct) values
('Female', 94),
('Male', 106);
insert into {schema}.seed (first_name, last_name, email, gender, ip_address) values
('Michael', 'Perez', 'mperez0@chronoengine.com', 'Male', '106.239.70.175'),
('Shawn', 'Mccoy', 'smccoy1@reddit.com', 'Male', '24.165.76.182'),
('Kathleen', 'Payne', 'kpayne2@cargocollective.com', 'Female', '113.207.168.106'),
('Jimmy', 'Cooper', 'jcooper3@cargocollective.com', 'Male', '198.24.63.114'),
('Katherine', 'Rice', 'krice4@typepad.com', 'Female', '36.97.186.238'),
('Sarah', 'Ryan', 'sryan5@gnu.org', 'Female', '119.117.152.40'),
('Martin', 'Mcdonald', 'mmcdonald6@opera.com', 'Male', '8.76.38.115'),
('Frank', 'Robinson', 'frobinson7@wunderground.com', 'Male', '186.14.64.194'),
('Jennifer', 'Franklin', 'jfranklin8@mail.ru', 'Female', '91.216.3.131'),
('Henry', 'Welch', 'hwelch9@list-manage.com', 'Male', '176.35.182.168'),
('Fred', 'Snyder', 'fsnydera@reddit.com', 'Male', '217.106.196.54'),
('Amy', 'Dunn', 'adunnb@nba.com', 'Female', '95.39.163.195'),
('Kathleen', 'Meyer', 'kmeyerc@cdc.gov', 'Female', '164.142.188.214'),
('Steve', 'Ferguson', 'sfergusond@reverbnation.com', 'Male', '138.22.204.251'),
('Teresa', 'Hill', 'thille@dion.ne.jp', 'Female', '82.84.228.235'),
('Amanda', 'Harper', 'aharperf@mail.ru', 'Female', '16.123.56.176'),
('Kimberly', 'Ray', 'krayg@xing.com', 'Female', '48.66.48.12'),
('Johnny', 'Knight', 'jknighth@jalbum.net', 'Male', '99.30.138.123'),
('Virginia', 'Freeman', 'vfreemani@tiny.cc', 'Female', '225.172.182.63'),
('Anna', 'Austin', 'aaustinj@diigo.com', 'Female', '62.111.227.148'),
('Willie', 'Hill', 'whillk@mail.ru', 'Male', '0.86.232.249'),
('Sean', 'Harris', 'sharrisl@zdnet.com', 'Male', '117.165.133.249'),
('Mildred', 'Adams', 'madamsm@usatoday.com', 'Female', '163.44.97.46'),
('David', 'Graham', 'dgrahamn@zimbio.com', 'Male', '78.13.246.202'),
('Victor', 'Hunter', 'vhuntero@ehow.com', 'Male', '64.156.179.139'),
('Aaron', 'Ruiz', 'aruizp@weebly.com', 'Male', '34.194.68.78'),
('Benjamin', 'Brooks', 'bbrooksq@jalbum.net', 'Male', '20.192.189.107'),
('Lisa', 'Wilson', 'lwilsonr@japanpost.jp', 'Female', '199.152.130.217'),
('Benjamin', 'King', 'bkings@comsenz.com', 'Male', '29.189.189.213'),
('Christina', 'Williamson', 'cwilliamsont@boston.com', 'Female', '194.101.52.60'),
('Jane', 'Gonzalez', 'jgonzalezu@networksolutions.com', 'Female', '109.119.12.87'),
('Thomas', 'Owens', 'towensv@psu.edu', 'Male', '84.168.213.153'),
('Katherine', 'Moore', 'kmoorew@naver.com', 'Female', '183.150.65.24'),
('Jennifer', 'Stewart', 'jstewartx@yahoo.com', 'Female', '38.41.244.58'),
('Sara', 'Tucker', 'stuckery@topsy.com', 'Female', '181.130.59.184'),
('Harold', 'Ortiz', 'hortizz@vkontakte.ru', 'Male', '198.231.63.137'),
('Shirley', 'James', 'sjames10@yelp.com', 'Female', '83.27.160.104'),
('Dennis', 'Johnson', 'djohnson11@slate.com', 'Male', '183.178.246.101'),
('Louise', 'Weaver', 'lweaver12@china.com.cn', 'Female', '1.14.110.18'),
('Maria', 'Armstrong', 'marmstrong13@prweb.com', 'Female', '181.142.1.249'),
('Gloria', 'Cruz', 'gcruz14@odnoklassniki.ru', 'Female', '178.232.140.243'),
('Diana', 'Spencer', 'dspencer15@ifeng.com', 'Female', '125.153.138.244'),
('Kelly', 'Nguyen', 'knguyen16@altervista.org', 'Female', '170.13.201.119'),
('Jane', 'Rodriguez', 'jrodriguez17@biblegateway.com', 'Female', '12.102.249.81'),
('Scott', 'Brown', 'sbrown18@geocities.jp', 'Male', '108.174.99.192'),
('Norma', 'Cruz', 'ncruz19@si.edu', 'Female', '201.112.156.197'),
('Marie', 'Peters', 'mpeters1a@mlb.com', 'Female', '231.121.197.144'),
('Lillian', 'Carr', 'lcarr1b@typepad.com', 'Female', '206.179.164.163'),
('Judy', 'Nichols', 'jnichols1c@t-online.de', 'Female', '158.190.209.194'),
('Billy', 'Long', 'blong1d@yahoo.com', 'Male', '175.20.23.160'),
('Howard', 'Reid', 'hreid1e@exblog.jp', 'Male', '118.99.196.20'),
('Laura', 'Ferguson', 'lferguson1f@tuttocitta.it', 'Female', '22.77.87.110'),
('Anne', 'Bailey', 'abailey1g@geocities.com', 'Female', '58.144.159.245'),
('Rose', 'Morgan', 'rmorgan1h@ehow.com', 'Female', '118.127.97.4'),
('Nicholas', 'Reyes', 'nreyes1i@google.ru', 'Male', '50.135.10.252'),
('Joshua', 'Kennedy', 'jkennedy1j@house.gov', 'Male', '154.6.163.209'),
('Paul', 'Watkins', 'pwatkins1k@upenn.edu', 'Male', '177.236.120.87'),
('Kathryn', 'Kelly', 'kkelly1l@businessweek.com', 'Female', '70.28.61.86'),
('Adam', 'Armstrong', 'aarmstrong1m@techcrunch.com', 'Male', '133.235.24.202'),
('Norma', 'Wallace', 'nwallace1n@phoca.cz', 'Female', '241.119.227.128'),
('Timothy', 'Reyes', 'treyes1o@google.cn', 'Male', '86.28.23.26'),
('Elizabeth', 'Patterson', 'epatterson1p@sun.com', 'Female', '139.97.159.149'),
('Edward', 'Gomez', 'egomez1q@google.fr', 'Male', '158.103.108.255'),
('David', 'Cox', 'dcox1r@friendfeed.com', 'Male', '206.80.80.58'),
('Brenda', 'Wood', 'bwood1s@over-blog.com', 'Female', '217.207.44.179'),
('Adam', 'Walker', 'awalker1t@blogs.com', 'Male', '253.211.54.93'),
('Michael', 'Hart', 'mhart1u@wix.com', 'Male', '230.206.200.22'),
('Jesse', 'Ellis', 'jellis1v@google.co.uk', 'Male', '213.254.162.52'),
('Janet', 'Powell', 'jpowell1w@un.org', 'Female', '27.192.194.86'),
('Helen', 'Ford', 'hford1x@creativecommons.org', 'Female', '52.160.102.168'),
('Gerald', 'Carpenter', 'gcarpenter1y@about.me', 'Male', '36.30.194.218'),
('Kathryn', 'Oliver', 'koliver1z@army.mil', 'Female', '202.63.103.69'),
('Alan', 'Berry', 'aberry20@gov.uk', 'Male', '246.157.112.211'),
('Harry', 'Andrews', 'handrews21@ameblo.jp', 'Male', '195.108.0.12'),
('Andrea', 'Hall', 'ahall22@hp.com', 'Female', '149.162.163.28'),
('Barbara', 'Wells', 'bwells23@behance.net', 'Female', '224.70.72.1'),
('Anne', 'Wells', 'awells24@apache.org', 'Female', '180.168.81.153'),
('Harry', 'Harper', 'hharper25@rediff.com', 'Male', '151.87.130.21'),
('Jack', 'Ray', 'jray26@wufoo.com', 'Male', '220.109.38.178'),
('Phillip', 'Hamilton', 'phamilton27@joomla.org', 'Male', '166.40.47.30'),
('Shirley', 'Hunter', 'shunter28@newsvine.com', 'Female', '97.209.140.194'),
('Arthur', 'Daniels', 'adaniels29@reuters.com', 'Male', '5.40.240.86'),
('Virginia', 'Rodriguez', 'vrodriguez2a@walmart.com', 'Female', '96.80.164.184'),
('Christina', 'Ryan', 'cryan2b@hibu.com', 'Female', '56.35.5.52'),
('Theresa', 'Mendoza', 'tmendoza2c@vinaora.com', 'Female', '243.42.0.210'),
('Jason', 'Cole', 'jcole2d@ycombinator.com', 'Male', '198.248.39.129'),
('Phillip', 'Bryant', 'pbryant2e@rediff.com', 'Male', '140.39.116.251'),
('Adam', 'Torres', 'atorres2f@sun.com', 'Male', '101.75.187.135'),
('Margaret', 'Johnston', 'mjohnston2g@ucsd.edu', 'Female', '159.30.69.149'),
('Paul', 'Payne', 'ppayne2h@hhs.gov', 'Male', '199.234.140.220'),
('Todd', 'Willis', 'twillis2i@businessweek.com', 'Male', '191.59.136.214'),
('Willie', 'Oliver', 'woliver2j@noaa.gov', 'Male', '44.212.35.197'),
('Frances', 'Robertson', 'frobertson2k@go.com', 'Female', '31.117.65.136'),
('Gregory', 'Hawkins', 'ghawkins2l@joomla.org', 'Male', '91.3.22.49'),
('Lisa', 'Perkins', 'lperkins2m@si.edu', 'Female', '145.95.31.186'),
('Jacqueline', 'Anderson', 'janderson2n@cargocollective.com', 'Female', '14.176.0.187'),
('Shirley', 'Diaz', 'sdiaz2o@ucla.edu', 'Female', '207.12.95.46'),
('Nicole', 'Meyer', 'nmeyer2p@flickr.com', 'Female', '231.79.115.13'),
('Mary', 'Gray', 'mgray2q@constantcontact.com', 'Female', '210.116.64.253'),
('Jean', 'Mcdonald', 'jmcdonald2r@baidu.com', 'Female', '122.239.235.117');

View File

@@ -257,6 +257,16 @@ class TestSimpleDependencyBadProfile(DBTIntegrationTest):
def models(self):
return "models"
@property
def project_config(self):
return {
'config-version': 2,
'models': {
'+any_config': "{{ target.name }}",
'+enabled': "{{ target.name in ['redshift', 'postgres'] | as_bool }}"
}
}
def postgres_profile(self):
# Need to set the environment variable here initially because
# the unittest setup does a load_config.

View File

@@ -1,9 +0,0 @@
{# Same as ´users´ model, but with dots in the model name #}
{{
config(
materialized = 'table',
tags=['dots']
)
}}
select * from {{ ref('base_users') }}

View File

@@ -1,9 +0,0 @@
{{
config(
materialized = 'ephemeral',
tags = ['base']
)
}}
select * from {{ source('raw', 'seed') }}

Some files were not shown because too many files have changed in this diff.