Compare commits

...

25 Commits

Author SHA1 Message Date
Anders Swanson
586cba243b tryna get tempbo working 2023-09-14 19:14:00 -04:00
github-actions[bot]
3885024873 Fix test_numeric_values of the show test (#8644) (#8645)
(cherry picked from commit 26c7675c28)

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
2023-09-13 16:09:22 -04:00
github-actions[bot]
8232feb616 split up test class (#8610) (#8628)
(cherry picked from commit 5182e3c40c)

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2023-09-13 09:08:47 -05:00
github-actions[bot]
2eb24685bb compile --no-inject-ephemeral-ctes flag (#8482) (#8620)
(cherry picked from commit e24a952e98)

Co-authored-by: Ben Mosher <me@benmosher.com>
2023-09-12 14:01:38 -05:00
github-actions[bot]
227c2c3f0c make UnparsedVersion.__lt__ order-agnostic (#8559) (#8579) 2023-09-12 19:30:15 +01:00
Emily Rockman
a20b09b1e5 Preserve decimal places for dbt show (#8561) (#8619)
* update `Number` class to handle integer values (#8306)

* add show test for json data

* oh changie my changie

* revert unnecessary change to fixture

* keep decimal class for precision methods, but return __int__ value

* jerco updates

* update integer type

* update other tests

* Update .changes/unreleased/Fixes-20230803-093502.yaml

---------



* account for integer vs number on table merges

* add tests for combining number with integer.

* add unit test when nulls are added

* can't cast None as an Integer

* fix null tests

---------

Co-authored-by: dave-connors-3 <73915542+dave-connors-3@users.noreply.github.com>
Co-authored-by: Dave Connors <dave.connors@fishtownanalytics.com>
2023-09-11 13:54:11 -07:00
FishtownBuildBot
e0f811222e [Automated] Merged prep-release/1.6.2_6114784627 into target 1.6.latest during release process 2023-09-07 17:00:11 -05:00
Github Build Bot
7c020278a3 Bumping version to 1.6.2 and generate changelog 2023-09-07 21:10:49 +00:00
Emily Rockman
1e875fea3e Revert "Preserve decimal places for dbt show (#8561) (#8588)" (#8591)
This reverts commit e150626612.
2023-09-07 15:27:43 -05:00
github-actions[bot]
e150626612 Preserve decimal places for dbt show (#8561) (#8588)
* update `Number` class to handle integer values (#8306)

* add show test for json data

* oh changie my changie

* revert unnecessary change to fixture

* keep decimal class for precision methods, but return __int__ value

* jerco updates

* update integer type

* update other tests

* Update .changes/unreleased/Fixes-20230803-093502.yaml

---------

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>

* account for integer vs number on table merges

* add tests for combining number with integer.

* add unit test when nulls are added

* can't cast None as an Integer

* fix null tests

---------

Co-authored-by: dave-connors-3 <73915542+dave-connors-3@users.noreply.github.com>
Co-authored-by: Dave Connors <dave.connors@fishtownanalytics.com>
(cherry picked from commit be94bf1f3c)

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
2023-09-07 11:48:35 -05:00
Kshitij Aranke
48ba14c89f Backport "Fix #8544: Parse the correct schema version from manifest" (#8587) 2023-09-07 16:03:20 +01:00
Emily Rockman
bfb054082f Support dbt-cloud config dict in dbt_project.yml (#8527) (#8555)
* first pass at adding dbt-cloud config

* changelog

* fix test, add direct validation
2023-09-05 11:03:51 -05:00
Kshitij Aranke
78bb854d0a Copy dir if symlink fails (#7447) (#8548)
Co-authored-by: Anju <anjutiwari5@gmail.com>
2023-09-05 12:41:09 +01:00
github-actions[bot]
7faebbcfc3 Fix ambiguous reference error for duplicate model names across packages with tests (#8488) (#8497) 2023-08-31 16:09:09 -04:00
Gerda Shank
5372157ac4 Fix snapshot success message to display "INSERT 0 1" (for example) instead of success (#8524) (#8530) 2023-08-31 14:01:03 -04:00
Peter Webb
0d64bd947f Backport 8210 (#8500)
* Replaced the FirstRunResultError and AfterFirstRunResultError events with RunResultError.

* Attempts at reasonable unit tests.

* Restore event manager after unit test.
2023-08-30 14:29:08 -04:00
github-actions[bot]
011f19f07e Safely remove external nodes from manifest (#8495) (#8510) 2023-08-29 12:33:11 -04:00
FishtownBuildBot
2764fe7d77 [Automated] Merged prep-release/1.6.1_5955246858 into target 1.6.latest during release process 2023-08-23 14:39:35 -05:00
Github Build Bot
de646cc23a Bumping version to 1.6.1 and generate changelog 2023-08-23 18:48:13 +00:00
github-actions[bot]
2191deb01f revert update agate for int (#8478) (#8481)
(cherry picked from commit 4d3c6d9c7c)

Co-authored-by: Michelle Ark <MichelleArk@users.noreply.github.com>
2023-08-23 14:42:59 -04:00
Emily Rockman
03a52317d6 split up tests into classes (#8475)
# Conflicts:
#	tests/functional/show/test_show.py
2023-08-23 11:27:36 -05:00
Quigley Malcolm
4cfc662cbf [CT-3013] Fix parsing of window_groupings (#8454) (#8455)
* Update semantic model parsing tests to check measure non_additive_dimension spec

* Make `window_groupings` default to empty list if not specified on `non_additive_dimension`

* Add changie doc for `window_groupings`  parsing fix
2023-08-22 14:33:22 -07:00
github-actions[bot]
e2d77fff9e [Backport 1.6.latest] Check for existing_relation immediately prior to renaming (#8465)
* Check for existing_relation immediately prior to renaming (#8193)

(cherry picked from commit 49560bf2a2)

* Changie

---------

Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
2023-08-22 13:08:54 -04:00
github-actions[bot]
1f003f5881 update Number class to handle integer values (#8306) (#8457)
* add show test for json data

* oh changie my changie

* revert unnecessary change to fixture

* keep decimal class for precision methods, but return __int__ value

* jerco updates

* update integer type

* update other tests

* Update .changes/unreleased/Fixes-20230803-093502.yaml

---------

Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
(cherry picked from commit 661623f9f7)

Co-authored-by: dave-connors-3 <73915542+dave-connors-3@users.noreply.github.com>
2023-08-21 11:49:06 -05:00
FishtownBuildBot
435c85ca8f [Automated] Merged prep-release/1.6.1rc1_5905000259 into target 1.6.latest during release process 2023-08-18 12:30:12 -05:00
68 changed files with 959 additions and 523 deletions

View File

@@ -1,5 +1,5 @@
[bumpversion]
current_version = 1.6.1rc1
current_version = 1.6.2
parse = (?P<major>[\d]+) # major version number
\.(?P<minor>[\d]+) # minor version number
\.(?P<patch>[\d]+) # patch version number

View File

@@ -1,4 +1,4 @@
## dbt-core 1.6.1-rc1 - August 18, 2023
## dbt-core 1.6.1 - August 23, 2023
### Fixes
@@ -9,6 +9,7 @@
- Fix: DbtInternalError after model that previously ref'd external model is deleted ([#8375](https://github.com/dbt-labs/dbt-core/issues/8375))
- Fix using list command with path selector and project-dir ([#8385](https://github.com/dbt-labs/dbt-core/issues/8385))
- Remedy performance regression by only writing run_results.json once. ([#8360](https://github.com/dbt-labs/dbt-core/issues/8360))
- Ensure parsing does not break when `window_groupings` is not specified for `non_additive_dimension` ([#8453](https://github.com/dbt-labs/dbt-core/issues/8453))
### Docs
@@ -23,3 +24,4 @@
- add tracking for plugin.get_nodes calls ([#8344](https://github.com/dbt-labs/dbt-core/issues/8344))
- add internal flag: --no-partial-parse-file-diff to inform whether to compute a file diff during partial parsing ([#8363](https://github.com/dbt-labs/dbt-core/issues/8363))
- Use python version 3.10.7 in Docker image. ([#8444](https://github.com/dbt-labs/dbt-core/issues/8444))
- Check for existing_relation immediately prior to renaming ([#7781](https://github.com/dbt-labs/dbt-core/issues/7781))

View File

@@ -1,6 +0,0 @@
kind: Docs
body: Display contract and column constraints on the model page
time: 2023-08-04T13:18:15.627005-05:00
custom:
Author: emmyoop
Issue: "433"

View File

@@ -1,6 +0,0 @@
kind: Docs
body: Display semantic model details in docs
time: 2023-08-07T15:25:48.711627-05:00
custom:
Author: emmyoop
Issue: "431"

View File

@@ -1,6 +0,0 @@
kind: Fixes
body: Add status to Parse Inline Error
time: 2023-07-20T12:27:23.085084-07:00
custom:
Author: ChenyuLInx
Issue: "8173"

View File

@@ -1,6 +0,0 @@
kind: Fixes
body: Fix retry not working with log-file-max-bytes
time: 2023-08-02T14:15:56.306027-07:00
custom:
Author: ChenyuLInx
Issue: "8297"

View File

@@ -1,6 +0,0 @@
kind: Fixes
body: Detect changes to model access, version, or latest_version in state:modified
time: 2023-08-06T22:23:19.166334-04:00
custom:
Author: michelleark
Issue: "8189"

View File

@@ -1,6 +0,0 @@
kind: Fixes
body: fix fqn-selection for external versioned models
time: 2023-08-11T20:41:44.725144-04:00
custom:
Author: michelleark
Issue: "8374"

View File

@@ -1,7 +0,0 @@
kind: Fixes
body: 'Fix: DbtInternalError after model that previously ref''d external model is
deleted'
time: 2023-08-11T21:20:08.145554-04:00
custom:
Author: michelleark
Issue: "8375"

View File

@@ -1,6 +0,0 @@
kind: Fixes
body: Fix using list command with path selector and project-dir
time: 2023-08-14T14:57:02.02816-04:00
custom:
Author: gshank
Issue: "8385"

View File

@@ -1,6 +0,0 @@
kind: Fixes
body: Remedy performance regression by only writing run_results.json once.
time: 2023-08-15T10:44:44.836991-04:00
custom:
Author: peterallenwebb
Issue: "8360"

View File

@@ -1,6 +0,0 @@
kind: Under the Hood
body: Use python version 3.10.7 in Docker image.
time: 2023-08-17T13:09:15.936349-05:00
custom:
Author: McKnight-42
Issue: "8444"

View File

@@ -1,6 +0,0 @@
kind: Under the Hood
body: Refactor flaky test pp_versioned_models
time: 2023-07-19T12:46:11.972481-04:00
custom:
Author: gshank
Issue: "7781"

View File

@@ -1,6 +0,0 @@
kind: Under the Hood
body: format exception from dbtPlugin.initialize
time: 2023-07-19T16:33:34.586377-04:00
custom:
Author: michelleark
Issue: "8152"

View File

@@ -1,6 +0,0 @@
kind: Under the Hood
body: Update manifest v10
time: 2023-08-07T16:45:09.712744-04:00
custom:
Author: gshank
Issue: "8333"

View File

@@ -1,6 +0,0 @@
kind: Under the Hood
body: add tracking for plugin.get_nodes calls
time: 2023-08-09T09:48:34.819445-04:00
custom:
Author: michelleark
Issue: "8344"

View File

@@ -1,7 +0,0 @@
kind: Under the Hood
body: 'add internal flag: --no-partial-parse-file-diff to inform whether to compute
a file diff during partial parsing'
time: 2023-08-11T10:09:02.832241-04:00
custom:
Author: michelleark
Issue: "8363"

.changes/1.6.2.md Normal file
View File

@@ -0,0 +1,20 @@
## dbt-core 1.6.2 - September 07, 2023
### Breaking Changes
- Removed the FirstRunResultError and AfterFirstRunResultError event types, using the existing RunResultError in their place. ([#7963](https://github.com/dbt-labs/dbt-core/issues/7963))
### Features
- Accept a `dbt-cloud` config in dbt_project.yml ([#8438](https://github.com/dbt-labs/dbt-core/issues/8438))
### Fixes
- Copy dir during `dbt deps` if symlink fails ([#7428](https://github.com/dbt-labs/dbt-core/issues/7428), [#8223](https://github.com/dbt-labs/dbt-core/issues/8223))
- fix ambiguous reference error for tests and versions when model name is duplicated across packages ([#8327](https://github.com/dbt-labs/dbt-core/issues/8327), [#8493](https://github.com/dbt-labs/dbt-core/issues/8493))
- Fix "Internal Error: Expected node <unique-id> not found in manifest" when depends_on set on ModelNodeArgs ([#8506](https://github.com/dbt-labs/dbt-core/issues/8506))
- Fix snapshot success message ([#7583](https://github.com/dbt-labs/dbt-core/issues/7583))
- Parse the correct schema version from manifest ([#8544](https://github.com/dbt-labs/dbt-core/issues/8544))
### Contributors
- [@anjutiwari](https://github.com/anjutiwari) ([#7428](https://github.com/dbt-labs/dbt-core/issues/7428), [#8223](https://github.com/dbt-labs/dbt-core/issues/8223))

View File

@@ -0,0 +1,6 @@
kind: Features
body: Add --no-inject-ephemeral-ctes flag for `compile` command, for usage by linting.
time: 2023-08-23T14:04:07.617476-04:00
custom:
Author: benmosher
Issue: "8480"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: Add explicit support for integers for the show command
time: 2023-08-03T09:35:02.163968-05:00
custom:
Author: dave-connors-3
Issue: "8153"

View File

@@ -0,0 +1,6 @@
kind: Fixes
body: make version comparison insensitive to order
time: 2023-09-06T14:22:13.114549-04:00
custom:
Author: michelleark
Issue: "8571"

View File

@@ -0,0 +1,6 @@
kind: Under the Hood
body: Fix test_numeric_values to look for more specific strings
time: 2023-09-13T14:16:51.453247-04:00
custom:
Author: gshank
Issue: "8470"

View File

@@ -5,7 +5,29 @@
- "Breaking changes" listed under a version may require action from end users or external maintainers when upgrading to that version.
- Do not edit this file directly. This file is auto-generated using [changie](https://github.com/miniscruff/changie). For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry)
## dbt-core 1.6.1-rc1 - August 18, 2023
## dbt-core 1.6.2 - September 07, 2023
### Breaking Changes
- Removed the FirstRunResultError and AfterFirstRunResultError event types, using the existing RunResultError in their place. ([#7963](https://github.com/dbt-labs/dbt-core/issues/7963))
### Features
- Accept a `dbt-cloud` config in dbt_project.yml ([#8438](https://github.com/dbt-labs/dbt-core/issues/8438))
### Fixes
- Copy dir during `dbt deps` if symlink fails ([#7428](https://github.com/dbt-labs/dbt-core/issues/7428), [#8223](https://github.com/dbt-labs/dbt-core/issues/8223))
- fix ambiguous reference error for tests and versions when model name is duplicated across packages ([#8327](https://github.com/dbt-labs/dbt-core/issues/8327), [#8493](https://github.com/dbt-labs/dbt-core/issues/8493))
- Fix "Internal Error: Expected node <unique-id> not found in manifest" when depends_on set on ModelNodeArgs ([#8506](https://github.com/dbt-labs/dbt-core/issues/8506))
- Fix snapshot success message ([#7583](https://github.com/dbt-labs/dbt-core/issues/7583))
- Parse the correct schema version from manifest ([#8544](https://github.com/dbt-labs/dbt-core/issues/8544))
### Contributors
- [@anjutiwari](https://github.com/anjutiwari) ([#7428](https://github.com/dbt-labs/dbt-core/issues/7428), [#8223](https://github.com/dbt-labs/dbt-core/issues/8223))
## dbt-core 1.6.1 - August 23, 2023
### Fixes
@@ -16,6 +38,7 @@
- Fix: DbtInternalError after model that previously ref'd external model is deleted ([#8375](https://github.com/dbt-labs/dbt-core/issues/8375))
- Fix using list command with path selector and project-dir ([#8385](https://github.com/dbt-labs/dbt-core/issues/8385))
- Remedy performance regression by only writing run_results.json once. ([#8360](https://github.com/dbt-labs/dbt-core/issues/8360))
- Ensure parsing does not break when `window_groupings` is not specified for `non_additive_dimension` ([#8453](https://github.com/dbt-labs/dbt-core/issues/8453))
### Docs
@@ -30,8 +53,7 @@
- add tracking for plugin.get_nodes calls ([#8344](https://github.com/dbt-labs/dbt-core/issues/8344))
- add internal flag: --no-partial-parse-file-diff to inform whether to compute a file diff during partial parsing ([#8363](https://github.com/dbt-labs/dbt-core/issues/8363))
- Use python version 3.10.7 in Docker image. ([#8444](https://github.com/dbt-labs/dbt-core/issues/8444))
- Check for existing_relation immediately prior to renaming ([#7781](https://github.com/dbt-labs/dbt-core/issues/7781))
## dbt-core 1.6.0 - July 31, 2023

View File

@@ -330,6 +330,7 @@ def docs_serve(ctx, **kwargs):
@p.state
@p.defer_state
@p.deprecated_state
@p.compile_inject_ephemeral_ctes
@p.target
@p.target_path
@p.threads

View File

@@ -40,6 +40,14 @@ compile_docs = click.option(
default=True,
)
compile_inject_ephemeral_ctes = click.option(
"--inject-ephemeral-ctes/--no-inject-ephemeral-ctes",
envvar=None,
help="Internal flag controlling injection of referenced ephemeral models' CTEs during `compile`.",
hidden=True,
default=True,
)
config_dir = click.option(
"--config-dir",
envvar=None,

View File

@@ -9,10 +9,23 @@ from typing import Iterable, List, Dict, Union, Optional, Any
from dbt.exceptions import DbtRuntimeError
BOM = BOM_UTF8.decode("utf-8") # '\ufeff'
class Integer(agate.data_types.DataType):
def cast(self, d):
# by default agate will cast none as a Number
# but we need to cast it as an Integer to preserve
# the type when merging and unioning tables
if type(d) == int or d is None:
return d
else:
raise agate.exceptions.CastError('Can not parse value "%s" as Integer.' % d)
def jsonify(self, d):
return d
class Number(agate.data_types.Number):
# undo the change in https://github.com/wireservice/agate/pull/733
# i.e. do not cast True and False to numeric 1 and 0
@@ -48,6 +61,7 @@ def build_type_tester(
) -> agate.TypeTester:
types = [
Integer(null_values=("null", "")),
Number(null_values=("null", "")),
agate.data_types.Date(null_values=("null", ""), date_format="%Y-%m-%d"),
agate.data_types.DateTime(null_values=("null", ""), datetime_format="%Y-%m-%d %H:%M:%S"),
@@ -166,6 +180,13 @@ class ColumnTypeBuilder(Dict[str, NullableAgateType]):
elif isinstance(value, _NullMarker):
# use the existing value
return
# when one table column is Number while another is Integer, force the column to Number on merge
elif isinstance(value, Integer) and isinstance(existing_type, agate.data_types.Number):
# use the existing value
return
elif isinstance(existing_type, Integer) and isinstance(value, agate.data_types.Number):
# overwrite
super().__setitem__(key, value)
elif not isinstance(value, type(existing_type)):
# actual type mismatch!
raise DbtRuntimeError(
@@ -177,8 +198,9 @@ class ColumnTypeBuilder(Dict[str, NullableAgateType]):
result: Dict[str, agate.data_types.DataType] = {}
for key, value in self.items():
if isinstance(value, _NullMarker):
# this is what agate would do.
result[key] = agate.data_types.Number()
# agate would make it a Number but we'll make it Integer so that if this column
# gets merged with another Integer column, it won't get forced to a Number
result[key] = Integer()
else:
result[key] = value
return result
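To make the merge behaviour above concrete, here is a minimal sketch (an illustrative example, not part of the diff, using the table_from_rows and merge_tables helpers exercised by the unit tests later in this compare): an all-integer column keeps the new Integer type, while merging it with a column typed as Number widens the result back to Number.

    from dbt.clients import agate_helper

    # column "a" holds only ints in t1, but decimals in t2
    t1 = agate_helper.table_from_rows([(1,), (2,)], ("a",))
    t2 = agate_helper.table_from_rows([(1.5,), (2.5,)], ("a",))

    print(type(agate_helper.merge_tables([t1, t1]).column_types[0]))  # Integer
    print(type(agate_helper.merge_tables([t1, t2]).column_types[0]))  # Number: Integer is widened on merge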

View File

@@ -320,6 +320,10 @@ class Compiler:
if model.compiled_code is None:
raise DbtRuntimeError("Cannot inject ctes into an uncompiled node", model)
# tech debt: safe flag/arg access (#6259)
if not getattr(self.config.args, "inject_ephemeral_ctes", True):
return (model, [])
# extra_ctes_injected flag says that we've already recursively injected the ctes
if model.extra_ctes_injected:
return (model, model.extra_ctes)
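In practice the hidden flag is passed straight to the compile command (intended for tooling such as linters, per the option's help text), for example:

    dbt compile --select fct_eph_first --no-inject-ephemeral-ctes

which leaves __dbt__cte__-style references in the compiled SQL instead of inlining the ephemeral models' CTEs, as the functional test further down asserts.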

View File

@@ -428,6 +428,7 @@ class PartialProject(RenderComponents):
metrics: Dict[str, Any]
exposures: Dict[str, Any]
vars_value: VarProvider
dbt_cloud: Dict[str, Any]
dispatch = cfg.dispatch
models = cfg.models
@@ -459,6 +460,8 @@ class PartialProject(RenderComponents):
manifest_selectors = SelectorDict.parse_from_selectors_list(
rendered.selectors_dict["selectors"]
)
dbt_cloud = cfg.dbt_cloud
project = Project(
project_name=name,
version=version,
@@ -498,6 +501,7 @@ class PartialProject(RenderComponents):
unrendered=unrendered,
project_env_vars=project_env_vars,
restrict_access=cfg.restrict_access,
dbt_cloud=dbt_cloud,
)
# sanity check - this means an internal issue
project.validate()
@@ -609,6 +613,7 @@ class Project:
unrendered: RenderComponents
project_env_vars: Dict[str, Any]
restrict_access: bool
dbt_cloud: Dict[str, Any]
@property
def all_source_paths(self) -> List[str]:
@@ -678,6 +683,7 @@ class Project:
"require-dbt-version": [v.to_version_string() for v in self.dbt_version],
"config-version": self.config_version,
"restrict-access": self.restrict_access,
"dbt-cloud": self.dbt_cloud,
}
)
if self.query_comment:

View File

@@ -182,6 +182,7 @@ class RuntimeConfig(Project, Profile, AdapterRequiredConfig):
args=args,
cli_vars=cli_vars,
dependencies=dependencies,
dbt_cloud=project.dbt_cloud,
)
# Called by 'load_projects' in this class

View File

@@ -1510,7 +1510,15 @@ def get_manifest_schema_version(dct: dict) -> int:
schema_version = dct.get("metadata", {}).get("dbt_schema_version", None)
if not schema_version:
raise ValueError("Manifest doesn't have schema version")
return int(schema_version.split(".")[-2][-1])
# schema_version is in this format: https://schemas.getdbt.com/dbt/manifest/v10.json
# What the code below is doing:
# 1. Split on "/" v10.json
# 2. Split on "." v10
# 3. Skip first character 10
# 4. Convert to int
# TODO: If this gets more complicated, turn into a regex
return int(schema_version.split("/")[-1].split(".")[0][1:])
def _check_duplicates(value: BaseNode, src: Mapping[str, BaseNode]):
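A quick illustration of the new parsing, using the example URL from the comment above; note that the old one-character slice would have returned 0 for v10, which is the bug the backported #8544 fix addresses:

    schema_version = "https://schemas.getdbt.com/dbt/manifest/v10.json"
    last_segment = schema_version.split("/")[-1]   # "v10.json"
    stem = last_segment.split(".")[0]              # "v10"
    print(int(stem[1:]))                           # 10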

View File

@@ -163,14 +163,9 @@ class UnparsedVersion(dbtClassMixin):
def __lt__(self, other):
try:
v = type(other.v)(self.v)
return v < other.v
return float(self.v) < float(other.v)
except ValueError:
try:
other_v = type(self.v)(other.v)
return self.v < other_v
except ValueError:
return str(self.v) < str(other.v)
return str(self.v) < str(other.v)
@property
def include_exclude(self) -> dbt.helper_types.IncludeExclude:
@@ -689,7 +684,7 @@ class UnparsedEntity(dbtClassMixin):
class UnparsedNonAdditiveDimension(dbtClassMixin):
name: str
window_choice: str # AggregationType enum
window_groupings: List[str]
window_groupings: List[str] = field(default_factory=list)
@dataclass
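The effect of the rewritten __lt__ (mirrored by the parametrized unit test near the end of this compare): versions that both parse as floats compare numerically, everything else falls back to a string comparison, so the result no longer depends on operand order. A minimal check, assuming dbt-core is importable:

    from dbt.contracts.graph.unparsed import UnparsedVersion

    assert UnparsedVersion(2) < UnparsedVersion("12")        # numeric: 2.0 < 12.0
    assert not (UnparsedVersion("12") < UnparsedVersion(2))  # same answer with operands swapped
    assert UnparsedVersion(1) < UnparsedVersion("test")      # string fallback: "1" < "test"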

View File

@@ -224,6 +224,7 @@ class Project(HyphenatedDbtClassMixin, Replaceable):
packages: List[PackageSpec] = field(default_factory=list)
query_comment: Optional[Union[QueryComment, NoValue, str]] = field(default_factory=NoValue)
restrict_access: bool = False
dbt_cloud: Optional[Dict[str, Any]] = None
@classmethod
def validate(cls, data):
@@ -240,6 +241,10 @@ class Project(HyphenatedDbtClassMixin, Replaceable):
or not isinstance(entry["search_order"], list)
):
raise ValidationError(f"Invalid project dispatch config: {entry}")
if "dbt_cloud" in data and not isinstance(data["dbt_cloud"], dict):
raise ValidationError(
f"Invalid dbt_cloud config. Expected a 'dict' but got '{type(data['dbt_cloud'])}'"
)
@dataclass

View File

@@ -51,19 +51,15 @@ class LocalPinnedPackage(LocalPackageMixin, PinnedPackage):
src_path = self.resolve_path(project)
dest_path = self.get_installation_path(project, renderer)
can_create_symlink = system.supports_symlinks()
if system.path_exists(dest_path):
if not system.path_is_symlink(dest_path):
system.rmdir(dest_path)
else:
system.remove_file(dest_path)
if can_create_symlink:
try:
fire_event(DepsCreatingLocalSymlink())
system.make_symlink(src_path, dest_path)
else:
except OSError:
fire_event(DepsSymlinkNotAvailable())
shutil.copytree(src_path, dest_path)

View File

@@ -8,7 +8,7 @@ import logging
from logging.handlers import RotatingFileHandler
import threading
import traceback
from typing import Any, Callable, List, Optional, TextIO
from typing import Any, Callable, List, Optional, TextIO, Protocol
from uuid import uuid4
from dbt.events.format import timestamp_to_datetime_string
@@ -206,7 +206,7 @@ class EventManager:
for callback in self.callbacks:
callback(msg)
def add_logger(self, config: LoggerConfig):
def add_logger(self, config: LoggerConfig) -> None:
logger = (
_JsonLogger(self, config)
if config.line_format == LineFormat.Json
@@ -218,3 +218,25 @@ class EventManager:
def flush(self):
for logger in self.loggers:
logger.flush()
class IEventManager(Protocol):
callbacks: List[Callable[[EventMsg], None]]
invocation_id: str
def fire_event(self, e: BaseEvent, level: Optional[EventLevel] = None) -> None:
...
def add_logger(self, config: LoggerConfig) -> None:
...
class TestEventManager(IEventManager):
def __init__(self):
self.event_history = []
def fire_event(self, e: BaseEvent, level: Optional[EventLevel] = None) -> None:
self.event_history.append((e, level))
def add_logger(self, config: LoggerConfig) -> None:
raise NotImplementedError()
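TestEventManager and ctx_set_event_manager (added in the next file) let a test capture fired events in memory instead of logging them. A rough usage sketch along the lines of the test_single_run_error test further down; the Note event and its msg field are assumed here purely as a convenient event to fire:

    from dbt.events.eventmgr import EventManager, TestEventManager
    from dbt.events.functions import ctx_set_event_manager, fire_event
    from dbt.events.types import Note

    mgr = TestEventManager()
    ctx_set_event_manager(mgr)            # swap in the recording manager
    try:
        fire_event(Note(msg="hello"))
        assert mgr.event_history[0][0].msg == "hello"  # event_history stores (event, level) tuples
    finally:
        ctx_set_event_manager(EventManager())          # restore a real manager, as the test does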

View File

@@ -1,6 +1,6 @@
from dbt.constants import METADATA_ENV_PREFIX
from dbt.events.base_types import BaseEvent, EventLevel, EventMsg
from dbt.events.eventmgr import EventManager, LoggerConfig, LineFormat, NoFilter
from dbt.events.eventmgr import EventManager, LoggerConfig, LineFormat, NoFilter, IEventManager
from dbt.events.helpers import env_secrets, scrub_secrets
from dbt.events.types import Formatting, Note
from dbt.flags import get_flags, ENABLE_LEGACY_LOGGER
@@ -182,7 +182,7 @@ def cleanup_event_logger():
# Since dbt-rpc does not do its own log setup, and since some events can
# currently fire before logs can be configured by setup_event_logger(), we
# create a default configuration with default settings and no file output.
EVENT_MANAGER: EventManager = EventManager()
EVENT_MANAGER: IEventManager = EventManager()
EVENT_MANAGER.add_logger(
_get_logbook_log_config(False, True, False, False) # type: ignore
if ENABLE_LEGACY_LOGGER
@@ -295,3 +295,8 @@ def set_invocation_id() -> None:
# This is primarily for setting the invocation_id for separate
# commands in the dbt servers. It shouldn't be necessary for the CLI.
EVENT_MANAGER.invocation_id = str(uuid.uuid4())
def ctx_set_event_manager(event_manager: IEventManager):
global EVENT_MANAGER
EVENT_MANAGER = event_manager

View File

@@ -1650,6 +1650,7 @@ message LogSnapshotResult {
int32 total = 5;
float execution_time = 6;
map<string, string> cfg = 7;
string result_message = 8;
}
message LogSnapshotResultMsg {
@@ -2245,25 +2246,7 @@ message CheckNodeTestFailureMsg {
CheckNodeTestFailure data = 2;
}
// Z028
message FirstRunResultError {
string msg = 1;
}
message FirstRunResultErrorMsg {
EventInfo info = 1;
FirstRunResultError data = 2;
}
// Z029
message AfterFirstRunResultError {
string msg = 1;
}
message AfterFirstRunResultErrorMsg {
EventInfo info = 1;
AfterFirstRunResultError data = 2;
}
// Skipped Z028, Z029
// Z030
message EndOfRunSummary {

View File

@@ -1614,7 +1614,7 @@ class LogSnapshotResult(DynamicLevel):
status = red(self.status.upper())
else:
info = "OK snapshotted"
status = green(self.status)
status = green(self.result_message)
msg = "{info} {description}".format(info=info, description=self.description, **self.cfg)
return format_fancy_output_line(
@@ -2171,25 +2171,7 @@ class CheckNodeTestFailure(InfoLevel):
return f" See test failures:\n {border}\n {msg}\n {border}"
# FirstRunResultError and AfterFirstRunResultError are just splitting the message from the result
# object into multiple log lines
# TODO: is this reallly needed? See printer.py
class FirstRunResultError(ErrorLevel):
def code(self):
return "Z028"
def message(self) -> str:
return yellow(self.msg)
class AfterFirstRunResultError(ErrorLevel):
def code(self):
return "Z029"
def message(self) -> str:
return self.msg
# Skipped Z028, Z029
class EndOfRunSummary(InfoLevel):

File diff suppressed because one or more lines are too long

View File

@@ -33,7 +33,12 @@
-- cleanup
{% if existing_relation is not none %}
{{ adapter.rename_relation(existing_relation, backup_relation) }}
/* Do the equivalent of rename_if_exists. 'existing_relation' could have been dropped
since the variable was first set. */
{% set existing_relation = load_cached_relation(existing_relation) %}
{% if existing_relation is not none %}
{{ adapter.rename_relation(existing_relation, backup_relation) }}
{% endif %}
{% endif %}
{{ adapter.rename_relation(intermediate_relation, target_relation) }}

View File

@@ -45,7 +45,12 @@
-- cleanup
-- move the existing view out of the way
{% if existing_relation is not none %}
{{ adapter.rename_relation(existing_relation, backup_relation) }}
/* Do the equivalent of rename_if_exists. 'existing_relation' could have been dropped
since the variable was first set. */
{% set existing_relation = load_cached_relation(existing_relation) %}
{% if existing_relation is not none %}
{{ adapter.rename_relation(existing_relation, backup_relation) }}
{% endif %}
{% endif %}
{{ adapter.rename_relation(intermediate_relation, target_relation) }}

View File

@@ -557,7 +557,7 @@ class ManifestLoader:
)
# parent and child maps will be rebuilt by write_manifest
if not skip_parsing:
if not skip_parsing or external_nodes_modified:
# write out the fully parsed manifest
self.write_manifest_for_partial_parse()
@@ -755,10 +755,13 @@ class ManifestLoader:
def inject_external_nodes(self) -> bool:
# Remove previously existing external nodes since we are regenerating them
manifest_nodes_modified = False
# Remove all dependent nodes before removing referencing nodes
for unique_id in self.manifest.external_node_unique_ids:
self.manifest.nodes.pop(unique_id)
remove_dependent_project_references(self.manifest, unique_id)
manifest_nodes_modified = True
for unique_id in self.manifest.external_node_unique_ids:
# remove external nodes from manifest only after dependent project references safely removed
self.manifest.nodes.pop(unique_id)
# Inject any newly-available external nodes
pm = plugins.get_plugin_manager(self.root_project.project_name)

View File

@@ -233,7 +233,7 @@ class SchemaGenericTestParser(SimpleParser):
attached_node = None # type: Optional[Union[ManifestNode, GraphMemberNode]]
if not isinstance(target, UnpatchedSourceDefinition):
attached_node_unique_id = self.manifest.ref_lookup.get_unique_id(
target.name, None, version
target.name, target.package_name, version
)
if attached_node_unique_id:
attached_node = self.manifest.nodes[attached_node_unique_id]

View File

@@ -693,7 +693,7 @@ class ModelPatchParser(NodePatchParser[UnparsedModelUpdate]):
)
# ref lookup without version - version is not set yet
versioned_model_unique_id = self.manifest.ref_lookup.get_unique_id(
versioned_model_name, None, None
versioned_model_name, target.package_name, None
)
versioned_model_node = None
@@ -702,7 +702,7 @@ class ModelPatchParser(NodePatchParser[UnparsedModelUpdate]):
# If this is the latest version, it's allowed to define itself in a model file name that doesn't have a suffix
if versioned_model_unique_id is None and unparsed_version.v == latest_version:
versioned_model_unique_id = self.manifest.ref_lookup.get_unique_id(
block.name, None, None
block.name, target.package_name, None
)
if versioned_model_unique_id is None:

View File

@@ -14,8 +14,6 @@ from dbt.events.types import (
RunResultErrorNoMessage,
SQLCompiledPath,
CheckNodeTestFailure,
FirstRunResultError,
AfterFirstRunResultError,
EndOfRunSummary,
)
@@ -118,15 +116,7 @@ def print_run_result_error(result, newline: bool = True, is_warning: bool = Fals
fire_event(CheckNodeTestFailure(relation_name=result.node.relation_name))
elif result.message is not None:
first = True
for line in result.message.split("\n"):
# TODO: why do we format like this? Is there a reason this needs to
# be split instead of sending it as a single log line?
if first:
fire_event(FirstRunResultError(msg=line))
first = False
else:
fire_event(AfterFirstRunResultError(msg=line))
fire_event(RunResultError(msg=result.message))
def print_run_end_messages(results, keyboard_interrupt: bool = False) -> None:

View File

@@ -27,6 +27,7 @@ class SnapshotRunner(ModelRunner):
total=self.num_nodes,
execution_time=result.execution_time,
node_info=model.node_info,
result_message=result.message,
),
level=level,
)

View File

@@ -232,5 +232,5 @@ def _get_adapter_plugin_names() -> Iterator[str]:
yield plugin_name
__version__ = "1.6.1rc1"
__version__ = "1.6.2"
installed = get_installed_version()

View File

@@ -25,7 +25,7 @@ with open(os.path.join(this_directory, "README.md")) as f:
package_name = "dbt-core"
package_version = "1.6.1rc1"
package_version = "1.6.2"
description = """With dbt, data analysts and engineers can build analytics \
the way engineers build applications."""

View File

@@ -16,12 +16,12 @@ FROM --platform=$build_for python:3.10.7-slim-bullseye as base
# N.B. The refs updated automagically every release via bumpversion
# N.B. dbt-postgres is currently found in the core codebase so a value of dbt-core@<some_version> is correct
ARG dbt_core_ref=dbt-core@v1.6.1rc1
ARG dbt_postgres_ref=dbt-core@v1.6.1rc1
ARG dbt_redshift_ref=dbt-redshift@v1.6.1rc1
ARG dbt_bigquery_ref=dbt-bigquery@v1.6.1rc1
ARG dbt_snowflake_ref=dbt-snowflake@v1.6.1rc1
ARG dbt_spark_ref=dbt-spark@v1.6.1rc1
ARG dbt_core_ref=dbt-core@v1.6.2
ARG dbt_postgres_ref=dbt-core@v1.6.2
ARG dbt_redshift_ref=dbt-redshift@v1.6.2
ARG dbt_bigquery_ref=dbt-bigquery@v1.6.2
ARG dbt_snowflake_ref=dbt-snowflake@v1.6.2
ARG dbt_spark_ref=dbt-spark@v1.6.2
# special case args
ARG dbt_spark_version=all
ARG dbt_third_party

View File

@@ -1 +1 @@
version = "1.6.1rc1"
version = "1.6.2"

View File

@@ -32,7 +32,10 @@ class PostgresCredentials(Credentials):
sslkey: Optional[str] = None
sslrootcert: Optional[str] = None
application_name: Optional[str] = "dbt"
endpoint: Optional[str] = None
retries: int = 1
options: Optional[str] = None
# options: Dict[str, Any] = field(default_factory=dict)
_ALIASES = {"dbname": "database", "pass": "password"}
@@ -130,6 +133,12 @@ class PostgresConnectionManager(SQLConnectionManager):
if credentials.application_name:
kwargs["application_name"] = credentials.application_name
if credentials.options:
kwargs["options"] = credentials.options
if credentials.endpoint:
kwargs["endpoint"] = credentials.endpoint
def connect():
handle = psycopg2.connect(
dbname=credentials.database,

View File

@@ -41,7 +41,7 @@ def _dbt_psycopg2_name():
package_name = "dbt-postgres"
package_version = "1.6.1rc1"
package_version = "1.6.2"
description = """The postgres adapter plugin for dbt (data build tool)"""
this_directory = os.path.abspath(os.path.dirname(__file__))

View File

@@ -1 +1 @@
version = "1.6.1rc1"
version = "1.6.2"

View File

@@ -20,7 +20,7 @@ except ImportError:
package_name = "dbt-tests-adapter"
package_version = "1.6.1rc1"
package_version = "1.6.2"
description = """The dbt adapter tests for adapter plugins"""
this_directory = os.path.abspath(os.path.dirname(__file__))

File diff suppressed because one or more lines are too long

View File

@@ -1,9 +1,12 @@
import pytest
import json
import os
import shutil
from dbt.tests.util import run_dbt, get_manifest
import pytest
from dbt.contracts.graph.manifest import WritableManifest, get_manifest_schema_version
from dbt.exceptions import IncompatibleSchemaError
from dbt.contracts.graph.manifest import WritableManifest
from dbt.tests.util import run_dbt, get_manifest
# This project must have one of each kind of node type, plus disabled versions, for
# test coverage to be complete.
@@ -351,3 +354,13 @@ class TestPreviousVersionState:
# schema versions 1, 2, 3 are all not forward compatible
for schema_version in range(1, 4):
self.compare_previous_state(project, schema_version, False)
def test_get_manifest_schema_version(self, project):
for schema_version in range(1, self.CURRENT_EXPECTED_MANIFEST_VERSION):
manifest_path = os.path.join(
project.test_data_dir, f"state/v{schema_version}/manifest.json"
)
manifest = json.load(open(manifest_path))
manifest_version = get_manifest_schema_version(manifest)
assert manifest_version == schema_version

View File

@@ -1,5 +1,8 @@
import os
import pytest
from dbt.tests.util import run_dbt, update_config_file
import yaml
from pathlib import Path
from dbt.tests.util import run_dbt, update_config_file, write_config_file
from dbt.exceptions import ProjectContractError
@@ -62,3 +65,50 @@ class TestProjectYamlVersionInvalid:
assert "at path ['version']: 'invalid' is not valid under any of the given schemas" in str(
excinfo.value
)
class TestProjectDbtCloudConfig:
@pytest.fixture(scope="class")
def models(self):
return {"simple_model.sql": simple_model_sql, "simple_model.yml": simple_model_yml}
def test_dbt_cloud(self, project):
run_dbt(["parse"], expect_pass=True)
conf = yaml.safe_load(
Path(os.path.join(project.project_root, "dbt_project.yml")).read_text()
)
assert conf == {"name": "test", "profile": "test"}
config = {
"name": "test",
"profile": "test",
"dbt-cloud": {
"account_id": "123",
"application": "test",
"environment": "test",
"api_key": "test",
},
}
write_config_file(config, project.project_root, "dbt_project.yml")
run_dbt(["parse"], expect_pass=True)
conf = yaml.safe_load(
Path(os.path.join(project.project_root, "dbt_project.yml")).read_text()
)
assert conf == config
class TestProjectDbtCloudConfigString:
@pytest.fixture(scope="class")
def models(self):
return {"simple_model.sql": simple_model_sql, "simple_model.yml": simple_model_yml}
def test_dbt_cloud_invalid(self, project):
run_dbt()
config = {"name": "test", "profile": "test", "dbt-cloud": "Some string"}
update_config_file(config, "dbt_project.yml")
expected_err = (
"at path ['dbt-cloud']: 'Some string' is not valid under any of the given schemas"
)
with pytest.raises(ProjectContractError) as excinfo:
run_dbt()
assert expected_err in str(excinfo.value)

View File

@@ -43,6 +43,26 @@ seeds:
"""
local_dep_schema_yml = """
models:
- name: table_model
config:
alias: table_model_local_dep
columns:
- name: id
tests:
- unique
"""
local_dep_versions_schema_yml = """
models:
- name: table_model
config:
alias: table_model_local_dep
versions:
- v: 1
"""
class TestDuplicateModelEnabled:
@pytest.fixture(scope="class")
@@ -142,6 +162,72 @@ class TestDuplicateModelDisabledAcrossPackages:
assert model_id in manifest.disabled
class TestDuplicateModelNameWithTestAcrossPackages:
@pytest.fixture(scope="class", autouse=True)
def setUp(self, project_root):
local_dependency_files = {
"dbt_project.yml": dbt_project_yml,
"models": {"table_model.sql": enabled_model_sql, "schema.yml": local_dep_schema_yml},
}
write_project_files(project_root, "local_dependency", local_dependency_files)
@pytest.fixture(scope="class")
def models(self):
return {"table_model.sql": enabled_model_sql}
@pytest.fixture(scope="class")
def packages(self):
return {"packages": [{"local": "local_dependency"}]}
def test_duplicate_model_name_with_test_across_packages(self, project):
run_dbt(["deps"])
manifest = run_dbt(["parse"])
assert len(manifest.nodes) == 3
# model nodes with duplicate names exist
local_dep_model_node_id = "model.local_dep.table_model"
root_model_node_id = "model.test.table_model"
assert local_dep_model_node_id in manifest.nodes
assert root_model_node_id in manifest.nodes
# test node exists and is attached to correct node
test_node_id = "test.local_dep.unique_table_model_id.1da9e464d9"
assert test_node_id in manifest.nodes
assert manifest.nodes[test_node_id].attached_node == local_dep_model_node_id
class TestDuplicateModelNameWithVersionAcrossPackages:
@pytest.fixture(scope="class", autouse=True)
def setUp(self, project_root):
local_dependency_files = {
"dbt_project.yml": dbt_project_yml,
"models": {
"table_model.sql": enabled_model_sql,
"schema.yml": local_dep_versions_schema_yml,
},
}
write_project_files(project_root, "local_dependency", local_dependency_files)
@pytest.fixture(scope="class")
def models(self):
return {"table_model.sql": enabled_model_sql}
@pytest.fixture(scope="class")
def packages(self):
return {"packages": [{"local": "local_dependency"}]}
def test_duplicate_model_name_with_test_across_packages(self, project):
run_dbt(["deps"])
manifest = run_dbt(["parse"])
assert len(manifest.nodes) == 2
# model nodes with duplicate names exist
local_dep_model_node_id = "model.local_dep.table_model.v1"
root_model_node_id = "model.test.table_model"
assert local_dep_model_node_id in manifest.nodes
assert root_model_node_id in manifest.nodes
class TestModelTestOverlap:
@pytest.fixture(scope="class")
def models(self):

View File

@@ -1,3 +1,5 @@
from dbt.contracts.graph.nodes import ModelNode
from dbt.contracts.results import RunExecutionResult, RunResult
import pytest
from dbt.tests.util import run_dbt
@@ -53,6 +55,16 @@ models:
"""
SUPPRESSED_CTE_EXPECTED_OUTPUT = """-- fct_eph_first.sql
with int_eph_first as(
select * from __dbt__cte__int_eph_first
)
select * from int_eph_first"""
class TestEphemeralCompilation:
@pytest.fixture(scope="class")
def models(self):
@@ -67,5 +79,13 @@ class TestEphemeralCompilation:
results = run_dbt(["run"])
assert len(results) == 0
results = run_dbt(["test"])
len(results) == 4
def test__suppress_injected_ctes(self, project):
compile_output = run_dbt(
["compile", "--no-inject-ephemeral-ctes", "--select", "fct_eph_first"]
)
assert isinstance(compile_output, RunExecutionResult)
node_result = compile_output.results[0]
assert isinstance(node_result, RunResult)
node = node_result.node
assert isinstance(node, ModelNode)
assert node.compiled_code == SUPPRESSED_CTE_EXPECTED_OUTPUT

View File

@@ -760,20 +760,45 @@ class TestExternalModels:
version=1,
)
@pytest.fixture(scope="class")
def external_model_node_depends_on(self):
return ModelNodeArgs(
name="external_model_depends_on",
package_name="external",
identifier="test_identifier_depends_on",
schema="test_schema",
depends_on_nodes=["model.external.external_model_depends_on_parent"],
)
@pytest.fixture(scope="class")
def external_model_node_depends_on_parent(self):
return ModelNodeArgs(
name="external_model_depends_on_parent",
package_name="external",
identifier="test_identifier_depends_on_parent",
schema="test_schema",
)
@pytest.fixture(scope="class")
def models(self):
return {"model_one.sql": model_one_sql}
@mock.patch("dbt.plugins.get_plugin_manager")
def test_pp_external_models(
self, get_plugin_manager, project, external_model_node, external_model_node_versioned
self,
get_plugin_manager,
project,
external_model_node,
external_model_node_versioned,
external_model_node_depends_on,
external_model_node_depends_on_parent,
):
# initial plugin - one external model
external_nodes = PluginNodes()
external_nodes.add_model(external_model_node)
get_plugin_manager.return_value.get_nodes.return_value = external_nodes
# initial run
# initial parse
manifest = run_dbt(["parse"])
assert len(manifest.nodes) == 2
assert set(manifest.nodes.keys()) == {
@@ -803,8 +828,21 @@ class TestExternalModels:
)
manifest = run_dbt(["--partial-parse", "parse"])
assert len(manifest.nodes) == 5
assert len(manifest.external_node_unique_ids) == 2
# remove a model file that depends on external model
rm_file(project.project_root, "models", "model_depends_on_external.sql")
manifest = run_dbt(["--partial-parse", "parse"])
assert len(manifest.nodes) == 4
# add an external node with depends on
external_nodes.add_model(external_model_node_depends_on)
external_nodes.add_model(external_model_node_depends_on_parent)
manifest = run_dbt(["--partial-parse", "parse"])
assert len(manifest.nodes) == 6
assert len(manifest.external_node_unique_ids) == 4
# skip files parsing - ensure no issues
run_dbt(["--partial-parse", "parse"])
assert len(manifest.nodes) == 6
assert len(manifest.external_node_unique_ids) == 4

View File

@@ -49,6 +49,12 @@ semantic_models:
agg_time_dimension: ds
agg_params:
percentile: 0.99
- name: test_non_additive
expr: txn_revenue
agg: sum
non_additive_dimension:
name: ds
window_choice: max
dimensions:
- name: ds
@@ -125,7 +131,7 @@ class TestSemanticModelParsing:
semantic_model.node_relation.relation_name
== f'"dbt"."{project.test_schema}"."fct_revenue"'
)
assert len(semantic_model.measures) == 5
assert len(semantic_model.measures) == 6
def test_semantic_model_error(self, project):
# Next, modify the default schema.yml to remove the semantic model.

View File

@@ -2,6 +2,31 @@ models__sample_model = """
select * from {{ ref('sample_seed') }}
"""
models__sample_number_model = """
select
cast(1.0 as int) as float_to_int_field,
3.0 as float_field,
4.3 as float_with_dec_field,
5 as int_field
"""
models__sample_number_model_with_nulls = """
select
cast(1.0 as int) as float_to_int_field,
3.0 as float_field,
4.3 as float_with_dec_field,
5 as int_field
union all
select
cast(null as int) as float_to_int_field,
cast(null as float) as float_field,
cast(null as float) as float_with_dec_field,
cast(null as int) as int_field
"""
models__second_model = """
select
sample_num as col_one,

View File

@@ -6,6 +6,8 @@ from tests.functional.show.fixtures import (
models__second_ephemeral_model,
seeds__sample_seed,
models__sample_model,
models__sample_number_model,
models__sample_number_model_with_nulls,
models__second_model,
models__ephemeral_model,
schema_yml,
@@ -14,11 +16,13 @@ from tests.functional.show.fixtures import (
)
class TestShow:
class ShowBase:
@pytest.fixture(scope="class")
def models(self):
return {
"sample_model.sql": models__sample_model,
"sample_number_model.sql": models__sample_number_model,
"sample_number_model_with_nulls.sql": models__sample_number_model_with_nulls,
"second_model.sql": models__second_model,
"ephemeral_model.sql": models__ephemeral_model,
"sql_header.sql": models__sql_header,
@@ -28,69 +32,122 @@ class TestShow:
def seeds(self):
return {"sample_seed.csv": seeds__sample_seed}
@pytest.fixture(scope="class", autouse=True)
def setup(self, project):
run_dbt(["seed"])
class TestShowNone(ShowBase):
def test_none(self, project):
with pytest.raises(
DbtRuntimeError, match="Either --select or --inline must be passed to show"
):
run_dbt(["seed"])
run_dbt(["show"])
class TestShowSelectText(ShowBase):
def test_select_model_text(self, project):
run_dbt(["build"])
(results, log_output) = run_dbt_and_capture(["show", "--select", "second_model"])
(_, log_output) = run_dbt_and_capture(["show", "--select", "second_model"])
assert "Previewing node 'sample_model'" not in log_output
assert "Previewing node 'second_model'" in log_output
assert "col_one" in log_output
assert "col_two" in log_output
assert "answer" in log_output
class TestShowMultiple(ShowBase):
def test_select_multiple_model_text(self, project):
run_dbt(["build"])
(results, log_output) = run_dbt_and_capture(
["show", "--select", "sample_model second_model"]
)
(_, log_output) = run_dbt_and_capture(["show", "--select", "sample_model second_model"])
assert "Previewing node 'sample_model'" in log_output
assert "sample_num" in log_output
assert "sample_bool" in log_output
class TestShowSingle(ShowBase):
def test_select_single_model_json(self, project):
run_dbt(["build"])
(results, log_output) = run_dbt_and_capture(
(_, log_output) = run_dbt_and_capture(
["show", "--select", "sample_model", "--output", "json"]
)
assert "Previewing node 'sample_model'" not in log_output
assert "sample_num" in log_output
assert "sample_bool" in log_output
class TestShowNumeric(ShowBase):
def test_numeric_values(self, project):
run_dbt(["build"])
(_, log_output) = run_dbt_and_capture(
["show", "--select", "sample_number_model", "--output", "json"]
)
# json log output needs the escapes removed for string matching
log_output = log_output.replace("\\", "")
assert "Previewing node 'sample_number_model'" not in log_output
assert '"float_to_int_field": 1.0' not in log_output
assert '"float_to_int_field": 1' in log_output
assert '"float_field": 3.0' in log_output
assert '"float_with_dec_field": 4.3' in log_output
assert '"int_field": 5' in log_output
assert '"int_field": 5.0' not in log_output
class TestShowNumericNulls(ShowBase):
def test_numeric_values_with_nulls(self, project):
run_dbt(["build"])
(_, log_output) = run_dbt_and_capture(
["show", "--select", "sample_number_model_with_nulls", "--output", "json"]
)
# json log output needs the escapes removed for string matching
log_output = log_output.replace("\\", "")
assert "Previewing node 'sample_number_model_with_nulls'" not in log_output
assert '"float_to_int_field": 1.0' not in log_output
assert '"float_to_int_field": 1' in log_output
assert '"float_field": 3.0' in log_output
assert '"float_with_dec_field": 4.3' in log_output
assert '"int_field": 5' in log_output
assert '"int_field": 5.0' not in log_output
class TestShowInline(ShowBase):
def test_inline_pass(self, project):
run_dbt(["build"])
(results, log_output) = run_dbt_and_capture(
(_, log_output) = run_dbt_and_capture(
["show", "--inline", "select * from {{ ref('sample_model') }}"]
)
assert "Previewing inline node" in log_output
assert "sample_num" in log_output
assert "sample_bool" in log_output
class TestShowInlineFail(ShowBase):
def test_inline_fail(self, project):
with pytest.raises(DbtException, match="Error parsing inline query"):
run_dbt(["show", "--inline", "select * from {{ ref('third_model') }}"])
class TestShowInlineFailDB(ShowBase):
def test_inline_fail_database_error(self, project):
with pytest.raises(DbtRuntimeError, match="Database Error"):
run_dbt(["show", "--inline", "slect asdlkjfsld;j"])
class TestShowEphemeral(ShowBase):
def test_ephemeral_model(self, project):
run_dbt(["build"])
(results, log_output) = run_dbt_and_capture(["show", "--select", "ephemeral_model"])
(_, log_output) = run_dbt_and_capture(["show", "--select", "ephemeral_model"])
assert "col_deci" in log_output
class TestShowSecondEphemeral(ShowBase):
def test_second_ephemeral_model(self, project):
run_dbt(["build"])
(results, log_output) = run_dbt_and_capture(
["show", "--inline", models__second_ephemeral_model]
)
(_, log_output) = run_dbt_and_capture(["show", "--inline", models__second_ephemeral_model])
assert "col_hundo" in log_output
class TestShowLimit(ShowBase):
@pytest.mark.parametrize(
"args,expected",
[
@@ -102,16 +159,20 @@ class TestShow:
def test_limit(self, project, args, expected):
run_dbt(["build"])
dbt_args = ["show", "--inline", models__second_ephemeral_model, *args]
results, log_output = run_dbt_and_capture(dbt_args)
results = run_dbt(dbt_args)
assert len(results.results[0].agate_table) == expected
class TestShowSeed(ShowBase):
def test_seed(self, project):
(results, log_output) = run_dbt_and_capture(["show", "--select", "sample_seed"])
(_, log_output) = run_dbt_and_capture(["show", "--select", "sample_seed"])
assert "Previewing node 'sample_seed'" in log_output
class TestShowSqlHeader(ShowBase):
def test_sql_header(self, project):
run_dbt(["build"])
(results, log_output) = run_dbt_and_capture(["show", "--select", "sql_header"])
(_, log_output) = run_dbt_and_capture(["show", "--select", "sql_header"])
assert "Asia/Kolkata" in log_output

View File

@@ -121,39 +121,64 @@ class TestAgateHelper(unittest.TestCase):
self.assertEqual(tbl[0][0], expected)
def test_merge_allnull(self):
t1 = agate.Table([(1, "a", None), (2, "b", None)], ("a", "b", "c"))
t2 = agate.Table([(3, "c", None), (4, "d", None)], ("a", "b", "c"))
t1 = agate_helper.table_from_rows([(1, "a", None), (2, "b", None)], ("a", "b", "c"))
t2 = agate_helper.table_from_rows([(3, "c", None), (4, "d", None)], ("a", "b", "c"))
result = agate_helper.merge_tables([t1, t2])
self.assertEqual(result.column_names, ("a", "b", "c"))
assert isinstance(result.column_types[0], agate.data_types.Number)
assert isinstance(result.column_types[0], agate_helper.Integer)
assert isinstance(result.column_types[1], agate.data_types.Text)
assert isinstance(result.column_types[2], agate.data_types.Number)
assert isinstance(result.column_types[2], agate_helper.Integer)
self.assertEqual(len(result), 4)
def test_merge_mixed(self):
t1 = agate.Table([(1, "a", None), (2, "b", None)], ("a", "b", "c"))
t2 = agate.Table([(3, "c", "dog"), (4, "d", "cat")], ("a", "b", "c"))
t3 = agate.Table([(3, "c", None), (4, "d", None)], ("a", "b", "c"))
t1 = agate_helper.table_from_rows(
[(1, "a", None, None), (2, "b", None, None)], ("a", "b", "c", "d")
)
t2 = agate_helper.table_from_rows(
[(3, "c", "dog", 1), (4, "d", "cat", 5)], ("a", "b", "c", "d")
)
t3 = agate_helper.table_from_rows(
[(3, "c", None, 1.5), (4, "d", None, 3.5)], ("a", "b", "c", "d")
)
result = agate_helper.merge_tables([t1, t2])
self.assertEqual(result.column_names, ("a", "b", "c"))
assert isinstance(result.column_types[0], agate.data_types.Number)
self.assertEqual(result.column_names, ("a", "b", "c", "d"))
assert isinstance(result.column_types[0], agate_helper.Integer)
assert isinstance(result.column_types[1], agate.data_types.Text)
assert isinstance(result.column_types[2], agate.data_types.Text)
assert isinstance(result.column_types[3], agate_helper.Integer)
self.assertEqual(len(result), 4)
result = agate_helper.merge_tables([t1, t3])
self.assertEqual(result.column_names, ("a", "b", "c", "d"))
assert isinstance(result.column_types[0], agate_helper.Integer)
assert isinstance(result.column_types[1], agate.data_types.Text)
assert isinstance(result.column_types[2], agate_helper.Integer)
assert isinstance(result.column_types[3], agate.data_types.Number)
self.assertEqual(len(result), 4)
result = agate_helper.merge_tables([t2, t3])
self.assertEqual(result.column_names, ("a", "b", "c"))
assert isinstance(result.column_types[0], agate.data_types.Number)
self.assertEqual(result.column_names, ("a", "b", "c", "d"))
assert isinstance(result.column_types[0], agate_helper.Integer)
assert isinstance(result.column_types[1], agate.data_types.Text)
assert isinstance(result.column_types[2], agate.data_types.Text)
assert isinstance(result.column_types[3], agate.data_types.Number)
self.assertEqual(len(result), 4)
result = agate_helper.merge_tables([t3, t2])
self.assertEqual(result.column_names, ("a", "b", "c", "d"))
assert isinstance(result.column_types[0], agate_helper.Integer)
assert isinstance(result.column_types[1], agate.data_types.Text)
assert isinstance(result.column_types[2], agate.data_types.Text)
assert isinstance(result.column_types[3], agate.data_types.Number)
self.assertEqual(len(result), 4)
result = agate_helper.merge_tables([t1, t2, t3])
self.assertEqual(result.column_names, ("a", "b", "c"))
assert isinstance(result.column_types[0], agate.data_types.Number)
self.assertEqual(result.column_names, ("a", "b", "c", "d"))
assert isinstance(result.column_types[0], agate_helper.Integer)
assert isinstance(result.column_types[1], agate.data_types.Text)
assert isinstance(result.column_types[2], agate.data_types.Text)
assert isinstance(result.column_types[3], agate.data_types.Number)
self.assertEqual(len(result), 6)
def test_nocast_string_types(self):
@@ -191,7 +216,7 @@ class TestAgateHelper(unittest.TestCase):
self.assertEqual(len(tbl), len(result_set))
assert isinstance(tbl.column_types[0], agate.data_types.Boolean)
assert isinstance(tbl.column_types[1], agate.data_types.Number)
assert isinstance(tbl.column_types[1], agate_helper.Integer)
expected = [
[True, Decimal(1)],

View File

@@ -1,5 +1,6 @@
import pickle
from datetime import timedelta
import pickle
import pytest
from dbt.contracts.graph.unparsed import (
UnparsedNode,
@@ -940,3 +941,25 @@ class TestUnparsedVersion(ContractTestCase):
version = self.get_ok_dict()
del version["v"]
self.assert_fails_validation(version)
@pytest.mark.parametrize(
"left,right,expected_lt",
[
# same types
(2, 12, True),
(12, 2, False),
("a", "b", True),
("b", "a", False),
# mismatched types - numeric
(2, 12.0, True),
(12.0, 2, False),
(2, "12", True),
("12", 2, False),
# mismatched types
(1, "test", True),
("test", 1, False),
],
)
def test_unparsed_version_lt(left, right, expected_lt):
assert (UnparsedVersion(left) < UnparsedVersion(right)) == expected_lt

View File

@@ -6,7 +6,7 @@ from unittest import mock
import dbt.deps
import dbt.exceptions
from dbt.deps.git import GitUnpinnedPackage
from dbt.deps.local import LocalUnpinnedPackage
from dbt.deps.local import LocalUnpinnedPackage, LocalPinnedPackage
from dbt.deps.tarball import TarballUnpinnedPackage
from dbt.deps.registry import RegistryUnpinnedPackage
from dbt.clients.registry import is_compatible_version
@@ -92,6 +92,21 @@ class TestGitPackage(unittest.TestCase):
self.assertEqual(a_pinned.source_type(), "git")
self.assertIs(a_pinned.warn_unpinned, True)
@mock.patch("shutil.copytree")
@mock.patch("dbt.deps.local.system.make_symlink")
@mock.patch("dbt.deps.local.LocalPinnedPackage.get_installation_path")
@mock.patch("dbt.deps.local.LocalPinnedPackage.resolve_path")
def test_deps_install(
self, mock_resolve_path, mock_get_installation_path, mock_symlink, mock_shutil
):
mock_resolve_path.return_value = "/tmp/source"
mock_get_installation_path.return_value = "/tmp/dest"
mock_symlink.side_effect = OSError("Install deps symlink error")
LocalPinnedPackage("local").install("dummy", "dummy")
self.assertEqual(mock_shutil.call_count, 1)
mock_shutil.assert_called_once_with("/tmp/source", "/tmp/dest")
def test_invalid(self):
with self.assertRaises(ValidationError):
GitPackage.validate(

View File

@@ -2,7 +2,7 @@ import pytest
import re
from typing import TypeVar
from dbt.contracts.results import TimingInfo
from dbt.contracts.results import TimingInfo, RunResult, RunStatus
from dbt.events import AdapterLogger, types
from dbt.events.base_types import (
BaseEvent,
@@ -14,11 +14,15 @@ from dbt.events.base_types import (
WarnLevel,
msg_from_base_event,
)
from dbt.events.functions import msg_to_dict, msg_to_json
from dbt.events.eventmgr import TestEventManager, EventManager
from dbt.events.functions import msg_to_dict, msg_to_json, ctx_set_event_manager
from dbt.events.helpers import get_json_string_utcnow
from dbt.events.types import RunResultError
from dbt.flags import set_from_args
from argparse import Namespace
from dbt.task.printer import print_run_result_error
set_from_args(Namespace(WARN_ERROR=False), None)
@@ -388,8 +392,6 @@ sample_values = [
types.RunResultErrorNoMessage(status=""),
types.SQLCompiledPath(path=""),
types.CheckNodeTestFailure(relation_name=""),
types.FirstRunResultError(msg=""),
types.AfterFirstRunResultError(msg=""),
types.EndOfRunSummary(num_errors=0, num_warnings=0, keyboard_interrupt=False),
types.LogSkipBecauseError(schema="", relation="", index=0, total=0),
types.EnsureGitInstalled(),
@@ -485,3 +487,34 @@ def test_bad_serialization():
str(excinfo.value)
== "[Note]: Unable to parse dict {'param_event_doesnt_have': 'This should break'}"
)
def test_single_run_error():
try:
# Add a recording event manager to the context, so we can test events.
event_mgr = TestEventManager()
ctx_set_event_manager(event_mgr)
error_result = RunResult(
status=RunStatus.Error,
timing=[],
thread_id="",
execution_time=0.0,
node=None,
adapter_response=dict(),
message="oh no!",
failures=[],
)
print_run_result_error(error_result)
events = [e for e in event_mgr.event_history if isinstance(e[0], RunResultError)]
assert len(events) == 1
assert events[0][0].msg == "oh no!"
finally:
# Set an empty event manager unconditionally on exit. This is an early
# attempt at unit testing events, and we need to think about how it
# could be done in a thread safe way in the long run.
ctx_set_event_manager(EventManager())

View File

@@ -638,6 +638,21 @@ def versioned_model_v3(seed):
)
@pytest.fixture
def versioned_model_v12_string(seed):
return make_model(
"pkg",
"versioned_model",
'select * from {{ ref("seed") }}',
config_kwargs={"materialized": "table"},
refs=[seed],
sources=[],
path="subdirectory/versioned_model_v12.sql",
version="12",
latest_version=2,
)
@pytest.fixture
def versioned_model_v4_nested_dir(seed):
return make_model(
@@ -732,6 +747,7 @@ def manifest(
versioned_model_v2,
versioned_model_v3,
versioned_model_v4_nested_dir,
versioned_model_v12_string,
ext_source_2,
ext_source_other,
ext_source_other_2,
@@ -760,6 +776,7 @@ def manifest(
versioned_model_v2,
versioned_model_v3,
versioned_model_v4_nested_dir,
versioned_model_v12_string,
ext_model,
table_id_unique,
table_id_not_null,
@@ -823,6 +840,7 @@ def test_select_fqn(manifest):
"versioned_model.v2",
"versioned_model.v3",
"versioned_model.v4",
"versioned_model.v12",
"table_model",
"table_model_py",
"table_model_csv",
@@ -840,6 +858,7 @@ def test_select_fqn(manifest):
"versioned_model.v2",
"versioned_model.v3",
"versioned_model.v4",
"versioned_model.v12",
}
assert search_manifest_using_method(manifest, method, "versioned_model.v1") == {
"versioned_model.v1"
@@ -1051,6 +1070,7 @@ def test_select_package(manifest):
"versioned_model.v2",
"versioned_model.v3",
"versioned_model.v4",
"versioned_model.v12",
"table_model",
"table_model_py",
"table_model_csv",
@@ -1103,6 +1123,7 @@ def test_select_config_materialized(manifest):
"versioned_model.v2",
"versioned_model.v3",
"versioned_model.v4",
"versioned_model.v12",
"mynamespace.union_model",
}
@@ -1189,6 +1210,7 @@ def test_select_version(manifest):
assert search_manifest_using_method(manifest, method, "prerelease") == {
"versioned_model.v3",
"versioned_model.v4",
"versioned_model.v12",
}
assert search_manifest_using_method(manifest, method, "none") == {
"table_model_py",

View File

@@ -612,7 +612,7 @@ class SchemaParserVersionedModels(SchemaParserTest):
def setUp(self):
super().setUp()
my_model_v1_node = MockNode(
package="root",
package="snowplow",
name="arbitrary_file_name",
config=mock.MagicMock(enabled=True),
refs=[],
@@ -621,7 +621,7 @@ class SchemaParserVersionedModels(SchemaParserTest):
file_id="snowplow://models/arbitrary_file_name.sql",
)
my_model_v2_node = MockNode(
package="root",
package="snowplow",
name="my_model_v2",
config=mock.MagicMock(enabled=True),
refs=[],