* add defer_to_manifest in before_run to fix faulty deferred `docs generate`
* add a changelog
* add declaration of defer_to_manifest to FreshnessTask and GraphRunnableTask
* fix: add defer_to_manifest method to ListTask
* Refactor list of YAML keys for hooks to late-render
* Add `pre_` and `post_hook` to list of late-rendered hooks
* Check for non-empty set intersection
Co-authored-by: Kshitij Aranke <kshitij.aranke@dbtlabs.com>
* Test functional synonymy of `*_hook` with `*-hook`
Test that `pre_hook`/`post_hook` are functionally synonymous with `pre-hook`/`post-hook` for model project config
* Undo bugfix to validate the new test fails
* Revert "Undo bugfix to validate the new test fails"
This reverts commit e83a2be2eb.
Co-authored-by: Kshitij Aranke <kshitij.aranke@dbtlabs.com>
* add meta attribute to nodeinfo for events
* also add meta to dataclass
* add to unit test to ensure meta is added
* adding functional test to check that meta is passed to nodeinfo during logging
* changelog
* remove unused import
* add tests with non-string keys
* renaming test dict keys
* add non-string value
* resolve failing test
* test additional non-string values
* fix flake8
* Stringify meta dict in node_info
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
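A minimal sketch of the stringification above, assuming the aim is a string-to-string map for the proto-backed `node_info` (helper name is hypothetical):

```python
def stringified_meta(meta: dict) -> dict:
    # node_info serializes meta as a string -> string map, so coerce
    # both keys and values to cover non-string entries from YAML.
    return {str(k): str(v) for k, v in meta.items()}

assert stringified_meta({1: True, "owner": None}) == {"1": "True", "owner": "None"}
```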
* convert the test and fix an error due to a dead code seed
* Get rid of old test
* Remove unfortunately added files. Don't use that *
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Update types.proto
* pre-commit passes
* Cleanup tests and tweak EventLevels
* Put node_info back on SQLCommit. Add "level" to fire_event function.
* use event.message() in warn_or_error
* Fix logging test
* Changie
* Fix a couple of unit tests
* import Protocol from typing_extensions for 3.7
* ✨ adding pre-commit install to make dev
* 🎨 updating format of Makefile and CONTRIBUTING.md
* 📝 adding changelog via changie new
* ✨ adding dev_req to Makefile + docs
* 🎨 remove dev_req from docs, dry makefile
* Align names of `.PHONY` targets with their associated rules
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Co-authored-by: Doug Beatty <doug.beatty@dbtlabs.com>
* starting to move jinja exceptions
* convert some exceptions
* add back old functions for backward compatibility
* organize
* more conversions
* more conversions
* add changelog
* split out CacheInconsistency
* more conversions
* convert even more
* convert parsingexceptions
* fix tests
* more conversions
* more conversions
* finish converting exception functions
* convert more tests
* standardize to msg
* remove some TODOs
* fix test param and check the rest
* add comment, move exceptions
* add types
* fix type errors
* fix type for adapter_response
* remove 0.13 version from message
* pass predicates to merge strategy
* postgres delete and insert
* merge with predicates
* update to use arbitrary list of predicates, not dictionaries, merge and delete
* changie
* add functional test to adapter zone
* comma in test config
* add test for incremental predicates delete and insert postgres
* update test structure for inheritance
* handle predicates config for backwards compatibility
* test for predicates keyword
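The strategies themselves are Jinja macros; as a rough Python mirror of the idea (names hypothetical), the arbitrary list of predicates is ANDed onto the usual unique-key match condition:

```python
def merge_on_clause(unique_key: str, predicates: list) -> str:
    # Start from the key equality, then AND in any user-supplied
    # predicates from the model config.
    conditions = [f"DBT_INTERNAL_SOURCE.{unique_key} = DBT_INTERNAL_DEST.{unique_key}"]
    conditions.extend(predicates or [])
    return " and ".join(conditions)

print(merge_on_clause("id", ["DBT_INTERNAL_DEST.loaded_at > '2020-01-01'"]))
```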
* Add generated CLI API docs
Co-authored-by: Colin <colin.rogers@dbtlabs.com>
Co-authored-by: Github Build Bot <buildbot@fishtownanalytics.com>
* Remove unneeded SQL compilation attributes from SeedNode
* Fix various places that referenced removed attributes
* Cleanup a few Unions
* More formatting in nodes.py
* Mypy passing. Untested.
* Unit tests working
* use "doc" in documentation unique_ids
* update some doc_ids
* Fix some artifact tests. Still need previous version.
* Update manifest/v8.json
* Move relation_names to parsing
* Fix a couple of tests
* Update some artifacts. snapshot_seed has wrong schema.
* Changie
* Tweak NodeType.Documentation
* Put store_failures property in the right place
* Fix setting relation_name
* update changie to require issue or pr, and allow multiple
* remove extraneous data from changelog files.
* allow for multiple PR/issues to be entered
* update contributing guide
* remove issue number from bot changelogs
* update format of PR
* fix dependency changelogs
* remove extra line
* remove extra lines, tweak contributor wording
* Update CONTRIBUTING.md
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
* Get running with Python 3.11
* More tests passing, mypy still unhappy
* Upgrade to 3.11, and bump mashumaro
* patch importlib.import_module last
* lambda: Policy() default_factory on include and quote policy
* Add changelog entry
* Put a lambda on it
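A minimal sketch of the `lambda: Policy()` fix, assuming dataclass-style policies (class names illustrative): `default_factory` needs a zero-argument callable, and deferring construction keeps instances from sharing one mutable default.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    database: bool = True
    schema: bool = True
    identifier: bool = True

@dataclass
class RelationDefaults:
    # The lambda defers Policy construction so each instance gets a
    # fresh object instead of a shared mutable default.
    include_policy: Policy = field(default_factory=lambda: Policy())
    quote_policy: Policy = field(default_factory=lambda: Policy())
```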
* Fix text formatting for log file
* Handle variant type return from e.log_level()
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Josh Taylor <joshuataylorx@gmail.com>
Co-authored-by: Michelle Ark <michelle.ark@dbtlabs.com>
* feat: add a list of default values to the ctx manager
* tests: dbt.config.get default values
* feat: validate the num of args in config.get
* feat: jinja template for dbt.config.get default values
* docs: changie yaml
* fix: typo in error message
Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
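A rough stand-in for the behavior being tested, assuming model Jinja like `{{ config.get("unique_key", default="id") }}` (Python sketch, names illustrative):

```python
class Config:
    def __init__(self, values):
        self._values = values

    def get(self, name, default=None):
        # Return the configured value when present, else the default;
        # the real context manager also validates the argument count.
        return self._values.get(name, default)

cfg = Config({"materialized": "table"})
assert cfg.get("materialized") == "table"
assert cfg.get("unique_key", default="id") == "id"
```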
* v0 - new dbt deps type: tarball url
in support of
https://github.com/dbt-labs/dbt-core/issues/4205
* flake8 fixes
* adding max size tarball condition
* clean up imports
* typing
* adding sha1 and subdirectory options; improve logging feedback
sha1: allow user to specify sha1 in packages.yaml, will only install if package matches
subdirectory: allow user to specify a subdirectory of the package in the tarfile, if the package has a non-standard structure (like with the git subdirectory option)
* simple tests added
* flake fixes
* changes to support tests; adding exceptions; fire_event logging
* new logging events
* tarball exceptions added
* build out tests
* removing in memory tarball test
* update type codes to M - Misc
* adding new events to test_events
* fix spacing for flake
* add retry download code - as used in registry calls
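A hedged sketch of that retry pattern (helper name, exception set, and backoff are illustrative, not dbt's exact implementation):

```python
import time

def download_with_retry(download, max_attempts=5, delay_s=1):
    # Retry transient network failures, as the registry client does;
    # re-raise once attempts are exhausted.
    for attempt in range(1, max_attempts + 1):
        try:
            return download()
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts:
                raise
            time.sleep(delay_s)
```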
* clean
* remove saving tar in memory inside tarfile object
will hit the URL multiple times instead
* remove duplicative code after refactor
* black updates
* black formatting
* black formatting
* refactor - no more in-memory tarfile - all as file operations now
- remove tarfile passing, always use tempfile instead
- reorganize system.* functions, removing duplicative code
- more notes on current flow and structure - especially the need for the pattern of 1) unpack, 2) scan for the package dir, 3) copy to destination.
- cleaning
* cleaning and sync to new tarball code
* cleaning and sync to new tarball code
* requested changes from PR
https://github.com/dbt-labs/dbt-core/pull/4689#discussion_r812970847
* reversions from revision 2
removing sha1 check to simplify/mirror hub install pattern
* simplify/mirror hub install pattern
to simplify/mirror hub install pattern
- removing sha1 check
- supply name/version to act as our 'metadata' source
* simplify/mirror hub install pattern
simplify with goal of mirroring hub install pattern
- supporting subfolders like git packages, and sha1 checks are removed
- existing code from RegistryPinnedPackage (install() and download_and_untar()) performs the operations
- RegistryPinnedPackage install() and download_and_untar() are not currently set up as functions that can be used across classes - this should be moved to dbt.deps.base, or to a dbt.deps.common file - need dbt Labs feedback on how to proceed (or leave as is)
* remove revisions, no longer doing package check
* slim down to basic tests
more complex features have been removed (sha1, subfolder) so testing is much simpler!
* fix naming to match hub's behavior
remove version from package folder name
* refactor install and download to upstream PinnedPackage class
I'm on the fence about whether this is the right approach, but it seems the most sensible after some thought
* Create Features-20221107-105018.yaml
* fix flake, black, mypy errors
* additional flake/black fixes
* Update .changes/unreleased/Features-20221107-105018.yaml
fix username on changelog
Co-authored-by: Emily Rockman <ebuschang@gmail.com>
* change to fstring
Co-authored-by: Emily Rockman <ebuschang@gmail.com>
* cleaning - remove comment
* remove comment/question for dbt team
* in support of issuecomment 1334055944
https://github.com/dbt-labs/dbt-core/pull/4689#issuecomment-1334055944
* in support of issuecomment 1334118433
https://github.com/dbt-labs/dbt-core/pull/4689#issuecomment-1334118433
* black fixes; remove debug bits
* remove `.format` & add 'tarball' as version
'tarball' as version so that the temp files format nicely:
[tempfile_location]/dbt_utils_2..tar.gz # old
vs
[tempfile_location]/dbt_utils_1.tarball.tar.gz # current
* port os.path refs in `PinnedPackage._install` to pathlib
* lowercase as per PR feedback
* update tests after removing version arg
goes along with 8787ba41af
Co-authored-by: Emily Rockman <ebuschang@gmail.com>
* removed Compiled versions of nodes
* Remove compiled fields from dictionary if not compiled
* check compiled is False instead of attribute existence in env_var processing
* Update artifacts test (CompiledSnapshotNode did not have SnapshotConfig)
* Changie
* more complicated 'compiling' check in env_var
* Update test_exit_codes.py
* CT-1405: Refactor event logging code
* CT-1405: Add changelog entry
* CT-1405: Add code to protect against using closed streams from past tests.
* CT-1405: Restore unit test which was only failing locally
* CT-1405: Document a hack with issue # to resolve it in the future
* CT-1405: Make black happy
* CT-1405: Get rid of confusing factory function and duplicated function
* CT-1405: Remove unused event from types.proto and auto-gen'd file
* Fix the partial parse path
Partial parse should use the project root or it does not resolve to the correct path.
E.g. `target-path: ../some/dir/target`, if not run from the root, creates an erroneous folder.
* Run pre-commit
* Changie
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
* reformatting of test after some spike investigation
* reformat code to pull tests back into base class definition, move a test to more appropriate spot
* Convert incremental schema tests.
* Drop the old test.
* Bad git add. My disappointment is immeasurable and my day has been ruined.
* Adjustments for flake8.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Convert old test.
Add documentation. Adapt and reenable previously skipped test.
* Convert test and adapt and comment for current standards.
* Remove old versions of tests.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Convert test 067. One bug outstanding.
* Test now working! Schema needed renaming to avoid 63 char max problems
* Remove old test.
* Add some docs and rewrite.
* Add exception for when audit tables' schema runs over the db limit.
* Code cleanup.
* Revert exception.
* Round out comments.
* Rename what shouldn't be a base class.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* BaseContext: expose md5 function in context
* BaseContext: add return value type
* Add changie entry
* rename "md5" to "local_md5"
* fix test_context.py
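The context member itself is small; a sketch of what the rename conveys, namely that the hash is computed locally in Python rather than in the warehouse:

```python
import hashlib

def local_md5(value: str) -> str:
    # Usable in Jinja as {{ local_md5("some_string") }}.
    return hashlib.md5(value.encode("utf-8")).hexdigest()

assert local_md5("hello") == "5d41402abc4b2a76b9719d911017c592"
```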
* init pr for dbt_debug test conversion
* removal of old test
* minor test format change
* add new Base class and Test classes
* reformatting test, new method for capsys and error message to check, TODO: fix badproject
* reformatting tests, ready for review
* checking yaml file, and small reformat
* modifying since update wasn't working in CI/CD
* Combine various print result log events with different levels
* Changie
* more merge cleanup
* Specify DynamicLevel for event classes that must specify level
* Initial structured logging changes
* remove "this" from core/dbt/events/functions.py
* CT-1047: Fix execution_time definitions to use float
* CT-1047: Revert unintended checking of changes to functions.py
* WIP
* first pass to resolve circular deps
* more circular dep resolution
* remove a bunch of duplication
* move message into log line
* update comments
* fix field that went missing during rebase
* remove double import
* remove some comments and extra code
* fix pre-commit
* rework deprecations
* WIP converting messages
* WIP converting messages
* remove stray comment
* WIP more message conversion
* WIP more message conversion
* tweak the messages
* convert last message
* rename
* remove warn_or_raise as never used
* add fake calls to all new events
* fix some tests
* put back deprecation
* restore deprecation fully
* fix unit test
* fix log levels
* remove some skipped ids
* fix macro log function
* fix how messages are built to match expected outcome
* fix expected test message
* small fixes from reviews
* fix conflict resolution in UI
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
Co-authored-by: Peter Allen Webb <peter.webb@dbtlabs.com>
* Convert test to functional set.
* Remove old statement tests from integration test set.
* Nix whitespace
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Create functors to initialize event types with str-type member attributes. Before this change, the spec of various classes expected the base_msg and msg params to be strs. This assumption did not always hold true. A post_init hook ensures the spec is obeyed.
* Add new changelog.
* Add msg type change functor to a few other events that could use it.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
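A minimal sketch of the pattern described above, assuming dataclass-based events (the class shown is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class SomeRunEvent:
    # Callers sometimes pass exceptions or other objects here, but the
    # spec requires str; __post_init__ coerces to keep it honest.
    msg: str = ""
    base_msg: str = ""

    def __post_init__(self):
        if not isinstance(self.msg, str):
            self.msg = str(self.msg)
        if not isinstance(self.base_msg, str):
            self.base_msg = str(self.base_msg)

assert SomeRunEvent(msg=ValueError("boom")).msg == "boom"
```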
* Updated string formatting on non-f-strings.
Found all cases of strings separated by white space on a single line and
removed white space separation. EX: "hello " "world" -> "hello world".
* add changelog entry
* CT-625: Fail with clear message for invalid materialized vals
* CT-625: Increase test coverage, run pre-commit checks
* CT-625: run black on problem file
* CT-625: Add changelog entry
* CT-625: Remove test that didn't make sense
* Migrate test
* Remove old integration test.
* Simplify object definitions since we enforce python 3
* Factor many fixtures into a file.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* init query_comment test conversion pr
* importing model and macro, changing to new project_config_update, tests passing locally for core
* delete old integration test
* trying to test against other adapters
* update to main
* file rename
* file rename
* import change
* move query_comment directory to functional/
* move test directory back to adapter zone
* update to main
* updating core test based on feedback from @gshank
* testing removing target checking
* edited comment to correctly specify that views are set, not tables
* updated init test to match starter project change
* added changelog
* update 3 other occurrences of the init test for text update
* clean up debugging
* reword some comments
* changelog
* add more tests
* move around the manifest.node
* fix typos
* all tests passing
* move logic for moving around nodes
* add tests
* more cleanup
* fix failing pp test
* remove comments
* add more tests, patch all disabled nodes
* fix test for windows
* fix node processing to not overwrite enabled nodes
* add checking disabled in pp, fix error msg
* stop deepcopying all nodes when processing
* update error message
* init pr for 026 test conversion
* removing old test; got all tests set up; need to find the best way to handle regex in the new test and decide what we actually want to do to check that we didn't run anything against it
* changes to test_alias_dupe_throws_exeption, passing locally now
* adding test cases for final test
* following the create-new-schema method, tests are passing; up for review for core code
* moving alias test to adapter zone
* adding Base Classes
* changing ref to fixtures
* add double check to test
* minor change to alt schema name formation, removal of unneeded setup fixture
* typo in model names
* update to main
* pull models/schemas/macros into a fixtures file
* Preliminary changes to keep compile from connecting to the warehouse for runtime calls
* Adds option to lib to skip connecting to warehouse for compile; adds prelim tests
* Removes unused imports
* Simplifies test and renames to SqlCompileRunnerNoIntrospection
* Updates name in tests
* Spacing
* Updates test to check for adapter connection call instead of compile and execute
* Removes commented line
* Fixes test names
* Updates plugin to postgres type as snowflake isn't available
* Fixes docstring
* Fixes formatting
* Moves conditional logic out of class
* Fixes formatting
* Removes commented line
* Moves import
* Unmoves import
* Updates changelog
* Adds further info to method docstring
* first pass
* add label and name validation
* changelog
* fix tests
* convert ParsingError to Deprecation
* fix bug where label did not end up in parsed node
* update deprecation msg
* ConfigSelectorMethod should check for bools
* Add changelog entry
* Add support for lists and test cases
* Typo and formatting in test
* pre-commit linting
* Method for capturing standard out during testing (rather than logs)
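A sketch of the capture idea with stdlib tools (the real helper, `run_dbt_and_capture_stdout` below, wraps dbt's test runner; this stand-in just redirects stdout around any callable):

```python
import contextlib
import io

def run_and_capture_stdout(fn, *args, **kwargs):
    # Capture what the callable prints to stdout, rather than reading
    # the structured log stream; returns (result, captured_text).
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        result = fn(*args, **kwargs)
    return result, buf.getvalue()

_, out = run_and_capture_stdout(print, "profiles.yml found")
assert out == "profiles.yml found\n"
```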
* Allow dbt exit code assertion to be optional
* Verify priority order to search for profiles.yml configuration
* Updates after pre-commit checks
* Test searching for profiles.yml within the dbt project directory before `~/.dbt/`
* Refactor `dbt debug` to move to the project directory prior to looking up profiles directory
* Search the current working directory for profiles.yml
* Changelog
* Formatting with Black
* Move `run_dbt_and_capture_stdout` into the test case
* Update CLI help text
* Unify separate DEFAULT_PROFILES_DIR definitions
* Remove unused PROFILE_DIR_MESSAGE
* Remove unused DEFAULT_PROFILES_DIR
* Use shared definition of DEFAULT_PROFILES_DIR
* Define global vs. local profiles location and dynamically determine the default
* Restore original
* Remove function for determining the default profiles directory
* init push for 021_test_concurrency conversion
* ref to self, delete old integration tests, core passing locally
* creating base class to send setup to snowflake
* making changes to store all setup in core, todo: remove util changes after 1050 is merged
* swap sql seeds to csv
* white space removal
* rewriting seed to see if it fixes issue in snowflake
* attempt to rewrite file for test in snowflake
* update to main
* remove unneeded variable to seeds
* remove unneeded snowflake specific code
* first pass adding disabled functionality to metrics and exposures
* first pass at getting metrics disabled
* add unsaved file
* fix up comments
* Delete tmp.csv
* fix test
* add exposure logic, fix merge from main
* change when nodes are added to manifest, finish tests
* add changelog
* removed unused code
* minor cleanup
* init file creation for test_ephemeral conversion
* creating base class to run seed through and pass along to classes to test against
* laid out basic flow of tests; need to finish by figuring out how to handle the assertTrue sections and fix the error that's occurring
* added creation and comparison of sql and expected result; seeing an issue with an extra appended test_ on some, and an issue with error handling regarding expect pass
* working on fixing view structure
* update to expected_sql file
* update to expected_sql file
* directory rename; close on all tests; need to fix the test_test_ name change for the first two tests and figure out why the new test reports error instead of skipped in status
* renamed expected_sql to include the test_test_ephemeral style name, organized how models are imported into test classes
* move ephemeral functional test to adapter zone
* trying to include the BaseEphemeralMulti class to send to snowflake
* trying to fix snowflake test
* trying to fix snowflake test
* creation of second Base class to feed into others for testing purposes
* found way to check type of warehouse to make data type change for snowflake
* move seed into fixture, to be able to import it from core for adapter tests
* convert to csv and get test passing in core
* remove snowflake specific stuff from util
* remove whitespace
* update to main
* Add structured logging test and provide CI env vars to integration conditionally.
* Add the crazy inline-if Make feature and axe unneeded variable
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Finish converting first test file.
* Finish test conversion.
* Remove old integration hook tests.
* Move location of schema.yml to models directory.
* fix snapshot delete test that was failing
* Add the extra env var check for our CI.
* Add changelog
* Remove naive json flag check and instead force all integration tests to check for environment variables using flag routine.
* Revise the changelog to be more of an explanation.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Add dbt Core roadmap as of August 2022
* Cody intro
* Florian intro
* Lint my markdown
* add blurb on 1.5+ for Python next steps
* Revert "add blurb on 1.5+ for Python next steps"
This reverts commit 1659a5a727.
* PR feedback, self review
Co-authored-by: Cody Peterson <cody.dkdc2@gmail.com>
Co-authored-by: Florian Eiden <florian.eiden@dbtlabs.com>
* Method for capturing standard out during testing (rather than logs)
* Allow dbt exit code assertion to be optional
* Verify priority order to search for profiles.yml configuration
* Updates after pre-commit checks
* Move `run_dbt_and_capture_stdout` into the test case
* Add supported languages to materializations
* Add changie entry
* Linting
* add more error checking and only get supported language for materialization macro, update schema
* fix test and add more check
Co-authored-by: Chenyu Li <chenyu.li@dbtlabs.com>
* First cut at checking version compat for hub pkgs
* Account for field rename
* Add changelog entry
* Update error message
* Fix unit test
* PR feedback
* Try fixing test
* Edit exception msg
* Expand unit test to include pkg prerelease
* Update core/dbt/deps/registry.py
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
Co-authored-by: Doug Beatty <44704949+dbeatty10@users.noreply.github.com>
* Change postgres name truncation logic to be overridable. Add exception with debugging instructions.
* Add changelog.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Only consider schema change when column cannot be expanded
* Add test for column shortening
* Add changelog entry
* Move test from integration to adapter tests
* Remove print statement
* add on_schema_change
* show reason for schema change failures
When the incremental model fails, I do not get the context I need to easily fix my discrepancy.
Adding more info
* Update on_schema_change.sql
Fix indentation
* Added changie changes
Added changie changes
* Update on_schema_change.sql
Trim whitespaces
* Update on_schema_change.sql
Log message text enhancement
* Pass patch_config_dict to build_config_dict when creating unrendered_config
* Add test case for unrendered_config
* Changie
* formatting, fix test
* Fix test so unrendered config includes docs config
* first pass
* tweaks
* convert to use dbt-docs links in contributors section
* fix eq check
* fix format of contributors' PRs
* update docs changelog to point back to dbt-docs
* update beta 1.3 docs changelog
* remove optional param
* make issue inclusion conditional on being filled
* add Optional node_color config in Docs dataclass
* Remove node_color from the original docs config
* Add docs config and input validation
* Handle when docs is both under docs and config.docs
* Add node_color to Docs
* Make docs a Dict to avoid parsing errors
* Make docs a dataclass instead of a Dict
* Fix error when using docs as dataclass
* Simplify generator for the default value
* skeleton for test fixtures
* bump manifest to v7
* + config hierarchy tests
* add show override tests
* update manifest
* Remove node_color from the original docs config
* Add node_color to Docs
* Make docs a Dict to avoid parsing errors
* Make docs a dataclass instead of a Dict
* Simplify generator for the default value
* + config hierarchy tests
* add show override tests
* Fix unit tests
* Add tests in case of incorrect input for node_color
* Rename tests and Fix typos
* Fix functional tests
* Fix issues with remote branch
* Add changie entry
* modify tests to meet standards (#5608)
Co-authored-by: Matt Winkler <matt.winkler@fishtownanalytics.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* Python model beta version with update to manifest that renames `raw_sql` and `compiled_sql` to `raw_code` and `compiled_code`
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Ian Knox <ian.knox@dbtlabs.com>
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* [CT-700] [Bug] Logging tons of asterisks when sensitive env vars are missing
* [CT-700][Bug] Added changelog entry
* Updated the changelog body message
* feat: Improve generic test UndefinedMacroException message
The error message rendered from the `UndefinedMacroException` when
raised by a TestBuilder is very vague as to where the problem is
and how to resolve it. This commit adds a basic amount of
information about the specific model and column that is
referencing an undefined macro.
Note: All custom macros referenced in a generic test config will
raise an UndefinedMacroException as of v0.20.0.
* feat: Bubble CompilationException into schemas.py
I realized that this exception information would be better if
CompilationExceptions included the file that raised the exception.
To that end, I created a new exception handler in `_parse_generic_test`
to report on CompilationExceptions raised during the parsing of
generic tests. Along the way I reformatted the message returned
from TestBuilder to play nicely with the existing formatting of
`_parse_generic_test`'s exception handling code.
* feat: Add tests to confirm CompileException
I've added a basic test to confirm that the appropriate
CompilationException is raised when a custom macro is referenced
in a generic test config.
* feat: Add changie entry and tweak error msg
* Update .changes/unreleased/Under the Hood-20220617-150744.yaml
Thanks to @emmyoop for the recommendation that this be listed as a Fix change instead of an "Under the Hood" change!
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* fix: Simplified Compilation Error message
I've simplified the error message raised during a Compilation Error
sourced from a test config. Mainly by way of removing tabs and newlines
where not required.
* fix: Convert format to fstring in schemas
This commit moves a format call to a multiline fstring in the
schemas.py file for CompilationExceptions.
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* add readme to .github
* more changes to readme
* improve docs
* more readme tweaks
* add more docs
* incorporate feedback
* removed section with no info
* first pass at snyk changelog entry
* refactor for single workflow for all bot PRs
* exclude snyk from contributors list
* point action to branch temporarily
* replace quotes
* point to released tag
* init push for ct-660 work
* changes to default versions of get_show_grant_sql and get_grant_sql
* completing initial default versions of all macros being called, for look-over and collaboration
* minor update to should_revoke
* post-pairing push up (still has log statements that we need to make sure we remove)
* minor spacing changes
* minor changes, and removal of logs so people can have a clean grab of the code
* minor changes to how get_revoke_sql works
* init attempt at applying apply_grants to all materializations
* name change from recipients -> grantee
* minor changes
* working on making a context to handle the diff gathering between grant_config and current_grants to see what needs to be revoked. I know that if we assign a role and a model becomes dependent on it, we can't drop the role. Still not seeing the diff appear in the log.
* removing logs from most materializations to better track diff of grants generation logs
* starting to build out postgres get_show_grant_sql; getting empty query errors that will hopefully clear up as we add the other postgres versions of macros, and that aren't a psycopg2 issue as indicated by searching
* 6/27 EOD update: looking into the diff_grants variable not getting passed into get_revoke_sql
* changes to loop cases
* changes after pairing meeting
* adding apply_grants to create_or_replace_view.sql
* models are building, but working through small issues around the revoke statement never being built
* postgres must-fixes from Jeremy's feedback
* postgres minor change to standardize_grants_dict
* updating after pairing with Doug and Jeremy, incorporating the new version of the should_revoke logic.
* adding ref of diff_of_two_dicts to base keys ref
* change of method type for standardize_grants_dict
* minor update trying to fix unit test
* changes based on morning feedback
* change log message in default_apply_grants macro
* CT-808 grant adapter tests (#5447)
* Create initial test for grants
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* rename grant[privilege] -> grant_config[privilege]
* postgres macro rename to copy_grants
* CT-808 more grant adapter tests (#5452)
* Add tests for invalid user and invalid privilege
* Add more grant tests
* Macro touch ups
* Many more tests
* Allow easily replacing privilege names
* Keep adding tests
* Refactor macros to return lists, fix test
* Code checks
* Keep tweaking tests
* Revert cool grantees join because Snowflake isn't happy
* Use Postgres/BQ as standard for standardize_grants_dict
* Code checks
* add missing replace
* small replace tweak, add additional dict diffs
* All tests passing on BQ
* Add type cast to test_snapshot_grants
* Refactor for DRYer apply_grants macros
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* update to main, create changelog, whitespace fixes
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* wip
* More support for ratio metrics
* Formatting and linting
* Fix unit tests
* Support disabling metrics
* mypy
* address all TODOs
* make mypy happy
* wip
* checkpoint
* refactor, remove ratio_terms
* flake8 and unit tests
* remove debugger
* quickfix for filters
* Experiment with functional testing for 'expression' metrics
* reformatting slightly
* make file and mypy fix
* remove config from metrics - wip
* add metrics back to context
* adding test changes
* fixing test metrics
* revert name audit
* pre-commit fixes
* add changelog
* Bumping manifest version to v6 (#5430)
* Bumping manifest version to v6
* Adding manifest file for tests
* Reverting unneeded changes
* Updating v6
* Updating test to add metrics field
* Adding changelog
* add v5 to backwards compatibility
* Clean up test_previous_version_state, update for v6 (#5440)
* Update test_previous_version_state for v6. Cleanup
* Regenerate, rm breakpoint
* Code checks
* Add assertion that will fail when we bump manifest version
* update tests to automatically test all previous versions
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Callum McCann <cmccann51@gmail.com>
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
* Improve pluralizations for Documentation and SqlOperation NodeTypes
Previously these were `docss` and `sqlss` which leaves something to be
desired.
* Add changie changelog entry for pluralization change
* Slightly simplify node type pluralization tests
* Update node type names for sql and docs so they match pluralizations
* deleting scaffold and .py file from scripts section of core as they are either deprecated or will live outside of core
* adding changelog
* removing files that shouldn't be there
* update changelog to have link to new scaffold
* re-adding the original script file but changing its output to be a print statement and leaving a comment that also points to the new scaffold
* sentence change
* Initialize lift + shift, dateadd + datediff
* Placeholder changelog for now
* Lift and shift cross-database macros, fixtures, and tests from dbt-utils
* Switch namespace from `dbt_utils` to `dbt`
* Remove unreferenced variable
* Remove conflicting definition of current_timestamp()
* Trim leading and trailing whitespace
* Apply Black formatting
* Remove unused import
* Remove references to other profiles
* Update .changes/unreleased/Features-20220518-114604.yaml
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Kick out the `type_*` tests
* Kick out the `type_*` macros
* Kick out the `current_timestamp` tests
* Kick out the `current_timestamp` macros
* Kick out the `current_timestamp` macros
* Kick out the `type_*` macros
* Use built-in adapter functionality for converting string datatypes
* Move comment above macro for postgres__any_value
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Adding scheduled CI testing Action
* Fixing malformed message
* Fixing messaging quotes
* Update to not fail fast
* Reordered branches
* Updating job name
* Removed PR trigger used for testing
* Fixing Windows color regression
* Cleaning up logic
* Consolidating logic to the logger
* Cleaning up vars
* Updating comment
* Removing unused import
* Fixing whitespace
* Adding changelog
* Handle 'grants' in NodeConfig, with correct merge behavior
* Fix a bunch of tests
* Add changie
* Actually add the test
* Change to default replace of grants with '+' extending them
* Additional tests, fix config_call_dict handling
* Tweak _add_config_call to remove unnecessary isinstance checks
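A rough Python sketch of the merge behavior (helper name hypothetical): a bare privilege key replaces inherited grants, while a `+`-prefixed key extends them.

```python
def merge_grants(inherited: dict, local: dict) -> dict:
    merged = dict(inherited)
    for key, grantees in local.items():
        if key.startswith("+"):
            # '+select' extends the inherited grantees for 'select'.
            base = merged.get(key[1:], [])
            merged[key[1:]] = base + [g for g in grantees if g not in base]
        else:
            # A bare key replaces the inherited grantees outright.
            merged[key] = grantees
    return merged

assert merge_grants({"select": ["a"]}, {"+select": ["b"]}) == {"select": ["a", "b"]}
```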
* Setting up an env var to use to override the tox python variable used for local dev
* Switch over to py-based tox envs instead of the py38 ones to be friendly to dbt-core devs who don't work at dbt Labs
* changie
* Truncate relation names when appending a suffix that will result in len > 63 characters using make_temp_relation and make_backup_relation macros
* Remove timestamp from suffix appended to backup relation
* Add changelog entry
* Implememt make_relation_with_suffix macro
* Add make_intermediate_relation macro that controls _tmp relation creation in table and view materializations to delineate from database- and schema-less behavior of relation returned from make_temp_relation
* Create backup_relation at top of materialization to use for identifier
* cleanup
* Add dstring arg to make_relation_with_suffix macro
* Only reference dstring in conditional of make_relation_with_suffix macro
* Create both a temp and intermediate relation, update preexisting_temp_relation to preexisting_intermediate_relation
* Migrate test updates to new test location
* Remove restored tmp.csv
* Revert "Remove restored tmp.csv"
This reverts commit 900c9dbcad9a1e6a5a6737c84004504bfdd9926f.
* Actually remove restored tmp.csv
* Creating ADR for versioning and branching strategy
* Fixing image link
* Grammar clean-up
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Grammar clean-up
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Update docs/arch/adr-003-versioning-branching-strategy.md
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Update docs/arch/adr-003-versioning-branching-strategy.md
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Update docs/arch/adr-003-versioning-branching-strategy.md
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Update docs/arch/adr-003-versioning-branching-strategy.md
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Update docs/arch/adr-003-versioning-branching-strategy.md
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Update docs/arch/adr-003-versioning-branching-strategy.md
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Update docs/arch/adr-003-versioning-branching-strategy.md
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Update docs/arch/adr-003-versioning-branching-strategy.md
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Updating Outside Scope section
* Changing from using type to stage
* Adding section on getting changes into certain releases
* Changed stages to phases
* Some wording updates
* New section for branching pros and cons
* Clarifying version bump statement
* A few minor comment fix ups
* Adding requirement to define released
* Updating to completed!
Co-authored-by: Stu Kilgore <stuart.kilgore@gmail.com>
* Fix macro modified from previous state with pkg
When iterating through nodes to check if any of its macro dependencies
have been modified, the state selector will first check all upstream
macro dependencies before returning a judgement.
* Add a new selector method for files and add it to the default method selection criteria if the given selector has a . in it but no path separators
* Add a file: selector method to the default selector methods because it will make Pedram happy
* changie stuff
* fix: Avoid access to profile when calling str(UnsetProfileConfig)
dbt.config.UnsetProfileConfig inherits __str__ from
dbt.config.Project. Moreover, UnsetProfileConfig also raises an
exception when attempting to access unset profile attributes. As
Project.__str__ ultimately calls to_project_config and accesses said
profile attributes, we override to_project_config in
UnsetProfileConfig to avoid accessing the attributes that raise an
exception.
This allows calling str(UnsetProfileConfig) and
repr(UnsetProfileConfig).
Basic unit testing is also included in commit.
* fix: Skip repr for profile fields in UnsetProfileConfig
* chore(changie): Add changie file
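A minimal sketch of the fix described above (names simplified, not dbt's exact classes): override the method that walks profile attributes so `str()`/`repr()` never touch them.

```python
class Project:
    def __init__(self, project_name):
        self.project_name = project_name

    def to_project_config(self):
        # Touches profile attributes, which the subclass never sets.
        return {"name": self.project_name, "profile": self.profile_name}

    def __str__(self):
        return str(self.to_project_config())


class UnsetProfileConfig(Project):
    def __getattr__(self, name):
        # Accessing unset profile attributes raises, as described above.
        raise AttributeError(f"profile attribute {name!r} is not set")

    def to_project_config(self):
        # Override to omit profile fields so str()/repr() stay safe.
        return {"name": self.project_name}


print(UnsetProfileConfig("jaffle_shop"))  # {'name': 'jaffle_shop'}
```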
* proposal for modification to drop_test_schema
* changelog
* remove hard coded run_dbt version and put back previous version of drop_test_schema, add commit to drop_schema
* When parsing 'all_sources' should be a list of unique dirs
* Changie
* Fix some unit tests of all_source_paths
* Convert 039_config_tests
* Remove old 039_config_tests
* Add test for duplicate directories in 'all_source_files'
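A sketch of the dedupe (helper hypothetical): combine the per-resource path lists while keeping first-seen order, which a bare set would scramble.

```python
def unique_source_paths(*path_lists):
    seen, out = set(), []
    for paths in path_lists:
        for path in paths:
            if path not in seen:
                seen.add(path)
                out.append(path)
    return out

assert unique_source_paths(["models"], ["models", "seeds"]) == ["models", "seeds"]
```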
* Convert existing metrics test
* add non-failing test for names with spaces
* Raise ParsingException if metrics name contains spaces
* Remove old metrics tests
* Fold so-called 'data' test into new framework with new vocabulary to match.
* Add missing files including changelog.
* Remove unneeded Changelog per team policy on test conversions.
* Refactor test code to better use our pytest framework. Strengthen assertions.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* First test completed.
* Convert and update more test cases.
* Complete test migration and remove old files.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* Use yaml renderer (with target context) for rendering selectors
* Changie
* Convert cli_vars tests
* Add test for var in profiles
* Add test for cli vars in packages
* Add test for vars in selectors
* Restore ability to configure and utilize `updated_at` for snapshots using the check_cols strategy
* Changelog entry
* Optional comparison of column names starting with `dbt_`
* Functional test for check cols snapshots using `updated_at`
* Comments to explain the test implementation
* Updating backport action to latest
* Updating to PR trigger with permissions instead
This is a better model for closing down all permissions and just granting what we actually want
* Updating IF when merged and backport label exists
* Changing to only trigger on label being added
* (finally) idiomatically rewrite a class of tests into the new framework.
* Get simple seed mostly working with design tweaks needed.
* Revamp tests to use more of the framework. Fix TODOs
* Complete migration of 005 and remove old files.
* Fix BOM test for Windows and add changelog entry
* Finalize tests in the adapter zone per conversation with Chenyu.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
I haven't added a message for stale PRs because they're likely to only impact the opening user (who I assume can reopen their own PR) and they're less of a problem. Happy to add that in too, and to take feedback on the specific phrasing here.
* Flexibilize MarkupSafe pinned version
The current `MarkupSafe` pinned version has been added in #4746 as a
temporary fix for #4745.
However, the current restrictive approach isn't compatible with other
libraries that could require an even older version of `MarkupSafe`, like
Airflow `2.2.2` [0], which requires `markupsafe>=1.1.1, <2.0`.
To avoid that issue, we can allow a greater range of supported
`MarkupSafe` versions. Considering the direct dependency `dbt-core` has
is `Jinja2==2.11.3`, we can use its pinning as the lower bound, which is
`MarkupSafe>=0.23` [1].
This fix should also be backported to `1.0.latest` for inclusion in
the next v1.0 patch.
[0] https://github.com/adamantike/airflow/blob/2.2.2/setup.cfg#L125
[1] https://github.com/pallets/jinja/blob/2.11.3/setup.py#L53
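Illustratively, the loosened pin in setup.py would read something like the following (the exact upper bound is an assumption here, not quoted from the PR):

```python
# Hypothetical excerpt of install_requires after the change: Jinja2
# 2.11.3 itself declares MarkupSafe>=0.23, so that becomes the lower
# bound instead of one pinned version.
install_requires = [
    "Jinja2==2.11.3",
    "MarkupSafe>=0.23,<2.1",
]
```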
* Add selected_resources in the Jinja context
* Add tests for the Jinja variable selected_resources
* Add Changie entry for the addition of selected_resources
* Move variable to the ProviderContext
* Move selected_resources from ModelContext to ProviderContext
* Update unit tests for context to cater for the new selected_resources variable
* Move tests to a Class where tests are run after a dbt build
* cache schema for selected models
* Create Features-20220316-003847.yaml
* rename flag, update postgres adapter
rename flag to cache_selected_only, update postgres adapter: function _relations_cache_for_schemas
* Update Features-20220316-003847.yaml
* added test for cache_selected_only flag
* formatted as per pre-commit
* Add breaking change note for adapter plugin maintainers
* Fix whitespace
* Add a test
Co-authored-by: karunpoudel-chr <poudel.karun@gmail.com>
Co-authored-by: karunpoudel-chr <62040859+karunpoudel@users.noreply.github.com>
* initial pass at source config test w/o overrides
* Update tests/functional/sources/test_source_configs.py
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Update tests/functional/sources/test_source_configs.py
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* tweaks from feedback
* clean up some test logic - add override tests
* add new fields to source config class
* fix odd formatting
* got a test working
* removed unused tests
* removed extra fields from SourceConfig class
* fixed next failing unit test
* adding back missing import
* first pass at adding source table configs
* updated remaining tests to pass
* remove source override tests
* add comment for config merging
* changelog
* remove old comments
* hacky fix for permission test
* remove unhelpful test
* adding back test file that was accidentally deleted
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
Co-authored-by: Chenyu Li <chenyu.li@dbtlabs.com>
* first draft
* working selector code
* remove debug print logs
* copy test template
* add todo
* smarter depends on graph searching notes
* add excluded source children nodes
* remove prints and clean up logger
* opinionated fresh node selection
* better if handling
* include logs with meaningful info
* add concurrent selectors note
* cleaner logging
* Revert "Merge branch 'main' of https://github.com/dbt-labs/dbt into feature/smart-source-freshness-runs"
This reverts commit 7fee4d44bf, reversing
changes made to 17c47ff42d.
* tidy up logs
* remove comments
* handle criterion that does not match nodes
* use a blank set instead
* Revert "Revert "Merge branch 'main' of https://github.com/dbt-labs/dbt into feature/smart-source-freshness-runs""
This reverts commit 71125167a1.
* make compatible with rc and new logger
* new log format
* new selector flag name
* clarify that status needs to be correct
* compare current and previous state
* correct import
* add current state
* remove print
* add todo
* fix error conditions
* clearer refresh language
* don't run wasteful logs
* remove for now
* cleaner syntax
* turn generator into set
* remove print
* add fresh selector
* only data bookmarks matter
* remove exclusion logic for status
* keep it DRY
* remove unused import
* dynamic project root
* dynamic cwd
* add TODO
* simple profiles_dir import
* add default target path
* headless path utils
* draft work
* add previous sources artifact read
* make PreviousState aware of t-2 sources
* make SourceFreshSelectorMethod aware of t-2 sources
* add archive_path() for t-2 sources to freshness.py
* clean up merged branches
* add to changelog
* rename file
* remove files
* remove archive path logic
* add in currentstate and previousstate defaults
* working version of source fresher
* syntax source_fresher: works
* fix quoting
* working version of target_path default
* None default to sources_current
* updated source selection semantics
* remove todo
* move to test_sources folder
* copy over baseline source freshness tests
* clean up
* remove test file
* update state with version checks
* fix flake tests
* add changelog
* fix name
* add base test template
* delegate tests
* add basic test to ensure nothing runs
* add another basic test
* fix test with copy state
* run error test
* run warn test
* run pass test
* error handling for runtime error in source freshness
* error handling for runtime error in source freshness
* add back fresher selector condition
* top level selector condition
* add runtime error test
* testing source fresher test selection methods
* fix formatting issues
* fix broken tests
* remove old comments
* fix regressions in other tests
* add Anais test cases
* result selector test case
Co-authored-by: Matt Winkler <matt.winkler@fishtownanalytics.com>
* init push up of converted unique_key tests
* testing cause of failure
* adding changelog entry
* moving non-basic test up one directory to be more broadly part of adapter zone
* minor changes to the bad_unique_key tests
* removed unused fixture
* moving tests to base class and inheriting in a simple class
* taking in chenyu's changes to fixtures
* remove older test_unique_key tests
* removed commented out code
* uncommenting seed_count
* v2 based on feedback for base version of testing, plus small removal of leftover breakpoint
* create incremental test directory in adapter zone
* commenting out TableComparison and trying to implement check_relations_equal instead
* remove unused commented out code
* changing cast for date to fix test to work on bigquery
* start of a README for the include directory
* minor updates
* minor updates after comments from gerda and emily
* trailing space issue?
* black formatting
* minor word change
* typo update
* minor fixes and changelog creation
* remove changelog
* catch None and malformed JSON responses
* add json.dumps for format
* format
* Cache registry request results. Avoid one request per version
* updated to be direct in type checking
* add changelog entry
* add back logic for none check
* PR feedback: memoize > global
* add checks for expected types and keys
* consolidated cache and retry logic
* minor cleanup for clarity/consistency
* add pr review suggestions
* update unit test
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
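A hedged sketch of the memoization (endpoint and name are illustrative): one registry request per package, reused across every candidate version.

```python
from functools import lru_cache

import requests

@lru_cache(maxsize=None)
def package_index(name: str) -> dict:
    # Memoized: resolving many versions of one package now makes a
    # single request instead of one request per version.
    resp = requests.get(f"https://hub.getdbt.com/api/v1/{name}.json", timeout=30)
    resp.raise_for_status()
    return resp.json()
```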
* convert 059 to new test framework
* remove replaced tests
* WIP, has pre-commit errors
* WIP, has pre-commit errors
* one failing test, most issues resolved
* fixed final test and cleaned up fixtures
* remove converted tests
* updated test to work on windows
* remove config version
* Reorder kinds in changie
* Reorder change categories for v1.1.0b1
* Update language for breaking change
* Contributors deserve an h3
* Make pre-commit happy? Update language
* Rm trailing whitespace
* pre-commit additions
* added changie changelog entry
* moving integration test over
* Pair programming
* removing ref to mapping as it seems to be an unnecessary check; unique_key tests pass locally for postgres
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* convert changelog to changie yaml files
* update contributor format and README instructions
* update action to rerun when labeled/unlabeled
* remove synchronize from action
* remove md file replaced by the yaml
* add synchronize and comment of what's happening
* tweak formatting
* FEAT: new columns in snapshots for adapters w/o bools
* trigger gha workflow
* using changie to make changelog
* updating to be on par with main
Co-authored-by: swanderz <swanson.anders@gmail.com>
* Change property file version exception to reflect current name and offer clearer guidance in comments.
* Add example in case of non-integer version tag just to drive the point home to readers.
* fix broken links, update GHA to not repost comment
* tweak GHA
* convert GHA used
* consolidate GHA
* fix PR numbers and pull comment as var
* fix name of workflow step
* changie merge to fix link at top of changelog
* add changelog yaml
* convert single test in 004
* WIP
* incremental conversion
* WIP test not running
* WIP
* convert test_missing_strategy, cross_schema_snapshot
* comment
* converting to class based test
* clean up
* WIP
* converted 2 more tests
* convert hard delete test
* fixing inconsistencies, adding comments
* more conversion
* implementing class scope changes
* clean up unused code
* remove old test, get all new ones running
* fix typos
* append file names with snapshot to reduce collision
* moved all fixtures into test files
* stop using tests as fixtures
* Only select target column for not_null test
* If storing failures include all columns in the select, if not, only select the column being tested
It's desirable for this test to include the full row output when using --store-failures. If the query result stored in the database contained just the null values of the null column, it can't do much to contextualize why those rows are null.
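The generic test is a Jinja macro; a Python mirror of the column-selection logic it describes (names illustrative):

```python
def not_null_test_sql(model: str, column_name: str, store_failures: bool) -> str:
    # With --store-failures, keep the whole row for context; otherwise
    # select only the column under test.
    column_list = "*" if store_failures else column_name
    return f"select {column_list} from {model} where {column_name} is null"

assert not_null_test_sql("orders", "order_id", False) == (
    "select order_id from orders where order_id is null"
)
```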
* Update changelog
* chore: update changelog using changie
* Revert "Update changelog"
This reverts commit 281d805959.
* initial setup to use changie
* added `dbt-core` to version line
* fix formatting
* rename to be more accurate
* remove extra file
* add stub for contributing section
* updated docs for contributing and changelog
* first pass at changelog check
* Fix workflow name
* comment on handling failure
* add automatic contributors section via footer
* removed unused initialization
* add script to automate entire changelog creation and handle prereleases
* stub out README
* add changelog entry!
* no longer need to add contributors ourselves
* fixed formatting and excluded core team
* fix typo and collapse if statement
* updated to reflect automatic pre-release handling
Removed custom script in favor of built in pre-release functionality in new version of changie.
* update contributing doc
* pass at GHA
* fix path
* all changed files
* more GHA work
* continued GHA work
* try another approach
* testing
* adding comment via GHA
* added uses for GHA
* more debugging
* fixed formatting
* another comment attempt
* remove read permission
* add label check
* fix quotes
* checking label logic
* test forcing failure
* remove extra script tag
* removed logic for having changelog
* Revert "removed logic for having changelog"
This reverts commit 490bda8256.
* remove unused workflow section
* update header and readme
* update with current version of changelog
* add step failure for missing changelog file
* fix typos and formatting
* small tweaks per feedback
* Update so changelog ends up only with current version, not past
* update changelog to recent contents
* added the rest of our releases to previous release list
* clarifying the readme
* updated to reflect current changelog state
* updated so only 1.1 changes are on main
* Fix macro modified from previous state
Previously, if the first node selected by state:modified had multiple
dependencies, the first of which had not been changed, the rest of the
macro dependencies of the node would not be checked for changes. This
commit fixes this behavior, so the remainder of the macro dependencies
of the node will be checked as well.
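A sketch of the corrected traversal (checksum dicts stand in for manifest state; names hypothetical):

```python
def any_upstream_macro_modified(macro_uids, current, previous) -> bool:
    # Check every macro dependency; the buggy version effectively
    # stopped after the first unmodified one.
    return any(current.get(uid) != previous.get(uid) for uid in macro_uids)

current = {"macro.pkg.a": "sha1", "macro.pkg.b": "sha2-new"}
previous = {"macro.pkg.a": "sha1", "macro.pkg.b": "sha2"}
assert any_upstream_macro_modified(["macro.pkg.a", "macro.pkg.b"], current, previous)
```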
* Convert tests in dbt-adapter-tests to use new pytest framework
* Filter out ResourceWarning for log file
* Move run_sql to dbt.tests.util, fix check_cols definition
* Convert jaffle_shop fixture and test to use classes
* Tweak run_sql methods, rename some adapter file pieces, add comment to dbt.tests.adapter.
* Add some more comments
* Create DictDefaultNone for to_target_dict in deps and clean commands
* Update test case to handle
* update CHANGELOG.md
* Switch to DictDefaultEmptyStr for to_target_dict
* Do not overwrite node.meta with empty patch.meta
* Restore config_call_dict in snapshot node transform
* Test for snapshot with schema file config
* Test for meta in both toplevel node and node config
* task init: support older click v7.0
`dbt init` uses click for interactively setting up a project. The
version constraints currently ask for click >= 8 but v7.0 has nearly the
same prompt/confirm/echo API. prompt added a feature that isn't used.
confirm has a behavior change if the default is None, but
confirm(..., default=None) is not used. Long story short, we can relax
the version constraint to allow installing with an older click library.
Ref: Issue #4566
* Update CHANGELOG.md
Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
* adapter compatibility messaging added.
* edited plugin version compatibility message
* edited test version for plugin compatibility
* compare using only major and minor
* Add checking PyPI and update changelog
Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
Co-authored-by: ChenyuLi <chenyu.li@dbtlabs.com>
* Add unique_key to NodeConfig
`unique_key` can be a string or a list.
* merge.sql update to work with unique_key as list
extend the functionality to support both single and multiple keys
Signed-off-by: triedandtested-dev (Bryan Dunkley) <bryan@triedandtested.dev>
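The real logic lives in the merge.sql Jinja; a Python mirror of the string-or-list handling (names illustrative):

```python
def unique_key_conditions(unique_key):
    # unique_key may be one column name or a list of them.
    keys = [unique_key] if isinstance(unique_key, str) else list(unique_key)
    return [f"DBT_INTERNAL_SOURCE.{k} = DBT_INTERNAL_DEST.{k}" for k in keys]

assert len(unique_key_conditions("id")) == 1
assert len(unique_key_conditions(["id", "valid_from"])) == 2
```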
* Updated test to include unique_key
Signed-off-by: triedandtested-dev (Bryan Dunkley) <bryan@triedandtested.dev>
* updated tests
Signed-off-by: triedandtested-dev (Bryan Dunkley) <bryan@triedandtested.dev>
* Fix unit and integration tests
* Update Changelog for 2479/4618
Co-authored-by: triedandtested-dev (Bryan Dunkley) <bryan@triedandtested.dev>
* new docker setup
* formatting
* Updated spark: support for extras
* Added third-party adapter support
* More selective lib installs for spark
* added docker to bumpversion
* Updated refs to be tag-based because bumpversion doesn't understand 'latest'
* Updated docs per PR feedback
* reducing RUNs and formatting/pip best practices changes
* Added multi-architecture support and small test script, updated docs
* typo
* Added a few more tests
* fixed tests output, clarified dbt-postgres special case-ness
* Fix merge conflicts
* formatting
* Updated spark: support for extras
* Added third-party adapter support
* More selective lib installs for spark
* added docker to bumpversion
* Updated refs to be tag-based because bumpversion doesn't understand 'latest'
* Updated docs per PR feedback
* reducing RUNs and formatting/pip best practices changes
* Added multi-architecture support and small test script, updated docs
* typo
* Added a few more tests
* fixed tests output, clarified dbt-postgres special case-ness
* changelog
* basic framework
* PR ready excepts docs
* PR feedback
* add retry logic, tests when extracting tarfile fails
* fixed bug with not catching empty responses
* specify compression type
* WIP test
* more testing work
* fixed up unit test
* add changelog
* Add more comments!
* clarify why we do the json() check for None
* Initial addition of CODEOWNERS file
* Proposed sub-team ownership (#4632)
* Updating for the events module to be both language and execution
* Adding more comment details
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Validate project names in interactive dbt init
- workflow: ask the user to provide a valid project name until they do.
- new integration tests
- supported scenarios:
- dbt init
- dbt init -s
- dbt init [name]
- dbt init [name] -s
* Update Changelog.md
* Add full URLs to CHANGELOG.md
Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
Co-authored-by: Chenyu Li <chenyulee777@gmail.com>
* scrub message of secrets
* update changelog
* use new scrubbing and scrub more places using git
* fixed small miss of string conv and missing raise
* fix bug with cloning error
* resolving message issues
* better, more specific scrubbing
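A minimal sketch of the scrubbing, assuming secrets are flagged by an env var prefix (the prefix shown is an assumption):

```python
import os

SECRET_ENV_PREFIX = "DBT_ENV_SECRET_"  # assumed prefix for this sketch

def scrub_secrets(msg: str) -> str:
    # Replace any secret env var values that leaked into a message,
    # e.g. git clone errors echoing a tokenized URL.
    for key, value in os.environ.items():
        if key.startswith(SECRET_ENV_PREFIX) and value:
            msg = msg.replace(value, "*****")
    return msg
```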
* [#4464] Check specifically for generic node type for some partial parsing actions
* Add check for existence of macro file in saved_files
* Check for existence of patch file in saved_files
* updating contributing.md based on suggestions from updates to adapter contributing files.
* removed section referring to non-postgres databases for core contributing.md
* making suggested changes to contributing.md based on kyle's initial lookover
* Update CONTRIBUTING.md
Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
Co-authored-by: Kyle Wigley <kyle@fishtownanalytics.com>
* add node type codes to more events + more hook log
* minor fixes
* renames started/finished keys
* made process more clear
* fixed errors
* Put back report_node_data in freshness.py
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
* Rm unused events, per #4104
* More structured ConcurrencyLine
* Replace \n prefixes with EmptyLine
* Reimplement ui.warning_tag to centralize logic
* Use warning_tag for deprecations too
* Rm more unused event types
* Exclude EmptyLine from json logs
* loglines are not always created by events (#4406)
Co-authored-by: Nathaniel May <nathaniel.may@fishtownanalytics.com>
* WIP
* fixed some merge issues
* WIP
* first pass with node_status logging
* add node details to one more
* another pass at node info
* fixed failures
* convert to classes
* more tweaks to basic implementation
* added in status, organized a bit
* saving broken state
* working state with lots of todos
* formatting
* add start/end timestamps
* adding node_status logging to more events
* adding node_status to more events
* Add RunningStatus and set in node
* Add NodeCompiling and NodeExecuting events, switch to _event_status dict
* add _event_status to SourceDefinition
* small tweaks to NodeInfo
* fixed misnamed attr
* small fix to validation
* rename logging timestamps to minimize name collision
* fixed flake failure
* move str formatting to events
* incorporate serialization changes
* add node_status to event_to_serializable_dict
* convert nodeInfo to dict with dataclass builtin
* Try to fix failing unit, flake8, mypy tests (#4362)
* fixed leftover merge conflict
Co-authored-by: Gerda Shank <gerda@dbtlabs.com>
* Log formatting from flags earlier
* WARN-level stdout for list task
* Readd tracking events to File
* PR feedback, annotate hacks
* Revert "PR feedback, annotate hacks"
This reverts commit 5508fa230b.
* This is maybe better
* Annotate main.py
* One more comment in base.py
* Update changelog
* pushing up to get eyes on from Nate
* updating to compare
* latest push
* finished test for duplicate codes with a lot of help from Nate
* resolving suggestions
* removed duplicated code in types.py, made minor changes to test_events.py
* added missing func call
* simplified data construction
* fixed missed scrubbing of secrets
* switched to vars()
* scrub entire log line, update how attributes get pulled
* get ahead of serialization errors
* store if data is serialized and modify values instead of a copy of values
* fixed unused import from merge
* start adding version logging, noticed some wrong stuff
* fix bad pid and ts
* remove level format on json logs
Co-authored-by: Emily Rockman <emily.rockman@dbtlabs.com>
* Address 3997. Test selection flag can be in profile.yml.
* Per Jerco's 4104 PR unresolved comments, unify i.s. predicate and add env var.
* Couple of flake8 touchups.
* Classier error handling using enum semantics.
* Cherry-pick in part of Gerda's commit to hopefully avoid a future merge conflict.
* Add 3997 to changelog.
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
* added struct logging to base
* fixed merge weirdness
* convert to use single type for integration tests
* converted to 3 reusable test types in sep module
* tweak message
* clean up and making test_types complete for future
* fix missed import
* add struct logging to compilation
* add struct logging to tracking
* add struct logging to utils
* add struct logging to exceptions
* fixed some misc errors
* updated to send raw ex, removed resulting circ dep
* add struct logging to docs serve
* remove merge fluff
* struct logging to seed command
* converting print to use structured logging
* more structured logging print conversion
* pulling apart formatting more
* added struct logging by dissecting printer.py
* add struct logging to runnable
* add struct logging to task init
* fixed formatting
* more formatting and moving things around
* convert generic_test to structured logging
* convert macros to structured logging
* add struct logging to most of manifest.py
* add struct logging to models.py
* added struct logging to partial.py
* finished conversion of manifest
* fixing errors
* fixed 1 todo and added another
* fixed bugs from merge
* update config use structured logging
* WIP
* minor cleanup
* fixed merge error
* added in ShowException
* added todo to remove defaults after dropping 3.6
* removed todo that is obsolete
* first cut at supporting metrics definitions
* teach dbt about metrics
* wip
* support partial parsing for metrics
* working on tests
* Fix some tests
* Add partial parsing metrics test
* Fix some more tests
* Update CHANGELOG.md
* Fix partial parsing yaml file to correct model syntax
Co-authored-by: Drew Banin <drew@fishtownanalytics.com>
* [#3885] Partially parse when environment variables in schema files
change
* Add documentation for test kwargs
* Add test and fix for schema configs with env_var
* Fix issue #4178
Allow retries when the answer is None
* Include fix for #4178
Allow retries when the answer from dbt deps is None
* Add link to the PR
* Update exception and shorten line size
* Add test when dbt deps returns None
* Raise error on pip install dbt
* Fix relative path logic
* Do not build dist for dbt
* Fix long descriptions
* Trigger code checks
* Using root readme more trouble than good
* only fail on install, not build
* Edit dist script. Avoid README duplication
* jk, be less clever
* Ignore 'dbt' source distribution when testing
* Add changelog entry
Co-authored-by: Kyle Wigley <kyle@dbtlabs.com>
* Parser no longer takes greedy. Accepts indirect selection, a bool.
* Remove references to greedy and supporting functions.
* 1. Set testing flag default to True. 2. Improve arg parsing.
* Update tests and add new case for when flag unset.
* Update names and styling to fit test requirements. Add default value for option.
* Correct several failing tests now that default behavior was flipped.
* Tests expect eager on by default.
* All but selector test passing.
* Get integration tests working, add them, and mix in selector syntax.
* Clean code and correct test.
* Add changelog entry
Co-authored-by: Mila Page <versusfacit@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Use common columns for incremental schema changes
* on_schema_change: append_new_columns should gracefully handle column removal
* review changes
* Lean approach for `process_schema_changes` to simplify
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
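As a sketch of the config this behavior applies to (model name hypothetical):
```yaml
models:
  - name: my_incremental_model   # hypothetical model
    config:
      materialized: incremental
      on_schema_change: append_new_columns   # other options: ignore (default), fail, sync_all_columns
```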
* Update profile_template.yml to use same syntax as target_options.yml
* Rename target_options to profile_template
* Update profile_template config spec
* Add project name to default search packages
We prefer macros in the project over the ones in the namespace (package)
* Add change to change log
* Use project_name instead of project
* Raise compilation error if no macros are found
* Update change log line
* Add test for package macro override
* Add JCZuurmond to contributor
* Fix typos
* Add test that should not over ride the package
* Add doc string to tests
* added support for dbs without boolean types
* catch errors a bit better
* moved changelog entry
* fixed tests and updated exception
* cleaned up bool check
* added positive test, removed self ref
* removed overlooked breakpoint
* first pass
* save progress - singular tests broken
* fixed to work with both generic and singular tests
* fixed formatting
* added a comment
* change to use /generic subfolder
* fix formatting issues
* fixed bug on code consolidation
* fixed typo
* added test for generic tests
* added changelog entry
* added logic to treat generic tests like macro tests
* add generic test to macro_edges
* fixed generic tests to match unique_ids
* fixed test
* Fix setup_db.sh by waiting for pg_isready success return. Fixes #3876
* restored noaccess and dbtMixedCase creation and updated changelog and contributing md files
* restored root auth commands
* restored creation of dbt schema, apparently this is needed even if docker compose also creates it...
* pr comments: avoid infinite loop and quote variables
* Update changelog
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Initial
* Further dev
* Make mypy happy
* Further dev
* Existing tests passing
* Functioning integration test
* Passing integration test
* Integration tests
* Add changelog entry
* Add integration test for init outside of project
* Fall back to target_options.yml when invalid profile_template.yml is provided
* Use built-in yaml with exception of in init
* Remove oyaml and fix tests
* Update dbt_project.yml in test comparison
* Create the profiles directory if it doesn't exist
* Use safe_load
* Update integration test
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* Add result: selection method
* make a copy for modified state test suite
* test case notes
* remove macro tests
* add a test setup command
* copy run results state
* passing test case, todos, split work
* clean up result:success test case
* start with build command and remove previous state where needed
* add error result selector tests for seed
* add another error seed test case
* remove todo
* passing build result:error tests
* single failure build test
* add passing test
* fix node assertions for tests
* fix tests
* draft fail+ tests
* add severity to test
* result:warn passing test
* result:warn+ passing tests
* add passing concurrent selector test
* add downstream flag
* add comment
* passing test
* fix test for dynamic node selection
* add build concurrent selector passing test
* add run test cases
* add integration tests for dbt test
* fix formatting
* rename test
* remove extra comments
* add extra newline
* add concurrent selector test / build cases
* clean up todos
* test all nodes
* DRY rebuild code
* test all nodes
* add TODO update assertion code
* cleaner assert code
* fix this test to have a fixed set
* more cleanup
* add changelog
* update concurrent selectors on dbt test
* remove todo
* Update changelog
* Apply suggestions from code review
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
* fix changelog
* fix Contributors headers
Co-authored-by: Jeremy Cohen <jeremy@dbtlabs.com>
Co-authored-by: Matt Winkler <matt.winkler@fishtownanalytics.com>
* WIP to replace source_path with model_path
* updated some test to point to new testing branches
https://github.com/dbt-labs/dbt-integration-project needs updates to get all tests working
* deprecate source-paths but do not remove fully
* added deprecation test for path deprecation
* replace data-paths with seed-paths: ['seeds']
updated tests to use default directory of 'seeds' instead of 'data'
* added test for exception when paths incorrectly defined
source-paths and data-paths have been deprecated in favor of model-paths and seed-paths. You can still use the deprecated keys but you cannot define both the deprecated and new keys since we wouldn't know how to handle it.
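A minimal `dbt_project.yml` sketch of the new keys:
```yaml
# replaces the deprecated source-paths / data-paths keys;
# defining both the old and new key for the same path is an error
model-paths: ["models"]
seed-paths: ["seeds"]
```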
* fixed test naming issue
* fix formatting issues, standardize names
* updated branches for dbt-integration-project
* updated changelog
* synced up rpc deletion that got messed up when merging
* changelog updates
* rm rpc-specific code, still more references to rpc to clean up (rpc_method, integration tests, etc.)
* rm more references to rpc
* rm tests against rpc server
* rm more rpc files
* more code!
* sorry!
* Change the data type of `sources` of `ParsedNodeDefaults`
* Add the statement about the change to `CHANGELOG.md`
* Add myself to the contributors
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* Update the default project paths to be `analysis-paths = ['analyses']` and `test-paths = ['tests']`. Also have starter project set `analysis-paths: ['analyses']` from now on. Fixed all associated tests.
* Change the default dbt packages installation directory to `dbt_packages` from `dbt_modules`. Also rename `module-path` to `packages-install-path` to allow default overrides of package install directory. Deprecation warning added for projects using the old `dbt_modules` name without specifying a `packages-install-path`.
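A sketch of the override in `dbt_project.yml` (directory name hypothetical):
```yaml
# defaults to dbt_packages (previously dbt_modules)
packages-install-path: custom_dbt_packages
```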
* fixed deprecation test bug
* Fixed wording on deprecation warning.
* enacted deprecation for dispatch-packages, cleaned up deprecations tests for unused macros/models. still need to clean up unused code.
* more work to catch packages use
* fixed tests for removing packages on adapter.dispatch.
* cleaned out folder for 012_deprecation_tests to remove unused models/data/macros
* removed obsolete code due to patching for packages arg in adapter.dispatch
* updated exception name
* added deprecation change to changelog.
* Add --greedy flag to subparser
* Add greedy flag, override default behaviour when parsing union
* Add greedy support to ls (I think?!)
* That was suspiciously easy
* Fix flake issues
* Try adding tests with greedy support
* Remove trailing whitespace
* Fix return type for select_nodes
* forgot to add greedy arg
* Remove incorrectly expected test
* Use named param for greedy
* Pull alert_unused_nodes out into its own function
* rename resources -> tests
* Add expand_selection explanation of --greedy flag
* Add greedy support to yaml selectors. Update warning, tests
* Fix failing tests
* Fix tests. Changelog
* Fix flake8
Co-authored-by: Joel Labes c/o Jeremy Cohen <jeremy@dbtlabs.com>
* group by column_name in accepted_values
Group by index is not ANSI SQL and not supported in every database engine (e.g. MS SQL). Use group by column_name in shared code.
* update changelog
* Changed how tables and views are generated to be able to use different options
* 3682 added unit tests
* 3682 had conflict in changelog and became a bit messy
* 3682 Tested to add default kms to dataset and accidently pushed the changes
* Add warning about new package name
* Update CHANGELOG.md
* make linter happy
* move warnings to deprecations
* Update core/dbt/clients/registry.py
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
* add comments for posterity
* Update core/dbt/deprecations.py
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* add deprecation test
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* Parametrize key selection for list task
* Remove trailing whitespace
* Add output_keys to RPC List Parameters
* Move up changelog entry, add contributor note
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* Add colourful count of pass/fail tests in dbt debug
* Remove number of checks, move error messages into shared list
* Fix flake issues
* Update CHANGELOG.md
* configurable postgres connect timeout
* changelog for #3582
* test default and change connect_timeout
* Move up contributor note in changelog
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* add bq alias for target_project and target_dataset
* Update CHANGELOG.md
add #3694 to changelog
* Update CHANGELOG.md
Be more specific about the change to bigquery synonym for schema only.
* Set integration test bigquery configs to use alias
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* fewer adapters will need to re-implement basic_load_csv_rows
* hack version
* reordering per convention
* make redundant basic_load_csv_rows
* for next version
* Update core/dbt/include/global_project/macros/materializations/seed/seed.sql
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
* Move up changelog entry
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* start blueprinting changes
* extend registry handler for latest package version
* conditional logging for latest version
* remove todo
* add conditional logging
* Upgrades is clearer
* update if elif conditions and log msg
* remove TODO
* fix flake8 errors
* blueprint unit tests
* conditions specific to hub registry
* 1 passing test for get latest version
* DRY method calls
* move version latest to hub only
* add a new line
* remove other draft tests
* update changelog
* update log language for clarity
* pass flake8
* fix changelog
* Update test/unit/test_deps.py
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* update changelog
* remove hub language
* sort for latest version and include prereleases
* fix flake8
* resolves another issue
* fix prerelease string formatting
* fix broken test
* update logging to past tense
* built-in version sorting
* handle prereleases for latest version checks
* get version latest unit test based on prerelease
* update unit test for sorting functionality
* consistent test names
* fix flake8
* clean up contributors list
* simplify if else logic
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* Change BigQuery copy materialization
Change BigQuery copy materialization macros to copy data from several sources into single target
* Change BigQuery copy materialization
Change BigQuery connections.py to copy data from several sources into single target via copy materialization
* Change BigQuery copy materialization
Test to check default value of `copy_materialization` if it is absent in config
* Change BigQuery copy materialization
Update changelog
* Update changelog
* Var renaming + test addition
* Changelog updated
* Changelog updated
* Fix test for copy table
* Update test_bigquery_adapter.py
* Update test_bigquery_adapter.py
* Update impl.py
* Update connections.py
* Update test_bigquery_adapter.py
* Update test_bigquery_adapter.py
* Update connections.py
* Align calls from mock and from adapter
* Split long code lines
* Create additional.sql
* Update copy_as_several_tables.sql
* Update schema.yml
* Update copy.sql
* Update connections.py
* Update test_bigquery_copy_models.py
* Add contributor
* test
* test test
* try this again
* test actions in same repo
* nvm revert
* formatting
* fix sh script for building dists
* fix windows build
* add concurrency
* fix random 'Cannot track experimental parser info when active user is None' error
* fix build workflow
* test slim ci
* has changes
* set up postgres for other OS
* update descriptions
* turn off python3.9 unit tests
* add changelog
* clean up todo
* Update .github/workflows/main.yml
* create actions for common code
* temp commit to test
* cosmetic updates
* dev review feedback
* updates
* fix build checks
* rm auto formatting changes
* review feedback: update order of script for setting up postgres on macos runner
* review feedback: add reasoning for not using secrets in workflow
* review feedback: rm unnecessary changes
* more review feedback
* test pull_request_target action
* fix path to cli tool
* split up lint and unit workflows for clear responsibilities
* rm `branches-ignore` filter from pull request trigger
* testing push event
* test dynamic matrix generation
* update label logic
* finishing touches
* align naming
* pass opts to pytest
* slim down push matrix, there are a lot of jobs
* test bump num of proc
* update matrix for all event triggers
* handle case when no changes require integration tests
* dev review feedback
* clean up and add branch name for testing
* Add test results publishing as artifact (#3794)
* Test failures file
* Add testing branch
* Adding upload steps
* Adding date to name
* Adding to integration
* Always upload artifacts
* Adding adapter type
* Always publish unit test results
* Adding comments
* rm unnecessary env var
* fix changelog
* update job name
* clean up python deps
Co-authored-by: leahwicz <60146280+leahwicz@users.noreply.github.com>
* add blueprints to resolve issue
* revert to previous version
* intentionally failing test
* add imports
* add validation in existing function
* add passing test for length validation
* add current sanitized label
* remove duplicate var
* Make logging output 2 lines
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* Raise RuntimeException to better handle error
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* update test
* fix flake8 errors
* update changelog
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
* table and view materializations should rename from old_relation to manage changes from view to table and the reverse
* edited changelog
* edited changelog
* Update CHANGELOG.md
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
Co-authored-by: Jeremy Cohen <jtcohen6@gmail.com>
* Rm Snowflake txnal logic. Explicit for DML
* Be less clever. Update create_or_replace_view()
* Seed DML as well
* Changelog entry
* Fix unit test
* One semicolon can change the world
* skip all tracking event testing
* Turn off tracking in tests that hits model parsing code path
fix other random test that fails because global tracking.current_user exists but is null
* pytest did not respect skip mark
* fix gh actions
* cli: add selection args for source freshness command
* rename command to `source freshness` and maintain alias to old command
* update and add tests for source freshness command and node selection
* update changelog, add comments
* fix formatting
* update changelog
We use [changie](https://changie.dev/) to automate `CHANGELOG` generation. For installation and format/command specifics, see the documentation.
### Quick Tour
- All new change entries get generated under `/.changes/unreleased` as a yaml file
- `header.tpl.md` contains the header contents for the `CHANGELOG` file
- `0.0.0.md` contains the contents of the footer for the entire CHANGELOG file. changie appears to be in the process of supporting a footer file the same way it supports a header file; switch to that when available. For now, the 0.0.0 in the file name forces it to the bottom of the changelog no matter what version we are releasing.
- `.changie.yaml` contains the fields in a change, the format of a single change, as well as the format of the Contributors section for each version.
### Workflow
#### Daily workflow
Almost every code change associated with an issue will require a `CHANGELOG` entry. After you have created the PR in GitHub, run `changie new` and follow the command prompts to generate a yaml file with your change details. This only needs to be done once per PR.
The `changie new` command ensures the correct file format and file name. There is a one-to-one mapping of issues to changes; multiple issues cannot be lumped into a single entry. If you make a mistake, the yaml file may be edited directly and saved, as long as the format is preserved.
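For illustration, a generated entry might look roughly like this (the exact fields are defined by `.changie.yaml`; all values here are hypothetical):
```yaml
kind: Fixes
body: Fix faulty deferred docs generate
time: 2023-01-30T12:00:00.000000-05:00
custom:
  Author: your_github_handle
  Issue: "1234"
```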
Note: If your PR has been cleared by the Core Team as not needing a changelog entry, the `Skip Changelog` label may be put on the PR to bypass the GitHub action that blocks PRs from being merged when they are missing a `CHANGELOG` entry.
#### Prerelease Workflow
During a prerelease, these commands batch up changes in `/.changes/unreleased` to be included in the prerelease and move those files to a directory named for the release version. The directory passed to `--move-dir` is created under `/.changes` if it does not exist.
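A sketch of the prerelease batching, assuming a hypothetical `1.4.0` release candidate (flags per the changie docs):
```
changie batch 1.4.0 --move-dir '1.4.0' --prerelease 'rc1'
changie merge
```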
For the final release, these commands batch up changes in `/.changes/unreleased` as well as `/.changes/<version>` and delete all prereleases. This rolls all prereleases up into a single final release. All `yaml` files in `/unreleased` and `<version>` are deleted at this point.
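And for the final release (again with hypothetical version numbers; flags per the changie docs):
```
changie batch 1.4.0 --include '1.4.0' --remove-prereleases
changie merge
```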
- Changie generates markdown files in the `.changes` directory that are parsed together with the `changie merge` command. Every time `changie merge` is run, it regenerates the entire file. For this reason, any changes made directly to `CHANGELOG.md` will be overwritten on the next run of `changie merge`.
- If changes need to be made to the `CHANGELOG.md`, make the changes to the relevant `<version>.md` file located in the `/.changes` directory. You will then run `changie merge` to regenerate the `CHANGELOG.md`.
- Do not run `changie batch` again on released versions. Our final release workflow deletes all of the yaml files associated with individual changes. If for some reason modifications to the `CHANGELOG.md` are required after we've generated the final release `CHANGELOG.md`, the modifications need to be done manually to the `<version>.md` file in the `/.changes` directory.
- changie can modify, create and delete files depending on the command you run. This is expected. Be sure to commit everything that has been modified and deleted.
- This file provides a full account of all changes to `dbt-core` and `dbt-postgres`
- Changes are listed under the (pre)release in which they first appear. Subsequent releases include changes from previous releases.
- "Breaking changes" listed under a version may require action from end users or external maintainers when upgrading to that version.
- Do not edit this file directly. This file is auto-generated using [changie](https://github.com/miniscruff/changie). For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry)
description: Report a bug or an issue you've found with dbt
title: "[Bug] <title>"
labels: ["bug", "triage"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report!
  - type: checkboxes
    attributes:
      label: Is this a new bug in dbt-core?
      description: >
        In other words, is this an error, flaw, failure or fault in our software?
        If this is a bug that broke existing functionality that used to work, please open a regression issue.
        If this is a bug in an adapter plugin, please open an issue in the adapter's repository.
        If this is a bug experienced while using dbt Cloud, please report to [support](mailto:support@getdbt.com).
        If this is a request for help or troubleshooting code in your own dbt project, please join our [dbt Community Slack](https://www.getdbt.com/community/join-the-community/) or open a [Discussion question](https://github.com/dbt-labs/docs.getdbt.com/discussions).
        Please search to see if an issue already exists for the bug you encountered.
      options:
        - label: I believe this is a new bug in dbt-core
          required: true
        - label: I have searched the existing issues, and I could not find an existing issue for this bug
          required: true
  - type: textarea
    attributes:
      label: Current Behavior
      description: A concise description of what you're experiencing.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Expected Behavior
      description: A concise description of what you expected to happen.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Steps To Reproduce
      description: Steps to reproduce the behavior.
      placeholder: |
        1. In this environment...
        2. With this config...
        3. Run '...'
        4. See error...
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: |
        If applicable, log output to help explain your problem.
      render: shell
    validations:
      required: false
  - type: textarea
    attributes:
      label: Environment
      description: |
        examples:
          - **OS**: Ubuntu 20.04
          - **Python**: 3.9.12 (`python3 --version`)
          - **dbt-core**: 1.1.1 (`dbt --version`)
      value: |
        - OS:
        - Python:
        - dbt:
      render: markdown
    validations:
      required: false
  - type: dropdown
    id: database
    attributes:
      label: Which database adapter are you using with dbt?
      description: If the bug is specific to the database or adapter, please open the issue in that adapter's repository instead
      multiple: true
      options:
        - postgres
        - redshift
        - snowflake
        - bigquery
        - spark
        - other (mention it in "Additional Context")
    validations:
      required: false
  - type: textarea
    attributes:
      label: Additional Context
      description: |
        Links? References? Anything that will give us more context about the issue you are encountering!
        Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
about: Report a bug or an issue you've found with dbt
title: ''
labels: bug, triage
assignees: ''
---
### Describe the bug
A clear and concise description of what the bug is. What command did you run? What happened?
### Steps To Reproduce
In as much detail as possible, please provide steps to reproduce the issue. Sample data that triggers the issue, example model code, etc is all very helpful here.
### Expected behavior
A clear and concise description of what you expected to happen.
### Screenshots and log output
If applicable, add screenshots or log output to help explain your problem.
- [ ] [Engineering] Create a platform issue to update dbt-spark versions to dbt Cloud
- [ ] [Product] Release new version of dbt-utils with new dbt version compatibility. If there are breaking changes requiring a minor version, plan upgrades of other packages that depend on dbt-utils.
- [ ] [Engineering] If this isn't a final release, create an epic for the next release
We try to maintain actions that are shared across repositories in a single place so that necessary changes can be made in one place.
[dbt-labs/actions](https://github.com/dbt-labs/actions/) is the central repository of actions and workflows we use across repositories.
GitHub Actions also live locally within a repository. The workflows can be found at `.github/workflows` from the root of the repository. These should be specific to that code base.
Note: We are actively moving actions into the central Action repository so there is currently some duplication across repositories.
___
## Basics of Using Actions
### Viewing Output
- View the detailed action output for your PR in the **Checks** tab of the PR. This only shows the most recent run. You can also view high level **Checks** output at the bottom of the PR.
- View _all_ action output for a repository from the [**Actions**](https://github.com/dbt-labs/dbt-core/actions) tab. Workflow results last 1 year. Artifacts last 90 days, unless specified otherwise in individual workflows.
This view often shows what seem like duplicates of the same workflow. This occurs when files are renamed but the workflow name has not changed. These are in fact _not_ duplicates.
You can see the branch the workflow runs from in this view. It is listed in the table between the workflow name and the time/duration of the run. When blank, the workflow is running in the context of the `main` branch.
### How to view what workflow file is being referenced from a run
- When viewing the output of a specific workflow run, click the 3 dots at the top right of the display. There will be an option to `View workflow file`.
### How to manually run a workflow
- If a workflow has the `on: workflow_dispatch` trigger, it can be manually triggered
- From the [**Actions**](https://github.com/dbt-labs/dbt-core/actions) tab, find the workflow you want to run, select it, and fill in any inputs required. That's it! You can also trigger it from the command line, as sketched below.
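For example, via the GitHub CLI (workflow file name and input are hypothetical):
```
gh workflow run release.yml --ref main -f version_number=1.4.0
```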
### How to re-run jobs
- Some actions cannot be rerun in the GitHub UI. Namely the snyk checks and the cla check. Snyk checks are rerun by closing and reopening the PR. You can retrigger the cla check by commenting on the PR with `@cla-bot check`
___
## General Standards
### Permissions
- By default, workflows have read permissions in the repository for the contents scope only when no permissions are explicitly set.
- It is best practice to always define the permissions explicitly. This will allow actions to continue to work when the default permissions on the repository are changed. It also allows explicit grants of the least permissions possible.
- There are a lot of permissions available. [Read up on them](https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs) if you're unsure what to use.
```yaml
permissions:
  contents: read
  pull-requests: write
```
### Secrets
- When to use a [Personal Access Token (PAT)](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) vs the [GITHUB_TOKEN](https://docs.github.com/en/actions/security-guides/automatic-token-authentication) generated for the action?
The `GITHUB_TOKEN` is used by default. In most cases it is sufficient for what you need.
If you expect the workflow to result in a commit that should retrigger workflows, you will need to use a Personal Access Token for the bot to commit the file. When using the `GITHUB_TOKEN`, the resulting commit will not trigger another GitHub Actions workflow run. This is due to limitations set by GitHub. See [the docs](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#using-the-github_token-in-a-workflow) for a more detailed explanation.
For example, we must use a PAT in our workflow to commit a new changelog yaml file for bot PRs. Once the file has been committed to the branch, it should retrigger the check to validate that a changelog exists on the PR. Otherwise, it would stay in a failed state since the check would never retrigger.
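A sketch of using a PAT in a checkout step (the secret name is hypothetical):
```yaml
- name: Check out the repository
  uses: actions/checkout@v3
  with:
    # a PAT stored as a repository secret, so commits made on this checkout retrigger workflows
    token: ${{ secrets.BOT_PAT }}
```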
### Triggers
You can configure your workflows to run when specific activity on GitHub happens, at a scheduled time, or when an event outside of GitHub occurs. Read more details in the [GitHub docs](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows).
These triggers are under the `on` key of the workflow and more than one can be listed.
```yaml
on:
  push:
    branches:
      - "main"
      - "*.latest"
      - "releases/*"
  pull_request:
    # catch when the PR is opened with the label or when the label is added
    types: [opened, labeled]
  workflow_dispatch:
```
Some triggers of note that we use:
- `push` - Runs your workflow when you push a commit or tag.
- `pull_request` - Runs your workflow when activity on a pull request in the workflow's repository occurs. Takes in a list of activity types (opened, labeled, etc.) if appropriate.
- `pull_request_target` - Same as `pull_request`, but runs in the context of the PR's target branch.
- `workflow_call` - Used with reusable workflows. Triggered by another workflow calling it (see the sketch below).
- `workflow_dispatch` - Gives the ability to manually trigger a workflow from the GitHub API, GitHub CLI, or GitHub browser interface.
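A sketch of the `workflow_call` trigger with a typed input (names hypothetical):
```yaml
on:
  workflow_call:
    inputs:
      version_number:
        description: "The version being released"
        type: string
        required: true
```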
### Basic Formatting
- Add a description of what your workflow does at the top in this format
- Print out all variables you will reference as the first step of a job. This allows for easier debugging. The first job should log all inputs. Subsequent jobs should reference outputs of other jobs, if present.
When possible, generate variables at the top of your workflow in a single place to reference later. This is not always strictly possible since you may generate a value to be used later mid-workflow.
Be sure to use quotes around these logs so special characters are not interpreted.
```yaml
job1:
  steps:
    - name: "[DEBUG] Print Variables"
      run: |
        echo "all variables defined as inputs"
        echo "The last commit sha in the release: ${{ inputs.sha }}"
        echo "The release version number: ${{ inputs.version_number }}"
        echo "The changelog_path: ${{ inputs.changelog_path }}"
        echo "The build_script_path: ${{ inputs.build_script_path }}"
        echo "The s3_bucket_name: ${{ inputs.s3_bucket_name }}"
        echo "The package_test_command: ${{ inputs.package_test_command }}"
  # collect all the variables that need to be used in subsequent jobs
```
- When it's not obvious what something does, add a comment!
___
## Tips
### Context
- The [GitHub CLI](https://cli.github.com/) is available in the default runners
- Actions run in your context. i.e., using an action from the marketplace that uses the `GITHUB_TOKEN` uses the `GITHUB_TOKEN` generated by your workflow run.
### Actions from the Marketplace
- Don’t use external actions for things that can easily be accomplished manually.
- Always read through what an external action does before using it! Often an action in the GitHub Actions Marketplace can be replaced with a few lines in bash. This is much more maintainable (and won’t change under us) and clear as to what’s actually happening. It also prevents any
- Pin actions _we don't control_ to tags.
### Connecting to AWS
- Authenticate with the AWS-managed credentials action
```yaml
- name: Configure AWS credentials from Test account
  # the AWS-managed credentials action; `with:` inputs (keys/region) omitted here
  uses: aws-actions/configure-aws-credentials@v2
```
- Then access with the aws command that comes installed on the action runner machines
```yaml
- name: Copy Artifacts from S3 via CLI
  run: aws s3 cp ${{ env.s3_bucket }} . --recursive
```
### Testing
- Depending on what your action does, you may be able to use [`act`](https://github.com/nektos/act) to test the action locally. Some features of GitHub Actions do not work with `act`, among those are reusable workflows. If you can't use `act`, you'll have to push your changes up before being able to test. This can be slow.
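For example (job name hypothetical; flags per the `act` docs):
```
act --list                 # list the jobs act can see
act pull_request -j unit   # run the 'unit' job as if triggered by a pull_request event
```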
# Github package 'latest' tag wrangler for containers
## Usage
Plug in the necessary inputs to determine if the container being built should be tagged 'latest' at the package level, for example `dbt-redshift:latest`.
## Inputs
| Input | Description |
| - | - |
| `package` | Name of the GH package to check against |
| `new_version` | Semver of new container |
| `gh_token` | GH token with package read scope |
| `halt_on_missing` | Return non-zero exit code if requested package does not exist. (defaults to false) |
## Outputs
| Output | Description |
| - | - |
| `latest` | Whether or not the new container should be tagged 'latest' |
| `minor_latest` | Whether or not the new container should be tagged major.minor.latest |
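A sketch of calling this action from a workflow step (the local action path and versions are hypothetical):
```yaml
- name: Check whether to tag latest
  id: latest
  uses: ./.github/actions/latest-wrangler   # hypothetical path to this action
  with:
    package: dbt-redshift
    new_version: 1.4.1
    gh_token: ${{ secrets.GITHUB_TOKEN }}
```
Subsequent steps could then gate the docker tag push on `steps.latest.outputs.latest`.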
Include the number of the issue addressed by this PR above if applicable.
PRs for code changes without an associated issue *will not be merged*.
See CONTRIBUTING.md for more information.
Example:
resolves #1234
-->
### Description
<!--- Describe the Pull Request here -->
<!---
Describe the Pull Request here. Add any references and info to help reviewers
understand your changes. Include any tradeoffs you considered.
-->
### Checklist
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have updated the `CHANGELOG.md` and added information about my change to the "dbt next" section.
- [ ] I have read [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md) and understand what's expected of me
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have [opened an issue to add/update docs](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose), or docs changes are not required/relevant for this PR
- [ ] I have run `changie new` to [create a changelog entry](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-a-changelog-entry)
changelog_comment: 'Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-changelog-entry).'
The core function of dbt is SQL compilation and execution. Users create projects of dbt resources (models, tests, seeds, snapshots, ...), defined in SQL and YAML files, and they invoke dbt to create, update, or query associated views and tables. Today, dbt makes heavy use of Jinja2 to enable the templating of SQL, and to construct a DAG (Directed Acyclic Graph) from all of the resources in a project. Users can also extend their projects by installing resources (including Jinja macros) from other projects, called "packages."
## dbt-core
Most of the python code in the repository is within the `core/dbt` directory. Currently the main subdirectories are:
- [`adapters`](core/dbt/adapters): Define base classes for behavior that is likely to differ across databases
- [`clients`](core/dbt/clients): Interface with dependencies (agate, jinja) or across operating systems
- [`config`](core/dbt/config): Reconcile user-supplied configuration from connection profiles, project files, and Jinja macros
- [`context`](core/dbt/context): Build and expose dbt-specific Jinja functionality
- [`contracts`](core/dbt/contracts): Define Python objects (dataclasses) that dbt expects to create and validate
- [`deps`](core/dbt/deps): Package installation and dependency resolution
- [`graph`](core/dbt/graph): Produce a `networkx` DAG of project resources, and select those resources given user-supplied criteria
- [`include`](core/dbt/include): The dbt "global project," which defines default implementations of Jinja2 macros
There are two supported ways of invoking dbt: from the command line and using an RPC server.
The "tasks" map to top-level dbt commands. So `dbt run` => task.run.RunTask, etc. Some are more like abstract base classes (GraphRunnableTask, for example) but all the concrete types outside of task/rpc should map to tasks. Currently one executes at a time. The tasks kick off their “Runners” and those do execute in parallel. The parallelism is managed via a thread pool, in GraphRunnableTask.
The "tasks" map to top-level dbt commands. So `dbt run` => task.run.RunTask, etc. Some are more like abstract base classes (GraphRunnableTask, for example) but all the concrete types outside of task should map to tasks. Currently one executes at a time. The tasks kick off their “Runners” and those do execute in parallel. The parallelism is managed via a thread pool, in GraphRunnableTask.
core/dbt/include/index.html
This is the docs website code. It comes from the dbt-docs repository, and is generated when a release is packaged.
## Adapters
dbt uses an adapter-plugin pattern to extend support to different databases, warehouses, query engines, etc. The four core adapters that are in the main repository, contained within the [`plugins`](plugins) subdirectory, are: Postgres, Redshift, Snowflake, and BigQuery. Other warehouses use adapter plugins defined in separate repositories (e.g. [dbt-spark](https://github.com/fishtown-analytics/dbt-spark), [dbt-presto](https://github.com/fishtown-analytics/dbt-presto)).
dbt uses an adapter-plugin pattern to extend support to different databases, warehouses, query engines, etc. For testing and development purposes, the dbt-postgres plugin lives alongside the dbt-core codebase, in the [`plugins`](plugins) subdirectory. Like other adapter plugins, it is a self-contained codebase and package that builds on top of dbt-core.
Each adapter is a mix of python, Jinja2, and SQL. The adapter code also makes heavy use of Jinja2 to wrap modular chunks of SQL functionality, define default implementations, and allow plugins to override it.
Each adapter plugin is a standalone python package that includes:
- [docker](docker/): All dbt versions are published as Docker images on DockerHub. This subfolder contains the `Dockerfile` (constant) and `requirements.txt` (one for each version).
- [etc](etc/): Images for README
- [scripts](scripts/): Helper scripts for testing, releasing, and producing JSON schemas. These are not included in distributions of dbt, nor are they rigorously tested—they're just handy tools for the dbt maintainers :)
`dbt-core` is open source software. It is what it is today because community members have opened issues, provided feedback, and [contributed to the knowledge loop](https://www.getdbt.com/dbt-labs/values/). Whether you are a seasoned open source contributor or a first-time committer, we welcome and encourage you to contribute code, documentation, ideas, or problem statements to this project.
1. [About this document](#about-this-document)
2. [Proposing a change](#proposing-a-change)
3. [Getting the code](#getting-the-code)
4. [Setting up an environment](#setting-up-an-environment)
5. [Running `dbt-core` in development](#running-dbt-core-in-development)
6. [Testing dbt-core](#testing)
7. [Debugging](#debugging)
8. [Adding a changelog entry](#adding-a-changelog-entry)
9. [Submitting a Pull Request](#submitting-a-pull-request)
## About this document
This document is a guide intended for folks interested in contributing to `dbt`. Below, we document the process by which members of the community should create issues and submit pull requests (PRs) in this repository. It is not intended as a guide for using `dbt`, and it assumes a certain level of familiarity with Python concepts such as virtualenvs, `pip`, python modules, filesystems, and so on. This guide assumes you are using macOS or Linux and are comfortable with the command line.
There are many ways to contribute to the ongoing development of `dbt-core`, such as by participating in discussions and issues. We encourage you to first read our higher-level document: ["Expectations for Open Source Contributors"](https://docs.getdbt.com/docs/contributing/oss-expectations).
If you're new to python development or contributing to open-source software, we encourage you to read this document from start to finish. If you get stuck, drop us a line in the `#dbt-core-development` channel on [slack](https://community.getdbt.com).
The rest of this document serves as a more granular guide for contributing code changes to `dbt-core` (this repository). It is not intended as a guide for using `dbt-core`, and some pieces assume a level of familiarity with Python development (virtualenvs, `pip`, etc). Specific code snippets in this guide assume you are using macOS or Linux and are comfortable with the command line.
### Signing the CLA
Please note that all contributors to `dbt` must sign the [Contributor License Agreement](https://docs.getdbt.com/docs/contributor-license-agreements) to have their Pull Request merged into the `dbt` codebase. If you are unable to sign the CLA, then the `dbt` maintainers will unfortunately be unable to merge your Pull Request. You are, however, welcome to open issues and comment on existing ones.
### Notes
## Proposing a change
`dbt` is Apache 2.0-licensed open source software. `dbt` is what it is today because community members like you have opened issues, provided feedback, and contributed to the knowledge loop for the entire community. Whether you are a seasoned open source contributor or a first-time committer, we welcome and encourage you to contribute code, documentation, ideas, or problem statements to this project.
### Defining the problem
If you have an idea for a new feature or if you've discovered a bug in `dbt`, the first step is to open an issue. Please check the list of [open issues](https://github.com/fishtown-analytics/dbt/issues) before creating a new one. If you find a relevant issue, please add a comment to the open issue instead of creating a new one. There are hundreds of open issues in this repository and it can be hard to know where to look for a relevant open issue. **The `dbt` maintainers are always happy to point contributors in the right direction**, so please err on the side of documenting your idea in a new issue if you are unsure where a problem statement belongs.
> **Note:** All community-contributed Pull Requests _must_ be associated with an open issue. If you submit a Pull Request that does not pertain to an open issue, you will be asked to create an issue describing the problem before the Pull Request can be reviewed.
### Discussing the idea
After you open an issue, a `dbt` maintainer will follow up by commenting on your issue (usually within 1-3 days) to explore your idea further and advise on how to implement the suggested changes. In many cases, community members will chime in with their own thoughts on the problem statement. If you as the issue creator are interested in submitting a Pull Request to address the issue, you should indicate this in the body of the issue. The `dbt` maintainers are _always_ happy to help contributors with the implementation of fixes and features, so please also indicate if there's anything you're unsure about or could use guidance around in the issue.
### Submitting a change
If an issue is appropriately well scoped and describes a beneficial change to the `dbt` codebase, then anyone may submit a Pull Request to implement the functionality described in the issue. See the sections below on how to do this.
The `dbt` maintainers will add a `good first issue` label if an issue is suitable for a first-time contributor. This label often means that the required code change is small, limited to one database adapter, or a net-new addition that does not impact existing functionality. You can see the list of currently open issues on the [Contribute](https://github.com/fishtown-analytics/dbt/contribute) page.
Here's a good workflow:
- Comment on the open issue, expressing your interest in contributing the required code change
- Outline your planned implementation. If you want help getting started, ask!
- Follow the steps outlined below to develop locally. Once you have opened a PR, one of the `dbt` maintainers will work with you to review your code.
- Add a test! Tests are crucial for both fixes and new features alike. We want to make sure that code works as intended, and that it avoids any bugs previously encountered. Currently, the best resource for understanding `dbt`'s [unit](test/unit) and [integration](test/integration) tests is the tests themselves. One of the maintainers can help by pointing out relevant examples.
In some cases, the right resolution to an open issue might be tangential to the `dbt` codebase. The right path forward might be a documentation update or a change that can be made in user-space. In other cases, the issue might describe functionality that the `dbt` maintainers are unwilling or unable to incorporate into the `dbt` codebase. When it is determined that an open issue describes functionality that will not translate to a code change in the `dbt` repository, the issue will be tagged with the `wontfix` label (see below) and closed.
### Using issue labels
The `dbt` maintainers use labels to categorize open issues. Some labels indicate the databases impacted by the issue, while others describe the domain in the `dbt` codebase germane to the discussion. While most of these labels are self-explanatory (eg. `snowflake` or `bigquery`), there are others that are worth describing.
| tag | description |
| --- | ----------- |
| [triage](https://github.com/fishtown-analytics/dbt/labels/triage) | This is a new issue which has not yet been reviewed by a `dbt` maintainer. This label is removed when a maintainer reviews and responds to the issue. |
| [bug](https://github.com/fishtown-analytics/dbt/labels/bug) | This issue represents a defect or regression in `dbt` |
| [enhancement](https://github.com/fishtown-analytics/dbt/labels/enhancement) | This issue represents net-new functionality in `dbt` |
| [good first issue](https://github.com/fishtown-analytics/dbt/labels/good%20first%20issue) | This issue does not require deep knowledge of the `dbt` codebase to implement. This issue is appropriate for a first-time contributor. |
| [help wanted](https://github.com/fishtown-analytics/`dbt`/labels/help%20wanted) / [discussion](https://github.com/fishtown-analytics/dbt/labels/discussion) | Conversation around this issue is ongoing, and there isn't yet a clear path forward. Input from community members is most welcome. |
| [duplicate](https://github.com/fishtown-analytics/dbt/issues/duplicate) | This issue is functionally identical to another open issue. The `dbt` maintainers will close this issue and encourage community members to focus conversation on the other one. |
| [snoozed](https://github.com/fishtown-analytics/dbt/labels/snoozed) | This issue describes a good idea, but one which will probably not be addressed in a six-month time horizon. The `dbt` maintainers will revisit these issues periodically and re-prioritize them accordingly. |
| [stale](https://github.com/fishtown-analytics/dbt/labels/stale) | This is an old issue which has not recently been updated. Stale issues will periodically be closed by `dbt` maintainers, but they can be re-opened if the discussion is restarted. |
| [wontfix](https://github.com/fishtown-analytics/dbt/labels/wontfix) | This issue does not require a code change in the `dbt` repository, or the maintainers are unwilling/unable to merge a Pull Request which implements the behavior described in the issue. |
#### Branching Strategy
`dbt` has three types of branches:
- **Trunks** are where active development of the next release takes place. There is one trunk, named `develop` at the time of writing, and it is the default branch of the repository.
- **Release Branches** track a specific, not yet complete release of `dbt`. Each minor version release has a corresponding release branch. For example, the `0.11.x` series of releases has a branch called `0.11.latest`. This allows us to release new patch versions under `0.11` without necessarily needing to pull them into the latest version of `dbt`.
- **Feature Branches** track individual features and fixes. On completion they should be merged into the trunk branch or a specific release branch.
- **Adapters:** Is your issue or proposed code change related to a specific [database adapter](https://docs.getdbt.com/docs/available-adapters)? If so, please open issues, PRs, and discussions in that adapter's repository instead. The sole exception is Postgres; the `dbt-postgres` plugin lives in this repository (`dbt-core`).
- **CLA:** Please note that anyone contributing code to `dbt-core` must sign the [Contributor License Agreement](https://docs.getdbt.com/docs/contributor-license-agreements). If you are unable to sign the CLA, the `dbt-core` maintainers will unfortunately be unable to merge any of your Pull Requests. We welcome you to participate in discussions, open issues, and comment on existing ones.
- **Branches:** All pull requests from community contributors should target the `main` branch (default). If the change is needed as a patch for a minor version of dbt that has already been released (or is already a release candidate), a maintainer will backport the changes in your PR to the relevant "latest" release branch (`1.0.latest`, `1.1.latest`, ...). If an issue fix applies to a release branch, that fix should be first committed to the development branch and then to the release branch (rarely release-branch fixes may not apply to `main`).
- **Releases**: Before releasing a new minor version of Core, we prepare a series of alphas and release candidates to allow users (especially employees of dbt Labs!) to test the new version in live environments. This is an important quality assurance step, as it exposes the new code to a wide variety of complicated deployments and can surface bugs before official release. Releases are accessible via pip, homebrew, and dbt Cloud.
## Getting the code
### Installing git
You will need `git` in order to download and modify the `dbt-core` source code. On macOS, the best way to download git is to just install [Xcode](https://developer.apple.com/support/xcode/).
### External contributors
If you are not a member of the `dbt-labs` GitHub organization, you can contribute to `dbt-core` by forking the `dbt-core` repository. For a detailed overview on forking, check out the [GitHub docs on forking](https://help.github.com/en/articles/fork-a-repo). In short, you will need to:
1. Fork the `dbt-core` repository
2. Clone your fork locally
3. Check out a new branch for your proposed changes
4. Push changes to your fork
5. Open a pull request against `dbt-labs/dbt-core` from your forked repository
### dbt Labs contributors
If you are a member of the `dbt-labs` GitHub organization, you will have push access to the `dbt-core` repo. Rather than forking `dbt-core` to make your changes, just clone the repository, check out a new branch, and push directly to that branch. Branch names should be prefixed with `CT-XXX/`, where:
* CT stands for 'core team'
* XXX stands for a JIRA ticket number
## Setting up an environment
There are some tools that will be helpful to you in developing locally. While this is the list relevant for `dbt-core` development, many of these tools are used commonly across open-source python projects.
### Tools
These are the tools used in `dbt-core` development and testing:
- [`tox`](https://tox.readthedocs.io/en/latest/) to manage virtualenvs across python versions. We currently target the latest patch releases for Python 3.7, 3.8, 3.9, 3.10 and 3.11
- [`pytest`](https://docs.pytest.org/en/latest/) to define, discover, and run tests
- [`flake8`](https://flake8.pycqa.org/en/latest/) for code linting
- [`black`](https://github.com/psf/black) for code formatting
- [`mypy`](https://mypy.readthedocs.io/en/stable/) for static type checking
- [`pre-commit`](https://pre-commit.com) to easily run those checks
- [`changie`](https://changie.dev/) to create changelog entries, without merge conflicts
- [`make`](https://users.cs.duke.edu/~ola/courses/programming/Makefiles/Makefiles.html) to run multiple setup or test steps in combination. Don't worry too much, nobody _really_ understands how `make` works, and our Makefile aims to be super simple.
- [GitHub Actions](https://github.com/features/actions) for automating tests and checks, once a PR is pushed to the `dbt-core` repository
A deep understanding of these tools is not required to effectively contribute to `dbt-core`, but we recommend checking out the attached documentation if you're interested in learning more about each one.
#### Virtual environments
We strongly recommend using virtual environments when developing code in `dbt-core`. We recommend creating this virtualenv
in the root of the `dbt-core` repository. To create a new virtualenv, run:
```sh
python3 -m venv env
source env/bin/activate
```
This will create and activate a new Python virtual environment.
#### Docker and `docker-compose`
Docker and `docker-compose` are both used in testing. Specific instructions for your OS can be found [here](https://docs.docker.com/get-docker/).
#### Postgres (optional)
For testing, and later in the examples in this document, you may want to have `psql` available so you can poke around in the database and see what happened. We recommend that you use [homebrew](https://brew.sh/) for that on macOS, and your package manager on Linux. You can install any version of the postgres client that you'd like. On macOS, with homebrew set up, you can run:
```sh
brew install postgresql
```
## Running `dbt-core` in development
### Installation
First make sure that you set up your `virtualenv` as described in [Setting up an environment](#setting-up-an-environment). Also ensure you have the latest version of pip installed with `pip install --upgrade pip`. Next, install `dbt-core` (and its dependencies):
When installed in this way, any changes you make to your local copy of the source code will be reflected immediately in your next `dbt` run.
### Running `dbt-core`
With your virtualenv activated, the `dbt` script should point back to the source code you've cloned on your machine. You can verify this by running `which dbt`. This command should show you a path to an executable in your virtualenv.
Configure your [profile](https://docs.getdbt.com/docs/configure-your-profile) as necessary to connect to your target databases. It may be a good idea to add a new profile pointing to a local Postgres instance, or a specific test sandbox within your data warehouse if appropriate.
## Testing
Getting the `dbt` integration tests set up in your local environment will be very helpful as you start to make changes to your local version of `dbt`. The section that follows outlines some helpful tips for setting up the test environment.
Once you're able to manually test that your code change is working as expected, it's important to run existing automated tests, as well as adding some new ones. These tests will ensure that:
- Your code changes do not unexpectedly break other established functionality
- Your code changes can handle all known edge cases
- The functionality you're adding will _keep_ working in the future
Although `dbt-core` works with a number of different databases, you won't need to supply credentials for every one of these databases in your test environment. Instead, you can test most `dbt-core` code changes with Python and Postgres.
### Initial setup
Postgres offers the easiest way to test most `dbt-core` functionality today. The Postgres tests are the fastest to run and the easiest to set up. To run the Postgres integration tests, you'll have to do one extra step of setting up the test database:
```sh
make setup-db
docker-compose up -d database
```
`dbt` uses test credentials specified in a `test.env` file in the root of the repository for non-Postgres databases. This `test.env` file is git-ignored, but please be _extra_ careful to never check in credentials or other sensitive information when developing against `dbt`. To create your `test.env` file, copy the provided sample file, then supply your relevant credentials. This step is only required to use non-Postgres databases.
```
cp test.env.sample test.env
$EDITOR test.env
```
> In general, it's most important to have successful unit and Postgres tests. Once you open a PR, `dbt` will automatically run integration tests for the other three core database adapters. Of course, if you are a BigQuery user, contributing a BigQuery-only feature, it's important to run BigQuery tests as well.
### Test commands
There are a few methods for running tests locally.
```sh
make test
# Runs postgres integration tests with py38 in "fail fast" mode.
make integration
```
> These make targets assume you have a local installation of a recent version of [`tox`](https://tox.readthedocs.io/en/latest/) for unit/integration testing and pre-commit for code quality checks,
> unless you choose a Docker container to run tests. Run `make help` for more info.
Check out the other targets in the Makefile to see other commonly used test
suites.
#### `pre-commit`
[`pre-commit`](https://pre-commit.com) takes care of running all code checks for formatting and linting. Run `make dev` to install `pre-commit` in your local environment (we recommend running this command with a python virtual environment active). This command installs several pip executables, including black, mypy, and flake8. Once this is done, you can use any of the linter-based make targets, as well as a git pre-commit hook that will ensure proper formatting and linting.
#### `tox`
[`tox`](https://tox.readthedocs.io/en/latest/) takes care of managing virtualenvs and installing dependencies in order to run tests. You can also run tests in parallel; for example, you can run unit tests for Python 3.7, 3.8, 3.9, 3.10, and 3.11 in parallel with `tox -p`. You can also run unit tests for a specific python version with `tox -e py37`. The configuration for these tests is located in `tox.ini`.
#### `pytest`
Finally, you can also run a specific test or group of tests using [`pytest`](https://docs.pytest.org/en/latest/) directly. With a virtualenv active and dev dependencies installed, you can do things like run all unit tests with `python -m pytest tests/unit`, or select tests by name with `python -m pytest tests/unit -k <test_name>`.
> See [pytest usage docs](https://docs.pytest.org/en/6.2.x/usage.html) for an overview of useful command-line options.
### Unit, Integration, Functional?
Here are some general rules for adding tests:
* unit tests (`test/unit` & `tests/unit`) don’t need to access a database; "pure Python" tests should be written as unit tests
* functional tests (`test/integration` & `tests/functional`) cover anything that interacts with a database, namely adapters
* *everything in* `test/*` *is being steadily migrated to* `tests/*`
## Debugging
1. The logs for a `dbt run` have stack traces and other information for debugging errors (in `logs/dbt.log` in your project directory).
2. Try using a debugger, like `ipdb`. For pytest: `--pdb --pdbcls=IPython.terminal.debugger:pdb`
3. Sometimes, it’s easier to debug on a single thread: `dbt --single-threaded run`
4. To print from within Jinja macros: `{{ log(msg, info=true) }}`
5. You can also add `{{ debug() }}` statements, which will drop you into some auto-generated code that the macro wrote.
6. The dbt “artifacts” are written out to the ‘target’ directory of your dbt project. They are in unformatted json, which can be hard to read. Format them with a JSON pretty-printer, for example `python -m json.tool target/manifest.json`.
* Append `# type: ignore` to the end of a line if you need to disable `mypy` on that line.
* Sometimes flake8 complains about lines that are actually fine, in which case you can put a comment on the line such as `# noqa` or `# noqa: ANNN`, where ANNN is the error code that flake8 issues.
* To collect output for `cProfile`, run dbt with the `-r` option and the name of an output file, e.g. `dbt -r dbt.cprof run`. If you just want to profile parsing, you can do `dbt -r dbt.cprof parse`. Then `pip install snakeviz` to view the output: run `snakeviz dbt.cprof` and the output will be rendered in a browser window.
## Adding or modifying a CHANGELOG Entry
We use [changie](https://changie.dev) to generate `CHANGELOG` entries. **Note:** Do not edit the `CHANGELOG.md` directly. Your modifications will be lost.
Follow the steps to [install `changie`](https://changie.dev/guide/installation/) for your system.
Once changie is installed and your PR is created for a new feature, simply run the following command and changie will walk you through the process of creating a changelog entry:
```shell
changie new
```
Commit the file that's created and your changelog entry is complete!
If you are contributing to a feature already in progress, you will modify the changie yaml file in dbt/.changes/unreleased/ related to your change. If you need help finding this file, please ask within the discussion for the pull request!
You don't need to worry about which `dbt-core` version your change will go into. Just create the changelog entry with `changie`, and open your PR against the `main` branch. All merged changes will be included in the next minor version of `dbt-core`. The Core maintainers _may_ choose to "backport" specific changes in order to patch older minor versions. In that case, a maintainer will take care of that backport after merging your PR, before releasing the new version of `dbt-core`.
## Submitting a Pull Request
Code can be merged into the current development branch `main` by opening a pull request. A `dbt-core` maintainer will review your PR. They may suggest code revision for style or clarity, or request that you add unit or integration test(s). These are good things! We believe that, with a little bit of help, anyone can contribute high-quality code.
Automated tests run via GitHub Actions. If you're a first-time contributor, all tests (including code checks and unit tests) will require a maintainer to approve. Changes in the `dbt-core` repository trigger integration tests against Postgres. dbt Labs also provides CI environments in which to test changes to other adapters, triggered by PRs in those adapters' repositories, as well as periodic maintenance checks of each adapter in concert with the latest `dbt-core` code changes.
Once all tests are passing and your PR has been approved, a `dbt-core` maintainer will merge your changes into the active development branch. And that's it! Happy developing :tada:
Sometimes, the Contributor License Agreement (CLA) auto-check bot doesn't find a user's entry in its roster. If you need to force a rerun, add `@cla-bot check` in a comment on the pull request.
**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
Analysts using dbt can transform their data by simply writing select statements, while dbt handles turning these statements into tables and views in a data warehouse.
These select statements, or "models", form a dbt project. Models frequently build on top of one another – dbt makes it easy to [manage relationships](https://docs.getdbt.com/docs/ref) between models, and [visualize these relationships](https://docs.getdbt.com/docs/documentation), as well as assure the quality of your transformations through [testing](https://docs.getdbt.com/docs/testing).
- Read the [introduction](https://docs.getdbt.com/docs/introduction/) and [viewpoint](https://docs.getdbt.com/docs/about/viewpoint/)
## Join the dbt Community
- Be part of the conversation in the [dbt Community Slack](http://community.getdbt.com/)
- Read more on the [dbt Community Discourse](https://discourse.getdbt.com)
## Reporting bugs and contributing code
- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/dbt-labs/dbt-core/issues/new)
- Want to help us build dbt? Check out the [Contributing Guide](https://github.com/dbt-labs/dbt-core/blob/HEAD/CONTRIBUTING.md)
## Code of Conduct
Everyone interacting in the dbt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the [dbt Code of Conduct](https://community.getdbt.com/code-of-conduct).
# This will add to the package’s __path__ all subdirectories of directories on sys.path named after the package which effectively combines both modules into a single namespace (dbt.adapters)
# The matching statement is in plugins/postgres/dbt/__init__.py
The Adapters module is responsible for defining database connection methods, caching information from databases, and defining how relations are handled. It also provides the two major connection types we have: base and sql.
# Directories
## `base`
Defines the base implementation Adapters can use to build out full functionality.
## `sql`
Defines a SQL implementation that inherits from the base implementation above, and comes with premade methods and macros that can be overridden as needed per adapter. (This is the most common type of adapter.)
# Files
## `cache.py`
Caches information from the database.
## `factory.py`
Defines how we generate adapter objects.
## `protocol.py`
Defines various interfaces for adapter objects, helping mypy correctly resolve methods (see the short sketch after this list).
## `reference_keys.py`
Configures naming scheme for cache elements to be universal.
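As a rough illustration of why `protocol.py` exists, here is a minimal, hypothetical sketch of structural typing with `typing.Protocol`; the names (`AdapterProtocol`, `execute`) are invented and far simpler than dbt's real protocols.
```python
# Hypothetical sketch: a Protocol lets mypy type-check any object whose shape
# matches, without requiring inheritance. dbt's real protocols are richer.
from typing import Protocol


class AdapterProtocol(Protocol):
    type_name: str

    def execute(self, sql: str) -> None:
        ...


def run_sql(adapter: AdapterProtocol, sql: str) -> None:
    # mypy accepts any adapter-shaped object here, whether or not it
    # inherits from anything in particular
    adapter.execute(sql)
```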
# This will add to the package’s __path__ all subdirectories of directories on sys.path named after the package which effectively combines both modules into a single namespace (dbt.adapters)
# The matching statement is in plugins/postgres/dbt/adapters/__init__.py
The class `SQLAdapter` in [base/impl.py](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/adapters/base/impl.py) is a (mostly) abstract object that adapter objects inherit from. The base class scaffolds out the methods that every adapter project usually should implement for smooth communication between dbt and the database.
Some target databases require more or fewer methods; it all depends on the warehouse's feature set.
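To make that concrete, here is a minimal, hypothetical sketch of inheriting from the base class; the adapter name is invented, and a real adapter must also supply a connection manager, macros, and many more method overrides.
```python
# Hypothetical sketch only: `MyWarehouseAdapter` is an invented name, and a
# real adapter also sets a ConnectionManager and implements more methods.
from dbt.adapters.sql import SQLAdapter


class MyWarehouseAdapter(SQLAdapter):
    @classmethod
    def date_function(cls) -> str:
        # the SQL expression the target warehouse uses for "current timestamp";
        # the Postgres adapter, for example, returns "now()" here
        return "now()"
```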
help="Supply arguments to the macro. This dictionary will be mapped to the keyword arguments defined in the selected macro. This argument should be a YAML string, eg. '{my_variable: my_value}'",
type=YAML(),
)
browser=click.option(
"--browser/--no-browser",
envvar=None,
help="Wether or not to open a local web browser after starting the server",
default=True,
)
cache_selected_only=click.option(
"--cache-selected-only/--no-cache-selected-only",
envvar="DBT_CACHE_SELECTED_ONLY",
help="Pre cache database objects relevant to selected resource only.",
)
compile_docs=click.option(
"--compile/--no-compile",
envvar=None,
help="Wether or not to run 'dbt compile' as part of docs generation",
default=True,
)
compile_parse=click.option(
"--compile/--no-compile",
envvar=None,
help="TODO: No help text currently available",
default=True,
)
config_dir=click.option(
"--config-dir",
envvar=None,
help="If specified, DBT will show path information for this project",
type=click.STRING,
)
debug=click.option(
"--debug/--no-debug",
"-d/ ",
envvar="DBT_DEBUG",
help="Display debug logging during dbt execution. Useful for debugging and making bug reports.",
)
# TODO: The env var and name (reflected in flags) are corrections!
# The original name was `DEFER_MODE` and used an env var called "DBT_DEFER_TO_STATE"
# Both of which break existing naming conventions.
# This will need to be fixed before use in the main codebase and communicated as a change to the community!
defer=click.option(
"--defer/--no-defer",
envvar="DBT_DEFER",
help="If set, defer to the state variable for resolving unselected nodes.",
help="Supply variables to the project. This argument overrides variables defined in your dbt_project.yml file. This argument should be a YAML string, eg. '{my_variable: my_value}'",
type=YAML(),
)
version=click.option(
"--version",
envvar=None,
help="Show version information",
is_flag=True,
)
version_check=click.option(
"--version-check/--no-version-check",
envvar="DBT_VERSION_CHECK",
help="Ensure dbt's version matches the one specified in the dbt_project.yml file ('require-dbt-version')",
default=True,
)
warn_error=click.option(
"--warn-error",
envvar="DBT_WARN_ERROR",
help="If dbt would normally warn, instead raise an exception. Examples include --select that selects nothing, deprecations, configurations with no associated models, invalid test configurations, and missing sources/refs in tests.",
default=None,
flag_value=True,
)
warn_error_options=click.option(
"--warn-error-options",
envvar="DBT_WARN_ERROR_OPTIONS",
default=None,
help="""If dbt would normally warn, instead raise an exception based on include/exclude configuration. Examples include --select that selects nothing, deprecations, configurations with no associated models, invalid test configurations,
and missing sources/refs in tests. This argument should be a YAML string, with keys 'include' or 'exclude'. eg. '{"include": "all", "exclude": ["NoNodesForSelectionCriteria"]}'""",
type=WarnErrorOptionsType(),
)
write_json=click.option(
"--write-json/--no-write-json",
envvar="DBT_WRITE_JSON",
help="Writing the manifest and run_results.json files to disk",
Model materializations are kept in `core/dbt/include/global_project/macros/materializations/models/`. Materializations are defined using syntax that isn't part of the Jinja standard library. These tags are referenced internally, and materializations can be overridden in user projects when users have specific needs.
```
-- Pseudocode for arguments
{% materialization <name>, <target name := one_of{default, adapter}> %}
…
{% endmaterialization %}
```
These blocks are referred to as Jinja extensions. Extensions are defined as part of the accepted Jinja code encapsulated within a dbt project. This includes system code used internally by dbt and user-space (i.e. user-defined) macros. Extensions exist to help Jinja users create reusable code blocks or abstract objects; for us, materializations are a great use case, since we pass these around as arguments within dbt system code.
The code that defines this extension is a class `MaterializationExtension` with a `parse` routine; it lives in [clients/jinja.py](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/clients/jinja.py). The routine enables Jinja to parse (i.e. recognize) the unique comma-separated arg structure of our `materialization` tags (e.g. the `table, default` header arguments).
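For a feel of how such an extension works, here is a loose, self-contained analog of a custom Jinja block tag. It is hypothetical: it quotes its arguments and simply renders the body, whereas dbt's real `MaterializationExtension` does much more.
```python
# Loose analog of a custom Jinja block tag; dbt's real MaterializationExtension
# in clients/jinja.py is considerably richer.
from jinja2 import Environment, nodes
from jinja2.ext import Extension


class MaterializationExtension(Extension):
    tags = {"materialization"}

    def parse(self, parser):
        lineno = next(parser.stream).lineno
        # first comma-separated arg: the materialization name
        name = parser.parse_expression()
        parser.stream.expect("comma")
        # second arg: the target (e.g. "default" or an adapter name)
        target = parser.parse_expression()
        body = parser.parse_statements(
            ("name:endmaterialization",), drop_needle=True
        )
        call = self.call_method("_render", [name, target])
        return nodes.CallBlock(call, [], [], body).set_lineno(lineno)

    def _render(self, name, target, caller):
        # dbt would register the materialization here; we just render the body
        return caller()


env = Environment(extensions=[MaterializationExtension])
template = env.from_string(
    "{% materialization 'table', 'default' %}select 1{% endmaterialization %}"
)
print(template.render())  # -> select 1
```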