Update tests to be able to run them outside of AppVeyor (part 3) #9426

Open
wants to merge 31 commits into base: development

Conversation

andreasjordan
Contributor

Follow-up to #9413.

@niphlod
Contributor

niphlod commented Jul 22, 2024

it's going to be a nightmare reviewing this PR ... that being said, if Start-DbaXESession was working before, why should it be skipped now?

@andreasjordan
Contributor Author

Very good question. I will remove those changes from this PR in the next step. And I can also remove all the changes that cannot easily be reviewed.

@andreasjordan
Contributor Author

Some background info: I have set up a local lab with three instances: instance1 is an Express edition (because some tests need Express), instance2 is a named instance on a fixed port, and instance3 is a "normal" named instance. All are version 2022.

I run all the Pester tests, one after another, against these instances. After each test script, I check for leftover databases, certificates, jobs, etc. So I changed the tests to make sure they clean up after themselves.
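(To illustrate, a minimal sketch of what such a leftover check could look like — the dbatoolsci_ prefix and the $script:instance2 variable are assumptions based on the naming used elsewhere in this thread, not the actual script:)

# Hypothetical leftover check, run after each test script (sketch only)
$leftovers = @()
$leftovers += Get-DbaDatabase -SqlInstance $script:instance2 -ExcludeSystem | Where-Object Name -like 'dbatoolsci_*'
$leftovers += Get-DbaAgentJob -SqlInstance $script:instance2 | Where-Object Name -like 'dbatoolsci_*'
$leftovers += Get-DbaDbCertificate -SqlInstance $script:instance2 | Where-Object Name -like 'dbatoolsci_*'
if ($leftovers) { Write-Warning "Leftover objects found: $($leftovers.Name -join ', ')" }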

My goal is to be able to run all tests against these instances. Then I will publish a script to set up this lab in Azure so that everyone can build such a lab to work on the tests.

But there is still a long way to go...

@niphlod
Contributor

niphlod commented Jul 22, 2024

IMHO, having the objective that each and every test leaves no leftovers is ..... not useful. Tests should run on a clean image that gets reset on each run, just like on a CI. You can't (and end users can't either) expect to run potentially dangerous tests within an environment and have all tests clean up after themselves... that's not the point of integration testing.

@andreasjordan
Contributor Author

I think the tests should also support development. While I change the code, I run the related tests over and over again to prove that my changes don't break anything. That only works if I can execute all tests multiple times.

@niphlod
Contributor

niphlod commented Jul 22, 2024

Sure, but it isn't a strict requirement. Most of dbatools' tests are integration tests, not unit tests.
So you may want to focus/invest time on a script that resets your lab easily/rapidly rather than modifying each and every test to "clean up most things after itself": I assure you the cleanup won't ever be 100% complete and, on top of that, you're adding time to each and every build on a proper CI (like AppVeyor) for no real benefit.
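(For what it's worth, such a reset can be as small as restoring VM checkpoints — a sketch only, assuming Hyper-V VMs with a checkpoint named 'clean'; the VM names below are made up:)

# Hypothetical lab reset instead of per-test cleanup (Hyper-V checkpoints; names are assumptions)
foreach ($vm in 'dbatools-sql1', 'dbatools-sql2', 'dbatools-sql3') {
    Restore-VMSnapshot -VMName $vm -Name 'clean' -Confirm:$false
    Start-VM -Name $vm
}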

@andreasjordan
Contributor Author

Ok, I get your point. And yes, I can do the cleanup outside of the test script.

Just add a note to any of my changes that you think are not needed, and I will revert them as well.

andreasjordan marked this pull request as draft July 24, 2024 14:11
@andreasjordan
Contributor Author

Just reverted this to a draft and will close it next week.

@potatoqualitee @niphlod If you find any of the changes useful, just add a note then I will open a new PR with those changes.

I will open a brand new repo in my personal account and transfer all the test scripts to the new repo. Then I will update them step by step to be able to run them in my test lab, and later update them to use Pester 5. This way I don't have to deal with AppVeyor.

@niphlod
Contributor

niphlod commented Jul 24, 2024

I'm completely lost: why would you spend time maintaining another set of tests that does not run on the "official" CI?
Are you going to run tests in your lab for each and every PR that comes in? There's a reason AppVeyor exists, and it is to avoid forcing users to have a local lab to run the test suite.
For authoring new tests, or running single tests sparingly, editing tests/constants.ps1 is more than enough.
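(As a sketch of that approach — assuming constants.ps1 simply assigns the $script:instanceN variables the tests use; the instance names below are placeholders, not real defaults:)

# Hypothetical local override: point the test suite at a local lab instead of the AppVeyor instances
$script:instance1 = "localhost\SQLEXPRESS"   # placeholder for a local Express instance
$script:instance2 = "localhost\INST2"        # placeholder for a named instance on a fixed port
$script:instance3 = "localhost\INST3"        # placeholder for another named instance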

@andreasjordan
Contributor Author

My problems with AppVeyor:

  • 2008 R2 Express Edition, 2016 and 2017 are outdated; current versions like 2019 and 2022 are not tested.
  • A lot of commands are not tested. And it's not easy to get a list of commands that are not tested.
  • Availability Groups on Windows Failover Clusters are not tested.
  • Failover Cluster Instances are not tested.
  • Azure SQL Database is not tested.
  • Instances with case sensitive collations are not tested.
  • Remote instances are not tested. A lot of tests only work with local instances.
  • It is not easy to analyse why a test fails when it fails. (This has taken me way too much time.)

If my clients ask me if dbatools is tested, I can only answer: Most of the commands are tested against some old versions of SQL Server. But I want to tell them that we have a test for every command and that all the tests are run regularly against current versions of SQL Server in typical environments.

I personally don't want to run all the tests for every PR, only once for a new release. But we could spin up that lab in Azure once just after merging all PRs to main and before a new release.

And I want to be able to develop new features or fix bugs in that lab where I can easily run all the tests before pushing to the dbatools repo.

Maybe I want something that none of the other contributors want. That's totally ok. That's why I will start in my own space so I don't bother you. @wsmelton sometimes talks about Pester 5. I personally don't see a way to update the tests without having a new lab outside of AppVeyor (locally or in the cloud) to do the migration.

@niphlod
Contributor

niphlod commented Jul 24, 2024

This makes more sense, although there are quite a few answers to give. I'm not trying to advocate for AppVeyor, I'm trying to make you spend time on things that matter.

2008 R2 Express Edition, 2016 and 2017 are outdated, Current versions like 2019 and 2022 are not tested.

Newer versions can be added; having 2008R2 was (is?) useful for people running migrations. There were definitely a LOT of those, though maybe now there are fewer people who need to interact with 2008R2. Except for niche commands, I expect SMO to be less "buggy" on 2019 and 2022 than on, say, 2016. It's also true that we could test against each and every SQL version, but we probably need to select a few and periodically "move to a newer set".

A lot of commands are not tested. And it's not easy to get a list of commands that are not tested.

Lemme introduce you to https://app.codecov.io/github/dataplat/dbatools

Availability Groups on Windows Failover Clusters are not tested.

And .... they'll probably never be on a CI, so having tests that run on labs every now and then is useful.

Failover Cluster Instances are not tested.

Same as above

Azure SQL Database is not tested.

Maybe 10% of the commands are useful in the context of Azure SQL. The point here is that someone needs to shell out $$$ to have a readily available db on Azure that tests can target.

Instances with case sensitive collations are not tested.

I wouldn't spend time on it generally as the installed percentage of CS instances is really tiny. Users will report bugs and we'll fix them.

Remote instances are not tested. A lot of tests only work with local instances.

This is news to me... what tests only work with local instances?

It is not easy to analyse why a test fails when it fails. (This has taken me way too much time.)

Except for a handful of corner cases, you can even stop the test execution and jump onto the AppVeyor box via RDP to see what's going on. This has more to do with the fact that our tests lack exception management and logging in general than with a problem in AppVeyor itself.

If my clients ask me if dbatools is tested, I can only answer: Most of the commands are tested against some old versions of SQL Server. But I want to tell them that we have a test for every command and that all the tests are run regularly against current versions of SQL Server in typical environments.

The way you want to go leads to a reply that MY customers would care about even less than the actual status: "tests work on andreasjordan's lab, and he runs them once in a while".

And I want to be able to develop new features or fix bugs in that lab where I can easily run all the tests before pushing to the dbatools repo.

Yeah, but you also may want to spend time developing new features, fixing bugs and adding regression tests that EVERYBODY can then run (and benefit from). 1000 points if they can run anywhere: locally, remotely, on Azure, on AWS RDS, on managed instances, on failover clusters, on containers .... 100 points if they can AT LEAST run on AppVeyor (because of regressions, the fact that tests can run in parallel, and that a lab is not strictly needed, because AppVeyor exists), 5 points if they can only run on your lab.

On "pester 5", yeah, me and @wsmelton are trying since at least 3 or 4 years but either there's some committment from the community or it's going to be a never-ending project.

Contributor

The change from 13 to 2 is the only thing about which I can't readily say "ok, I get it".

Contributor Author

Let me change that back and see what happens...

Contributor Author

Tests are still green. How is that possible?

Maybe because the test failed with an exception:
[image: screenshot of the exception]

So we don't catch those errors, or am I missing something?

Contributor Author

If you look at the AppVeyor output from the dbatools-buildref-index.json run, you see more failed tests.

Contributor

Yup, it seems that those tests have a structure that is not compatible with Pester.

Contributor Author

But the test works locally on my machine.

Contributor

The problem here is that something in either the BeforeAll or the AfterAll raises an error (seemingly the offending call is Start-DbaAgentJob -SqlInstance $script:instance2 -Job 'dbatoolsci_testjob').
I'm trying different variations because here, e.g., it seems that there is a slew of other jobs already present due to other tests (which is why the test creates 2 jobs but checks for 11, which is strange).
Being "bad-dy", ideally this test should clear out any existing job on the agent, create its own, do the tests, and clean up again (optionally).
Approaching it with a bit of brain, the test creates two jobs and should only make assumptions about those two (and in doing so, if one wants to run it locally, it will at least try to preserve already existing jobs).
I fear that starting the job (which is the step failing in the BeforeAll, preventing all the tests from running) is needed for the very last test, but, still, I'm investigating.

Contributor

Aaaand found it. The job step was failing to be created because AppVeyor targets "localhost\SQL2016", but the -SubsystemServer parameter needs the real name (it doesn't work with "localhost"). The job step was failing to be created silently, Start-DbaAgentJob threw an exception (because it was a job with no steps), and Pester 4 doesn't handle exceptions in the BeforeAll/AfterAll blocks well.

@@ -63,6 +63,7 @@ Describe "$CommandName Integration Tests" -Tags "IntegrationTests" {
AfterAll {
$null = Get-DbaDatabase -SqlInstance $script:instance1 -Database $dbname | Remove-DbaDatabase -Confirm:$false
Contributor

Maybe a single Get-DbaDatabase -Database $dbname, $dbname2, $dbname3 here is shorter and still effective.
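(Something along these lines, as a sketch — assuming $dbname2 and $dbname3 are defined earlier in the test:)

AfterAll {
    # One call removes all three test databases
    $null = Get-DbaDatabase -SqlInstance $script:instance1 -Database $dbname, $dbname2, $dbname3 | Remove-DbaDatabase -Confirm:$false
}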

Contributor Author

Good idea. Will update that.

Contributor

I don't see a real difference in the BeforeAll and AfterAll logic, but if it works, I trust you.

Contributor

Seeing the code, it seems that the previous logic does the correct thing (adding it back if it has been removed) rather than adding it "no matter what".

Contributor

This is, e.g., a thing that a test running on a proper CI that gets reset on each run doesn't care about, but if tests were to run locally without a "reset phase", it'd be a problem.

Contributor Author
@andreasjordan Jul 26, 2024

I will revert this.

Contributor

Care to elaborate on what's not correct in the removed test?

Contributor Author

The config is NOT supported only on 2008 R2, so the test will fail on any later version, where it is supported. We would have to get the version of the instance first.
Should I put the test back in?

@niphlod
Contributor

niphlod commented Jul 24, 2024

Added bits of comments where things didn't "feel right" at first glance. Every other modification seems perfectly fine.

@andreasjordan
Contributor Author

Thanks for all the feedback. Will work on that tomorrow.

andreasjordan marked this pull request as ready for review July 26, 2024 11:27
@niphlod
Contributor

niphlod commented Jul 29, 2024

Sorry for the delay, I'll review this in the next day or two, I didn't forget it ^_^

@andreasjordan
Contributor Author

Thank you very much for the detailed answers. It is always a pleasure to share ideas with you.

A lot of commands are not tested. And it's not easy to get a list of commands that are not tested.

Lemme introduce you to https://app.codecov.io/github/dataplat/dbatools

Thanks for the link.

Availability Groups on Windows Failover Clusters are not tested.

And .... they'll probably never be on a CI, so having tests that run on labs every now and then is useful.

Failover Cluster Instances are not tested.

Same as above

Those tests are not meant as CI tests. I'm trying to get both: CI tests and release tests. So every test should be able to run as a release test against more than one instance (different versions and configurations), but only a subset runs in the pipeline. Just like it is today, as some tests don't run on AppVeyor.

Azure SQL Database is not tested.

Maybe 10% of the commands are useful in the context of Azure SQL. The point here is that someone needs to shell out $$$ to have a readily available db on Azure that tests can target.

I personally have a 140 euro budget per month, and that would be OK to run the tests just before or after a release. This would be a starting point, not the best long-term solution.

Instances with case sensitive collations are not tested.

I wouldn't spend time on it generally as the installed percentage of CS instances is really tiny. Users will report bugs and we'll fix them.

If we test against instances that would be installed as part of the process, this would just be a single change in the install script.

Remote instances are not tested. A lot of tests only work with local instances.

This is news to me... what tests only work with local instances?

Some of them use C:\temp as a backup destination and then test with Test-Path or remove the files in the AfterAll. Maybe not "a lot of tests", but at least some of them. We should switch to network shares to access the files from both the instance and the test machine.
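(A rough sketch of what that could look like — the share path is purely hypothetical:)

# Hypothetical: a share reachable from both the test machine and the remote instance
$script:backupshare = "\\fileserver\dbatools-tests\backups"
$null = Backup-DbaDatabase -SqlInstance $script:instance2 -Database $dbname -Path $script:backupshare
(Get-ChildItem -Path $script:backupshare -Filter "$dbname*.bak").Count | Should -BeGreaterThan 0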

It is not easy to analyse why a test fails when it fails. (This has taken me way too much time.)

Except for a handful of corner cases, you can even stop the test execution and jump onto the AppVeyor box via RDP to see what's going on. This has more to do with the fact that our tests lack exception management and logging in general than with a problem in AppVeyor itself.

Maybe I should learn to RDP into the AppVeyor machines. But currently I don't want to spend that time.

If my clients ask me if dbatools is tested, I can only answer: Most of the commands are tested against some old versions of SQL Server. But I want to tell them that we have a test for every command and that all the tests are run regularly against current versions of SQL Server in typical environments.

The way you want to go leads to a reply that MY customers would care about even less than the actual status: "tests work on andreasjordan's lab, and he runs them once in a while".

Maybe I should think about that and try to rephrase it.

And I want to be able to develop new features or fix bugs in that lab where I can easily run all the tests before pushing to the dbatools repo.

Yeah, but you also may want to spend time developing new features, fixing bugs and adding regression tests that EVERYBODY can then run (and benefit from). 1000 points if they can run anywhere: locally, remotely, on Azure, on AWS RDS, on managed instances, on failover clusters, on containers .... 100 points if they can AT LEAST run on AppVeyor (because of regressions, the fact that tests can run in parallel, and that a lab is not strictly needed, because AppVeyor exists), 5 points if they can only run on your lab.

I want to build "infrastructure as code" that everybody can use if they have local Hyper-V or Azure access. "My local lab" should only be a first step, as I have limited time.

On "pester 5", yeah, me and @wsmelton are trying since at least 3 or 4 years but either there's some committment from the community or it's going to be a never-ending project.

Can we introduce a new folder "tests5" (or a better name), copy (or move?) a first test over, change the syntax to Pester 5, and run that first test with Pester 5? And then try to move the tests over one by one? Or have a comment line like "# pester5 ready" in the file so that we don't need a second directory? We need a way to start.
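(As a sketch of the marker-line idea — the "# pester5 ready" string and the file layout are assumptions, not anything that exists today:)

# Hypothetical runner: execute only the tests flagged as Pester 5-ready with Pester 5
$pester5Ready = Get-ChildItem -Path .\tests\*.Tests.ps1 | Where-Object {
    (Get-Content -Path $_.FullName -TotalCount 5) -match '# pester5 ready'
}
Invoke-Pester -Path $pester5Ready.FullName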

@niphlod
Contributor

niphlod commented Jul 30, 2024

Without replying inline to all points, I think we're more or less aligned on the general ideas; just my 2 cents here:

  • once upon a time a psdbatools.database.windows.net existed to at least test Connect-DbaInstance within the realm of CI, dunno if it's still up @potatoqualitee
  • Backup/restore scripts using C:\temp are IMHO the fastest and easiest way to accomplish testing locally and/or everywhere. Just like we have something in appveyor-lab, having fixed dirs avoids messing with the default dirs. So it basically resorts to either:
    • use something like $script:backupfolder in constants.ps1 and work from there (see the sketch after this list)
    • run this
New-Item -Path C:\temp -ItemType Directory -ErrorAction SilentlyContinue | Out-Null
New-Item -Path C:\temp\migration -ItemType Directory -ErrorAction SilentlyContinue | Out-Null
New-Item -Path C:\temp\backups -ItemType Directory -ErrorAction SilentlyContinue | Out-Null
  • On Pester 5, @wsmelton is pretty opinionated and may have already "templatized" a base test. If I recall correctly, he wanted to do everything on GH Actions (so Linux (pscore7) against SQL containers), but IMHO we'd lose a great deal of the current system (namely parallelism, AppVeyor general env setup, coverage). I suggested something more akin to what you're suggesting, so a way to run both Pester 4 and Pester 5 tests until we can "forget about Pester 4", and migrate a few tests at a time.
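(A sketch of the $script:backupfolder option mentioned in the list above — everything beyond the variable name is an assumption:)

# constants.ps1 (sketch): one configurable folder instead of hard-coded C:\temp paths
$script:backupfolder = "C:\temp\backups"
New-Item -Path $script:backupfolder -ItemType Directory -ErrorAction SilentlyContinue | Out-Null

# in a test, build paths from the shared variable rather than a literal C:\temp
$backupPath = Join-Path -Path $script:backupfolder -ChildPath "dbatoolsci_singlerestore"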

The most important point, IMHO, is trying to find some time (it's August) among the usual suspects and figure out whether we really still want to test things against 2008R2, OR whether we can maybe move towards 2016, 2019, 2022 instead of the current 2008R2, 2016, 2017 "instances".

# SubsystemServer needs the real underlying name; it doesn't work when targeting something like localhost\namedinstance.
# The typical error would be: WARNING: [17:19:26][New-DbaAgentJobStep] Something went wrong creating the job step | The specified '@server' is invalid (valid values are returned by sp_helpserver).
$srvName = Invoke-DbaQuery -SqlInstance $script:instance2 -Query "select @@servername as sn" -As PSObject
Contributor Author

How about using -As SingleValue and replacing $srvName.sn with $srvName?

Contributor

yeah, that would work, too.
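(The suggested variant would look roughly like this:)

# -As SingleValue returns the scalar directly, so no .sn property access is needed
$srvName = Invoke-DbaQuery -SqlInstance $script:instance2 -Query "select @@servername" -As SingleValue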

@andreasjordan
Contributor Author

Thanks for your work on this, @niphlod.
