Unit tests for opening-saving YAML #6
Conversation
Kate/yaml validation
Commits:
- adjust datetime format
- run tests on PR, show failed tests
- add pytests for file open-save
- typo
- separate the workflow
- absolute import
- stop ignoring yml on commit
- import top level repo
- run pytests from root
- run pytests from the workspace
- update imports in test folder
- init file for tests
- mode importlib
- import qgis as optional
- avoiding qgis imports if possible
- fix imports
- optional typing for qgis imports
- same
- make pyqt5 offscreen mode
- run tests in a virtual display
- Install PyQt5 dependencies manually
- switch to qbot instead of QApplication
- only init dialog
- remove typos
- YAML for windows
- runt test
- ignore Qgs imports
- run a bit more
- print URL
- print from start
- show print statements
- print statements flush
- more prints
- check file location
- more prints
- wrap QgsMessageLog to try/except
- print new file path for checking
- write wrong YAML
- Revert "write wrong YAML" (This reverts commit dd7a6e4.)
- fix test
- remove datatime validation (for testing)
- Revert "remove datatime validation (for testing)" (This reverts commit 627f488.)
sync to master
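Several of the commits above ("make pyqt5 offscreen mode", "switch to qbot instead of QApplication") point at headless Qt testing. A minimal `conftest.py` sketch for that setup might look like the following (this is inferred from the commit messages, not the repo's actual file):

```python
# Sketch of a conftest.py for headless PyQt tests (assumed setup, inferred
# from the commit messages above, not the repository's actual file).
import os

# Must be set before PyQt5 is imported anywhere, so Qt never tries to open
# a real display; CI runners without X11 can then execute the GUI tests.
os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")
```

With pytest-qt installed, individual tests can then request the `qtbot` fixture instead of constructing a `QApplication` by hand, which matches the "switch to qbot" commit.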
doublebyte1 left a comment
Thanks for adding the unit tests, and for including them in the GitHub actions! 🥇
Could you also add a section about setting up and running the tests (standalone) in the README file?
I wrote some comments about reading the yaml test files directly from their repositories. Do you think that makes sense, or maybe there is a reason for having them here?
If we copy them here, I think that we run the risk that they become outdated, and we won't even know which version they belong to. At the same time, reading them from the upstream repositories can also lead to failing tests, and it will be hard to pinpoint whether that is because of code changes or schema changes. One other option could be to tag the config files with a commit (or release) hash, so that we would know exactly which version they belong to. I would ❤️ to hear your thoughts about this.
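The commit-pinning idea could be as simple as building raw-file URLs from a fixed hash (a sketch; the repository, file path, and hash below are illustrative, the hash is reused from a link later in this thread):

```python
# Sketch: fetch a pygeoapi sample config pinned to an exact commit, so the
# version of the test file is unambiguous. The repo/path are assumptions.
PINNED_COMMIT = "5ed31b47773e7fc0b2a5b1a34055c3b2659d4952"  # example hash

def pinned_raw_url(repo: str, commit: str, path: str) -> str:
    """Build a raw.githubusercontent.com URL for a file at an exact commit."""
    return f"https://raw.githubusercontent.com/{repo}/{commit}/{path}"

url = pinned_raw_url("geopython/pygeoapi", PINNED_COMMIT,
                     "tests/pygeoapi-test-config.yml")
```

A test fixture could download (or cache) the file from such a URL; a failing test then points at either a code change or a deliberate bump of the pinned hash, never a silent upstream change.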
I think with all the code that you introduced, you are in a position to implement the most challenging test of all: comparing the input yaml and the output yaml 😺
I am aware this may require a refactoring of the code, but from a user's perspective, I think it is super important for the tool to not change anything that was not strictly updated by the user (even when these changes are apparently harmless).
import subprocess
from ..pygeoapi_config_dialog import PygeoapiConfigDialog
I think we need a third test, to validate that the yaml that we input is the same as the yaml that we output:
Example:
`diff cite.config.yml saved_cite.config.yml` (the output of this command should be empty)
I foresee some challenges in order to pass that test (but I am confident you can address them 😁 ):
- The order of the keys may be different in the returned file; nevertheless that should still yield a valid result.
- Some optional fields with default values are being added; that should not happen - we can delegate the task of adding default values to pygeoapi. For example this:
    hello-world:
      type: process
      processor:
        name: HelloWorld

Is being transformed into this:

    hello-world:
      type: collection
      title: ''
      description: ''
      keywords: []
      links: []
      extents:
        spatial:
          bbox: [-180, -90, 180, 90]
          crs: http://www.opengis.net/def/crs/OGC/1.3/CRS84
      providers: []
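The proposed round-trip test could start as a parsed-document comparison, which is naturally insensitive to key order (a sketch assuming PyYAML is available; the helper name is an assumption):

```python
# Sketch of the proposed round-trip test: parse both files with PyYAML and
# compare the resulting dicts, so key-order differences do not fail the test.
import yaml

def yaml_equal(path_in: str, path_out: str) -> bool:
    """True if two YAML files describe the same data, ignoring key order."""
    with open(path_in) as f_in, open(path_out) as f_out:
        return yaml.safe_load(f_in) == yaml.safe_load(f_out)
```

Unlike a plain `diff`, this comparison already absorbs the first challenge (key order); the added-defaults problem would still surface as a mismatch and need separate handling.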
I see the issue here. The test writes the YAML directly, while in the normal user interaction the Error window would appear and the Save file option would not be reachable (the validation check runs before the Save file dialog opens, and that part is skipped in the unit tests). This is solvable. But the mandatory fields would still be added on file opening, so I'll look for a way to ignore them in the diff.
I don't think you should implement validation for this test. Maybe I should have picked an example of a collection type that is supported (not OGC API - Processes) (:
Let's take the lakes collection, at cite.config.yml:
    lakes:
      type: collection
      title: Large Lakes
      description: lakes of the world, public domain
      keywords:
        - lakes
      crs:
        - CRS84
      links:
        - type: text/html
          rel: canonical
          title: information
          href: http://www.naturalearthdata.com/
          hreflang: en-US
      extents:
        spatial:
          bbox: [-180,-90,180,90]
          crs: http://www.opengis.net/def/crs/OGC/1.3/CRS84
        temporal:
          begin: 2011-11-11T00:00:00Z
          end: null # or empty
      providers:
        - type: tile
          name: MVT-tippecanoe
          data: ../data/tiles/ne_110m_lakes
          options:
            bounds: [[-124.953634,-16.536406],[109.929807,66.969298]]
            zoom:
              min: 0
              max: 5
          format:
            name: pbf
            mimetype: application/vnd.mapbox-vector-tile
The tests output this collection:
    lakes:
      type: collection
      title: Large Lakes
      description: lakes of the world, public domain
      keywords:
        - lakes
      links:
        - type: text/html
          rel: canonical
          href: http://www.naturalearthdata.com/
          title: information
          hreflang: en-US
      extents:
        spatial:
          bbox: [-180, -90, 180, 90]
          crs: http://www.opengis.net/def/crs/OGC/1.3/CRS84
        temporal:
          begin: 2011-11-11T00:00:00Z
      providers:
        - type: tile
          name: MVT-tippecanoe
          data: ../data/tiles/ne_110m_lakes
          options:
            layer: ''
            style: ''
            version: ''
          format:
            name: pbf
            mimetype: application/vnd.mapbox-vector-tile
The provider options had bounds and a zoom dictionary, but now I see layer, style and version options that did not exist before, with empty values (maybe these come from WMSFacade?). I think this is the sort of thing that could be caught by a simple test that compares the input yaml and the output yaml (without any validation).
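One way to make such a comparison tolerant of these spurious empty defaults (a sketch only, not the plugin's actual code) is to recursively drop empty-valued keys before diffing:

```python
# Sketch (assumed helper, not the plugin's real code): recursively drop keys
# whose values are empty strings/lists/dicts before comparing, so defaults
# like layer: '' / style: '' / version: '' do not register as differences.
def strip_empty(node):
    if isinstance(node, dict):
        cleaned = {k: strip_empty(v) for k, v in node.items()}
        return {k: v for k, v in cleaned.items() if v not in ("", [], {})}
    if isinstance(node, list):
        return [strip_empty(v) for v in node]
    return node
```

Note this would also hide genuinely empty user values, so it is a diagnostic aid rather than a replacement for fixing the root cause (the defaults being written at all).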
What is happening to the OGC API - Processes collection points to a different issue: what happens to collections or providers that are not supported? I think they should be written to the output file without any changes, but some warning should be shown to the user stating that an unsupported key was detected and that we cannot guarantee the validation of that element. Having unsupported elements breaks our ability to provide a valid pygeoapi config, but I guess there is not much we can do about that.
The process collection is a bit different from the others; [the only required fields are `type` and `processor.name`](https://github.com/geopython/pygeoapi/blob/5ed31b47773e7fc0b2a5b1a34055c3b2659d4952/pygeoapi/resources/schemas/config/pygeoapi-config-0.x.yml#L591). The type in this case is `process`, rather than `collection`.
A process is a mathematical function, which may not have a spatial extent. These fields do not apply in the case of processes:
    title:
    description:
    keywords:
    links:
    extents:
      spatial:
        bbox:
        crs:
If we cannot disable the validation, maybe it is better to update the validator to account for these fields.
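One way to update the validator along these lines (a sketch; the function and constant names are assumptions, not the plugin's actual API) is to derive the required keys from the resource type, using the field names discussed above:

```python
# Sketch: choose required keys per resource type, so a 'process' resource is
# not forced to carry collection-only fields. Names here are assumptions.
COLLECTION_ONLY_KEYS = {"title", "description", "keywords", "links", "extents"}

def required_keys(resource: dict) -> set:
    """Keys the validator should demand for this resource."""
    if resource.get("type") == "process":
        # Per the pygeoapi schema, processes only require type + processor.name
        return {"type", "processor"}
    return {"type"} | COLLECTION_ONLY_KEYS
```

The validation loop can then check `required_keys(res) <= res.keys()` per resource instead of applying one fixed list to everything.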
Regarding the files to test against, I would keep them deterministic, with a fixed set of files. If we tested against the versioned config files, what exactly would we want to test for?
Hopefully I have answered that question in my other comments. Let me know if I didn't (:
Thanks, all clear now! :)
Update field values according to byteroad#6
As we discussed, here are the keys that should allow free-form strings in the UI (to support environment variables):
There are examples in the reference file:
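A sketch of how the UI could recognize such values so they bypass strict validation (the `${VAR}` placeholder syntax follows pygeoapi's config convention; the function name is an assumption):

```python
# Sketch: detect pygeoapi-style ${VAR} environment-variable placeholders, so
# the UI can treat such values as free-form strings and skip type validation.
import re

ENV_VAR_RE = re.compile(r"\$\{[A-Za-z_][A-Za-z0-9_]*\}")

def contains_env_var(value) -> bool:
    """True if the value is a string containing a ${VAR} placeholder."""
    return isinstance(value, str) and bool(ENV_VAR_RE.search(value))
```

A field-level validator could call this first and, on a match, accept the raw string and leave substitution to pygeoapi at runtime.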
This reverts commit 8feb4f0.
Applied all the changes we discussed today and moved the problematic YAML files out of the 'samples' folder so they won't block the tests 🙌
@KatKatKateryna Thanks a lot for the latest changes! 💯
- I find the diff output when saving really informative; the fact that it prompts the user to accept the differences before saving really addresses the issue of responsibility.
- I made some manual tests, editing and adding things, and the plugin seems to load and save as it should, without further surprises.
- Now we can use environment variables on the relevant fields, without issues 😃
- The read-only keys seem to be working really well!
Thanks for removing the problematic sample files. However, our primary file to test is this one: https://github.com/dgterritorio/OGCAPI/blob/main/pygeoapi/docker.config.yml ; apologies for not having pointed you to it earlier. Do you think you could add it to the test samples? 🙏
When testing loading and saving with this file (without any changes), I noticed a couple of glitches:
- The plugin seems to be removing the search_path of a couple of collections (cos2023v1, cos2018v3), which is problematic because it can cause unexpected results.
- The diff identifies changes in the temporal extents of multiple collections; however, the old and new keys are exactly the same. In reality there are no changes, which is fine, but it may be confusing for the user. Example:
    "changed": {
      "resources.nuts2.extents.temporal.end": {
        "old": "2007-10-30T08:57:29Z",
        "new": "2007-10-30T08:57:29Z"
      },
If you could just address these two issues, I think that we are ready to merge! 🙏
@doublebyte1 thanks a lot for checking! I added a similar file from the byteroad repo after our conversation, noticed these issues, and moved the file out of the yaml_samples folder so the test would pass. The issues are:
- In some Resources, search_path sits directly at the Resource level (which is not supported according to this table): this is why search_path, being out of the hierarchy, is deleted. The file passes JSON schema validation probably because the schema only checks mandatory, non-provider-specific fields; the schema file doesn't even have a 'search_path' keyword.
- The diff was not picking up the similarity because it was comparing the plugin's data (deserialized to datetime) instead of the saved YAML (converted back to string). This is fixed in the last commit 🙌
So everything is working as intended now, except for the unexpected search_path fields out of hierarchy.
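The datetime fix described above can be sketched as a normalization step applied before diffing (an assumed helper, not the plugin's actual code):

```python
# Sketch: normalize datetime objects back to ISO "Z" strings before diffing,
# so a value like "2007-10-30T08:57:29Z" does not appear as a spurious change
# when compared against its deserialized datetime counterpart.
from datetime import datetime, timezone

def normalize(value):
    if isinstance(value, datetime):
        if value.tzinfo is not None:
            # Convert aware datetimes to naive UTC before formatting
            value = value.astimezone(timezone.utc).replace(tzinfo=None)
        return value.strftime("%Y-%m-%dT%H:%M:%SZ")
    if isinstance(value, dict):
        return {k: normalize(v) for k, v in value.items()}
    if isinstance(value, list):
        return [normalize(v) for v in value]
    return value
```

Running both sides of the comparison through such a function makes "old" and "new" compare equal whenever they denote the same instant, regardless of which representation each side happens to hold.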
@KatKatKateryna this is a good example of how the plugin can help us fix the yaml files! 😉 I have updated the pygeoapi config file, so that the search_path is in the right place and the datetime objects are unquoted. Now this file is saved correctly through the UI (without any warnings), and it passes the test. It seems I do not have permissions to edit your PR, so I will fix the file after the merge! Thanks for all the great work! 🙏



Unit tests were added to run the PyQt plugin in headless mode. This required some changes in the main code:
Additionally:
Screenshots: (image omitted)
Failed tests: (images omitted)