docs/content/en/latest/pipelines/provisioning/_index.md

Manage resources in GoodData.
## Supported Resources
Resources you can provision using GoodData Pipelines:
### Full Load

Full load provisioning aims to fully synchronize the state of your GoodData instance with the input data.
### Incremental Load
During incremental provisioning, the algorithm only interacts with the resources specified in the input. The input data for an incremental load must include an extra field: `is_active`. Resources with the value `True` will be updated, while setting it to `False` marks a resource for deletion. Any other resources already existing in GoodData are left untouched.
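For illustration, an incremental input could look like the following. The IDs and names are made up, and the workspace fields are assumed to mirror the full load model, with the extra `is_active` flag:

```python
# Hypothetical incremental input rows; only these two workspaces
# will be touched by the incremental load.
incremental_rows = [
    # is_active=True: this workspace will be updated
    {"parent_id": "parent", "workspace_id": "ws_keep", "workspace_name": "Keep me", "is_active": True},
    # is_active=False: this workspace is marked for deletion
    {"parent_id": "parent", "workspace_id": "ws_drop", "workspace_name": "Drop me", "is_active": False},
]

# Workspaces absent from the input are left untouched.
to_update = [r["workspace_id"] for r in incremental_rows if r["is_active"]]
to_delete = [r["workspace_id"] for r in incremental_rows if not r["is_active"]]
```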
## Usage
Regardless of the workflow type or provisioned resource, typical usage can be broken down into the following steps:
1. Initialize the provisioner
1. Validate your data using an input model
1. Run the selected provisioning method (`.full_load()` or `.incremental_load()`) with your validated data
Check the [resource pages](#supported-resources) for examples of workflow implementations.
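As a rough, library-free sketch of steps 2 and 3, the `Workspace` dataclass and `run_full_load` function below are illustrative stand-ins shaped like the models described on this page, not the actual gooddata_pipelines API:

```python
from dataclasses import dataclass

# Illustrative stand-in for an input model; not the real library class.
@dataclass
class Workspace:
    parent_id: str
    workspace_id: str
    workspace_name: str

def run_full_load(validated: list[Workspace]) -> list[str]:
    # Stand-in for provisioner.full_load(): returns the IDs it would sync.
    return [w.workspace_id for w in validated]

raw = [{"parent_id": "p", "workspace_id": "w1", "workspace_name": "One"}]
validated = [Workspace(**row) for row in raw]  # step 2: validate the data
synced = run_full_load(validated)              # step 3: run the method
```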
## Logs
By default, the provisioners do not print any information to the console. However, you can subscribe to the emitted logs using the `.subscribe()` method on the `logger` property of the provisioner instance. The logging service emits unformatted messages based on severity.
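To make the subscription pattern concrete, here is a minimal, self-contained sketch of an emitter with a `.subscribe()` method. The `LogEmitter` class and its `(severity, message)` callback signature are illustrative stand-ins, not the actual gooddata_pipelines API:

```python
import logging

# Illustrative stand-in for a log emitter with a subscribe() method.
# The real provisioner exposes a similar hook via its `logger` property;
# the callback signature used here is an assumption for demonstration.
class LogEmitter:
    def __init__(self) -> None:
        self._subscribers = []

    def subscribe(self, callback) -> None:
        """Register a callback invoked for every emitted message."""
        self._subscribers.append(callback)

    def emit(self, severity: int, message: str) -> None:
        for callback in self._subscribers:
            callback(severity, message)

emitter = LogEmitter()
received: list[tuple[int, str]] = []
emitter.subscribe(lambda severity, msg: received.append((severity, msg)))
emitter.emit(logging.INFO, "provisioning started")
```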
```python
# Import and set up your logger
import logging

# Import the provisioner
from gooddata_pipelines import WorkspaceProvisioner
```
Then validate your data using the input model corresponding to the provisioned resource and selected workflow type, i.e., `WorkspaceFullLoad` if you intend to run the provisioning in full load mode, or `WorkspaceIncrementalLoad` if you want to provision incrementally.
The models expect the following fields:
```python
class WorkspaceFullLoad:
    parent_id: str  # ID of parent workspace
    workspace_id: str  # ID of child workspace
    workspace_name: str  # Name of child workspace
    workspace_data_filter_id: str | None = None  # ID of applied workspace data filter
    workspace_data_filter_values: list[str] | None = None  # Filter values to apply
```
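For instance, raw input rows matching these fields might look like this (the IDs and names are made up for illustration):

```python
# Hypothetical raw input rows matching the WorkspaceFullLoad fields.
raw_rows = [
    {
        "parent_id": "demo_parent",
        "workspace_id": "ws_alpha",
        "workspace_name": "Alpha",
        "workspace_data_filter_id": "region_filter",
        "workspace_data_filter_values": ["us-east"],
    },
    {
        # Optional filter fields omitted; they default to None on the model.
        "parent_id": "demo_parent",
        "workspace_id": "ws_beta",
        "workspace_name": "Beta",
    },
]
```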
With the provisioner initialized and your data validated, you can run the provisioning:
```python
# Import, initialize, validate...
...

# Run the provisioning method
provisioner.full_load(validated_data)
```
## Workspace Data Filters
If you want to apply a Workspace Data Filter to a child workspace, the filter must be set up on the parent workspace before you run the provisioning.
See [Set Up Data Filters in Workspaces](https://www.gooddata.com/docs/cloud/workspaces/workspace-data-filters/) to learn how workspace data filters work in GoodData.
## Examples
Here are full examples of full load and incremental load workspace provisioning workflows:
### Full Load
```python
import logging

from gooddata_pipelines import WorkspaceProvisioner, WorkspaceFullLoad
```