Commit 47915f9

docs for workspaces

1 parent 477d6c3 commit 47915f9

File tree

3 files changed: +259 -2 lines changed


docs/content/en/latest/pipelines-overview.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ pip install gooddata-pipelines

 ## Examples
-Here are a couple of introductory examples how to manage GoodData resources using Pipelines:
+Here are a couple of introductory examples of how to manage GoodData resources using GoodData Pipelines:

 ### Provision Child Workspaces

docs/content/en/latest/pipelines/provisioning/_index.md

Lines changed: 44 additions & 1 deletion
@@ -7,7 +7,6 @@ no_list: true

Manage resources in GoodData.

## Supported Resources

Resources you can provision using GoodData Pipelines:
@@ -34,3 +33,47 @@ Full load provisioning aims to fully synchronize the state of your GoodData inst

### Incremental Load

During incremental provisioning, the algorithm interacts only with the resources specified in the input. During the incremental load, the input data expects an extra parameter: `is_active`. Resources with the value `True` will be updated. By setting it to `False`, you can mark resources for deletion. Any other resources already existing in GoodData will not be altered.
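The `is_active` flag effectively partitions the input into resources to update and resources to delete. A minimal sketch with hypothetical input rows (all fields other than `workspace_id` and `is_active` omitted for brevity):

```python
# Hypothetical incremental-load input rows; only these two workspaces
# are touched -- anything else already in GoodData is left alone.
rows = [
    {"workspace_id": "ws_keep", "is_active": True},   # will be updated
    {"workspace_id": "ws_drop", "is_active": False},  # marked for deletion
]

# How the algorithm conceptually splits the input:
to_update = [r["workspace_id"] for r in rows if r["is_active"]]
to_delete = [r["workspace_id"] for r in rows if not r["is_active"]]

print(to_update)  # ['ws_keep']
print(to_delete)  # ['ws_drop']
```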
## Usage

Regardless of workflow type or provisioned resource, the typical usage can be broken down into the following steps:

1. Initialize the provisioner.
1. Validate your data using an input model.
1. Run the selected provisioning method (`.full_load()` or `.incremental_load()`) with your validated data.

Check the [resource pages](#supported-resources) for examples of workflow implementations.
## Logs

By default, the provisioners do not print any information to the console. However, you can subscribe to the emitted logs using the `.subscribe()` method on the `logger` property of the provisioner instance. The logging service emits unformatted messages based on severity.

```python
# Import and set up your logger
import logging

# Import the provisioner
from gooddata_pipelines import WorkspaceProvisioner

host = "http://localhost:3000"
token = "some_user_token"

# In this example, we use the Python standard logging library.
# However, you can subscribe any logger conforming to the LoggerLike protocol
# defined in gooddata_pipelines.logger.logger.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Initialize the provisioner
provisioner = WorkspaceProvisioner.create(host=host, token=token)

# Subscribe to the logging service
provisioner.logger.subscribe(logger)

# Continue with the provisioning
...
```
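The subscription mechanism described above is essentially the observer pattern: a logging service keeps a list of subscribed loggers and fans each emitted message out to all of them. A hypothetical stand-in sketch of that pattern (not the actual `gooddata_pipelines` implementation):

```python
import logging

class LogBroadcaster:
    """Hypothetical stand-in for the provisioner's logging service."""

    def __init__(self) -> None:
        self._subscribers: list = []

    def subscribe(self, logger) -> None:
        # Any object with a .log(level, message) method can subscribe
        self._subscribers.append(logger)

    def emit(self, level: int, message: str) -> None:
        # Fan the message out to every subscribed logger
        for subscriber in self._subscribers:
            subscriber.log(level, message)

broadcaster = LogBroadcaster()
broadcaster.subscribe(logging.getLogger("pipeline"))
broadcaster.emit(logging.INFO, "workspace provisioned")
```

This is why any `LoggerLike` object works: the service only needs the subscribed object to expose logging methods, not to be a `logging.Logger` instance.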

docs/content/en/latest/pipelines/provisioning/workspaces.md

Lines changed: 214 additions & 0 deletions
@@ -3,3 +3,217 @@ title: "Workspaces"

linkTitle: "Workspaces"
weight: 1
---

Workspace provisioning allows you to create, update, or delete child workspaces.

You can provision child workspaces using full load or incremental load methods. Each of these methods requires a specific input type.
## Usage

Start by importing and initializing the `WorkspaceProvisioner`.

```python
from gooddata_pipelines import WorkspaceProvisioner

host = "http://localhost:3000"
token = "some_user_token"

# Initialize the provisioner with GoodData credentials
provisioner = WorkspaceProvisioner.create(host=host, token=token)
```
Then validate your data using the input model corresponding to the provisioned resource and selected workflow type, i.e., `WorkspaceFullLoad` if you intend to run the provisioning in full load mode, or `WorkspaceIncrementalLoad` if you want to provision incrementally.

The models expect the following fields:

```python
class WorkspaceFullLoad:
    parent_id: str  # ID of the parent workspace
    workspace_id: str  # ID of the child workspace
    workspace_name: str  # Name of the child workspace
    workspace_data_filter_id: str | None = None  # ID of the applied workspace data filter
    workspace_data_filter_values: list[str] | None = None  # Filter values to apply

class WorkspaceIncrementalLoad:
    parent_id: str
    workspace_id: str
    workspace_name: str
    workspace_data_filter_id: str | None = None
    workspace_data_filter_values: list[str] | None = None
    is_active: bool  # Set to True to keep the workspace, False to delete it
```
Use one of the models to validate your data:

```python
# Add the model to the imports
from gooddata_pipelines import WorkspaceProvisioner, WorkspaceFullLoad

host = "http://localhost:3000"
token = "some_user_token"

# Initialize the provisioner with GoodData credentials
provisioner = WorkspaceProvisioner.create(host=host, token=token)

# Load your data
raw_data = [
    {
        "parent_id": "parent_workspace_id",
        "workspace_id": "workspace_id_1",
        "workspace_name": "Workspace 1",
        "workspace_data_filter_id": "data_filter_id",
        "workspace_data_filter_values": ["workspace_data_filter_value_1"],
    },
]

# Validate the data
validated_data = [
    WorkspaceFullLoad(
        parent_id=item["parent_id"],
        workspace_id=item["workspace_id"],
        workspace_name=item["workspace_name"],
        workspace_data_filter_id=item["workspace_data_filter_id"],
        workspace_data_filter_values=item["workspace_data_filter_values"],
    )
    for item in raw_data
]
```
With the provisioner initialized and your data validated, you can run the provisioning:

```python
# Import, initialize, validate...
...

# Run the provisioning method
provisioner.full_load(validated_data)
```
## Workspace Data Filters

If you want to apply a Workspace Data Filter to a child workspace, the filter needs to be set up on the parent workspace before you run the provisioning.

See [Set Up Data Filters in Workspaces](https://www.gooddata.com/docs/cloud/workspaces/workspace-data-filters/) to learn how workspace data filters work in GoodData.
## Examples

Here are full examples of the full load and incremental load workspace provisioning workflows:

### Full Load

```python
import logging

from gooddata_pipelines import WorkspaceProvisioner, WorkspaceFullLoad

host = "http://localhost:3000"
token = "some_user_token"

# Initialize the provisioner
provisioner = WorkspaceProvisioner.create(host=host, token=token)

# Optional: set up logging and subscribe to logs emitted by the provisioner
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

provisioner.logger.subscribe(logger)

# Prepare your data
raw_data: list[dict] = [
    {
        "parent_id": "parent_workspace_id",
        "workspace_id": "workspace_id_1",
        "workspace_name": "Workspace 1",
        "workspace_data_filter_id": "data_filter_id",
        "workspace_data_filter_values": ["workspace_data_filter_value_1"],
    },
    {
        "parent_id": "parent_workspace_id",
        "workspace_id": "workspace_id_2",
        "workspace_name": "Workspace 2",
        "workspace_data_filter_id": "data_filter_id",
        "workspace_data_filter_values": ["workspace_data_filter_value_2"],
    },
    {
        "parent_id": "parent_workspace_id",
        "workspace_id": "child_workspace_id_1",
        "workspace_name": "Workspace 3",
        "workspace_data_filter_id": "data_filter_id",
        "workspace_data_filter_values": ["workspace_data_filter_value_3"],
    },
]

# Validate the data
validated_data = [
    WorkspaceFullLoad(
        parent_id=item["parent_id"],
        workspace_id=item["workspace_id"],
        workspace_name=item["workspace_name"],
        workspace_data_filter_id=item["workspace_data_filter_id"],
        workspace_data_filter_values=item["workspace_data_filter_values"],
    )
    for item in raw_data
]

# Run the provisioning with the validated data
provisioner.full_load(validated_data)
```
### Incremental Load

```python
import logging

from gooddata_pipelines import WorkspaceProvisioner, WorkspaceIncrementalLoad

host = "http://localhost:3000"
token = "some_user_token"

# Initialize the provisioner
provisioner = WorkspaceProvisioner.create(host=host, token=token)

# Optional: set up logging and subscribe to logs emitted by the provisioner
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

provisioner.logger.subscribe(logger)

# Prepare your data
raw_data: list[dict] = [
    {
        "parent_id": "parent_workspace_id",
        "workspace_id": "workspace_id_1",
        "workspace_name": "Workspace 1",
        "workspace_data_filter_id": "data_filter_id",
        "workspace_data_filter_values": ["workspace_data_filter_value_1"],
        "is_active": True,
    },
    {
        "parent_id": "parent_workspace_id",
        "workspace_id": "workspace_id_2",
        "workspace_name": "Workspace 2",
        "workspace_data_filter_id": "data_filter_id",
        "workspace_data_filter_values": ["workspace_data_filter_value_2"],
        "is_active": True,
    },
    {
        "parent_id": "parent_workspace_id",
        "workspace_id": "child_workspace_id_1",
        "workspace_name": "Workspace 3",
        "workspace_data_filter_id": "data_filter_id",
        "workspace_data_filter_values": ["workspace_data_filter_value_3"],
        "is_active": False,  # This will mark the workspace for deletion
    },
]

# Validate the data
validated_data = [
    WorkspaceIncrementalLoad(
        parent_id=item["parent_id"],
        workspace_id=item["workspace_id"],
        workspace_name=item["workspace_name"],
        workspace_data_filter_id=item["workspace_data_filter_id"],
        workspace_data_filter_values=item["workspace_data_filter_values"],
        is_active=item["is_active"],
    )
    for item in raw_data
]

# Run the provisioning with the validated data
provisioner.incremental_load(validated_data)
```
