Add disk space limitation section to Dynamic Worker page #3144
@@ -42,7 +42,7 @@ Customers may choose between Windows and Ubuntu virtual machine images for their

Your Octopus Cloud [task cap](/docs/octopus-cloud/task-cap) determines the resources available to your dynamic worker. As of January 2025, dynamic worker virtual machines are resourced as follows. These specifications may be adjusted over time.

| Task cap | vCPUs (Qty.) | Memory (GB) |
| -------: | -----------: | ----------: |
|        5 |            2 |           4 |
|       10 |            4 |           8 |
@@ -59,6 +59,12 @@ We recommend customers who would benefit from scalable workers consider [Kuberne

Dynamic workers are created on demand and leased to an Octopus Cloud instance for a limited time [before being destroyed](/docs/infrastructure/workers/dynamic-worker-pools#on-demand). Dynamic workers are destroyed when they have been idle for 60 minutes or when they reach 72 hours of existence. All data written to disk is lost when the worker is destroyed.

### Disk space

Dynamic workers run on virtual machines with a fixed disk. The free space available to your deployments and runbook runs is bounded, and may be consumed by package downloads, container image pulls, and intermediate files created during a step. Workloads that regularly process large packages, pull large container images, or run many parallel steps may exhaust the available disk and cause steps to fail.
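A step can guard against this limitation with a pre-flight check before downloading anything large. This is not something the docs prescribe; it is a minimal bash sketch assuming the Ubuntu worker image, and the 2 GB threshold and mount point `/` are illustrative values to tune for your workload.

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check: fail fast if the worker's disk is low
# before pulling large packages or container images.
set -euo pipefail

required_kb=$((2 * 1024 * 1024))   # illustrative 2 GB threshold, in KB

# GNU df (Ubuntu): print only the available-space column for the root filesystem
available_kb=$(df --output=avail -k / | tail -n 1 | tr -d ' ')

if [ "$available_kb" -lt "$required_kb" ]; then
  echo "Insufficient disk space: ${available_kb} KB available, ${required_kb} KB required" >&2
  exit 1
fi

echo "Disk check passed: ${available_kb} KB available"
```

Failing early like this produces a clearer task log message than a mid-step "no space left on device" error.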
Contributor

Should we add an additional sentence that explains how our health checks monitor and recycle workers that run out of disk space? My reasoning is that I don't think we want to convince people not to use dynamic workers, just to make the limitations explicit and let them choose. If they think that we never clean up the workers, they might not use them even if they are not expected to hit this limitation.

Contributor (Author)

Thanks for raising this. Totally fair concern, and I agree we don't want readers walking away thinking dynamic workers shouldn't be used. My hesitation with going into the health-check / recycling mechanics is that it starts to expose internal infrastructure behaviour that might not be relevant to customers. I think the current wording already addresses your concern indirectly: by talking about workloads that may exhaust the available disk, it implies a bounded resource. The Life-cycle section just above also already tells customers that workers are destroyed on a schedule, which should help head off the "are these ever cleaned up?" worry. 💡 If you'd still like a soft reassurance in this section, we could add something light like "Octopus monitors and replaces dynamic workers as needed", without the specifics. Happy to add that if you think it reads better. What do you reckon?

Contributor

I'm happy with whatever you and Krish land on 🙏
If your workloads need more disk headroom than dynamic workers provide, consider [Kubernetes workers](/docs/infrastructure/workers/kubernetes-worker) or [external workers](/docs/infrastructure/workers#external-workers), where you control the disk size and lifecycle.
Contributor

Just double checking that the links work? They didn't for me, but it might be because of the PR?

Contributor (Author)

🤔 I tested the links on this page: https://stoctodocspr3144.z22.web.core.windows.net/docs/octopus-cloud/dynamic-worker and they are working fine for me. Where did you see that they are not working?

Contributor

Perfect, thanks! They didn't work on the PR itself; I guessed it must be some deployment magic that doesn't work on the PR 🌟 thanks for checking 🙏
### Installed software

Dynamic workers come with a small number of [baseline tools](/docs/infrastructure/workers/dynamic-worker-pools#available-dynamic-worker-images) installed. The version of baseline tools may be updated between worker leases.
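Because tool versions can change between leases, a script step can assert the versions it depends on at startup rather than assume them. This is a hedged sketch, not documented Octopus guidance; `python3` and the minimum major version are illustrative stand-ins for whatever baseline tool your step actually relies on.

```shell
#!/usr/bin/env bash
# Hypothetical version guard for a baseline tool whose version may
# differ from one worker lease to the next.
set -euo pipefail

tool=python3        # illustrative: substitute the tool your step needs
min_major=3         # illustrative minimum major version

if ! command -v "$tool" >/dev/null 2>&1; then
  echo "Required tool '$tool' not found on this worker" >&2
  exit 1
fi

major=$("$tool" -c 'import sys; print(sys.version_info.major)')
if [ "$major" -lt "$min_major" ]; then
  echo "$tool major version $major is below required $min_major" >&2
  exit 1
fi

echo "$tool version check passed"
```

A check like this turns a silent behaviour change between leases into an explicit, diagnosable step failure.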
Should we also add Disk space here?

Good question. Does disk space differ by Task Cap tier? Or are they all 20 GB?

I see that Task Cap 5 has 64 GB and the rest 128 GB (see code). I'm also unsure how we got to 20 GB; we probably took the 64 GB and subtracted the size of the OS and tools from it?