MULTIARCH-5587: Enabling power arch builds #22
Hey @hash-d, I wanted to confirm if there’s anything else required from my end to move this PR forward.
hash-d
left a comment
@KaushikOP, it was missed when the GitHub workflows were added, but the images were originally built by Plano with Podman. You can see the configuration for them in the .plano.py files under the image subdirectories.
Z did add their arch there, but the workflow is using its own Docker command instead of calling Plano. Ideally, the workflow should call Plano, so that local and workflow builds use the same build instructions.
In any case, P needs to be added to .plano.py, at the very least.
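For context, the build targets in those .plano.py files assemble a multi-arch podman command. A minimal sketch of that assembly in plain Python (the `build_command` helper and its defaults are illustrative; the real targets invoke plano's `run()` directly):

```python
# Sketch of how the multi-arch podman build command is assembled.
# The platform list and image tag mirror this PR; build_command itself
# is an illustrative helper, not part of plano.
PLATFORMS = ["linux/amd64", "linux/arm64", "linux/s390x", "linux/ppc64le"]

def build_command(image_tag, containerfile="Containerfile",
                  context=".", no_cache=False):
    no_cache_arg = "--no-cache " if no_cache else ""
    return (f"podman build {no_cache_arg}--format docker "
            f"--platform {','.join(PLATFORMS)} "
            f"--file {containerfile} --manifest {image_tag} {context}")

print(build_command("quay.io/skupper/hello-world-frontend"))
```

The `--manifest` flag makes podman collect the per-arch images into one manifest list, which is what allows a single tag to serve all four architectures.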
Converting to draft while validations are ongoing.
Force-pushed from 328079c to d4e923b
Force-pushed from d4e923b to 4937b39
```diff
- --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le \
-   -t quay.io/skupper/hello-world-frontend \
-   --push -f Containerfile .
+ ./plano --file frontend/.plano.py update-gesso,build,push
```
Using the plano available in the parent directory, and providing the appropriate .plano.py file to use for the targets.
Same as mentioned above for the backend (running from the repo root dir):

```
$ podman run -p 8081:8080 -ti quay.io/skupper/hello-world-frontend
python: can't open file '/home/fritz/python/main.py': [Errno 2] No such file or directory
```

I tested running from the frontend dir with the following change, and the image ran fine:

```diff
- run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le --file frontend/Containerfile --manifest {image_tag} .")
+ run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le --file Containerfile --manifest {image_tag} .")
```

I had to set PYTHONPATH to <repo_root>/python before running it, though.
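The PYTHONPATH requirement can be demonstrated in isolation: a module that lives only under `<repo_root>/python` is importable from elsewhere only if that directory is on `sys.path`. The module name `thingid` mirrors the backend code; the temp-directory layout is illustrative:

```python
# Sketch: PYTHONPATH controls whether a bare import of a module that lives
# in a specific directory succeeds from an unrelated working directory.
import os
import subprocess
import sys
import tempfile

pydir = os.path.join(tempfile.mkdtemp(), "python")
os.makedirs(pydir)
with open(os.path.join(pydir, "thingid.py"), "w") as f:
    f.write("NAME = 'thing'\n")

code = "import thingid; print(thingid.NAME)"
env_without = {k: v for k, v in os.environ.items() if k != "PYTHONPATH"}

# From an unrelated working directory the import fails...
bad = subprocess.run([sys.executable, "-c", code], cwd="/",
                     env=env_without, capture_output=True, text=True)
print(bad.returncode != 0)  # True

# ...and succeeds once the python dir is on PYTHONPATH.
env_with = dict(env_without, PYTHONPATH=pydir)
good = subprocess.run([sys.executable, "-c", code], cwd="/",
                      env=env_with, capture_output=True, text=True)
print(good.stdout.strip())  # thing
```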
Earlier changes in my custom image had a similar issue; I changed the context for the image builds and now I can see the image running.
```
$ podman run -p 8080:8080 -ti --entrypoint sh quay.io/ktalathi/skupper/hello-world-frontend:latest
~ $ ls
python static
~ $ ls -l python/
total 16
-rw-r--r-- 1 fritz root 8682 Nov 18 12:05 animalid.py
-rw-r--r-- 1 fritz root 3536 Nov 18 12:05 main.py
~ $ ls -l static/
total 12
drwxr-xr-x 2 fritz root 161 Nov 18 12:05 gesso
-rw-r--r-- 1 fritz root 268 Nov 18 12:05 index.html
-rw-r--r-- 1 fritz root 124 Nov 18 12:05 main.css
-rw-r--r-- 1 fritz root 2434 Nov 18 12:05 main.js
~ $
~ $ exit
$
$ podman run quay.io/ktalathi/skupper/hello-world-frontend:latest
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
^CINFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [1]
$
```
```diff
- --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le \
-   -t quay.io/skupper/hello-world-backend \
-   --push -f Containerfile .
+ ./plano --file backend/.plano.py build,push
```
Using the plano available in the parent directory, and providing the appropriate .plano.py file to use for the targets.
makes sense
@KaushikOP, this looks ok, but have you successfully run an image generated with that command? I have the impression that, being run from the repo root directory, it's pulling the wrong python directory. When I ran using an image built with these instructions, this is what I got:

```
$ podman run quay.io/skupper/hello-world-backend -p 8080:8080
python: can't open file '/home/fritz/python/main.py': [Errno 2] No such file or directory
```

Inspecting the image, the contents of the python directory correspond to the one at the repo root (plano, skewer), and not the one from backend/python (main.py, thingid.py):

```
$ podman run -p 8080:8080 -ti --entrypoint sh quay.io/skupper/hello-world-backend
~ $ ls
python
~ $ ls -l python
total 8
lrwxrwxrwx 1 fritz root 31 Aug 22 2024 plano -> ../external/skewer/python/plano
lrwxrwxrwx 1 fritz root 32 Aug 22 2024 skewer -> ../external/skewer/python/skewer
```
Or perhaps I'm doing something wrong? If not, the options would be to:

- Keep the cd command and run plano from the subdir, or
- Update backend/.plano.py to use backend instead of . as the build context directory
I'd rather keep the cd, so as not to change the current operation of plano.
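The context mix-up can be illustrated with a toy resolver: a Containerfile's COPY sources resolve against the build context directory, not against the Containerfile's own location. The directory listings mirror the inspection output above; the resolution logic is an illustration, not podman's implementation:

```python
# Toy illustration: with context "." the backend image's COPY python picks up
# the repo-root python/ (plano, skewer); with context "backend" it picks up
# backend/python (main.py, thingid.py).
from pathlib import PurePosixPath

tree = {
    "python": ["plano", "skewer"],                # shared tooling at repo root
    "backend/python": ["main.py", "thingid.py"],  # the actual app code
}

def copy_source(context, source="python"):
    # pathlib drops a leading "." component, so "." / "python" -> "python"
    return str(PurePosixPath(context) / source)

print(tree[copy_source(".")])        # repo-root tooling -> broken image
print(tree[copy_source("backend")])  # app code -> works
```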
Earlier changes in my custom image had a similar issue; I changed the context for the image builds and now I can see the image running.
```
$ podman run -p 8080:8080 -ti --entrypoint sh quay.io/ktalathi/skupper/hello-world-frontend:latest
~ $ ls
python static
~ $ ls -l python/
total 16
-rw-r--r-- 1 fritz root 8682 Nov 18 12:05 animalid.py
-rw-r--r-- 1 fritz root 3536 Nov 18 12:05 main.py
~ $ ls -l static/
total 12
drwxr-xr-x 2 fritz root 161 Nov 18 12:05 gesso
-rw-r--r-- 1 fritz root 268 Nov 18 12:05 index.html
-rw-r--r-- 1 fritz root 124 Nov 18 12:05 main.css
-rw-r--r-- 1 fritz root 2434 Nov 18 12:05 main.js
~ $
~ $ exit
$
$ podman run quay.io/ktalathi/skupper/hello-world-backend:latest
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
^CINFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [1]
$
```
> Keep the cd command, run plano from the subdir, or

This causes issues, as plano does not get initialized properly in a subdir; it only works properly in the root dir.
That is why I'm going with the other option and setting the context in the specific .plano.py files in the subdirectories.
frontend/.plano.py (Outdated)
```diff
  run(f"podman manifest rm {image_tag}", check=False)
  run(f"podman rmi {image_tag}", check=False)
- run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x --manifest {image_tag} .")
+ run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le --file frontend/Containerfile --manifest {image_tag} .")
```
Added the Power arch, and the context for the Containerfile to use for the build.
backend/.plano.py (Outdated)
```diff
  run(f"podman manifest rm {image_tag}", check=False)
  run(f"podman rmi {image_tag}", check=False)
- run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x --manifest {image_tag} .")
+ run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le --file backend/Containerfile --manifest {image_tag} .")
```
Added the Power arch, and the context for the Containerfile to use for the build.
@hash-d is there any need to test the artifacts that come out of this before merging it?
hash-d
left a comment
@KaushikOP, perhaps I'm doing something wrong, but the images I generated mirroring the modifications in the PR were invalid. Can you check my other comments and verify that?
Also, I see that the plano test run failed on your fork. It seems to me that the reason is that the hello-world example is still set for v1 on main, while the main workflow (.github/workflows/main.yaml) that runs the test is downloading the latest skupper version's CLI, which is v2:
```yaml
name: main
on:
  (...)
jobs:
  test:
    strategy:
      (...)
      matrix:
        skupper-version: [latest, main]
    steps:
      - uses: actions/checkout@v4
      (...)
      - run: curl https://skupper.io/install.sh | bash -s -- --version ${{matrix.skupper-version}}
```
I think the only way around this at this point is to change the versions in the matrix, removing main and latest and setting specific versions instead (say, 1.9.4 and 1.8.7, the latest 1.x releases), until the example itself is updated to v2.
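For concreteness, a sketch of what that matrix change could look like in .github/workflows/main.yaml (the version pins are the examples suggested above, not a settled choice):

```yaml
jobs:
  test:
    strategy:
      matrix:
        # Pin to released v1 CLIs until the example is updated for v2
        skupper-version: [1.9.4, 1.8.7]
```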
* Converting image builds to use the plano tool
* Using Python 3.14, as the Alpine version has updated Python

Signed-off-by: Kaushik Talathi <kaushik.talathi1@ibm.com>
Force-pushed from e8be4b4 to f801156
```diff
  run(f"podman manifest rm {image_tag}", check=False)
  run(f"podman rmi {image_tag}", check=False)
- run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x --manifest {image_tag} .")
+ run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le --file backend/Containerfile --manifest {image_tag} ./backend")
```
> Keep the cd command, run plano from the subdir, or
> Update backend/.plano.py to use backend instead of . as the build context directory

Updated to use the appropriate context for the image build. The plano tool is directly available only at the root dir, and the dependent Python packages are in the root python dir.
```diff
  run(f"podman manifest rm {image_tag}", check=False)
  run(f"podman rmi {image_tag}", check=False)
- run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x --manifest {image_tag} .")
+ run(f"podman build {no_cache_arg} --format docker --platform linux/amd64,linux/arm64,linux/s390x,linux/ppc64le --file frontend/Containerfile --manifest {image_tag} ./frontend")
```
> Keep the cd command, run plano from the subdir, or
> Update backend/.plano.py to use backend instead of . as the build context directory

Updated to use the appropriate context for the image build. The plano tool is directly available only at the root dir, and the dependent Python packages are in the root python dir.
Updated the build context; I'm able to view the proper contents and run the images fine. Here are the run details on my fork. Images created -
hash-d
left a comment
@KaushikOP, I'm approving this PR, but I will not merge it yet, because it will not pass the tests.
As mentioned previously, the example is still v1-based, but the CI is installing the latest skupper CLI, which currently is v2, and these two versions are incompatible, causing the test to fail.
I have created a pair of PRs [1] to allow install.sh to install the latest v1 version, but I do not know when those will be reviewed and whether they will be merged.
Meanwhile, if you want to have this merged ASAP, here are some options:

- Remove latest from the matrix, and put a hardcoded version (such as 1.9.4)
- Remove latest from the matrix, and put v1-dev-release instead
- Wait for one of my install.sh PRs to be merged, and update the matrix accordingly
Hardcoding is never good, but it should probably be fine for v1 at this point, and it would be fixed whenever the example/test is updated for v2.
Using v1-dev-release might be preferable, but we'd be going from a released to an unreleased CLI in the test, which may cause CI failures.
If you're uncomfortable making these changes, let me know and I'll do them, and you can rebase later.
[1] skupperproject/skupper-website#109 and skupperproject/skupper-website#110
This PR will need to be merged against the v1 branch instead, as I plan to move main to reflect the v2 experience.
@hash-d, updated with
I'm verifying Skupper on Power for MULTIARCH-5587
CC: @prb112, @mjturek@us.ibm.com & @hash-d