I have a deployment of shinyproxy on kubernetes with many apps working great, except the RStudio app.
One user can launch the RStudio image and it works great. But as soon as a second user logs in, a new container is launched for them, yet they get an error and the first user is kicked out of their session. The public RStudio image does not generate any logs, so it is hard to debug, and the ShinyProxy pod does not report any issues with the apps.
What I think is happening is that ShinyProxy does not keep track of which pod each user belongs to. Since both users go to the same URL, mydomain.com/apps/app/rstudio_451?sp_hide_navbar=true, it treats the most recent window as the logged-in one and both users as the same user, and RStudio only allows one open window per user.
What's interesting is that this error occurs with openanalytics/shinyproxy-operator:2.3.1 and openanalytics/shinyproxy:3.2.3, but the RStudio app works fine with the older versions, 2.1.0 and 3.1.1, respectively.
I have replicated this error on two different clusters on two different cloud providers, so I suspect something is wrong with the latest images, but I can't figure out what.
I'm including a minimal set of Kubernetes manifests below in the hope that they help pinpoint the problem.
Any insights into what could be causing this?
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  labels:
    project: myproject
  name: shinyproxy01
  namespace: myns
spec:
  fqdn: my.domain.com
  image: openanalytics/shinyproxy:3.1.1
  imagePullPolicy: IfNotPresent
  kubernetesIngressPatches: |
    - op: add
      path: /metadata/annotations
      value:
        nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
        nginx.ingress.kubernetes.io/proxy-busy-buffers-size: 128k
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/proxy-body-size: "400m"
        nginx.ingress.kubernetes.io/use-forwarded-headers: "true"
        nginx.ingress.kubernetes.io/use-proxy-protocol: "true"
        # nginx.ingress.kubernetes.io/add-trailing-slash: "true"
    - op: add
      path: /spec/ingressClassName
      value: nginx
    - op: replace # This assumes the ShinyProxy operator generates a path at this JSON path
      path: /spec/rules/0/http/paths/0/path
      value: /apps
    - op: add # This assumes pathType is not set by default or you want to ensure it's Prefix
      path: /spec/rules/0/http/paths/0/pathType
      value: Prefix
    - op: add
      path: /spec/tls
      value:
        - hosts: []
          secretName: main-tls
    - op: copy
      from: /spec/rules/0/host
      path: /spec/tls/0/hosts/-
  kubernetesPodTemplateSpecPatches: |
    # Add or update a label in the pod template metadata
    - op: add
      path: /metadata/labels/rebooted
      value: "2025-08-12T155900Z" # Updated timestamp
    # app list
    - op: add
      path: /spec/volumes/-
      value:
        name: sp-app-list-volume
        configMap:
          name: sp-app-list
    - op: add
      path: /spec/containers/0/volumeMounts/-
      value:
        name: sp-app-list-volume
        mountPath: /opt/shinyproxy/app-list.yaml
        subPath: app-list.yaml
        readOnly: true
    # ensures shinyproxy can talk to redis
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: REDIS_HOST
        valueFrom:
          configMapKeyRef:
            name: shinyproxy-redis-info
            key: redis_hostname
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: REDIS_PORT
        valueFrom:
          configMapKeyRef:
            name: shinyproxy-redis-info
            key: redis_port
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: REDIS_PASSWORD
        valueFrom:
          secretKeyRef:
            name: shinyproxy-redis-creds
            key: REDIS_PASSWORD
    # environment variables for shinyproxy oidc config
    # first we have to create the envFrom array
    - op: add
      path: /spec/containers/0/envFrom
      value: []
    # then add to it
    - op: add
      path: /spec/containers/0/envFrom/-
      value:
        secretRef:
          name: oidc-creds
    # this creates the env var for the domain name
    - op: add
      path: /spec/containers/0/envFrom/-
      value:
        configMapRef:
          name: root-domain
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: MANAGEMENT_SERVER_PORT
        value: "9090"
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: MANAGEMENT_SERVER_ADDRESS
        value: "0.0.0.0"
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: MANAGEMENT_ENDPOINT_HEALTH_PROBES_ENABLED
        value: "true"
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE
        value: "health,info"
    # keep probes on 9090 (operator default)
    - op: replace
      path: /spec/containers/0/livenessProbe/httpGet/port
      value: 9090
    - op: replace
      path: /spec/containers/0/readinessProbe/httpGet/port
      value: 9090
    - op: replace
      path: /spec/containers/0/startupProbe/httpGet/port
      value: 9090
    # increase startup probe budget to handle slow boot
    - op: replace
      path: /spec/containers/0/startupProbe/initialDelaySeconds
      value: 90
    - op: replace
      path: /spec/containers/0/startupProbe/failureThreshold
      value: 30
    - op: replace
      path: /spec/containers/0/startupProbe/timeoutSeconds
      value: 5
    # sets up how many resources the pods can use
    - op: add
      path: /spec/containers/0/resources
      value:
        requests:
          cpu: 1000m
          memory: 1Gi
        limits:
          cpu: 4000m
          memory: 4Gi
    - op: add
      path: /spec/serviceAccountName
      value: shinyproxy-sa
  logging:
    level:
      eu:
        openanalytics:
          containerproxy:
            auth: DEBUG
  proxy:
    admin-groups:
      - admins
    authentication: openid
    container-wait-time: 600000
    containerBackend: kubernetes
    default-max-instances: -1
    default-webSocket-reconnection-mode: auto
    heartbeat-timeout: 360000
    hide-navbar: false
    kubernetes:
      image-pull-policy: IfNotPresent
      internal-networking: true
      namespace: myns
      pod-wait-time: 600000
    landingPage: /
    my-apps-mode: Inline
    openid:
      auth-url: https://oidc.example.com/${OIDC_DOMAIN_ID}/oauth2/v2.0/authorize
      client-id: ${OIDC_CLIENT_ID}
      client-secret: ${OIDC_CLIENT_SECRET}
      jwks-url: https://oidc.example.com/${OIDC_DOMAIN_ID}/discovery/v2.0/keys
      roles-claim: groups
      scopes:
        - openid
        - profile
        - email
        - offline_access
      token-url: https://oidc.mydomain.com/${OIDC_DOMAIN_ID}/oauth2/v2.0/token
      username-attribute: preferred_username
    sameSiteCookie: Lax
    stop-proxies-on-shutdown: false
    store-mode: Redis
    title: My Apps
  replicas: 1
  server:
    forward-headers-strategy: native
    frameOptions: sameorigin
    port: 8080
    secureCookies: true
    servlet:
      context-path: /apps
      session:
        timeout: 3600
  spring:
    config:
      import:
        - app-list.yaml
    data:
      redis:
        password: ${REDIS_PASSWORD}
        url: rediss://:${REDIS_PASSWORD}@${REDIS_HOST}:${REDIS_PORT}
    session:
      redis:
        configure-action: none
      store-type: redis
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app-list.yaml: |-
    proxy:
      specs:
        - id: rstudio_451
          displayName: RStudio 4.5.1
          description: RStudio IDE with R 4.5.1
          resource-name: sp-app-rstudio-451-#{proxy.id}-0
          # this is a modified version of the rocker rstudio image that handles www-root-path
          container-image: lander.azurecr.io/sp-rstudio:4.5.1
          port: 8787
          access-groups: [admins, devs]
          seats-per-container: 1
          allow-container-re-use: false
          scale-down-delay: 60
          container-cpu-request: 500m
          container-memory-request: 2Gi
          container-cpu-limit: 4000m
          container-memory-limit: 8Gi
          faviconPath: "images/rstudio/logo-rstudio.png"
          hide-navbar-on-main-page-link: true
          container-env:
            USER: "#{proxy.userId.substring(0, proxy.userId.indexOf('@')).replace('.', '_').replace('-', '_')}"
            DEFAULT_USER: "#{proxy.userId.substring(0, proxy.userId.indexOf('@')).replace('.', '_').replace('-', '_')}"
            BEST_USER: "#{proxy.userId.substring(0, proxy.userId.indexOf('@')).replace('.', '_').replace('-', '_')}"
            PASSWORD: supersecret
            USERID: 1001
            GROUPID: 1001
            DISABLE_AUTH: true
            WWW_ROOT_PATH: "#{proxy.getRuntimeValue('SHINYPROXY_PUBLIC_PATH')}"
            REPOS: https://packagemanager.posit.co/cran/__linux__/manylinux_2_28/latest
          kubernetes-pod-patches: |
            # give the pod a good hostname
            - op: add
              path: /spec/hostname
              value: "rstudio"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: shinyproxy-operator
    project: myproject
  name: shinyproxy-operator
  namespace: myns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shinyproxy-operator
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: shinyproxy-operator
    spec:
      containers:
        - env:
            - name: SPO_MODE
              value: namespaced
            - name: SPO_PROBE_TIMEOUT
              value: "3"
            - name: SPO_PROBE_INITIAL_DELAY
              value: "5"
            - name: SPO_PROBE_FAILURE_THRESHOLD
              value: "5"
            - name: SPO_LOG_LEVEL
              value: DEBUG
            - name: SPO_PROCESS_MAX_LIFETIME
              value: "30"
          image: openanalytics/shinyproxy-operator:2.1.0
          imagePullPolicy: IfNotPresent
          name: shinyproxy-operator
          resources:
            limits:
              cpu: 2000m
              memory: 2Gi
            requests:
              cpu: 125m
              memory: 128Mi
      serviceAccountName: shinyproxy-operator-sa
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    project: myproject
  name: shinyproxy-operator-sa
  namespace: myns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    project: myproject
  name: shinyproxy-sa
  namespace: myns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    project: myproject
  name: shinyproxy-operator-role
  namespace: myns
rules:
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - openanalytics.eu
    resources:
      - shinyproxies
      - shinyproxies/status
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - customresource
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - apps
    resources:
      - replicasets
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - extensions
    resources:
      - replicasets
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    project: myproject
  name: shinyproxy-sa-role
  namespace: myns
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    project: myproject
  name: shinyproxy-operator-rolebinding
  namespace: myns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: shinyproxy-operator-role
subjects:
  - kind: ServiceAccount
    name: shinyproxy-operator-sa
    namespace: myns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    project: myproject
  name: shinyproxy-sa-rolebinding
  namespace: myns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: shinyproxy-sa-role
subjects:
  - kind: ServiceAccount
    name: shinyproxy-sa
    namespace: myns
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  labels:
    project: myproject
  name: main-tls
  namespace: myns
spec:
  commonName: mydomain.com
  dnsNames:
    - mydomain.com
    - ldapi
    - sp-shinyproxy01-svc
    - sp-shinyproxy01-svc.svc
    - sp-shinyproxy01-svc.svc.cluster.local
  duration: 2160h
  issuerRef:
    kind: Issuer
    name: letsencrypt-nginx
  privateKey:
    rotationPolicy: Always
  renewBefore: 360h
  secretName: main-tls
  subject:
    organizations:
      - Lander Analytics
  usages:
    - digital signature
    - key encipherment
    - server auth
    - client auth