fix: use single image for operator and vgmanager #45
Conversation
@@ -189,3 +207,31 @@ func newVGManagerDaemonset(lvmCluster lvmv1alpha1.LVMCluster) appsv1.DaemonSet {
	setDaemonsetNodeSelector(nodeSelector, &ds)
	return ds
}

func getRunningPodImage(r *LVMClusterReconciler, ctx context.Context) (string, error) {
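For context, a minimal sketch of what a helper like this might look like. It assumes the pod's name and namespace are injected through the downward API as POD_NAME and POD_NAMESPACE, that the operator container is named "manager", and that the reconciler carries a controller-runtime client in a Client field; none of these details are confirmed by this PR.

```go
package controllers

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

// getRunningPodImage returns the image of the operator's own container so the
// same image can be reused for the vgmanager daemonset.
func getRunningPodImage(r *LVMClusterReconciler, ctx context.Context) (string, error) {
	// Assumed to be injected through the downward API in the operator Deployment.
	podName := os.Getenv("POD_NAME")
	podNamespace := os.Getenv("POD_NAMESPACE")
	if podName == "" || podNamespace == "" {
		return "", fmt.Errorf("POD_NAME or POD_NAMESPACE is not set")
	}

	pod := &corev1.Pod{}
	if err := r.Client.Get(ctx, types.NamespacedName{Name: podName, Namespace: podNamespace}, pod); err != nil {
		return "", fmt.Errorf("failed to get running pod %s/%s: %w", podNamespace, podName, err)
	}

	// "manager" is the container name typically generated by operator-sdk scaffolding.
	for _, c := range pod.Spec.Containers {
		if c.Name == "manager" {
			return c.Image, nil
		}
	}
	return "", fmt.Errorf("container %q not found in pod %s/%s", "manager", podNamespace, podName)
}
```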
How about adding the image as an env var in the CSV?
- That's also an option; we would need to update both the image and the env variable in that case.
config/manager/manager.env
Outdated
@@ -4,4 +4,4 @@ CSI_REGISTRAR_IMAGE=k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0
CSI_PROVISIONER_IMAGE=k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
CSI_LIVENESSPROBE_IMAGE=k8s.gcr.io/sig-storage/livenessprobe:v2.5.0
CSI_RESIZER_IMAGE=k8s.gcr.io/sig-storage/csi-resizer:v1.3.0
VGMANAGER_IMAGE=quay.io/ocs-dev/vgmanager:latest
VGMANAGER_IMAGE=quay.io/ocs-dev/lvm-operator:latest
Do we need the VGMANAGER_IMAGE env if we are fetching the image from the container using getRunningPodImage?
Yes, in case you want to custom build only vgmanager and use that image.
A custom build of vgmanager would mean building the operator image again (since vgmanager is part of the operator build).
So if the primary use case is to be able to test custom vgmanager builds, then having separate images for both operator and vgmanager would be an ideal option, IMO.
- Yes, you'd be commenting out the vgmanager build and removing the env in the manager YAML. I can remove this, but I'd like to keep it at least for a week or so, and if we don't see it being used, I'll remove it.
- Sounds good, or do you want me to remove it right now?
controllers/vgmanager_daemonset.go
Outdated
image, err := getRunningPodImage(r, ctx)
if err != nil {
	r.Log.Error(err, "failed to get image from running operator")
	image = VGManagerImage
IMO, we should either use the env or use getRunningPodImage. We don't need both.
And this would also mean we would never be able to test a custom vgmanager image via the env, since it will always use the lvm-operator image!
nit
// aggregate nodeSelector and tolerations from all deviceClasses
nodeSelector, tolerations := extractNodeSelectorAndTolerations(lvmCluster)
volumes := []corev1.Volume{LVMDConfVol, DevHostDirVol, UDevHostDirVol, SysHostDirVol}
volumeMounts := []corev1.VolumeMount{LVMDConfVolMount, DevHostDirVolMount, UDevHostDirVolMount, SysHostDirVolMount}
privileged := true
var zero int64 = 0

// try to get vgmanager image from env and on absence get from running pod
// TODO: investigate why storing this env in a variable is failing tests
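For reference, a minimal sketch of that env-first lookup with the running-pod image as a fallback. The helper name resolveVGManagerImage and the direct os.Getenv call are illustrative assumptions based on this discussion, not the exact code in the PR; it also assumes the surrounding file already imports os and context.

```go
// resolveVGManagerImage prefers an explicit VGMANAGER_IMAGE override so a
// custom vgmanager build can still be tested; otherwise it reuses the image
// of the running operator pod, keeping both containers on a single image.
func resolveVGManagerImage(r *LVMClusterReconciler, ctx context.Context) (string, error) {
	if image := os.Getenv("VGMANAGER_IMAGE"); image != "" {
		return image, nil
	}
	return getRunningPodImage(r, ctx)
}
```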
Is the test failure still a problem?
With the current implementation it isn't; it only happens when we store the env value in a variable and use that here.
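A plausible explanation for that behaviour, offered here as an assumption rather than something confirmed in the thread: a package-level variable captures the environment once at package initialization, before a test gets a chance to set it. The names below are purely illustrative.

```go
package controllers

import "os"

// Evaluated once at package init: tests that call os.Setenv("VGMANAGER_IMAGE", ...)
// afterwards still see the stale (usually empty) value captured here.
var vgManagerImage = os.Getenv("VGMANAGER_IMAGE")

// Reading the environment at call time instead picks up values set by tests.
func currentVGManagerImage() string {
	return os.Getenv("VGMANAGER_IMAGE")
}
```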
- add make target to build and push combined operator and vgmanager image
- fallback to get image name from running operator when vgmanager env var isn't set

Signed-off-by: Leela Venkaiah G <[email protected]>
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: leelavg, nbalacha
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Signed-off-by: Leela Venkaiah G [email protected]