Only execute the "command" hook once. #1055
Conversation
If multiple plug-ins provide one, we have to exit the loop early for this hook.
Thanks, I've been meaning to fix this. My thinking is that we should probably either raise an error here or at least print a warning. Interested to know what combination of plugins you used to result in this scenario @philwo.
I agree that usually a warning would be good here. Do you have a suggested wording? My use case is a bit ... hacky, but happy to explain 😄 We use a Python script for various tasks in our pipelines, but it's neither stored in the agent image, nor in the project repo, so currently we just […]. We use the official Docker plug-in for that: […]. We put it first in the plugins list of the step, so that its […]. This works really well, except that the agent now executes the command hook of every plugin that provides one.
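(For context, a step along these lines illustrates the scenario being described. The image, script path and plugin versions below are placeholders rather than the reporter's actual pipeline; the relevant part is simply that two plugins in one step each ship a command hook.)

```yaml
steps:
  - label: ":python: run helper script"
    plugins:
      # Listed first so that its command hook takes precedence.
      - docker#v3.3.0:
          image: "python:3.7"
          command: ["python", "/scripts/helper.py"]   # hypothetical script path
      # A second plugin that also provides a command hook; before this fix the
      # agent would run both hooks, one after the other.
      - some-org/another-plugin#v1.0.0: ~
```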
How about something like […]
Let's get this one in for the time being anyway.
@lox this broke us...HARD. Where is the right place to start a conversation around how to layer the commands of two plugins together? Our use case is layering the […].
Sorry about that! Do you have an example pipeline I could look at?
Rolled elastic-ci-stack up to v4.3.5 and got nailed by this also. Our deployment step is a series of plugins controlling stuff like kubernetes init, helm release, and route53, in order. They're all command hooks, so now only init works 🥇 Easily remedied once we figured out what was going on: we rolled our deploy queue back to v4.3.4 while we move our stuff into pre-command and post-command hooks, which it should have been anyway. I agree with the change despite it breaking me. Wah wah, warnings are nice, but who cares; I found this PR easily enough, and I think you made the right call. I would wager that nearly every command hook in a plugin really wants to be a pre-, post-, or some other type of hook.
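(A rough sketch of that kind of deploy step, with made-up plugin names and versions; after this change only the first plugin's command hook executes, so the later ones have to become pre-command/post-command hooks.)

```yaml
steps:
  - label: ":rocket: deploy"
    plugins:
      - my-org/kubernetes-init#v1.0.0: ~   # command hook: still runs
      - my-org/helm-release#v1.0.0: ~      # command hook: no longer runs after this change
      - my-org/route53-update#v1.0.0: ~    # command hook: no longer runs after this change
```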
Any idea how we could get a warning printed here? We've had some reports from people who were tripped up by this and went on long goose chases to figure out why it wasn't working when they upgraded their agents / Elastic Stack.
Moin! It also broke our environment quite badly. We are running multiple commands as Docker containers. They need to run in sequence and they depend on each other, so I want to keep them in one step. Basically I've got a repo with a structure like this:

    root
    ├── module1
    ├── module2
    └── module3

Imagine module2 depends on module1 and module3. My build contains 3 steps:

- Step 1 - test module1
- Step 2 - test module2
- Step 3 - test module3
Compiling and running the tests are done inside Docker, so I just run the docker plugin multiple times. Compiling the whole project installs it to a local artifact cache directory (one per agent), which mostly contains downloaded artifacts. Compiling it once and then copying the artifacts across would make the build quite slow, because there can be a fair bit of data. Basically I use multiple docker plugin invocations to compose my build; moving everything into separate steps would involve a lot of copying, which increases build time. Imho, if you specify a plugin twice, it should execute twice. The original problem is that the plugin itself isn't wanted, the system is simply being abused to download something. For such cases, I'd recommend a per-agent cache and simply downloading it once per agent.
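(Roughly what such a composed step looks like. The build tool, image and module paths here are assumptions for illustration; the relevant part is the same docker plugin listed more than once, with each entry contributing its own command hook.)

```yaml
steps:
  - label: ":hammer: test module2"
    plugins:
      # Hypothetical invocations: build the dependency first, then test the module.
      - docker#v3.3.0:
          image: "maven:3-jdk-8"
          command: ["mvn", "-pl", "module1", "install"]
      - docker#v3.3.0:
          image: "maven:3-jdk-8"
          command: ["mvn", "-pl", "module2", "verify"]
      # With this change, only the first entry's command hook runs.
```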
@elruwen we've decided to retain the removal of executing multiple command plugin hooks, and also to do the same for checkout, to match the agent hook behaviour. I think for your needs you might need to skip the Docker plugin and use your own build script that just runs the Docker commands you need? The Docker plugin is going to stay designed as a single self-contained execution on an agent. If you want to share the script between pipelines, or not have it checked into source, it might be worth developing your own small private plugin that does what you need? Sorry this changed unexpectedly and now requires more work from you to get working again! It's a bug we didn't catch earlier when we limited the execution of hooks.
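(The suggested workaround, sketched with a hypothetical script name: replace the stacked plugin entries with a plain command that runs the needed docker commands itself.)

```yaml
steps:
  - label: ":hammer: test module2"
    # .buildkite/test-module2.sh is a hypothetical script that performs the
    # `docker run ...` invocations the stacked docker plugin entries used to do.
    command: ".buildkite/test-module2.sh"
```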
I already have a plugin which generates pipelines on the fly. So far it has generated various docker invocations depending on the project configuration. I can change that, because I run the same base container multiple times, but I still believe that being able to compose a build out of multiple plugins is a good thing.
I wanted to drop in another example that this change broke: […]
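(The pipeline from this comment didn't survive extraction. The sketch below is a hypothetical stand-in based on the description that follows: two invocations of the same image-building plugin in one step, plus a push. Plugin names and options are made up.)

```yaml
steps:
  - label: ":docker: build and push"
    plugins:
      - my-org/image-build#v1.0.0:
          target: build      # intermediate builder image
      - my-org/image-build#v1.0.0:
          target: app        # final application image, reusing the builder's layers
      - my-org/image-push#v1.0.0:
          image: app
```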
Previously, this built both the build and app images and everything ran okay. Currently, it builds the build image and then tries to build the app image during the push step. The old behavior was really useful for multistage docker builds to save the network overhead. I understand the docs pointed to different behavior but the old behavior was pretty useful as a plugin author. It feels odd as a plugin author to have my plugin potentially fail because someone else defined a command step. |
If multiple plug-ins provide one, we have to exit the loop early for this hook.

Fixes #1052