Out of memory! #11
Hi! What GPU are you using?
I'm using an H100 with 80GB.
I see. Are you training the model with the default resolution?
Yes, 512x512. Have you tried it with an accelerator? I trained with it and got OOM.
By default, the
I just started the training, but I'm checking your code: it only takes inputs with the shape
In image and video restoration, training with a patch size smaller than the test one is quite a common way to save memory during training. On which dataset are you training? Is it the one of the paper or a custom one?
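The patch-based training mentioned above can be sketched as a random crop applied to each training frame. This is a minimal illustration, not the repository's actual data pipeline; the function name `random_crop` and the numpy-array frame layout are assumptions.

```python
import numpy as np

def random_crop(frame: np.ndarray, patch_size: int, rng: np.random.Generator) -> np.ndarray:
    """Crop a random square patch from an HWC frame.

    Training on e.g. 256x256 patches instead of the full 512x512
    resolution roughly quarters the activation memory per frame.
    """
    h, w = frame.shape[:2]
    top = int(rng.integers(0, h - patch_size + 1))
    left = int(rng.integers(0, w - patch_size + 1))
    return frame[top:top + patch_size, left:left + patch_size]

# Example: crop a 256x256 patch from a 512x512 RGB frame.
frame = np.zeros((512, 512, 3), dtype=np.float32)
patch = random_crop(frame, 256, np.random.default_rng(0))
```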
I'm training with a car dataset.
Can I see a video example somewhere? Is it public?
Yes! It was recorded around the car.
What's the end goal of your work? Because the videos don't seem degraded with artifacts similar to those considered in our paper.
I want to increase image quality by degrading the ground truth images during training.
What kind of degradation? Our model is designed for analog video restoration, I'm not sure it's suitable for your purpose.
They're simple degradations (Gaussian noise, Gaussian blur, blurring parts of the car, downsample-upsample) applied to the video.
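A minimal sketch of such a synthetic degradation pipeline, assuming single-channel float frames in [0, 255]; the function name `degrade`, the noise level, and the blur kernel are illustrative choices, not values from the discussion.

```python
import numpy as np

def degrade(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply downsample-upsample, Gaussian noise, and Gaussian blur to an HxW frame."""
    # Downsample by 2 then upsample back with nearest-neighbour repetition.
    small = frame[::2, ::2]
    up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

    # Additive Gaussian noise (std = 5 intensity levels, an arbitrary choice).
    noisy = up + rng.normal(0.0, 5.0, size=up.shape)

    # Separable Gaussian blur with a small binomial kernel [1, 4, 6, 4, 1] / 16.
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    kernel /= kernel.sum()
    blurred = noisy
    for axis in (0, 1):
        blurred = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, blurred
        )
    return np.clip(blurred, 0.0, 255.0)
```

Region-specific blur (e.g. blurring only part of the car) would apply the same kernel inside a binary mask instead of the whole frame.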
How did you determine the reference frames? Did you use the standard textual prompts or did you change them to fit your needs? By the way, I'm closing the issue since the out of memory problem is now solved.
I just select random frames from the dataset. Do you know if it will support selecting different views?
Why should random frames represent reference frames for the restoration? In the paper, we devised a specific methodology for analog videos. You should try to adjust that for your needs if you want to use our model for your purpose.
I'm just thinking that during training we can select them at random so the model can learn better; after that, in the inference stage, we can choose references based on CLIP score.
I've never tried it, so I'm not sure if it will work (or not).
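The CLIP-score selection suggested here can be sketched as a cosine-similarity argmax over precomputed embeddings. This is a hypothetical sketch: it assumes the frame and candidate embeddings have already been extracted with a CLIP image encoder (e.g. via `open_clip`), and the function name `select_reference` is made up for illustration.

```python
import numpy as np

def select_reference(frame_emb: np.ndarray, candidate_embs: np.ndarray) -> int:
    """Return the index of the candidate reference frame whose (assumed
    CLIP) embedding has the highest cosine similarity to the query frame's
    embedding.

    frame_emb:      shape (d,)
    candidate_embs: shape (n, d)
    """
    f = frame_emb / np.linalg.norm(frame_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    return int(np.argmax(c @ f))
```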
Thanks! Can you provide your training chart?
I don't have the charts anymore, but I have the values of the losses at the end of the training: about 3.2 for the pixel_loss, and about 4.6 for the perceptual_loss. However, since you are using a different training dataset, I don't think these values are useful for you. But from what I remember, the profiles of the losses were similar to your charts.
Thanks a lot!
I used the same config as your training but got OOM! Are you sure it's correct?