
[BugFix] PPOs with composite distribution #2791

Merged: 1 commit merged into pytorch:main on Feb 17, 2025

Conversation

louisfaury (Contributor)

Description

I believe there is a bug in the PPO losses' implementation when both prev_log_prob and log_prob are TensorDicts.

Motivation and Context

In the setting where both prev_log_prob and log_prob are TensorDicts, we were clamping prev_log_prob - log_prob directly, instead of its sum over features.
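To make the order-of-operations issue concrete, here is a minimal sketch (illustrative only, not TorchRL's actual code; the keys, values and eps below are made up) of how clamping before summing differs from summing before clamping when the log-weight is a TensorDict with one entry per action key:

import math
import torch
from tensordict import TensorDict

eps = 0.2  # hypothetical clip value
lo, hi = math.log1p(-eps), math.log1p(eps)  # PPO clip bounds in log-space

# One log-weight entry per action key of the composite distribution.
log_weight = TensorDict(
    {"agent0": torch.tensor([0.5]), "agent1": torch.tensor([-0.4])}, batch_size=[1]
)

# Buggy order: clamp each key's log-weight, then sum.
clamp_then_sum = sum(v.clamp(lo, hi) for v in log_weight.values(True, True))
# -> log1p(0.2) + log1p(-0.2) ≈ -0.041: both entries saturate individually.

# Correct order: sum over the feature keys first, then clamp the total.
sum_then_clamp = sum(log_weight.values(True, True)).clamp(lo, hi)
# -> 0.1: the total log-weight lies inside the clip range, so nothing is clipped.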

Types of changes

What types of changes does your code introduce?

  • Bug fix (non-breaking change which fixes an issue)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

pytorch-bot commented Feb 17, 2025

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/2791

❌ 1 New Failure, 21 Pending, as of commit eadb9e1 with merge base 27a8ecc.

@facebook-github-bot added the CLA Signed label on Feb 17, 2025
@louisfaury (Contributor, Author) left a comment:

I checked that all PPO tests are still passing; two comments:

  1. We should probably add a test to catch this.
  2. I'm planning to do a docstring pass on the entire PPO stack so things can be much clearer (some operations are a bit obscure at first read).

Comment on lines +578 to +581
if is_tensor_collection(log_weight):
    log_weight = _sum_td_features(log_weight)
log_weight = log_weight.view(adv_shape).unsqueeze(-1)

louisfaury (Contributor, Author) commented:

That is the main change for this method, which is also now consistent with type hints.
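For readers unfamiliar with the helper: _sum_td_features reduces the TensorDict of per-key log-probabilities to a single tensor before the view/unsqueeze above. A sketch of that kind of reduction using tensordict's public API (an assumption for illustration, not necessarily the helper's exact source):

import torch
from tensordict import TensorDict

def sum_td_features(td):
    # Sum each leaf over its feature (non-batch) dims, then sum across all
    # keys; reduce=True returns a plain tensor with the batch shape.
    return td.sum(dim="feature", reduce=True)

td = TensorDict({"a": torch.ones(2, 3), "b": torch.ones(2, 1)}, batch_size=[2])
print(sum_td_features(td))  # tensor([4., 4.]): 3 from "a" plus 1 from "b"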

@@ -987,8 +982,6 @@ def forward(self, tensordict: TensorDictBase) -> TensorDictBase:
# to different, unrelated trajectories, which is not standard. Still, it can give an idea of the weights'
# dispersion.
lw = log_weight.squeeze()
if not isinstance(lw, torch.Tensor):
    lw = _sum_td_features(lw)
ess = (2 * lw.logsumexp(0) - (2 * lw).logsumexp(0)).exp()
batch = log_weight.shape[0]
louisfaury (Contributor, Author) commented:

The main error is two lines below this hunk: clamp was applied to the TensorDict log_weight before it was summed over the feature dimension.
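For context on the ess line in the hunk above: the log-space expression is the standard effective-sample-size identity ESS = (sum_i w_i)^2 / sum_i w_i^2 with w_i = exp(lw_i), which only makes sense once lw is a single tensor. A quick self-contained check with arbitrary values:

import torch

lw = torch.randn(64)  # arbitrary log-weights
w = lw.exp()
ess_log = (2 * lw.logsumexp(0) - (2 * lw).logsumexp(0)).exp()
ess_ref = w.sum().pow(2) / w.pow(2).sum()
assert torch.allclose(ess_log, ess_ref)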

@vmoens (Contributor) left a comment:

LGTM thanks

@vmoens changed the title from "Fix PPOs with composite distribution" to "[BugFix] PPOs with composite distribution" on Feb 17, 2025
@vmoens added the bug (Something isn't working) label on Feb 17, 2025
@vmoens vmoens merged commit edfa25d into pytorch:main Feb 17, 2025
64 of 76 checks passed
vmoens pushed a commit that referenced this pull request Feb 17, 2025
Co-authored-by: Louis Faury <[email protected]>
(cherry picked from commit edfa25d)
Labels: bug (Something isn't working) · CLA Signed (managed by the Facebook bot)
3 participants