Migration fixes #79
Conversation
This reverts commit 92e1ac0.
To be clear: remaining work on migrations is just testing restricted data?
We should try to be better about structuring PRs. Migrations are already hard to review because the code is not heavily trafficked, and when semantic changes are buried under refactoring and tooling changes, it becomes even harder to separate things and identify what needs attention. Ideally this should have been a series of 3+ PRs, though I appreciate that may have been hard to do as you were fixing issues as they came up.
Yes, which is actually what I'm doing right now.
I agree, a PR with 5000+ changes is not easily reviewable. I can try to break this down into smaller PRs if you are okay with it!
Not sure it would be worth the effort; I meant the comment more as something to keep in mind in the future: structuring work with PRs in mind and filing them more frequently.
Closing because somehow GitHub is refusing to merge even though the branch is already rebased on trunk 😕
This PR contains a lot of stuff (probably too much!):
- I reworked a bit the code organization in the migration directory:
  - zerolog (see the logging sketch at the end)
- Added labels for kdvh and kvalobs data, so it's easier to delete specific timeseries in case we need to migrate again.
- Added Frost quality codes (which are not exhaustive, but what can I do?)
- Added handling of restricted data for migrations
- Renamed SQL files so that we can automate database setup without having to update the file names every time (see the sketch at the end)
- Significantly improved the just file, so it's a bit less of a pain to use
- Figured out we have to set GOMEMLIMIT (or should I say GOMEM*E*LIMIT, I love golang) to avoid OOM when importing data (see the sketch at the end)

I don't expect you to check everything, but if you quickly go through the changes and spot something weird, please let me know!
Edit: I need to add an Ansible script to disable/enable the replication before/after the migration.
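Since the logging switch is buried in the larger refactor, here is a minimal sketch of what zerolog usage looks like; the field names and values (`db`, `kdvh`, `dump.csv`) are illustrative, not taken from this PR's code.

```go
package main

import (
	"os"

	"github.com/rs/zerolog"
)

func main() {
	// Human-friendly console output; drop the ConsoleWriter to log plain JSON.
	logger := zerolog.New(zerolog.ConsoleWriter{Out: os.Stderr}).
		With().Timestamp().Logger()

	// Structured fields instead of formatted strings.
	logger.Info().Str("db", "kdvh").Msg("starting import")
	logger.Error().Err(os.ErrNotExist).Str("file", "dump.csv").Msg("could not open dump")
}
```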
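On the renamed SQL files: assuming the new names carry a sortable prefix (e.g. `001_schema.sql`, `002_roles.sql` — the naming here is hypothetical), setup can be automated by globbing and applying them in lexical order. The `applySQLFiles` helper below is a sketch of that idea, not the PR's actual code, and it assumes the database driver accepts several statements per `Exec` call.

```go
package migrate // hypothetical package name

import (
	"database/sql"
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// applySQLFiles executes every *.sql file in dir in lexical (filename) order,
// so a numeric prefix such as 001_, 002_ controls execution order.
func applySQLFiles(db *sql.DB, dir string) error {
	files, err := filepath.Glob(filepath.Join(dir, "*.sql"))
	if err != nil {
		return err
	}
	sort.Strings(files) // Glob already returns lexical order, but be explicit

	for _, f := range files {
		stmts, err := os.ReadFile(f)
		if err != nil {
			return fmt.Errorf("reading %s: %w", f, err)
		}
		// Assumes the driver is fine with multiple statements in one Exec call.
		if _, err := db.Exec(string(stmts)); err != nil {
			return fmt.Errorf("applying %s: %w", f, err)
		}
	}
	return nil
}
```

The point of the rename is just that new files slot into the right position without the setup script having to list them explicitly.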
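For reference, GOMEMLIMIT is the Go runtime's soft memory limit (Go 1.19+). It can be set through the environment (e.g. `GOMEMLIMIT=4GiB` before running the import; the 4GiB value is purely illustrative) or from code. A minimal sketch of the in-code form:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Soft memory limit for the whole Go runtime (Go 1.19+): the GC works
	// harder as the live heap approaches it, which helps avoid getting
	// OOM-killed during large imports. 4 GiB is an illustrative value.
	debug.SetMemoryLimit(4 << 30)

	// A negative argument queries the current limit without changing it.
	fmt.Printf("memory limit: %d bytes\n", debug.SetMemoryLimit(-1))
}
```

In practice the environment-variable form is probably easier to wire into the just recipes.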