
In-memory learning algorithms (partially) with pytorch #567

Merged

maljoras merged 25 commits into IBM:master on Dec 18, 2023
Conversation

maljoras
Collaborator

Related issues

closes #560

Description

The TransferCompounds that implement the Tiki-taka learning rules are encapsulated in RPUCuda for speed. However, for ease of experimentation, a Python version would be useful. Moreover, for a hardware implementation, the digital operations are best separated from the analog operations, so that connecting to any HW call is easier.

Here the TorchTransferTile is introduced, which implements TTv2, c-TTv2, as well as AGAD in torch. That is, the transfer part is explicitly available in PyTorch, while the pulsed update is still done with single tiles (in RPUCuda).
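
For intuition, a minimal sketch of a Tiki-taka style transfer step written directly in torch is shown below. This is purely illustrative: the tensor names and the transfer_column helper are made up for this sketch, and the real TTv2 / c-TTv2 / AGAD rules additionally involve digital hidden matrices, choppers, and learning-rate handling inside TorchTransferTile.

import torch

# Purely illustrative sketch of a Tiki-taka style transfer step, not the
# actual TorchTransferTile implementation. "fast_tile" plays the role of the
# auxiliary (A) tile that receives the pulsed gradient updates, "slow_tile"
# the weight (C) tile that is only updated through occasional transfers.

def transfer_column(fast_tile: torch.Tensor,
                    slow_tile: torch.Tensor,
                    col_idx: int,
                    transfer_lr: float = 0.1) -> int:
    """Transfer one column from the fast tile onto the slow tile (simplified)."""
    # Digital read-out of the column to be transferred.
    column = fast_tile[:, col_idx].clone()
    # Scaled accumulation onto the slow weight tile (digital in this sketch;
    # on hardware this write would itself be a pulsed analog update).
    slow_tile[:, col_idx] += transfer_lr * column
    # Optionally reset the transferred column of the fast tile.
    fast_tile[:, col_idx] = 0.0
    # Cycle through the columns over successive transfer events.
    return (col_idx + 1) % fast_tile.shape[1]

# Tiny usage example with two 4x6 tiles.
fast = torch.randn(4, 6)
slow = torch.zeros(4, 6)
next_col = transfer_column(fast, slow, col_idx=0)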

Details

Usage:

See example 26. The learning rule is first defined as before, using the ChoppedCompound etc. Then the tile_class property of the rpu_config is simply set to TorchTransferTile instead of the default tiles.AnalogTile (which would use the RPUCuda library):

rpu_config = build_config('agad', device=SoftBoundsReferenceDevice())
rpu_config.tile_class = TorchTransferTile

convert_to_analog(model, rpu_config)
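
For convenience, a more self-contained variant of the snippet above is sketched below. The import paths are assumptions based on the aihwkit package layout and may differ; example 26 in the repository is the authoritative reference.

from torch.nn import Linear

from aihwkit.nn.conversion import convert_to_analog
from aihwkit.simulator.configs import build_config              # assumed import path
from aihwkit.simulator.configs.devices import SoftBoundsReferenceDevice
from aihwkit.simulator.tiles.transfer import TorchTransferTile  # assumed import path

# Any torch model works; a single linear layer keeps the sketch small.
model = Linear(10, 5)

# Define the in-memory learning rule (here AGAD) as before ...
rpu_config = build_config('agad', device=SoftBoundsReferenceDevice())
# ... and swap the tile implementation for the torch-based transfer tile.
rpu_config.tile_class = TorchTransferTile

analog_model = convert_to_analog(model, rpu_config)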

Kaoutar El Maghraoui and others added 25 commits January 4, 2023 22:39
@maljoras maljoras requested review from kkvtran and kaoutar55 and removed request for kkvtran December 15, 2023 19:01
@maljoras maljoras merged commit 309f881 into IBM:master Dec 18, 2023