Switch from tiny to smollm:135m #891
Conversation
Reviewer's Guide by Sourcery

This pull request switches the default model used in the demo script from 'tiny' to 'smollm:135m'. The change involves updating the model name in various commands and messages within the script. Additionally, the pull request updates the shellcheck command in the Makefile to include scripts in nested directories.
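The Makefile change itself isn't reproduced in this guide. As a rough sketch, a shellcheck invocation that covers scripts in nested directories (rather than only the top-level directory) might look like the following; the exact target and paths in the real Makefile are assumptions here:

```bash
# Hypothetical shellcheck invocation covering nested directories.
# find locates every *.sh file under the repo root and xargs feeds
# them to shellcheck in one batch; -print0/-0 keep odd paths safe.
find . -name '*.sh' -print0 | xargs -0 shellcheck
```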
Sequence diagram for pulling the model

```mermaid
sequenceDiagram
participant User
participant ramalama script
participant ramalama CLI
participant Ollama
User->>ramalama script: Executes ramalama.sh pull
ramalama script->>ramalama CLI: ramalama rm --ignore smollm:135m
ramalama CLI-->>ramalama script: Removes the model (if it exists)
ramalama script->>ramalama CLI: ramalama pull smollm:135m
ramalama CLI->>Ollama: Downloads smollm:135m model
Ollama-->>ramalama CLI: Model downloaded
ramalama CLI-->>ramalama script: Model pulled
ramalama script->>ramalama CLI: ramalama ls
ramalama CLI-->>ramalama script: Lists models, grep for smollm:135m
ramalama script->>ramalama CLI: podman images
ramalama CLI-->>ramalama script: Lists container images, grep for ramalama
```
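The same flow, expressed as the plain shell commands the diagram describes (a sketch; the demo script may structure these steps differently):

```bash
# Remove any existing copy of the model; --ignore makes this a no-op
# when the model is not present.
ramalama rm --ignore smollm:135m

# Pull smollm:135m, then verify it appears in the local model list
# and that the ramalama container image is available.
ramalama pull smollm:135m
ramalama ls | grep smollm:135m
podman images | grep ramalama
```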
Sequence diagram for converting the model to OCI content

```mermaid
sequenceDiagram
participant User
participant ramalama script
participant ramalama CLI
participant podman
User->>ramalama script: Executes ramalama.sh kubernetes
ramalama script->>ramalama CLI: ramalama convert smollm:135m quay.io/ramalama/smollm:1.0
ramalama CLI->>podman: Converts smollm:135m model to OCI content
podman-->>ramalama CLI: OCI content created
ramalama CLI-->>ramalama script: Model converted
ramalama script->>ramalama CLI: podman images
ramalama CLI->>podman: Lists container images, grep for quay.io/ramalama/smollm
podman-->>ramalama CLI: Lists images
ramalama script->>ramalama CLI: ramalama serve --generate kube --name smollm-service oci://quay.io/ramalama/smollm:1.0
ramalama CLI-->>ramalama script: Generates Kubernetes YAML file
```
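Again as plain shell commands (a sketch based on the diagram above; the commands shown are the ones the diagram references):

```bash
# Convert the pulled model into OCI content and confirm the
# resulting image exists locally.
ramalama convert smollm:135m quay.io/ramalama/smollm:1.0
podman images | grep quay.io/ramalama/smollm

# Generate Kubernetes YAML for serving the OCI-packaged model.
ramalama serve --generate kube --name smollm-service oci://quay.io/ramalama/smollm:1.0
```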
Hey @ericcurtin - I've reviewed your changes - here's some feedback:
Overall Comments:
- Consider renaming `ramalama.sh` to `ramalama_smollm.sh` or similar to reflect the specific model it uses.
- The `ramalama rm --ignore` commands could be simplified to `ramalama rm smollm:135m`.
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
This is probably a consequence of my slow network, but I switched to smollm:135m; it's easier for demos, and tiny was taking too long to download.

Signed-off-by: Eric Curtin <[email protected]>
LGTM
Summary by Sourcery

Switch the default model in the demo script from `tiny` to `smollm:135m` to improve the out-of-the-box experience for users with slower networks.