Add chat template support #917

Merged Mar 11, 2025 (5 commits)

Conversation

@engelmi (Member) commented Mar 7, 2025

This PR enables the (automatic) use of a chat template file via ramalama run by passing it to llama-run. The chat template can either be provided/downloaded directly or be extracted from the GGUF model and stored; preference is given to the provided chat template file.

TODO:

Summary by Sourcery

This PR adds support for chat templates to ramalama run. It enables the automatic use of chat template files by passing them to llama-run. The chat template can be provided directly, downloaded, or extracted from the GGUF model. It also includes a conversion tool to convert Go Templates (used by Ollama) to Jinja templates.

New Features:

  • Adds support for chat templates, allowing users to specify a chat template file for use with models.
  • Adds automatic conversion of Go templates to Jinja templates for compatibility with llama-run.
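
As a rough illustration of the run path described above, here is a minimal sketch of how the llama-run arguments could be assembled; the function name is an assumption and the real argument handling lives in ramalama/model.py:

from typing import List, Optional

def build_llama_run_args(model_path: str, chat_template_path: Optional[str]) -> List[str]:
    # When running in a container, chat_template_path would be the path the
    # template file is mounted at inside the container.
    args = ["llama-run"]
    if chat_template_path is not None:
        args += ["--chat-template-file", chat_template_path]
    args.append(model_path)
    return args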

sourcery-ai bot (Contributor) commented Mar 7, 2025

Reviewer's Guide by Sourcery

This PR adds support for chat templates to ramalama run. It enables the automatic use of chat template files by passing them to llama-run. The chat template can be provided directly, downloaded, or extracted from the GGUF model. Additionally, the PR includes a Go template to Jinja template converter.

Sequence diagram for adding a new snapshot with chat template

sequenceDiagram
    participant MS as ModelStore
    participant OR as OllamaRepository
    participant HF as HuggingfaceRepository
    participant GGUF as GGUFInfoParser
    MS->OR: new_snapshot(model_tag, snapshot_hash, snapshot_files)
    OR->MS: _prepare_new_snapshot(model_tag, snapshot_hash, snapshot_files)
    MS->MS: Creates ref file and snapshot directory
    OR->MS: _download_snapshot_files(snapshot_hash, snapshot_files)
    loop For each file in snapshot_files
        MS->OR: get_blob_file_path(file.hash)
        OR->OR: download(dest_path, snapshot_dir)
        OR->MS: get_snapshot_file_path(snapshot_hash, file.name)
        MS->MS: Create symlink
    end
    OR->MS: _ensure_chat_template(model_tag, snapshot_hash, snapshot_files)
    alt Chat template file specified
        MS->MS: get_blob_file_path(file.hash)
        MS->MS: Reads chat template file
        opt Is Go template
            MS->go2jinja: go_to_jinja(chat_template)
            go2jinja->MS: jinja_template
            MS->MS: update_snapshot(model_tag, snapshot_hash, files)
        end
    else No chat template file specified
        MS->MS: get_blob_file_path(model_file.hash)
        MS->GGUF: is_model_gguf(model_file_path)
        alt Is GGUF model
            MS->GGUF: parse(model_file_path)
            GGUF->MS: info
            MS->MS: info.get_chat_template()
            opt Is Go template
                MS->go2jinja: go_to_jinja(tmpl)
                go2jinja->MS: jinja_template
                MS->MS: update_snapshot(model_tag, snapshot_hash, files)
            end
        end
    end
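
In prose: a provided/downloaded chat template wins; otherwise the template is extracted from the GGUF metadata; Go templates are converted to Jinja before use. A condensed, self-contained sketch of that decision follows (the Go-template check and function names here are stand-ins, not the actual ModelStore code):

from typing import Callable, Optional

def choose_chat_template(
    provided: Optional[str],
    gguf_metadata: dict,
    go_to_jinja: Callable[[str], str],
) -> Optional[str]:
    # Preference is given to a template that was provided/downloaded directly.
    template = provided or gguf_metadata.get("tokenizer.chat_template")
    if template is None:
        return None
    # Ollama ships Go templates (e.g. "{{ .Messages }}"); llama-run expects Jinja,
    # so Go-looking templates are converted first. This detection is a crude
    # stand-in for whatever check the real code performs.
    if "{{ ." in template or "{{- ." in template:
        template = go_to_jinja(template)
    return template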

Sequence diagram for updating a snapshot with chat template

sequenceDiagram
    participant MS as ModelStore
    MS->MS: update_snapshot(model_tag, snapshot_hash, new_snapshot_files)
    alt Directory setup does not exist
        MS->MS: return False
    else Ref file does not exist
        MS->MS: return False
    else
        MS->MS: get_ref_file(model_tag)
        MS->MS: Updates ref_file.filenames, model_name and chat_template_name
        MS->MS: Writes ref_file.serialize() to file
        MS->MS: _download_snapshot_files(snapshot_hash, new_snapshot_files)
        loop For each file in new_snapshot_files
            MS->MS: get_blob_file_path(file.hash)
            MS->MS: download(dest_path, snapshot_dir)
            MS->MS: get_snapshot_file_path(snapshot_hash, file.name)
            MS->MS: Create symlink
        end
        MS->MS: return True
    end

Updated class diagram for RefFile

classDiagram
    class RefFile {
        - hash: str
        - filenames: list[str]
        - model_name: str
        - chat_template_name: str
        - _path: str
        + from_path(path: str) : RefFile
        + serialize() : str
        + path : str
    }
    note for RefFile "SEP, MODEL_SUFFIX and CHAT_TEMPLATE_SUFFIX are class constants"
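
Rendered as code, the shape above might look roughly like this; the field names mirror the diagram, while the serialization format and constant values are assumptions, not the actual ref-file layout:

from dataclasses import dataclass, field
from typing import List

@dataclass
class RefFile:
    SEP = "\n"                               # assumed value of the class constant
    MODEL_SUFFIX = ".model"                  # assumed
    CHAT_TEMPLATE_SUFFIX = ".chat_template"  # assumed

    hash: str = ""
    filenames: List[str] = field(default_factory=list)
    model_name: str = ""
    chat_template_name: str = ""
    _path: str = ""

    @property
    def path(self) -> str:
        return self._path

    def serialize(self) -> str:
        # Illustrative format only: hash first, then one filename per line.
        return self.SEP.join([self.hash, *self.filenames])

    @classmethod
    def from_path(cls, path: str) -> "RefFile":
        with open(path) as f:
            ref_hash, *names = f.read().splitlines()
        return cls(hash=ref_hash, filenames=names, _path=path)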

Class diagram for GGUFModelInfo

classDiagram
    class GGUFModelInfo {
        - Metadata: dict
        - Tensors: list
        - LittleEndian: bool
        + get_chat_template() : str
        + serialize(json: bool, all: bool) : str
    }
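
GGUF stores the chat template in the file's metadata, conventionally under the tokenizer.chat_template key, so get_chat_template() is essentially a metadata lookup. A sketch, assuming the attribute names from the diagram; the actual parsing is done by GGUFInfoParser in ramalama/gguf_parser.py:

class GGUFModelInfo:
    def __init__(self, metadata: dict, tensors: list, little_endian: bool = True):
        self.Metadata = metadata        # key/value pairs parsed from the GGUF header
        self.Tensors = tensors
        self.LittleEndian = little_endian

    def get_chat_template(self) -> str:
        # Return an empty string when the model embeds no template.
        return self.Metadata.get("tokenizer.chat_template", "")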

File-Level Changes

Added support for chat templates to ramalama run by passing the chat template file to llama-run.
  • The chat template can be provided directly, downloaded, or extracted from the GGUF model.
  • The chat template file is mounted into the container running the model.
  • The --chat-template-file argument is passed to llama-run.
  Files: ramalama/model.py, ramalama/common.py

Added logic to extract chat templates from GGUF models and convert Go templates to Jinja templates.
  • Added GGUFInfoParser to extract chat templates from GGUF files.
  • Added the go2jinja library to convert Go templates to Jinja templates.
  • The extracted or converted chat template is stored in the model store.
  • The update_snapshot method is used to add the chat template to the model store.
  Files: ramalama/model_store.py, ramalama/gguf_parser.py, ramalama/go2jinja/go2jinja.py, ramalama/go2jinja/README.md, Makefile

Added the SnapshotFileType enum to distinguish between different types of snapshot files.
  • Added the SnapshotFileType enum with values for Model, ChatTemplate, and Other.
  • The SnapshotFile class now includes a type attribute of type SnapshotFileType.
  • The OllamaRepository and HuggingfaceRepository classes now specify the correct SnapshotFileType when creating SnapshotFile objects.
  Files: ramalama/model_store.py, ramalama/ollama.py, ramalama/huggingface.py, ramalama/url.py

Added logic to handle local models without a model store.
  • The exists and path methods in OllamaRepository now check if a model store is available before attempting to use it.
  • The model_path method in OllamaRepository now checks if a model store is available before attempting to use it.
  Files: ramalama/ollama.py
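
The enum itself is small; a sketch matching the values listed above (only the type attribute of SnapshotFile is documented here, the other fields are taken from the sequence diagrams and may differ):

from dataclasses import dataclass
from enum import Enum, auto

class SnapshotFileType(Enum):
    Model = auto()
    ChatTemplate = auto()
    Other = auto()

@dataclass
class SnapshotFile:
    hash: str
    name: str
    type: SnapshotFileType = SnapshotFileType.Other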


@engelmi (Member, Author) commented Mar 7, 2025

@ericcurtin Could we build a new version of the ramalama container image?
It seems that it does not yet include the changes from ggml-org/llama.cpp#11961 since the --chat-template-file CLI option for llama-run is missing.

@ericcurtin (Collaborator) commented Mar 7, 2025

@rhatdan has the build infrastructure set up to do it, so it would be less effort to wait for him for a new release.

But if you build a container image locally, you can just pass that to RamaLama for development purposes.

@engelmi force-pushed the add-chat-template-support branch from 66ad4bf to d756093 on March 7, 2025 15:39
if file.type == SnapshotFileType.ChatTemplate:
    return
if file.type == SnapshotFileType.Model:
    model_file = file
Member:

Should we break here? Can there be multiple SnapshotFileType.Model files? If yes, we only see the last one.

Member Author (engelmi):

The ModelStore currently allows multiple models, and Hugging Face allows that as well (e.g. mradermacher/SmolLM-135M-GGUF). However, this would be invalid as input for ramalama. When the refs file is serialized, the current approach is to use the last seen model file; the same applies to the chat template file. So I think it's probably worth adding some kind of validation for this (i.e. only one model and one chat template in the list of files) and raising an exception. WDYT?

Member:

Yes, I think we should throw an error, since RamaLama has no way to know if it chose the right one.

Member Author (engelmi):

Added a validation function, including a few unit test cases.
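
A minimal sketch of what that validation could look like; the function name and error type are assumptions, and the real version lives in ramalama/model_store.py with its unit tests in test/unit/test_model_store.py:

def validate_snapshot_files(snapshot_files):
    # RamaLama cannot decide between several candidates, so reject ambiguity early.
    models = [f for f in snapshot_files if f.type == SnapshotFileType.Model]
    templates = [f for f in snapshot_files if f.type == SnapshotFileType.ChatTemplate]
    if len(models) > 1:
        raise ValueError("Only one model file per snapshot is supported")
    if len(templates) > 1:
        raise ValueError("Only one chat template file per snapshot is supported")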

@rhatdan (Member) commented Mar 7, 2025

We will do a release on Monday or Sunday.

@codefromthecrypt commented:

So, when I run the following and access the OpenAI endpoint, I still get Jinja errors on tool calls; this PR is foundational to Jinja support but does not enable it yet, right?

$ gh pr checkout 917
$ python3 -m venv .venv && source .venv/bin/activate && pip install -e . && python bin/ramalama serve qwen2.5:3b

@codefromthecrypt commented:

PS: if I make this change and run ramalama serve qwen2.5:3b, my tool call examples work, except one (semantic-kernel dotnet):

--- a/ramalama/model.py
+++ b/ramalama/model.py
@@ -586,6 +586,7 @@ class Model(ModelBase):
         else:
             exec_args = [
                 "llama-server",
+                "--jinja",
                 "--port",
                 args.port,
                 "-m",

ggml-org/llama.cpp#12279 has details on failures in general with hf qwen2.5, and the error of semantic-kernel dotnet, which also applies here.

@engelmi (Member, Author) commented Mar 9, 2025

@codefromthecrypt By using the --jinja option for llama-run or llama-server, the built-in chat templates are used implicitly. I assume one of the llama.cpp built-in templates produces output quite similar to the one required by your model. When you run

$ ramalama inspect qwen2.5:3b | grep chat_template

You get basically the Jinja template required by that model.
(If the model has been pulled from Ollama, it's probably not there, since Ollama uses Go Templates instead of Jinja; we are working on that as well.)

This PR is part of enabling ramalama to detect and use the chat templates provided by platforms such as Ollama, as well as to extract this information from .gguf models. However, this will take a bit longer, so I think it's best to use ramalama with your changes for the presentation.

@ericcurtin (Collaborator) commented:

> By using the --jinja option for llama-run or llama-server, the built-in chat templates are used implicitly. [...]

Long-term we want llama-server to default to Jinja without any manual intervention and fall back to other techniques; this needs upstream llama.cpp work.

@codefromthecrypt commented:

Thanks for the advice, folks! PS: please applaud @ochafik for the work upstream on llama.cpp! ggml-org/llama.cpp#12279

@engelmi force-pushed the add-chat-template-support branch 6 times, most recently from 7f5f8c2 to b2d9aa3 on March 10, 2025 16:06
@engelmi (Member, Author) commented Mar 10, 2025

I also added the feature to convert Ollama's Go Templates to Jinja ones, so those should work as well. For example, when running:

# with chat template support
$ ramalama --use-model-store run --pull=never ollama://granite-code
🦭 > write a hello world program in python
print("hello world")

🦭 > 

# without the chat template support
$ ramalama run --pull=never ollama://granite-code
🦭 > write a hello world program in python
terminate called after throwing an instance of 'std::runtime_error'
  what():  this custom template is not supported

I had to use --pull=never since the quay.io container image doesn't have the changes for --chat-template-file in llama.cpp (yet).

It downloads the Go template from Ollama, converts it to Jinja, and passes the converted template to llama-run.

$ cat chat_template
{{ if .Suffix }}<fim_prefix> {{ .Prompt }}<fim_suffix> {{ .Suffix }}<fim_middle>
{{- else if .Messages }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}Question:
{{ .Content }}
...
$ cat chat_template_converted
{% if suffix %}<fim_prefix> {{ prompt }}<fim_suffix> {{ suffix }}<fim_middle>
{%- elif messages %}
{%- for m in messages %}
{%- set last = ((messages)[loop.index0:])|length==1 %}
{%- if m["role"]=="user" %}Question:
{{ m["content"] }}
...
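
Based on the go_to_jinja call shown in the sequence diagrams, the conversion step boils down to something like the following; the import path is taken from the traceback later in this thread and may have moved since:

from ramalama.go2jinja import go2jinja

with open("chat_template") as f:              # Go template pulled from Ollama
    go_template = f.read()

jinja_template = go2jinja.go_to_jinja(go_template)

with open("chat_template_converted", "w") as f:
    f.write(jinja_template)                   # this is what gets passed to llama-run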

@rhatdan @ericcurtin PTAL, I think the PR is ready for review.

@engelmi marked this pull request as ready for review on March 10, 2025 16:08
@sourcery-ai bot (Contributor) left a comment:

Hey @engelmi - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Consider adding a method to RefFile to check if a file exists in the snapshot directory.
  • The go2jinja dependency should be added to install-requirements in the Makefile.
Here's what I looked at during the review
  • 🟡 General issues: 2 issues found
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


@engelmi force-pushed the add-chat-template-support branch 3 times, most recently from 36060fd to 7198cb8 on March 11, 2025 07:38
@engelmi requested a review from rhatdan on March 11, 2025 07:55
@ericcurtin (Collaborator) commented:

Tested this branch on macOS, problem:

./install.sh -l
cp: /var/folders/q3/5msn8s8j02d62vcdg_y2hh2w0000gn/T/tmp.UMDH5HbFl3/share/go2jinja/__init__.py: No such file or directory
cp: /var/folders/q3/5msn8s8j02d62vcdg_y2hh2w0000gn/T/tmp.UMDH5HbFl3/share/go2jinja/go2jinja.py: No such file or directory

@engelmi (Member, Author) commented Mar 11, 2025

> Tested this branch on macOS, problem: [...]

Did you use the current version? I pushed some updates recently since the CI was failing due to this.

@rhatdan (Member) commented Mar 11, 2025

LGTM, other than raising an error on the multiple-templates issue.

@engelmi force-pushed the add-chat-template-support branch from 7198cb8 to 311a4d3 on March 11, 2025 14:15
@ericcurtin (Collaborator) commented:

> Did you use the current version? I pushed some updates recently since the CI was failing due to this.

Just refreshed the branch, still occurring.

@ericcurtin (Collaborator) commented Mar 11, 2025

We have this problem again on macOS /usr/bin/python3. It would be nice to set up CI to check this. It's an easy fix every time: just populate a variable first. If we remember to only use variables in f-strings, it will stop happening.

Traceback (most recent call last):
  File "/Users/ecurtin/git/ramalama/bin/ramalama", line 98, in <module>
    main(sys.argv[1:])
  File "/Users/ecurtin/git/ramalama/bin/ramalama", line 52, in main
    import ramalama
  File "/Users/ecurtin/git/ramalama/./ramalama/__init__.py", line 5, in <module>
    from ramalama.cli import HelpException, init_cli, print_version
  File "/Users/ecurtin/git/ramalama/./ramalama/cli.py", line 10, in <module>
    import ramalama.oci
  File "/Users/ecurtin/git/ramalama/./ramalama/oci.py", line 9, in <module>
    from ramalama.model import Model
  File "/Users/ecurtin/git/ramalama/./ramalama/model.py", line 24, in <module>
    from ramalama.model_store import ModelStore
  File "/Users/ecurtin/git/ramalama/./ramalama/model_store.py", line 14, in <module>
    from ramalama.go2jinja import go2jinja
  File "/Users/ecurtin/git/ramalama/./ramalama/go2jinja/go2jinja.py", line 117
    return f"{self.operands[0].to_jinja()}.format({", ".join([op.to_jinja() for op in self.operands[1:]])})"
                                                    ^
SyntaxError: f-string: expecting '}'

@engelmi (Member, Author) commented Mar 11, 2025

> We have this problem again on macOS /usr/bin/python3. [...] SyntaxError: f-string: expecting '}'

This error is new to me: why does this cause an issue on macOS? And yes, it would be great to have the CI properly checking for this on macOS, same for install.sh.
Edit: Ah, it's the use of " inside the {...}.

@engelmi (Member, Author) commented Mar 11, 2025

> Just refreshed the branch, still occurring.

I can't test this on my system or via the CI, so it's quite hard to fix. Could you help me here? @ericcurtin

Edit: Removed the directory and put the copied go2jinja.py directly into the ramalama directory.

engelmi added 5 commits March 11, 2025 15:41
Usually, the chat templates for gguf models are written as jinja templates.
Ollama, however, uses Go Templates specific to ollama. In order to use the
proper templates for models pulled from ollama, the chat templates are
converted to jinja ones and passed to llama-run.

Signed-off-by: Michael Engel <[email protected]>
Signed-off-by: Michael Engel <[email protected]>
@ericcurtin (Collaborator) commented:

> This error is new to me: why does this cause an issue on macOS? [...] Edit: Ah, it's the use of " inside the {...}.

Older versions of python3 don't have PEP 701.
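
For reference, the offending pattern and the portable fix (before PEP 701, which landed in Python 3.12, an f-string expression could not reuse the quote character that delimits the string):

def to_jinja(self):  # simplified fragment for illustration
    # Breaks the parser on Python < 3.12: the ", " inside {...} reuses the
    # f-string's own double quotes.
    #   return f"{self.operands[0].to_jinja()}.format({", ".join(...)})"
    # Portable fix, as suggested above: populate a variable first.
    joined = ", ".join(op.to_jinja() for op in self.operands[1:])
    return f"{self.operands[0].to_jinja()}.format({joined})"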

@engelmi force-pushed the add-chat-template-support branch from 311a4d3 to b751eef on March 11, 2025 14:42
@ericcurtin (Collaborator) commented:

It's working now

@ericcurtin requested a review from Copilot on March 11, 2025 14:52

Copilot left a comment:

Pull Request Overview

This PR adds support for chat templates to "ramalama run" by enabling users to provide, download, or extract chat template files and automatically convert Go templates to Jinja templates when necessary. Key changes include:

  • Introducing a new SnapshotFileType for chat templates and updating related file creation and validation functions.
  • Adding a get_chat_template method to model inspection and updating container mount options to support chat templates.
  • Incorporating go2jinja conversion support and extending test coverage for snapshot file validations.

Reviewed Changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated no comments.

Summary per file:
  • test/unit/test_model_store.py: Tests now include cases for chat template validation.
  • ramalama/model_inspect.py: Added get_chat_template method for chat template extraction support.
  • ramalama/url.py: Updated SnapshotFile instantiation with explicit file types.
  • ramalama/common.py: Added a constant for the chat template mount point.
  • ramalama/ollama.py: Updated SnapshotFile creation and path handling to include chat template files.
  • ramalama/model_store.py: Introduced SnapshotFileType enum, LocalSnapshotFile, and enhanced snapshot logic to handle chat templates, including conversion.
  • ramalama/model.py: Updated command building for container mounts to include the chat template file.
  • ramalama/huggingface.py: Integrated SnapshotFileType in file creation for consistency.

Comments suppressed due to low confidence (1)

ramalama/model_store.py:407

  • [nitpick] The inner variable 'file' shadows the outer loop variable from the 'for file in snapshot_files' construct, which may lead to confusion. Consider renaming it (e.g., to 'template_file').
with open(chat_template_file_path, "r") as file:
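
The rename suggested there would look like the following (the surrounding read is an assumption):

with open(chat_template_file_path, "r") as template_file:
    chat_template = template_file.read()  # no longer shadows the loop's `file`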
@rhatdan merged commit a28d902 into containers:main on Mar 11, 2025
17 checks passed