[FEATURE] Lora support in 2.3 #3072

Merged on Apr 7, 2023 · 117 commits

Commits
141be95
initial setup of lora support
felorhik Feb 18, 2023
afc8639
add pending support for safetensors with cloneofsimo/lora
felorhik Feb 19, 2023
5a7145c
Create convert_lora.py
felorhik Feb 19, 2023
82e4d5a
change to new method to load safetensors
felorhik Feb 20, 2023
096e1d3
start of rewrite for add / remove
felorhik Feb 20, 2023
e744774
Rewrite lora manager with hooks
neecapp Feb 20, 2023
404000b
Merge pull request #1 from neecapp/add_lora_support
felorhik Feb 20, 2023
8f6e43d
code cleanup
felorhik Feb 20, 2023
3c6c18b
cleanup suggestions from neecap
felorhik Feb 20, 2023
ac972eb
update prompt setup so lora's can be loaded in other ways
felorhik Feb 20, 2023
884a554
adjust loader to use a settings dict
felorhik Feb 20, 2023
6e730bd
Merge branch 'main' into add_lora_support
felorhik Feb 20, 2023
c3edede
add notes and adjust functions
felorhik Feb 20, 2023
488326d
Merge branch 'add_lora_support' of https://github.com/jordanramstad/I…
felorhik Feb 20, 2023
de89041
optimize functions for unloading
felorhik Feb 21, 2023
3732af6
fix prompt
neecapp Feb 21, 2023
8f527c2
Merge pull request #2 from jordanramstad/prompt-fix
felorhik Feb 21, 2023
e2b6dfe
Update generate.py
felorhik Feb 21, 2023
c1c62f7
Merge branch 'main' into add_lora_support
felorhik Feb 21, 2023
49c0516
change hook to override
felorhik Feb 21, 2023
5529309
adjusting back to hooks, forcing to be last in execution
felorhik Feb 21, 2023
c669336
Update lora_manager.py
felorhik Feb 21, 2023
24d9297
fix typo
felorhik Feb 21, 2023
f70b727
cleanup / concept of loading through diffusers
felorhik Feb 22, 2023
686f6ef
Merge branch 'main' into add_lora_support
felorhik Feb 22, 2023
af3543a
further cleanup and implement wrapper
felorhik Feb 22, 2023
cd333e4
move key converter to wrapper
felorhik Feb 22, 2023
5b4a241
Merge branch 'main' into add_lora_support
felorhik Feb 22, 2023
d408322
Merge branch 'main' into add_lora_support
felorhik Feb 22, 2023
71972c3
re-enable load attn procs support (no multiplier)
felorhik Feb 23, 2023
3f477da
Merge branch 'add_lora_support' of https://github.com/jordanramstad/I…
felorhik Feb 23, 2023
f64a4db
setup legacy class to abstract hacky logic for none diffusers lora an…
felorhik Feb 23, 2023
8e1fd92
Merge branch 'main' into add_lora_support
felorhik Feb 23, 2023
6a1129a
switch all none diffusers stuff to legacy, and load through compel pr…
felorhik Feb 23, 2023
b69f9d4
initial setup of cross attention
felorhik Feb 24, 2023
68a3132
move legacy lora manager to its own file
felorhik Feb 24, 2023
4ce8b1b
setup cross conditioning for lora
felorhik Feb 24, 2023
6a79484
Merge branch 'main' into add_lora_support
felorhik Feb 24, 2023
523e44c
simplify manager
felorhik Feb 24, 2023
7dbe027
tweaks and small refactors
damian0815 Feb 24, 2023
036ca31
Merge pull request #4 from damian0815/pr/2712
felorhik Feb 24, 2023
e700da2
Sync main with v2.3.1 (#2792)
lstein Feb 24, 2023
1447b6d
translationBot(ui): update translation (Spanish)
gallegonovato Feb 23, 2023
5725fcb
translationBot(ui): added translation (Romanian)
mahoney-jb Feb 23, 2023
ec14e2d
translationBot(ui): update translation (Portuguese (Brazil))
telles0808 Feb 23, 2023
ef82290
Merge branch 'main' into add_lora_support
felorhik Feb 24, 2023
49ffb64
ui: translations update from weblate (#2804)
psychedelicious Feb 24, 2023
34e3aa1
parent 9eed1919c2071f9199996df747c8638c4a75e8fb
Kyle0654 Dec 1, 2022
cd98d88
[nodes] Removed InvokerServices, simplying service model
Kyle0654 Feb 25, 2023
c22d529
Add node-based invocation system (#1650)
blessedcoolant Feb 25, 2023
d9c4627
add peft setup (need to install huggingface/peft)
felorhik Feb 26, 2023
9cf7e5f
Merge branch 'main' into add_lora_support
felorhik Feb 26, 2023
5a8d66a
merge lora support
lstein Mar 29, 2023
c2487e4
Kohya lora models load but generate freezes
lstein Mar 30, 2023
6ce2484
merge with 2.3 release candidate 6
lstein Mar 30, 2023
2a586f3
upgrade compel to work with lora syntax
lstein Mar 30, 2023
91e4c60
add solution to ROCm fail-to-install error
lstein Mar 30, 2023
ea5f6b9
Merge branch 'release/2.3.3-rc3' into feat/lora-support-2.3
lstein Mar 31, 2023
879c800
preliminary LoRA support ready for testing
lstein Mar 31, 2023
8fbe019
Merge branch 'release/2.3.3-rc3' into feat/lora-support-2.3
lstein Mar 31, 2023
74ff73f
default --ckpt_convert to true
lstein Mar 31, 2023
8554f81
feat(ui): Add Lora To WebUI
blessedcoolant Mar 31, 2023
b598b84
fix(ui): Missing Colors
blessedcoolant Mar 31, 2023
1040076
build(ui): Add Lora to WebUI
blessedcoolant Mar 31, 2023
274d623
fix: Typescript being broken
blessedcoolant Mar 31, 2023
a8b9458
fix: LoraManager UI not returning a component
blessedcoolant Mar 31, 2023
2c5c20c
localization(ui): Localize Lora Stuff
blessedcoolant Mar 31, 2023
4faf902
build(ui): Rebuild Frontend - Add Lora WebUI
blessedcoolant Mar 31, 2023
dabf56b
feat: Add Lora Manager to remaining tabs
blessedcoolant Mar 31, 2023
beff122
build(ui): Add Lora To Other Tabs
blessedcoolant Mar 31, 2023
acd9838
Merge branch 'v2.3' into feat/lora-support-2.3
lstein Apr 1, 2023
c9372f9
moved LoRA manager cleanup routines into a context
lstein Apr 1, 2023
b632b35
remove direct legacy checkpoint rendering capabilities
lstein Apr 1, 2023
605ceb2
add support for loras ending with .pt
lstein Apr 1, 2023
d3b63ca
detect lora files with .pt suffix
lstein Apr 1, 2023
8518f8c
LoRA alpha can be 0
lstein Apr 1, 2023
67435da
added a button to retrieve textual inversion triggers; but causes hig…
lstein Apr 2, 2023
71e4add
add debugging to where spinloop is occurring
lstein Apr 2, 2023
110b067
Update ldm/modules/kohya_lora_manager.py
lstein Apr 2, 2023
941fc22
Update ldm/modules/kohya_lora_manager.py
lstein Apr 2, 2023
90f77c0
Update ldm/modules/lora_manager.py
lstein Apr 2, 2023
e0bd30b
more elegant handling of lora context
lstein Apr 2, 2023
16aeb8d
tweak debugging message for lora unloading
lstein Apr 2, 2023
d7b2dbb
limit number of suggested concepts to those with at least 6 likes
lstein Apr 2, 2023
63ecdb1
rebuild frontend
lstein Apr 2, 2023
fad6fc8
fix(ui): LoraManager UI causing overload
blessedcoolant Apr 2, 2023
3a0fed2
add withLora() readline autocompletion support
lstein Apr 2, 2023
0a0e44b
fix crash when no extra conditioning provided
lstein Apr 2, 2023
afcb278
fix crash when no extra conditioning provided (redux)
lstein Apr 2, 2023
8246e4a
fix cpu overload issue with TI trigger button
lstein Apr 3, 2023
f75a20b
rebuild frontend
lstein Apr 3, 2023
667dee7
add scrollbars to textual inversion button menu
lstein Apr 3, 2023
a14fc3a
fix: Fix Lora / TI Prompt Interaction
blessedcoolant Apr 3, 2023
25faec8
feat(ui): Make HuggingFace Concepts display optional
blessedcoolant Apr 3, 2023
11cd8d0
build: Frontend (Lora Support)
blessedcoolant Apr 3, 2023
793488e
sort lora list alphabetically
lstein Apr 3, 2023
1e6d804
Merge branch 'feat/lora-support-2.3' of github.com:invoke-ai/InvokeAI…
lstein Apr 3, 2023
fc4b76c
change label for HF concepts library option
lstein Apr 3, 2023
ad5142d
remove nodes app directory
lstein Apr 4, 2023
e3772f6
sort loras and TIs in case-insensitive fashion
lstein Apr 4, 2023
cb1d433
create loras directory at update time
lstein Apr 5, 2023
e069523
bump compel version
lstein Apr 5, 2023
261be4e
adjust debouncing timeout; fix duplicated ti triggers in menu
lstein Apr 5, 2023
c8fa019
remove app tests
lstein Apr 5, 2023
6a8848b
Draft implementation if LyCORIS(LoCon and LoHi)
Apr 5, 2023
b62cce2
Clean up
Apr 5, 2023
7bd870f
decrease minimum number of likes to 5
lstein Apr 5, 2023
baf6094
Update kohya_lora_manager.py
Apr 5, 2023
ab9756b
[FEATURE] LyCORIS support in 2.3 (#3118)
lstein Apr 6, 2023
3dffa33
Merge branch 'v2.3' into feat/lora-support-2.3
lstein Apr 6, 2023
4b624dc
Merge branch 'feat/lora-support-2.3' of github.com:invoke-ai/InvokeAI…
lstein Apr 6, 2023
e9d2205
rebuild frontend
lstein Apr 6, 2023
09fe211
Update shared_invokeai_diffusion.py
damian0815 Apr 6, 2023
0784e49
code cleanup and change default LoRA weight
lstein Apr 6, 2023
35c4ff8
prevent crash when prompt blend requested
lstein Apr 7, 2023
0590bd6
Merge branch 'v2.3' into feat/lora-support-2.3
lstein Apr 7, 2023
b141ab4
bump compel version to fix lora + blend
damian0815 Apr 7, 2023
6 changes: 6 additions & 0 deletions .coveragerc
@@ -0,0 +1,6 @@
[run]
omit='.env/*'
source='.'

[report]
show_missing = true
1 change: 1 addition & 0 deletions .gitignore
@@ -68,6 +68,7 @@ htmlcov/
.cache
nosetests.xml
coverage.xml
cov.xml
*.cover
*.py,cover
.hypothesis/
5 changes: 5 additions & 0 deletions .pytest.ini
@@ -0,0 +1,5 @@
[pytest]
DJANGO_SETTINGS_MODULE = webtas.settings
; python_files = tests.py test_*.py *_tests.py

addopts = --cov=. --cov-config=.coveragerc --cov-report xml:cov.xml
93 changes: 93 additions & 0 deletions docs/contributing/ARCHITECTURE.md
@@ -0,0 +1,93 @@
# Invoke.AI Architecture

```mermaid
flowchart TB

subgraph apps[Applications]
webui[WebUI]
cli[CLI]

subgraph webapi[Web API]
api[HTTP API]
sio[Socket.IO]
end

end

subgraph invoke[Invoke]
direction LR
invoker
services
sessions
invocations
end

subgraph core[AI Core]
Generate
end

webui --> webapi
webapi --> invoke
cli --> invoke

invoker --> services & sessions
invocations --> services
sessions --> invocations

services --> core

%% Styles
classDef sg fill:#5028C8,font-weight:bold,stroke-width:2,color:#fff,stroke:#14141A
classDef default stroke-width:2px,stroke:#F6B314,color:#fff,fill:#14141A

class apps,webapi,invoke,core sg

```

## Applications

Applications are built on top of the Invoke framework. They should construct an invoker and then interact through it. They should avoid interacting directly with core code in order to support a variety of configurations.

### Web UI

The Web UI is built on top of an HTTP API built with [FastAPI](https://fastapi.tiangolo.com/) and [Socket.IO](https://socket.io/). The frontend code is found in `/frontend` and the backend code is found in `/ldm/invoke/app/api_app.py` and `/ldm/invoke/app/api/`. The code is organized as follows:

| Component | Description |
| --- | --- |
| api_app.py | Sets up the API app, annotates the OpenAPI spec with additional data, and runs the API |
| dependencies | Creates all invoker services and the invoker, and provides them to the API |
| events | An eventing system that could in the future be adapted to support horizontal scale-out |
| sockets | The Socket.IO interface - handles listening to and emitting session events (events are defined in the events service module) |
| routers | API definitions for different areas of API functionality |

### CLI

The CLI is built automatically from invocation metadata, and also supports invocation piping and auto-linking. Code is available in `/ldm/invoke/app/cli_app.py`.

## Invoke

The Invoke framework provides the interface to the underlying AI systems and is built with flexibility and extensibility in mind. There are four major concepts: invoker, sessions, invocations, and services.

### Invoker

The invoker (`/ldm/invoke/app/services/invoker.py`) is the primary interface through which applications interact with the framework. Its primary purpose is to create, manage, and invoke sessions. It also maintains two sets of services:
- **invocation services**, which are used by invocations to interact with core functionality.
- **invoker services**, which are used by the invoker to manage sessions and the invocation queue.
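
A minimal sketch of this arrangement (the class and method names below are illustrative assumptions, not the actual InvokeAI API):

```py
# Illustrative sketch only -- names are assumptions, not the real API.
class Invoker:
    def __init__(self, services, invoker_services):
        self.services = services                  # invocation services, used by invocations
        self.invoker_services = invoker_services  # session management + invocation queue

    def create_session(self):
        """Create a new, empty session (an invocation graph)."""
        ...

    def invoke(self, session, invoke_all=False):
        """Queue the next ready invocation, or the whole graph, for execution."""
        ...
```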

### Sessions

Invocations and links between them form a graph, which is maintained in a session. Sessions can be queued for invocation, which will execute their graph (either the next ready invocation, or all invocations). Sessions also maintain execution history for the graph (including storage of any outputs). An invocation may be added to a session at any time, and there is capability to add an entire graph at once, as well as to automatically link new invocations to previous invocations. Invocations cannot be deleted or modified once added.

The session graph does not support looping. This is left as an application problem to prevent additional complexity in the graph.
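
To make these semantics concrete, here is a hypothetical usage sketch (none of these names are the real API; `invoker` is assumed to be constructed as described above):

```py
# Hypothetical sketch of session semantics -- all names are assumptions.
session = invoker.create_session()
a = session.add_invocation(LoadImageInvocation(...))    # nodes may be added at any time
b = session.add_invocation(UpscaleInvocation(level=2))
session.add_link(a, "image", b, "image")  # a's output feeds b's input
invoker.invoke(session, invoke_all=True)  # execute the whole graph
# Invocations cannot be deleted or modified once added, and the graph may not loop.
```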

### Invocations

Invocations represent individual units of execution, with inputs and outputs. All invocations are located in `/ldm/invoke/app/invocations`, and are all automatically discovered and made available in the applications. These are the primary way to expose new functionality in Invoke.AI, and the [implementation guide](INVOCATIONS.md) explains how to add new invocations.

### Services

Services provide invocations access to AI Core functionality and other necessary functionality (e.g. image storage). These are available in `/ldm/invoke/app/services`. As a general rule, a new service should provide an interface as an abstract base class, and may provide a lightweight local implementation by default in its module. The goal for all services is to enable the use of different implementations (e.g. cloud storage for image storage), without loading any module dependencies unless that implementation is actually used (i.e. don't import anything that won't be used, especially if it's expensive to import).
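
For example, a new service might be sketched as follows (illustrative names; the deferred import demonstrates the lazy-loading rule):

```py
from abc import ABC, abstractmethod

class ImageStorageBase(ABC):
    """Interface that alternative backends (e.g. cloud storage) would implement."""

    @abstractmethod
    def get(self, image_type, image_name): ...

    @abstractmethod
    def save(self, image_type, image_name, image): ...

class DiskImageStorage(ImageStorageBase):
    """Lightweight local default implementation."""

    def get(self, image_type, image_name):
        from PIL import Image  # imported only when this backend is actually used
        ...

    def save(self, image_type, image_name, image):
        ...
```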

## AI Core

The AI Core is represented by the rest of the code base (i.e. the code outside of `/ldm/invoke/app/`).
105 changes: 105 additions & 0 deletions docs/contributing/INVOCATIONS.md
@@ -0,0 +1,105 @@
# Invocations

Invocations represent a single operation, its inputs, and its outputs. These operations and their outputs can be chained together to generate and modify images.

## Creating a new invocation

To create a new invocation, either find the appropriate module file in `/ldm/invoke/app/invocations` to add your invocation to, or create a new one in that folder. All invocations in that folder will be discovered and made available to the CLI and API automatically. Invocations make use of [typing](https://docs.python.org/3/library/typing.html) and [pydantic](https://pydantic-docs.helpmanual.io/) for validation and integration into the CLI and API.

An invocation looks like this:

```py
class UpscaleInvocation(BaseInvocation):
    """Upscales an image."""
    type: Literal['upscale'] = 'upscale'

    # Inputs
    image: Union[ImageField,None] = Field(description="The input image")
    strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
    level: Literal[2,4] = Field(default=2, description="The upscale level")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = context.services.images.get(self.image.image_type, self.image.image_name)
        results = context.services.generate.upscale_and_reconstruct(
            image_list=[[image, 0]],
            upscale=(self.level, self.strength),
            strength=0.0,  # GFPGAN strength
            save_original=False,
            image_callback=None,
        )

        # Results are image and seed, unwrap for now
        # TODO: can this return multiple results?
        image_type = ImageType.RESULT
        image_name = context.services.images.create_name(context.graph_execution_state_id, self.id)
        context.services.images.save(image_type, image_name, results[0][0])
        return ImageOutput(
            image=ImageField(image_type=image_type, image_name=image_name)
        )
```

Each portion is important to implement correctly.

### Class definition and type
```py
class UpscaleInvocation(BaseInvocation):
    """Upscales an image."""
    type: Literal['upscale'] = 'upscale'
```
All invocations must derive from `BaseInvocation`. They should have a docstring that declares what they do in a single, short line. They should also have a `type` with a type hint that's `Literal["command_name"]`, where `command_name` is what the user will type on the CLI or use in the API to create this invocation. The `command_name` must be unique. The `type` must be assigned the value of the literal in the type hint.

### Inputs
```py
    # Inputs
    image: Union[ImageField,None] = Field(description="The input image")
    strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
    level: Literal[2,4] = Field(default=2, description="The upscale level")
```
Inputs consist of three parts: a name, a type hint, and a `Field` with default, description, and validation information. For example:
| Part | Value | Description |
| ---- | ----- | ----------- |
| Name | `strength` | This field is referred to as `strength` |
| Type Hint | `float` | This field must be of type `float` |
| Field | `Field(default=0.75, gt=0, le=1, description="The strength")` | The default value is `0.75`, the value must be in the range (0,1], and help text will show "The strength" for this field. |

Notice that `image` has type `Union[ImageField,None]`. The `Union` allows this field to be parsed with `None` as a value, which enables linking to previous invocations. All fields should either provide a default value or allow `None` as a value, so that they can be overwritten with a linked output from another invocation.

The special type `ImageField` is also used here. All images are passed as `ImageField`, which protects them from pydantic validation errors (since images only ever come from links).

Finally, note that for all linking, the `type` of the linked fields must match. If the `name` also matches, then the field can be **automatically linked** to the output of a previous invocation by matching name and type.
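
For example, using the classes shown in this document: `ImageOutput` declares a field named `image` of type `ImageField`, and `UpscaleInvocation` declares an input with the same name and a compatible type, so the output can be linked to the input automatically:

```py
class ImageOutput(BaseInvocationOutput):
    type: Literal['image'] = 'image'
    image: ImageField = Field(default=None, description="The output image")

class UpscaleInvocation(BaseInvocation):
    type: Literal['upscale'] = 'upscale'
    # Same name ("image") and matching type, so this input can be
    # auto-linked to a previous invocation's ImageOutput.image field.
    image: Union[ImageField,None] = Field(description="The input image")
```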

### Invoke Function
```py
    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = context.services.images.get(self.image.image_type, self.image.image_name)
        results = context.services.generate.upscale_and_reconstruct(
            image_list=[[image, 0]],
            upscale=(self.level, self.strength),
            strength=0.0,  # GFPGAN strength
            save_original=False,
            image_callback=None,
        )

        # Results are image and seed, unwrap for now
        image_type = ImageType.RESULT
        image_name = context.services.images.create_name(context.graph_execution_state_id, self.id)
        context.services.images.save(image_type, image_name, results[0][0])
        return ImageOutput(
            image=ImageField(image_type=image_type, image_name=image_name)
        )
```
The `invoke` function is the last portion of an invocation. It is provided an `InvocationContext` which contains services to perform work as well as a `session_id` for use as needed. It should return a class with output values that derives from `BaseInvocationOutput`.

Before being called, the invocation will have all of its fields set from defaults, inputs, and finally links (overriding in that order).

Assume that this invocation may be running simultaneously with other invocations, may be running on another machine, or in other interesting scenarios. If you need functionality, please provide it as a service in the `InvocationServices` class, and make sure it can be overridden.

### Outputs
```py
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""
    type: Literal['image'] = 'image'

    image: ImageField = Field(default=None, description="The output image")
```
Output classes look like an invocation class without the invoke method. Prefer to use an existing output class if available, and prefer to name inputs the same as outputs when possible, to promote automatic invocation linking.
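
Putting the pieces together, here is a sketch of a brand-new invocation written against the conventions above. The class is hypothetical, and it assumes (as the upscale example suggests) that the image service stores PIL images:

```py
from typing import Literal, Union
from pydantic import Field
from PIL import ImageFilter

# BaseInvocation, InvocationContext, ImageField, ImageOutput, and ImageType
# are assumed to be imported from the invocations package.

class BlurInvocation(BaseInvocation):
    """Blurs an image."""
    type: Literal['blur'] = 'blur'

    # Inputs -- image allows None so it can be filled by a link
    image: Union[ImageField,None] = Field(description="The input image")
    radius: float = Field(default=2.0, gt=0, description="The blur radius")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = context.services.images.get(self.image.image_type, self.image.image_name)
        blurred = image.filter(ImageFilter.GaussianBlur(self.radius))  # assumes PIL images

        image_type = ImageType.RESULT
        image_name = context.services.images.create_name(context.graph_execution_state_id, self.id)
        context.services.images.save(image_type, image_name, blurred)
        return ImageOutput(
            image=ImageField(image_type=image_type, image_name=image_name)
        )
```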
15 changes: 13 additions & 2 deletions docs/features/CONCEPTS.md
@@ -25,10 +25,14 @@ library which downloads and merges TI files automatically upon request. You can
also install your own or others' TI files by placing them in a designated
directory.

You may also be interested in using [LoRA Models](LORAS.md) to
generate images with specialized styles and subjects.

### An Example

-Here are a few examples to illustrate how it works. All these images were
-generated using the command-line client and the Stable Diffusion 1.5 model:
+Here are a few examples to illustrate how Textual Inversion works. All
+these images were generated using the command-line client and the
+Stable Diffusion 1.5 model:

| Japanese gardener | Japanese gardener <ghibli-face> | Japanese gardener <hoi4-leaders> | Japanese gardener <cartoona-animals> |
| :--------------------------------: | :-----------------------------------: | :------------------------------------: | :----------------------------------------: |
@@ -147,6 +151,13 @@ angle brackets. In the example above `<easynegative>` is such a file
(the filename was `easynegative.safetensors`). In such cases, you can
change the trigger term simply by renaming the file.

## Training your own Textual Inversion models

InvokeAI provides a script that lets you train your own Textual
Inversion embeddings using a small number (about a half-dozen) of
images of your desired style or subject. Please see [Textual
Inversion](TEXTUAL_INVERSION.md) for details.

## Further Reading

Please see [the repository](https://github.com/rinongal/textual_inversion) and
100 changes: 100 additions & 0 deletions docs/features/LORAS.md
@@ -0,0 +1,100 @@
---
title: Low-Rank Adaptation (LoRA) Models
---

# :material-library-shelves: Using Low-Rank Adaptation (LoRA) Models

## Introduction

LoRA is a technique for fine-tuning Stable Diffusion models using much
less time and memory than traditional training techniques. The
resulting model files are much smaller than full model files, and can
be used to generate specialized styles and subjects.

LoRAs are built on top of Stable Diffusion v1.x or 2.x checkpoint or
diffusers models. To load a LoRA, you include its name in the text
prompt using a simple syntax described below. While you will generally
get the best results when you use the same model the LoRA was trained
on, they will work to a greater or lesser extent with other models.
The major caveat is that a LoRA built on top of an SD v1.x model
cannot be used with a v2.x model, and vice versa; if you try, you will
get an error. You may refer to multiple LoRAs in your prompt.

When you apply a LoRA in a prompt you can specify a weight. The higher
the weight, the more influence it will have on the image. Useful
weights usually fall between 0.0 and 1.0, with 0.5 to 1.0 being most
typical, though you can specify a higher weight if you wish. Like
models, each LoRA has a slightly different useful weight range and
will interact with other generation parameters such as the CFG scale,
step count, and sampler. The author of the LoRA will often provide
guidance on the best settings, but feel free to experiment. Be aware
that it often helps to reduce the CFG scale when using LoRAs.

## Installing LoRAs

This is very easy! Download a LoRA model file from your favorite site
(e.g. [CIVITAI](https://civitai.com)) and place it in the `loras`
folder in the InvokeAI root directory (usually `~/invokeai/loras` on
Linux/Macintosh machines, and `C:\Users\your-name\invokeai\loras` on
Windows systems). If the `loras` folder does not already exist, just
create it. The vast majority of LoRA models use the Kohya file format,
which is a type of `.safetensors` file.

You may change where InvokeAI looks for the `loras` folder by passing the
`--lora_directory` option to the `invoke.sh`/`invoke.bat` launcher, or
by placing the option in `invokeai.init`. For example:

```
invoke.bat --lora_directory=C:\Users\your-name\SDModels\lora
```

## Using a LoRA in your prompt

To activate a LoRA use the syntax `withLora(my-lora-name,weight)`
somewhere in the text of the prompt. The position doesn't matter; use
whatever is most comfortable for you.

For example, if you have a LoRA named `parchment_people.safetensors`
in your `loras` directory, you can load it with a weight of 0.9 with a
prompt like this one:

```
family sitting at dinner table withLora(parchment_people,0.9)
```

Add additional `withLora()` phrases to load more LoRAs.
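
For example, the following prompt loads two LoRAs at different weights (the second LoRA name, `film_grain`, is hypothetical):

```
family sitting at dinner table withLora(parchment_people,0.9) withLora(film_grain,0.5)
```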

You may omit the weight entirely to default to a weight of 1.0:

```
family sitting at dinner table withLora(parchment_people)
```

If you watch the console as your prompt executes, you will see
messages relating to the loading and execution of the LoRA. If things
don't work as expected, note down the console messages and report them
on the InvokeAI Issues pages or Discord channel.

That's pretty much all you need to know!

## Training Kohya Models

InvokeAI cannot currently train LoRA models, but it can load existing
ones and use them to generate images. While there are several LoRA
model file formats, the predominant one is ["Kohya"
format](https://github.com/kohya-ss/sd-scripts), written by [Kohya
S.](https://github.com/kohya-ss). InvokeAI provides support for this
format. For creating your own Kohya models, we recommend the Windows
GUI written by former InvokeAI-team member
[bmaltais](https://github.com/bmaltais), which can be found at
[kohya_ss](https://github.com/bmaltais/kohya_ss).

We can also recommend the [HuggingFace DreamBooth Training
UI](https://huggingface.co/spaces/lora-library/LoRA-DreamBooth-Training-UI),
a paid service that supports both Textual Inversion and LoRA training.

You may also be interested in [Textual
Inversion](TEXTUAL_INVERSION.md) training, which is supported by
InvokeAI as a text console and command-line tool.

1 change: 1 addition & 0 deletions docs/index.md
@@ -159,6 +159,7 @@ This method is recommended for those familiar with running Docker containers
- [Inpainting](features/INPAINTING.md)
- [Outpainting](features/OUTPAINTING.md)
- [Adding custom styles and subjects](features/CONCEPTS.md)
- [Using LoRA models](features/LORAS.md)
- [Upscaling and Face Reconstruction](features/POSTPROCESS.md)
- [Embiggen upscaling](features/EMBIGGEN.md)
- [Other Features](features/OTHER.md)