docs/development/releasing.md (+2 -2)
@@ -7,11 +7,11 @@ Always create a release branch for the releases, for example branch `release-0.5
## Release Steps
1. Cherry-pick fixes to the release branch, skip this step if it's the first release in the branch.
-1. Run `make test` to make sure all test test cases pass locally.
+1. Run `make test` to make sure all test cases pass locally.
1. Push to the remote branch, and make sure all the CI jobs pass.
1. Run `make prepare-release VERSION=v{x.y.z}` to update the version in the manifests, where `x.y.z` is the expected new version.
1. Follow the output of the last step to confirm that all the changes are expected, and then run `make release VERSION=v{x.y.z}`.
-1. Follow the output, push a new tag to the release branch, Github actions will automatically build and publish the new release, this will take around 10 minutes.
+1. Follow the output and push a new tag to the release branch; GitHub Actions will automatically build and publish the new release, which takes around 10 minutes.
1. Test the new release, make sure everything is running as expected, and then recreate a `stable` tag against the latest release.
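The sequence above can be sketched as a shell dry-run. The branch `release-0.5` and version `v0.5.1` are example values, and the script only prints each command instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch of the release steps above; prints the commands
# rather than running them. The branch and version are examples.
VERSION="v0.5.1"
BRANCH="release-0.5"

echo "git checkout ${BRANCH}"                      # work on the release branch
echo "make test"                                   # all test cases pass locally
echo "git push origin ${BRANCH}"                   # CI jobs must pass
echo "make prepare-release VERSION=${VERSION}"     # update version in manifests
echo "make release VERSION=${VERSION}"
echo "git tag ${VERSION}"                          # tag triggers the build
echo "git push origin ${VERSION}"
```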
docs/operations/installation.md (+1 -1)
@@ -74,7 +74,7 @@ To do managed namespace installation, besides `--namespaced`, add `--managed-nam
By default, the Numaflow controller is installed with the `Active-Passive` HA strategy enabled, which means you can run the controller with multiple replicas (defaulting to 1 in the manifests).
-To turn off HA, add following environment variable to the deployment spec.
+To turn off HA, add the following environment variable to the deployment spec.
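As a sketch, the change is an `env` entry on the controller Deployment. The variable name below is an assumption based on Numaflow's leader-election-based HA; check the installation doc for the exact name:

```yaml
# Sketch only: NUMAFLOW_LEADER_ELECTION_DISABLED is an assumed name;
# use the variable documented in the installation guide.
spec:
  template:
    spec:
      containers:
        - name: controller-manager
          env:
            - name: NUMAFLOW_LEADER_ELECTION_DISABLED
              value: "true"
```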
docs/specifications/edges-buffers-buckets.md (+3 -3)
@@ -6,7 +6,7 @@
`Edge` is the connection between the vertices; specifically, an `edge` is defined in the pipeline spec under `.spec.edges`. Whether the `to` vertex is a Map or a Reduce with multiple partitions, it is considered one edge.
In the following pipeline, there are 3 edges defined (`in` - `aoti`, `aoti` - `compute-sum`, `compute-sum` - `out`).
```yaml
apiVersion: numaflow.numaproj.io/v1alpha1
# ... (rest of the pipeline spec elided in this diff)
```
@@ -54,9 +54,9 @@ Each `edge` could have a name for internal usage, the naming convention is `{pip
`Buffer` is `InterStepBuffer`. Each buffer has an owner, which is the vertex that reads from it. Each `udf` and `sink` vertex in a pipeline owns a group of partitioned buffers. Each buffer has a name with the naming convention `{pipeline-name}-{vertex-name}-{index}`, where the `index` is the partition index, starting from 0. This naming convention applies to the buffers of both map and reduce udf vertices.
-When multiple vertices connecting to the same vertex, if the `to` vertex is a Map, the data from all the from vertices will be forwarded to the group of partitoned buffers round-robinly. If the `to` vertex is a Reduce, the data from all the from vertices will be forwarded to the group of partitoned buffers based on the partitioning key.
+When multiple vertices connect to the same vertex, if the `to` vertex is a Map, the data from all the from vertices will be forwarded to the group of partitioned buffers in a round-robin manner. If the `to` vertex is a Reduce, the data from all the from vertices will be forwarded to the group of partitioned buffers based on the partitioning key.
-A Source vertex does not have any owned buffers. But a pipeline may have multiple Source vertices, followed by one vertex. Same as above, if the following vertex is a map, the data from all the Source vertices will be forwarded to the group of partitoned buffers round-robinly. If it is a reduce, the data from all the Source vertices will be forwarded to the group of partitoned buffers based on the partitioning key.
+A Source vertex does not own any buffers, but a pipeline may have multiple Source vertices followed by one vertex. As above, if the following vertex is a Map, the data from all the Source vertices will be forwarded to the group of partitioned buffers in a round-robin manner. If it is a Reduce, the data from all the Source vertices will be forwarded to the group of partitioned buffers based on the partitioning key.
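The naming convention and routing rules above can be sketched in Python. The helper names are hypothetical, and the CRC32 hash is only illustrative of key-based partitioning, not Numaflow's actual partitioner:

```python
import zlib


def buffer_names(pipeline: str, vertex: str, partitions: int) -> list:
    # Buffers follow {pipeline-name}-{vertex-name}-{index}, index from 0.
    return [f"{pipeline}-{vertex}-{i}" for i in range(partitions)]


def pick_partition_map(msg_index: int, partitions: int) -> int:
    # Map to-vertex: messages rotate round-robin across the partitions.
    return msg_index % partitions


def pick_partition_reduce(key: str, partitions: int) -> int:
    # Reduce to-vertex: the partitioning key deterministically selects
    # a partition, so the same key always lands on the same buffer.
    return zlib.crc32(key.encode()) % partitions
```

The key-based variant is what keeps a Reduce correct: all messages sharing a key are read by the same partition owner.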
docs/user-guide/reference/multi-partition.md (+1 -1)
@@ -7,7 +7,7 @@ that the JetStream is provisioned with more nodes to support higher throughput.
Since partitions are owned by the vertex reading the data, to create a multi-partitioned edge we need to configure the vertex reading the data (to-vertex) to have multiple partitions.
The following code snippet provides an example of how to configure a vertex (in this case, the `cat` vertex) to have multiple partitions, which enables it to read at a higher throughput.
docs/user-guide/reference/side-inputs.md (+1 -3)
@@ -3,8 +3,6 @@
For an unbounded pipeline in Numaflow that never terminates, there are many cases where users want to update a configuration of a UDF without restarting the pipeline. Numaflow enables this with the `Side Inputs` feature, which broadcasts changes to vertices automatically.
The `Side Inputs` feature achieves this by allowing users to write custom UDFs to broadcast changes to the vertices that are listening for updates.
### Using Side Inputs in Numaflow
The Side Inputs are updated based on a cron-like schedule, specified in the pipeline spec with a trigger field.
Similarly, this can be written in [Python](https://github.com/numaproj/numaflow-python/blob/main/examples/sideinput/simple-sideinput/example.py)
and [Java](https://github.com/numaproj/numaflow-java/blob/main/examples/src/main/java/io/numaproj/numaflow/examples/sideinput/simple/SimpleSideInput.java) as well.
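The cron-like trigger mentioned above might be declared in the pipeline spec roughly as follows; this is a sketch, and the side input name and image are hypothetical:

```yaml
# Sketch: a side input refreshed every 5 minutes by a user container.
spec:
  sideInputs:
    - name: myticker           # hypothetical side input name
      container:
        image: my-sideinput-image:v1   # hypothetical image
      trigger:
        schedule: "*/5 * * * *"        # cron-like update schedule
```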
After performing the retrieval/update, the side input value is then broadcast to all vertices that use the side input.
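The retrieve-then-broadcast step can be sketched generically in Python. This is not the Numaflow SDK API; `fetch_config` is a hypothetical stand-in for the user's retrieval logic:

```python
def side_input_handler(fetch_config, last_value):
    """Return (broadcast, value): broadcast only when the fetched value
    differs from the cached one, so unchanged side inputs are not re-sent
    to the listening vertices."""
    value = fetch_config()          # user-defined retrieval (hypothetical)
    if value == last_value:
        return (False, last_value)  # no-op: vertices keep the cached value
    return (True, value)            # new value is broadcast to all vertices
```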