Pipeline
Pipelines are the core engine for moving and transforming data. They define data flows that connect sources to targets, applying logic and transformations along the way. As an example, the pipeline below executes once a second, reads modeled data from Ignition, and then sends it both to a local UNS and to Snowflake, buffering the Snowflake writes to make them more efficient.
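For illustration only, that flow could be modeled as a small polling loop. The sketch below is a conceptual model, not product code; the `read_ignition_model`, `publish_to_uns`, and `write_to_snowflake` callables are hypothetical stand-ins.

```python
import time

def run_example_pipeline(read_ignition_model, publish_to_uns, write_to_snowflake,
                         batch_size=100):
    """Illustrative model of the example pipeline: poll once a second,
    fan the data out to a UNS, and buffer rows before writing to Snowflake."""
    buffer = []
    while True:
        value = read_ignition_model()    # read modeled data from Ignition
        publish_to_uns(value)            # immediate write to the local UNS
        buffer.append(value)             # buffer Snowflake writes
        if len(buffer) >= batch_size:
            write_to_snowflake(buffer)   # one efficient batched write
            buffer.clear()
        time.sleep(1)                    # polled trigger: fires once a second
```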
Pipelines consist of Events, Triggers, and Stages, which are described below.
Events
When data enters a pipeline it’s called an event. An event holds both a value and metadata. For example, if an event comes from an OPC UA tag read, the value of the tag would be the event value and the tag’s source address would be metadata. Think of the event value as what is sent over the wire when a pipeline write occurs. Event metadata is additional information carried with the event that can be used in pipeline logic but is not part of the value.
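As a mental model, an event can be pictured as a small value-plus-metadata record. The sketch below is illustrative only; the `Event` class and its field names are assumptions, not the product’s internal representation.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Event:
    """Illustrative model of a pipeline event (field names are hypothetical)."""
    value: Any                                     # what is sent over the wire on a write
    metadata: dict = field(default_factory=dict)   # extra context usable in pipeline logic

# An event produced by an OPC UA tag read might look like:
tag_event = Event(value=42.7, metadata={"source": "ns=2;s=Line1/Temperature"})
```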
For a more concrete example, consider the Filter stage below shown in Debug mode. Here we’re sending an event through the Filter stage to remove the ‘type’ attribute from the value and place it in the metadata. On the bottom left you can see the event entering the Filter stage, and the bottom right shows the event that exits the stage, with the differences highlighted by the coloring.
When this event reaches the toUNS or toSnowflake write stages, only the ‘sample’ attribute is included in the payload. The metadata could be used to control the MQTT topic or other settings.
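For illustration, here is the same transformation expressed against the hypothetical `Event` model above; `filter_stage` is a sketch of the idea, not the actual Filter stage implementation.

```python
def filter_stage(event: Event, attribute: str = "type") -> Event:
    """Illustrative filter: move one attribute from the value into the metadata."""
    new_value = dict(event.value)
    new_meta = dict(event.metadata)
    new_meta[attribute] = new_value.pop(attribute)  # removed from value, kept in metadata
    return Event(value=new_value, metadata=new_meta)

before = Event(value={"sample": 98.6, "type": "temperature"})
after = filter_stage(before)
# after.value    -> {"sample": 98.6}          only 'sample' is written downstream
# after.metadata -> {"type": "temperature"}   available to drive e.g. the MQTT topic
```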
Events are generated by Triggers and processed sequentially through Stages.
Triggers
Triggers inject events into the pipeline. For example, a Polled Trigger generates an empty event every time it fires, and an Event Trigger subscribes to an MQTT topic and generates an event any time data is published on that topic.
All pipeline events generated by triggers are sent to the start stage (play icon), which is the entry point of the pipeline. Note that pipelines can have one or more triggers.
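Conceptually, a polled trigger behaves like a generator that emits an empty event each time it fires. The sketch below reuses the hypothetical `Event` model from above; it is a mental model, not the product’s trigger API.

```python
import time
from typing import Iterator

def polled_trigger(interval_seconds: float) -> Iterator[Event]:
    """Illustrative polled trigger: emit an empty event each time it fires."""
    while True:
        yield Event(value=None, metadata={"trigger": "polled"})  # empty event
        time.sleep(interval_seconds)

# Every event a trigger yields enters the pipeline at the start stage.
```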
Stages
Stages perform I/O or transformations on events. For example, the Breakup stage takes an event value that’s an Array and breaks it into one event for each row in the Array. The Read stage uses the event value or metadata to issue a read (e.g., using a machineID to read from an OPC UA server) and outputs an event containing the result of the read.
Stages take a single event in and can access either the event value or the metadata. Stages can output 0..N events. When no events are output by a stage, pipeline execution stops for that event.
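The 0..N behavior is easy to picture as a function from one input event to a stream of output events. Below is a sketch of a Breakup-style stage, again using the hypothetical `Event` model; it is a conceptual illustration, not the stage’s actual implementation.

```python
from typing import Iterator

def breakup_stage(event: Event) -> Iterator[Event]:
    """Illustrative Breakup stage: one output event per row of an array value."""
    for row in event.value:                       # the value is assumed to be a list
        yield Event(value=row, metadata=dict(event.metadata))

batch = Event(value=[{"id": 1}, {"id": 2}, {"id": 3}])
events = list(breakup_stage(batch))               # three events, one per row
# A stage that yields no events ends pipeline execution for that event.
```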
Creating a Pipeline
- Click Pipeline in the configuration’s Main Menu and then click New Pipeline to get started.
- Enter a Name to represent the pipeline. Optionally enter a Description, Tags, and Grouping.
- Choose to create an Empty Pipeline or choose Build Flow.
- Empty Pipeline creates a pipeline with no triggers or stages that must be manually configured.
- Build Flow continues the wizard and displays flow trigger settings to create a simple pipeline that reads and writes.
- Once the Pipeline is created, navigate to Stages in the top left of the Pipeline to view the available stages, and drag and drop Triggers or other Stages onto the Pipeline.
Building & Debugging Pipelines
Pipeline Debug mode allows you to simulate events flowing through the pipeline and visually inspect what each stage does to the event. This is a very useful and intuitive way to understand and work with pipelines and stages.
For more information on how to use the feature, see the Debug section.
Pipeline Status
The list view shows the name, enabled status, processing status, active features, and tags for each pipeline. The name, enabled, and tags columns can be sorted by clicking on the column header. Triggers can be enabled or disabled on multiple pipelines at the same time by selecting the pipelines (using the checkboxes on the left) and then choosing an action from the Actions dropdown.
Column | Description |
---|---|
Name | The name of the pipeline or group |
Enabled | Shows a play button if the pipeline has any enabled triggers or if it is Callable. |
Status | Based on the last execution of the pipeline: Not Started (the pipeline has never run), Good (the pipeline is executing successfully), Error (an error occurred while processing), or Warning (the pipeline queue is growing faster than it is being processed). |
Features | The feature icons indicate whether a pipeline can be called internally, called from the API, or has track changes enabled. |
Pipeline Stage Status
Each pipeline stage shows a status when viewing the pipeline, with the same status options as the overall pipeline status (i.e., Good, Not Started, Error, etc.). Each stage also shows a total execution count, which is the number of events that have passed through the stage since it was last saved.
Pipeline Errors
When errors occur in a pipeline’s execution, the stage where the error occurred and the overall pipeline go into Error status. This can be seen on the pipeline list page and when opening the pipeline. Errors can include bad I/O (a failed read or write), errors in transformation logic, and so on.
For more details on errors and overall error handling strategies for large deployments, see Pipeline Error Handling.
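As a rough illustration of this behavior (an assumption about the semantics, not the product’s implementation), a failing stage can be pictured as flagging both itself and the pipeline while the offending event stops flowing:

```python
def run_stage(stage, event: Event, status: dict) -> list:
    """Illustrative error handling: a stage failure marks the stage and the
    overall pipeline as Error, and the event stops flowing."""
    try:
        outputs = list(stage(event))
        status[stage.__name__] = "Good"
        return outputs
    except Exception as exc:                # bad I/O, transform logic errors, etc.
        status[stage.__name__] = "Error"
        status["pipeline"] = "Error"        # surfaces on the pipeline list page
        print(f"{stage.__name__} failed: {exc}")
        return []                           # no outputs: execution stops here
```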
Helpful Features
Below is a collection of features that make working with pipelines easier.
Auto Layout
For large pipelines, the auto-layout feature can help keep stages organized. You can find it in the lower left-hand corner of the canvas.
Multi-Select
To select multiple stages at once, either hold left-shift and drag over the desired stages, or hold left-control and click each stage.
Cloning Stages
To clone a stage, click on the desired stage and press the clone button.
Pipelines are a powerful tool to move and transform data. See the links below to learn more about pipeline components, debugging, monitoring, and more.