# Tutorial 2 - Using Annotators and Pipelines

The current pipeline used by RoboKudo is kept simple to illustrate the basic idea. In this tutorial you will learn how to add an Annotator to the PPT. To do this, some knowledge about the RoboKudo codebase is required.

Generally, the source code for RoboKudo is located in the folder `src/robokudo/robokudo/src/robokudo`, which can be accessed through the folder view on the left.

![](../img/folder-view.png)

For this tutorial only the `annotators` and `descriptors/analysis_engines` folders in this directory are relevant. The folder `descriptors/analysis_engines` contains the various Analysis Engines (AEs). Analysis Engines can be seen as a collection of PPTs, but they can also contain the definition of just a single PPT. The `annotators` folder contains the various Annotators used in RoboKudo.

- **Task 2-1:** Take a look at the implementation of the AE called `web_from_storage_binder.py` with respect to the following questions:
    - How are Annotators added to the PPT?
    - How are Annotators imported into the AE?
    - How are Sequences created?
    - How are Sequences added to the PPT?
- **Task 2-2:** Add the Annotator called **ClusterColorAnnotator** to the end of the `object_detection` Sequence. Then:
    - Restart RoboKudo
    - Send a query of type **detect**
    - View the output of the newly added Annotator

:::{important}
:class: important
A pipeline should contain the Sequences `base_tree` as the **first member** and `reply_tree` as the **last member**. `base_tree` contains Annotators for reading in image data and perception tasks, and `reply_tree` is responsible for returning perception task results and for cleanup.
:::

:::{important}
:class: important
If we ask you to change the AE or PPTs in the following tutorials, always refer to `web_from_storage_binder.py` unless noted otherwise.
:::

Now that you have added the ClusterColorAnnotator to the PPT, it should be able to recognize the dominant colors of detected objects. This should also be visible in the output image of the Annotator, which should look something like this:

![Five objects on a kitchen counter highlighted with rectangles colored in the main color of the objects](../img/02-clustor-color-output.png)

If you have completed the task correctly, your PPT should look something like this:

![A PPT including the ClusterColorAnnotator](../img/02-clustor-color-ppt.png)

When hovering over an object in the details tree next to the output image, you should also be able to see the **SemanticColor** annotations that were added to the **ObjectHypothesis**. As an example:

![The details tree showing ObjectHypothesis with one of them showing details about their annotations, which include SemanticColor annotations](../img/02-color-annotation.png)

:::{admonition} The final sequence could look something like this:
:class: dropdown hint
```python
object_detection = py_trees.Sequence("ObjectDetection")
object_detection.add_children(
    [
        PointcloudCropAnnotator(descriptor=pcc_descriptor),
        PlaneAnnotator(),
        PointCloudClusterExtractor(),
        ClusterColorAnnotator()
    ]
)
```
:::
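
Besides extending the Sequence itself, the change in `web_from_storage_binder.py` also requires importing the new Annotator. The following is a minimal sketch, assuming the class lives in a module named `cluster_color` under the `annotators` folder; check that folder for the actual module and class location before copying this.

```python
# Assumed import path -- verify the real module name in the annotators folder
from robokudo.annotators.cluster_color import ClusterColorAnnotator

# Append the new Annotator to the end of the existing object_detection Sequence
object_detection.add_child(ClusterColorAnnotator())
```

Appending with `add_child` (a py_trees composite method) is equivalent to listing the Annotator last inside the `add_children([...])` call shown in the hint above.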