ImageNet Designer - Graphical Programming Tutorial

= Basic Graphical Tutorials (2D) =
The basic graphical tutorials are an introduction to ImageNets and are based on 2D calculations.

Task 1: Load Image and Conversion (15 minutes)
What you learn: Load image, conversion to gray, image dialog, save ImageNet

Plugins needed: Load, Conversion

Start ImageNet Designer by clicking on the ImageNet Designer logo.


 * 1) Right click on the Drawing Area and select Load $$\rightarrow$$ Load Image. You have created a new block which can load images. The selected block is visualized by the blue highlight.
 * 2) Left Click on the Drawing Area. The block loses its highlight.
 * 3) Click on the Load Image block to select it. Its properties then appear on the right side in the Block Properties area. Double click on the Filename property and select the image you want to load (e.g. Bumblebee2/LibraryScene-01-L.png). The selected image becomes visible in a small Preview Box on the block. [[Image:ImageNetBlock.png|thumb|right|ImageNet Block]] Double click on the Preview Box to see the loaded image in full size. You can zoom the image with the scroll wheel of your mouse and drag it with a left click. If you hover the mouse over the image without moving it, information about the pixel under the cursor is shown. Close this window again or send it to the background.
 * 4) Create a new block: Conversion $$\rightarrow$$ Color 2 Gray. This block has a color input port and a gray output port.
 * 5) To connect both blocks, you have to click and hold the left mouse button from the output port (right side) of the Load Image block and release it on the input port (left side) of the Color 2 Gray block. The result is visible in the figure on the right. [[Image:Block Connect.png|thumb|right|ImageNet, which consists of two blocks and a connection]]
 * 6) To get a gray version of your image, you can either process the whole net by pressing the [[Image:Button.png]] (Process Net in the Toolbar) button or you can process only one block when you right click on a block and select Process Block. To process one block, it is also possible to select one block and press the [[Image:ProcessBlock.png]] (Process Block in the Toolbar) button.
 * 7) Press the [[Image:SaveAs.png]] (Save As in the Toolbar) button to save the net under the name "task1". The postfix .imagenet will be added.
 * 8) Close ImageNet Designer.

Task 2: Image Enhancements (15 minutes)
What you learn: Histogram, Draw Histogram, Cumulative Histogram, Histogram Equalization, Joiner

Plugins needed: Load, Conversion, Feature Extraction, Preprocessing, Draw, Join


 * 1) Start ImageNet Designer and open your previously saved net. This can be done via the menu File $$\rightarrow$$ Recently Opened Files.
 * 2) Save the net under the name "task2".
 * 3) Add a new block: Feature Extraction $$\rightarrow$$ Histogram and connect its input with the output from Color 2 Gray.
 * 4) Change the image in the Load Image block to a weak contrast image, e.g. "Exercises/girl.jpg".
 * 5) Save your net by pressing the [[Image:Save.png]](save) button and press the "Process Net" button.
 * 6) The Histogram block and the other blocks should now be processed. Double click on the preview box of the Histogram block to see the probability of occurrences of each gray value in the image, which is a vector with 256 values [0 ... 255].
 * 7) In this form it is hard to interpret the result as a whole. For this reason, you should visualize the histogram with the block Draw $$\rightarrow$$ Draw Histogram. Now connect the Histogram block with the Draw Histogram block.
 * 8) Press Process Net.
 * 9) Visualize the histogram by double clicking on the preview box of the Draw Histogram block.
 * 10) Now enhance the input gray image with the block Preprocessing $$\rightarrow$$ Image Enhancements $$\rightarrow$$ Histogram Equalization. This block needs a Preprocessing $$\rightarrow$$ Image Enhancements $$\rightarrow$$ Cumulative Histogram as input. Use another Draw Histogram block to visualize the cumulative histogram. A value "i" in the cumulative histogram is the sum of all values "0 ... i" of the original histogram.
 * 11) Join both drawn histograms with a Join $$\rightarrow$$ Joiner block. When you double click on the preview box of the processed Joiner block you can switch between the two histograms over the registers at the top (Layer 0: COLOR and Layer 1: COLOR). [[Image:JoinImages.png|thumb|none|This comparison ImageNet shows the original and the cumulative histogram]]
 * 12) Save this ImageNet.
 * 13) Close ImageNet Designer
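The computations behind the Histogram, Cumulative Histogram and Histogram Equalization blocks used in this task can be sketched in a few lines of plain Python. This is an illustration only, not the actual block implementation; the real blocks operate on 2D images, so a flat list of gray values stands in here.

```python
def histogram(gray):
    """Count occurrences of each gray value [0 ... 255]."""
    h = [0] * 256
    for v in gray:
        h[v] += 1
    return h

def cumulative_histogram(h):
    """Value i is the sum of the histogram values 0 ... i."""
    c, total = [], 0
    for count in h:
        total += count
        c.append(total)
    return c

def equalize(gray):
    """Map each gray value through the scaled cumulative histogram,
    which spreads the values over the full range [0, 255]."""
    c = cumulative_histogram(histogram(gray))
    n = len(gray)
    lut = [round(255 * ci / n) for ci in c]
    return [lut[v] for v in gray]
```

After equalization, the cumulative histogram of the result is approximately a straight line, which is exactly what step 11 of this task demonstrates visually.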

Task 3: Embedded ImageNets (30 minutes)
What you learn: Embedding, SubNets, Deleting Blocks, Multiple Block Selection

Plugins needed: Load, Conversion, Preprocessing, Draw


 * 1) Start ImageNet Designer and create an Embedding $$\rightarrow$$ Executor Input on the left of an empty Drawing Area and four Embedding $$\rightarrow$$ Executor Output blocks on the right side.
 * 2) Add a Histogram block, a Cumulative Histogram block and two Draw Histogram blocks. Connect them as in the image below, so that you create an ImageNet which outputs the histogram and cumulative histogram of an input gray image, together with their visualizations. [[Image:Histogram.png|thumb|none|The histogram and cumulative histogram (data and image) of the gray input image are drawn in the output.]]
 * 3) Describe all Port blocks via their Help_Text property. This will help you later. (e.g. Executor Input: Gray Input Image, Output 0: Histogram Image, Output 1: Cumulative Histogram Image, Output 2: Histogram, Output 3: Cumulative Histogram)
 * 4) Open the ImageNet Overview Dock Widget by right clicking on the toolbar and then clicking on ImageNet Overview. The new widget should appear at the bottom right. Now write your name in the Author field, and in the Help Text field write an explanation of the net (e.g. This block calculates the histogram and cumulative histogram of a gray image).
 * 5) Save this ImageNet under SubNets/Feature Extraction with the name HistogramAnalysis.
 * 6) Load the ImageNet "task2". Save it as "task3".
 * 7) Remove the Histogram block, the Cumulative Histogram block, the Joiner block and the two Draw Histogram blocks by selecting them and pressing the delete key on the keyboard. You can select multiple blocks by left clicking and holding on the drawing area and moving the mouse. Like this you can draw a rectangle around all wanted blocks.
 * 8) Add an Embedding $$\rightarrow$$ ImageNetExecutor block. Setup the Filename property to the previously saved ImageNet SubNets/Feature Extraction/HistogramAnalysis. As you can see, the ImageNetExecutor block now has one input and four output ports, like defined before.
 * 9) Add a new Feature Extraction $$\rightarrow$$ SubNets $$\rightarrow$$ HistogramAnalysis.imagenet block. This is a shortcut to load a SubNet (the shortcut only works if you place the ImageNet in one of the SubNets folders). Both ways are equivalent. Take a look at the help texts of the ports: move the mouse over a port icon and keep it still, and your previously defined help text will appear. The general Help Text you defined before is shown when you hover anywhere over the block except its ports. Note also that your name appears under Block Infos - Author when you select a HistogramAnalysis block.
 * 10) Connect the Color 2 Gray block with one HistogramAnalysis block and its cumulative histogram output to the Histogram Equalization block. Analyze the Histogram Equalization with the other HistogramAnalysis block. [[Image:Excuter.png|thumb|none|With SubNets you can re-use an ImageNet and reduce screen space usage.]]
 * 11) The analysis of the histogram-equalized image shows that its cumulative histogram has been linearized.
 * 12) Save the ImageNet and close the ImageNet Designer.

Calibration
What you learn: Intrinsic Calibration, Undistortion, Extrinsic Calibration

Plugins needed: Load, Calibration, Draw

Everything described below is based on the OpenCV camera calibration documentation.

The so-called pinhole camera model describes the relation between a real world point in 3D and a 2D point on the image plane of the camera. Based on this model a camera has to be calibrated on the one hand for its internal (intrinsic) parameters and on the other for its external (extrinsic) parameters.
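The pinhole model can be summarized in a small Python sketch: a 3D point (X, Y, Z) in camera coordinates maps to the pixel (u, v) via the focal lengths fx, fy and the center point cx, cy. The parameter values in the usage note below are illustrative, not taken from any real calibration.

```python
def project(point_3d, fx, fy, cx, cy):
    """Project a 3D point given in camera coordinates onto the image
    plane using the pinhole model: u = fx * X/Z + cx, v = fy * Y/Z + cy."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (fx * x / z + cx, fy * y / z + cy)
```

For example, with fx = fy = 500 and a center point of (320, 240), a point on the optical axis such as (0, 0, 1) projects exactly onto the center pixel, while points off the axis move away from it in proportion to X/Z and Y/Z.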

Info: Chessboard Calibration Pattern
To calibrate a camera we use a chessboard calibration pattern. It is assumed that the pattern is perfectly planar and that the squares of the pattern have equal width and height.

In the image below, you can see that the correct relation between odd and even squares of the chessboard is important to obtain a unique solution.



Counting of the chessboard corners starts at a black corner and proceeds clockwise along the width of the chessboard. Only if there are four white corners does counting start at a white corner (this can only be the case if the pattern is odd x odd).

Task 4: Chessboard Detection (10 minutes)

 * 1) Create a Load Image block and load one image from the folder Calibration (e.g. BB3_4.jpg).
 * 2) Convert the image to gray and detect the chessboard corners with the block Calibration $$\rightarrow$$ Chessboard Detection.
 * 3) Visualize the found chessboard corners on the original image with the block Draw $$\rightarrow$$ Draw Chessboard.

Task 5: Intrinsic Calibration (10 minutes)
The intrinsic camera parameters describe a camera internally. Included are the field of view (focal length and center point) of a camera and how the optics distort (distortion coefficients) the real world onto the image plane.



Because we know how the points in the image (the corners of the chessboard pattern) lie relative to each other on a plane, the field of view and the distortion can be estimated.


 * 1) Create a Load Image block and load a set of images from the folder Calibration. (You have to select multiple images at once!)
 * 2) Create the block Calibration $$\rightarrow$$ Calibrate Camera Intrinsic and connect the input images with the intrinsic calibration block. You can tell the block to display found chessboards over the property Draw_ChessboardCorners.
 * 3) After execution, an intrinsic matrix and a distortion coefficient vector have been created. You can inspect the result by double-clicking on the preview field of the upper output port.
 * 4) Try out the intrinsic calibration with different amounts of images. Normally, too few images (below 10) lead to a wrong calibration result.

Task 6: Undistortion (10 minutes)
After the intrinsic calibration we can use the estimated distortion coefficients to calculate an undistorted version of an image.
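As a rough illustration of what undistortion involves, the following plain-Python sketch applies the OpenCV radial (k1, k2) and tangential (p1, p2) distortion model to normalized image coordinates and then inverts the radial part iteratively, since no closed form exists. The function names and the fixed iteration count are choices made for this example; the Undistort Image block itself operates on whole images.

```python
def distort(x, y, k1, k2, p1=0.0, p2=0.0):
    """Apply the radial and tangential distortion model to the
    normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

def undistort(x_d, y_d, k1, k2, iterations=20):
    """Invert the radial distortion by fixed-point iteration:
    start at the distorted point and repeatedly divide by the
    radial factor evaluated at the current estimate."""
    x, y = x_d, y_d
    for _ in range(iterations):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 * r2
        x = x_d / radial
        y = y_d / radial
    return x, y
```

For moderate distortion coefficients the iteration converges quickly, so distorting a point and undistorting it again recovers the original coordinates to high precision.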


 * 1) Use the net from the previous task.
 * 2) Load one single distorted image in a new Load Image block. (e.g. Images/Calibration/BB3_4.jpg)
 * 3) Use a Join $$\rightarrow$$ Join Image and Matrices block to add the calculated calibration parameters to the single image. After processing, double click on its preview image to visualize the images and check Show Matrices. The single image now carries the intrinsic matrix and the distortion coefficient vector; the extrinsic matrix is still empty.
 * 4) Add a block Calibration $$\rightarrow$$ Undistort Image.
 * 5) Join the original and the undistorted image, so you can switch back and forth between the images and compare the result.



Task 7: Extrinsic Calibration (10 minutes)
For the extrinsic calibration we need to match 2D points with known 3D points. If we use a chessboard calibration pattern, we have to tell the OpenCV calibration algorithm which 2D point corresponds to which 3D coordinate.

In our example, we set the first corner of the chessboard to the coordinate (0,0,0) and the other corners relative to it, depending on the chessboard element size. It is important that the 2D points and the 3D points are in the same order to obtain a correct correspondence.
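This ordering convention can be sketched in plain Python with a hypothetical helper that generates the 3D object points of the inner chessboard corners in the same row-major order in which the 2D corners are detected:

```python
def chessboard_object_points(corners_x, corners_y, square_size):
    """Generate the 3D coordinates of the inner chessboard corners.
    The first corner is (0, 0, 0), all corners lie in the z = 0 plane
    of the pattern, and the row-major order matches the order of the
    detected 2D corners."""
    points = []
    for row in range(corners_y):
        for col in range(corners_x):
            points.append((col * square_size, row * square_size, 0.0))
    return points
```

With a square size of 0.025 m, for example, the corner right of the origin gets the coordinate (0.025, 0, 0) and the corner below it (0, 0.025, 0).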




 * 1) Use the net from the previous task.
 * 2) Remove the Load Image block with the single image.
 * 3) Connect the multiple images Load Image with the Join Image and Matrices block.
 * 4) Remove the connections to the Joiner block.
 * 5) Add a new Calibrate Camera Extrinsic block and set its input to the output of the Undistort Image block.
 * 6) Join both outputs of the new block in the Joiner.
 * 7) Process the net and inspect the Joiner output.
 * 8) Switch to 3D view mode on the top left. You can move the images away from the camera origins. Move the sliders on the bottom so that the images go through the world origin to measure the distance between camera and chessboard.





= Advanced Graphical Tutorials (3D) =
The advanced graphical tutorials cover 3D calculations.

Multiple Views (15 minutes)
What you learn: Join Images, 3D-View


 * 1) Start ImageNet Designer and load the image Multiple_Views/2Books-L.png with a Load Image block.
 * 2) Double click on the preview image of this block and switch to the 3D view (on the left side of the dialog). The big origin (the three colored arrows; green: x direction, red: y direction, blue: z direction) is the world origin. The small origin is the location and orientation of the camera which took the image (also called a Frame). The lines going out of the camera origin are the edges of the camera's field of view. When a camera captures an image, it is not clear where in 3D space the pixels are; it is only clear that they lie somewhere on a ray out of the camera origin. To emphasize this, the camera image can be moved with a slider below the 3D view.
 * 3) You can use the left mouse button to look around.
 * 4) The right mouse button moves the viewpoint sideways.
 * 5) Use the scroll wheel to zoom.
 * 6) Load the image Multiple_Views/2Books-R.png with another Load Image block.
 * 7) Join both images with a Join Images block and execute the net.
 * 8) Double click on the preview image of the Join Images block and change to 3D. See how the 3D view now contains both camera frames (the images were captured by a stereo camera where the two cameras are oriented parallel to each other and have a distance of 12 cm). Over the sliders below the 3D view you can move the two images independently.
 * 9) Load the image Multiple_Views/2Books_BlueFox_1.png with another Load Image block.
 * 10) Join the new image with the output of the other Join Images block using another Join Images block. Like this, you can join as many block outputs as you want. Execute the whole net.
 * 11) Double click on the preview image of the new Join Images block and switch to the 3D view.

Rotation Matrix and Euler Angles (30 minutes)
What you learn: Create Frame, Rotation Matrix, Gimbal Lock


 * 1) Create a new block: Create $$\rightarrow$$ Create Frame. If you process this block with its default values and inspect the result in the 3D view, you will not see the resulting small origin, because it lies inside the big world origin.
 * 2) Change X,Y,Z to 0.2 to move the frame outside the world coordinate system.
 * 3) Double click on the preview image and inspect the 3D-View. You should see a small coordinate system moved away from the world coordinate system by 0.2 meters in all directions. (The grid on the x-y-plane of the world coordinate system has a 1 meter step width) [[Image:3D-Reconstruction2.png|thumb|none|3D-Reconstruction Example]]
 * 4) Play around with the rotation angles.
 * 5) Try out 15 degrees around one rotation axis at a time and open the 3D view. Compare it to the 2D view. A detailed explanation of the rotation matrix can be found on Wikipedia.
 * 6) When you change any parameter of the Create Frame block, a so-called Frame is calculated, which is a 4x4 matrix describing the translation and rotation as seen from the world coordinate system.
 * 7) When you double click on the preview image of this block, you can see that the 2D view shows this 4x4 matrix and, below it, its decomposition to explain the contents.
 * 8) To better understand the 3x3 rotation matrix (the top left part of the 4x4 frame), it is automatically decomposed into the partial rotations around the x, y and z axes. Mostly, these angles are the same as the parameters you typed in before. [[Image:RotationAngle.png|thumb|none|Same angle calculated as the angle entered before.]]
 * 9) When the angle around the y-axis (Theta) becomes +90° or -90°, the rotation axes around X and Z coincide. This is called a gimbal lock (explained in Wikipedia, and in video explanations of gimbal lock). In this special case, various combinations of angles result in the same rotation matrix. Because of this, the backwards computation is not unique and can produce different angles than the ones you entered. Generally, the rotation angle around X is set to 0 and only the rotation around Z is computed.
 * 10) If the angle around y (Theta) is 90°, the combined resulting angle around the other two axes is $$R = Z - X$$ [[Image:Angle.png|thumb|none|R = Z - X]] and if Theta is -90°, the resulting angle is $$R = Z + X$$ [[Image:-Angle.png|thumb|none|R = Z + X]]. Afterwards the result is normalized between 0 and 360°.
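The decomposition described in steps 8-10 can be reproduced in a small plain-Python sketch. This is a simplified model of what the Create Frame preview shows, under the assumption that the combined rotation is built as R = Rz · Ry · Rx; at Theta = ±90° the sketch, like the designer, sets the X angle to 0 and computes only the Z rotation.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_from_matrix(r):
    """Recover the (x, theta, z) angles from R = Rz * Ry * Rx.
    At theta = +/-90 degrees (gimbal lock) the backwards computation
    is not unique: x is set to 0 and only z is computed."""
    sy = -r[2][0]                     # equals sin(theta)
    if abs(sy) < 1.0 - 1e-9:
        theta = math.asin(sy)
        x = math.atan2(r[2][1], r[2][2])
        z = math.atan2(r[1][0], r[0][0])
    else:                             # gimbal lock: X and Z axes coincide
        theta = math.copysign(math.pi / 2, sy)
        x = 0.0
        z = math.atan2(-r[0][1], r[1][1])
    return x, theta, z
```

Away from the lock, the recovered angles match the ones used to build the matrix; at Theta = +90° the recovered Z angle is Z - X, as stated in step 10.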

Advanced Frame Transformations (15 minutes)
Frame Multiplication

Same as matrix multiplication. The first frame is moved in its own coordinate system by the second frame.

Frame Inversion

Same as Matrix Inversion. If you invert twice, you get back the original frame.

Frame Difference

Same as Inversion of the first Frame and multiplication of this with the second Frame.
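The three frame operations above can be sketched in plain Python for 4x4 frames given as nested lists. This is an illustration, not the SubNet implementations; the inversion uses the closed form for rigid transforms ([R | t] inverts to [Rᵀ | -Rᵀt]) rather than general matrix inversion.

```python
def frame_mul(a, b):
    """Frame Multiplication: the plain 4x4 matrix product a * b.
    Frame b is applied in the coordinate system of frame a."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def frame_inv(f):
    """Frame Inversion for a rigid transform [R | t]: the inverse is
    [R^T | -R^T t]. Inverting twice returns the original frame."""
    r_t = [[f[j][i] for j in range(3)] for i in range(3)]
    t = [f[0][3], f[1][3], f[2][3]]
    inv = [r_t[i][:3] + [-sum(r_t[i][k] * t[k] for k in range(3))]
           for i in range(3)]
    inv.append([0.0, 0.0, 0.0, 1.0])
    return inv

def frame_diff(a, b):
    """Frame Difference: the inverse of the first frame multiplied
    with the second frame."""
    return frame_mul(frame_inv(a), b)
```

The camera/world conversions described below follow the same pattern: multiplying the extrinsic frame with a frame in camera coordinates yields the frame in world coordinates, and multiplying the inverted extrinsic frame with a frame in world coordinates yields it in camera coordinates.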

Convert a Frame in Camera Coordinates to World Coordinates

This is necessary after a Marker detection.

Use these blocks:
 * 1) Extrinsic Matrix 2 Frame
 * 2) Frame Multiplication (SubNet)

Extract the extrinsic Matrix of the Image, in which the Frame was detected. Multiply the extrinsic Matrix with the Frame in Camera Coordinates. The result is the Frame in World Coordinates.

Convert a Frame in World Coordinates to Camera Coordinates

Use these blocks:
 * 1) Extrinsic Matrix 2 Frame
 * 2) Frame Inversion(SubNet)
 * 3) Frame Multiplication (SubNet)

Extract the extrinsic Matrix of the Image, in which the Frame was detected. Invert the extrinsic Matrix and multiply the result with the Frame in World Coordinates. The result is the Frame in Camera Coordinates.

Calculate the extrinsic matrix of camera A, over the extrinsic matrix of camera B and a common marker M

Similar to Frame Difference but here the second input to the multiplication is inverted at first.

Multiply the Frame M in World Coordinates (calculated by camera B) with the inverse of Frame M in coordinates of camera A.

Rotate around the World Coordinate System

Create a Frame with the rotation you want and multiply it with your original Frame. Be careful with the order of the multiplication!

Rotate around the World Coordinate System but keep the location

Move the Original Frame to the World Coordinate System, Multiply with the Rotation you want (Create Frame with only rotations) and move the Frame back to the original location.



Marker Detection (15 minutes, Linux only)
What you learn: Segmentation, Marker Detection, 2D Points, Draw 2D Points, Calculate Frame, Draw Frame, Frame Conversion

Remark: The ARToolKit Plugin is currently only working under Linux.


 * 1) Start ImageNet Designer and load the image Bumblebee2/LibraryScene-01-L.png with a Load Image block [[Image:Marker.png|thumb|none|Library Scene: A known marker is placed on a table]].
 * 2) Add a new block: Segmentation $$\rightarrow$$ Fuzzy Segmentation and connect both blocks.
 * 3) Adjust the property Threshold of the new block so that the marker (bottom left of the table) is clearly visible.
 * 4) Add a new block: Feature Extraction $$\rightarrow$$ ARToolKit Marker Detection and connect the segmented image with the marker detection block. The result is a set of 2D Points. If you double click on the preview box you can see the values of the points.
 * 5) The result of the marker detection is easier to see if you use the block Draw $$\rightarrow$$ Draw 2D Points. Connect the original image and the resulting 2D Points with this block. [[Image:Marker2.png|thumb|none|ImageNet of marker detection (left) and zoomed result image (right)]]
 * 6) As we know the size of the marker and that the marker is flat, we can calculate the camera position relative to the marker. This can be done with the block 3D-Reconstruction $$\rightarrow$$ ARToolKit Frame Calculation. The property is a matrix of eight 3D points of the marker. The default expects a 10cm marker with 2.5cm black frame [[Image:Marker3.png|thumb|none|3D frame projected on a 2D image]].
 * 7) With the block Draw $$\rightarrow$$ Draw Frame you can project the calculated frame from the previous block to the input image. Change the property Reference_Coordinate_System from World to Camera. The result is visible in the above image.
 * 8) Add a new block: Matrix Operations $$\rightarrow$$ Matrix Camera 2 World. This block converts the previously calculated frame from camera coordinates to world coordinates. Use the Join $$\rightarrow$$ Join Images block with the original image and the frame in world coordinates. Inspect the 3D view of the Join Images block and move the slider of the COLOR layer to move the image away from the camera origin. How far is the marker away from the camera? [[Image:Framedetection.png|thumb|none|Marker detection ImageNet]]

Robot Error Simulation


= Execution order in ImageNets =



Special functions of all blocks
 * 1) resetProcessData: is called automatically before process net.
 * 2) process: is called while process net and also if only a part of the net is processed.
 * 3) finishNetExecution: is called automatically after process net.

At the beginning it is checked whether the complete net is executed or just parts of it. When the complete net is executed, the resetProcessData method of all blocks is called to clear data from the last processing. This is important for blocks that, for example, collect scalars. Then all blocks to be processed are enqueued in a list and executed as a step. During this step all possible blocks are started, and the net waits for at least one block to finish before it tries to start another block. A block can be processed when all blocks feeding its incoming ports have been processed and do not need to be reprocessed in this step. If the ingoing port is a feedback port, it may be ignored if no other block can be executed. These rules ensure the correct order of block execution. This loop continues as long as at least one block is left to be processed. When a step is finished, the net checks for blocks to be reprocessed and enqueues them for the next step. A block is to be re-executed when a link of a previously executed block ends in an ingoing feedback port and the feedback conditions of that block are true; this detail is explained in a later part. These blocks and all blocks linked after them are enqueued. In this way, step after step is processed until no block requires reprocessing. At last the net calls the finishNetExecution method of all blocks. This can be used, for example, to save the collected frames of a video stream to the hard disk only after all processing is done, rather than after every step.
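The scheduling rules of a single step can be sketched in plain Python. This is a simplified sequential model: block names and the link representation are invented for this example, and the real designer may run all ready blocks in parallel rather than one after another.

```python
def process_net(blocks, links):
    """Sketch of the scheduling loop for one processing step.
    blocks: iterable of block names.
    links:  list of (source, destination, is_feedback_port) tuples.
    A block is ready when all blocks feeding its ports are processed;
    feedback ports are only waived when no other block can run."""
    processed, order = set(), []

    def ready(block, ignore_feedback):
        return all(src in processed
                   for (src, dst, fb) in links
                   if dst == block and not (fb and ignore_feedback))

    while len(processed) < len(blocks):
        runnable = [b for b in blocks
                    if b not in processed and ready(b, False)]
        if not runnable:  # stuck on feedback ports only: waive them
            runnable = [b for b in blocks
                        if b not in processed and ready(b, True)]
        if not runnable:
            raise RuntimeError("net contains an unresolvable cycle")
        for b in runnable:  # the real designer may run these in parallel
            processed.add(b)
            order.append(b)
    return order
```

A simple chain is processed source-first, and a block whose only unsatisfied input is a feedback port is started anyway once nothing else can run, which mirrors the waiving rule described above.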

If more than one block is processable, the process order of these blocks is random. If multi-threading is activated all of these blocks are processed at the same time utilizing all available CPUs.



Processable blocks

 * 1) Blocks which have no input ports
 * 2) Feedback Ports are ignored on first run
 * 3) Blocks of which all input ports are connected to blocks which have been processed

Process Current Block

 * 1) The current block is marked as needing processing and is processed (if it has input ports, old data from the last execution is used while processing)
 * 2) Depending on the setting "Process Next Blocks", the blocks following the current block also need to be processed (all blocks which are directly or indirectly connected to one of the outputs of the current block)

Process Net

 * 1) Process blocks which have no input ports (feedback ports are ignored in the first loop)
 * 2) Process blocks whose ingoing blocks have all been processed (this part is looped, as this set of blocks changes)
 * 3) Repeat until all blocks are processed

Feedback
Feedback Loops Video




 * 1) If at least one block has a feedback port, then after the normal processing, the blocks with a feedback port become processable again, together with all following blocks
 * 2) The feedback loop stops depending on the settings defined in the feedback dialog

Feedback in ImageNets always means the setting of block properties according to defined rules. The processing of feedback is illustrated in the flowcharts of Figure 2. Feedback consists of the two separate steps optimization and feedback. During the optimization part a defined range of values is tested for a set of properties, and optionally the best combination is set at the end of the optimization. It is possible to tie the continuation of the tests to conditions; if they fail, the optimization part is aborted. In the feedback part a set of rules is applied to set the properties. The result of each rule is either obtained from a mathematical formula or given as a string to be set. Rules may be limited to certain conditions, for example a certain range of an input port value. The feedback part is processed as long as the conditions hold true. If there are no conditions, they are always true. A general limit for the execution is given by a maximum execution count: if that count is reached, the block will not be processed again within the current processing of the complete net. It is also possible to set time limits for the block to keep the net execution within a set time frame.

Example: Setting of Properties inside a Sub-Net
(Figure 3)

Feedback can be used to easily set properties inside a sub-net from the net outside. Figure 3 shows a net with an Executor Input block connected to the feedback port of the Create Scalar block. This port was added by enabling feedback for the block in the ImageNet designer. The property Value of the block is set by an always active feedback rule to the first value passed into the ingoing feedback-port. The required settings in the feedback dialog can be seen in Figure 4.

(Figure 4: The feedback dialog settings to set always the Value to the first value inside port 0.)

Example: Combining Optimization and Feedback
(Figure 5: ImageNet for collecting a range of scalars.)

This section builds step by step a simple example that combines optimization and feedback. The necessary ImageNet is displayed in Figure 5. Feedback was enabled on a Create Scalar block in the ImageNet designer, whereby it received a Feedback port. To trigger a continued execution of the block, it needs an ingoing link. Since the block does not depend on another block to continue, it triggers itself. This is done by a link from the outgoing to the ingoing port. Since links to an ingoing Feedback port are ignored if no other block can be processed, the block can be executed by the ImageNet's process method. The outgoing port is also linked to a Collect Scalar block to gather the created range of scalars. Right now the Create Scalar block would trigger itself and create scalars, but the value would always be the same. To iterate over a range, e.g. from 0 to 10 in steps of 1, Optimization can be used. For this, Optimization must be enabled in the Feedback dialog and the Value property must be added with the New button. A numeric test range can be defined for the property after selecting it. Here the start value, end value and step size can be defined with formulas; in this example they are just constants. The settings are depicted in Figure 6 together with a continue condition. This condition breaks the Optimization part when the value in port 0 is greater than 5. Without the condition the scalars collected are [0 … 10]; with the condition they are [0 … 5].

(Figure 6: The Optimization settings in the Feedback dialog to iterate the property Value over a numeric range and break when the condition fails.)

Up to now the block generates the numbers 0 to 5 in steps of 1. Now a Feedback rule is defined to increase the value in steps of 2, and a condition limits the value to 15. First, Feedback must be enabled in the dialog. Building the Feedback condition works analogously to the condition for the Optimization, but in this case the property Value is compared and not the ingoing port. The rule is defined to apply to All conditions, and the formula for the result that is set to the property always adds 2 to the current value. With these settings the scalars collected are [0 … 5, 7, 9, 11, 13, 15]. Figure 7 shows the Feedback settings in the dialog.

(Figure 7: The Feedback dialog with the Feedback settings to set always the Value property as long as the condition holds true.)

This range of numbers could also be created by using one Feedback rule for the range [0 … 4], one for [5 … 13] and one above 13 to reset the value to 0. For testing a given range, Optimization is easier to use. With Optimization it is also possible to define a Quantifier for the quality of the current property settings and to memorize, e.g., the settings with the highest value. These memorized property settings can then be applied automatically after the tests are done and before the Feedback part starts. With this it is possible, for example, to first change the angles of a Pan-Tilt-Head to search the area for a marker and then skip to the Feedback to bring the marker into the centre of the image.
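The collected sequences discussed above can be checked with a small plain-Python simulation of the Create Scalar example. This models the dialog settings only, not the block itself, and it assumes the continue condition is evaluated before a value is emitted, so that the optimization stops after emitting 5.

```python
def run_create_scalar():
    """Simulate the combined Optimization and Feedback settings of the
    Create Scalar example (Figures 5-7)."""
    collected = []
    # Optimization: test Value over the numeric range 0..10 in steps
    # of 1; the continue condition aborts once the value exceeds 5.
    for value in range(0, 11):
        if value > 5:
            break
        collected.append(value)
    # Feedback: the rule Value = Value + 2 is applied as long as the
    # condition Value < 15 holds true.
    value = collected[-1]
    while value < 15:
        value += 2
        collected.append(value)
    return collected
```

Running the simulation reproduces the sequence [0 … 5, 7, 9, 11, 13, 15] stated above.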

Feedback programmed into block
The previous parts explained Feedback as applied to a block in the ImageNet designer. Blocks can also be built to use Feedback internally to fulfil their duties. An example for this is a video block that loads the frames of a given video file from the start to the end, one by one, and places the images in its outgoing port. For this functionality the block must overload some of the methods derived from CBlock. Feedback-related methods in CBlock:


 * 1) resetProcessData: For clearing data from previous executions of the complete net
 * 2) isFeedbackConditionTrue: To check, if the block should be executed any further
 * 3) finishNetExecution: For final actions after the net was processed

In this example the method resetProcessData must be used to restart the video file from the beginning. The isFeedbackConditionTrue method must check for the end of the video: it returns true as long as the video plays and false as soon as the end is reached. Both methods must also call the implementation of the base class (CBlock) at the beginning to maintain consistent behaviour with the other blocks. In the process method the block gets the current frame, puts it in the outgoing port and increases the position to load the next frame.
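Such a video block could look roughly like this as a plain-Python model. The real blocks derive from the CBlock class of the framework; the minimal base class, the attribute names and the list of frames standing in for a decoded video file are all illustrative.

```python
class CBlock:
    """Minimal stand-in for the ImageNets CBlock base class
    (the real base class has more responsibilities)."""
    def resetProcessData(self):
        pass
    def isFeedbackConditionTrue(self):
        return True
    def finishNetExecution(self):
        pass

class VideoBlock(CBlock):
    """Sketch of a video block that emits one frame per execution."""
    def __init__(self, frames):
        self.frames = frames          # stands in for the video file
        self.position = 0
        self.outgoing_port = None

    def resetProcessData(self):
        super().resetProcessData()    # keep base-class behaviour consistent
        self.position = 0             # restart the video from the beginning

    def isFeedbackConditionTrue(self):
        super().isFeedbackConditionTrue()
        # True as long as the video plays, False once the end is reached.
        return self.position < len(self.frames)

    def process(self):
        # Put the current frame in the outgoing port and advance.
        self.outgoing_port = self.frames[self.position]
        self.position += 1
```

Driving the block in a loop until isFeedbackConditionTrue returns false emits every frame exactly once, which is the behaviour the feedback mechanism provides during net execution.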