4  The command line interface

4.1 The command line parameters in detail
4.2 Parameters of the grabber driver
4.2.1 MME driver for Composite or S_Video devices
4.2.2 V4L2 driver for Composite, S_Video, and other devices
4.2.3 Aravis driver for GenICam devices
4.2.4 Firewire driver for digital cameras connected via firewire
4.2.5 iCube driver for cameras conforming to the NET iCube interface
4.2.6 MVimpact driver for cameras conforming to the MATRIX VISION impact acquire interface
4.2.7 uEye driver for cameras conforming to the IDS uEye interface
4.2.8 Unicap driver for various devices
4.3 Configuration files

4.1  The command line parameters in detail

  > icewing -h

gives you the list of command line parameters. They are now explained in more detail, arranged by subject.

General options
@<file>
This allows storing arguments in files. The option “@” replaces its argument <file> with the content of <file>. Any lines in <file> starting with ’#’ are ignored; the remaining lines are treated as further options.
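The expansion can be illustrated with a small Python sketch. This is illustrative only, not iceWing code; that blank lines are skipped and that one line may hold several whitespace-separated options are assumptions:

```python
# Illustrative sketch of the @<file> expansion described above.
# Assumptions (not stated in the manual): blank lines are skipped and
# lines may contain several whitespace-separated options.
def expand_option_file(text):
    options = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # '#' lines are ignored
        options.extend(line.split())
    return options

content = """# my standard settings
-sg F camera=1
-p 512x384
"""
print(expand_option_file(content))
# -> ['-sg', 'F', 'camera=1', '-p', '512x384']
```

Calling “icewing @myopts” with a file of this content would then behave like “icewing -sg F camera=1 -p 512x384”.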
-h | --help
Besides showing all options and their meaning, iceWing writes the names of all plugin instances that are created by the parameters “-l” or “-lg” and then terminates. You need the precise names of the plugin instances if you wish to send options to specific plugin instances with the option “-a”.
--version
Shows version information and exits the program.
-n <name>
When you use DACS, the launched instances of iceWing must be addressable somehow. This option specifies the process name of this instance of iceWing; the default name is “icewing”. If several instances of iceWing run (network wide), take care to give at least those iceWing processes that are used with DACS unambiguous names.
-p <width>x<height>
Sets the size of preview windows, default: 384x288.
-rc <config-file|config-setting>
If the argument to this option contains a ’=’, the argument is interpreted as a gui setting and the referenced gui element is modified accordingly. Otherwise, the given config file <config-file> is loaded additionally to the standard file “${HOME}/.icewing/values” (which is read first). This option can be given multiple times.
-ses <session-file>
Load the session file session-file instead of the standard file “${HOME}/.icewing/session” and use this file for any session related operations.
-time <cnt|plugins|all>...
“cnt” specifies after how many main loop iterations time measurements are printed. If “cnt” <= 0, all time measurements are disabled. The default is 50. The other arguments allow automatically creating timers for measuring the execution time of the process() call of single plugin instances. If “all” is given, all plugin instances are measured. For example

  > icewing -time "5 backpro imgclass"

outputs time measurements every 5 main loop runs and creates timers for the plugin instances backpro and imgclass. This option can be given multiple times.

-iconic
Start the main iceWing window iconified.
-t <talklevel>
iceWing outputs debug messages only if their level is below <talklevel>, default: 5, used range of levels: 0..4.

Input options

Remember: All these options related to image input (-sg, -sp, -sp1, -sd, -prop, -nyuv, -nrgb, -c, -f, -r, -stereo, -bayer, -crop, -rot, and -oi) are passed to the special plugin “grab”. If you use another plugin as data source, that plugin will have its own input options (passed via “-a”). Neither does “grab” know of the other plugin’s options, nor does the other plugin see these input options.

So these options can also be thought of as “input options for the plugin grab”. You can use multiple instances of the plugin grab and thus multiple images at the same time. If you want to do that, you have to pass these options via the iceWing option “-a” to the additional grab instances. Thus every instance of grab can get its very own options. The multiple instances are created by loading the plugin via the option “-l” multiple times.

-sg <inputDrv> <drvOptions>
Source of Grabber: If you use a grabber camera for your images and you compiled iceWing with grabber support, you can select with this option how to access your grabber hardware. iceWing supports several camera systems; here you select which one you use.

<inputDrv> can be, depending on how you compiled iceWing, one of PAL_COMPOSITE, PAL_S_VIDEO, V4L2, ARAVIS, FIREWIRE, ICUBE, MVIMPACT, UEYE, or UNICAP (or abbreviated “C”, “S”, “V”, “A”, “F”, “I”, “M”, “E”, and “U”). See section 4.2 for more details about the different drivers.

<drvOptions> is an option string, which specifies in more detail how the driver you selected with <inputDrv> should behave. The different options of the drivers are described in section 4.2.

The different drivers provide help for their driver options: If you are not sure which options your selected driver has, e.g. “F”, the firewire driver, try

  > icewing -sg F help

and an overview of the options will be printed to the console.

If you give only -sg without anything further, the default setting is PAL_S_VIDEO with no special options.

-sp <fileset>
Source of Pictures: What you specify as <fileset> will be the picture data source for the plugin “grab”. After reaching the last picture, iceWing loops, beginning again with the first picture.

iceWing natively supports the pnm image format and two AVI formats with a RAW codec. Depending on the version of the gdk-pixbuf library (and whether it is installed at all), the range of supported image formats is greatly enhanced. Version 0.16 provides e.g. these formats: bmp, gif, ico, jpeg, png, ras, tiff, and xbm. pnm images are read in bit depths from 8 to 32, and in special variants float and double images can be read, too. png images are read in 8 bit and 16 bit depths. All other formats are 8 bit only.

<fileset> specifies the list of file names. It has the following format:
   fileset = fileset | ’y’ | ’r’ | ’e’ | ’E’ | ’f’ | ’F’ | ’file’

Single Pictures
You can simply give single pictures as files

  > icewing -sp image.ppm image2.gif
’y’, ’r’ YUV or RGB
The pictures can be stored in the color model YUV or RGB (which is the default). With a ’y’ or ’r’ in front of the name you specify the color model of the following files. So this example names one RGB, two YUV, and another RGB picture as the data source sequence:

  > icewing -sp imageRGB1.ppm y imageYUV1.ppm imageYUV2.ppm r imageRGB2.ppm
’file’ with %d or any int based printf() conversion specifier
As a further option you can name a whole series of pictures: with e.g. %d, the plugin “grab” replaces %d by integer numbers beginning from 0 and tries to open the file. As another example, “%04d” produces numbers padded to 4 digits with leading zeros. As soon as iceWing cannot match the current number, it moves on to the next fileset. E.g.

  > icewing -sp image%03d.ppm picture%d.ppm

makes the plugin “grab” scan incrementally for (and, if found, load) files named “image000.ppm”, “image001.ppm”, ... If no further file is found, it scans for pictures named “picture0.ppm”, “picture1.ppm”, ...
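The scanning of such a printf() style fileset can be sketched in Python as follows; this is a hypothetical re-implementation of the described behavior, not the actual grab code:

```python
import os

# Sketch of the documented series expansion: count up from 0 and stop
# at the first file that cannot be opened.
def expand_series(pattern, exists=os.path.exists):
    files, n = [], 0
    while True:
        name = pattern % n       # e.g. "image%03d.ppm" -> "image000.ppm"
        if not exists(name):
            break                # grab would move on to the next fileset
        files.append(name)
        n += 1
    return files

available = {"image000.ppm", "image001.ppm", "image002.ppm"}
print(expand_series("image%03d.ppm", exists=lambda f: f in available))
# -> ['image000.ppm', 'image001.ppm', 'image002.ppm']
```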

’file’ with at least %t or %T, or a combination of %t, %T, and %d
An alternative method to specify a series of pictures; one example would be ’/tmp/image%T_%t.png’. If %t or %T is inside the file name part of one ’file’ given to the -sp option, iceWing scans the complete directory, in the example ’/tmp’, for files matching the file name part with any numbers in place of %d, %T, and %t. It loads the found files sorted by the numbers replacing %d, %T, and %t, in that order, i.e. the coarsest order is given by %d and the finest by %t.

In this case no printf() style format specifiers are allowed, as files with any number format are used at the same time.
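The following Python sketch models the assumed matching and sorting behavior; the exact tie-breaking rules of iceWing are not documented, so this is only an approximation:

```python
import re

# Sketch (assumed semantics) of the %d/%T/%t ordering: every name that
# matches the template with any numbers is taken, sorted coarsest first
# (%d before %T before %t).
def sorted_matches(template, names):
    order = re.findall(r'%[dTt]', template)
    rank = {'%d': 0, '%T': 1, '%t': 2}
    regex = re.escape(template)
    for spec in order:
        regex = regex.replace(re.escape(spec), r'(\d+)', 1)
    pat = re.compile(regex + '$')
    found = []
    for name in names:
        m = pat.match(name)
        if m:
            nums = [int(g) for g in m.groups()]
            # reorder the numbers so %d sorts before %T before %t
            key = [n for _, n in sorted(zip([rank[s] for s in order], nums))]
            found.append((key, name))
    return [name for _, name in sorted(found)]

names = ['image2_1.png', 'image1_9.png', 'image1_2.png']
print(sorted_matches('image%T_%t.png', names))
# -> ['image1_2.png', 'image1_9.png', 'image2_1.png']
```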

’e’, ’E’ Open files/Check file extension
iceWing must know the number of images you specified on the command line. To verify whether a file is a movie file and then get its frame count, iceWing opens every file during startup. With an ’E’ in front of the file names iceWing opens only those files which have a known movie extension (e.g. ’.avi’ or ’.ogm’). This speeds up the program start. With an ’e’ in front of the file names you switch back to opening all files. The default is to open all files.
’f’, ’F’ Duplicate/No duplicate frames in movies
Movie files store the number of frames to be displayed in one second (the FPS value) and the frames themselves. Normally, if fewer or more frames are stored in the movie at one point in time than should be available according to the FPS value, single frames get duplicated or removed. If ’F’ is specified, this duplication and removal does not happen. In this case, the “Image Num” slider in the user interface on the “GrabImage1” page (see page 72) can only be used for seeking if the read continuous buttons (’<<’ and ’>>’) are not pressed. The default is to comply with the FPS value.
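The default FPS handling can be modeled roughly as follows: for every output slot at n/FPS the latest stored frame not newer than that point in time is shown, so frames get duplicated or dropped as needed. This Python sketch is an assumed model, not the actual movie reading code:

```python
# Rough model (assumption, not iceWing code) of the default FPS handling:
# each output slot n/fps shows the latest stored frame not newer than it.
def resample(frame_times, fps, n_out):
    shown, idx = [], 0
    for n in range(n_out):
        slot = n / fps
        while idx + 1 < len(frame_times) and frame_times[idx + 1] <= slot:
            idx += 1
        shown.append(idx)
    return shown

# three stored frames, four output slots at 2 FPS: frame 2 gets duplicated
print(resample([0.0, 0.1, 0.9], fps=2, n_out=4))  # -> [0, 1, 2, 2]
```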
-sp1 <fileset>
This is just the same as -sp, but after the last picture is reached and every registered plugin has finished its work on that picture, the iceWing process ends instead of looping back to the first picture.
-sd <stream> [synclev]
Use a DACS stream as input of images (for plugin “grab”).

An external process creates that stream of images somewhere in the network. It has published the stream via DACS, and this iceWing process can order it as its source of images.

The stream can have synchronize tokens of several hierarchical levels integrated. These SYNC tokens of increasing level create substructures of the incoming data (e.g. letters, words, sentences, ...). You can register the stream at a given synclevel. Level 0 means every single image will be delivered to this iceWing instance. Higher levels lead to fewer images, depending strongly on the SYNC-level philosophy of the stream-creating process. You need this information about the stream-creating process to choose the appropriate SYNC level at which to register the DACS stream. If the stream-delivering process separates the images with SYNC level 2 and you order the stream at this level, you will always get the latest image. Any older images get dropped from the stream as soon as a new image arrives on it. So with this strategy iceWing always gets up-to-date images, simply by ordering at the appropriate level.

Caution: A SYNC level of 0 is special and means that this instance of iceWing gets every single image that is put into the stream (no loss of any image). You may need this, e.g. because plugins sometimes need access to older, but still unreceived images. But the DACS process must store all of the undelivered images. If iceWing consumes the images at a slower rate (on average) than they are put into the stream, this will sooner or later lead to a huge memory footprint of the DACS process, and finally, when the storage limit is exceeded, the process gets killed by the system.

If you wish to write your own image-creating process to send images to iceWing via DACS: There is already the SFB-360 internal data type “struct Bild_t” (declared in the file “sfb.h”). It encodes the image, and you must use it as the type passed to DACS. Then iceWing can receive images from your own process.

For further details about DACS see the dissertation of Nils Jungclaus [Jun98].

-prop
Normally, when you use a grabber (see option “-sg”) and the grabber supports changing properties on the fly, a GUI in the form of an own page in the categories list is created to change these properties interactively. When you use “-prop”, this page is not created.
-stereo
Expect gray images containing interlaced stereo images as input and decompose them by putting them one after the other. This option is similar to the “stereo=deinter” option of the V4L2 driver (see section 4.2.2 for more details).
-bayer [method] [pattern]
Expect a gray image with an embedded bayer pattern as the input image. Use the specified method and the specified bayer pattern to decompose it. If no method or pattern is specified, downsampling and RGGB are used. The supported methods are:
down
Downsampling of the input image by a factor of 2.
neighbor
Nearest neighbor interpolation.
bilinear
Bilinear interpolation.
hue
Smooth hue transition interpolation.
edge
Edge sensing interpolation.
ahd
Adaptive homogeneity-directed interpolation.

Supported bayer patterns are: RGGB | BGGR | GRBG | GBRG.

This option is similar to the “bayer” and “pattern” options of the V4L2 driver (see section 4.2.2 for more details).

-crop x y width height
Crop a rectangle starting at position (x,y) of size width x height from the input image. If width or height is zero or negative, the value is measured from the right or bottom side. E.g. “-crop 5 10 -5 -10” would crop away a border of 5 pixels from the left and right sides and a border of 10 pixels from the top and bottom sides of the input image.
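The arithmetic for zero or negative width/height values can be written down as a small Python sketch (illustrative only, not the plugin code):

```python
# Resolve the documented -crop semantics against the input image size:
# non-positive w/h are measured from the right/bottom side.
def crop_rect(x, y, w, h, img_w, img_h):
    if w <= 0:
        w = img_w - x + w
    if h <= 0:
        h = img_h - y + h
    return x, y, w, h

# "-crop 5 10 -5 -10" on a 100x80 image keeps a 90x60 rectangle:
print(crop_rect(5, 10, -5, -10, 100, 80))  # -> (5, 10, 90, 60)
```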
-rot {0 | 90 | 180 | 270}
Rotate the input image by 0, 90, 180, or 270 degrees. The default is 0.
-nyuv <name>
iceWing provides its input images for other plugins under a special name, normally “image” for YUV images. With this option you can change this identifier to <name>. See the paragraphs 9.2 and 9.3.1 for alternatives plugins should normally use.
-nrgb <name>
iceWing provides its input images for other plugins under a special name, normally “imageRGB” for RGB images. With this option you can change this identifier to <name>. See the paragraphs 9.2 and 9.3.1 for alternatives plugins should normally use.

Options for up-/downsampling behavior
-c <cnt>
iceWing internally manages a queue of downsampled images. With this option you can specify the length of this queue.

Default value is 2.

-f [cnt]
If you do not specify this option, iceWing has only the downsampled queue of images. With “-f” iceWing activates the Full sized (i.e. the upsampled) queue of images.

The optional [cnt] sets the queue size, the default is 1.

-r <factor>
Remember downsample factor of input images.

If you use already downsampled images as input, iceWing unfortunately does not know this without being told. <factor> tells the interested plugins by what factor the source images were already downsampled.

Remember: With downsampled image sources, the plugin grab will additionally downsample the input into the downsample queue. So when the input images already have a downsample factor of 2 and the current iceWing instance has a downsample factor of 3, the images in the downsample queue will have a true downsample factor of 6 (compared to the original image), while the images inside the upsampled queue have a downsample factor of 2. The only way to handle this and e.g. reconstruct the original size of the image is to use this option “-r”: You tell iceWing what downsample factor the input images already have, and the plugins can (but need not) make use of this additional information.

And also remember: Even the command “Save Original” saves the original (= unrendered) image, but including the given downsample factor (set in figure 3.1, page “other”, or in paragraph Downsampling 5.2.3).
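The factor bookkeeping described above amounts to the following trivial sketch of the documented arithmetic:

```python
# Effective downsample factors relative to the original image;
# input_factor is the value given with "-r".
def true_factors(input_factor, instance_factor):
    return {"down_queue": input_factor * instance_factor,
            "up_queue": input_factor}

print(true_factors(2, 3))  # -> {'down_queue': 6, 'up_queue': 2}
```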

Output/remote control options

The “-o” options serve several different purposes regarding the communication with other programs; with the sub options you specify what to output and which interfaces to enable.

-of
With option “-of”, this instance of iceWing can be fully remote controlled via DACS including nearly every single GUI-widget element.

This uses the capability of iceWing to save and load all its current status into the config file (while the session file stores the window properties). Remote control via DACS works quite similarly: With the option “-of” iceWing publishes DACS wide the functions void <icewing>_control(char[]) and char[] <icewing>_getSettings(void). <icewing> is the name of this instance of iceWing (which you specified with the option “-n”). The “char[]” means that you send a normal C string as parameter to the _control() function. The content of that string can be any lines of the iceWing configuration file, and iceWing accepts the new settings. Similarly, the _getSettings() function returns a string with the current settings of all widgets in the format of the configuration file. See also the icewing-control program from the utils directory for a utility that allows sending such strings to a running iceWing instance, and section 7.3 for more details about this utility.

Additionally, this option publishes to DACS the function struct Bild_t <icewing>_getImg(imgspec). With this function, external processes can receive an image from this instance of iceWing via DACS. The images are sent encoded in the SFB-360 “struct Bild_t” data type. There are further options that allow the external process to select precisely which image it receives, and in which downsample format.

The format of “imgspec”:

    [’PLUG’ <plugnum>] (’NUM’ <imgnum>|’TIME’ <sec> <usec>|  
    ’FTIME’ <sec> <usec>) down

PLUG
If multiple instances of the plugin grab are running, multiple images are available at the same time. With PLUG you can select the image from the instance <plugnum>. The default is 1, i.e. the image from the first grab instance.
NUM
Every single image in iceWing has a continuous number, starting with 0. iceWing returns the image with the number <imgnum>. If <imgnum> < 0, a full size image is returned, see option “-f”. If <imgnum> = 0, the current full size image is returned.

So the sign determines from which queue iceWing takes the image: the upsampled or the downsampled queue (see also the options “-c” and “-f”).

If the upsampled queue does not exist, you will always get the downsampled version of the image.

TIME
iceWing returns that image with a grabbing time most similar to (<sec> <usec>). The image is taken from the downsampled queue.

If TIME is in the future, the nearest image is taken - and that will always be the most recent image.

FTIME
iceWing returns a full size image with a grabbing time most similar to (<sec> <usec>).

Again, if the upsampled queue does not exist, you will always get the downsampled version of the image.

<down>
iceWing will downsample the returned image by the factor <down>. This factor is applied in addition to the iceWing downsample factor (adjustable on page “GrabImage1”, see figure 3.1)!

As an example: iceWing has a downsample factor of 2 set, and this option <down> is set to 3. Now the image will be delivered to DACS with a true downsample factor of 6 (well, with FTIME it is 3).
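A client could compose such an “imgspec” string like this. This is a hypothetical helper: the keywords and their order are taken from the grammar above, everything beyond that (e.g. single-space separators) is an assumption:

```python
# Hypothetical builder for the "imgspec" argument of <icewing>_getImg().
def imgspec(down, plug=None, num=None, time=None, ftime=None):
    parts = []
    if plug is not None:
        parts += ["PLUG", str(plug)]          # optional grab instance
    if num is not None:
        parts += ["NUM", str(num)]
    elif time is not None:
        parts += ["TIME", str(time[0]), str(time[1])]
    elif ftime is not None:
        parts += ["FTIME", str(ftime[0]), str(ftime[1])]
    parts.append(str(down))                   # additional downsample factor
    return " ".join(parts)

# current full size image from grab instance 2, downsampled by 3:
print(imgspec(3, plug=2, num=0))  # -> "PLUG 2 NUM 0 3"
```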

-oi [interval]
Output images on DACS stream <icewing>_images and provide a function void <icewing>_setCrop(“x1 y1 x2 y2”) to crop the streamed images. <icewing> stands for the DACS name of this iceWing process (see option “-n”). With the optional [interval] iceWing will send only every nth image to the stream. If the upsampled queue exists, you will get an upsampled image, otherwise a downsampled image. The images are sent in the “struct Bild_t” format.

With the function <icewing>_setCrop(“x1 y1 x2 y2”) a freely defined rectangle of the image can be dumped to the stream. The parameter string defines the rectangle. The four coordinates refer to the full size, i.e. not downsampled image. iceWing adapts them internally to the real image size.

-os
Output some (currently very few) status information on the DACS stream <icewing>_status.

The function “iw_output_status (const char *msg)” declared in output.h sends on this stream.

Plugin options
-l <plugin libraries>
Each plugin lives inside a library. This option loads the given plugin libraries into iceWing. The library names must be separated by ’ ’, ’,’, or ’;’. Additionally, this option can be given multiple times. If you wish to have several instances of a plugin, repeat the name of the relevant library. But be cautious: not all plugins can operate as several instances (e.g. the “min” plugin)!

The libraries are searched for at different locations in the file system. At every location, first the given name is tried. If the library cannot be loaded, the name is expanded to “lib[...].so” and loading is tried again with the new name.

The locations where the libraries are searched for are:

  1. As specified with this option.
  2. At ${ICEWING_PLUGIN_PATH}/plugins/.
    ICEWING_PLUGIN_PATH is an environment variable which specifies a colon separated list of directories.
  3. At ˜/.icewing/plugins/.
  4. At ${PREFIX}/lib/iceWing/.
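The search order can be sketched as follows. This is illustrative Python, not the loader code; that every directory of ICEWING_PLUGIN_PATH gets “plugins” appended, and the value /usr/local for ${PREFIX}, are assumptions:

```python
import os

# All file names tried, in order, when loading the plugin library 'name'.
# Assumptions: each ICEWING_PLUGIN_PATH directory gets "plugins" appended
# and ${PREFIX} is /usr/local.
def candidate_paths(name, plugin_path=""):
    dirs = [""]                                            # 1. as given
    dirs += [os.path.join(d, "plugins")                    # 2. ICEWING_PLUGIN_PATH
             for d in plugin_path.split(":") if d]
    dirs.append(os.path.expanduser("~/.icewing/plugins"))  # 3. home directory
    dirs.append("/usr/local/lib/iceWing")                  # 4. ${PREFIX}/lib/iceWing
    cands = []
    for d in dirs:
        cands.append(os.path.join(d, name))                # plain name first
        cands.append(os.path.join(d, "lib%s.so" % name))   # then lib<name>.so
    return cands

print(candidate_paths("min", plugin_path="/opt/iw")[:4])
# -> ['min', 'libmin.so', '/opt/iw/plugins/min', '/opt/iw/plugins/libmin.so']
```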
-lg
Not for Alpha machines! Similar to “-l”, but with one far-reaching difference: while dlopen()’ing the libraries, the flag RTLD_GLOBAL is set (this makes all non-static symbols of the library global for the whole iceWing process)! So this is more a linker option than an iceWing feature!

Use it only when you really know what you are doing (e.g. there is a great danger of name clashes)!

-a <plugin instance> <option>
Send command line arguments to a plugin instance. This option can be given multiple times.

Every plugin should provide help on its options. To find those help messages, use parameter ‘-a pluginName “-h” ’.

-d <plugins>
Disables the given plugin instances, i.e. the process() function of the given plugins is not called any more. The init() and cleanup() functions of the plugin are still called. This option can be given multiple times.

Normally all loaded plugins get activated. You can toggle plugin activation also via GUI, but sometimes you may wish to start an iceWing session with an initially disabled plugin instance.

E.g. ‘-d “backpro imgclass” ’

4.2  Parameters of the grabber driver

If you want to use a grabber camera as a source for your images, you have to pass the option “-sg” with, depending on how you compiled iceWing, one of the drivers PAL_COMPOSITE, PAL_S_VIDEO, V4L2, ARAVIS, FIREWIRE, ICUBE, MVIMPACT, UEYE, or UNICAP (or abbreviated “C”, “S”, “V”, “A”, “F”, “I”, “M”, “E”, and “U”) to the grabbing plugin. You can additionally pass different options to the different grabber drivers. If you pass “help” as an option, you get an overview of all available options, e.g. for the firewire driver:

  > icewing -sg F help

The help message will be printed to the console. The different options are separated by “:”. Parameters of an option are separated by a “=” from the option name, e.g. “camera=2:bayer=hue” would be an allowed option string. In the following the options will be described in more detail, arranged by the different drivers.

4.2.1  MME driver for Composite or S_Video devices

This driver uses the AVVideo library to access cameras on OSF Alpha systems.

help
Shows the help page of the driver.
camera=val
Up to two cameras can be connected to the computer. Here you can select with a number of 0 or 1 which of these cameras should be used. The default is 0, the first one.
fps=val
You can grab the images with different speeds. This option sets the frame rate the camera should operate in. The default is 25.

4.2.2  V4L2 driver for Composite, S_Video, and other devices

help
Shows the help page of the driver.
debug
If given, different debugging information about the camera, its capabilities, and the current driver status is printed to the console.
device=name
Specifies the device, the driver will use to grab images from. If this option is not specified, “/dev/video” or, if this is not available, “/dev/video0” is used.
input=num
The video input to use. If the driver was called as “C” or PAL_COMPOSITE, the first composite video input is the default for this option. If called as “S” or PAL_S_VIDEO, the first S-Video video input is the default. And finally, if the driver is called as “V” or V4L2, 0 is the default.
size=widthxheight
The maximal size of the grabbed image if no downsampling is selected. The actually selected size may be smaller: it is the biggest size below the given size which is still supported by the camera. The default is 768x576.
format=num
Most devices support different image formats, e.g. YUV or RGB formats in various pixel depths. Here you can select which one to use. If this option is not specified, the driver uses a YUV format with a depth as big as possible.
propX=val
V4L2 devices have several properties, e.g. brightness, gain, contrast, and others. With this option you can set them. E.g. "prop1=0.64" will set property one to 0.64. Alternatively you can access the properties by their name, e.g. "propGain=0.64". By not specifying a value you set the property to its default value, e.g. "propGain". Information about the available properties and their allowed values is shown if the option “debug” is given.
buffer=cnt
V4L2 can use several intermediate image buffers to compensate for a temporary slowness of the program before grabbing the next frame. This option sets the number of buffers; the default is 4.
bayer=down|neighbor|bilinear|hue|edge|ahd
Some cameras support color image grabbing, but deliver the image not decomposed but as one gray image plane with an embedded bayer pattern. In a bayer pattern a square of size 2 by 2 pixels holds information about all three RGB channels. To decompose this information, different interpolation methods exist. If this option is given, a gray image of depth 8 or 16 bit with an embedded bayer pattern is expected and decomposed with the specified method. If no interpolation method is given, downsampling is used. The supported interpolation methods are:
down
The 2x2 bayer square is used to only get one color pixel, the destination image gets downsampled by a factor of 2.
neighbor
Nearest neighbor interpolation, where each interpolated output pixel gets the value of the nearest pixel in the input image, is used.
bilinear
Bilinear interpolation, where each interpolated output pixel gets the average value of the two or four nearest pixels in the input image, is used.
hue
Smooth hue transition interpolation. Here the green channel is obtained by bilinear interpolation. For the blue and the red channel a “hue value” is defined as B/G or R/G. The neighboring hue values are then used to estimate a color pixel. E.g. if a blue pixel is located on the left and on the right side of a pixel, the blue value in the middle gets estimated by:

  B_M = G_M/2 * (B_L/G_L + B_R/G_R)
edge
Edge sensing interpolation. The blue and red channels are computed identically to the “smooth hue transition interpolation” method. For the green channel, horizontal and vertical gradient magnitudes are calculated. A green pixel gets interpolated from the horizontal neighbors if the horizontal gradient is smaller than the vertical one; otherwise the vertical pixels are used.
ahd
Adaptive homogeneity-directed interpolation, based on the work of Keigo Hirakawa and Thomas Parks for the algorithm [HP05] and Paul Lee for the implementation from dcraw [Cof06]. A horizontal interpolation is selected if more pixels in a local neighborhood of the horizontally interpolated image are similar than in the vertically interpolated image. Otherwise, the image gets interpolated vertically. The similarity is evaluated in the CIE L*a*b* color space. The similarity threshold is adapted to the local image. The interpolation takes into account that G - R and G - B vary more slowly than one color plane alone.
pattern=RGGB|BGGR|GRBG|GBRG
The information in a 2 by 2 bayer square can be ordered in different ways. This option specifies how it is ordered. The default is RGGB, red in the first pixel, green in the second pixel and the first pixel on the second row, and blue in the second column on the second row.
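As a toy illustration of the “down” method with the default RGGB pattern (not driver code; that the two green samples are averaged is an assumption), each 2x2 square yields one RGB pixel:

```python
# Toy illustration of bayer decomposing by downsampling for an RGGB
# pattern: every 2x2 square becomes one (R, G, B) pixel, halving the size.
# Assumption: the two green samples are averaged.
def bayer_down_rggb(gray):
    h, w = len(gray), len(gray[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            r = gray[y][x]                              # top left: red
            g = (gray[y][x + 1] + gray[y + 1][x]) // 2  # two green samples
            b = gray[y + 1][x + 1]                      # bottom right: blue
            row.append((r, g, b))
        out.append(row)
    return out

# one RGGB square with R=10, G=20 and 30, B=40:
print(bayer_down_rggb([[10, 20],
                       [30, 40]]))  # -> [[(10, 25, 40)]]
```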
stereo=raw|deinter
If given, an image of type YUV422 or 16 bit mono is expected. However, this image is not interpreted as a normal color image, but as two interlaced gray scale images with a bit depth of 8. If stereo=raw is given, the image gets interpreted as a gray image without decoding the interlacing. If stereo=deinter is given, the image gets decoded and the two images are stored one after the other. For example, the Videre stereo camera delivers its two images in this way.
gray=w|h
Interpret all grabbed images as gray images and increase their width or height according to the image size in bytes. At least some cameras from “The Imaging Source” (http://www.theimagingsource.com) announce that they deliver color images but in truth deliver gray images. This option helps in such cases to interpret the data correctly.
noselect
Normally the driver uses the select() system call to check if any new images are available. If this option is given, select() is not used; instead images are requested directly. Normally you should not use this option, but at least the zoran kernel driver, up to at least version 0.9.5, has a broken implementation of the select() call and thus needs this option to function correctly.

4.2.3  Aravis driver for GenICam devices

This driver uses the aravis library to access cameras following the GenICam standard (see “http://www.genicam.org”). See “http://live.gnome.org/Aravis” for details about this library.

help
Shows the help page of the driver.
debug
If given, different debugging information about the used devices and the current driver status is printed to the console.
device=val
Multiple supported devices can be available. Here you can select which one to use by its device name or its id. The default is 0, the first one.
listProps
List all available properties of the selected device and show additional details about the properties, e.g. their allowed and their current values.
propX=val
The different devices have several properties, e.g. width, height, pixelFormat, exposure, gain, and many others. With this option you can set them to new values. If “=val” is not given, the value of property X is set to its default value. If X = “R[ADR]”, the register ADR is set to “val”. E.g. "propWidth=640:prop27=Continuous:propOffsetX:propR[0x123]=240" will set the property “Width” to 640, the property number 27 to “Continuous”, the property “OffsetX” to its default value, and the register 0x123 to the value 240. You can access every property by its name as well as by its number. Information about the available properties is shown if the option “listProps” is given.
bayer=down|neighbor|bilinear|hue|edge|ahd
Expect a gray image with an embedded bayer pattern as the input image. Use the specified method to decompose it. For more details see the identical bayer option of the V4L2 driver in section 4.2.2.
pattern=RGGB|BGGR|GRBG|GBRG
The bayer pattern to use during bayer decomposing. For more details see the identical pattern option of the V4L2 driver in section 4.2.2.
stereo=raw|deinter
Expect gray images containing interlaced stereo images as input and decompose them. For more details see the identical stereo option of the V4L2 driver in section 4.2.2.

4.2.4  Firewire driver for digital cameras connected via firewire

This driver allows accessing cameras that support the “Digital Camera Specification”, also known as the DCAM specification. This includes most 1394 webcams and a majority of industrial and scientific cameras. However, cameras into which you can insert a video tape (camcorders, ...) will NOT work with this driver. These cameras record compressed DV video on the tape, while DCAM specifies uncompressed, on-the-fly video flows.

help
Shows the help page of the driver.
debug
If given, different debugging information about the camera, its capabilities, and the current driver status is printed to the console.
device=name
Specifies the device, the driver will use to grab images from. If this option is not specified, “/dev/video1394” or “/dev/video1394/0” is used, whichever is available.
camera=val
Multiple cameras can be connected to the computer via one device. Here you can select with a number starting with 1 which of these cameras should be used. The default is 1, the first one.
fps=val
Digital cameras normally support different speeds in which they can deliver the images. This option sets the frame rate the camera should operate in. Supported frame rates are: 1.875, 3.75, 7.5, 15, 30, and 60. The default is 15.

If a format 7 mode is used for grabbing, any frame rate above 0 may be supported, depending on the selected mode, region, and color coding.

packetSize=val
Firewire cameras transmit exactly one packet per camera per bus cycle. Thus the transfer rate, and therefore the frame rate as well, is determined by the number of bytes in each packet, the packet size. In all format 7 modes the frame rate must be specified via the packet size.

Additionally, if the fps option is specified, the firewire driver automatically converts this value to the needed packet size and ignores the separate packetSize option.
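The relation between frame rate and packet size can be sketched as follows. On IEEE 1394 there are 8000 isochronous bus cycles per second and the camera sends one packet per cycle, so the packet size follows directly from the bytes per frame and the frame rate. This is an illustrative Python sketch only; a real driver additionally rounds to the unit packet size and clamps to the bus maximum, which is ignored here.

```python
# IEEE 1394 runs 8000 isochronous bus cycles per second; the camera
# sends one packet per cycle, so:
#   packet_size = bytes_per_frame * fps / 8000
# (Rounding to the unit packet size and the bus maximum are ignored.)

BUS_CYCLES_PER_SECOND = 8000

def packet_size(width, height, bytes_per_pixel, fps):
    """Approximate packet size in bytes for the given mode and frame rate."""
    bytes_per_frame = width * height * bytes_per_pixel
    return bytes_per_frame * fps / BUS_CYCLES_PER_SECOND

# 640x480 YUV422 (2 bytes/pixel) at 15 fps -> 1152 bytes per packet
print(packet_size(640, 480, 2, 15))
```

This also shows why the fps option can be converted automatically: for a fixed mode, frame rate and packet size determine each other.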

mode=yuvXXX|rgbXXX|monoXXX|16monoXXX|f7XXX
Cameras can support different color spaces and image sizes. With this option you can select which mode the camera should operate in if an image without any downsampling should be grabbed. If the desired mode is not supported by the camera, the driver falls back to a supported mode. The “XXX” specifies the desired width, e.g. “mono1024” would be a possible mode specifier. The default is yuv640x480.

Format 7 modes, which all start with “f7”, take additional parameters. Here “XXX” is:

  XXX=0|1|2|3|4|5|6|7[mono8|yuv411|yuv422|yuv444|rgb8|mono16|  
      rgb16][width]x[height]+[x]+[y]

An example is “f72yuv422640x480+10+20”, which grabs images in the format 7_2 with a YUV422 color coding. The images start at position x=10 and y=20 and have a size of 640x480 pixels. The default is the first available color coding, the maximal allowed width and height, and the start position x=0 and y=0.
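The structure of such a specifier can be made explicit with a small parser sketch. This is illustrative Python, not part of iceWing; the regular expression simply mirrors the grammar above, with everything after the mode number optional.

```python
import re

# Hypothetical parser mirroring the "f7..." grammar above; all parts
# after the mode number are optional.
F7_RE = re.compile(
    r"^f7(?P<mode>[0-7])"
    r"(?P<coding>mono8|yuv411|yuv422|yuv444|rgb8|mono16|rgb16)?"
    r"(?:(?P<w>\d+)x(?P<h>\d+))?"
    r"(?:\+(?P<x>\d+)\+(?P<y>\d+))?$"
)

def parse_f7(spec):
    """Split a format 7 mode specifier into its components."""
    m = F7_RE.match(spec)
    if not m:
        raise ValueError("not a format 7 mode specifier: %r" % spec)
    return m.groupdict()

print(parse_f7("f72yuv422640x480+10+20"))
```

Because the color codings are an explicit alternation, the parser can tell where the coding ends and the width begins, even though both contain digits.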

speed=100|200|400|800
The ISO speed in MBit/s to use. The default is 400. This option also affects the operation mode: for speeds below 800, 1394a mode (so-called “legacy” mode) is selected; for 800, 1394b is selected. This coupling helps prevent mistakes, but it also means that the selected speed should be the overall bus speed. For example, if you have two cameras on a shared FW800 bus, you should select speed 800.
propX=val
The different devices have several properties, e.g. brightness, gamma, exposure, and others. With this option you can set these properties. E.g. "prop0=238:propGamma=12" will set the first property to 238 and the property “Gamma” to 12. You can access every property by its name as well as by its number. Information about the available properties and their allowed values is shown if the option “debug” is given.

If “=val” is not given, the corresponding property is set to its default value.
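The colon-separated property syntax can be sketched with a small helper. This is illustrative Python, not iceWing code; a missing "=val" is represented as None, standing for "reset to default".

```python
def parse_props(spec):
    """Split a colon-separated property string such as
    "prop0=238:propGamma=12" into (name-or-number, value) pairs.
    A missing "=val" becomes None, i.e. "reset to default".
    Illustrative sketch only, not iceWing code."""
    props = []
    for part in spec.split(":"):
        if not part.startswith("prop"):
            raise ValueError("expected 'prop...': %r" % part)
        key, _, val = part[4:].partition("=")
        props.append((key, val if val else None))
    return props

print(parse_props("prop0=238:propGamma=12"))
```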

reset
Reset the bus the camera is attached to prior to starting the capture process. This will affect all devices on the bus! Use this option only when really necessary. This option is only available if you use libdc1394 V2, see page 8.
pgtimestamp
Various cameras from Point Grey Research (http://www.ptgrey.com) can embed timestamp information in the first four bytes of the image. This option tries to activate this feature and use the embedded value instead of the normal timestamp, which is provided by the libdc1394 library.
bayer=down|neighbor|bilinear|hue|edge|ahd
Expect a gray image with an embedded bayer pattern as the input image. Use the specified method to decompose it. For more details see the identical bayer option of the V4L2 driver in section 4.2.2.
pattern=RGGB|BGGR|GRBG|GBRG
The bayer pattern to use during bayer decomposing. For more details see the mostly identical pattern option of the V4L2 driver in section 4.2.2.

The only difference is that the correct pattern is determined automatically if you use libdc1394 V2 (see page 8). So, normally you should not need to specify this option.

stereo=raw|deinter
Expect gray images containing interlaced stereo images as input and decompose them. For more details see the identical stereo option of the V4L2 driver in section 4.2.2.

4.2.5  iCube driver for cameras conforming to the NET iCube interface

This driver uses the iCube NETUSBCAM library to access all cameras conforming to the NET iCube interface. See “http://www.net-gmbh.com” for the driver and the cameras.

help
Shows the help page of the driver.
debug
If given, various debugging information about the devices in use and the current driver status is printed to the console.
device=val
Multiple supported devices can be available. Here you can select which one to use. The default is 0, the first one.
mode=rgbXXX|monoXXX|rawXXX
Cameras can support different color spaces and image sizes. With this option you can select which mode the camera should operate in. Supported are RGB images, mono images (i.e. gray images), and bayer raw images. The “XXX” specifies the desired width or width and height, e.g. “mono320” or “mono320x240” would be possible mode specifiers. Supported sizes are “320x240”, “640x480”, “752x480”, “800x600”, “1024x768”, “1280x1024”, “1600x1200”, “2048x1536”, and “2592x1944”.
roi=[width]x[height]+[x]+[y]
It is possible to grab only a region of interest. An example for a “roi” option is “640x480+10+20”. The images will start at position x=10 and y=20 and will have a size of 640x480 pixels.
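The roi geometry syntax can be sketched with a short parser. This is illustrative Python, not iceWing code; it simply mirrors the "[width]x[height]+[x]+[y]" form shown above.

```python
import re

# Sketch of the roi geometry syntax: [width]x[height]+[x]+[y]
ROI_RE = re.compile(r"^(\d+)x(\d+)\+(\d+)\+(\d+)$")

def parse_roi(spec):
    """Return (width, height, x, y) for a roi specifier string."""
    m = ROI_RE.match(spec)
    if not m:
        raise ValueError("not a roi specifier: %r" % spec)
    width, height, x, y = map(int, m.groups())
    return width, height, x, y

print(parse_roi("640x480+10+20"))
```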
propX=val
The different devices have several properties, e.g. brightness, gamma, exposure, and others. With this option you can set these properties. E.g. "prop0=11738:propGamma=12" will set the first property to 11738 and the property “Gamma” to 12. You can access every property by its name as well as by its number. Information about the available properties and their allowed values is shown if the option “debug” is given.

If “=val” is not given, the corresponding property is set to its default value.

bayer=down|neighbor|bilinear|hue|edge|ahd
Expect a gray image with an embedded bayer pattern as the input image. Use the specified method to decompose it. For more details see the identical bayer option of the V4L2 driver in section 4.2.2.
pattern=RGGB|BGGR|GRBG|GBRG
The bayer pattern to use during bayer decomposing. For more details see the identical pattern option of the V4L2 driver in section 4.2.2.
stereo=raw|deinter
Expect gray images containing interlaced stereo images as input and decompose them. For more details see the identical stereo option of the V4L2 driver in section 4.2.2.

4.2.6  MVimpact driver for cameras conforming to the MATRIX VISION impact acquire interface

This driver uses the impact acquire library to access all cameras conforming to the MATRIX VISION impact acquire interface. See “http://www.matrix-vision.com” for the driver and the cameras.

help
Shows the help page of the driver.
debug
If given, various debugging information about the devices in use and the current driver status is printed to the console.
device=val
Multiple supported devices can be available. Here you can select which one to use. The default is 0, the first one.
config=fname
Load camera settings from an external configuration file. For example “wxPropView” can be used to save such a configuration file.
bayer=down|neighbor|bilinear|hue|edge|ahd
Expect a gray image with an embedded bayer pattern as the input image. Use the specified method to decompose it. For more details see the identical bayer option of the V4L2 driver in section 4.2.2.
pattern=RGGB|BGGR|GRBG|GBRG
The bayer pattern to use during bayer decomposing. For more details see the identical pattern option of the V4L2 driver in section 4.2.2.
stereo=raw|deinter
Expect gray images containing interlaced stereo images as input and decompose them. For more details see the identical stereo option of the V4L2 driver in section 4.2.2.

4.2.7  uEye driver for cameras conforming to the IDS uEye interface

This driver uses the uEye library to access all cameras conforming to the IDS uEye interface. See “http://www.ids-imaging.com” for the driver and the cameras.

help
Shows the help page of the driver.
debug
If given, various debugging information about the devices in use and the current driver status is printed to the console.
camera=val
Multiple supported devices can be available. Here you can select which one to use by its camera id. The camera id is a number starting at 1, which can be persistently stored on the camera. The default for this option is 0, the first available device.
device=val
Multiple supported devices can be available. Here you can select which one to use by its device id. The device id is assigned by the driver based on the order in which the cameras are attached to the computer. The device ids are not persistent. The default is 0, which selects the device by the camera parameter (see above).
config=fname
Load camera settings from an external configuration file. For example the program “ueyedemo” can be used to save such a configuration file.
trigger=hilo|lohi|software
Normally images are captured in free-running mode, where exposure and readout/transfer of the image data are performed in parallel. This allows the maximum camera frame rate to be achieved. With this option, trigger mode is used instead: the sensor is on standby and starts exposing on receipt of a trigger signal. The trigger signal can be the hardware trigger on a falling signal edge (hilo), the hardware trigger on a rising signal edge (lohi), or the software trigger.
bayer=down|neighbor|bilinear|hue|edge|ahd
Expect a gray image with an embedded bayer pattern as the input image. Use the specified method to decompose it. For more details see the identical bayer option of the V4L2 driver in section 4.2.2.
pattern=RGGB|BGGR|GRBG|GBRG
The bayer pattern to use during bayer decomposing. For more details see the identical pattern option of the V4L2 driver in section 4.2.2.
stereo=raw|deinter
Expect gray images containing interlaced stereo images as input and decompose them. For more details see the identical stereo option of the V4L2 driver in section 4.2.2.

4.2.8  Unicap driver for various devices

This driver uses the unicap library to access various devices. Unicap provides a uniform API for different kinds of video capture devices, e.g. IEEE1394, Video for Linux, and some others. See “http://unicap-imaging.org” for details about this library.

help
Shows the help page of the driver.
debug
If given, various debugging information about the devices, their capabilities, and the current driver status is printed to the console.
device=val
Unicap supports multiple devices. Here you can select which one to use. The default is 0, the first one.
format=val
Most devices support different image formats. Here you can select which one to use. The default is 0, the first one.
propX=val
The different devices have several properties, e.g. brightness, hue, video source, and others. With this option you can set them. E.g. "prop0=11738" will set the first property to 11738. Alternatively you can access the properties by their name, e.g. "propHue=14" will set the property “hue” to 14. If “=val” is not given, the property is set to its default value. Information about the available properties and their allowed values is shown if the option “debug” is given.
bayer=down|neighbor|bilinear|hue|edge|ahd
Expect a gray image with an embedded bayer pattern as the input image. Use the specified method to decompose it. For more details see the identical bayer option of the V4L2 driver in section 4.2.2.
pattern=RGGB|BGGR|GRBG|GBRG
The bayer pattern to use during bayer decomposing. For more details see the identical pattern option of the V4L2 driver in section 4.2.2.
stereo=raw|deinter
Expect gray images containing interlaced stereo images as input and decompose them. For more details see the identical stereo option of the V4L2 driver in section 4.2.2.

4.3  Configuration files

iceWing uses two configuration files, which are loaded and stored at runtime:

.icewing/session stores the window properties of the current session. By default this is “${HOME}/.icewing/session”, but you can use alternative files via the command line option “-ses”. In the preferences window or with the context menu you can save your current session into an alternative file.

The content is simple - for each active window there is an entry like

  “Name of the window”= “x” win-x “y” win-y “w” width “h” height
      “zoom” zoom “dx” pan-x “dy” pan-y

where win-x and win-y specify the window position, zoom specifies the zoom factor of the window (0 means fit-to-window), and pan-x and pan-y specify the panning position for the content of the window. The zoom and panning values are only stored if “Save pan/zoom values” in the preferences window is active, see section 5.2.2. Lines starting with “#” are treated as comments and get ignored.
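The entry format above can be sketched with a small reader. This is illustrative Python, not iceWing code, and it assumes plain double quotes in the actual file (the typographic quotes above are typesetting; check your own ${HOME}/.icewing/session). The zoom/pan keys are treated as optional, mirroring the “Save pan/zoom values” preference.

```python
import re

# Illustrative parser for a session entry of the form shown above.
# Assumption: plain double quotes in the actual file.
SESSION_RE = re.compile(
    r'^"(?P<name>[^"]+)"\s*=\s*(?P<rest>(?:"[a-z]+"\s*-?\d+\s*)+)$'
)

def parse_session_line(line):
    """Return (window name, {key: int value}) for one session entry."""
    m = SESSION_RE.match(line.strip())
    if not m:
        raise ValueError("unparsable session line: %r" % line)
    pairs = re.findall(r'"([a-z]+)"\s*(-?\d+)', m.group("rest"))
    return m.group("name"), {key: int(val) for key, val in pairs}

print(parse_session_line('"Test Window" = "x" 100 "y" 50 "w" 384 "h" 288'))
```

The window name “Test Window” is hypothetical; real names are the titles of the active iceWing windows.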

.icewing/values stores the settings of every single GUI value of the plugins and the iceWing system. Additionally, the hotkeys for the context menu of the image windows are stored in this file. By default this is “${HOME}/.icewing/values”, but you can use additional files via the command line option “-rc” or in the main window with the Load/Save buttons.

If you wish to remote-control the iceWing process via DACS, you must know the structure of this config file. The easiest way is to save the current settings and have a look at the resulting file.

Every line holds the setting of one iceWing widget. Each widget is unambiguously addressed (its path) via its window name or its category name and its widget name. An entry looks like this: “windowname.widgetname” = value

Each widget type has its own kind of values (e.g. for booleans: true=1, false=0); the most complex widget is surely “list”. More details about the different widgets can be found in the Programming Guide in section 9.3.1.
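The line format can be sketched with a small reader. This is illustrative Python, not iceWing code; the widget path "GrabImage1.Interlace" is hypothetical, and comment handling with ’#’ is assumed to mirror the session file.

```python
# Illustrative reader for '"windowname.widgetname" = value' lines.
# Assumptions: plain double quotes around the path, '#' starts a
# comment (mirroring the session file), the value is kept as a raw
# string since each widget type interprets it differently.
def parse_values_line(line):
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    path, _, value = line.partition("=")
    path = path.strip().strip('"')
    window, _, widget = path.partition(".")
    return window, widget, value.strip()

print(parse_values_line('"GrabImage1.Interlace" = 1'))
```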