WO2007077283A1 - Method and device for controlling auto focusing of a video camera by tracking a region-of-interest - Google Patents


Publication number
WO2007077283A1
Authority
WO
WIPO (PCT)
Prior art keywords
roi
image frame
region
tracking
frp
Application number
PCT/FI2005/050495
Other languages
French (fr)
Inventor
Fehmi Chebil
Mohamed Khames Ben Hadj Miled
Asad Islam
Original Assignee
Nokia Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Nokia Corporation filed Critical Nokia Corporation
Priority to US12/087,207 priority Critical patent/US8089515B2/en
Priority to EP05821826A priority patent/EP1966648A4/en
Priority to JP2008547998A priority patent/JP2009522591A/en
Priority to PCT/FI2005/050495 priority patent/WO2007077283A1/en
Publication of WO2007077283A1 publication Critical patent/WO2007077283A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Definitions

  • the present invention relates to an electronic device equipped with a video imaging process capability, which device includes
  • a camera unit arranged to produce image frames from an imaging view which includes a region-of-interest ROI
  • an adjustable optics arranged in connection with the camera unit in order to focus the ROI on the camera unit
  • an auto-focus unit arranged to analyze the ROI on the basis of the tracking results provided by the tracking unit in order to adjust the optics.
  • the invention also relates to a method and a corresponding program product.
  • the image sensor 11 of the device 10' produces (image) data I for the auto-focus unit 12.
  • Auto-focus unit 12 calculates parameters for motor 13 adjusting the position of the lens 14 on the basis of the data.
  • the motor 13 adjusts the lens 14, owing to which the captured image I is sharper.
  • AAF active auto-focus
  • PAF passive auto-focus
  • the camera emits a signal in the direction of the object (or scene) to be captured in order to detect the distance of the subject.
  • the signal could be a sound wave, as is the case with submarines under water, or an infrared wave.
  • the travel time of the reflected wave is then used to calculate the distance. Basically, this is similar to echo ranging in radar or sonar. Based on the distance, the auto-focus unit then tells the focus motor which way to move the lens and how far to move it.
  • the cameras determine the distance to the subject by analyzing the image.
  • the image is first captured, and through a sensor, dedicated for the auto-focus, it is analyzed.
  • the sensor specific to the auto-focus use has a limited number of pixels. Thus, often only a portion of the image is utilized.
  • a typical auto-focus sensor applied in PAF is a charge-coupled device (CCD). It provides input to algorithms that compute the contrast of the actual picture elements.
  • the CCD is typically a single strip of, for example, 100 or 200 pixels. Light from the scene hits this strip and the microprocessor looks at the values from each pixel. This data is then analyzed by checking its sharpness, horizontally or vertically, and/or its contrast. The obtained results are then sent as feedback to the lens, which is adjusted to improve the sharpness and the contrast. So, for example, if the image is very blurred, then it is understood that the lens needs to move forward in order to adjust the focus.
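The contrast check described above can be sketched in a few lines. This is an illustrative reconstruction, not code from the patent; the function name, the sum-of-squared-differences measure and the sample strips are assumptions:

```python
# Hypothetical sketch of a contrast-based (passive) focus measure
# computed over a 1-D auto-focus sensor strip.

def contrast_measure(strip):
    """Sum of squared differences between neighbouring pixel values.

    A sharply focused edge yields large pixel-to-pixel differences,
    so a higher value indicates better focus.
    """
    return sum((strip[i + 1] - strip[i]) ** 2 for i in range(len(strip) - 1))

# A gradual ramp (blurred edge) scores lower than a step (sharp edge).
blurred = [10, 30, 50, 70, 90]
sharp = [10, 10, 10, 90, 90]
assert contrast_measure(sharp) > contrast_measure(blurred)
```

A hill-climbing loop would then move the lens in the direction that increases this measure and stop near its maximum.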
  • Such a problem arises in many applications, such as the initialization of region-of-interest (ROI) tracking in a video sequence.
  • ROI region of interest
  • the skin color information is used to detect the foreground.
  • Some approaches apply feature-based refinement to distinguish faces from other parts of the body with similar color characteristics, e.g. hands.
  • Such methods are not applicable in the general case, where the target can be of any type, such as, for example, a car, an airplane, an athlete, an animal, etc.
  • When the target is not necessarily of a particular type, e.g. a human face, but can rather be any object, the identification cannot be done automatically and user input becomes necessary.
  • a tracking process is performed in order to localize the ROI in each frame.
  • visual tracking of non-rigid objects in a video sequence can be seen as a recursive-matching problem from which a certain reliability is required.
  • a region in the current frame is matched to a previous instance, or a model, of the target.
  • the difficulty of the tracking problem is of multiple dimensions.
  • a 2D video frame does not capture the depth of the 3D objects in it. Changes in the object that occur with time, such as translation, rotation, deformation, etc., are not captured faithfully on the 2D screen. Tracking the object while it undergoes these changes is a challenging task.
  • the object may also be affected by its surroundings and other external factors. Some examples of these are interference with other objects, occlusions, changes in background and lighting conditions, capturing conditions, camera motion, etc. All these factors impede a robust and reliable tracking mechanism for the object-of-interest.
  • the matching can be done based on colour, shape or other features.
  • Methods based on one of these aspects usually provide robustness in one sense but show weaknesses under some other scenarios.
  • the tracking of shapes and features involves a significant computational load. Therefore, algorithms that consider more than one visual aspect of the target can enhance tracking performance, but at the expense of a higher computational load.
  • the present invention is intended to create a new type of electronic device equipped with video imaging process capability and auto-focus means, as well as an auto-focus method for video imaging.
  • the characteristic features of the electronic device according to the invention are stated in the accompanying Claim 1 while the characteristic features of the method applied in it are stated in Claim 17.
  • the invention also relates to a program product, the characteristic features of which are stated in the accompanying Claim 33.
  • the invention describes algorithms to utilize in the auto-focus modules of cameras to improve video capturing.
  • the approaches are based on utilizing a region-of-interest tracking technique to identify the optimal parameters for focus control.
  • the invention provides means for efficiently keeping the focus on an object during the video recording process.
  • the invention comprises an algorithm for identifying an object and tracking it, and an auto-focus unit which uses the tracking results to control the lens system.
  • the invention optimizes the performance of the passive auto-focus lens by introducing a region-of-interest tracking technique, which keeps the object of interest sharp and of the same size in proportion to the video frames along the video sequence.
  • the technique can be implemented, for example, within an auto-focus unit. With the ROI tracking results, a more accurate update of the lens movements is achieved; while the camera is recording, the auto-focus is performed automatically.
  • the region-of-interest (ROI) tracking in a video sequence is performed by applying a macroblock-based region-of-interest tracking method.
  • two histograms may be calculated in order to find out which pixels of the current macroblock of the current image frame are target pixels and which are background pixels.
  • the histograms are calculated for the ROI region and for the background region of the previous frame. The information about which regions belong to the ROI and which belong to the background is obtained from the ROI mask of the previous frame.
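The histogram test just described can be sketched as follows. This is a hedged illustration only: the function names, the bin size and the comparison rule (label a pixel as target when its colour is relatively more frequent in the ROI histogram than in the background histogram) are assumptions, not the patent's exact method:

```python
# Illustrative sketch: classify pixels of a macroblock using normalized
# luminance histograms built from the ROI and background of the previous frame.

BIN = 32  # assumed quantization step for 8-bit luminance values

def normalize(hist):
    """Turn raw bin counts into relative frequencies."""
    total = sum(hist.values()) or 1
    return {k: v / total for k, v in hist.items()}

def classify_block(pixels, roi_hist, bg_hist):
    """Return a per-pixel mask: 1 = target pixel, 0 = background pixel."""
    roi_p, bg_p = normalize(roi_hist), normalize(bg_hist)
    mask = []
    for p in pixels:
        q = p // BIN  # quantized intensity bin
        mask.append(1 if roi_p.get(q, 0.0) > bg_p.get(q, 0.0) else 0)
    return mask

# Previous-frame statistics: ROI mostly bright, background mostly dark.
roi_hist = {7: 90, 1: 10}
bg_hist = {1: 95, 7: 5}
print(classify_block([250, 40, 230], roi_hist, bg_hist))  # -> [1, 0, 1]
```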
  • the invention describes a new approach characterized by tracking robustness and low computational complexity. These two features are the main measures of whether or not a tracking technique is implementable in an application targeting mobile devices where system resources are very limited.
  • the tracking scheme according to the invention provides robustness to shape deformation, partial occlusion and environment variations while maintaining a low computational complexity with reliable and efficient performance.
  • With the invention it is also possible to apply a semi-automatic identification of an object or a region of interest (ROI) in an image or a video frame.
  • ROI region of interest
  • the local color-content inside and around the defined area including the object of interest is analyzed in order to distinguish between background and target.
  • the output of this process may be a mask describing the ROI.
  • the developed algorithm is quite simple.
  • Figure 1a shows the principle of a generic architecture of a passive auto-focus system according to the prior art
  • Figure 1b shows the principle of a device according to the invention, in which the identification and the tracking of the ROI are applied in connection with the determination of the spatial position of the ROI
  • Figure 2 shows an example of the method according to the invention as a flowchart
  • Figure 3 shows an illustration of the ROI tracking in the scene plane
  • Figure 4 shows an illustration of the ROI tracking in the Z-plane
  • Figure 5 shows an example of the ROI identification process according to the invention as a flowchart
  • Figures 6a and 6b show examples of the window selection procedure in order to define the area including the object of interest
  • Figure 7a shows an example of the color-content based distinction of target from background using the color histograms corresponding to exocentric rectangular regions
  • Figure 7b shows an example of the color histograms of the exocentric rectangular regions
  • Figures 8a and 8b show examples of the produced ROI masks of embodiments of Figure 6a and 6b
  • Figure 9 shows an example of the ROI tracking process according to the invention as a flowchart
  • Figures 10a, 10b show principle examples of the generation of the block-wise histograms, for the background and the ROI
  • Figure 11 shows examples of the shape of the mask block and probability macroblocks in connection with the shape matching procedure
  • Figures 12a, 12b show a mask of the Y-component and the ROI mask as an example in the real imaging situation
  • Figures 13a - 13f show examples of the color componentwise and probability histograms from the real imaging situation presented in Figures 12a and 12b.
  • Figure 1b shows a rough schematic example of the functionalities in a device 10, in as much as they relate to the invention.
  • the camera means of the device 10 can include the functional components or sub-modules 11 - 14, which are, as such, known; they are shown in Figure 1b and were already described in connection with Figure 1a in the prior art section.
  • These modules 11 - 14 form a loop architecture, which is performed continuously in connection with the video recording.
  • by video recording may be understood both the measures that are performed before the actual recording process (initialization measures) and those during the actual recording, in which storing or network streaming of the video data is performed.
  • At least part of the functionalities of the camera may be performed by using data-processing means, i.e. CPU 15 - 17.
  • This may include one or more processor units or corresponding.
  • as an example, the image processor CPU may be mentioned, with the auto-focus unit 12 included in it.
  • the auto-focus unit 12 may also be a separate entity which communicates with main processor CPU.
  • the program product 30 is implemented on either the HW or SW level, in order to perform actions according to the invention.
  • the device 10 can also include a display / viewfinder 18 on which information can be visualized to the user of the device 10.
  • the device 10 also includes a processor functionality CPU, which includes functionalities for controlling the various operations of the device 10.
  • the actions relating to the auto-focusing process according to the invention can be performed using program 30.
  • the program 30, or the code 31 forming it can be written on a storage medium MEM in the device 10, for example, on an updatable, nonvolatile semiconductor memory, or, on the other hand, it can also be burned directly in a circuit CPU, 15 - 17 as an HW implementation.
  • the code 31 consists of a group of commands 31.1 - 31.18 to be performed in a set sequence, by means of which data processing according to a selected processing algorithm is achieved.
  • by data processing can mainly be understood the actions and measures relating to the ROI identification 15 that is performed before the storing video recording process.
  • data processing also means, in this connection, the measures and actions performed by the ROI tracking 16, the determination process 17 of the spatial position of the ROI in the video frame and also the auto-focusing process 12. These three actions are all performed during the actual video recording process, as will be explained later in greater detail.
  • Figure 2 presents as a flowchart an example of the main stages of the invention.
  • the basic idea of the invention is not intended to be limited to these steps or their performance order.
  • other additional steps may also come into question and/or the performance order of the presented (or currently not presented) steps may also vary, if that is possible.
  • There may also be sub-processes, like the ROI tracking 204, that are performed independently relative to the other steps (code means 31.2, 31.7). Owing to this, the ROI tracking 204, 16 always provides the most recent ROI mask for the unit 17, which determines the spatial position of the ROI in a frame and provides that coordinate information to the auto-focus unit 12.
  • the method according to the invention may be built on top of a passive auto-focus method if the invention is applied in connection with the video capturing process.
  • the method is basically divided into two stages (tracking, and updating the lens or, in general, the optics 14).
  • an object of interest is first identified. This identification may be performed either user-aided or totally automatically by the device 10.
  • the region of interest is a target defined by a user or an application in a frame in a video sequence.
  • the object T is made the interest of the camera lens 14 and the focus is adjusted to get the sharpest image quality for this object.
  • the region-of-interest (ROI) tracking algorithm 16 is following it and sending feedback parameters to the auto-focus actuator (motor) 13 to move the auto-focus lens 14 accordingly backward or forward depending on the current situation.
  • stage 202 may be performed where the region-of-interest is identified in the viewfinder video (code means 31.1) .
  • This procedure may include several sub-stages, which are described in more detail in connection with Figure 5, which is explained more precisely hereinafter. Of course, other methods may also be applied in this ROI identification process. The invention is not intended to be limited to the embodiment presented in Figure 5.
  • the video capturing process starts in order to produce video for the desired purpose (for example, for storing to the memory MEM or for streaming to the network) (stage 203) .
  • the captured video image data, i.e. the video frames, are processed at stage 203' in a manner known as such, after which they are stored to the memory MEM of the device 10, for example.
  • auto-focus is also performed in the loop 204 - 213 in order to adjust the focus lens system 14 and keep the target in the image as sharp as possible.
  • This ROI tracking stage 204 also includes several sub-stages, which are described more precisely as an embodiment a little later. Basically, any method may be used in this connection.
  • The data/results achieved owing to the ROI tracking stage 204 may be used in the next stage 205, which is the determination of the plane position of the ROI (code means 31.3, 31.4).
  • In Figure 3 an example of the determination of the plane position of the ROI is presented.
  • Of the frames FR1 - FR3, only the frame FR1 exists in the current loop cycle.
  • There the head T is the ROI, i.e. the target, and its spatial position in the X-Y plane may vary between the frames FR1 - FR3 during the recording process.
  • In this stage 205 the XY-coordinates of the ROI's new position in the image frame FR1 - FR3 are found. This information is then applied in connection with the auto-focus stage 210.
  • the spatial position of the ROI T along the Z-axis is determined (code means 31.3, 31.5). This is performed in order to detect the movement of the ROI T in the depth of the imaging view (code means 31.5).
  • Figure 4 presents an example relating to that.
  • the middle frame FR2' presents the size of the target T at the initial capturing point.
  • the frames may be obtained, for example, from stage 202, or from each loop cycle if the shape of the ROI changes during the recording process.
  • the ROI ratio, i.e. the initial (or current) size ratio of the ROI relative to the frame size, may be named R1.
  • the ROI_Area means the number of pixels of the ROI.
  • ROI_ratio_current: the ratio of the tracked ROI area with respect to the video frame size
  • ROI_ratio_old: the previous ratio, obtained from one of the older frames
  • an evaluation of how much movement on the Z-axis took place between the consecutive frames FR1' - FR3' is performed. This is done by analyzing the size ratio of the ROI T, with respect to the frame size, between produced consecutive image frames (FR1' - FR3'). Using the ROI size variation results, the lens system 14 is adjusted in an established manner (code means 31.6). More precisely, if the current ROI_ratio is greater than R1 (i.e. the target is nearer to the device 10 than desired/originally), then the lens 14 must be instructed to move backward. Owing to this, the desired/original situation, i.e. the size ratio R1 of the ROI in the frame FR2', is achieved.
  • An example of that kind of situation is presented in frame FR1'. If the ROI_ratio is less than R1 (i.e. the target is farther from the device 10 than desired/originally), then the lens 14 must be instructed to move forward. Owing to this, the desired/original situation, i.e. the size ratio R1 of the ROI in the frame FR2', is achieved. This situation is presented in the frame FR3'.
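The Z-axis rule above can be sketched as a small decision function. This is an illustrative reconstruction; the function name, the dead-band tolerance and the sample numbers are assumptions, not values from the patent:

```python
# Hypothetical sketch: derive the focus-lens direction from the ROI's
# current area ratio versus the initial ratio R1.

def lens_direction(roi_area, frame_area, r1, tolerance=0.005):
    """Return 'backward', 'forward' or 'hold' for the focus lens."""
    roi_ratio = roi_area / frame_area
    if roi_ratio > r1 + tolerance:   # target appears larger -> nearer
        return "backward"
    if roi_ratio < r1 - tolerance:   # target appears smaller -> farther
        return "forward"
    return "hold"                    # within tolerance: leave lens as-is

frame = 320 * 240                    # QVGA frame, 76800 pixels
r1 = 2000 / frame                    # ratio captured at initialization
print(lens_direction(3000, frame, r1))  # target grew -> prints "backward"
print(lens_direction(1200, frame, r1))  # target shrank -> prints "forward"
```

The small tolerance band avoids oscillating the lens on minor frame-to-frame variations of the tracked mask.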
  • the advantage achieved owing to the determination of the size ratio of the ROI in the current frame relates to the ROI tracking stage 204.
  • the adjusting of the auto-focus 14 is based on the currently measured values, i.e. realized image content. Owing to the Z-axis determination of the target T, the adjustment takes into account the area ratio of the ROI with respect to the whole frame and implicitly takes into account target motion by keeping enough space between the target boundaries and the frame boundaries. This guarantees that the ROI will stay in focus in the next frame without doing any estimation.
  • In stage 209 the coordinates of the ROI and the distance of the ROI from the camera 10 are sent to the auto-focus unit 12.
  • the auto-focus unit 12 extracts the pixels from the ROI and analyzes their sharpness and contrast. This may be performed by using the whole area of the ROI or at least part of the ROI.
  • Figure 3 illustrates an embodiment in which only portions of the ROI are analyzed and used for fixing the auto- focus 12.
  • In the area of the frame FR1 - FR3 there are projected squares F1 - F4 or, in general, areas which may be understood as focus points.
  • they may cover the frame FR1 - FR3 mainly and evenly.
  • the ROI, or at least part of it, is always in the area of one or more of the focus points F1 - F4, covering them at least partly.
  • Here the ROI is in the area of F1, and only the data of the ROI in that area F1 is then used in the sharpness and contrast analysis. This also reduces the need for calculation power.
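Selecting which focus points to analyze reduces to a rectangle-intersection test. The sketch below is illustrative only; the rectangle layout, coordinates and names are made up for the example:

```python
# Hypothetical sketch: restrict sharpness analysis to the focus areas
# F1..F4 that overlap the tracked ROI window. Rectangles are (x, y, w, h).

def overlaps(a, b):
    """True if axis-aligned rectangles a and b intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

focus_points = {"F1": (0, 0, 160, 120), "F2": (160, 0, 160, 120),
                "F3": (0, 120, 160, 120), "F4": (160, 120, 160, 120)}
roi_window = (40, 30, 60, 50)  # tracked ROI bounding box (illustrative)

active = [name for name, rect in focus_points.items()
          if overlaps(rect, roi_window)]
print(active)  # ROI lies entirely inside F1 -> prints ['F1']
```

Only the pixels of the ROI inside the active focus areas would then feed the sharpness/contrast analysis.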
  • In stage 211 the unit 12 finds the parameters needed for updating the distance of the lens 14 from the target T.
  • the auto-focus unit 12 also uses the movement in the Z-direction to update the new position of the lens 14.
  • In stage 212 the results/parameters determined by the auto-focus unit 12 are sent to the actuator/motor 13 in order to update the lens 14 position. This updating of the position of the lens 14 is not presented in detail in this flowchart, because it may be performed in a well-known manner.
  • If in stage 213 an end of recording is noticed, then the process is terminated at stage 214.
  • Figure 5 presents, as an example, a flowchart of how these sub-stages of main stage 202 may be implemented.
  • the device 10 presented in Figure 1b is equipped with a corresponding functionality 15, which may be based on the processor CPU and the program codes 31.1, 31.16 - 31.18 executed by it.
  • the ROI identification process starts 501 with displaying the image/frame of interest.
  • the imaging scene/view is displayed, including the target i.e. the ROI.
  • Figures 6a and 6b present examples of the viewfinder views VFV1, VFV2 (left side images).
  • In stage 502 the user is asked to pick an object T1, T2 from the image VFV1, VFV2.
  • the user can pick an object T1, T2 of interest in the viewfinder video frame VFV1, VFV2 that is currently viewed on the display 18 when the camera sensor 11 of the device 10 is aimed towards the desired scene BG1, T1, BG2, T2.
  • the application in the camera device 10 may also pick that object T1, T2 for the user.
  • a segmentation technique may be applied, for example.
  • the user is asked to define the target T1, T2.
  • the user may draw a window or adjust a rectangle WIN1, WIN2 around the target T1, T2 in the frame VFV1, VFV2 displayed on the display 18 in order to enclose the object of interest.
  • the viewfinder views VFV1, VFV2 relating to this window WIN1, WIN2 definition step 502 are presented on the right side of Figures 6a and 6b.
  • the target T1, i.e. the region of interest, is the walking couple, and the sea scenery represents the background BG1.
  • the target T2 is the boat and the mountain/water landscape represents the background BG2.
  • the drawing of the window WIN1, WIN2 can be done on a touch screen if it is part of the system; otherwise the UI displays a rectangle and the user can move and resize it using some specified cursor keys of the keypad 19, for example.
  • the object T1, T2 picked in stage 502 is also named a region of interest (ROI1, ROI2).
  • the coordinates of the selected rectangle or window WIN1, WIN2 are determined in stage 503, and in stage 504 they are read and passed from the UI to the identification algorithm.
  • the window coordinates may include, for example, the top-left and bottom-right corners of the defined window WIN1, WIN2.
  • the statistics of the colours in connection with the defined window WIN1, WIN2 are analyzed and, as a result, the object T1, T2 inside the defined window WIN1, WIN2 is automatically identified.
  • In the picking stage 502 the user must leave some safe zone around the target T1, T2 when defining the window WIN1, WIN2.
  • Figure 7a describes the principle of the identification algorithm in a more detailed manner.
  • the defined area in the viewfinder frame VFV3 i.e. the target T3 is the face of the ice-hockey player.
  • two other rectangles REC1, REC2 are constructed, which are close to the defined window WIN3 (including the target T3).
  • Rectangles REC1, REC2 may be constructed by the algorithm itself (code means 31.17).
  • One rectangle REC2 is inside the window WIN3 defined by the user and the other rectangle REC1 is outside of the defined window WIN3.
  • the spacing between the edges of the constructed rectangles REC1, REC2 is small with respect to their widths and lengths.
  • the intensity histograms are computed for each rectangle REC1, REC2 and for the defined window WIN3. These histograms describe the corresponding luminance content of the concerned rectangle. Let h0, h1 and h2 be these histograms associated, respectively, with the selected area WIN3 and the outside and inside rectangles REC1, REC2.
  • histogram-based matching for each pixel within the selected window WIN3, is performed (code means 31.16) .
  • the matching is followed by a binary dilation in order to uniformly expand the size of the ROI in a controlled and well-defined manner (code means 31.18).
  • Binary dilation may be performed with neighbouring pixels, for example, a 3x3 block. Other block sizes may also be applied depending on, for example, the frame resolution.
  • the purpose of the binary dilation process is to fill in any small "holes" in the mask due to pixels that are in the ROI but estimated to be in the background based on their color content. Generally speaking, this is a unifying process in which the neighbourhood of the current pixel is harmonized.
  • the binary dilation filter should not be too large, in order not to expand the ROI region into the background. This kind of operation can be applied simultaneously with pixel matching in order to achieve a significant reduction in complexity.
  • In stage 509 of main stage 508 the next pixel within the selected area WIN3 is taken.
  • In stage 510 the pixel luminance value is quantized with the histogram bin size, resulting in the quantized intensity value 'q'.
  • Figure 7b presents an example of the color histograms h0 - h2 of the exocentric rectangular regions WIN3, REC1, REC2.
  • In stage 511 the status of each pixel in the selected area WIN3 is determined.
  • the ratios representing the relative recurrence, i.e. with respect to the pixel count in WIN3, of the pixel colour both inside REC2 and within the layer between WIN3 and REC1 are computed.
  • In stage 512 a test is performed. If the calculated ratio r1 < threshold1 and the ratio r2 > threshold2, then a step to stage 513 is made.
  • the value of threshold2 is to be chosen within the range [0.5, 0.7], whereas a good choice of threshold1 would be within [0.05, 0.25].
  • the threshold values may also be chosen based on the ratios of the areas of REC2 and of the region between WIN3 and REC1 divided by the area of WIN3. For the sake of a simple and efficient implementation of the omnidirectional dilation method, the current pixel of the defined area WIN3 and all its 8 nearest neighbours are considered to be part of the ROI, i.e. the target T3, if both thresholding tests are satisfied.
  • the ROI mask (M1, M2 in Figures 8a, 8b, considering the imaging cases presented in Figures 6a and 6b) is initialized accordingly at the current pixel and its neighbours. These steps, starting at stage 509, are repeated for the next pixel, and this loop is continued until each of the pixels of the defined area WIN3 has been tested.
  • If one, or both, of the conditions of the test of stage 512 fails, then the current pixel of the defined area WIN3 is considered not to be part of the ROI, i.e. the target T3. It is decided to belong to the background and no further actions are then required. A step back to stage 509 is taken if there are still untested pixels in the area WIN3.
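The per-pixel loop of stages 509 - 513 (quantization, the two-threshold test against the REC2 and WIN3-to-REC1 histograms, and the 8-neighbour dilation) can be sketched as below. This is a hedged reconstruction: the bin size, default thresholds and all names are illustrative assumptions:

```python
# Illustrative sketch of the two-threshold ROI identification with
# omnidirectional (8-neighbour) dilation described in the text above.

BIN = 16  # assumed histogram bin size for 8-bit luminance

def identify_roi(win_pixels, h2, h_layer, t1=0.15, t2=0.6):
    """Mark each WIN3 pixel (plus its 8 neighbours) as ROI (1) or not (0)."""
    n = sum(len(row) for row in win_pixels)   # pixel count of WIN3
    h, w = len(win_pixels), len(win_pixels[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            q = win_pixels[y][x] // BIN       # quantized intensity 'q'
            r1 = h_layer.get(q, 0) / n        # recurrence in WIN3-REC1 layer
            r2 = h2.get(q, 0) / n             # recurrence inside REC2
            if r1 < t1 and r2 > t2:
                # dilation: mark the pixel and its 8 nearest neighbours
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if 0 <= y + dy < h and 0 <= x + dx < w:
                            mask[y + dy][x + dx] = 1
    return mask
```

Here `h2` and `h_layer` stand for the precomputed histograms of REC2 and of the ring between WIN3 and REC1; both ratios are taken relative to the WIN3 pixel count, as the text describes.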
  • When the stage totality 508 is entirely performed, a step to stage 514 is taken. There it is possible to perform one or more possible refinement steps.
  • the refinement steps may apply, for example, morphological filters. These may be used to guarantee regularity in the object shape by eliminating any irregularities, such as, holes, small isolated areas, etc.
  • In stage 515 the produced ROI mask is passed to an algorithm that tracks the moving or stationary ROI in the video capturing process (stage 204).
  • At stage 204 the user is informed in the UI that the automatic identification of the ROI has been performed and that it is now possible to start the actual video recording process.
  • the ROI masks M1, M2 of the example cases of Figures 6a and 6b are presented on the right side of Figures 8a and 8b.
  • the target T1, T2 is indicated in white and the background BG1, BG2 is indicated in black.
  • the ROI identification solution described just above takes into account a very important factor. That is the simplification of the user interaction in order to keep any related application attractive and pleasant.
  • the user draws (on a touch screen) or adjusts (using specified cursor keys) a rectangle WIN1 - WIN3 containing the target T1 - T3.
  • the algorithm reads the window WIN1 - WIN3 coordinates from the UI and automatically identifies the ROI in the manner just described above.
  • the algorithm is of low computational complexity, as it is based on color-histogram processing within the selected window WIN1 - WIN3 and simple morphological filters.
  • the output of the identification is a generated mask M1, M2 for the ROI, which is easy to use in any ROI tracking approach during the actual recording process. All these features make such a method suitable for mobile device applications.
  • The method described above can be implemented as the initialization of the ROI tracker 16, which is described next as one example.
  • Tracking of the region-of-interest is described next. It is the software module that makes use of the results of identification stage 202 described just above.
  • the device 10 may have functionality concerning this ROI tracking 16. This may also be implemented by the processor CPU and program codes 31.2, 31.7 - 31.15 executed by the processor CPU. A tracked object from the recorded video can then be used for improved and customized auto-focus 12 of the camera 10, for example.
  • the tracking process according to the invention can be understood as a two-stage approach.
  • the first step applies localized colour histograms.
  • the second phase is applied only if the target i.e. the region-of-interest ROI and the neighbouring background regions, within a local area, share some colour content. In that case, simple shape matching is performed (code means 31.13) .
  • the goal of the tracking stage 204 is to define a ROI mask M5 describing the current location and shape of the target T4, T5 in each frame FRC, FRP, FR and output a tracking window TWP, TWC, TW containing the target T4, T5.
  • the design, i.e. the size and position, for example, of the tracking window TWP, TWC, TW takes into account the motion of the target T4, T5, so that in the next frame the target T4, T5 is expected to stay within the window TWP, TWC, TW, i.e. a background BG4, BG5 margin is kept on the sides of the target T4, T5.
  • the main stage 204 assumes that the user defines a window or area, presented in Figures 6a and 6b, or a region-of-interest ROI, around the targeted object T4, T5 to give an idea of the location of the region-of-interest. Otherwise, there could be many potential objects in the video frame to choose from, and without a-priori information, it would be impossible to target the correct one.
  • the next task is to identify the object-of-interest within the defined window.
  • Various algorithms can be used to segment the object within the ROI and separate it from the background.
  • the main stage 202 and the more detailed embodiment presented in Figure 5 performs these duties.
  • the object is identified by a mask (stage 515) , which is fed to the tracking process 204 described next.
  • the task of the tracking process 204 is to update the mask M5 and the tracking window TWC, TWP, TW in each frame FRC, FRP, FR and provide correct information about the object.
  • the stages 205 - 212 may then be successfully performed in determining the spatial position of the ROI for the auto-focus unit 12.
  • the input data to the ROI tracking stage 204 includes the previous tracked frame FRP, FR (i.e. the image content of the previous frame), the image content of the current frame FRC and also the ROI mask M5 of the previous frame FR.
  • the ROI mask M5 is required in order to decide which parts of the previous frame FR represent background portions BG5 and which parts of the previous frame FR represent the target T5.
  • some frames must already be produced with the camera sensor 11 (in stages 203, 203' ) before it is possible to start the main stage 204.
  • the tracking window TWC is projected on the current frame FRC.
  • the tracking window TWC is an area defined by the tracking algorithm and whose corner coordinates are up- dated at each cycle of the loop 204. If the current loop of the tracking is the first, then the tracking window TWC may be generated based on the identified (initial) ROI mask produced in the stage 202.
  • the current tracking window TWC is divided into macroblocks MB.
  • Macroblock MB is a geometric entity representing a set of pixels belonging to an N-by-N square (for example, 8x8 or 16x16).
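Such a division of the tracking window into macroblocks can be sketched as follows. This is a minimal illustration, not the patented implementation; the block size and the decision to keep smaller edge blocks are assumptions.

```python
import numpy as np

def macroblocks(window, n=16):
    """Split a tracking-window region (H x W array) into n-by-n
    macroblocks; blocks at the right/bottom edges may be smaller
    than n-by-n.  Returns a list of ((row, col), block) tuples."""
    h, w = window.shape[:2]
    blocks = []
    for r in range(0, h, n):
        for c in range(0, w, n):
            blocks.append(((r, c), window[r:r + n, c:c + n]))
    return blocks

# A 33x40 window yields 3 * 3 = 9 blocks with n = 16.
win = np.zeros((33, 40), dtype=np.uint8)
print(len(macroblocks(win)))  # -> 9
```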
  • the colour histogram technique used in the first stage is applied macroblock-wise. It provides good tracking robustness.
  • the current macroblock MB inside the current projected tracking window TWC of the current frame FRC, presented with short dashed lines, is projected on the previous frame FRP (the small dashed block PMB in the previous frame FRP).
  • projecting means in this connection that an area PMB is defined in the previous frame FRP whose location and size correspond to the location and size of the macroblock MB in the current frame FRC.
  • in stage 903 a search block SB is defined, presented with a dashed line in the previous frame FRP (in Figure 10a).
  • This search block SB surrounds the projected macroblock PMB that was just projected there.
  • the search block SB defined in the previous frame FRP represents the search area for the current macroblock MB.
  • each of the projected macroblocks PMB of the current tracking window TWC, which are gone through one by one by the algorithm (code means 31.8), has its own search block SB.
  • the search block SB is constructed by enlarging the projected macroblock PMB in the previous frame FRP in each direction, as indicated by the double-headed arrows (code means 31.9) .
  • the distance of the enlarging may be a constant equal to the estimated motion range (code means 31.10). Such a distance may represent the maximum possible motion range. It can be estimated adaptively based on previous values or can be set to a constant that is large enough to be an upper bound for displacements within video sequences (e.g. 16 or 32). Its purpose is to ensure that the best match for the current macroblock MB is inside the search block SB defined for it.
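The construction of the search block described above can be sketched as a small helper. The clamping to the frame borders is an assumption not spelled out in the text; the margin corresponds to the estimated motion range (e.g. 16 or 32).

```python
def search_block(top, left, n, margin, frame_h, frame_w):
    """Enlarge the projected n-by-n macroblock at (top, left) by a
    constant margin (the estimated motion range) in every direction,
    clamped to the frame borders.  Returns (top, left, bottom, right)."""
    t = max(0, top - margin)
    l = max(0, left - margin)
    b = min(frame_h, top + n + margin)
    r = min(frame_w, left + n + margin)
    return t, l, b, r

# A block near the left border is clamped at column 0.
print(search_block(100, 8, 16, 16, 288, 352))  # -> (84, 0, 132, 40)
```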
  • Figure 12a presents the previous image frame FR and, more particularly, its Y-component (if the applied color space is YUV).
  • this component-wise frame FR may now also be understood in this connection as the actual image frame that is observed.
  • the content of the other components (U and V) would be very blurred (even if they were printed using laser printing technology, not to speak of the offset printing used in patent publications). For this reason these other (U and V) component frames are not presented in this context.
  • in Figure 12a are shown two projected macroblocks PMB1, PMB2, which have their own search blocks SB1, SB2 in the tracking window TW. It should be understood that though the blocks PMB1, PMB2, SB1, SB2 are here presented in the same figure, in reality they are tracked in order by the tracking loop stage 204. Thus, only one macroblock is under the tracking process during the current tracking loop 204.
  • in stage 904 two histograms are computed (code means 31.11): the background and ROI histograms.
  • the histograms are computed only for the pixels inside the search block SB, SB1, SB2, in the previous frame FRP, FR, surrounding the projected macroblock PMB, PMB1, PMB2. This means that the pixels of the previous frame FRP, FR which are inside the area of the projected macroblock PMB, PMB1, PMB2 of the previous frame FRP, FR are also taken into account when constructing the ROI and background histograms of the search block SB, SB1, SB2 of the previous frame FRP, FR.
  • the histograms represent the color content of the target and background portions inside the large dashed-line block SB, SB1, SB2. Using these background and target histograms, it is possible to detect a possible common colour between the two regions (target and background), whereas processing macroblock-wise results in an efficient handling of shape flexibility. In this way, the generated histograms provide a description of localized color distributions of the ROI and the background.
  • histograms are constructed for two areas: the ROI area and the background area.
  • For each area the Y, U and V component-wise histograms are constructed.
  • YUV is only used as an example in this connection.
  • An example of these histograms, in the case of the real imaging situation shown in Figure 12a, is presented in the upper parts of Figures 13a - 13f.
  • These histograms describe the colour contents of the target and background regions inside the search block SB1, SB2 defined in the previous step.
  • the histograms H1 - H3 are the YUV histograms of the case one target region T5, i.e. the component-wise histograms of the search block SB1 of Figure 12a.
  • the histograms H4 - H6 are the YUV histograms of the case two target region T5, i.e. the component-wise histograms of the search block SB2 of Figure 12a.
  • the histograms H1' - H3' are those of the background portion BG5 of search block SB1, and histograms H4' - H6' are those of the background of search block SB2.
  • the status of each pixel of the search block SB1, SB2, i.e. whether the pixel is a ROI pixel or a background pixel, is determined on the basis of the ROI mask M5.
  • These six histograms may be constructed as follows. For every pixel in the search block SB1, SB2 in the previous frame FR, including also the projected macroblock area PMB1, PMB2 of the previous frame FR, an analysis is performed according to which:
  • each bin (i.e. X-axis) in the ROI histograms H1 - H6 represents the number of pixels (Y-axis) whose color values fall into a specific range and which belong to the target region T5 in the search area SB1, SB2 of the previous frame FR.
  • the background BG5 histograms H1' - H6' represent the number of pixels within the search block SB1, SB2 of the previous frame FR that are discovered to be background pixels based on the ROI mask M5.
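The construction of the ROI and background histograms for one colour component could look like the following sketch, assuming NumPy and 8-bit component values; the bin count of 32 is an illustrative choice, not taken from the text.

```python
import numpy as np

def roi_bg_histograms(component, mask, bins=32):
    """Localized colour histograms for one component (Y, U or V) of a
    search block in the previous frame.  `mask` is the matching slice
    of the ROI mask M5: 1 marks target pixels, 0 background pixels."""
    roi_hist, _ = np.histogram(component[mask == 1], bins=bins, range=(0, 256))
    bg_hist, _ = np.histogram(component[mask == 0], bins=bins, range=(0, 256))
    return roi_hist, bg_hist

# Bright target pixels end up in the ROI histogram, dark ones in the
# background histogram, according to the mask.
comp = np.array([[200, 200], [10, 10]])
m = np.array([[1, 1], [0, 0]])
roi_h, bg_h = roi_bg_histograms(comp, m)
print(roi_h.sum(), bg_h.sum())  # -> 2 2
```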
  • next, probabilities of belonging to the region-of-interest of the current frame FRC are computed from the ROI and background histograms.
  • the computed probabilities are then applied to the current macroblock in the current frame.
  • the probabilities indicate the status of the pixels of the current macroblock (MB), i.e. whether a given pixel of the current macroblock is more likely a ROI pixel or a background pixel (code means 31.12).
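One plausible way to turn the two histograms into per-pixel ROI probabilities is the per-bin ratio below. The 0.5 fallback for empty bins is an assumption, not something stated in the text.

```python
import numpy as np

def probability_macroblock(mb, roi_hist, bg_hist, bins=32):
    """Map each pixel of the current macroblock MB to the probability
    of being a ROI pixel, using the per-bin ROI/background counts
    gathered from the search block of the previous frame."""
    idx = (mb.astype(np.int32) * bins) // 256          # bin index per pixel
    total = roi_hist + bg_hist
    # Per-bin probability of ROI membership; 0.5 when the bin is empty.
    p_bin = np.where(total > 0, roi_hist / np.maximum(total, 1), 0.5)
    return p_bin[idx]                                   # values in [0, 1]
```

A pixel whose colour bin is populated only by ROI pixels maps to probability 1.0; one populated only by background pixels maps to 0.0.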
  • in Figures 11b and 11c are presented two hypothetical examples of the probability macroblocks Pmb1, Pmb2 for shape matching. These may be imagined to relate to the imaging case presented in Figures 10a and 10b.
  • in Figure 11a is presented a hypothetical ROI mask of the search block SB, in which a macroblock PMB is projected, i.e. defined.
  • in Figures 13a - 13f are also presented corresponding probability histograms P1 - P6, which relate to the real imaging case presented in Figure 12a (YUV color components of search blocks SB1, SB2).
  • in stage 905 an examination is performed relating to the colour content differences between the object and the background of the search block area SB1, SB2. If the histograms for at least one of the components Y, U, V are disjoint, i.e. for each bin either the ROI or the background histogram value is equal to zero, a step to stage 906 is performed. In stage 906 all pixels of the current macroblock MB1, MB2 in the current frame FRC corresponding to that bin are assigned as ROI pixels or background pixels depending on the color of each pixel. More generally, here the statuses of the pixels of the macroblock are determined. This means that there may be both ROI pixels and/or background pixels in the macroblock.
  • the ROI and background histograms for one of the color components are said to be disjoint if for every bin either the ROI histogram or the background histogram is empty. In other words, if there are pixels in the same bin in the histograms of both areas, then the disjoint condition is not valid. This basically means that for that particular color range, all pixels of the macroblock of the current frame are in the ROI or all pixels are in the background. This implies that for each pixel it is clear whether it belongs to the target or to the background, and therefore no further processing is required in this case.
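The disjoint condition as defined above can be checked directly on the two histogram arrays, e.g.:

```python
import numpy as np

def histograms_disjoint(roi_hist, bg_hist):
    """The ROI and background histograms of one colour component are
    disjoint if no bin is populated in both of them."""
    return not np.any((roi_hist > 0) & (bg_hist > 0))

print(histograms_disjoint(np.array([3, 0, 1]), np.array([0, 5, 0])))  # -> True
print(histograms_disjoint(np.array([3, 2, 0]), np.array([0, 5, 0])))  # -> False
```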
  • if in stage 905 it is detected that the histograms of all the components Y, U, V are congruent between the regions, then a step to stage 907 is performed.
  • in stage 907 a probability macroblock is constructed (if it was not already constructed in connection with analysing the disjoint/congruent condition).
  • Congruence means in this connection that the ROI and the background regions inside the large block SB1, SB2 in the previous frame FR share some colors in the case of each component Y, U, V, and for this reason the nature of the macroblock is not so clear.
  • in the probability macroblocks each intermediate color between black and white represents a probability between 0 and 1, and also in the probability histograms P4 - P6 each entry (Y-axis) represents the probability Pu, Py, Pv of the corresponding pixel value (X-axis) being in the target.
  • in stage 907 the shape-based technique may be applied. Shape matching may be performed in the probability domain.
  • the probability macroblock Pmb2 of the current frame FRC is matched to a mask region within the search block SB of the previous frame FRP. This is presented in Figures 11a and 11c.
  • the ROI mask M5 is also applied when deciding if the pixel of the search block SB belongs to ROI or background in the previous frame.
  • the shape-based matching of the ROI region performed in stage 908 is applied between a "shape representation", i.e. a mask with values equal to 0 or 1, and a probability macroblock with values between 0 and 1. At the end of this stage, false alarms can be eliminated and a decision can be made on whether or not a pixel inside the macroblock of the current frame is in the background or in the target.
  • the shape-based matching of the ROI may be performed by minimizing the sum of absolute differences (SAD). This is performed between the probability values of the current macroblock (the probability block) and the values of candidate blocks in the ROI mask (code means 31.14, 31.15).
  • the next step is to find the best block in the ROI mask in stage 908. The algorithm makes a search in the ROI mask to find the macroblock-sized area that is the closest, i.e. has the least sum of absolute differences, to the current probability macroblock. In other words, if the current macroblock has probabilities Pmb(i,j) and the ROI mask has values M(i,j), then a search is performed for the block in the mask that minimizes the sum over (i,j) of |Pmb(i,j) - M(i+k1,j+k2)|.
  • the indexes (i,j) go through the values of the current block.
  • the parameters k1 and k2 indicate the displacements of the mask block when performing the search. These displacements are performed in each direction pixel by pixel, i.e. the matched macroblock is fitted to each location of the ROI mask area. When the best match has been found, a step to stage 909 is performed, in which the mask is updated. After steps 910, 911 a loop is again initiated for the next macroblock of the current frame. If all macroblocks of the current frame have already been gone through, then a step to main stage 205 is performed with the determined ROI mask. Also, the current tracking window is stored in order to be used in the next loop cycle.
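The exhaustive SAD search over the displacements k1, k2 might be sketched as follows. This is an illustrative brute-force version; the actual search range within the ROI mask area is an implementation detail not fixed by the text.

```python
import numpy as np

def best_mask_match(pmb, roi_mask):
    """Slide the probability macroblock Pmb over every position of the
    ROI mask area and return the displacement (k1, k2) minimizing the
    sum of absolute differences  sum |Pmb(i,j) - M(i+k1, j+k2)|."""
    n, m = pmb.shape
    best, best_disp = float('inf'), (0, 0)
    for k1 in range(roi_mask.shape[0] - n + 1):
        for k2 in range(roi_mask.shape[1] - m + 1):
            sad = np.abs(pmb - roi_mask[k1:k1 + n, k2:k2 + m]).sum()
            if sad < best:
                best, best_disp = sad, (k1, k2)
    return best_disp, best

# A 2x2 block of ones matches the ones-region of the mask exactly.
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1
print(best_mask_match(np.ones((2, 2)), mask))  # -> ((1, 1), 0.0)
```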
  • the real-time tracking algorithm described above is very robust to all the variabilities described in the prior art section. At the same time it is a computationally effective and memory-efficient technique, so that it can be adopted in current and upcoming electronic devices generating video sequences. It provides the capability to detect and handle colour similarity between target and background, and robustness to shape deformation and partial occlusion. Generally speaking, it provides means for performing shape matching, i.e. matching in the probability domain.
  • the algorithm according to the invention performs the matching without using any detailed features. This can act as an advantage in that it provides robustness to flexibility in position and shape.
  • the auto-focus 12 according to the invention is based on, for example, the passive scheme. This means that the image data produced by the sensor 11 is applied. In that case the auto-focus unit 12 relies mainly on the sharpness (by computing edges in the horizontal and vertical directions) and on the contrast information. Only certain areas of the image may be considered for this analysis, for example. Based on the results the actuating motor 13 of the lens system updates the position of one or more lenses 14.
  • the method according to the invention may be used in that kind of auto-focus scheme by including the ROI identifier 15 and tracker 16.
  • the invention itself does not depend on the auto-focus scheme used, and the invention may also be implemented in connection with different kinds of auto-focus schemes.
  • the tracker 16 will indicate the coordinates of the areas to analyze in the auto-focus unit 12. This is performed via the spatial positioning unit of the ROI 17. It determines these coordinates in stages 205 and 206.
  • the tracker 16 will also add a third dimension, the displacement of the object along the Z-axis. Owing to this spatial handling without any pre-estimation of the location of the ROI in the frame, a more robust auto-focus is achieved and also moving targets are kept in good focus.

Abstract

The invention concerns an electronic device (10) equipped with a video imaging process capability, which device includes a camera unit (11) arranged to produce image frames (I, FRC, FRP, FR, VFV1 - VFV3, FR1 - FR3, FR1' - FR3') from an imaging view which includes a region-of-interest ROI (T, T1 - T5), an adjustable optics (14) arranged in connection with the camera unit in order to focus the ROI on the camera unit, an identifier unit (CPU, 15) in order to identify a ROI from the image frame, a tracking unit (CPU, 16) in order to track the ROI from the image frames during the video imaging process and an auto-focus unit (12) arranged to analyze the ROI on the basis of the tracking results provided by the tracking unit in order to adjust the optics. The device is arranged to determine the spatial position of the ROI in the produced image frame without any estimation measures.

Description

METHOD AND DEVICE FOR CONTROLLING AUTO FOCUSING OF A VIDEO CAMERA BY TRACKING A REGION-OF-INTEREST
The present invention relates to an electronic device equipped with a video imaging process capability, which device includes
- a camera unit arranged to produce image frames from an imaging view which includes a region-of-interest ROI,
- an adjustable optics arranged in connection with the camera unit in order to focus the ROI on the camera unit,
- an identifier unit in order to identify a ROI from the image frame,
- a tracking unit in order to track the ROI from the image frames during the video imaging process and
- an auto-focus unit arranged to analyze the ROI on the basis of the tracking results provided by the tracking unit in order to adjust the optics.
In addition, the invention also relates to a method and a cor- responding program product.
In Figure 1a has been presented a prior art example of the generic architecture of a passive auto-focus system arranged in connection with a digital imaging system 10'. Generally speaking, auto-focus 12 is the procedure of moving the lens 14 in and out until the sharpest possible image I of the subject T is obtained. Depending on the distance of the subject T from the camera 10', the lens 14 has to be at a certain distance from an image sensor 11 to form a clear image I.
More particularly, the image sensor 11 of the device 10' produces (image) data I for the auto-focus unit 12. The auto-focus unit 12 calculates parameters for the motor 13 adjusting the position of the lens 14 on the basis of the data. The motor 13 adjusts the lens 14, owing to which the captured image I is more accurate. Of course, other architectures are also possible. There are two main approaches to auto-focus in the prior art: active auto-focus (AAF) and passive auto-focus (PAF).
In the active auto-focus, which is the more expensive, the camera emits a signal in the direction of the object (or scene) to be captured in order to detect the distance of the subject. The signal could be a sound wave, as is the case in submarines under water, or an infrared wave. The time of the reflected wave is then used to calculate the distance. Basically, this is similar to the Doppler radar principle. Based on the distance, the auto-focus unit then tells the focus motor which way to move the lens and how far to move it.
However, there are several problems with this AAF approach, besides the cost. When using infrared, the subject has to be in the middle of the frame. Otherwise the auto-focus will be fooled by receiving reflected waves from other objects. The reflected beams could also come from objects in front of the subject, if the user is taking a photo, for example, from behind a barrier in a stadium. Any source of bright objects in the scene will also make it difficult for the camera to receive the reflected waves.
In the passive auto-focus, PAF, the camera determines the distance to the subject by analyzing the image. The image is first captured and then analyzed through a sensor dedicated to the auto-focus. Usually the sensor specific to the auto-focus use has a limited number of pixels. Thus, often only a portion of the image is utilized.
A typical auto-focus sensor applied in PAF is a charge-coupled device (CCD). It provides input to algorithms that compute the contrast of the actual picture elements. The CCD is typically a single strip of 100 or 200 pixels, for example. Light from the scene hits this strip and the microprocessor looks at the values from each pixel. This data is then analyzed by checking the sharpness, horizontally or vertically, and/or its contrast. The obtained results are then sent as feedback to the lens, which is adjusted to improve the sharpness and the contrast. So, for example, if the image is very blurred then it is understood that the lens needs to move forward in order to adjust the focus.
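As an illustration of such a contrast/sharpness analysis, a simple gradient-energy focus measure can be written as follows. This is one of several common passive-AF measures, not necessarily the one used by any particular camera.

```python
import numpy as np

def focus_measure(gray):
    """Gradient-energy sharpness measure for passive auto-focus: the
    sum of squared horizontal and vertical intensity differences.
    The lens is moved until this value peaks."""
    g = gray.astype(np.float64)
    dx = np.diff(g, axis=1)   # horizontal edges
    dy = np.diff(g, axis=0)   # vertical edges
    return (dx ** 2).sum() + (dy ** 2).sum()

sharp = np.tile([0., 255.], (8, 4))      # strong alternating edges
blurred = np.full((8, 8), 128.)          # flat, featureless region
print(focus_measure(sharp) > focus_measure(blurred))  # -> True
```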
The same technique could be used for videos as well. However, in capturing videos there are several use cases where an object of interest needs to be tracked and focused on during the recording process. In videos, objects of interest are often moving. The movement may happen either in the plane of the captured scene, or backward or forward with respect to the camera position. The challenge is how to maintain the focus on an object while shooting a video.
In the literature, different techniques have been used for object identification and ROI tracking process initialisation. Such a problem arises in many applications, such as the initialization of region-of-interest (ROI) tracking in a video sequence. In the particular case where only human faces are targeted, skin color information is used to detect the foreground. Some approaches apply feature-based refinement to distinguish faces from other parts of the body with similar color characteristics, e.g. hands. Such methods are not applicable in the general case where the target can be of any type, such as, for example, a car, an airplane, an athlete, an animal, etc. In the case where the target is not necessarily of a particular type, e.g. a human face, but can rather be any object, the identification cannot be done automatically and user input becomes necessary. As a matter of fact, in order to perform any ROI-based editing/tracking, the target has to be differentiated from the background. Other semi-automatic alternatives require quite complex input from the end-user. The latter is expected to select points on the boundary of the ROI, which will be used as input for automatic segmentation. The user still can supervise the result of segmentation and provide feedback, if necessary. This type of user-interaction is not trivial and might be considered tedious by the typical mobile device user. Such an approach might make any related application experience unpleasant. Besides, the problem of image/frame segmentation has no simple solution yet, and most reliable methods are of high computational complexity.
In addition to the above, when the ROI has been first identified, a tracking process is performed in order to localize the ROI in each frame. Generally speaking, visual tracking of non-rigid objects in a video sequence can be seen as a recursive-matching problem from which certain reliability is required. At each time step, a region in the current frame is matched to a previous instance, or a model, of the target.
The difficulty of the tracking problem has multiple dimensions. In real life, a 2D video frame does not capture the depth of the 3D objects it depicts. Changes in the object that occur with time, such as translation, rotation, deformation, etc., are not captured faithfully on the 2D screen. Tracking the object while it undergoes these changes is a challenging task.
Firstly, variability in the target's location, shape and texture and the changes in the surrounding environment make target recognition a very challenging task. More precisely, the object may also be affected by its surroundings and other external factors. Some examples of these are interference with other objects, occlusions, changes in background and lighting conditions, capturing conditions, camera motion, etc. All these numerous factors impede a robust and reliable tracking mechanism of the object-of-interest.
Secondly, the matching can be done based on colour, shape or other features. Methods based on one of these aspects usually provide robustness in one sense but show weaknesses under some other scenarios. Finally, the tracking of shapes and features involves a significant computational load. Therefore, algorithms that consider more than one visual aspect of the target can enhance tracking performance, but at the expense of a higher computational load.
Existing tracking methods can be classified as colour-based, shape-based, motion-based or model-based. The latter usually relies on multiple features, i.e. colour, motion, shape and texture details. Methods using colour content are usually computationally simple. However, their drawback is mainly the false alarms generated when the target shares some colour content with a neighbouring background region. In such a scenario, the tracked region can falsely expand or split into two distinct regions and the tracking starts degrading. In the literature, shape and feature based methods are usually computationally complex and memory-inefficient. The main sources of complexity are the modelling of such features and the matching of the model, e.g. edge matching, contour representation and matching, etc.
Examining most of the proposed tracking methods, one can conclude that tracking robustness is usually associated with very high computational complexity. This is not a feasible approach for mobile devices, for example, with limited computational power. Hence, there is a need for a new solution that provides robust tracking with low-complexity algorithms. The present invention is intended to create a new type of electronic device equipped with video imaging process capability and auto-focus means, as well as an auto-focus method for video imaging. The characteristic features of the electronic device according to the invention are stated in the accompanying Claim 1, while the characteristic features of the method applied in it are stated in Claim 17. In addition, the invention also relates to a program product, the characteristic features of which are stated in the accompanying Claim 33.
The invention describes algorithms to utilize in the auto-focus modules in cameras to improve video capturing. The approaches are based on utilizing a region-of-interest tracking technique to identify the optimal parameters for focus control.
The invention provides means for efficiently keeping the focus on an object during video recording process. The invention is composed of an algorithm for identifying an object, tracking it and an auto-focus unit which uses the tracking results to control the lens system.
The invention optimizes the performance of the passive auto-focus lens by introducing a region-of-interest tracking technique, which keeps the object of interest sharp and of the same size in proportion to the video frames along the video sequence. The technique can be implemented, for example, within an auto-focus unit. With the ROI tracking results a more accurate update of the lens movements is reached, and the auto-focus is performed automatically while the camera is recording.
According to one embodiment the region-of-interest (ROI) tracking in a video sequence is performed applying a macroblock-based region-of-interest tracking method. In that method two histograms may be calculated in order to find out which pixels of the current macroblock of the current image frame are target pixels and which pixels are background pixels. The histograms are calculated for the ROI region and for the background region of the previous frame. The information on which regions belong to the ROI and which regions belong to the background is obtained from the ROI mask of the previous frame.
Additionally, according to the invention it is also possible to perform a simple shape matching procedure if there is color similarity between the target and the background regions inside the current macroblock. The invention describes a new approach characterized by tracking robustness and low computational complexity. These two features are the main measures of whether or not a tracking technique is implementable in an application targeting mobile devices, where system resources are very limited. The tracking scheme according to the invention provides robustness to shape deformation, partial occlusion and environment variations while maintaining a low computational complexity with reliable and efficient performance.
According to one embodiment in connection with the invention it is also possible to apply a semi-automatic identification of an object or a region-of-interest (ROI) in an image or a video frame. According to this solution, it is possible to define an area around the ROI, and the target is then automatically identified by the device. Defining may be performed by the user or by the device. Owing to this, a robust identification is guaranteed while keeping the end-user interaction with the system fairly simple.
In this identification process the local color-content inside and around the defined area including the object of interest is analyzed in order to distinguish between background and target. The output of this process may be a mask describing the ROI. Computationally, the developed algorithm is quite simple.
Other features characteristic of the electronic device, method, and program product according to the invention will become apparent from the accompanying Claims, while additional advantages achieved are itemized in the description portion.
In the following, the invention, which is not restricted to the embodiment disclosed in the following, is examined in greater detail with reference to the accompanying figures, in which
Figure 1a shows a principle of a generic architecture of a passive auto-focus system according to the prior art,
Figure 1b shows a principle of a device according to the invention, in which the identification and the tracking of the ROI are applied in connection with the determination of the spatial position of the ROI,
Figure 2 shows an example of the method according to the invention as a flowchart,
Figure 3 shows an illustration of the ROI tracking in the scene plane,
Figure 4 shows an illustration of the ROI tracking in the Z-plane,
Figure 5 shows an example of the ROI identification process according to the invention as a flowchart,
Figures 6a and 6b show examples of the window selection procedure in order to define the area including the object of interest,
Figure 7a shows an example of the color-content based distinction of the target from the background using the color histograms corresponding to exocentric rectangular regions,
Figure 7b shows an example of the color histograms of the exocentric rectangular regions,
Figures 8a and 8b show examples of the produced ROI masks of the embodiments of Figures 6a and 6b,
Figure 9 shows an example of the ROI tracking process according to the invention as a flowchart,
Figures 10a and 10b show principle examples of the generation of the block-wise histograms for the background and the ROI,
Figure 11 shows examples of the shape of the mask block and probability macroblocks in connection with the shape matching procedure,
Figures 12a and 12b show a mask of the Y-component and the ROI mask as an example in a real imaging situation, and
Figures 13a - 13f show examples of the color component-wise and probability histograms from the real imaging situation presented in Figures 12a and 12b.
Nowadays, many electronic devices 10 include camera means 11. Besides digital video cameras, examples of such devices include mobile stations, PDA (Personal Digital Assistant) devices and similar 'smart communicators', and also surveillance cameras. In this connection, the concept 'electronic device' can be understood very widely. For example, it can be a device which is equipped, or which can be equipped, with a digital video imaging capability. In the following, the invention is described in connection with a mobile station 10, by way of example.
Figure 1b shows a rough schematic example of the functionalities in a device 10, in as much as they relate to the invention. The camera means of the device 10 can include the functional components or sub-modules 11 - 14, which are as such known, shown in Figure 1b and already described in connection with Figure 1a in the prior art section. These modules 11 - 14 form a loop architecture, which is performed continuously in connection with the video recording. Here the video recording may be understood to cover both the measures that are performed before the actual recording process (initialization measures) and those performed during the actual recording, in which storing or network streaming of the video data is performed.
At least part of the functionalities of the camera may be performed by using data-processing means, i.e. CPU 15 - 17. This may include one or more processor units or corresponding components. As an example of these, the image processor CPU may be mentioned, with the auto-focus unit 12 included in it. Of course, the auto-focus unit 12 may also be a separate entity which communicates with the main processor CPU. By means of these processors CPU, 15 - 17 the program product 30 is implemented on either the HW or SW level, in order to perform actions according to the invention.
Further, the device 10 can also include a display / viewfinder 18 on which information can be visualized to the user of the device 10. In addition, the device 10 also includes a processor functionality CPU, which includes functionalities for controlling the various operations of the device 10.
The actions relating to the auto-focusing process according to the invention can be performed using program 30. The program 30, or the code 31 forming it, can be written on a storage medium MEM in the device 10, for example, on an updatable, non-volatile semiconductor memory, or, on the other hand, it can also be burned directly into a circuit CPU, 15 - 17 as an HW implementation. The code 31 consists of a group of commands 31.1 - 31.18 to be performed in a set sequence, by means of which data processing according to a selected processing algorithm is achieved. In this case, data processing can mainly be understood as the actions and measures relating to the ROI identification 15 that is performed before the video recording process. Data processing also means, in this connection, the measures and actions performed by the ROI tracking 16, the determination process 17 of the spatial position of the ROI in the video frame and also the auto-focusing process 12. These three actions are all performed during the actual video recording process, as will be explained later in greater detail.
Next the invention is described in a more detailed manner referring to Figure 2. Figure 2 presents as a flowchart an example of the main stages of the invention. One should understand that the basic idea of the invention is not intended to be limited to these steps or their performance order. In addition, other additional steps may also come into question and/or the performance order of the presented (or currently not presented) steps may also vary, if that is possible. There may also be sub-processes, like the ROI tracking 204, that are performed independently relative to the other steps (code means 31.2, 31.7). Owing to this, the ROI tracking 204, 16 always provides the most recent ROI mask for the unit 17, which determines the spatial position of the ROI in a frame and provides that coordinate information for the auto-focus unit 12.
The method according to the invention may be built on top of a passive auto-focus method if the invention is applied in connection with the video capturing process. The method is basically divided into stages (tracking and updating the lens, or in general, the optics 14).
When generally describing the method according to the invention, during the video capturing process an object of interest is first identified. This identification may be performed either user-aided or totally automatically by the device 10. The region of interest is a target defined by a user or an application in a frame of a video sequence. The object T is made the interest of the camera lens 14 and the focus is operated to get the sharpest image quality of this object. As the object T is moving, the region-of-interest (ROI) tracking algorithm 16 follows it and sends feedback parameters to the auto-focus actuator (motor) 13 to move the auto-focus lens 14 backward or forward depending on the current situation.
In a given video sequence, a user might be interested in a specific object in the sequence, e.g. a person, an animal, a motor vehicle, etc. After start stage 201, in which the user turns on the viewfinder 18 of the device 10, stage 202 may be performed, where the region-of-interest is identified in the viewfinder video (code means 31.1). This procedure may include several sub-stages, which are described in more detail in connection with Figure 5 and explained more precisely hereinafter. Of course, other methods may also be applied in this ROI identification process. The invention is not intended to be limited to the embodiment presented in Figure 5.
Once the ROI is properly identified in stage 202, the video capturing process starts in order to produce video for the desired purpose (for example, for storing to the memory MEM or for streaming to the network) (stage 203). The captured video image data, i.e. the video frames, are processed at stage 203' in a manner known as such, after which they are stored to the memory MEM of the device 10, for example.
At the same time as the possible recording process, auto-focus is also performed in the loop 204 - 213 in order to adjust the focus lens system 14 and keep the target in the image as sharp as possible. As a first stage 204 in this loop, tracking of the ROI is performed in the current video frame by the ROI tracking unit 16. This ROI tracking stage 204 also includes several sub-stages, which are described more precisely as an embodiment a little later. Basically any method may be used in this connection.
The data/results achieved in the ROI tracking stage 204 may be used in the next stage 205, which is the determination of the plane position of the ROI (code means 31.3, 31.4). Figure 3 presents an example of the determination of the plane position of the ROI. Three frames FR1 - FR3 are presented, each of which represents a different time point T = t0, t1, t2 during the recording process. Only the frame FR1 exists on the current loop cycle. There the head T is the ROI, i.e. the target, and its spatial position in the X-Y plane may vary between the frames FR1 - FR3 during the recording process. In this stage 205 the XY-coordinates of the ROI's new position in the image frame FR1 - FR3 are found. This information is then applied in connection with the auto-focus stage 210.
As a next stage 206 the spatial position of the ROI T along the Z-axis is determined (code means 31.3, 31.5). This is performed in order to detect the movement of the ROI T in the depth of the imaging view (code means 31.5). Figure 4 presents an example relating to that. Three different cases are presented, which may again represent different time moments. The middle frame FR2' presents the size of the target T at the initial capturing point. The frames may be obtained, for example, from stage 202, or from each loop cycle if the shape of the ROI changes during the recording process. The ROI ratio, i.e. the initial (or current) size ratio of the ROI relative to the frame size, may be named R1. The ROI ratio at each moment may be defined as ROI_ratio = (ROI Area) / (number of pixels in the frame). Here the ROI Area means the number of pixels of the ROI.
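The ROI ratio defined above can be computed directly from a binary ROI mask. The following sketch illustrates this (the function name and the use of a NumPy array as the mask representation are illustrative choices, not part of the original description):

```python
import numpy as np

def roi_ratio(roi_mask):
    """ROI_ratio = (ROI Area) / (number of pixels in the frame),
    where the ROI Area is the number of nonzero mask pixels."""
    mask = np.asarray(roi_mask)
    return np.count_nonzero(mask) / mask.size

# A hypothetical 6x8-pixel frame whose ROI mask covers a 3x2 block:
mask = np.zeros((6, 8), dtype=np.uint8)
mask[1:4, 2:4] = 1
print(roi_ratio(mask))  # 6 / 48 = 0.125
```

A full frame would of course be much larger; the ratio itself is scale-independent, which is what makes it usable for the Z-axis comparison of stage 207.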
As a sub-stage 207 of stage 206 the ratio of the tracked ROI area with respect to the video frame size (= ROI_ratio_current) is determined and compared to the previous ratio obtained from one of the older frames (= ROI_ratio_old). In this connection it is also possible to apply information on the size of the ROI (for example, the horizontal and vertical dimensions). This is important if the shape of the ROI changes during the recording process.
In sub-stage 208 an evaluation of how much movement on the Z-axis took place between the consecutive frames FR1' - FR3' is performed. This is done by analyzing the size ratio of the ROI T, with respect to the frame size, between produced consecutive image frames (FR1' - FR3'). Using the ROI size variation results, the lens system 14 is adjusted in an established manner (code means 31.6). More precisely, if the current ROI_ratio is greater than R1 (i.e. the target is nearer to the device 10 than desired/originally), then the lens 14 must be instructed to move backward. Owing to this, the desired/original situation, i.e. the size ratio R1 of the ROI in the frame FR2', is achieved. An example of that kind of situation is presented in frame FR1'. If the ROI_ratio is less than R1 (i.e. the target is farther from the device 10 than desired/originally), then the lens 14 must be instructed to move forward. Owing to this, the desired/original situation, i.e. the size ratio R1 of the ROI in the frame FR2', is achieved. This situation is presented in frame FR3'.
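The decision rule of sub-stage 208 can be sketched as a small helper. This is a hedged illustration only: the function name is invented, and the tolerance dead band is an added assumption (the original text compares the ratios directly):

```python
def lens_adjustment(roi_ratio_current, r1, tolerance=0.0):
    """Return the focus-lens move direction from the ROI size ratio.

    r1 is the initial (desired) ROI-to-frame ratio; tolerance is a
    hypothetical dead band, not part of the original description.
    """
    if roi_ratio_current > r1 * (1.0 + tolerance):
        return "backward"   # target nearer than desired (frame FR1' case)
    if roi_ratio_current < r1 * (1.0 - tolerance):
        return "forward"    # target farther than desired (frame FR3' case)
    return "hold"           # ratio unchanged (frame FR2' case)
```

For example, `lens_adjustment(0.2, 0.1)` yields `"backward"`, since a larger current ratio means the target has moved nearer to the device 10.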
The advantage achieved owing to the determination of the size ratio of the ROI in the current frame relates to the ROI tracking stage 204. In the invention, instead of estimating the XY-position or distance of the ROI in the next future frame, which is not yet even captured, the adjusting of the auto-focus 14 is based on the currently measured values, i.e. realized image content. Owing to the Z-axis determination of the target T, the adjustment takes into account the area ratio of the ROI with respect to the whole frame and implicitly takes into account target motion by keeping enough space between the target boundaries and the frame boundaries. This guarantees that the ROI will stay in focus in the next frame without doing any estimation.
In stage 209 the coordinates of the ROI and the distance of the ROI from the camera 10 are sent to the auto-focus unit 12.
In stage 210 the auto-focus unit 12 extracts the pixels from the ROI and analyzes their sharpness and contrast. This may be performed by using the whole area of the ROI or at least part of the ROI. Figure 3 illustrates an embodiment in which only portions of the ROI are analyzed and used for fixing the auto-focus 12. In the area of the frames FR1 - FR3 there are projected squares F1 - F4 or, in general, areas which may be understood as focus points. One understands that they may cover the frame FR1 - FR3 mainly and evenly. Because the size ratio of the ROI is continuously adjusted to be reasonable relative to the frame size, the ROI, or at least part of it, is always in the area of one or more of the focus points F1 - F4, covering it at least partly. For example, in frame FR2 the ROI is in the area of F1 and only the data of the ROI in that area F1 is then used in the sharpness and contrast analysis. This also reduces the need for calculation power.
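Selecting the focus points F1 - F4 that the ROI currently covers amounts to a rectangle-overlap test. The following is a hedged sketch; the quadrant layout of the focus points, the coordinate convention and all names are assumptions for illustration only:

```python
def overlapping_focus_points(roi_box, focus_points):
    """Return the names of the focus-point areas that the ROI bounding
    box overlaps; boxes are (left, top, right, bottom) with exclusive
    right/bottom edges."""
    l, t, r, b = roi_box
    hits = []
    for name, (fl, ft, fr, fb) in focus_points.items():
        # two axis-aligned rectangles overlap iff they overlap on both axes
        if l < fr and fl < r and t < fb and ft < b:
            hits.append(name)
    return hits

# Hypothetical focus points covering the four quadrants of a 100x100 frame:
points = {"F1": (0, 0, 50, 50), "F2": (50, 0, 100, 50),
          "F3": (0, 50, 50, 100), "F4": (50, 50, 100, 100)}
print(overlapping_focus_points((10, 10, 40, 40), points))  # ['F1']
```

Only the pixels of the ROI lying inside the returned areas would then enter the sharpness and contrast analysis, as described above for frame FR2.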
In stage 211 the unit 12 finds the parameters needed for updating the distance of the lens 14 from the target T. The auto-focus unit 12 uses the movement in the Z-direction as well to update the new position of the lens 14. In stage 212 the results/parameters determined by the auto-focus unit 12 are sent to the actuator/motor 13 in order to update the lens 14 position. This updating of the position of the lens 14 is not presented in detail in this flowchart because it may be performed in a well-known manner. In stage 213 the status of the recording process is checked: whether it still continues or is already finished. If the recording process still continues, a step to stage 204 is then performed. There the current frame FR2 (i.e. the latest captured frame, T = t1, Figure 3) is taken to the ROI tracking process 16.
If in stage 213 an end of recording is noticed, then the process is terminated at stage 214.
IDENTIFICATION OF THE ROI (STAGE 202) :
Next is described an example of how the identification stage 202 of the ROI may be performed according to one embodiment. Figure 5 presents as an example a flowchart of how these sub-stages of main stage 202 may be implemented. The device 10 presented in Figure 1b is equipped with the corresponding functionality 15, which may be based on the processor CPU and program codes 31.1, 31.16 - 31.18 executed by it.
The ROI identification process (stage 202 in Figure 2) starts 501 with displaying the image/frame of interest. Generally speaking, on the display 18 of the device 10 the imaging scene/view is displayed, including the target, i.e. the ROI. Figures 6a and 6b present examples of the viewfinder views VFV1, VFV2 (left side images).
In stage 502 the user is asked to pick an object T1, T2 from the image VFV1, VFV2. According to one embodiment the user can pick an object T1, T2 of interest in the viewfinder video frame VFV1, VFV2 that is currently viewed on the display 18 when the camera sensor 11 of the device 10 is aimed towards the desired scene BG1, T1, BG2, T2. On behalf of the user, the application in the camera device 10 may also pick that object T1, T2 for him. In order to perform this, a segmentation technique may be applied, for example.
In connection with the picking stage 502 the user is asked to define the target T1, T2. In order to define the target T1, T2 the user may draw a window or adjust a rectangle WIN1, WIN2 around the target T1, T2 in the frame VFV1, VFV2 displayed on the display 18 in order to enclose the object of interest. The viewfinder views VFV1, VFV2 relating to this window WIN1, WIN2 definition step 502 are presented on the right side of Figures 6a and 6b. In the example of Figure 6a the target T1, i.e. the region of interest, is the walking couple and the sea scenery represents the background BG1. In the example of Figure 6b the target T2 is the boat and the mountain/water landscape represents the background BG2.
The drawing of the window WIN1, WIN2 can be done on a touch screen if it is part of the system; otherwise the UI displays a rectangle and the user can move it and resize it using some specified cursor keys of the keypad 19, for example. The object T1, T2 picked in stage 502 is also named a region of interest (ROI1, ROI2).
The coordinates of the selected rectangle or window WIN1, WIN2 are determined in stage 503, and in stage 504 they are read and passed from the UI to the identification algorithm. The window coordinates may include, for example, the top-left and bottom-right corners of the defined window WIN1, WIN2.
In the identification algorithm stages 505 the statistics of the colours in connection with the defined window WIN1, WIN2 are analyzed and, as a result, the object T1, T2 inside the defined window WIN1, WIN2 is automatically identified. In the picking stage 502 the user must leave some safe zone around the target T1, T2 when defining the window WIN1, WIN2.
Figure 7a describes the principle of the identification algorithm in a more detailed manner. There the defined area in the viewfinder frame VFV3, i.e. the target T3, is the face of the ice-hockey player. In the method stage 506 two other rectangles REC1, REC2 are constructed, which are close to the defined window WIN3 (including the target T3). Rectangles REC1, REC2 may be constructed by the algorithm itself (code means 31.17). One rectangle REC2 is inside the window WIN3 defined by the user and the other rectangle REC1 is outside of the defined window WIN3. The spacing between the edges of the constructed rectangles REC1, REC2 is small with respect to their widths and lengths.
At stage 507 the intensity histograms are computed for each rectangle REC1, REC2 and for the defined window WIN3. These histograms describe the corresponding luminance content of the concerned rectangle. Let h0, h1 and h2 be these histograms associated, respectively, with the selected area WIN3, the outside rectangle REC1 and the inside rectangle REC2.
At stage 508 histogram-based matching is performed for each pixel within the selected window WIN3 (code means 31.16). The matching is followed by a binary dilation in order to uniformly expand the size of the ROI in a controlled and well-defined manner (code means 31.18). Binary dilation may be performed with neighbouring pixels, for example, a 3x3 block. Other block sizes may also be applied depending on, for example, the frame resolution. The purpose of the binary dilation process is to fill in any small "holes" in the mask due to pixels that are in the ROI but estimated as background based on their color content. Generally speaking, this is a unifying process in which the neighbourhood of the current pixel is harmonized. However, the binary dilation filter should not be too large, in order not to expand the ROI region into the background. This kind of dilation can be applied simultaneously with pixel matching in order to achieve a significant reduction in complexity.
As a sub-stage 509 of main stage 508, the next pixel within the selected area WIN3 is taken. In stage 510 the pixel luminance value is quantized with the histogram bin size, resulting in the quantized intensity 'q_value'. Figure 7b presents an example of the color histograms h0 - h2 of the exocentric rectangular regions WIN3, REC1, REC2.
In stage 511 the status of each pixel in the selected area WIN3 is determined. In order to find out whether a pixel belongs to the target T3 or not, the ratios representing the relative recurrence, i.e. with respect to the pixel count in WIN3, of the pixel colour both inside REC2 and within the layer between WIN3 and REC1 are computed. The ratios are equal to the number of pixels (= counts) per the corresponding q_value, in each region, divided by the number of pixels, per the same q_value, in WIN3:
r1 = (h1[q_value] - h0[q_value]) / h0[q_value]
r2 = h2[q_value] / h0[q_value]
In stage 512 a test is performed. If the calculated ratio r1 < threshold1 and the ratio r2 > threshold2, then a step to stage 513 is made. The value of threshold2 is to be chosen within the range [0.5, 0.7], whereas a good choice of threshold1 would be within [0.05, 0.25]. The threshold values may also be chosen based on the ratios of the areas of REC2 and of the region between WIN3 and REC1 divided by the area of WIN3. For the sake of a simple and efficient implementation of the omnidirectional dilation method, the current pixel of the defined area WIN3 and all its 8 nearest neighbours are considered to be part of the ROI, i.e. the target T3, if both thresholding tests are satisfied. The ROI mask (M1, M2 in Figures 8a, 8b, considering the imaging cases presented in Figures 6a and 6b) is initialized accordingly at the current pixel and its neighbours. These steps, starting at stage 509, are repeated for the next pixel and this loop is continued until each of the pixels of the defined area WIN3 has been tested.
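Stages 507 - 513 can be sketched roughly as follows. This is a hedged illustration, not the patented implementation itself: the function and parameter names are invented, the bin count and the threshold defaults are merely picked from the ranges given above, and the 3x3 dilation is applied simultaneously with the matching, as the text suggests is possible:

```python
import numpy as np

def identify_roi(y, win, rec1, rec2, bins=32, threshold1=0.15, threshold2=0.6):
    """Histogram-based ROI identification sketch (stages 507 - 513).

    y    : 2-D luminance image
    win  : user window WIN3 as (top, left, bottom, right), exclusive ends
    rec1 : outer rectangle REC1 (encloses win)
    rec2 : inner rectangle REC2 (inside win)
    """
    bin_size = 256 // bins

    def hist(box):
        # intensity histogram of one rectangle, quantized by bin_size
        t, l, b, r = box
        q = (y[t:b, l:r].ravel() // bin_size).astype(int)
        return np.bincount(q, minlength=bins)

    h0, h1, h2 = hist(win), hist(rec1), hist(rec2)  # WIN3, REC1, REC2
    mask = np.zeros(y.shape, dtype=np.uint8)
    t, l, b, r = win
    for i in range(t, b):
        for j in range(l, r):
            q = int(y[i, j]) // bin_size
            if h0[q] == 0:
                continue
            r1 = (h1[q] - h0[q]) / h0[q]  # recurrence in the WIN3/REC1 layer
            r2 = h2[q] / h0[q]            # recurrence inside REC2
            if r1 < threshold1 and r2 > threshold2:
                # mark the pixel and its 8 neighbours (3x3 binary dilation)
                mask[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = 1
    return mask

# Synthetic 40x40 frame: bright background, dark target inside the window
img = np.full((40, 40), 200, dtype=np.uint8)
img[15:25, 15:25] = 50
m = identify_roi(img, (10, 10, 30, 30), (5, 5, 35, 35), (13, 13, 27, 27))
```

In this synthetic case the dark target colour occurs only inside REC2 (r1 = 0, r2 = 1), so exactly the target block, dilated by one pixel, ends up in the mask, while the bright background colour fails the r1 test.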
If one, or both, of the conditions of the test of stage 512 fails, then the current pixel of the defined area WIN3 is considered not to be part of the ROI, i.e. the target T3. It is decided to belong to the background and no further actions are then required. A step back to stage 509 is taken if there are still untested pixels in the area WIN3.
When stage 508 has been performed in its entirety, a step to stage 514 is taken. There it is possible to perform one or more possible refinement steps. The refinement steps may apply, for example, morphological filters. These may be used to guarantee regularity in the object shape by eliminating any irregularities, such as holes, small isolated areas, etc.
When all ROI identification stages are performed, then in stage 515 the produced ROI mask is passed to an algorithm that tracks the moving or non-moving ROI in the video capturing process (stage 204). Before that, the user is informed in the UI that the automatic identification of the ROI has been performed and that it is now possible to start the actual video recording process. The ROI masks M1, M2 of the example cases of Figures 6a and 6b are presented on the right side of Figures 8a and 8b. In these masks M1, M2 the target T1, T2 is indicated as white and the background BG1, BG2 is indicated as black.
The ROI identification solution described just above takes into account a very important factor: the simplification of the user interaction, in order to keep any related application attractive and pleasant. Specifically, the user draws (on a touch screen) or adjusts (using specified cursor keys) a rectangle WIN1 - WIN3 containing the target T1 - T3. The algorithm reads the window WIN1 - WIN3 coordinates from the UI and automatically identifies the ROI in the manner just described above. Besides the simplicity of the user interaction, the algorithm is of low computational complexity, as it is based on color-histogram processing within the selected window WIN1 - WIN3 and simple morphological filters. Furthermore, the output of the identification is a generated mask M1, M2 for the ROI, which is easy to use in any ROI tracking approach during the actual recording process. All these features make such a method suitable for mobile device applications.
The method described above can be implemented as the initialization of the ROI tracker 16, which is described next as one example below.
REGION-OF-INTEREST TRACKING (STAGE 204) :
Tracking of the region-of-interest is described next. It is the software module that makes use of the results of the identification stage 202 described just above. The device 10 may have functionality concerning this ROI tracking 16. This may also be implemented by the processor CPU and program codes 31.2, 31.7 - 31.15 executed by it. A tracked object from the recorded video can then be used for improved and customized auto-focus 12 of the camera 10, for example. Next an example of the tracking procedure 204 is presented. The described process provides tracking robustness with a fairly simple algorithm.
Generally speaking, the tracking process according to the invention can be understood as a two-stage approach. The first step applies localized colour histograms. The second phase is applied only if the target i.e. the region-of-interest ROI and the neighbouring background regions, within a local area, share some colour content. In that case, simple shape matching is performed (code means 31.13) .
The goal of the tracking stage 204 is to define a ROI mask M5 describing the current location and shape of the target T4, T5 in each frame FRC, FRP, FR and to output a tracking window TWP, TWC, TW containing the target T4, T5. The design (for example, the size and place) of the tracking window TWP, TWC, TW takes into account the motion of the target T4, T5 so that in the next future frame the target T4, T5 is expected to stay within the window TWP, TWC, TW, i.e. to keep a background BG4, BG5 margin on the sides of the target T4, T5.
The main stage 204, described in detail below, assumes that the user defines a window or area, as presented in Figures 6a and 6b, or a region-of-interest ROI, around the targeted object T4, T5 to give an idea of the location of the region-of-interest. Otherwise, there could be many potential objects in the video frame to choose from, and without a-priori information it would be impossible to target the correct one.
Once the region-of-interest (ROI) has been defined, the next task is to identify the object-of-interest within the defined window. Various algorithms can be used to segment the object within the ROI and separate it from the background. The main stage 202 and the more detailed embodiment presented in Figure 5 perform these duties. Once the object is detected, it is identified by a mask (stage 515), which is fed to the tracking process 204 described next. The task of the tracking process 204 is to update the mask M5 and the tracking window TWC, TWP, TW in each frame FRC, FRP, FR and provide correct information about the object. Using the tracking results, the stages 205 - 212 may then be successfully performed in determining the spatial position of the ROI for the auto-focus unit 12.
Considering Figures 10a, 10b and 12a, the skiing woman represents the target T4 in Figures 10a and 10b and the man in the office is the target T5 in Figure 12a, whereas the office scenery is the background BG5. The input data to the ROI tracking stage 204, at each frame, includes the previously tracked frame FRP, FR (i.e. the image content of the previous frame), the image content of the current frame FRC and also the ROI mask M5 of the previous frame FR. The ROI mask M5 is required in order to decide which parts of the previous frame FR represent background portions BG5 and which parts represent the target T5. Thus, some frames must already be produced with the camera sensor 11 (in stages 203, 203') before it is possible to start the main stage 204. When the recording process 203, 203' has just started, i.e. the first cycle of the ROI tracking process 204 is in question, the ROI mask identified in stage 202 is passed to the tracking algorithm. Both frames FRP and FRC and the latest ROI mask are stored to the memory MEM of the device 10. Figure 10a presents a previous frame FRP and Figure 10b shows the current frame FRC. In general, frames FRP and FRC are not necessarily sequential, as frame skipping might be applied so that tracking can keep up with the recording speed.
In stage 901 the tracking window TWC is projected on the current frame FRC. The tracking window TWC is an area defined by the tracking algorithm, whose corner coordinates are updated at each cycle of the loop 204. If the current loop of the tracking is the first, then the tracking window TWC may be generated based on the identified (initial) ROI mask produced in stage 202.
In the invention the current tracking window TWC is divided into macroblocks MB. A macroblock MB is a geometric entity representing a set of pixels belonging to an N-by-N square (for example, 8x8 or 16x16).
Generally speaking, the colour histogram technique used in the first stage is applied macroblock-wise. It provides good tracking robustness. In the next stage 902 the current macroblock MB, inside the current projected tracking window TWC of the current frame FRC and presented with short dashed lines, is projected on the previous frame FRP (small dashed block PMB in the previous frame FRP). The projecting measure means, in this connection, that in the previous frame FRP an area PMB is defined whose location and size correspond to its location and size in the current frame FRC.
In stage 903 a search block SB, presented with a dashed line in the previous frame FRP (in Figure 10a), is defined. This search block SB surrounds the projected macroblock PMB that was just projected there. The search block SB defined in the previous frame FRP represents the search area for the current macroblock MB. Each of the projected macroblocks PMB of the current tracking window TWC, which are gone through one by one by the algorithm, has its own search block SB (code means 31.8).
The search block SB is constructed by enlarging the projected macroblock PMB in the previous frame FRP in each direction, as indicated by the double-headed arrows (code means 31.9). The distance of the enlarging may be a constant and may be equal to the estimated motion range (code means 31.10). Such a distance may represent the maximum possible motion range. It can be estimated adaptively based on previous values or can be set to a constant that is large enough to be an upper bound for displacements within video sequences (e.g. 16 or 32). Its purpose is to ensure that the best match for the current macroblock MB is inside the search block SB defined for it.
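The search block construction of stage 903 can be sketched as follows (a hedged illustration: the function name and the clamping of the block to the frame boundaries are assumptions for a concrete implementation):

```python
def search_block(pmb_top, pmb_left, n, frame_h, frame_w, motion_range=16):
    """Enlarge the projected N-by-N macroblock PMB by the motion range
    in each direction, clamped to the previous-frame boundaries.
    Returns (top, left, bottom, right) with exclusive bottom/right."""
    top = max(pmb_top - motion_range, 0)
    left = max(pmb_left - motion_range, 0)
    bottom = min(pmb_top + n + motion_range, frame_h)
    right = min(pmb_left + n + motion_range, frame_w)
    return top, left, bottom, right

print(search_block(64, 64, 16, 240, 320))  # (48, 48, 96, 96)
print(search_block(0, 0, 16, 240, 320))    # (0, 0, 32, 32), corner case
```

With the constant motion range of 16 used here, the search block extends one macroblock width around the PMB, matching the idea that the best match for the current macroblock MB stays inside it.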
Next, the imaging case presented in Figure 12a is described in a little more detail. It is an example representing a real imaging situation. Actually, Figure 12a presents the previous image frame FR and, more particularly, its Y-component (if the applied color space is YUV). However, this component-wise frame FR may in this connection also be understood as the actual image frame that is observed. The content of the other components (U and V) would be very blurred (even if they were printed by using laser printing technology, not to speak of the offset printing used in patent publications). For this reason these other (U and V) component frames are not presented in this context.
In Figure 12a two projected macroblocks PMB1, PMB2 are shown, which have their own search blocks SB1, SB2 in the tracking window TW. It should be understood that, though the blocks PMB1, PMB2, SB1, SB2 are here presented in this same figure, in reality they are tracked in order by the tracking loop stage 204. Thus, one macroblock is under the tracking process during the current tracking loop 204.
In stage 904 two histograms are computed (code means 31.11). These are the background and ROI histograms. The histograms are computed only for the pixels inside the search block SB, SB1, SB2, in the previous frame FRP, FR, surrounding the projected macroblock PMB, PMB1, PMB2. This means that the pixels of the previous frame FRP, FR which are inside the area of the projected macroblock PMB, PMB1, PMB2 are also taken into account when constructing the ROI and background histograms of the search block SB, SB1, SB2 of the previous frame FRP, FR.
The histograms represent the color content of the target and background portions inside the large dashed-line block SB, SB1, SB2. Using these background and target histograms, it is possible to detect a possible common colour between the two regions (target and background), whereas processing macroblock-wise results in an efficient handling of shape flexibility. In this way, the generated histograms provide a description of the localized color distributions of the ROI and the background.
More particularly, histograms are constructed for two areas, which are the ROI area and the background area. For each of the areas the Y, U and V component-wise histograms are constructed. Of course, other color spaces may also come into question; YUV is only used as an example in this connection. An example of these histograms in the case of the real imaging situation shown in Figure 12a is presented in the upper parts of Figures 13a - 13f. These histograms describe the colour contents of the target and background regions inside the search block SB1, SB2 defined in the previous step.
According to this, six histograms are generated in every cycle of the loop. Three of the histograms H1 - H3, H4 - H6 are generated for the target region T5, i.e. for the ROI of the search area SB1, SB2 of the previous frame FR (= Y, U and V), and three of the histograms H1' - H3', H4' - H6' are generated for the background region BG5 of the search area SB1, SB2 of the previous frame FR (= Y, U and V). Now the histograms H1 - H3 are the YUV histograms of the case-one target region T5, i.e. the component-wise histograms of the search block SB1 of Figure 12a. Correspondingly, the histograms H4 - H6 are the YUV histograms of the case-two target region T5, i.e. the component-wise histograms of the search block SB2 of Figure 12a. In the same manner the histograms H1' - H3' correspond to the background portion BG5 of search block SB1 and the histograms H4' - H6' to the background portion of SB2. The status of each pixel of the search block SB1, SB2 (i.e. whether the pixel is a ROI pixel or a background pixel) is obtained from the ROI mask M5 presented in Figure 12b, which corresponds to the previous image frame FR.
These six histograms may be constructed as follows. For every pixel in the search block SB1, SB2 in the previous frame FR, including also the projected macroblock area PMB1, PMB2 of the previous frame FR, an analysis is performed according to which:
(i) The corresponding Y, U and V values of each pixel inside the search block SB1, SB2 of the previous frame FR (where the ROI is already defined by applying the ROI mask M5) are divided by the corresponding histogram bin size in order to find which bins in the color histograms H1 - H6, H1' - H6' these values correspond to,
(ii) If the current pixel of the search block SB1, SB2 of the previous frame FR is, based on the ROI mask M5, discovered to be a background pixel, then the three background histograms H1' - H3', H4' - H6' are incremented in the appropriate bins computed in step (i),
(iii) If the current pixel of the search block SB1, SB2 of the previous frame FR is, based on the ROI mask M5, discovered to be inside the ROI, i.e. part of the ROI in the previous frame FR falls inside SB1, SB2 and the pixel is in that part of the ROI, then the three ROI histograms H1 - H3, H4 - H6 are incremented in the appropriate bins computed in step (i). Now, each bin (i.e. the X-axis) in the ROI histograms H1 - H6 represents the number of pixels (Y-axis), whose color values fall into a specific range, belonging to the target region T5 in the search area SB1, SB2 of the previous frame FR. Similarly, the background BG5 histograms H1' - H6' represent the number of pixels, within the search block SB1, SB2 of the previous frame FR, that are discovered to be background pixels based on the ROI mask M5.
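Steps (i) - (iii) can be sketched compactly as follows (a hedged illustration: the function name, the array layout and the bin count are assumptions chosen for the sketch):

```python
import numpy as np

def localized_histograms(yuv_prev, roi_mask, sb, bins=32):
    """Build the six localized histograms (three for the ROI, three for
    the background) of one search block SB of the previous frame FR.

    yuv_prev : (H, W, 3) previous frame in YUV, 8-bit components
    roi_mask : (H, W) binary ROI mask M5 of the previous frame
    sb       : search block as (top, left, bottom, right)
    """
    t, l, b, r = sb
    pixels = yuv_prev[t:b, l:r].reshape(-1, 3)
    in_roi = roi_mask[t:b, l:r].ravel().astype(bool)
    # step (i): quantize each component value to its histogram bin index
    q = (pixels // (256 // bins)).astype(np.intp)
    # steps (ii)/(iii): count per component, split by the ROI mask
    roi_hists = [np.bincount(q[in_roi, c], minlength=bins) for c in range(3)]
    bg_hists = [np.bincount(q[~in_roi, c], minlength=bins) for c in range(3)]
    return roi_hists, bg_hists

# A tiny synthetic search block: dark 2x2 ROI in a bright 4x4 region
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :, 0] = 200
frame[1:3, 1:3, 0] = 40
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
roi_h, bg_h = localized_histograms(frame, mask, (0, 0, 4, 4))
```

Here the four ROI pixels all land in one Y-bin and the twelve background pixels in another, which is exactly the localized color separation the histograms are meant to capture.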
On the basis of these color histograms H1 - H6, H1' - H6' it is possible to generate probability histograms P1 - P6. These indicate the probability that a pixel having a certain color is in the ROI. On the basis of the probability histograms P1 - P6 it is also possible to generate a probability macroblock Pmb1, Pmb2 for the color content of the current macroblock MB of the current frame FRC (Figures 11b and 11c).
The above means that each pixel in a given macroblock MB, MB1, MB2 of the current frame FR, FRC is assigned a probability of being in the region-of-interest (ROI, T4, T5) based on the computed histograms of the background and the target regions in a corresponding search block SB, SB1, SB2 in the previous frame FRP, FR.
More particularly, it is now a question of the probability of a given pixel of the given macroblock of the current frame, with color values Y(i,j), U(i,j) and V(i,j), being in the ROI or in the background. Y, U and V probabilities Py, Pu and Pv are defined. For each bin, for example with index k,
Py(k) = ROI_Y_hist(k) / (ROI_Y_hist(k) + Background_Y_hist(k)), Pu(k) = ROI_U_hist(k) / (ROI_U_hist(k) + Background_U_hist(k)), Pv(k) = ROI_V_hist(k) / (ROI_V_hist(k) + Background_V_hist(k)). The equations presented above mean that the number of pixels with similar values belonging to the ROI is divided by the total number of pixels with similar values in the search block SB1 in the previous frame FR. This means that the probabilities are computed based on the color distributions in the search block of the previous frame. The computed probabilities are then applied to the current macroblock in the current frame. The probabilities indicate the status of the pixels of the current macroblock (MB), i.e. whether a given pixel of the current macroblock is more likely a ROI pixel or a background pixel (code means 31.12).
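Expressed as code, the per-bin probability computation might look like this (an illustrative sketch; how the three component probabilities Py, Pu, Pv are fused into the single per-pixel value of the probability macroblock is not spelled out in the text, so the mean used below is an assumption):

```python
import numpy as np

def probability_histograms(roi_hist, bg_hist):
    # P(k) = ROI_hist(k) / (ROI_hist(k) + Background_hist(k)) per bin;
    # bins empty in both histograms are left at probability 0.
    roi = np.asarray(roi_hist, dtype=np.float64)
    total = roi + np.asarray(bg_hist, dtype=np.float64)
    return np.divide(roi, total, out=np.zeros_like(roi), where=total > 0)

def probability_macroblock(y, u, v, p_hist, bin_size):
    # Look up Py, Pu, Pv for every pixel of the current macroblock MB
    # and fuse them (mean used here purely as an assumption).
    py = p_hist[0][y.astype(np.int64) // bin_size]
    pu = p_hist[1][u.astype(np.int64) // bin_size]
    pv = p_hist[2][v.astype(np.int64) // bin_size]
    return (py + pu + pv) / 3.0
```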
Figures 11b and 11c present two hypothetical examples of the probability macroblocks Pmb1, Pmb2 for shape matching. These may be imagined to relate to the imaging case presented in Figures 10a and 10b. Figure 11a presents a hypothetical ROI mask of the search block SB in which a macroblock PMB is projected, i.e. defined. Figures 13a - 13f also present corresponding probability histograms P1 - P6, which relate to the real imaging case presented in Figure 12a (YUV color components of search blocks SB1, SB2).
In stage 905 an examination is performed relating to the colour content differences between the object and the background of the search block area SB1, SB2. If the histograms for at least one of the components Y, U, V are disjoint, i.e. for each bin either the ROI or the background histogram value is equal to zero, a step to stage 906 is then performed. In stage 906 all pixels of the current macroblock MB1, MB2 in the current frame FRC corresponding to that bin are assigned as ROI pixels or background pixels depending on the color of each pixel. More generally, here the statuses of the pixels of the macroblock are determined. This means that there may be both ROI pixels and/or background pixels in the macroblock. The ROI and background histograms for one of the color components are said to be disjoint if for every bin either the ROI histogram or the background histogram is empty. In other words, if there are pixels in the same bin in the histograms of both areas, then the disjoint condition is not valid. This basically means that for that particular color range, all pixels of the macroblock of the current frame are in the ROI or all pixels are in the background. This implies that for each pixel it is clear whether it belongs to the target or to the background, and therefore no further processing is required in this case.
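The disjointness test of stage 905 amounts to checking, per color component, that no histogram bin is occupied in both the ROI and the background histogram. A sketch (the helper name is hypothetical, not from the patent):

```python
import numpy as np

def disjoint_components(roi_hist, bg_hist):
    # For each component (row) report True when the ROI and background
    # histograms share no occupied bin, i.e. for every bin at least one
    # of the two counts is zero (the disjoint condition of stage 905).
    overlap = (np.asarray(roi_hist) > 0) & (np.asarray(bg_hist) > 0)
    return [not row.any() for row in overlap]
```

If any component comes back True, the pixel statuses can be decided directly from color (stage 906); otherwise the shape-matching path of stages 907 - 908 is taken.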
In the example of Figure 12a this disjoint condition is valid for case one, whose histograms are presented in Figures 13a, 13c and 13e. There one can easily see that in the Y-histogram H1, H1' the disjoint condition is in force in a certain bin range (about 25 - 45). Correspondingly, in the probability histogram there are only the values 1 and 0 (i.e. no intermediate values between 1 and 0, which would indicate that there are colors common to the compared ROI and background areas).
In the case explained above, where the colours of the target and the neighbouring background regions are exclusive, i.e. the corresponding histograms have zero intersection, the probabilities will be ones and zeros. In Figure 11, for example, case b indicates this kind of situation; there black indicates probability 1 and white implies probability 0. The above also means that the current mask is updated in stage 909 based on the colour histograms of the current macroblock MB1, MB2, and no other measures, such as, for example, shape matching, are required because the situation is so clear. The next step 910 is to check whether any unmatched macroblocks are left in the tracking window TWC of the current frame FRC; if there are, the next macroblock is fetched in stage 911 and the procedure returns to stage 902. The procedure is performed for each macroblock inside the projected tracking window TWC of the previous frame FRP, FR. However, if in stage 905 it is detected that all histograms of the components Y, U, V between the regions are congruent, then a step to stage 907 is performed. In stage 907 a probability macroblock is constructed (if it was not constructed already in connection with analysing the disjoint/congruent condition). Congruence means in this connection that the ROI and the background regions inside the large block SB1, SB2 in the previous frame FR share some colors for each component Y, U, V, and for this reason the nature of the macroblock is not so clear. Due to the congruence the probability macroblock (Figure 11c) and all probability histograms P4 - P6 will have values less than 1 and more than 0 (at least one intermediate value is enough to indicate congruence). In the probability macroblock Pmb2 each intermediate shade between black and white, and likewise in the probability histograms P4 - P6 each entry (Y-axis), represents the probability Pu, Py, Pv of the corresponding pixel value (X-axis) being in the target.
These second stages 907 and 908 of the algorithm presented above are thus executed only when there is colour similarity between the target and background regions inside the search block SB2. After stage 907 the shape-based technique may be applied. Shape matching may be performed in the probability domain. In stage 908 the probability macroblock Pmb2 of the current frame FRC is matched to a mask region within the search block SB of the previous frame FRP. This is presented in Figures 11a and 11c.
The ROI mask M5 is a binary picture representation of the shape of the ROI, indicated by a value 1 if the pixel is in the ROI (= target) and a value 0 if it is in the background. In addition to the shape-based matching, the ROI mask M5 is also applied when deciding whether a pixel of the search block SB belongs to the ROI or to the background in the previous frame. The shape-based matching of the ROI region performed in stage 908 is applied between a "shape representation", i.e. a mask with values equal to 0 or 1, and a probability macroblock with values between 0 and 1. At the end of this stage, false alarms can be eliminated and a decision made on whether or not a pixel inside the macroblock of the current frame is in the background or in the target.
Owing to the shape-based matching, elimination of false alarms is achieved. According to one embodiment the shape-based matching of the ROI may be performed by minimizing the sum of absolute differences (SAD). This is performed between the probability values of the current macroblock (the probability block) and the values of candidate blocks in the ROI mask (code means 31.14, 31.15).
After finding the probabilities of each pixel in the current macroblock in stage 907, the next step is to find the best block in the ROI mask in stage 908. So the algorithm makes a search in the ROI mask to find the macroblock-sized area that is the closest, i.e. has the least sum of absolute differences, to the current probability macroblock. In other words, if the current macroblock has probabilities Pmb(i,j) and the ROI mask has values M(i,j), then a search is performed for a block in the mask that has the minimum:
Σ |Pmb(i,j) - M(i-k1, j-k2)|
The indexes (i,j) go through the values of the current block. The parameters k1 and k2 indicate the displacements of the mask block when performing the search. These displacements are performed in each direction pixel by pixel, i.e. the matched macroblock is fitted to each location of the ROI mask area. When the best match has been found, a step to stage 909 is performed in which the mask is updated. After steps 910, 911 a loop is again initiated for the next macroblock of the current frame. If all macroblocks of the current frame have already been gone through, then a step to main stage 205 is performed with the determined ROI mask. Also, the current tracking window is stored in order to be used in the next loop cycle.
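The SAD search of stage 908 can be sketched as an exhaustive pixel-by-pixel displacement of a macroblock-sized window over the ROI mask region (illustrative only; the function name and the exhaustive search strategy are assumptions):

```python
import numpy as np

def best_mask_match(pmb, roi_mask):
    # pmb: probability macroblock of the current frame (values in [0, 1]).
    # roi_mask: binary ROI mask region of the previous frame (0/1 values).
    # Returns the displacement (k1, k2) minimising
    # sum |Pmb(i, j) - M(i - k1, j - k2)| and the minimal SAD itself.
    h, w = pmb.shape
    best_pos, best_sad = None, np.inf
    for k1 in range(roi_mask.shape[0] - h + 1):
        for k2 in range(roi_mask.shape[1] - w + 1):
            sad = np.abs(pmb - roi_mask[k1:k1 + h, k2:k2 + w]).sum()
            if sad < best_sad:
                best_pos, best_sad = (k1, k2), sad
    return best_pos, best_sad
```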
The real-time tracking algorithm described just above is very robust to all the variabilities already described in the prior art section. At the same time it is a computationally effective and memory-efficient technique, so that it can be adopted in current and upcoming electronic devices generating video sequences. It provides the capability to detect and handle colour similarity between target and background, as well as robustness to shape deformation and partial occlusion. Generally speaking, it provides means for performing shape matching, i.e. matching in the probability domain.
The algorithm according to the invention performs the matching without using any detailed features. This can act as an advantage in that it provides robustness to variability in position and shape.
The auto-focus 12 according to the invention is based on, for example, the passive scheme. This means that the image data produced by the sensor 11 is applied. In this scheme the auto-focus unit 12 relies mainly on sharpness (by computing edges in the horizontal and vertical directions) and on contrast information. Only selected areas of the image may be considered for this analysis, for example. Based on the results the actuating motor 13 of the lens system updates the position of one or more lenses 14. The method according to the invention may be used in that kind of auto-focus scheme by including the ROI identifier 15 and tracker 16. Of course, the invention itself doesn't depend on the auto-focus scheme used; the invention may also be implemented in connection with different kinds of auto-focus schemes. The tracker 16 according to the invention indicates to the auto-focus unit 12 the coordinates of the areas to analyze. This is performed via the spatial positioning unit 17 of the ROI, which determines these coordinates in stages 205 and 206. The tracker 16 will also add a third dimension, the displacement of the object along the Z-axis. Owing to this spatial handling without any pre-estimation of the location of the ROI in the frame, a more robust auto-focus is achieved and also moving targets are kept in good focus.
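A passive sharpness measure of the kind the auto-focus unit 12 relies on (edge energy over the tracked area) might, purely as an illustration, look like this; the gradient-sum metric and the ROI tuple format are assumptions, not taken from the patent:

```python
import numpy as np

def sharpness_score(gray, roi):
    # gray: 2-D luminance image; roi: (x0, y0, x1, y1) box reported by
    # the tracker 16 / spatial positioning unit 17 (hypothetical format).
    x0, y0, x1, y1 = roi
    patch = gray[y0:y1, x0:x1].astype(np.float64)
    gx = np.abs(np.diff(patch, axis=1)).sum()  # horizontal edge energy
    gy = np.abs(np.diff(patch, axis=0)).sum()  # vertical edge energy
    return gx + gy                             # higher = sharper
```

An auto-focus loop would then move the lens toward the position that maximizes this score over the ROI coordinates delivered in stages 205 and 206.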
It must be understood that the above description and the related figures are only intended to illustrate the present invention. The invention is thus in no way restricted to only the embodiments disclosed or stated in the Claims, but many different variations and adaptations of the invention, which are possible within the scope of the inventive idea defined in the accompanying Claims, will be obvious to one versed in the art.

Claims

1. An electronic device (10) equipped with a video imaging process capability, which device (10) includes - a camera unit (11) arranged to produce image frames
(I, FRC, FRP, FR, VFV1 - VFV3, FR1 - FR3, FR1' - FR3') from an imaging view which includes a region-of-interest ROI (T, T1 - T5),
- an adjustable optics (14) arranged in connection with the camera unit (11) in order to focus the ROI
(T, T1 - T5) on the camera unit (11),
- an identifier unit (CPU, 15) in order to identify a ROI (T, T1 - T5) from the image frame (VFV1 - VFV3),
- a tracking unit (CPU, 16) in order to track the ROI (T, T1 - T5) from the image frames (FRC, FRP, FR) during the video imaging process and
- an auto-focus unit (12) arranged to analyze the ROI (T, T1 - T5) on the basis of the tracking results provided by the tracking unit (CPU, 16) in order to adjust the optics (14), characterized in that the device (10) is arranged to determine the spatial position of the ROI (T) in the produced image frame (FR1 - FR3, FR1' - FR3') without any estimation measures.
2. An electronic device (10) according to Claim 1, characterized in that the spatial position is a plane position (XY) of the ROI (T) in the image frame (FR1 - FR3).
3. An electronic device (10) according to Claim 1 or 2, characterized in that the spatial position of the ROI (T) along the Z-axis is arranged to be determined in order to detect the movement of the ROI (T) in depth of the imaging view.
4. An electronic device (10) according to Claim 3, characterized in that the Z-axis position of the ROI (T) is arranged to be determined from a change in the size of the ROI (T) between produced consecutive image frames (FR1' - FR3') and on the basis of the determination the optics (14) is arranged to be adjusted in an established manner.
5. An electronic device (10) according to any of Claims 1 - 4, characterized in that the tracking unit (16) is arranged to perform the ROI tracking on macroblock basis in which ROI tracking is arranged to be decided whether the pixels of the current macroblock (MB) of the current image frame (FRC) belong to the ROI region (T4) or to the background region (BG4) and which decision is arranged to be based on the color content of the previous image frame (FRP) in which the ROI region is already known.
6. An electronic device (10) according to Claim 5, characterized in that the tracking unit (16) is arranged to project each macroblock (MB) of the tracking window (TWC) of the current image frame (FRC) into the previous image frame (FRP) in which for each of the macroblocks (PMB) a search area (SB) is arranged to be defined in order to determine the color content of the ROI region (T4) and background region (BG4) .
7. An electronic device (10) according to Claim 6, characterized in that search area (SB) is arranged to be constructed by enlarging the projected macroblock (PMB) in the previous image frame (FRP) in each direction in order to ensure that the best match for the current macroblock (MB) is inside the search area (SB) defined for it.
8. An electronic device (10) according to Claim 7, characterized in that the search area (SB) is arranged to be constructed by enlarging the projected macroblock (PMB) in the previous image frame (FRP) in each direction by a distance equal to the estimated motion range.
9. An electronic device (10) according to any of Claims 5 - 8, characterized in that the tracking unit (16) is arranged
- to define a ROI region (T4) and a background region (BG4) in the search area (SB) of the previous image frame (FRP), which definitions are arranged to be based on the ROI mask (M4) of the previous image frame (FRP),
- to form color histograms of the ROI region (T4) and the background region (BG4) of the search area (SB),
- to analyze the said colour histograms of the ROI region (T4) and background region (BG4) and on the basis of the results of the analysis,
- to determine the status of the pixels of the current macroblock (MB) of the current image frame (FRC) whether they belong to the ROI region (T4) or to the background region (BG4) and - to update the current ROI mask based on this determination.
10. An electronic device (10) according to Claim 9, characterized in that the analysis of the colour histograms of the ROI region (T4) and background region (BG4) is arranged to be performed on the basis of the probabilities, which state if the pixel of the current macroblock (MB) is more a ROI pixel or a background pixel.
11. An electronic device (10) according to Claim 9 or 10, characterized in that if the ROI region (T4) inside the search area (SB) of the previous image frame (FRP) is discovered to share some color content with the background region (BG4) of the search area (SB) of the previous image frame (FRP) the tracking unit (16) is arranged to perform a shape matching procedure in order to find the best location for the current macroblock (MB) in the search area (SB) in the previous ROI mask (M4) .
12. An electronic device (10) according to Claim 11, characterized in that the tracking unit (16) is arranged to apply SAD method (Sum of Absolute Difference) in the shape matching procedure .
13. An electronic device (10) according to Claim 12, characterized in that the SAD method is arranged to be performed on a probability domain in which the best match is arranged to be determined for the current macroblock (MB) in the search area (SB) defined for that.
14. An electronic device (10) according to any of Claims 1 - 13, characterized in that the identifier unit (15) is arranged to generate the ROI mask on the basis of the statistics of the color-content in the middle of and around the defined area (WIN1, WIN2) including the ROI.
15. An electronic device (10) according to Claim 14, characterized in that the identifier unit (15) is arranged to generate search areas (REC1, REC2) inside and around the defined area (WIN3) and to analyse the local color-content between these areas (REC1, REC2, WIN3) in order to decide whether the pixels of the defined area (WIN3) belong to the target (T3) or not.
16. An electronic device (10) according to Claim 14 or 15, characterized in that identifier unit (15) is arranged to perform histogram-based matching for each pixel within the defined area (WIN3) and a binary dilation process in order to unify the neighbourhood of the pixel in the ROI mask.
17. A method in a video imaging process in order to adjust focus as follows
- image frames (I, FRC, FRP, FR, VFV1 - VFV3, FR1 - FR3, FR1' - FR3') are produced from an imaging view, which image frames (I, FRC, FRP, FR, VFV1 - VFV3, FR1 - FR3, FR1' - FR3') include a region-of-interest ROI (T, T1 - T5),
- a ROI (T, T1 - T5) is identified from the image frame (VFV1 - VFV3) in order to perform a ROI tracking process during the video imaging process,
- the ROI (T, T1 - T5) is tracked from the image frames (FRC, FRP, FR) during the video imaging process,
- the tracking results of the ROI (T, T1 - T5) are provided for an auto-focus unit (12) in order to adjust an optics (14) and
- the optics (14) arranged in connection with the camera unit (11) is adjusted in order to focus the ROI (T, T1 - T5) on the camera unit (11), characterized in that in the imaging process the spatial position of the ROI (T) in the produced image frame (FR1 - FR3, FR1' - FR3') is determined without any estimation measures.
18. A method according to Claim 17, characterized in that the spatial position is a plane position (XY) of the ROI (T) in the image frame (FR1 - FR3).
19. A method according to Claim 17 or 18, characterized in that the spatial position of the ROI (T) along the Z-axis is determined in order to detect the movement of the ROI (T) in depth of the imaging view.
20. A method according to Claim 19, characterized in that the Z-axis position of the ROI (T) is determined from a change in the size of the ROI (T) between produced consecutive image frames (FR1' - FR3') and on the basis of the determination the optics (14) is adjusted in an established manner.
21. A method according to any of Claims 17 - 20, characterized in that in the ROI tracking stage the ROI tracking is performed on macroblock basis in which is decided whether the pixels of the current macroblock (MB) of the current image frame (FRC) belong to the ROI region (T4) or to the background region (BG4) and which decision is based on the color content of the previous image frame (FRP) in which the ROI region is already known.
22. A method according to Claim 21, characterized in that in the ROI tracking stage each macroblock (MB) of the tracking window (TWC) of the current image frame (FRC) is projected into the previous image frame (FRP) in which for each of the macroblocks (PMB) a search area (SB) is defined in order to determine the color content of the ROI region (T4) and background region (BG4).
23. A method according to Claim 22, characterized in that search area (SB) is constructed by enlarging the projected macroblock (PMB) in the previous image frame (FRP) in each di- rection in order to ensure that the best match for the current macroblock (MB) is inside the search area (SB) defined for it.
24. A method according to Claim 23, characterized in that the search area (SB) is constructed by enlarging the projected macroblock (PMB) in the previous image frame (FRP) in each direction by a distance equal to the estimated motion range.
25. A method according to any of Claims 21 - 24, characterized in that in the ROI tracking stage - a ROI region (T4) and a background region (BG4) are defined in the search area (SB) of the previous image frame (FRP) which definitions are based on the ROI mask (M4) of the previous image frame (FRP), - color histograms of the ROI region (T4) and the background region (BG4) of the search area (SB) are formed,
- colour histograms of the ROI region (T4) and background region (BG4) are analyzed and based on the results of the analysis,
- the status of the pixels of the current macroblock (MB) of the current image frame (FRC) is determined whether they belong to the ROI region (T4) or to the background region (BG4) and - the current ROI mask is updated based on this determination.
26. A method according to Claim 25, characterized in that the analysis of the colour histograms of the ROI region (T4) and background region (BG4) is performed on the basis of the probabilities, which state if the pixel of the current macroblock (MB) is more a ROI pixel or a background pixel.
27. A method according to Claim 25 or 26, characterized in that if the ROI region (T4) inside the search area (SB) of the previous image frame (FRP) shares some color content with the background region (BG4) of the search area (SB) of the previous image frame (FRP) then a shape matching procedure is performed in order to find the best location for the current macroblock (MB) in the search area (SB) in the previous ROI mask (M4).
28. A method according to Claim 27, characterized in that in the ROI tracking stage SAD method (Sum of Absolute Difference) is applied in the shape matching procedure.
29. A method according to Claim 28, characterized in that the SAD method is performed on a probability domain in which the best match is determined for the current macroblock (MB) in the search area (SB) defined for that.
30. A method according to any of Claims 17 - 29, characterized in that in the ROI identifying stage the ROI mask is generated based on the statistics of the color-content in the middle of and around the defined area (WIN1, WIN2) including the ROI.
31. A method according to Claim 30, characterized in that in the ROI identifying stage search areas (REC1, REC2) are generated inside and around the defined area (WIN3) and the local color-content between these areas (REC1, REC2, WIN3) is analyzed in order to decide whether the pixels of the defined area (WIN3) belong to the target (T3) or not.
32. A method according to Claim 30 or 31, characterized in that in the ROI identifying stage a histogram-based matching is performed for each pixel within the defined area (WIN3) followed by a binary dilation process in order to unify the neighbourhood of the pixel in the ROI mask.
33. Program product (30) for a video imaging process in order to provide ROI tracking results for an auto-focus unit (12) which is arranged to analyze the ROI (T, T1 - T5) on the basis of the tracking results in order to adjust the optics (14) arranged in connection with the camera unit (11) in order to focus the ROI (T, T1 - T5) on the camera unit (11), and which program product (30) includes a storing means (MEM) and a program code (31) executable by a processor (CPU) and written in the storing means (MEM), and which program code (31) includes - first code means (31.1) configured to identify the ROI (T, T1 - T5) from the image frame (VFV1 - VFV3) produced by a camera unit (11) and second code means (31.2) configured to perform tracking of the ROI (T, T1 - T5) from the image frames (FRC, FRP, FR) during the video imaging process, characterized in that the program code (31) includes in addition third code means (31.3) configured to determine the spatial position of the ROI (T) in the produced image frame (FR1 - FR3, FR1' - FR3') without any estimation measures.
34. Program product (30) according to Claim 33, characterized in that the program code (31) includes fourth code means (31.4) configured to determine a plane position (XY) of the ROI (T) in the image frame (FR1 - FR3) as the said spatial position.
35. Program product (30) according to Claim 33 or 34, characterized in that the program code (31) includes fifth code means (31.5) configured to determine the spatial position of the ROI (T) along the Z-axis in order to detect the movement of the ROI (T) in depth of the imaging view.
36. Program product (30) according to Claim 35, characterized in that the program code (31) includes sixth code means (31.6) configured to determine the Z-axis position of the ROI (T) from a change in the size of the ROI (T) between the produced image frames (FR1' - FR3') and on the basis of the determination the optics (14) is arranged to be adjusted in an established manner.
37. Program product (30) according to any of Claims 33 - 36, characterized in that the program code (31) includes seventh code means (31.7) configured to perform the ROI tracking on macroblock basis in which ROI tracking is arranged to be decided whether the pixels of the current macroblock (MB) of the current image frame (FRC) belong to the ROI region (T4) or to the background region (BG4) and which decision is arranged to be based on the color content of the previous image frame (FRP) in which the ROI region is already known.
38. Program product (30) according to Claim 37, characterized in that the program code (31) includes eighth code means (31.8) configured to project each macroblock (MB) of the tracking window (TWC) of the current image frame (FRC) into the previous image frame (FRP) in which for each of the macroblocks (PMB) a search area (SB) is arranged to be defined in order to determine the color content of the ROI region (T4) and background region (BG4).
39. Program product (30) according to Claim 38, characterized in that the program code (31) includes ninth code means (31.9) configured to construct the search area (SB) by enlarging the projected macroblock (PMB) in the previous image frame (FRP) in each direction in order to ensure that the best match for the current macroblock (MB) is inside the search area (SB) defined for it.
40. Program product (30) according to Claim 39, characterized in that the program code (31) includes tenth code means (31.10) configured to construct the search area (SB) by enlarging the projected macroblock (PMB) in the previous image frame (FRP) in each direction by a distance equal to the estimated motion range.
41. Program product (30) according to any of Claims 37 - 40, characterized in that the program code (31) includes eleventh code means (31.11) configured - to define a ROI region (T4) and a background region (BG4) in the search area (SB) of the previous image frame (FRP), which definitions are arranged to be based on the ROI mask (M4) of the previous image frame (FRP),
- to form color histograms of the ROI region (T4) and the background region (BG4) of the search area (SB),
- to analyze the said colour histograms of the ROI region (T4) and background region (BG4) and on the basis of the results of the analysis,
- to determine the status of the pixels of the current macroblock (MB) of the current image frame (FRC) whether they belong to the ROI region (T4) or to the background region (BG4) and - to update the current ROI mask based on this determination.
42. Program product (30) according to Claim 41, characterized in that the program code (31) includes twelfth code means (31.12) configured to perform the analysis of the said colour histograms of the ROI region (T4) and background region (BG4) on the basis of the probabilities, which state if the pixel of the current macroblock (MB) is more a ROI pixel or a background pixel .
43. Program product (30) according to Claim 42, characterized in that the program code (31) includes thirteenth code means (31.13) configured to discover if the ROI region (T4) inside the search area (SB) of the previous image frame (FRP) shares some color content with the background region (BG4) of the search area (SB) of the previous image frame (FRP) , then to perform a shape matching procedure in order to find the best location for the current macroblock (MB) in the search area
(SB) in the previous ROI mask (M4) .
44. Program product (30) according to Claim 43, characterized in that the program code (31) includes fourteenth code means
(31.14) configured to apply SAD method (Sum of Absolute Difference) in the shape matching procedure.
45. Program product (30) according to Claim 44, characterized in that the program code (31) includes fifteenth code means
(31.15) configured to perform the SAD method on a probability domain in which the best match is arranged to be determined for the current macroblock (MB) in the search area (SB) defined for that.
46. Program product (30) according to any of Claims 33 - 45, characterized in that the program code (31) includes sixteenth code means (31.16) configured to generate the ROI mask on the basis of the statistics of the color-content in the middle of and around the defined area (WIN1, WIN2) including the ROI.
47. Program product (30) according to Claim 46, characterized in that the program code (31) includes seventeenth code means
(31.17) configured to generate search areas (REC1, REC2) inside and around the defined area (WIN3) and to analyse the local color-content between these areas (REC1, REC2, WIN3) in order to decide whether the pixels of the defined area (WIN3) belong to the target (T3) or not.
48. Program product (30) according to Claim 46 or 47, characterized in that the program code (31) includes eighteenth code means (31.18) configured to perform histogram-based matching for each pixel within the defined area (WIN3) and a binary dilation process in order to unify the neighbourhood of the pixel in the ROI mask.
PCT/FI2005/050495 2005-12-30 2005-12-30 Method and device for controlling auto focusing of a video camera by tracking a region-of-interest WO2007077283A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US12/087,207 US8089515B2 (en) 2005-12-30 2005-12-30 Method and device for controlling auto focusing of a video camera by tracking a region-of-interest
EP05821826A EP1966648A4 (en) 2005-12-30 2005-12-30 Method and device for controlling auto focusing of a video camera by tracking a region-of-interest
JP2008547998A JP2009522591A (en) 2005-12-30 2005-12-30 Method and apparatus for controlling autofocus of a video camera by tracking a region of interest
PCT/FI2005/050495 WO2007077283A1 (en) 2005-12-30 2005-12-30 Method and device for controlling auto focusing of a video camera by tracking a region-of-interest

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/FI2005/050495 WO2007077283A1 (en) 2005-12-30 2005-12-30 Method and device for controlling auto focusing of a video camera by tracking a region-of-interest

Publications (1)

Publication Number Publication Date
WO2007077283A1 true WO2007077283A1 (en) 2007-07-12

Family

ID=38227943

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2005/050495 WO2007077283A1 (en) 2005-12-30 2005-12-30 Method and device for controlling auto focusing of a video camera by tracking a region-of-interest

Country Status (4)

Country Link
US (1) US8089515B2 (en)
EP (1) EP1966648A4 (en)
JP (1) JP2009522591A (en)
WO (1) WO2007077283A1 (en)


Families Citing this family (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101354899B1 (en) * 2007-08-29 2014-01-27 삼성전자주식회사 Method for photographing panorama picture
FR2925705A1 (en) * 2007-12-20 2009-06-26 Thomson Licensing Sas IMAGE CAPTURE ASSISTING DEVICE
KR101445606B1 (en) * 2008-02-05 2014-09-29 삼성전자주식회사 Digital photographing apparatus, method for controlling the same, and recording medium storing program to implement the method
KR101588877B1 (en) 2008-05-20 2016-01-26 펠리칸 이매징 코포레이션 Capturing and processing of images using monolithic camera array with heterogeneous imagers
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US8866920B2 (en) 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
KR101271098B1 (en) * 2008-09-24 2013-06-04 삼성테크윈 주식회사 Digital photographing apparatus, method for tracking, and recording medium storing program to implement the method
KR20100095833A (en) * 2009-02-23 2010-09-01 주식회사 몬도시스템즈 Apparatus and method for compressing pictures with roi-dependent compression parameters
JP5279653B2 (en) * 2009-08-06 2013-09-04 キヤノン株式会社 Image tracking device, image tracking method, and computer program
JP5279654B2 (en) * 2009-08-06 2013-09-04 キヤノン株式会社 Image tracking device, image tracking method, and computer program
EP2502115A4 (en) 2009-11-20 2013-11-06 Pelican Imaging Corp Capturing and processing of images using monolithic camera array with heterogeneous imagers
US8964103B2 (en) * 2010-02-16 2015-02-24 Blackberry Limited Method and apparatus for reducing continuous autofocus power consumption
EP2569935B1 (en) 2010-05-12 2016-12-28 Pelican Imaging Corporation Architectures for imager arrays and array cameras
US9135514B2 (en) * 2010-05-21 2015-09-15 Qualcomm Incorporated Real time tracking/detection of multiple targets
KR101026410B1 (en) * 2010-07-29 2011-04-07 엘아이지넥스원 주식회사 Apparatus and method for extracting target, and the recording media storing the program performing the said method
TWI420906B (en) 2010-10-13 2013-12-21 Ind Tech Res Inst Tracking system and method for regions of interest and computer program product thereof
TWI424361B (en) * 2010-10-29 2014-01-21 Altek Corp Object tracking method
US8878950B2 (en) 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
JP2014519741A (en) 2011-05-11 2014-08-14 ペリカン イメージング コーポレイション System and method for transmitting and receiving array camera image data
JP5979967B2 (en) * 2011-06-30 2016-08-31 キヤノン株式会社 Image pickup apparatus having subject detection function, image pickup apparatus control method, and program
WO2013043761A1 (en) 2011-09-19 2013-03-28 Pelican Imaging Corporation Determining depth from multiple views of a scene that include aliasing using hypothesized fusion
WO2013049699A1 (en) 2011-09-28 2013-04-04 Pelican Imaging Corporation Systems and methods for encoding and decoding light field image files
JP6083987B2 (en) * 2011-10-12 2017-02-22 キヤノン株式会社 Imaging apparatus, control method thereof, and program
WO2013089662A1 (en) * 2011-12-12 2013-06-20 Intel Corporation Scene segmentation using pre-capture image motion
CN103988490B (en) * 2011-12-13 2018-05-22 索尼公司 Image processing apparatus, image processing method and recording medium
WO2013087974A1 (en) * 2011-12-16 2013-06-20 Nokia Corporation Method and apparatus for image capture targeting
EP2817955B1 (en) 2012-02-21 2018-04-11 FotoNation Cayman Limited Systems and methods for the manipulation of captured light field image data
DE102012008986B4 (en) * 2012-05-04 2023-08-31 Connaught Electronics Ltd. Camera system with adapted ROI, motor vehicle and corresponding method
US20130328760A1 (en) * 2012-06-08 2013-12-12 Qualcomm Incorporated Fast feature detection by reducing an area of a camera image
US9100635B2 (en) 2012-06-28 2015-08-04 Pelican Imaging Corporation Systems and methods for detecting defective camera arrays and optic arrays
US20140002674A1 (en) 2012-06-30 2014-01-02 Pelican Imaging Corporation Systems and Methods for Manufacturing Camera Modules Using Active Alignment of Lens Stack Arrays and Sensors
US9131143B2 (en) 2012-07-20 2015-09-08 Blackberry Limited Dynamic region of interest adaptation and image capture device providing same
US10334181B2 (en) 2012-08-20 2019-06-25 Microsoft Technology Licensing, Llc Dynamically curved sensor for optical zoom lens
SG11201500910RA (en) 2012-08-21 2015-03-30 Pelican Imaging Corp Systems and methods for parallax detection and correction in images captured using array cameras
US20140055632A1 (en) 2012-08-23 2014-02-27 Pelican Imaging Corporation Feature based high resolution motion estimation from low resolution images captured using an array source
WO2014052974A2 (en) 2012-09-28 2014-04-03 Pelican Imaging Corporation Generating images from light fields utilizing virtual viewpoints
US8866912B2 (en) 2013-03-10 2014-10-21 Pelican Imaging Corporation System and methods for calibration of an array camera using a single captured image
WO2014164550A2 (en) 2013-03-13 2014-10-09 Pelican Imaging Corporation System and methods for calibration of an array camera
US9578259B2 (en) 2013-03-14 2017-02-21 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US10122993B2 (en) * 2013-03-15 2018-11-06 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
WO2014145856A1 (en) 2013-03-15 2014-09-18 Pelican Imaging Corporation Systems and methods for stereo imaging with camera arrays
US9445003B1 (en) 2013-03-15 2016-09-13 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras
CN104252343B (en) * 2013-06-27 2019-09-06 腾讯科技(深圳)有限公司 A kind of method and apparatus for replacing application program vision control
US9898856B2 (en) 2013-09-27 2018-02-20 Fotonation Cayman Limited Systems and methods for depth-assisted perspective distortion correction
WO2015074078A1 (en) 2013-11-18 2015-05-21 Pelican Imaging Corporation Estimating depth from projected texture using camera arrays
US9426361B2 (en) 2013-11-26 2016-08-23 Pelican Imaging Corporation Array camera configurations incorporating multiple constituent array cameras
WO2015134996A1 (en) 2014-03-07 2015-09-11 Pelican Imaging Corporation System and methods for depth regularization and semiautomatic interactive matting using rgb-d images
US9613181B2 (en) * 2014-05-29 2017-04-04 Globalfoundries Inc. Semiconductor device structure including active region having an extension portion
US9609200B2 (en) * 2014-09-24 2017-03-28 Panavision International, L.P. Distance measurement device for motion picture camera focus applications
WO2016054089A1 (en) 2014-09-29 2016-04-07 Pelican Imaging Corporation Systems and methods for dynamic calibration of array cameras
AU2015202286A1 (en) 2015-05-01 2016-11-17 Canon Kabushiki Kaisha Method, system and apparatus for determining distance to an object in a scene
GB2539027B (en) 2015-06-04 2019-04-17 Thales Holdings Uk Plc Video compression with increased fidelity near horizon
US9584716B2 (en) 2015-07-01 2017-02-28 Sony Corporation Method and apparatus for autofocus area selection by detection of moving objects
KR102380862B1 (en) * 2015-09-01 2022-03-31 삼성전자주식회사 Method and apparatus for image processing
US9898665B2 (en) 2015-10-29 2018-02-20 International Business Machines Corporation Computerized video file analysis tool and method
US9699371B1 (en) * 2016-03-29 2017-07-04 Sony Corporation Image processing system with saliency integration and method of operation thereof
US10776992B2 (en) * 2017-07-05 2020-09-15 Qualcomm Incorporated Asynchronous time warp with depth data
US10705408B2 (en) 2018-10-17 2020-07-07 Sony Corporation Electronic device to autofocus on objects of interest within field-of-view of electronic device
WO2020161646A2 (en) * 2019-02-05 2020-08-13 Rey Focusing Ltd. Focus tracking system
CN111623810A (en) * 2019-02-27 2020-09-04 多方科技(广州)有限公司 Motion detection method and circuit thereof
CN110248096B (en) * 2019-06-28 2021-03-12 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and computer readable storage medium
KR102646521B1 (en) 2019-09-17 2024-03-21 인트린식 이노베이션 엘엘씨 Surface modeling system and method using polarization cue
MX2022004163A (en) 2019-10-07 2022-07-19 Boston Polarimetrics Inc Systems and methods for surface normals sensing with polarization.
KR20230116068A (en) 2019-11-30 2023-08-03 보스턴 폴라리메트릭스, 인크. System and method for segmenting transparent objects using polarization signals
JP7462769B2 (en) 2020-01-29 2024-04-05 イントリンジック イノベーション エルエルシー System and method for characterizing an object pose detection and measurement system - Patents.com
KR20220133973A (en) 2020-01-30 2022-10-05 인트린식 이노베이션 엘엘씨 Systems and methods for synthesizing data to train statistical models for different imaging modalities, including polarized images
CN111565300B (en) * 2020-05-22 2020-12-22 深圳市百川安防科技有限公司 Object-based video file processing method, device and system
WO2021243088A1 (en) 2020-05-27 2021-12-02 Boston Polarimetrics, Inc. Multi-aperture polarization optical systems using beam splitters
US20220292801A1 (en) * 2021-03-15 2022-09-15 Plantronics, Inc. Formatting Views of Whiteboards in Conjunction with Presenters
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers
KR20240011793A (en) * 2021-08-23 2024-01-26 삼성전자주식회사 Method and electronic device for autofocusing of scenes


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2209901B (en) * 1987-09-11 1992-06-03 Canon Kk Image sensing device
JPH06268894A (en) * 1993-03-10 1994-09-22 Hitachi Ltd Automatic image pickup device
JP3487436B2 (en) * 1992-09-28 2004-01-19 ソニー株式会社 Video camera system
KR100276681B1 (en) * 1992-11-07 2001-01-15 이데이 노부유끼 Video camera system
JP2682435B2 (en) 1994-04-15 1997-11-26 株式会社ニコン Camera self-mode setting device
GB9822956D0 (en) * 1998-10-20 1998-12-16 Vsd Limited Smoke detection
US20040212723A1 (en) * 2003-04-22 2004-10-28 Malcolm Lin Image pickup apparatus and operating method
JP3918788B2 (en) * 2003-08-06 2007-05-23 コニカミノルタフォトイメージング株式会社 Imaging apparatus and program

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5631697A (en) * 1991-11-27 1997-05-20 Hitachi, Ltd. Video camera capable of automatic target tracking
US5552823A (en) * 1992-02-15 1996-09-03 Sony Corporation Picture processing apparatus with object tracking
US6130964A (en) * 1997-02-06 2000-10-10 U.S. Philips Corporation Image segmentation and object tracking method and corresponding system
US6226388B1 (en) * 1999-01-05 2001-05-01 Sharp Labs Of America, Inc. Method and apparatus for object tracking for automatic controls in video devices
US6901110B1 (en) 2000-03-10 2005-05-31 Obvious Technology Systems and methods for tracking objects in video sequences
JP2003075717A (en) 2001-09-06 2003-03-12 Nikon Corp Distance detecting device
US20040004670A1 (en) * 2002-03-14 2004-01-08 Canon Kabushiki Kaisha Image pickup apparatus having auto-focus control and image pickup method
US20040091158A1 (en) * 2002-11-12 2004-05-13 Nokia Corporation Region-of-interest tracking method and device for wavelet-based video coding
EP1560425A1 (en) * 2004-01-27 2005-08-03 Fujinon Corporation Autofocus system
US20050270408A1 (en) * 2004-06-02 2005-12-08 Samsung Electronics Co., Ltd. Apparatus and method for auto-focusing in a mobile terminal
US20050270410A1 (en) 2004-06-03 2005-12-08 Canon Kabushiki Kaisha Image pickup apparatus and image pickup method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1966648A4

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8228383B2 (en) * 2008-04-21 2012-07-24 Sony Corporation Image pickup apparatus and method for controlling ranging area based on detected object
WO2015094977A1 (en) * 2013-12-21 2015-06-25 Qualcomm Incorporated System and method to stabilize display of an object tracking box
US9836852B2 (en) 2013-12-21 2017-12-05 Qualcomm Incorporated System and method to stabilize display of an object tracking box
CN103780841A (en) * 2014-01-23 2014-05-07 深圳市金立通信设备有限公司 Shooting method and shooting device
US10511758B2 (en) 2015-07-20 2019-12-17 Samsung Electronics Co., Ltd. Image capturing apparatus with autofocus and method of operating the same
CN107636682B (en) * 2015-07-20 2021-12-28 三星电子株式会社 Image acquisition device and operation method thereof
CN107636682A (en) * 2015-07-20 2018-01-26 三星电子株式会社 Image collecting device and its operating method
EP3326360A4 (en) * 2015-07-20 2018-06-20 Samsung Electronics Co., Ltd. Image capturing apparatus and method of operating the same
CN105319725A (en) * 2015-10-30 2016-02-10 中国科学院遗传与发育生物学研究所 Ultra-high resolution imaging method used for rapid moving object
CN105319725B (en) * 2015-10-30 2018-01-02 中国科学院遗传与发育生物学研究所 Super-resolution imaging method for fast moving objects
WO2020042126A1 (en) * 2018-08-30 2020-03-05 华为技术有限公司 Focusing apparatus, method and related device
WO2021080524A1 (en) * 2019-10-23 2021-04-29 Aselsan Elektronik Sanayi Ve Ticaret Anonim Şirketi Passive and adaptive focus optimization method for an optical system
CN113711123A (en) * 2020-03-10 2021-11-26 华为技术有限公司 Focusing method and device and electronic equipment

Also Published As

Publication number Publication date
EP1966648A4 (en) 2011-06-15
JP2009522591A (en) 2009-06-11
EP1966648A1 (en) 2008-09-10
US20100045800A1 (en) 2010-02-25
US8089515B2 (en) 2012-01-03

Similar Documents

Publication Publication Date Title
US8089515B2 (en) Method and device for controlling auto focusing of a video camera by tracking a region-of-interest
KR101722803B1 (en) Method, computer program, and device for hybrid tracking of real-time representations of objects in image sequence
US9092875B2 (en) Motion estimation apparatus, depth estimation apparatus, and motion estimation method
JP5284048B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN109313799B (en) Image processing method and apparatus
US10853927B2 (en) Image fusion architecture
US11138709B2 (en) Image fusion processing module
JP6800628B2 (en) Tracking device, tracking method, and program
CN108924428A (en) Automatic focusing method, device and electronic equipment
JP4373840B2 (en) Moving object tracking method, moving object tracking program and recording medium thereof, and moving object tracking apparatus
JP5453573B2 (en) Imaging apparatus, imaging method, and program
US20110074927A1 (en) Method for determining ego-motion of moving platform and detection system
CN112802033B (en) Image processing method and device, computer readable storage medium and electronic equipment
JP2017229061A (en) Image processing apparatus, control method for the same, and imaging apparatus
JP3988574B2 (en) Image processing device
JPH1021408A (en) Device and method for extracting image
JP2021128537A (en) Image processing device, image processing method, program and storage medium
JPH1098644A (en) Movement detection device
CN113807124B (en) Image processing method, device, storage medium and electronic equipment
JP2016081252A (en) Image processor and image processing method
RU2778355C1 (en) Device and method for predictive autofocus on an object
US20220198683A1 (en) Object tracking apparatus and control method thereof using weight map based on motion vectors
JP2019083407A (en) Image blur correction device and control method therefor, and imaging device
JP2021190922A (en) Imaging apparatus, control method therefor and program
Gurrala et al. Enhancing Safety and Security: Face Tracking and Detection in Dehazed Video Frames Using KLT and Viola-Jones Algorithms.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2008547998

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2005821826

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

WWP Wipo information: published in national office

Ref document number: 2005821826

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 12087207

Country of ref document: US