WO2012050185A1 - Video processing device, method of video processing and video processing program - Google Patents

Video processing device, method of video processing and video processing program

Info

Publication number
WO2012050185A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
foreground
video
image
unit
Prior art date
Application number
PCT/JP2011/073639
Other languages
French (fr)
Japanese (ja)
Inventor
健史 筑波 (Takeshi Tsukuba)
正宏 塩井 (Masahiro Shioi)
Original Assignee
シャープ株式会社 (Sharp Corporation)
Priority date
Filing date
Publication date
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2012050185A1 publication Critical patent/WO2012050185A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/70
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • the present invention relates to a video processing device, a video processing method, and a video processing program.
  • Terminals with an imaging function, such as digital video cameras, digital still cameras, and mobile phones, are rapidly becoming widespread.
  • There is also demand for an apparatus that generates a new video by processing video shot by these cameras. For example, a specific image area in a video can be extracted and used as a part for video processing, or a video can be extracted for each individual image area and used for managing and searching within the original video. Specifically, the following methods (1) and (2) are known for extracting a desired image region from a video.
  • The chroma key process is a process of extracting a desired object (foreground area) by capturing the object against a background of a certain color (for example, blue) and removing the background portion of that color from the captured image. With this process, the video is separated into a video of the foreground image area and a video of the background image area.
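  • As a rough, hedged illustration of this kind of color-based separation (not code from this document), the following sketch assumes an OpenCV/NumPy environment and a blue backdrop; the HSV range and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def chroma_key(frame_bgr, lower=(90, 50, 50), upper=(130, 255, 255)):
    """Separate a frame into foreground image and foreground mask, assuming a blue backdrop.

    The HSV range for "blue" is an assumption; in practice it is tuned per setup.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    background_mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    foreground_mask = cv2.bitwise_not(background_mask)
    foreground = cv2.bitwise_and(frame_bgr, frame_bgr, mask=foreground_mask)
    return foreground, foreground_mask
```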
  • Non-Patent Document 1 describes, for a gray image, a technique in which a user assigns in advance a marker representing the foreground to a desired image area (foreground image area) and a marker representing the background to another area (background image area), and the foreground image region is then extracted by graph cuts based on the markers assigned to the foreground image region and the background image region.
  • Non-Patent Document 2 describes a technique for extracting a foreground image region by applying graph cuts to a color image.
  • Non-Patent Document 3 describes a technique in which a map (trimap) with three markers, a foreground image region, a background image region, and an unknown image region (an image region not yet determined to belong to either the foreground or the background), is created, and the foreground image area is then extracted by estimating the mixture ratio α (matte) of foreground and background pixels in the unknown region.
  • Patent Document 1 describes a foreground image region extraction method based on color information and depth information.
  • Patent Document 2 describes a technique for extracting a foreground image area by roughly extracting the foreground image area from depth information from a camera to a subject, and then recursively repeating a region division and integration method based on color information.
  • The chroma key process has the drawback that when an object (foreground area) includes the constant color of the background or a similar color, that area is determined to be part of the background image area, that is, the object cannot be reliably extracted.
  • The chroma key process also has the drawback that when the background is not a constant color, the non-constant-color portion of the background image area is extracted as part of the object, that is, the object cannot be reliably extracted.
  • In the techniques of Non-Patent Documents 1 to 3 and Patent Documents 1 and 2, when the color distributions of the object and the background area are similar, or when the object and the background area have a similar pattern (texture), the boundary of the region cannot be specified, and defects appear in the extracted object.
  • Furthermore, part of the background area may be erroneously extracted as the object. That is, the conventional techniques have the drawback that the object cannot be reliably extracted.
  • In addition, when such extraction is applied to video, the extracted shape of the object becomes discontinuous in the time direction, causing flicker.
  • The present invention has been made in view of the above points, and provides a video processing apparatus, a video processing method, and a video processing program capable of reliably extracting an image of an object.
  • The present invention has been made to solve the above problems. One aspect of the present invention is a video processing apparatus that extracts foreground area information indicating a foreground image from a video indicated by video information, the apparatus comprising a foreground area correction unit that corrects the foreground image indicated by the foreground area information at a first time using the foreground area information and the video information at a second time.
  • In another aspect of the present invention, the foreground area correction unit corrects the foreground image indicated by the foreground area information at the first time using the foreground area information and the video information at a plurality of second times.
  • In another aspect of the present invention, the video indicated by the video information includes an image of a predetermined object, and the foreground area correction unit includes: a movement amount calculation unit that calculates, based on the video information and the target image area information indicating the image of the object at the first time and the video information, the foreground area information, and the target image area information at the second time, a movement amount by which the foreground image has moved from the second time to the first time; a foreground region probability map generation unit that calculates, based on the movement amount calculated by the movement amount calculation unit and the foreground image, a probability that each portion of the video at the first time is a foreground image; and a correction unit that extracts the foreground area information at the first time based on the probability calculated by the foreground region probability map generation unit and corrects the foreground image indicated by the extracted foreground area information.
  • Another aspect of the present invention is a video processing method in a video processing apparatus that extracts foreground area information indicating a foreground image from a video indicated by video information, the method comprising a foreground area correction step in which a foreground area correction unit corrects the foreground image indicated by the foreground area information at the first time using the foreground area information and the video information at the second time.
  • In another aspect, the video indicated by the video information includes an image of a predetermined object, and the foreground area correction step includes: a movement amount calculation step in which a movement amount calculation unit calculates, based on the video information and the target image area information indicating the image of the object at the first time and the video information, the foreground area information, and the target image area information at the second time, a movement amount by which the foreground image has moved from the second time to the first time; a foreground region probability map generation step in which a foreground region probability map generation unit calculates, based on the movement amount calculated in the movement amount calculation step and the foreground image, a probability that each portion of the video at the first time is a foreground image; and a correction step in which a correction unit extracts the foreground area information at the first time based on the probability calculated in the foreground region probability map generation step and corrects the foreground image indicated by the extracted foreground area information.
  • Another aspect of the present invention is a video processing program that causes a computer of a video processing apparatus that extracts foreground area information indicating a foreground image from a video indicated by video information to execute a foreground area correction procedure for correcting the foreground image indicated by the foreground area information at the first time using the foreground area information and the video information at the second time.
  • In another aspect, the video indicated by the video information includes an image of a predetermined object, and the foreground area correction procedure includes: a movement amount calculation procedure for calculating, based on the video information and the target image area information indicating the image of the object at the first time and the video information, the foreground area information, and the target image area information at the second time, a movement amount by which the foreground image has moved from the second time to the first time; a foreground region probability map generation procedure for calculating, based on the movement amount calculated by the movement amount calculation procedure and the foreground image, a probability that each portion of the video at the first time is a foreground image; and a correction procedure for extracting the foreground area information at the first time based on the probability calculated by the foreground region probability map generation procedure and correcting the foreground image indicated by the extracted foreground area information.
  • According to the present invention, the foreground image area or the background image area can be reliably extracted.
  • FIG. 1 is a block diagram showing a configuration of a video processing apparatus 1 according to an embodiment of the present invention.
  • The video processing apparatus 1 includes a video information acquisition unit 10, a depth information acquisition unit 11, a video information reproduction unit 12, an ROI (Region of Interest; target image region) acquisition unit 13, a video display unit 14, an object extraction unit 15, and a mask information recording unit 16.
  • the video information acquisition unit 10 acquires video information (t).
  • the video information (t) is video information of a moving image and is a function of time t (elapsed time from the start time of the moving image).
  • the video information of the present invention is not limited to this, and may be video information of a plurality of still images.
  • The video information (t) may be a moving image or still images that are temporally continuous or adjacent, captured with the imaging device fixed, or images captured at the same time from continuous or adjacent positions (in the latter case, the video information is a function of position).
  • The video information acquisition unit 10 may acquire the video information (t) from an imaging device, or may acquire video information (t) recorded in advance in a recording unit or an external recording device.
  • the video information acquisition unit 10 outputs the acquired video information (t) to the video information reproduction unit 12, the ROI acquisition unit 13, and the object extraction unit 15.
  • the depth information acquisition unit 11 acquires depth information (t) of the video information (t) acquired by the video information acquisition unit 10.
  • the depth information (t) is information representing the distance from the imaging device to the imaging object for each pixel of the video information (t).
  • the depth information acquisition unit 11 outputs the acquired depth information (t) to the object extraction unit 15.
  • the video information reproduction unit 12 generates a video signal for controlling the output of each pixel at each time t of the video display unit 14 based on the video information (t) input from the video information acquisition unit 10.
  • the video information reproduction unit 12 displays the video on the video display unit 14 by outputting the generated video signal to the video display unit 14. That is, the video information reproduction unit 12 reproduces the video of the video information (t) and causes the video display unit 14 to display the reproduced video.
  • In addition, the video information reproduction unit 12 superimposes the image of the object extracted by the object extraction unit 15 on the video of the video information (t) based on the mask information (t) recorded by the mask information recording unit 16, and displays the result. That is, the mask information (t) is associated with the video information (t) at time t. When the mask information recording unit 16 does not record the mask information (t), the video of the video information (t) is reproduced as it is.
  • the video display unit 14 is a touch panel type display.
  • the video display unit 14 displays the video of the video information (t) by controlling the output based on the video signal input from the video information reproducing unit 12.
  • The video display unit 14 converts information indicating a touched position into information indicating a position in the image (ts) of the video information at a certain time ts.
  • When the user specifies an ROI in the image (ts) displayed on the display by touching it, the video display unit 14 detects ROI position information (referred to as user-specified ROI information (ts)), that is, information indicating the position and shape (circumscribed shape) of the ROI. Details of the process for detecting the user-specified ROI information (ts) in the video display unit 14 will be described later.
  • the video display unit 14 outputs the detected user-specified ROI information (ts) to the ROI acquisition unit 13.
  • The ROI acquisition unit 13 detects, in the image of the video information (t) at each time t other than time ts (the image of each frame; hereinafter referred to as the processed image (t)), an image that matches or is similar to the image within the range of the user-specified ROI information (ts) input from the video display unit 14. The ROI acquisition unit 13 then extracts information indicating the position and shape of the matching or similar image as ROI information (t).
  • the ROI acquisition unit 13 calculates and records feature points (referred to as ROI feature points (ts)) from an image within the range of the user-specified ROI information (ts).
  • A feature point is a characteristic point in an image, for example a point extracted as an edge or a vertex of a subject based on changes in color or luminance between pixels, but the feature point is not limited to this.
  • the ROI acquisition unit 13 calculates an image feature point (t) of the processed image (t) at each time t.
  • the ROI acquisition unit 13 performs matching between the image feature point (t) and the ROI feature point (ts).
  • The ROI acquisition unit 13 sequentially multiplies the ROI feature points (ts) by a transformation matrix, thereby moving (including rotating) and enlarging or reducing the ROI feature points (ts), and calculates the number of feature points that match the image feature points (t) (referred to as the number of matching feature points).
  • When the ROI acquisition unit 13 determines that the number of matching feature points is equal to or greater than a predetermined threshold, it records the transformation matrix at that time.
  • The ROI acquisition unit 13 sets the position information obtained by multiplying the user-specified ROI information (ts) by the transformation matrix as the ROI information (t). That is, the ROI acquisition unit 13 determines which part of the processed image (t) the image within the range of the user-specified ROI information (ts) matches.
  • the ROI acquisition unit 13 outputs the extracted ROI information (t) (including user-specified ROI information (ts)) to the video display unit 14 and the object extraction unit 15.
  • the ROI acquisition unit 13 stores the extracted ROI information (t) in the ROI information storage unit 1583.
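  • The feature-point matching and transformation-matrix estimation described above could be sketched as follows; this is only an assumed illustration using ORB features and a partial affine estimate, not the detector or transform actually used by the ROI acquisition unit 13.

```python
import cv2
import numpy as np

def track_roi(roi_image, frame, min_matches=10):
    """Estimate where the user-specified ROI image reappears in a later frame.

    Returns a 2x3 transformation matrix (rotation, scale, translation) or None.
    ORB + brute-force Hamming matching is an assumed stand-in for the feature
    points and matching performed by the ROI acquisition unit 13.
    """
    to_gray = lambda img: cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
    orb = cv2.ORB_create()
    kp_roi, des_roi = orb.detectAndCompute(to_gray(roi_image), None)
    kp_frm, des_frm = orb.detectAndCompute(to_gray(frame), None)
    if des_roi is None or des_frm is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_roi, des_frm)
    if len(matches) < min_matches:      # threshold on the number of matching feature points
        return None
    src = np.float32([kp_roi[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    matrix, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return matrix  # applying this to the ROI corners gives the ROI information (t)
```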
  • the object extraction unit 15 receives video information (t) from the video information acquisition unit 10, receives depth information (t) from the depth information acquisition unit 11, and receives ROI information (t) from the ROI acquisition unit 13. .
  • the object extraction unit 15 generates mask information (t) at each time t using the input video information (t), depth information (t), and ROI information (t). Details of processing performed by the object extraction unit 15 will be described later.
  • the object extraction unit 15 records the extracted mask information (t) in the mask information recording unit 16.
  • FIG. 2 is a schematic diagram illustrating an example of detection processing for user-specified ROI information (ts) according to the present embodiment.
  • The square labeled A is a touch panel display (the video display unit 14).
  • The symbol O represents the image of the object (a person in FIG. 2) that the user wants to extract.
  • The symbol U represents the user's hand.
  • FIG. 2 is a diagram when a rectangular (quadrangle) selection tool is used, and shows that the user has surrounded the target image to be extracted with the rectangular selection tool.
  • the position information of the frame (the circumscribed rectangle of the object O) labeled with the symbol r1 is user-specified ROI information (ts).
  • The user-specified ROI information (ts) is recorded, for example, as the data in Table 1 below.
  • The user-specified ROI information includes the time ts (or the number (frame number) attached to the video frame), a flag indicating whether an extraction target image (target image) exists within the circumscribed rectangle (referred to as the extraction target flag), the starting point position (x0, y0) of the circumscribed rectangle, the width of the circumscribed rectangle (the length indicated by W1 in FIG. 2), and the height of the circumscribed rectangle (the length indicated by L1 in FIG. 2).
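  • The fields listed above might be held in a small record such as the following sketch; the field names are illustrative assumptions, not names from the document.

```python
from dataclasses import dataclass

@dataclass
class UserSpecifiedROI:
    """One record of user-specified ROI information (ts), as in Table 1."""
    frame_number: int   # time ts, or the frame number of the video frame
    has_target: bool    # extraction target flag
    x0: int             # circumscribed-rectangle starting point, x
    y0: int             # circumscribed-rectangle starting point, y
    width: int          # W1 in FIG. 2
    height: int         # L1 in FIG. 2
```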
  • FIG. 3 is a flowchart showing an example of the operation of the video processing apparatus 1 according to the present embodiment.
  • Step S11 The video information acquisition unit 10 acquires video information (t), and outputs the acquired video information (t) to the video information playback unit 12, the ROI acquisition unit 13, and the object extraction unit 15.
  • the depth information acquisition unit 11 acquires depth information (t), and outputs the acquired depth information (t) to the object extraction unit 15. Thereafter, the process proceeds to step S12.
  • Step S12 The video information reproducing unit 12 reproduces the video of the video information (t) input in step S11, and causes the video display unit 14 to display the reproduced video. Thereafter, the process proceeds to step S13.
  • Step S13 The user pauses the reproduction of the video reproduced in step S12 at a certain time ts and designates the ROI.
  • The video display unit 14 detects the user-specified ROI information (ts) for the ROI specified by the user and outputs it to the ROI acquisition unit 13. Thereafter, the process proceeds to step S14.
  • Step S14 The ROI acquisition unit 13 extracts the ROI information (t) at each time t based on the user-specified ROI information (ts) detected in Step S13.
  • the ROI acquisition unit 13 outputs the extracted ROI information (t) to the video display unit 14 and the object extraction unit 15.
  • The video display unit 14 displays the circumscribed shape indicated by the ROI information (t) (a circumscribed rectangle in the case of Table 1, a circumscribed circle in the case of Table 2) superimposed at that position on the video of the video information (t). Thereafter, the process proceeds to step S2.
  • Step S2 The object extraction unit 15 performs object extraction using the video information (t) and depth information (t) acquired in step S11 and the ROI information (t) extracted in step S14, and generates mask information (t).
  • the object extraction unit 15 records the generated mask information (t) in the mask information recording unit 16. Thereafter, the process proceeds to step S15.
  • Step S15 Based on the mask information (t) recorded by the mask information recording unit 16, the video information reproduction unit 12 superimposes the object image extracted by the object extraction unit 15 on the video of the video information (t) and displays the result.
  • Note that the video processing device 1 may perform the processing for all input video information (t) at every time t, or only for video information (t) in a range specified by the user (t1 ≤ t ≤ t2).
  • As described above, since the video processing apparatus 1 detects the user-specified ROI information (ts) for the ROI specified by the user, the ROI can be extracted more reliably than when the ROI is extracted automatically.
  • In addition, since the video display unit 14 displays the circumscribed shape indicated by the ROI information (t) superimposed on the video of the video information (t), the user can confirm that the desired ROI has been detected.
  • FIG. 4 is a schematic block diagram illustrating the configuration of the object extraction unit 15 according to the present embodiment.
  • The object extraction unit 15 includes filter units 151a and 151b, a distribution model estimation unit 152, a clustering unit 153, a feature amount calculation unit 154, a foreground region extraction unit 155, a foreground region correction unit 156, a mask information generation unit 157, and a buffer unit 158.
  • the buffer unit 158 includes a video information storage unit 1581, a foreground region information storage unit 1582, an ROI information storage unit 1583, an ROI depth distribution information storage unit 1584, and a corrected foreground region information storage unit 1585.
  • parallelograms denoted by reference signs I1, D1, R1, and M indicate information, and are video information (t), depth information (t), ROI information (t), and mask information (t), respectively.
  • The filter unit 151a removes noise from the input video information (t) and performs smoothing. Specifically, the filter unit 151a applies, to the processed image (t) at each time t, a smoothing filter that preserves edges (contours) for each color component (hereinafter also referred to as an edge-preserving smoothing filter).
  • The filter unit 151a uses, as the smoothing filter, a bilateral filter represented by the following equation (1):

    g(x, y) = \frac{\sum_{(i, j) \in W} f(x+i, y+j)\, \exp\!\left(-\frac{i^2 + j^2}{2\sigma_1^2}\right) \exp\!\left(-\frac{(f(x+i, y+j) - f(x, y))^2}{2\sigma_2^2}\right)}{\sum_{(i, j) \in W} \exp\!\left(-\frac{i^2 + j^2}{2\sigma_1^2}\right) \exp\!\left(-\frac{(f(x+i, y+j) - f(x, y))^2}{2\sigma_2^2}\right)}   (1)

  • In equation (1), f(x, y) is the input image, g(x, y) is the output image, W is the window over which the filter is applied, σ1 is a parameter (the standard deviation of a Gaussian distribution) that controls the weighting coefficient related to the inter-pixel distance, and σ2 is a parameter (the standard deviation of a Gaussian distribution) that controls the weighting coefficient related to the difference in pixel values.
  • the filter unit 151 a outputs the video information (t) smoothed by the smoothing filter to the clustering unit 153 and the feature amount calculation unit 154.
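  • As an assumed illustration of the edge-preserving smoothing of equation (1), OpenCV's bilateral filter can be used; the concrete parameter values below are arbitrary and only loosely correspond to σ1 and σ2.

```python
import cv2

def edge_preserving_smooth(image, d=9, sigma_color=30.0, sigma_space=5.0):
    """Edge-preserving smoothing with OpenCV's bilateral filter.

    sigma_space and sigma_color correspond loosely to sigma_1 (inter-pixel
    distance) and sigma_2 (pixel-value difference) in equation (1); the
    concrete values here are arbitrary examples.
    """
    return cv2.bilateralFilter(image, d, sigma_color, sigma_space)
```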
  • The filter unit 151b removes noise from the input depth information (t) and performs smoothing. Specifically, the filter unit 151b applies an edge-preserving smoothing filter, thereby removing the horizontal noise caused by occlusion (shielding).
  • the filter unit 151 b outputs the depth information (t) smoothed by the smoothing filter to the feature amount calculation unit 154 and the distribution model estimation unit 152.
  • The distribution model estimation unit 152 estimates the parameters of a depth distribution model within the ROI. Specifically, the distribution model estimation unit 152 obtains the parameters of the distribution model by maximum likelihood estimation using the following equation (2), that is, a GMM (Gaussian Mixture Model) expressed as a mixture of Gaussian distributions. The acquired parameters are referred to as ROI depth distribution information (t).

    P(x) = \sum_{i} w_i\, N(x \mid \mu_i, \Sigma_i), \qquad N(x \mid \mu_i, \Sigma_i) = \frac{1}{(2\pi)^{D/2} |\Sigma_i|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x - \mu_i)^{\mathrm T} \Sigma_i^{-1} (x - \mu_i)\right)   (2)

  • In equation (2), P(x) represents the probability that the vector x appears, w_i represents the weighting factor of the Gaussian distribution of class i, μ_i represents the mean vector of class i, Σ_i represents the covariance matrix of class i, D represents the number of dimensions of the vector x, and N(x | μ_i, Σ_i) represents the Gaussian distribution of class i, expressed using the mean vector μ_i and the covariance matrix Σ_i.
  • the distribution model estimation unit 152 obtains each parameter of the distribution model using an EM (Expectation-Maximization) algorithm.
  • The distribution model estimation unit 152 takes the depth distribution of the extraction target region in the ROI to be the distribution of the class having the maximum weighting coefficient w_i. That is, the depth distribution is determined on the assumption that the extraction target occupies the largest area within the ROI.
  • the distribution model estimation unit 152 outputs the estimated ROI depth distribution information (t) to the foreground region extraction unit 155 and the buffer unit 158.
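  • A minimal sketch of estimating the ROI depth distribution with a Gaussian mixture, assuming scikit-learn is available (its fit runs the EM algorithm internally); the number of classes and the variable names are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_roi_depth_distribution(depth_roi, n_classes=3):
    """Fit a GMM to the depth values inside the ROI and return the dominant class.

    depth_roi: 2-D array of depth values within the ROI.
    Returns (mean, std, weight) of the class with the largest weighting factor w_i.
    """
    samples = depth_roi.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full")
    gmm.fit(samples)                      # parameters obtained by the EM algorithm
    i = int(np.argmax(gmm.weights_))      # class with the maximum weighting coefficient w_i
    mean = float(gmm.means_[i, 0])
    std = float(np.sqrt(gmm.covariances_[i, 0, 0]))
    return mean, std, float(gmm.weights_[i])
```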
  • The clustering unit 153 performs clustering on the smoothed video information (t) input from the filter unit 151a for each processed image (t), thereby dividing the processed image (t) into a plurality of regions (also called superpixels). For example, the clustering unit 153 performs clustering in a feature amount space.
  • Clustering in a feature amount space means that each pixel in image space is mapped to a feature amount space (for example, color, edge, or motion vector), and clustering is performed in that space by a technique such as the K-means method, the Mean-Shift method, or a K-nearest-neighbor search method (approximate K-nearest-neighbor search).
  • The clustering unit 153 thereby divides the processed image (t) into sets (regions; classes) of pixels whose feature amounts are similar (whose feature amount values fall within a predetermined range).
  • After the clustering process in the feature amount space is completed, the clustering unit 153 replaces the pixel values in the original image space of the pixels in each class with a pixel value representative of the region (for example, the average value).
  • the clustering unit 153 assigns a label for identifying the region to each pixel in each region, and outputs region information (t). Details of the clustering unit 153 will be described below.
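  • A minimal sketch of the feature-space clustering described above, assuming a plain color feature vector and the K-means method; the choice of feature and the number of clusters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_into_regions(image, n_clusters=16):
    """Cluster pixels in a simple color feature space and label each pixel.

    Returns (label_map, quantized_image). Using only color as the feature
    vector is an assumption; the unit may also use edges or motion vectors.
    """
    h, w, c = image.shape
    features = image.reshape(-1, c).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(features)
    labels = km.labels_.reshape(h, w)                              # region label per pixel
    quantized = km.cluster_centers_[km.labels_].reshape(h, w, c)   # representative value per region
    return labels, quantized.astype(image.dtype)
```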
  • FIG. 5 is a schematic block diagram illustrating the configuration of the clustering unit 153 according to the present embodiment.
  • the clustering unit 153 includes a feature amount detection unit 1531, a seed generation unit 1532, a region growth unit 1533, and a region integration unit 1534.
  • The clustering unit 153 performs clustering using feature values based on edges and colors, but the present invention is not limited to these feature values.
  • the feature amount detection unit 1531 receives the smoothed video information (t).
  • the feature amount detection unit 1531 calculates the feature amount of the pixel (x, y) (x and y are coordinates representing the position of the pixel in the image) in each processed image (t).
  • The feature amount detection unit 1531 applies a differential operator to each color component (for example, RGB (Red, Green, Blue)) and calculates the gradient of each color component in the x direction and the y direction.
  • The feature amount detection unit 1531 calculates the edge strength E2(x, y | t) from the edge strength E1(x, y | t) using equation (3), where TH_E(x, y | t) is a predetermined threshold for the coordinates (x, y) at time t: E2(x, y | t) is set to 0 when E1(x, y | t) is less than TH_E(x, y | t), and to E1(x, y | t) otherwise.
  • The feature amount detection unit 1531 may adjust the threshold TH_E(x, y | t) according to the image.
  • The seed generation unit 1532 generates seed information for generating superpixels using the edge strength E2(x, y | t).
  • The seed generation unit 1532 sets the seed information S(x, y | t) to "1" at pixels where the edge strength E2(x, y | t) takes a minimum value within a window, and to "0" otherwise (equation (4)), where W1 represents the size of the window in the x direction and W2 represents the size of the window in the y direction.
  • The seed generation unit 1532 outputs the generated seed information S(x, y | t) to the region growing unit 1533.
  • The region growing unit 1533 applies a region growing method based on the seed information S(x, y | t) input from the seed generation unit 1532.
  • The region growing unit 1533 starts this processing from pixels whose seed information S(x, y | t) is "1" and expands each region.
  • The region growing unit 1533 sets the expanded regions as the superpixel group R1(t), and outputs information indicating the superpixel group R1(t) to the region integration unit 1534.
  • The region integration unit 1534 performs region integration processing that integrates superpixels with a small area into other regions, starting from the superpixel group R1(t) indicated by the information input from the region growing unit 1533. Specifically, the region integration unit 1534 takes one point of each superpixel in the group R1(t) as a vertex and computes a weighted undirected graph represented by the connection relationships between the vertices and the edges (weights) between connected vertices. For the edge (weight) between vertices, for example, the distance in color space between the representative colors of the corresponding superpixels is used.
  • The region integration unit 1534 performs region integration on the weighted undirected graph so as to form a minimum spanning tree (MST) using a greedy method, and generates the superpixel group R2(t). For each superpixel, the region integration unit 1534 assigns a label identifying the superpixel to all the pixels in the superpixel, and outputs the labeling result as region information (t).
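  • The greedy, MST-style integration of small superpixels could be sketched roughly as below (a Kruskal-style merge over the region adjacency graph); the minimum-area threshold is an assumed parameter, while the representative-color distance as the edge weight follows the description above.

```python
import numpy as np

def merge_small_regions(rep_colors, areas, edges, min_area=50):
    """Greedily merge small regions connected by low-weight edges, Kruskal-style.

    rep_colors: (n, 3) representative color per region
    areas:      (n,) pixel count per region
    edges:      iterable of (i, j) pairs of adjacent regions
    Returns an array mapping each original region id to its merged region id.
    """
    rep = np.asarray(rep_colors, dtype=float)
    parent = list(range(len(areas)))
    area = list(areas)

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # visit edges in order of color-space distance between region representatives
    weighted = sorted(edges, key=lambda e: np.linalg.norm(rep[e[0]] - rep[e[1]]))
    for i, j in weighted:
        ri, rj = find(i), find(j)
        if ri != rj and (area[ri] < min_area or area[rj] < min_area):
            parent[rj] = ri           # merge when either side is still a small-area region
            area[ri] += area[rj]
    return np.array([find(i) for i in range(len(areas))])
```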
  • FIG. 6 is a schematic diagram illustrating an example of a processing result of the clustering unit 153 according to the present embodiment.
  • This figure shows an image of a bear doll riding a toy train.
  • The bear doll moves as the toy train moves along the track.
  • FIG. 6A (reference numeral 6A) is an image representing the edge strength E2 obtained from the input image; bright (white) portions represent regions of high edge strength, and dark (black) portions represent regions of low edge strength.
  • FIG. 6B (reference numeral 6B) is an image representing the seed information S obtained from the edge strength E2; bright (white) portions represent seeds, and dark (black) portions represent portions whose region (class) is determined by the region growing method.
  • FIG. 6C (reference numeral 6C) is an image showing the superpixel group R1(t), and FIG. 6D (reference numeral 6D) is an image showing the superpixel group R2(t).
  • Comparing FIG. 6C and FIG. 6D, the superpixel group R1(t) in FIG. 6C contains many superpixels (regions) with a small area, whereas the number of small-area superpixels (regions) is reduced in the superpixel group R2(t) in FIG. 6D. In this way, by performing the clustering process so that fewer small-area superpixels (regions) remain, the video processing apparatus 1 can obtain a more accurate clustering result.
  • The feature amount calculation unit 154 receives the region information (t) from the clustering unit 153, the smoothed video information (t) from the filter unit 151a, the smoothed depth information (t) from the filter unit 151b, and the ROI information (t).
  • The feature amount calculation unit 154 calculates a feature amount for each region (label) based on the input region information (t), video information (t), depth information (t), and ROI information (t). Specifically, the feature amount calculation unit 154 calculates the following feature amounts (1) to (7), and then outputs feature amount information (t) indicating the calculated feature amounts to the foreground region extraction unit 155.
  • FIG. 7 is a flowchart illustrating an example of the operation of the feature amount calculation unit 154.
  • FIG. 8 is a diagram illustrating an example of labeling (region information) in an 8 ⁇ 8 pixel block for explanation.
  • FIG. 9 is a diagram illustrating an example of a connection relation between regions in FIG. 8 expressed by an unweighted undirected graph and an adjacency matrix.
  • FIG. 10 is a diagram illustrating an example of the method of obtaining the perimeter of the region and the circumscribed rectangle of the region, taking the label 3 in FIG. 8A as an example.
  • Step S154-01 The feature amount calculation unit 154 receives the region information (t) from the clustering unit 153, the video information (t) after smoothing from the filter unit 151a, and the depth information (t) after smoothing from the filter unit 151b. , And ROI information (t). Thereafter, the process proceeds to step S154-02.
  • Step S154-02 The feature amount calculation unit 154 sums the pixel coordinate values for the pixels in the ROI region based on the ROI information (t). Subsequently, the feature amount calculation unit 154 divides the total value by the number of pixels in the ROI area, and uses the calculation result as the center of gravity of the ROI area. Thereafter, the process proceeds to step S154-03.
  • Step S154-03 Based on the region information (t), the feature amount calculation unit 154 scans the processing target image line by line from the origin (raster scan), and obtains the coordinates and number of pixels of all pixels belonging to each label, the position (start point) at which a pixel belonging to each label first appears, and the adjacency relationships between the labels (regions). The position (start point) at which a pixel belonging to each label first appears is stored as the starting point of contour tracking when the area perimeter is acquired in step S154-08. Thereafter, the process proceeds to step S154-04.
  • An example of how the start position of each label and the adjacency relationships between labels are obtained will be described with reference to FIGS. 8 and 9.
  • For example, the label 1 is adjacent to the labels 3 and 4.
  • The adjacency relationships between the labels in the region information of FIG. 8A can finally be expressed as the unweighted undirected graph shown in FIG. 9B, in which each node number corresponds to a label number and an edge between nodes represents a connection relationship.
  • The structure of the graph in FIG. 9B is expressed by the adjacency matrix shown in FIG. 9, in which the value "1" is assigned when there is an edge between nodes and the value "0" is assigned when there is no edge between nodes.
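  • A sketch of the raster scan described in step S154-03, collecting per-label pixel counts, first-appearance start points, and an adjacency matrix like the one in FIG. 9; the 4-neighbour adjacency check is an assumption.

```python
import numpy as np

def scan_labels(label_map):
    """Raster-scan a label map and collect per-label statistics and adjacency.

    Returns (pixel_counts, start_points, adjacency), where adjacency[a, b] == 1
    when labels a and b touch (checked against the left and upper neighbours,
    an assumed 4-connectivity).
    """
    h, w = label_map.shape
    n = int(label_map.max()) + 1
    counts = np.zeros(n, dtype=int)
    starts = {}
    adjacency = np.zeros((n, n), dtype=int)
    for y in range(h):
        for x in range(w):
            lab = int(label_map[y, x])
            counts[lab] += 1
            starts.setdefault(lab, (x, y))            # first appearance = contour-tracking start point
            for ny, nx in ((y, x - 1), (y - 1, x)):   # previously visited neighbours
                if ny >= 0 and nx >= 0:
                    other = int(label_map[ny, nx])
                    if other != lab:
                        adjacency[lab, other] = adjacency[other, lab] = 1
    return counts, starts, adjacency
```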
  • Step S154-04 The feature amount calculation unit 154 sums the coordinate values of the pixels belonging to each label Li (0 ≤ i < MaxLabel).
  • Here, MaxLabel represents the total number of labels identifying the superpixels (regions) obtained from the region information (t).
  • The feature amount calculation unit 154 divides the total value by the number of pixels belonging to the label Li and uses the result as the center of gravity of the label Li. The distance between the center of gravity of the ROI area obtained in step S154-02 and the center of gravity of the label Li (the center-of-gravity distance) is then calculated. Thereafter, the process proceeds to step S154-05.
  • Step S154-05 The feature amount calculation unit 154 calculates, from the smoothed video information (t), the average value, median value, variance, and standard deviation of each color component of each label Li (0 ≤ i < MaxLabel). Thereafter, the process proceeds to step S154-06.
  • Step S154-06 The feature amount calculation unit 154 calculates, from the smoothed depth information (t), the average value, median value, variance, and standard deviation of the depth of each label Li (0 ≤ i < MaxLabel). Thereafter, the process proceeds to step S154-07.
  • Step S154-07 The feature amount calculation unit 154 sets the total number of pixels belonging to the label Li as a region area. Thereafter, the process proceeds to step S154-08.
  • Step S154-08 The feature amount calculation unit 154 calculates the area perimeter of the label Li. Thereafter, the process proceeds to step S154-09.
  • Here, the area perimeter is the amount of movement required to go around the region clockwise (or counterclockwise) from the start point of the label in FIG. 10A. For 8-connected components, if C1 is the number of moves tracked in the up, down, left, and right directions and C2 is the number of moves tracked diagonally, the region perimeter (Perimeter) is calculated by equation (5) or equation (6), for example Perimeter = C1 + \sqrt{2}\,C2.
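  • Reading equations (5)/(6) as Perimeter = C1 + √2·C2, a perimeter computation from an 8-direction chain code might look like the following sketch; treating diagonal steps as √2 is an assumption.

```python
import math

def region_perimeter(chain_code):
    """Compute a region perimeter from an 8-direction chain code (values 0..7).

    Even codes are horizontal/vertical moves (counted in C1),
    odd codes are diagonal moves (counted in C2).
    """
    c1 = sum(1 for d in chain_code if d % 2 == 0)
    c2 = sum(1 for d in chain_code if d % 2 == 1)
    return c1 + math.sqrt(2) * c2
```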
  • Step S154-09 The feature amount calculation unit 154 calculates the minimum rectangle (circumscribed rectangle) circumscribing the area indicated by the label Li. Thereafter, the process proceeds to step S154-10.
  • A method of acquiring the circumscribed rectangle will be described with reference to FIG. 10.
  • In equation (7), the symbol Li represents a label number, the symbol R_Li represents the set of pixels belonging to the label Li, and the symbols x_j and y_j represent the x coordinate and y coordinate of a pixel j belonging to the set R_Li. The circumscribed rectangle is obtained from the minimum and maximum coordinates over the set, for example x_min = \min_{j \in R_{Li}} x_j, x_max = \max_{j \in R_{Li}} x_j, y_min = \min_{j \in R_{Li}} y_j, y_max = \max_{j \in R_{Li}} y_j   (7)
  • Step S154-10 When the calculation of the feature values of all the labels is completed (Yes in Step S154-10), the feature value calculation unit 154 obtains the feature value information (t) including the feature value of each label (region). The data is output to the foreground area extraction unit 155. If there is an unprocessed label for which the feature amount is calculated (No in step S154-10), the process returns to step S154-04 to calculate the feature amount of the next label.
  • In this way, the feature quantities (1) to (7) can be calculated.
  • The operation of the feature amount calculation unit 154 has been described above in the order of steps S154-01 to S154-10, but the present invention is not limited to this order, and the order can be changed within a range in which the present invention can be implemented.
  • an adjacency matrix is used as an example of a data structure representing a connection relationship between regions, but the present invention is not limited to this, and an adjacency list may be used.
  • In the above description, the feature amount calculation unit 154 calculates the feature amounts using RGB as the color space of the image, but another color space such as YCbCr (YUV), CIE L*a*b*, or CIE L*u*v* may also be used.
  • The foreground region extraction unit 155 receives the feature amount information (t) from the feature amount calculation unit 154, and the ROI depth distribution information (t) and the ROI information (t) from the distribution model estimation unit 152. Further, the foreground region extraction unit 155 reads the video information (t) from the video information storage unit 1581 of the buffer unit 158.
  • The read video information (t) is the information stored by the video information acquisition unit 10 without the smoothing process applied; however, the present invention is not limited to this, and the smoothed video information (t) may be used.
  • The foreground region extraction unit 155 extracts the foreground image area (t) to be extracted from the video information (t) based on the ROI information (t), the feature amount information (t), and the ROI depth distribution information (t).
  • the foreground area extraction unit 155 stores the foreground area information (t) indicating the extracted foreground image area (t) in the foreground area information storage unit 1582.
  • FIG. 11 is a flowchart showing an example of the operation of the foreground area extraction unit 155 according to the present embodiment.
  • Step S155-01 The foreground region extraction unit 155 sets parameters of the search conditions for searching for the basic foreground region, which is the core region for extracting the foreground region. Specifically, the foreground region extraction unit 155 sets predetermined values as the lower limit and upper limit of each item of the feature amount information (t).
  • For example, the foreground region extraction unit 155 sets as parameters the lower limit of the region area (referred to as the minimum area), its upper limit (the maximum area), the lower limit of the region perimeter (the minimum perimeter), its upper limit (the maximum perimeter), and, as the initial value of the upper limit of the center-of-gravity distance, the maximum value (maximum distance) of the radius of the circle inscribed in the circumscribed rectangle of the ROI area.
  • By setting the search conditions for the basic foreground region in this way, it is possible to detect a region whose center of gravity lies within the ROI and whose area is large, and to prevent a region belonging to the background from being erroneously detected as the basic foreground region.
  • Step S155-02 The foreground region extraction unit 155 selects, from among the regions whose center-of-gravity distance is not more than the upper limit (and not less than the lower limit), the region with the minimum center-of-gravity distance. Thereafter, the process proceeds to step S155-03.
  • Step S155-03 The foreground region extraction unit 155 determines whether the feature amount information (t) of the region selected in step S155-02 lies between the lower limit and the upper limit set in step S155-01. If it is determined that the feature amount information (t) lies between the lower limit and the upper limit (Yes), the selected region is determined to be the basic foreground region, and the process proceeds to step S155-05. On the other hand, if it is determined that the feature amount information (t) does not lie between the lower limit and the upper limit (No), the process proceeds to step S155-04.
  • Step S155-04 The foreground region extraction unit 155 updates the lower limit and upper limit of each item of the feature amount information (t) by subtracting a predetermined value from the lower limit or adding a predetermined value to the upper limit. Thereafter, the process proceeds to step S155-02.
  • Step S155-05 The foreground region extraction unit 155 compares the average value (or median value) of the depth in the feature amount information (t) of the basic foreground region determined in step S155-03 with that of each region whose center of gravity lies within the ROI, and determines whether the difference between the two pieces of feature amount information (t) is within a predetermined threshold (or less than a predetermined threshold).
  • The foreground region extraction unit 155 integrates each region for which the difference in feature amount information (t) is determined to be within the threshold with the basic foreground region, and determines the integrated region to be the foreground region.
  • The foreground region extraction unit 155 determines the threshold for the difference in feature amount information (t) based on the ROI depth distribution information (t) acquired from the distribution model estimation unit 152. Specifically, the foreground region extraction unit 155 calculates the threshold (TH_D1) for the difference in feature amount information (t) using the following equation (8): TH_D1 = α_1 · σ_1, where α_1 is a predetermined scaling constant and σ_1 is the standard deviation when the depth distribution of the foreground region is assumed to be a Gaussian distribution.
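  • A sketch of the depth-based integration around the basic foreground region using the threshold of equation (8); the use of the mean depth and the parameter names are assumptions.

```python
def integrate_foreground(basic_label, candidate_labels, depth_mean, alpha_1, sigma_1):
    """Integrate regions whose mean depth is close to the basic foreground region's.

    depth_mean: dict mapping region label -> mean depth of that region
    Returns the set of labels making up the foreground region.
    """
    th_d1 = alpha_1 * sigma_1                 # equation (8)
    foreground = {basic_label}
    for lab in candidate_labels:              # regions whose centre of gravity lies in the ROI
        if abs(depth_mean[lab] - depth_mean[basic_label]) <= th_d1:
            foreground.add(lab)
    return foreground
```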
  • The foreground area correction unit 156 stores information indicating the corrected foreground image area (t0) (corrected foreground area information) in the corrected foreground region information storage unit 1585 of the buffer unit 158.
  • The foreground area correction unit 156 also outputs the corrected foreground area information to the mask information generation unit 157.
  • Next, the foreground area correction unit 156 will be described in detail.
  • FIG. 12 is a schematic block diagram showing the configuration of the foreground area correction unit 156 according to the present embodiment.
  • the foreground region correction unit 156 includes a movement amount calculation unit 1561, a foreground region probability map generation unit 1562, a foreground region determination unit 1563, and a boundary region correction unit 1564.
  • The movement amount calculation unit 1561 calculates a movement amount (t0, t0-k) (also referred to as a motion vector) by subtracting the position in the processed image (t0-k) from the position in the processed image (t0). That is, the movement amount (t0, t0-k) represents the amount by which the foreground image area (t0-k) has moved from time t0-k to time t0.
  • the movement amount calculation unit 1561 calculates the movement amount (t0, t0-k) by performing template matching (also referred to as motion search) processing shown in FIG.
  • FIG. 13 is an explanatory diagram for explaining template matching according to the present embodiment.
  • the horizontal axis is time t
  • the vertical axis is the y coordinate
  • the direction perpendicular to the horizontal axis and the vertical axis is the x coordinate.
  • the image with the symbol Ik represents the processed image at time (t0-k).
  • An image region denoted by reference symbol Ok represents a foreground image region at time (t0-k) and a circumscribed rectangle surrounding the foreground image region (object).
  • the coordinates with the reference symbol Ak represent the coordinates of the starting point position of the circumscribed rectangle surrounding the foreground area indicated by the reference symbol Ok.
  • the image with the symbol Mk is an image indicated by the mask information (t0-k) in the circumscribed rectangle at time (t0-k).
  • the mask information (t0-k) is information for identifying the foreground image region (white portion in FIG. 13) and the background image region (black portion in FIG. 13) within the circumscribed rectangle.
  • The foreground image region indicated by the mask information (t0-k) is set as the foreground image region (t0-k), and the other region within the circumscribed rectangle is set as the background image region (t0-k).
  • a symbol Vk is a vector from the coordinate Ak to the coordinate A0. This vector represents the amount of movement (t0, t0-k) of the foreground image area (t0-k).
  • The movement amount calculation unit 1561 uses the foreground image region Ok as a template, moves the template over the processed image (t0) (the template may also be rotated or scaled), and detects the region with the highest similarity to the template (the estimated region). The movement amount calculation unit 1561 calculates the difference in coordinates between the detected estimated region and the foreground image region Ok as the movement amount (t0, t0-k).
  • the movement amount calculation unit 1561 calculates coordinates (x0, y0) (referred to as initial search coordinates) of the center of gravity of the ROI area indicated by the ROI information (t0).
  • the movement amount calculation unit 1561 performs a spiral search around the initial search coordinates (x0, y0) to detect the estimated region and calculates the movement amount (t0, t0-k).
  • The spiral search is a technique for searching for the estimated region by moving the coordinates in a spiral order, as shown in the figure, so as to gradually expand the search range starting from coordinates where the foreground image area (t0) is likely to be located (here, the initial search coordinates).
  • When the movement amount calculation unit 1561 finds a movement amount whose similarity is higher than a predetermined value, the spiral search may be terminated at that point. This allows the movement amount calculation unit 1561 to reduce the amount of calculation.
  • The movement amount calculation unit 1561 calculates the similarity R_SAD using the following equation (9), taking the coordinates selected in spiral order (the selected coordinates) as the center of gravity, and determines the region having the smallest value to be the estimated region.

    R_{SAD} = \sum_{j=0}^{N-1} \sum_{i=0}^{M-1} \left| I(i + dx, j + dy \mid t_0) - T(i, j \mid t_0 - k) \right|   (9)

  • In equation (9), M × N (W1 × L1 in the example of FIG. 2) represents the size of the template, (i, j) represents the coordinates of a pixel in the template, and T(i, j | t0-k) represents the pixel value at coordinates (i, j) of the template. (dx, dy) is the value (offset value) obtained by subtracting the center of gravity of the ROI area indicated by the ROI information (t0-k) from the selected coordinates, and I(i+dx, j+dy | t0) represents the pixel value of the processed image (t0) at coordinates (i+dx, j+dy).
  • Equation (9) indicates that the absolute value is calculated by the Manhattan distance (L1-distance, L1-norm) and the sum is taken over i and j. Since the pixel values I(i+dx, j+dy | t0) and T(i, j | t0-k) are the values of each color component in RGB space, the absolute difference in equation (9) is, as shown in equation (10), the sum of the absolute differences of the individual color components.
  • the movement amount calculation unit 1561 outputs the calculated movement amount (t0, t0-k) to the foreground region probability map generation unit 1562.
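  • The SAD similarity of equation (9) evaluated over spiral-ordered candidate offsets could be sketched as follows; the spiral generation, the early-exit threshold, and the parameter names are simplified assumptions.

```python
import numpy as np

def spiral_offsets(max_radius):
    """Yield (dx, dy) offsets in square rings of increasing radius around (0, 0)."""
    yield (0, 0)
    for r in range(1, max_radius + 1):
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if max(abs(dx), abs(dy)) == r:      # only the outer ring
                    yield (dx, dy)

def sad(template, image, top, left):
    """Sum of absolute differences (equation (9)), summed over all color components."""
    n, m = template.shape[:2]
    if top < 0 or left < 0:
        return np.inf
    patch = image[top:top + n, left:left + m]
    if patch.shape[:2] != (n, m):
        return np.inf
    return float(np.abs(patch.astype(np.int64) - template.astype(np.int64)).sum())

def match_template(template, image, center, max_radius=32, good_enough=None):
    """Spiral search for the offset with the smallest SAD around `center` (x, y)."""
    cx, cy = center
    n, m = template.shape[:2]
    best, best_offset = np.inf, (0, 0)
    for dx, dy in spiral_offsets(max_radius):
        score = sad(template, image, cy + dy - n // 2, cx + dx - m // 2)
        if score < best:
            best, best_offset = score, (dx, dy)
        if good_enough is not None and best <= good_enough:
            break                                   # early exit reduces the amount of calculation
    return best_offset, best
```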
  • In equation (11), w_k represents a weighting factor, and dx_k and dy_k represent the x component and the y component of the movement amount (t0, t0-k), respectively. M(x, y | t0-k) takes the value "1" when the pixel at coordinates (x, y) of the processed image (t0-k) belongs to the foreground image area (t0-k), and the value "0" when it does not (that is, when it belongs to the background image area). The foreground region probability map generation unit 1562 calculates, for each pixel, the probability P(x, y | t0) that the portion is a foreground image, for example as

    P(x, y \mid t_0) = \sum_{k} w_k\, M(x + dx_k, y + dy_k \mid t_0 - k)   (11)

  • The weighting factor w_k may be set according to the temporal distance from t0, for example as shown in equation (12). That is, a smaller weighting factor is assigned to the foreground area information at times farther from time t0.
  • The set of values P(x, y | t0) for all coordinates (x, y) of the processed image (t0) is referred to as the foreground region probability map P(t0), and is expressed by the following equation (13):

    P(t_0) = \{\, P(x, y \mid t_0) \mid 0 \le x < W,\ 0 \le y < H \,\}   (13)

    where W represents the number of pixels in the horizontal direction and H the number of pixels in the vertical direction of the processed image (t0).
  • The foreground region probability map generation unit 1562 applies the following equation (14) to the calculated foreground region probability map P(t0): the foreground area information M(x, y | t0) of the pixel at coordinates (x, y) of the processed image (t0) is set to "1" (foreground image region) when P(x, y | t0) is equal to or greater than a threshold, and to "0" (background image region) otherwise. The threshold takes a value from 0 to 1 and is given, for example, by equation (15).
  • The foreground region probability map generation unit 1562 outputs the calculated foreground area information M(x, y | t0).
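  • A sketch of accumulating past foreground masks shifted by their movement amounts into a probability map and thresholding it (in the spirit of equations (11) to (14)); the geometric-decay weighting and the threshold value of 0.5 are assumptions.

```python
import numpy as np

def foreground_probability_map(past_masks, movements, weights=None, threshold=0.5):
    """Build P(t0) from past foreground masks M(t0-k) and movement amounts (dx_k, dy_k).

    past_masks: list of binary (H, W) arrays, most recent first
    movements:  list of (dx_k, dy_k) shifts from time t0-k to t0
    Returns (probability_map, corrected_mask).
    """
    h, w = past_masks[0].shape
    if weights is None:                              # smaller weight for older frames (cf. equation (12))
        weights = np.array([0.5 ** k for k in range(len(past_masks))])
        weights = weights / weights.sum()
    prob = np.zeros((h, w), dtype=np.float64)
    for wk, mask, (dx, dy) in zip(weights, past_masks, movements):
        if abs(dx) >= w or abs(dy) >= h:
            continue
        shifted = np.zeros_like(prob)                # mask shifted by its movement amount
        src = mask[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
        shifted[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)] = src
        prob += wk * shifted                         # accumulation as in equation (11)
    corrected = (prob >= threshold).astype(np.uint8)  # thresholding as in equation (14)
    return prob, corrected
```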
  • The boundary region correction unit 1564 performs contour correction processing along the contour of the foreground image region indicated by the foreground area information M(x, y | t0).
  • FIG. 15 is a flowchart showing an example of the operation of the foreground area correction process according to the present embodiment.
  • Step S207-01 The movement amount calculation unit 1561 reads from the buffer unit 158 the information from time t0 to time t0-K (the video information (t0-k), video information (t0), foreground area information (t0-k), foreground area information (t0), ROI information (t0-k), and ROI information (t0)).
  • The foreground region probability map generation unit 1562 reads the foreground area information (t0-k) from time t0 to time t0-K from the buffer unit 158. Thereafter, the process proceeds to step S207-02.
  • Step S207-02 The movement amount calculation unit 1561 calculates the movement amount (t0, t0-k) of the foreground image area (t0-k) based on the information read out in step S207-01. Thereafter, the process proceeds to step S207-03.
  • Step S207-03 The movement amount calculation unit 1561 determines whether the movement amounts (t0, t0-k) from time t0-1 to time t0-K have all been calculated (whether there is an unprocessed buffer). If it is determined that the movement amounts (t0, t0-k) from time t0-1 to time t0-K have been calculated (Yes), the process proceeds to step S207-04. On the other hand, if it is determined that there is a time t0-k for which the movement amount (t0, t0-k) has not yet been calculated (No), the value of k is changed and the process returns to step S207-02.
  • Step S207-04 The foreground region probability map generation unit 1562 calculates the foreground region probability map P(t0) using the movement amounts calculated in step S207-02 and the foreground area information (t0-k) read out in step S207-01. Thereafter, the process proceeds to step S207-05.
  • Step S207-05 The foreground region probability map generation unit 1562 applies equation (14) to the foreground region probability map P(t0) calculated in step S207-04 to calculate the foreground area information M(x, y | t0), and extracts as the foreground image area the region where M(x, y | t0) = 1. Thereafter, the process proceeds to step S208-01.
  • Step S208-01 The boundary region correction unit 1564 performs contour correction processing along the contour of the foreground image area indicated by the foreground area information M(x, y | t0).
  • the mask information generation unit 157 generates a mask representing the corrected foreground image region (t) indicated by the information input from the foreground region correction unit 156. Note that the mask information generation unit 157 may generate a mask representing a background area that is an area other than the foreground image area. The mask information generation unit 157 outputs the generated mask information (t).
  • After the foreground area extraction processing for the video at time (t0) is completed, the buffer unit 158 discards the various data at times (t) that satisfy the following condition A (the video information (t), the information indicating the foreground image area (t), the depth information (t), the ROI information (t), and the like), and stores the various data at time (t0) (the video information (t0), the information indicating the foreground image area (t0), the depth information (t0), the ROI information (t0), and the like).
  • FIG. 16 is a flowchart illustrating an example of the operation of the buffer unit 158 according to the present embodiment.
  • Step S158-01 The buffer unit 158 searches for an empty buffer for storing the various information at time (t0). Thereafter, the process proceeds to step S158-02.
  • Step S158-02 The buffer unit 158 determines whether there is an empty buffer as a result of the search in step S158-01. If it is determined that there is an empty buffer (Yes), the process proceeds to step S158-05. On the other hand, if it is determined that there is no empty buffer (No), the process proceeds to step S158-03.
  • Step S158-03 The buffer unit 158 selects a buffer (referred to as a target buffer) in which information satisfying the condition A is stored. Thereafter, the process proceeds to step S158-04.
  • Step S158-04 The buffer unit 158 discards the various data stored in the target buffer selected in Step S158-03, thereby emptying the target buffer (clearing the storage area). Thereafter, the process proceeds to step S158-05.
  • Step S158-05 The buffer unit 158 stores the various data at time (t0) in the target buffer and ends the buffer update control.
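The buffer update control of steps S158-01 to S158-05 can be sketched as follows. The concrete form of condition A is not given in this excerpt, so the sketch assumes, purely for illustration, that the target buffer is the one holding the oldest entry; the dictionary keys are likewise illustrative.

    def update_buffer(buffers, new_entry):
        # buffers: fixed-size list whose elements are None (empty) or dicts like
        # {'t': time, 'video': ..., 'foreground': ..., 'depth': ..., 'roi': ...}
        # Step S158-01/S158-02: search for an empty buffer.
        for i, entry in enumerate(buffers):
            if entry is None:
                buffers[i] = new_entry            # Step S158-05: store the data for time t0
                return
        # Step S158-03: no empty buffer; select the target buffer (assumed condition A: oldest entry).
        target = min(range(len(buffers)), key=lambda i: buffers[i]['t'])
        buffers[target] = None                    # Step S158-04: discard the data (clear the storage area)
        buffers[target] = new_entry               # Step S158-05: store the data for time t0

    # Example usage with a buffer of four slots.
    bufs = [None] * 4
    update_buffer(bufs, {'t': 10, 'video': 'I(10)', 'foreground': 'F(10)', 'depth': 'D(10)', 'roi': 'R(10)'})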
  • FIG. 17 is a flowchart illustrating an example of the operation of the object extraction unit 15 according to the present embodiment.
  • Step S201 The object extraction unit 15 reads various data (the video information (t), the depth information (t), and the ROI information (t)). Specifically, the video information (t) is input to the filter unit 151a and the buffer unit 158, the depth information (t) is input to the filter unit 151b, and the ROI information (t) is input to the distribution model estimation unit 152 and the buffer unit 158. Thereafter, the process proceeds to step S202.
  • Step S202 The object extraction unit 15 determines whether an extraction target image exists by checking whether the extraction target flag included in the ROI information (t) indicates presence or absence. If it is determined that an extraction target image exists (Yes), the process proceeds to step S203. On the other hand, if it is determined that no extraction target image exists (No), the process proceeds to step S210.
  • Step S203 The filter unit 151a removes noise from the video information (t) input in step S201 and performs a smoothing process.
  • the filter unit 151b removes noise from the depth information (t) input in step S201 and performs a smoothing process. Thereafter, the process proceeds to step S204.
  • Step S204 The distribution model estimation unit 152 estimates the ROI depth distribution information (t) within the ROI based on the depth information (t) smoothed in step S203 and the ROI information (t) input in step S201. Thereafter, the process proceeds to step S205.
  • Step S205 The clustering unit 153 divides the processed image (t) into superpixels by performing clustering on the video information (t) smoothed in step S203.
  • the clustering unit 153 performs labeling for each super pixel to generate region information (t).
  • the process proceeds to step S206.
  • Step S206 The feature amount calculation unit 154 calculates a feature value for each region (label) based on the region information (t) generated in step S205, the video information (t) smoothed in step S203, the smoothed depth information (t), and the ROI information (t). Thereafter, the process proceeds to step S207.
  • Step S209 The mask information generation unit 157 generates mask information indicating a mask representing the foreground image region (t0) corrected in step S208. Thereafter, the process proceeds to step S210.
  • Step S210 The mask information generation unit 157 stores the mask information generated in step S209 in the mask information storage unit 16.
  • FIG. 18 is a schematic diagram illustrating an example of depth information (t) according to the present embodiment.
  • images D1, D2, and D3 indicate depth information (t1-2), depth information (t1-1), and depth information (t1), respectively.
  • portions having the same color indicate the same depth.
  • FIG. 18 shows that a brighter (lighter) image portion has a smaller depth (is positioned closer to the front) than a darker image portion.
  • The depth information (t) in FIG. 18 is obtained, for video shot with a stereo camera, by acquiring the amount of parallax shift of the right-eye camera with respect to the left-eye camera as the reference.
  • The left part of each image surrounded by the chain line (the parts labeled U1 to U3) is an indeterminate area in which the parallax shift amount cannot be determined, because the video seen from the left-eye camera differs there from the video seen from the right-eye camera.
  • FIG. 19 is a schematic diagram illustrating an example of the foreground area probability map P (t0) according to the present embodiment.
  • images P1, P2, and P3 respectively show a foreground area probability map P (t1-2), a foreground area probability map P (t1-1), and a foreground area probability map P (t1).
  • This foreground area probability map P (t) is calculated based on the depth information (t) in FIG.
  • FIG. 20 is an explanatory diagram of an example of the foreground image area (t) according to the present embodiment.
  • Images M1a, M2a, and M3a are the foreground image area (t1-2), the foreground image area (t1-1), and the foreground image area (t1), respectively, after correction by the foreground area correction unit 156 according to the present embodiment.
  • Images M1b, M2b, and M3b are a foreground image area (t1-2), a foreground image area (t1-1), and a foreground image area (t1), respectively, according to the prior art.
  • In the images M1a, M2a, and M3a, the shape of the foreground image area is smoothed (stabilized), and, compared with the images M1b, M2b, and M3b, the occurrence of missing portions and erroneously extracted portions in the foreground image area is reduced. Thereby, in this embodiment, even when the images M1a, M2a, and M3a are reproduced over time, the occurrence of flicker and flickering due to the discontinuity of the extracted shape can be suppressed.
  • As described above, the foreground area correction unit 156 corrects the foreground image area (t0) indicated by the foreground area information (t0) at time t0 using the foreground area information (t0-k) and the video information (t0-k) at time t0-k.
  • Thereby, the video processing apparatus 1 can suppress the occurrence of flicker and flickering due to the discontinuity of the extracted shape, and can reliably extract the image of the object.
  • The movement amount calculation unit 1561 calculates the amount of movement of the foreground image region (t0-k) from time t0-k to time t0 based on the video information (t0) and the ROI information (t0) at time t0, and on the video information (t0-k), the foreground area information (t0-k), and the ROI information (t0-k) at time t0-k. The foreground region probability map generation unit 1562 calculates, based on the movement amount calculated by the movement amount calculation unit 1561 and the foreground image area (t0-k), a foreground area probability map P(t0) indicating the probability that each coordinate in the video at time t0 belongs to the foreground image area. The boundary region correction unit 1564 extracts the foreground region information (t0) at time t0 based on the foreground region probability map P(t0) calculated by the foreground region probability map generation unit 1562, and corrects the foreground image area (t0) indicated by the extracted foreground region information (t0). Thereby, the video processing apparatus 1 can suppress the occurrence of flicker and flickering due to the discontinuity of the extracted shape, and can reliably extract the image of the object.
  • Because the filter unit 151a uses the bilateral filter of expression (1), each image of the video information (t) can be separated into a skeleton component, in which the edge components are preserved, and a texture component containing noise and fine patterns.
  • the filter unit 151b removes the noise of the depth information (t) and smoothes it.
  • the estimation accuracy of the mixed model of the depth distribution model in the distribution model estimation unit 152 can be improved.
  • The foreground area extraction unit 155 can accurately determine the threshold value relating to the depth used for the integration process of the basic foreground area and each area in the ROI.
  • the clustering unit 153 performs clustering on the skeleton component image after the edge holding smoothing filter.
  • Thereby, the video processing apparatus 1 can obtain a superpixel group that is stable (robust) against noise and texture. Note that a superpixel represents a meaningful region having a certain size.
  • the video processing apparatus 1 can obtain the depth distribution model of the foreground region with higher accuracy by obtaining the depth distribution model in the ROI using the ROI information by the mixed model.
  • the foreground area extraction unit 155 can accurately determine the threshold value relating to the depth used for the integration process of the basic foreground area and each area in the ROI.
  • The foreground region correction unit 156 corrects missing portions or erroneously extracted portions of the extraction target image region at time (t0), so that flicker and flickering caused by the discontinuity of the extracted image shape (contour) in the time direction can be suppressed.
  • The video processing apparatus 1 removes noise in the video information (t) and performs smoothing processing. Thereby, the video processing apparatus 1 can suppress the generation of superpixel groups with small areas.
  • the movement amount calculation unit 1561 uses a motion search such as a spiral search, so that the amount of calculation required to obtain the movement amount (t0, t0-k) can be reduced. Further, the movement amount calculation unit 1561 may use only the white portion (foreground region) on the mask Mk when calculating the similarity (see FIG. 13). Thereby, in the video processing apparatus 1, it is possible to prevent a search error of the movement amount and to omit unnecessary calculation, compared to a case where template matching is performed including the background region.
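A minimal sketch of a movement-amount search combining a spiral (center-outward) candidate ordering with a SAD evaluated only on the white (foreground) pixels of a mask is shown below; the search radius, the spiral ordering, and the function names are assumptions made for illustration and not the patent's exact procedure.

    import numpy as np

    def masked_sad(template, candidate, mask):
        # SAD evaluated only on foreground (mask == 1) pixels of same-sized grayscale patches.
        diff = np.abs(template.astype(np.int32) - candidate.astype(np.int32))
        return int(diff[mask == 1].sum())

    def spiral_offsets(radius):
        # Candidate displacements ordered roughly from the center outward.
        offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                            for dx in range(-radius, radius + 1)]
        return sorted(offsets, key=lambda o: o[0] * o[0] + o[1] * o[1])

    def estimate_movement(prev_frame, cur_frame, template_rect, mask, radius=8):
        # Estimate the movement amount (dy, dx) of the foreground image area by masked SAD.
        y, x, h, w = template_rect                 # top-left corner and size of the template
        template = prev_frame[y:y + h, x:x + w]
        best_score, best_offset = None, (0, 0)
        for dy, dx in spiral_offsets(radius):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > cur_frame.shape[0] or xx + w > cur_frame.shape[1]:
                continue
            score = masked_sad(template, cur_frame[yy:yy + h, xx:xx + w], mask)
            if best_score is None or score < best_score:
                best_score, best_offset = score, (dy, dx)
        return best_offset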
  • the present invention is not limited to this, and other selection tools may be used.
  • a selection tool shown in FIGS. 21 and 22 may be used.
  • FIG. 21 is a schematic diagram illustrating another example of the user-specified ROI information (ts) detection process according to the present embodiment.
  • FIG. 21 is a diagram when an ellipse (circle) selection tool is used, and shows that the user has surrounded the target image to be extracted with the ellipse selection tool.
  • In this figure, the position information of the frame (the circumscribed circle of the object O; here, the circumscribed circle includes an ellipse) is the user-specified ROI information (ts).
  • the user-specified ROI information (ts) is recorded as data in Table 2 below, for example.
  • In Table 2, the user-specified ROI information (ts) is represented by the time ts (or a frame number), a presence/absence flag (extraction target flag) indicating whether or not an extraction target image exists in the circumscribed circle, the center position (x0, y0) of the circumscribed circle (the coordinates of the point P2 in FIG. 21), the minor-axis direction of the circumscribed circle (the direction of the vector labeled D21 in FIG. 21) and the short-side length (the length labeled W2 in FIG. 21), and the major-axis direction of the circumscribed circle (the direction of the vector labeled D22 in FIG. 21) and the long-side length (the length labeled L2 in FIG. 21).
  • FIG. 22 is a schematic diagram showing another example of detection processing of user-specified ROI information (ts) according to the present embodiment.
  • FIG. 22 is a diagram when a freehand selection tool is used, and represents that the user has surrounded the target image to be extracted.
  • the position information of the frame (the circumscribed shape of the object O) denoted by reference numeral r3 is user-specified ROI information (ts).
  • the user-specified ROI information (ts) is recorded, for example, as data in Table 3 below.
  • In Table 3, the user-specified ROI information (ts) is represented by the time ts (or a frame number), a presence/absence flag (extraction target flag) indicating whether or not an extraction target image exists within the circumscribed shape, the starting-point position (x0, y0) of the circumscribed shape (the position of the point at which the user started to input the circumscribed shape; the coordinates of the point P3 in FIG. 22), and a chain code that represents the points on the edge of the circumscribed shape traced clockwise (or counterclockwise) from the starting point.
  • The chain code encodes the position of a point B adjacent to a certain point A as a numerical value, then the position of a point C (a point other than the point A) adjacent to the point B as a numerical value, and so on; a line is represented by the resulting sequence of numerical values.
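A chain code of this kind can be illustrated with the common 8-neighbour (Freeman) coding; the direction numbering below is one conventional choice and is an assumption, since the numbering used in the embodiment is not specified here.

    # 8-neighbour chain code: each step to the next contour point is encoded as a direction 0-7.
    DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
                  (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

    def chain_code(contour_points):
        # contour_points: list of (row, col) points traced clockwise from the starting point.
        codes = []
        for (r0, c0), (r1, c1) in zip(contour_points, contour_points[1:]):
            codes.append(DIRECTIONS[(r1 - r0, c1 - c0)])
        return codes

    # Example: a starting point followed by three adjacent edge points.
    print(chain_code([(5, 5), (5, 6), (4, 7), (4, 8)]))  # -> [0, 1, 0]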
  • For each processed image, the shape of the ROI surrounding the image area to be extracted, obtained from the ROI information (t) of the ROI acquisition unit 13, may be superimposed on the processed image (t) to notify the user that the image area to be extracted has been selected. Further, the frame number and the time information of the currently displayed processed image (t) may be presented to the user.
  • In the above embodiment, the case where the foreground area correction unit 156 uses the foreground image area (t0-k) at the earlier time t0-k to correct the foreground image area (t0) at time t0 has been described.
  • the invention is not limited to this.
  • For example, only the foreground image region (t0+k) at a time t0+k after the time t0 may be used, or the foreground image regions (t0±k) at times before and after the time t0 may be used. Further, k may be 1.
  • The depth information (t) need not be one value per pixel of the video information (t); one value may be provided for a plurality of adjacent pixels. That is, the resolution represented by the depth information (t) may differ from the resolution of the video information (t). The depth information (t) is, for example, information calculated by stereo matching, in which a subject is imaged by a plurality of adjacent imaging devices and the depth is calculated by detecting the displacement of the subject's position among the plurality of captured images. However, the depth information (t) is not limited to information calculated by a passive stereo method such as stereo matching; it may be information acquired by an active three-dimensional measuring instrument (range finder) that uses light, such as the TOF (Time-Of-Flight) method.
  • In the above embodiment, the case where the video display unit 14 is a touch-panel display has been described. However, the present invention is not limited to this; other input means may be used. For example, the video processing apparatus 1 may display the video and receive the user's designation through an input unit (for example, a pointing device such as a mouse).
  • the ROI acquisition unit 13 may extract the ROI information (t) using, for example, any of the following methods (1) to (5).
  • In the above embodiment, the ROI acquisition unit 13 extracts the ROI information (t) using feature points. However, the present invention is not limited to this; for example, the ROI information (t) may be extracted by tracking the distribution of the color information of the user-specified area with a particle filter or Mean-shift.
  • the ROI acquisition unit 13 may extract the ROI information (t) using a known motion search.
  • the filter units 151a and 151b use bilateral filters.
  • However, the filter units 151a and 151b may use other filters, for example, a TV (Total Variation) filter, a k-nearest-neighbor averaging filter, or a median filter; alternatively, a low-pass filter may be used only for flat parts where the edge strength is small.
  • Further, the filter units 151a and 151b may apply the edge-preserving smoothing filter recursively.
  • the video processing device 1 may perform the edge holding smoothing filter processing on the video information (t) and the depth information (t) before inputting them to the object extraction unit 15.
  • the distribution model estimation unit 152 may set the number of classes Kc used for the mixed model to a predetermined value, or may determine a value as in the following example.
  • the distribution model estimation unit 152 sets a predetermined class number Kc ′ as the class number Kc, and performs clustering by the K-means method.
  • Next, the distribution model estimation unit 152 performs a process of merging a class Ci and a class Cj into a new class Ck'.
  • the distribution model estimation unit 152 determines the number of classes Kc ( ⁇ Kc ′) by repeating this process until the number of classes converges to a constant value.
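One way to realize such a class-number determination is sketched below: K-means clustering with an initial class number Kc', followed by repeated merging of class centers that lie close together until the number of classes converges. The merge criterion (a fixed distance threshold) and the use of one-dimensional depth values are assumptions made for illustration.

    import numpy as np

    def kmeans_1d(values, k, iters=20, seed=0):
        # Simple 1-D K-means on depth values; returns the sorted class centers.
        rng = np.random.default_rng(seed)
        centers = rng.choice(values, size=k, replace=False).astype(float)
        for _ in range(iters):
            labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = values[labels == c].mean()
        return np.sort(centers)

    def determine_class_count(depth_values, kc_prime=8, merge_threshold=5.0):
        # Start from Kc' classes, then merge classes Ci and Cj whose centers are closer than the threshold.
        centers = list(kmeans_1d(np.asarray(depth_values, dtype=float), kc_prime))
        merged = True
        while merged and len(centers) > 1:
            merged = False
            for i in range(len(centers) - 1):
                if centers[i + 1] - centers[i] < merge_threshold:
                    centers[i] = (centers[i] + centers[i + 1]) / 2.0   # merge into a new class Ck'
                    del centers[i + 1]
                    merged = True
                    break
        return len(centers), centers   # Kc (<= Kc') and the final class centers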
  • the method used by the distribution model estimation unit 152 to estimate the depth distribution model is not limited to a parametric estimation method such as a mixed model, and may be a non-parametric estimation method such as a Mean-shift method.
  • Clustering in the image space is a method of performing region division based on the similarity between pixels or pixel groups (regions) in the original image space, without mapping to a feature space.
  • the clustering unit 153 may perform clustering in the image space using the following method.
  • (A) Pixel merging method: for example, the clustering unit 153 represents the connection relationships between pixels as a weighted undirected graph and performs region integration, based on the strengths of the edges representing the connection relationships, so that the vertices form a global minimum spanning tree.
  • (B) Region growing method (also referred to as the region growth method; a minimal sketch of this method is shown after this list)
  • (C) Region division and integration method (also referred to as the Split & Merge method)
  • (D) A method combining any of (A), (B), and (C). Note that the clustering unit 153 performs labeling after the clustering processing in the image space and generates region information (label information) (t) indicating the labeling result.
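As an illustration of clustering in the image space, a minimal region-growing sketch corresponding to method (B) is shown below; the similarity criterion (absolute difference from the running region mean) and the threshold are assumptions, not the embodiment's actual criterion.

    import numpy as np
    from collections import deque

    def region_growing(image, seed, threshold=10.0):
        # Grow one region from a seed pixel, adding 4-neighbours whose values stay close to the region mean.
        h, w = image.shape
        region = np.zeros((h, w), dtype=bool)
        region[seed] = True
        total, count = float(image[seed]), 1
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not region[rr, cc]:
                    if abs(float(image[rr, cc]) - total / count) <= threshold:
                        region[rr, cc] = True
                        total += float(image[rr, cc])
                        count += 1
                        queue.append((rr, cc))
        return region   # boolean mask of the grown region (one label)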
  • In the above embodiment, the movement amount calculation unit 1561 calculates the similarity R SAD (equation (9); SAD, Sum of Absolute Differences) and determines the region having the smallest value as the estimated region. However, other similarity measures may be used. For example, the movement amount calculation unit 1561 may evaluate the difference between corresponding pixel values of the two images by the Euclidean distance (L2 distance, L2 norm) and determine, as the estimated region, the region having the smallest sum R SDD (the following equation (16)). Equation (16) indicates that the magnitude of each pixel-value difference is evaluated by the Euclidean distance (L2 distance, L2 norm) and the sum over i and j is taken.
  • NCC Normalized Cross-Correlation
  • The movement amount calculation unit 1561 determines, as the estimated region, the region for which the value of R NCC in the following equation (17) is closest to 1.
  • CCC Cross-Correlation Coefficient
  • the movement amount calculation unit 1561 determines an area where the value of R CCC in the following equation (18) is closest to 1 as an estimated area.
  • The symbols I-bar ("I" with a bar above it) and T-bar ("T" with a bar above it) in equation (18) represent the average vectors of the pixel values in the respective indicated regions.
  • The amount of computation required by the movement amount calculation unit 1561 increases in the order of equations (9), (16), (17), and (18).
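For reference, the four similarities can be written as small functions over image patches given as NumPy arrays; the definitions below follow the usual forms of SAD, a per-pixel Euclidean-distance sum, NCC, and the zero-mean cross-correlation coefficient, and are only assumed to correspond to equations (9) and (16) to (18). For the first two, the best match minimizes the value; for the last two, the best match is the value closest to 1.

    import numpy as np

    def r_sad(t, c):
        # Sum of absolute differences (equation (9) is assumed to be of this form).
        return float(np.abs(t.astype(float) - c.astype(float)).sum())

    def r_sdd(t, c):
        # Per-pixel Euclidean (L2) distance of the pixel-value vectors, summed over the patch.
        d = np.atleast_3d(t.astype(float) - c.astype(float))
        return float(np.sqrt((d ** 2).sum(axis=2)).sum())

    def r_ncc(t, c):
        # Normalized cross-correlation; 1 means a perfect match.
        t, c = t.astype(float).ravel(), c.astype(float).ravel()
        return float((t * c).sum() / (np.linalg.norm(t) * np.linalg.norm(c)))

    def r_ccc(t, c):
        # Cross-correlation coefficient: the means are subtracted before normalization.
        t, c = t.astype(float).ravel(), c.astype(float).ravel()
        t, c = t - t.mean(), c - c.mean()
        return float((t * c).sum() / (np.linalg.norm(t) * np.linalg.norm(c)))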
  • The movement amount calculation unit 1561 may calculate the movement amount using a hierarchical search method (also referred to as the multi-resolution or coarse-to-fine search method) instead of the spiral search.
  • The clustering unit 153 may perform clustering only on the image within the ROI, based on the ROI information (t) obtained from the ROI acquisition unit 13. Thereby, the clustering unit 153 can reduce the amount of computation. Alternatively, the clustering unit 153 may perform clustering on an area wider than the target image area, based on the ROI information. In that case, the clustering unit 153 can improve the accuracy of clustering compared with performing clustering only on the image within the ROI.
  • When determining region integration, the region integration unit 1534 may determine, for a region whose center of gravity lies within the ROI, whether or not a part of the region extends beyond the ROI boundary, using the region's circumscribed rectangle. Thereby, the video processing apparatus 1 can reduce erroneous extraction of the background area as the foreground area.
  • the area integration unit 1534 may determine the area integration using the adjacent relationship between the areas instead of using the feature amount of the basic foreground area. For example, the region integration unit 1534 may determine the region integration with the region adjacent to the region that has already been determined to be the foreground region, using the feature amount of the region that has already been determined to be the foreground region. As a result, the video processing apparatus 1 can extract the foreground region with higher accuracy.
  • a part of the video processing apparatus 1 in the above-described embodiment may be realized by a computer.
  • the program for realizing the control function may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read by a computer system and executed.
  • the “computer system” is a computer system built in the video processing apparatus 1 and includes an OS and hardware such as peripheral devices.
  • The “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or to a storage device such as a hard disk incorporated in a computer system.
  • Further, the “computer-readable recording medium” may include a medium that dynamically holds the program for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and a medium that holds the program for a certain period of time, such as a volatile memory inside a computer system serving as a server or a client in that case.
  • The program may be a program for realizing a part of the functions described above, or may be a program that realizes the functions described above in combination with a program already recorded in the computer system.
  • a part or all of the video processing device 1 in the above-described embodiment may be realized as an integrated circuit such as an LSI (Large Scale Integration).
  • Each functional block of the video processing apparatus 1 may be individually made into a processor, or a part or all of them may be integrated into a processor. Further, the method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. Further, in the case where an integrated circuit technology that replaces LSI appears due to progress in semiconductor technology, an integrated circuit based on the technology may be used.
  • the present invention is suitable for use in a video processing apparatus.

Abstract

Provided is a video processing device that extracts foreground region information representing a foreground image from video represented by video information, wherein a foreground region correction unit corrects the foreground image represented by the foreground region information at a first time using the foreground region information and the video information at a second time.

Description

Video processing apparatus, video processing method, and video processing program
The present invention relates to a video processing device, a video processing method, and a video processing program.
This application claims priority based on Japanese Patent Application No. 2010-231928 filed in Japan on October 14, 2010, the contents of which are incorporated herein by reference.
In recent years, terminals with an imaging function such as a digital video camera, a digital still camera, and a mobile phone are rapidly spreading. In addition, there is an apparatus that generates a new video by processing and processing video shot by these cameras. For example, a specific image area in a video can be extracted, and the extracted image area can be used as a part for video processing, or a video can be extracted for each individual image area, and the extracted video can be managed in the original video. It is known to use for search.
Specifically, the following methods (1) and (2) are known as methods for extracting a desired image region from a video.
(1) Foreground Image Region Extraction Method Based on Color Information As foreground image region extraction methods based on color information, for example, chroma key processing and techniques described in Non-Patent Documents 1, 2, and 3 are known.
The chroma key process is a process of extracting a desired object (foreground area) by capturing an object with a certain color (for example, blue) as a background and removing a background portion of the certain color from the captured image. With this process, the video is separated into a video in the foreground image area and a video in the background image area.
In Non-Patent Document 1, for a gray image, a user represents a marker representing a foreground area in a desired image area (foreground image area) and a marker representing a background area in another area (background image area) in advance. In addition, a technique for extracting the foreground image region by graph cuts based on the markers assigned to the foreground image region and the background image region is described. Patent Document 2 describes a technique for extracting a foreground image region by applying graph cuts to a color image.
Non-Patent Document 3 discloses a map (trimap) using three markers: a foreground image region, a background image region, and an unknown image region (an image region that has not been determined to belong to either the foreground image region or the background image region). In other words, a technique for extracting the foreground image area by estimating the mixture ratio α (mat) of the pixels in the foreground image area and the background image area in the unknown area is created.
(2) Foreground Image Region Extraction Method Based on Color Information and Depth Information As foreground image region extraction methods based on color information and depth information, for example, techniques described in Patent Documents 1 and 2 are known.
In Patent Document 1, a trimap is created based on an image (distance image) of depth information from a camera to a subject, and a mixture of pixels in a foreground image region and a background image region in an unknown region using color information. A technique for estimating the rate α and extracting the foreground image region is described.
Patent Document 2 describes a technique for extracting a foreground image area by roughly extracting a foreground image area from depth information from a camera to a subject, and then recursively repeating an area division and integration method based on color information. Yes.
JP 2010-16546 A; JP 2009-276294 A
However, the conventional chroma key processing has a drawback that, when the object (foreground area) contains the fixed background color or a similar color, that part is determined to be the background image area; that is, the object cannot be reliably extracted. In addition, the chroma key processing has a drawback that, when the background is not of a fixed color, portions of the background image area that are not of the fixed color are extracted as the object; that is, again the object cannot be reliably extracted.
In the prior art described in Non-Patent Documents 1 to 3 and Patent Documents 1 and 2, when the color distributions of the object and the background area are similar, or when the object and the background area have similar patterns (textures), the boundary of the region cannot be identified and missing portions occur in the object. In such cases, the background area may also be erroneously extracted as the object. That is, the conventional techniques have the drawback that the object cannot be reliably extracted. When the object has missing portions or erroneously extracted portions of the background image area, in the case of a moving image the extracted shape of the object becomes discontinuous in the time direction, so that flicker and flickering occur in the image of the object.
The present invention has been made in view of the above points, and provides a video processing apparatus, a video processing method, and a video processing program capable of reliably extracting an image of an object.
(1) The present invention has been made to solve the above problems. One aspect of the present invention is a video processing device that extracts foreground area information indicating a foreground image from video indicated by video information, the device comprising a foreground area correction unit that corrects the foreground image indicated by the foreground area information at a first time using the foreground area information and the video information at a second time.

(2) In another aspect of the present invention, in the above video processing device, the foreground area correction unit corrects the foreground image indicated by the foreground area information at the first time using the foreground area information and the video information at a plurality of second times.

(3) In another aspect of the present invention, in the above video processing device, the video indicated by the video information includes an image of a predetermined object, and the foreground area correction unit comprises: a movement amount calculation unit that calculates the amount by which the foreground image has moved between the second time and the first time, based on the video information and target image area information indicating the image of the object at the first time, and on the video information, the foreground area information, and the target image area information at the second time; a foreground area probability map generation unit that calculates, based on the movement amount calculated by the movement amount calculation unit and the foreground image, the probability that each portion of the video at the first time is the foreground image; and a correction unit that extracts the foreground area information at the first time based on the probability calculated by the foreground area probability map generation unit and corrects the foreground image indicated by the extracted foreground area information.

(4) Another aspect of the present invention is a video processing method in a video processing device that extracts foreground area information indicating a foreground image from video indicated by video information, the method comprising a foreground area correction step in which a foreground area correction unit corrects the foreground image indicated by the foreground area information at a first time using the foreground area information and the video information at a second time.

(5) In another aspect of the present invention, in the above video processing method, the video indicated by the video information includes an image of a predetermined object, and the foreground area correction step comprises: a movement amount calculation step in which a movement amount calculation unit calculates the amount by which the foreground image has moved between the second time and the first time, based on the video information and target image area information indicating the image of the object at the first time, and on the video information, the foreground area information, and the target image area information at the second time; a foreground area probability map generation step in which a foreground area probability map generation unit calculates, based on the movement amount calculated in the movement amount calculation step and the foreground image, the probability that each portion of the video at the first time is the foreground image; and a correction step in which a correction unit extracts the foreground area information at the first time based on the probability calculated in the foreground area probability map generation step and corrects the foreground image indicated by the extracted foreground area information.

(6) Another aspect of the present invention is a video processing program for causing a computer of a video processing device that extracts foreground area information indicating a foreground image from video indicated by video information to execute a foreground area correction procedure of correcting the foreground image indicated by the foreground area information at a first time using the foreground area information and the video information at a second time.

(7) In another aspect of the present invention, in the above video processing program, the video indicated by the video information includes an image of a predetermined object, and the foreground area correction procedure includes: a movement amount calculation procedure of calculating the amount by which the foreground image has moved between the second time and the first time, based on the video information and target image area information indicating the image of the object at the first time, and on the video information, the foreground area information, and the target image area information at the second time; a foreground area probability map generation procedure of calculating, based on the movement amount calculated in the movement amount calculation procedure and the foreground image, the probability that each portion of the video at the first time is the foreground image; and a correction procedure of extracting the foreground area information at the first time based on the probability calculated in the foreground area probability map generation procedure and correcting the foreground image indicated by the extracted foreground area information.
According to the present invention, the foreground image area or the background image area can be reliably extracted.
FIG. 1 is a block diagram showing the configuration of a video processing apparatus 1 according to an embodiment of the present invention.
FIG. 2 is a schematic diagram showing an example of the detection process of user-specified ROI information according to the present embodiment.
FIG. 3 is a flowchart showing an example of the operation of the video processing apparatus according to the present embodiment.
FIG. 4 is a schematic block diagram showing the configuration of the object extraction unit according to the present embodiment.
FIG. 5 is a schematic block diagram showing the configuration of the clustering unit according to the present embodiment.
FIG. 6 is a schematic diagram showing an example of a processing result of the clustering unit according to the present embodiment.
FIG. 7 is a flowchart showing an example of the operation of the feature amount calculation unit according to the present embodiment.
FIG. 8 is a schematic diagram showing an example of labeling (region information) according to the present embodiment.
FIG. 9 is a schematic diagram showing an example of representing the connection relationships between regions by an unweighted undirected graph and an adjacency matrix.
FIG. 10 is a schematic diagram showing an example of a method of obtaining the perimeter of a region and the circumscribed rectangle of a region according to the present embodiment.
FIG. 11 is a flowchart showing an example of the operation of the foreground region extraction unit according to the present embodiment.
FIG. 12 is a schematic block diagram showing the configuration of the foreground region correction unit according to the present embodiment.
FIG. 13 is an explanatory diagram explaining template matching according to the present embodiment.
FIG. 14 is an explanatory diagram explaining the spiral search according to the present embodiment.
FIG. 15 is a flowchart showing an example of the operation of the foreground region correction unit according to the present embodiment.
FIG. 16 is a flowchart showing an example of the operation of the buffer unit according to the present embodiment.
FIG. 17 is a flowchart showing an example of the operation of the object extraction unit according to the present embodiment.
FIG. 18 is a schematic diagram showing an example of depth information according to the present embodiment.
FIG. 19 is a schematic diagram showing an example of the foreground region probability map P according to the present embodiment.
FIG. 20 is an explanatory diagram of an example of the foreground image region according to the present embodiment.
FIG. 21 is a schematic diagram showing another example of the detection process of user-specified ROI information according to the present embodiment.
FIG. 22 is a schematic diagram showing another example of the detection process of user-specified ROI information according to the present embodiment.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
FIG. 1 is a block diagram showing a configuration of a video processing apparatus 1 according to an embodiment of the present invention. In this figure, a video processing apparatus 1 includes a video information acquisition unit 10, a depth information acquisition unit 11, a video information reproduction unit 12, a ROI (Region of Interest; target image region) acquisition unit 13, a video display unit 14, and an object extraction unit. 15 and a mask information recording unit 16.
The video information acquisition unit 10 acquires video information (t). Here, the video information (t) is video information of a moving image and is a function of time t (elapsed time from the start time of the moving image). However, the video information of the present invention is not limited to this, and may be video information of a plurality of still images. For example, the video information (t) may be a moving image or a still image including images that are continuous or adjacent in time with the imaging device fixed, or images captured from consecutive or adjacent positions at the same time. (In the latter case, video information is a function of position). The video information acquisition unit 10 may acquire the video information (t) from the imaging device, or may acquire the video information (t) recorded in advance in a recording unit or an external recording device. .
The video information acquisition unit 10 outputs the acquired video information (t) to the video information reproduction unit 12, the ROI acquisition unit 13, and the object extraction unit 15.
The depth information acquisition unit 11 acquires depth information (t) of the video information (t) acquired by the video information acquisition unit 10. Here, the depth information (t) is information representing the distance from the imaging device to the imaging object for each pixel of the video information (t).
The depth information acquisition unit 11 outputs the acquired depth information (t) to the object extraction unit 15.
The video information reproduction unit 12 generates a video signal for controlling the output of each pixel at each time t of the video display unit 14 based on the video information (t) input from the video information acquisition unit 10. The video information reproduction unit 12 displays the video on the video display unit 14 by outputting the generated video signal to the video display unit 14. That is, the video information reproduction unit 12 reproduces the video of the video information (t) and causes the video display unit 14 to display the reproduced video.
Here, the video information reproducing unit 12 superimposes the image of the object extracted by the object extracting unit 15 on the video of the video information (t), based on the mask information (t) recorded by the mask information recording unit 16, and displays it. That is, the mask information (t) is associated with the video information (t) at time t. When the mask information recording unit 16 does not record mask information (t), the video of the video information (t) is reproduced as it is.
The video display unit 14 is a touch panel type display. The video display unit 14 displays the video of the video information (t) by controlling the output based on the video signal input from the video information reproducing unit 12. When the user touches the display, the video display unit 14 converts information indicating the touched position into information indicating the position of the video information (ts) in the image (ts) at a certain time ts.
When the user designates an ROI in the image (ts) displayed on the display by touching the display, the video display unit 14 detects the position information of that ROI (referred to as user-specified ROI information (ts)), that is, information indicating the position and shape (circumscribed shape) of the ROI. Details of the detection process of the user-specified ROI information (ts) in the video display unit 14 will be described later.
The video display unit 14 outputs the detected user-specified ROI information (ts) to the ROI acquisition unit 13.
Based on the image within the range of the user-specified ROI information (ts) input from the video display unit 14, the ROI acquisition unit 13 detects, from the image of the video information (t) at each time t other than time ts (the image of each frame; hereinafter referred to as the processed image (t)), an image that matches or is similar to the image within the user-specified ROI information (ts). The ROI acquisition unit 13 then extracts information indicating the position and shape of the matching or similar image as the ROI information (t).
Specifically, the ROI acquisition unit 13 calculates and records feature points (referred to as ROI feature points (ts)) from the image within the range of the user-specified ROI information (ts). A feature point is a characteristic point in an image, for example, a point extracted as a part of an edge or a vertex of the subject based on changes in color or luminance between pixels; however, the extraction method is not limited to this. The ROI acquisition unit 13 also calculates the image feature points (t) of the processed image (t) at each time t.
The ROI acquisition unit 13 performs matching between the image feature points (t) and the ROI feature points (ts). Specifically, the ROI acquisition unit 13 sequentially multiplies the ROI feature points (ts) by a transformation matrix to move (including rotate) and enlarge or reduce them within the image, and calculates the number of matching feature points (referred to as the feature point count). When the ROI acquisition unit 13 determines that the feature point count is equal to or greater than a predetermined threshold, it records the transformation matrix at that time. The ROI acquisition unit 13 takes, as the ROI information (t), the position information obtained by multiplying the user-specified ROI information (ts) by the transformation matrix. That is, the ROI acquisition unit 13 determines which part of the image (t) the image within the range of the user-specified ROI information (ts) matches, and, when a match is found, takes the position information corresponding to the user-specified ROI information (ts) as the ROI information (t). The ROI acquisition unit 13 outputs the extracted ROI information (t) (including the user-specified ROI information (ts)) to the video display unit 14 and the object extraction unit 15. The ROI acquisition unit 13 also stores the extracted ROI information (t) in the ROI information storage unit 1583.
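The matching between the image feature points (t) and the ROI feature points (ts) described above can be illustrated by the following sketch, which applies candidate transformation matrices to the ROI feature points and counts how many land near an image feature point; the restriction to similarity transforms, the candidate parameter grid, and the match tolerance are assumptions made only for this sketch.

    import numpy as np

    def count_matches(roi_pts, img_pts, transform, tol=2.0):
        # Count ROI feature points that land within tol pixels of some image feature point after the transform.
        mapped = roi_pts @ transform[:2, :2].T + transform[:2, 2]
        return sum(1 for p in mapped if np.min(np.linalg.norm(img_pts - p, axis=1)) <= tol)

    def find_roi_transform(roi_pts, img_pts, min_matches):
        # Try candidate similarity transforms; return the first whose feature-point count reaches the threshold.
        roi_pts, img_pts = np.asarray(roi_pts, float), np.asarray(img_pts, float)
        for scale in (0.9, 1.0, 1.1):
            for angle in np.deg2rad((-10.0, 0.0, 10.0)):
                c, s = np.cos(angle), np.sin(angle)
                for tx in range(-16, 17, 4):
                    for ty in range(-16, 17, 4):
                        T = np.array([[scale * c, -scale * s, tx],
                                      [scale * s,  scale * c, ty],
                                      [0.0,        0.0,       1.0]])
                        if count_matches(roi_pts, img_pts, T) >= min_matches:
                            return T
        return None   # no candidate reached the predetermined threshold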
The object extraction unit 15 receives video information (t) from the video information acquisition unit 10, receives depth information (t) from the depth information acquisition unit 11, and receives ROI information (t) from the ROI acquisition unit 13. . The object extraction unit 15 generates mask information (t) at each time t using the input video information (t), depth information (t), and ROI information (t). Details of processing performed by the object extraction unit 15 will be described later.
The object extraction unit 15 records the extracted mask information (t) in the mask information recording unit 16.
<Detection processing of user-specified ROI information (ts)>
The details of the detection process of the user-specified ROI information (ts) performed by the video display unit 14 will be described below.
FIG. 2 is a schematic diagram illustrating an example of the detection process for user-specified ROI information (ts) according to the present embodiment. In FIG. 2, the square labeled A is the touch-panel display (video display unit 14). The square labeled B is the image (ts) at the time ts indicated by the time information (in FIG. 2, ts = 0.133 seconds). The object labeled O represents the image of the target object (a person in FIG. 2) that the user wants to extract. The object labeled U represents the user's hand.
FIG. 2 is a diagram for the case where a rectangular (quadrangular) selection tool is used, and shows that the user has surrounded the target image to be extracted with the rectangular selection tool. In this figure, the position information of the frame labeled r1 (the circumscribed rectangle of the object O) is the user-specified ROI information (ts). Specifically, in the case of FIG. 2, the user-specified ROI information (ts) is recorded, for example, as the data in Table 1 below.
[Table 1]
In Table 1, the user-specified ROI information (ts) is represented by the time ts (or a frame number assigned to the video frame), a presence/absence flag (referred to as the extraction target flag) indicating whether or not an extraction target image (image of the object) exists in the circumscribed rectangle, the starting-point position (x0, y0) of the circumscribed rectangle (the coordinates of the point P1 in FIG. 2), the horizontal width of the circumscribed rectangle (the length labeled W1 in FIG. 2), and the vertical width of the circumscribed rectangle (the length labeled L1 in FIG. 2).
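The record in Table 1 can be represented, for example, by a small data structure such as the following; the field names and the numeric values other than ts = 0.133 seconds are illustrative assumptions, not identifiers used in the embodiment.

    from dataclasses import dataclass

    @dataclass
    class UserSpecifiedROI:
        time_s: float      # time ts (a frame number could be stored instead)
        has_target: bool   # extraction target flag (presence/absence of the target image)
        x0: int            # x coordinate of the starting point of the circumscribed rectangle (point P1)
        y0: int            # y coordinate of the starting point of the circumscribed rectangle (point P1)
        width: int         # horizontal width of the circumscribed rectangle (W1)
        height: int        # vertical width of the circumscribed rectangle (L1)

    # Hypothetical example corresponding to FIG. 2 (coordinates are made up for illustration).
    roi_ts = UserSpecifiedROI(time_s=0.133, has_target=True, x0=120, y0=80, width=200, height=340)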
<Operation of Video Processing Device 1>
Hereinafter, the operation of the video processing apparatus 1 will be described.
FIG. 3 is a flowchart showing an example of the operation of the video processing apparatus 1 according to the present embodiment.
(Step S11) The video information acquisition unit 10 acquires the video information (t) and outputs the acquired video information (t) to the video information reproduction unit 12, the ROI acquisition unit 13, and the object extraction unit 15. The depth information acquisition unit 11 acquires the depth information (t) and outputs the acquired depth information (t) to the object extraction unit 15. Thereafter, the process proceeds to step S12.
(Step S12) The video information reproducing unit 12 reproduces the video of the video information (t) input in step S11, and causes the video display unit 14 to display the reproduced video. Thereafter, the process proceeds to step S13.
(Step S13) The user pauses the reproduction of the video reproduced in step S12 at a certain time ts and designates the ROI. The video display unit 14 detects the user-specified ROI information (ts) for the ROI designated by the user and outputs it to the ROI acquisition unit 13. Thereafter, the process proceeds to step S14.
(Step S14) The ROI acquisition unit 13 extracts the ROI information (t) at each time t based on the user-specified ROI information (ts) detected in step S13. The ROI acquisition unit 13 outputs the extracted ROI information (t) to the video display unit 14 and the object extraction unit 15. The video display unit 14 displays the circumscribed shape indicated by the ROI information (t) (the circumscribed rectangle in the case of Table 1, or the circumscribed circle in the case of Table 2) superimposed on the video of the video information (t) at that position. Thereafter, the process proceeds to step S2.
(Step S2) The object extraction unit 15 performs object extraction using the video information (t) and depth information (t) acquired in step S11, and the ROI information (t) extracted in step S14, and mask information (T) is generated. The object extraction unit 15 records the generated mask information (t) in the mask information recording unit 16. Thereafter, the process proceeds to step S15.
(Step S15) Based on the mask information (t) recorded by the mask information recording unit 16, the video information reproducing unit 12 superimposes the object image extracted by the object extracting unit 15 on the video of the video information (t). To display.
In steps S14, S2, and S15, the video processing device 1 may perform the processing on the video information (t) for all input times t, or only on the video information (t) in a range specified by the user (t1 ≤ t ≤ t2).
In the above operation, the video processing device 1 detects the user-specified ROI information (ts) for the ROI designated by the user, so the ROI can be extracted more reliably than when the ROI is extracted automatically. In addition, since the video display unit 14 displays the circumscribed shape indicated by the ROI information (t) superimposed on the video of the video information (t), the user can confirm that the desired ROI has been detected.
<Processing Performed by the Object Extraction Unit 15>
Hereinafter, the processing performed by the object extraction unit 15 will be described in detail.
FIG. 4 is a schematic block diagram showing the configuration of the object extraction unit 15 according to the present embodiment. In this figure, the object extraction unit 15 includes filter units 151a and 151b, a distribution model estimation unit 152, a clustering unit 153, a feature amount calculation unit 154, a foreground region extraction unit 155, a foreground region correction unit 156, a mask information generation unit 157, and a buffer unit 158. The buffer unit 158 includes a video information storage unit 1581, a foreground region information storage unit 1582, an ROI information storage unit 1583, an ROI depth distribution information storage unit 1584, and a corrected foreground region information storage unit 1585. The parallelograms denoted by reference signs I1, D1, R1, and M represent information, namely the video information (t), the depth information (t), the ROI information (t), and the mask information (t), respectively.
The filter unit 151a removes noise from the input video information (t) and performs smoothing. Specifically, the filter unit 151a applies, for each color component of the processed image (t) at each time t, a smoothing filter that preserves edges (contours) (hereinafter also referred to as an edge-preserving smoothing filter). Here, the filter unit 151a uses a bilateral filter, expressed by the following equation (1), as the smoothing filter.
g(x,y) = \frac{\sum_{m=-W}^{W}\sum_{n=-W}^{W} f(x+m,\,y+n)\, w(m,n)}{\sum_{m=-W}^{W}\sum_{n=-W}^{W} w(m,n)},\quad w(m,n)=\exp\!\left(-\frac{m^{2}+n^{2}}{2\sigma_{1}^{2}}\right)\exp\!\left(-\frac{\bigl(f(x+m,\,y+n)-f(x,y)\bigr)^{2}}{2\sigma_{2}^{2}}\right)    ... (1)
Here, f(x,y) is the input image, g(x,y) is the output image, W is the window size to which the filtering is applied, σ1 is a parameter (the standard deviation of a Gaussian distribution) that controls the weighting coefficient with respect to the inter-pixel distance, and σ2 is a parameter (the standard deviation of a Gaussian distribution) that controls the weighting coefficient with respect to the difference in pixel values.
The filter unit 151a outputs the video information (t) smoothed by the smoothing filter to the clustering unit 153 and the feature amount calculation unit 154.
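As an illustration only, this kind of edge-preserving smoothing can be approximated with OpenCV's bilateral filter; the parameter values below are arbitrary choices and are not taken from the embodiment.

```python
import cv2

def smooth_frame(frame_bgr, d=9, sigma_color=30.0, sigma_space=5.0):
    """Edge-preserving smoothing of one processed image (t).

    d           : diameter of the filter window (plays the role of W)
    sigma_color : weight on pixel-value differences (roughly sigma_2)
    sigma_space : weight on spatial distance       (roughly sigma_1)
    """
    # cv2.bilateralFilter processes all channels, which matches applying the
    # edge-preserving smoothing filter to every color component.
    return cv2.bilateralFilter(frame_bgr, d, sigma_color, sigma_space)
```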
The filter unit 151b removes noise from the input depth information (t) and performs smoothing. Specifically, the filter unit 151b applies an edge-preserving smoothing filter. In this way, the filter unit 151b removes the horizontal noise caused by occlusion.
The filter unit 151b outputs the depth information (t) smoothed by the smoothing filter to the feature amount calculation unit 154 and the distribution model estimation unit 152.
For each processed image unit, the distribution model estimation unit 152 estimates the parameters of a depth distribution model within the ROI based on the smoothed depth information (t) input from the filter unit 151b and the input ROI information (t). Specifically, the distribution model estimation unit 152 obtains the parameters of the distribution model by maximum likelihood estimation using the following equation (2), i.e. a GMM (Gaussian Mixture Model) expressed as a mixture of Gaussian distributions. Hereinafter, the obtained parameters are referred to as the ROI depth distribution information (t).
P(\mathbf{x}) = \sum_{i} w_{i}\, N(\mathbf{x}\mid\boldsymbol{\mu}_{i},\Sigma_{i}),\quad N(\mathbf{x}\mid\boldsymbol{\mu}_{i},\Sigma_{i}) = \frac{1}{(2\pi)^{D/2}\,|\Sigma_{i}|^{1/2}} \exp\!\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_{i})^{\mathsf{T}}\Sigma_{i}^{-1}(\mathbf{x}-\boldsymbol{\mu}_{i})\right)    ... (2)
In equation (2), P(x) represents the probability that the vector x appears, w_i represents the weighting coefficient of the Gaussian distribution of class i, μ_i represents the mean vector of class i, Σ_i represents the covariance matrix of class i, and D represents the number of dimensions of the vector x. N(x|μ_i, Σ_i) represents the Gaussian distribution of class i, expressed using the mean vector μ_i and the covariance matrix Σ_i. The distribution model estimation unit 152 obtains each parameter of the distribution model using the EM (Expectation-Maximization) algorithm. The distribution model estimation unit 152 then takes the depth distribution of the extraction target region within the ROI to be the distribution of the class whose weighting coefficient w_i is largest; that is, the depth distribution is determined on the assumption that the extraction target occupies the largest area within the ROI.
The distribution model estimation unit 152 outputs the estimated ROI depth distribution information (t) to the foreground region extraction unit 155 and the buffer unit 158.
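A minimal sketch of this estimation, assuming scikit-learn's GaussianMixture (which fits the GMM by EM) and a boolean ROI mask, could look as follows; the component count is an illustrative choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_roi_depth_distribution(depth, roi_mask, n_components=3):
    """Fit a GMM to the depth values inside the ROI and return the
    (mean, std, weight) of the component with the largest weight w_i.

    depth    : HxW array of smoothed depth values
    roi_mask : HxW boolean array, True inside the ROI
    """
    samples = depth[roi_mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components).fit(samples)  # EM fitting
    i = int(np.argmax(gmm.weights_))         # class with the largest weight
    mean = float(gmm.means_[i, 0])
    std = float(np.sqrt(gmm.covariances_[i].ravel()[0]))
    return mean, std, float(gmm.weights_[i])
```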
For each processed image (t), the clustering unit 153 performs clustering on the smoothed video information (t) input from the filter unit 151a, thereby dividing the processed image (t) into a plurality of regions (also called superpixels).
For example, the clustering unit 153 performs clustering in a feature space. Clustering in a feature space maps each pixel of the image space into a feature space (for example, color, edge, motion vector) and performs clustering in that space with a technique such as the K-means method, the Mean-Shift method, or a K-nearest-neighbor search (approximate K-nearest-neighbor search). In other words, the clustering unit 153 divides the processed image (t) into sets of pixels (regions; classes) whose feature values are similar (i.e. whose feature values fall within a predetermined range).
After the clustering in the feature space is finished, the clustering unit 153 replaces, for the pixels in each class, the pixel values in the original image space with a pixel value representative of that region (for example, the mean value). The clustering unit 153 assigns a label identifying each region to all pixels within that region and outputs the region information (t).
Details of the clustering unit 153 are described below.
FIG. 5 is a schematic block diagram showing the configuration of the clustering unit 153 according to the present embodiment. In this figure, the clustering unit 153 includes a feature amount detection unit 1531, a seed generation unit 1532, a region growing unit 1533, and a region integration unit 1534. The clustering unit 153 performs clustering using feature values based on edges and color, but the present invention is not limited to these feature values.
The smoothed video information (t) is input to the feature amount detection unit 1531. The feature amount detection unit 1531 calculates, for each processed image (t), the feature value of each pixel (x, y) (where x and y are coordinates representing the position of the pixel in the image). Specifically, the feature amount detection unit 1531 applies a differential operator to each color component (for example RGB: Red, Green, Blue) and calculates the gradient of each color component i in the x and y directions, G_i(x,y|t) = (ΔG_ix(t), ΔG_iy(t)) (i = 1, 2, 3). For example, i = 1, 2, 3 correspond to the R, G, and B components, respectively.
The feature amount detection unit 1531 calculates the edge strength E_2(x,y|t) by performing the calculation of the following equation (3) for each pixel at coordinates (x, y).
E_{2}(x,y\mid t) = \begin{cases} E_{1}(x,y\mid t) & \text{if } E_{1}(x,y\mid t) \ge TH\_E(x,y\mid t) \\ 0 & \text{if } E_{1}(x,y\mid t) < TH\_E(x,y\mid t) \end{cases}    ... (3)
where E_1(x,y|t) is the edge strength obtained from the gradients G_i(x,y|t) of the color components.
Here, TH_E(x,y|t) is a predetermined threshold for the coordinates (x, y) at time t. Equation (3) states that E_2(x,y|t) = 0 when E_1(x,y|t) is smaller than TH_E(x,y|t), and that E_2(x,y|t) = E_1(x,y|t) when E_1(x,y|t) is equal to or greater than TH_E(x,y|t). The feature amount detection unit 1531 may adjust the threshold TH_E(x,y|t) per pixel, per block, per region, or per image; this makes it possible to detect feature values suited to the characteristics of the image.
The feature amount detection unit 1531 outputs the calculated edge strength E_2(x,y|t) to the seed generation unit 1532.
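A rough sketch of the per-channel gradients and the thresholded edge strength follows; taking E_1 as the gradient magnitude summed over the color components is an assumption, since equation (3) is only partially reproduced above.

```python
import numpy as np

def edge_strength(frame_rgb, th_e=10.0):
    """Thresholded edge strength E_2 for one smoothed frame.

    frame_rgb : HxWx3 float array (smoothed video information (t))
    th_e      : stands in for TH_E; a single scalar here, although the
                embodiment allows per-pixel/block/region/image thresholds
    """
    e1_sq = np.zeros(frame_rgb.shape[:2])
    for i in range(3):                            # color components i = 1, 2, 3
        gy, gx = np.gradient(frame_rgb[..., i])   # ΔG_iy, ΔG_ix
        e1_sq += gx ** 2 + gy ** 2
    e1 = np.sqrt(e1_sq)                           # assumed form of E_1
    e2 = np.where(e1 >= th_e, e1, 0.0)            # equation (3): keep strong edges
    return e2
```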
The seed generation unit 1532 generates seed information for generating the superpixels, using the edge strength E_2(x,y|t) input from the feature amount detection unit 1531. Specifically, the seed generation unit 1532 calculates the seed information S(x,y|t) using the following equation (4).
S(x,y\mid t) = \begin{cases} 1 & \text{if } E_{2}(x,y\mid t) \text{ is a local minimum within the } W_{1}\times W_{2} \text{ window centered at } (x,y) \\ 0 & \text{otherwise} \end{cases}    ... (4)
That is, the seed generation unit 1532 sets the seed information S(x,y|t) to "1" when the edge strength E_2(x,y|t) is a local minimum within the window of size W_1 × W_2 centered at the coordinates (x, y), and sets S(x,y|t) to "0" otherwise. Here, W_1 is the window size in the x direction and W_2 is the window size in the y direction.
The seed generation unit 1532 outputs the generated seed information S(x,y|t) to the region growing unit 1533.
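A minimal sketch of the seed generation, assuming SciPy's minimum filter for the local-minimum test and illustrative window sizes, might be:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def generate_seeds(e2, w1=15, w2=15):
    """Seed information S(x,y|t): 1 where E_2 is a local minimum
    in the W1 x W2 window centered on the pixel, 0 elsewhere."""
    local_min = minimum_filter(e2, size=(w2, w1))   # size is (rows, cols)
    return (e2 == local_min).astype(np.uint8)
```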
The region growing unit 1533 applies a region growing method based on the seed information S(x,y|t) input from the seed generation unit 1532 to generate the superpixel group R_1(t). Specifically, the region growing unit 1533 expands a region by assigning to the same region as a pixel (x1, y1) any neighboring pixel (x, y) whose edge strength E_2(x,y|t) differs from the edge strength E_2(x1,y1|t) by no more than a predetermined value. The region growing unit 1533 starts this processing from pixels whose seed information S(x,y|t) is "1". In other words, the region growing unit 1533 gradually grows, from each pixel with seed information S(x,y|t) = "1", a region in which the feature values are nearly equal.
The region growing unit 1533 takes the expanded regions as the superpixel group R_1(t) and outputs information indicating the superpixel group R_1(t) to the region integration unit 1534.
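The region growing step could be sketched as a breadth-first flood fill from the seed pixels; the tolerance value and the 4-connected neighborhood are illustrative assumptions, and pixels reached by no seed are left unlabeled here.

```python
import numpy as np
from collections import deque

def grow_regions(e2, seeds, tol=2.0):
    """Grow superpixels R_1(t) from the seed pixels: a 4-connected neighbor
    joins the region when its edge strength differs from the current pixel's
    by at most `tol`. Unreached pixels keep the label -1."""
    h, w = e2.shape
    labels = np.full((h, w), -1, dtype=np.int32)
    queue = deque()
    label = 0
    for y, x in zip(*np.nonzero(seeds)):          # start from S(x,y|t) = 1 pixels
        if labels[y, x] == -1:
            labels[y, x] = label
            queue.append((y, x))
            label += 1
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1 \
               and abs(e2[ny, nx] - e2[y, x]) <= tol:
                labels[ny, nx] = labels[y, x]
                queue.append((ny, nx))
    return labels
```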
The region integration unit 1534 performs region integration processing that merges superpixels with a small area into other regions, starting from the superpixel group R_1(t) indicated by the information input from the region growing unit 1533. Specifically, the region integration unit 1534 constructs a weighted undirected graph in which one point of each superpixel in R_1(t) is a vertex and which is expressed by the connection relationships between vertices and the edges (weights) between connected vertices. Here, the edge weight between two vertices is, for example, the distance in color space between the representative colors of the corresponding superpixels.
The region integration unit 1534 merges regions of the weighted undirected graph so as to construct a minimum spanning tree (MST) using a greedy method, and generates the superpixel group R_2(t). For each superpixel, the region integration unit 1534 assigns a label identifying that superpixel to all pixels within it and outputs the labeling result as the region information (t).
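A hedged sketch of the merging step follows; the embodiment does not spell out the small-area criterion or the merge order, so this sketch visits adjacency edges in increasing order of color distance (Kruskal-style, i.e. in MST order) and merges a pair whenever one side is below an illustrative minimum area.

```python
import numpy as np

def merge_small_regions(labels, mean_colors, adjacency, areas, min_area=50):
    """Greedy, MST-style merging of small superpixels (R_1(t) -> R_2(t)).

    labels      : HxW label image of the superpixel group R_1(t)
    mean_colors : dict label -> representative color (numpy RGB vector)
    adjacency   : iterable of frozenset({a, b}) pairs of adjacent labels
    areas       : dict label -> pixel count
    min_area    : illustrative area threshold for "small" regions
    """
    parent = {lab: lab for lab in areas}

    def find(lab):
        while parent[lab] != lab:
            parent[lab] = parent[parent[lab]]   # union-find with path halving
            lab = parent[lab]
        return lab

    def color_dist(edge):
        a, b = tuple(edge)
        return float(np.linalg.norm(mean_colors[a] - mean_colors[b]))

    # Kruskal-style: visit adjacency edges from the smallest color distance up.
    for edge in sorted(adjacency, key=color_dist):
        a, b = tuple(edge)
        ra, rb = find(a), find(b)
        if ra != rb and (areas[ra] < min_area or areas[rb] < min_area):
            parent[rb] = ra                     # merge the small region
            areas[ra] += areas[rb]
    # Relabel every pixel with its merged root label to obtain R_2(t).
    return np.vectorize(find)(labels)
```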
FIG. 6 is a schematic diagram showing an example of the processing result of the clustering unit 153 according to the present embodiment. The figure shows an image of a stuffed bear riding on a toy train; as the toy train moves along the track, the stuffed bear moves with it.
In this figure, FIG. 6A (reference 6A) is an image representing the edge strength E_2 obtained from the input image, and FIG. 6B (reference 6B) is an image representing the seed information S obtained from the edge strength E_2. FIG. 6C (reference 6C) is an image showing the superpixel group R_1(t), and FIG. 6D (reference 6D) is an image showing the superpixel group R_2(t).
In FIG. 6A, bright (white) portions represent regions of high edge strength and dark (black) portions represent regions of low edge strength. In FIG. 6B, bright (white) portions represent seeds, and dark (black) portions represent pixels whose region (class) is determined by the region growing method.
Comparing FIG. 6C and FIG. 6D, many superpixels (regions) with a small area exist in the superpixel group R_1(t) of FIG. 6C, whereas in the superpixel group R_2(t) of FIG. 6D the number of small-area superpixels (regions) is reduced. In this way, by performing the clustering processing, the video processing device 1 can obtain a more accurate clustering result with fewer small-area superpixels (regions).
Returning to FIG. 4, the feature amount calculation unit 154 receives the region information (t) from the clustering unit 153, the smoothed video information (t) from the filter unit 151a, the smoothed depth information (t) from the filter unit 151b, and the ROI information (t). The feature amount calculation unit 154 calculates a feature value for each region (label) based on the input region information (t), video information (t), depth information (t), and ROI information (t). Specifically, the feature amount calculation unit 154 calculates the following feature values (1) to (7), and then outputs feature amount information (t) indicating the calculated feature values to the foreground region extraction unit 155.
(1) Adjacency relationships between regions
(2) Distance between the centroid of the ROI region and the centroid of each region (hereinafter referred to as the centroid distance)
(3) Mean, median, variance, and standard deviation of each color component
(4) Mean, median, variance, and standard deviation of the depth
(5) Region area
(6) Region perimeter
(7) Circumscribed rectangle of the region
<Operation of the Feature Amount Calculation Unit 154>
An example of the method of calculating the feature values (1) to (7) is described with reference to FIGS. 7 to 10. FIG. 7 is a flowchart showing an example of the operation of the feature amount calculation unit 154. FIG. 8 is a diagram showing an example of labeling (region information) in an 8 × 8 pixel block for explanation. FIG. 9 is a diagram showing an example of the connection relationships between the regions in FIG. 8 expressed as an unweighted undirected graph and as an adjacency matrix. FIG. 10 is a diagram showing, taking label 3 in FIG. 8A as an example, a method of obtaining the perimeter of a region and an example of the circumscribed rectangle of a region.
(Step S154-01) The feature amount calculation unit 154 acquires the region information (t) from the clustering unit 153, the smoothed video information (t) from the filter unit 151a, the smoothed depth information (t) from the filter unit 151b, and the ROI information (t). Thereafter, the process proceeds to step S154-02.
(Step S154-02) Based on the ROI information (t), the feature amount calculation unit 154 sums the coordinate values of the pixels within the ROI region, divides the sums by the number of pixels in the ROI region, and takes the result as the centroid of the ROI region. Thereafter, the process proceeds to step S154-03.
(Step S154-03) Based on the region information (t), the feature amount calculation unit 154 scans the processing target image line by line from the origin (raster scan) and obtains the coordinates of all pixels belonging to each label, the number of those pixels, the position at which a pixel belonging to each label first appears (the start point), and the adjacency relationships between labels (regions). The position (start point) at which a pixel belonging to each label first appears is stored as the start point of the contour tracking used to obtain the region perimeter in step S154-08. Thereafter, the process proceeds to step S154-04. Here, an example of detecting the start point of each label and obtaining the adjacency relationships between labels is described with reference to FIGS. 8 and 9. Suppose there is a labeling result (region information) for an 8 × 8 pixel block as shown in FIG. 8A (reference 8A). In the case of FIG. 8A, when scanning line by line from the origin of the 8 × 8 pixel block (the upper left of FIG. 8A), the pixels denoted Li (i = 1, 2, ..., 5) in FIG. 8B (reference 8B) are detected as the start points of the respective labels. The connection relationships between regions are obtained by comparing, at each pixel, the label of that pixel with the labels of its neighboring pixels and successively detecting differing labels. For example, for the pixel of interest (reference Q) shown in FIG. 9A (reference 9A), the label of that pixel is 1 and the differing labels among its neighboring pixels (adjacent pixels: references Ni (i = 1, 2, ..., 8)) are 3 and 4; it follows that label 1 is adjacent to labels 3 and 4. By performing this processing for all pixels, the adjacency relationships between the labels of the region information in FIG. 8A can finally be expressed as the unweighted undirected graph shown in FIG. 9B (reference 9B). In FIG. 9B, each node number corresponds to a label number and the edges between nodes represent connection relationships. The structure of the graph in FIG. 9B is expressed by the adjacency matrix shown in FIG. 9C (reference 9C), in which the value "1" is assigned when there is an edge between two nodes and the value "0" when there is not.
(Step S154-04) The feature amount calculation unit 154 sums the coordinate values of the pixels belonging to each label Li (0 ≤ i < MaxLabel), where MaxLabel is the total number of labels identifying the superpixels (regions) obtained from the region information (t). The feature amount calculation unit 154 then divides the sums by the number of pixels belonging to the label Li, takes the result as the centroid of the label Li, and calculates the distance (centroid distance) between the centroid of the ROI region obtained in step S154-02 and the centroid of the label Li. Thereafter, the process proceeds to step S154-05.
(Step S154-05) The feature amount calculation unit 154 calculates, from the smoothed video information (t), the mean, median, variance, and standard deviation of each color component for each label Li (0 ≤ i < MaxLabel). Thereafter, the process proceeds to step S154-06.
(Step S154-06) The feature amount calculation unit 154 calculates, from the smoothed depth information (t), the mean, median, variance, and standard deviation of the depth for each label Li (0 ≤ i < MaxLabel). Thereafter, the process proceeds to step S154-07.
(Step S154-07) The feature amount calculation unit 154 takes the total number of pixels belonging to the label Li as the region area. Thereafter, the process proceeds to step S154-08.
(Step S154-08) The feature amount calculation unit 154 calculates the region perimeter of the label Li. Thereafter, the process proceeds to step S154-09. Here, an example of the method of calculating the region perimeter is described for label 3 with reference to FIG. 10A (reference 10A). The region perimeter is the amount of movement required to go once around the region clockwise (or counterclockwise) from the start point of the label in FIG. 10A. For 8-connected components, the moves are divided into the count C_1 of horizontal and vertical tracking moves and the count C_2 of diagonal tracking moves, and the region perimeter (Perimeter) is calculated by equation (5) or equation (6).
[Equations (5) and (6): the region perimeter (Perimeter) expressed in terms of the counts C_1 and C_2 of horizontal/vertical and diagonal tracking moves.]
(Step S154-09) The feature amount calculation unit 154 calculates the smallest rectangle circumscribing the region indicated by the label Li (the circumscribed rectangle). Thereafter, the process proceeds to step S154-10. Here, an example of the method of obtaining the circumscribed rectangle is described with reference to FIG. 10B (reference 10B). In FIG. 10B, the coordinates of the four vertices (references P1, P2, P3, P4) constituting the circumscribed rectangle of label 3 are expressed, using the X coordinate of the leftmost pixel in the region (denoted X_L), the X coordinate of the rightmost pixel (denoted X_R), the Y coordinate of the topmost pixel (denoted Y_T), and the Y coordinate of the bottommost pixel (denoted Y_B), as P1 := (X_L, Y_T), P2 := (X_R, Y_T), P3 := (X_R, Y_B), P4 := (X_L, Y_B). X_L, X_R, Y_T, and Y_B are calculated by equation (7), in which Li denotes a label number, R_Li denotes the set of pixels belonging to the label Li, and x_j and y_j denote the x coordinate and y coordinate of a pixel j belonging to the set R_Li.
X_{L} = \min_{j\in R_{Li}} x_{j},\quad X_{R} = \max_{j\in R_{Li}} x_{j},\quad Y_{T} = \min_{j\in R_{Li}} y_{j},\quad Y_{B} = \max_{j\in R_{Li}} y_{j}    ... (7)
(Step S154-10) When the calculation of the feature values for all labels is complete (Yes in step S154-10), the feature amount calculation unit 154 outputs the feature amount information (t) containing the feature values of each label (region) to the foreground region extraction unit 155. If there is a label whose feature values have not yet been calculated (No in step S154-10), the process returns to step S154-04 and the feature values of the next label are calculated.
In this way, the feature values (1) to (7) can be calculated.
In the present embodiment, the operation of the feature amount calculation unit 154 has been described in the order of steps S154-01 to S154-10, but the operation is not limited to this order and may be changed within a range in which the present invention can be implemented. In the present embodiment, an adjacency matrix is used as an example of the data structure representing the connection relationships between regions, but the data structure is not limited to this and an adjacency list may be used. Furthermore, the feature amount calculation unit 154 calculates the feature values using RGB as the color space of the image, but the present invention is not limited to this; YCbCr (YUV), CIE L*a*b*, CIE L*u*v*, or another color space may be used.
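A minimal per-label feature sketch, covering only a subset of the feature values (1) to (7), is given below; the input layout is assumed for illustration.

```python
import numpy as np

def per_label_features(labels, frame_rgb, depth):
    """Per-label (superpixel) features: centroid, area, mean color, mean depth,
    and circumscribed rectangle, as in steps S154-04 to S154-09.
    Medians, variances, the perimeter, and adjacency are omitted for brevity."""
    feats = {}
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        feats[lab] = {
            "centroid": (xs.mean(), ys.mean()),
            "area": xs.size,                              # number of pixels
            "mean_color": frame_rgb[ys, xs].mean(axis=0),
            "mean_depth": depth[ys, xs].mean(),
            # circumscribed rectangle (X_L, Y_T, X_R, Y_B), cf. equation (7)
            "bbox": (xs.min(), ys.min(), xs.max(), ys.max()),
        }
    return feats
```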
The foreground region extraction unit 155 receives the feature amount information (t) from the feature amount calculation unit 154, and the ROI depth distribution information (t) and the ROI information (t) from the distribution model estimation unit 152. The foreground region extraction unit 155 also reads the video information (t) from the video information storage unit 1581 of the buffer unit 158. Here, the read video information (t) is the information stored by the video information acquisition unit 10 before smoothing, but this is not limiting, and the video information after smoothing may be used instead.
The foreground region extraction unit 155 extracts, from the video information (t), the foreground image region (t) to be extracted, based on the ROI information (t), the feature amount information (t), and the ROI depth distribution information (t). The foreground region extraction unit 155 stores foreground region information (t) indicating the extracted foreground image region (t) in the foreground region information storage unit 1582. After storing the information indicating the foreground image regions (t0−k) at the times t0−k (k = 1, 2, ..., K; k indicates the time of the frame) in the foreground region information storage unit 1582, the foreground region extraction unit 155 outputs information indicating the time t0 to the foreground region correction unit 156.
Details of the processing performed by the foreground region extraction unit 155 are described below.
FIG. 11 is a flowchart showing an example of the operation of the foreground region extraction unit 155 according to the present embodiment.
(Step S155-01) The foreground region extraction unit 155 sets the parameters of the search conditions for searching for the basic foreground region, which is the core region used to extract the foreground region. Specifically, the foreground region extraction unit 155 sets predetermined values as the lower and upper limits of each item of the feature amount information (t). For example, the foreground region extraction unit 155 sets as parameters a lower limit of the region area (the minimum area), an upper limit of the region area (the maximum area), a lower limit of the region perimeter (the minimum perimeter), an upper limit of the region perimeter (the maximum perimeter), and, as the initial value of the upper limit of the centroid distance, the maximum radius of a circle inscribed in the circumscribed rectangle of the ROI region (the maximum distance). By setting the search conditions for the basic foreground region in this way, a region whose centroid lies within the ROI and whose area is large can be detected, and a region belonging to the background can be prevented from being erroneously detected as the basic foreground region. Thereafter, the process proceeds to step S155-02.
(Step S155-02) The foreground region extraction unit 155 selects, from among the regions whose centroid distance is equal to or less than (or less than) the upper limit of the centroid distance, the region with the smallest centroid distance. Thereafter, the process proceeds to step S155-03.
(Step S155-03) The foreground region extraction unit 155 determines whether the feature amount information (t) of the region selected in step S155-02 lies between the lower and upper limits set in step S155-01. When it determines that the feature amount information (t) lies between the lower and upper limits (Yes), it decides that the region selected in step S155-02 is the basic foreground region and the process proceeds to step S155-05. When it determines that the feature amount information (t) does not lie between the lower and upper limits (No), the process proceeds to step S155-04.
(Step S155-04) The foreground region extraction unit 155 updates the lower and upper limits of each item of the feature amount information (t) by subtracting a predetermined value from the lower limit or adding a predetermined value to the upper limit. Thereafter, the process returns to step S155-02.
(Step S155-05) The foreground region extraction unit 155 compares the mean or median depth in the feature amount information (t) of the basic foreground region determined in step S155-03 with that of each region whose centroid lies within the ROI, and determines whether the difference between these feature values is within (or below) a predetermined threshold. The foreground region extraction unit 155 merges the regions for which the difference is determined to be within the threshold with the basic foreground region, and decides that the merged region is the foreground region.
Here, the foreground region extraction unit 155 determines the threshold for the difference in the feature amount information (t) based on the ROI depth distribution information (t) acquired from the distribution model estimation unit 152. Specifically, the foreground region extraction unit 155 calculates the threshold (TH_D1) for the difference in the feature amount information (t) using the following equation (8).
TH\_D1 = \alpha_{1}\,\sigma_{1}    ... (8)
Here, α_1 is a predetermined scaling constant, and σ_1 is the standard deviation when the depth distribution of the foreground region is assumed to be Gaussian.
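A hedged sketch of steps S155-01 to S155-05 follows; the search bounds, the feature dictionary layout, and the threshold TH_D1 = α_1·σ_1 are assumptions made for illustration.

```python
import numpy as np

def extract_foreground(feats, roi_centroid, roi_depth_std, alpha1=2.0,
                       min_area=200, max_dist=100.0):
    """Sketch of the basic-foreground search and depth-based merging.

    feats         : dict label -> {"centroid", "area", "mean_depth", ...}
    roi_centroid  : (x, y) centroid of the ROI region
    roi_depth_std : sigma_1 from the ROI depth distribution information (t)
    alpha1, min_area, max_dist : illustrative search-condition parameters
    """
    def dist(c):
        return float(np.hypot(c[0] - roi_centroid[0], c[1] - roi_centroid[1]))

    # Steps S155-02/03: the basic foreground region is the region closest to the
    # ROI centroid that also satisfies the area bound (the perimeter and other
    # bounds of step S155-01, and the bound-relaxation loop, are omitted here).
    candidates = [l for l, f in feats.items()
                  if dist(f["centroid"]) <= max_dist and f["area"] >= min_area]
    if not candidates:
        return set()
    basic = min(candidates, key=lambda l: dist(feats[l]["centroid"]))

    # Step S155-05: merge regions whose mean depth is within TH_D1 of the basic
    # foreground region's mean depth (the check that a region's centroid lies
    # inside the ROI is omitted for brevity).
    th_d1 = alpha1 * roi_depth_std
    foreground = {basic}
    for l, f in feats.items():
        if abs(f["mean_depth"] - feats[basic]["mean_depth"]) <= th_d1:
            foreground.add(l)
    return foreground
```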
Returning to FIG. 4, the foreground region correction unit 156 corrects the foreground image region (t0) at the time t0 (a first time) indicated by the information input from the foreground region extraction unit 155, based on the foreground image regions (t0−k) at the nearby times t0−k (a second time) (k = 1, 2, ..., K). The foreground region correction unit 156 repeats this correction successively for t0 from earlier times to later times. The foreground region correction unit 156 stores information indicating the corrected foreground image region (t0) (corrected foreground region information) in the corrected foreground region information storage unit 1585 of the buffer unit 158, and also outputs this information to the mask information generation unit 157.
Details of the correction performed by the foreground region correction unit 156 are described below.
FIG. 12 is a schematic block diagram showing the configuration of the foreground region correction unit 156 according to the present embodiment. In this figure, the foreground region correction unit 156 includes a movement amount calculation unit 1561, a foreground region probability map generation unit 1562, a foreground region determination unit 1563, and a boundary region correction unit 1564.
Information indicating the time t0 is input to the movement amount calculation unit 1561 from the foreground region extraction unit 155. The movement amount calculation unit 1561 reads the information from time t0 to time t0−K (the video information (t0−k), video information (t0), foreground region information (t0−k), foreground region information (t0), ROI information (t0−k), and ROI information (t0), for k = 1, 2, ..., K) from the buffer unit 158.
Based on the read video information (t0−k), foreground region information (t0−k), and ROI information (t0−k), the movement amount calculation unit 1561 calculates, for the foreground region image (t0−k) indicated by the foreground region information (t0−k), the movement amount (t0, t0−k) (also called a motion vector) obtained by subtracting its position in the processed image (t0−k) from its position in the processed image (t0). In other words, the movement amount (t0, t0−k) represents the amount by which the foreground region image (t0−k) has moved from time t0−k to time t0. Specifically, the movement amount calculation unit 1561 calculates the movement amount (t0, t0−k) by performing the template matching (also called motion search) processing shown in FIG. 13.
FIG. 13 is an explanatory diagram for explaining the template matching according to the present embodiment. In this figure, the horizontal axis is the time t, the vertical axis is the y coordinate, and the direction perpendicular to both axes is the x coordinate. The image denoted Ik represents the processed image at time (t0−k). The image region denoted Ok represents the foreground image region at time (t0−k) together with the circumscribed rectangle surrounding the foreground image region (object). The coordinates denoted Ak represent the coordinates of the starting point of the circumscribed rectangle surrounding the foreground region denoted Ok. The image denoted Mk is the image indicated by the mask information (t0−k) within the circumscribed rectangle at time (t0−k). Here, the mask information (t0−k) is information that distinguishes, within the circumscribed rectangle, the foreground image region (the white portion in FIG. 13) from the background image region (the black portion in FIG. 13): the foreground image region (t0−k) indicated by the foreground region information (t0−k) is the foreground, and the remaining area within the circumscribed rectangle is the background image region (t0−k). The symbol Vk denotes the vector from the coordinates Ak to the coordinates A0; this vector represents the movement amount (t0, t0−k) of the foreground image region (t0−k).
Using the foreground image region Ok as a template, the movement amount calculation unit 1561 moves the template over the processed image (t0) (rotation and scaling may also be applied) and detects the region most similar to the template (referred to as the estimated region). The movement amount calculation unit 1561 calculates the difference between the coordinates of the detected estimated region and those of the foreground image region Ok as the movement amount (t0, t0−k).
Specifically, the movement amount calculation unit 1561 calculates the coordinates (x0, y0) of the centroid of the ROI region indicated by the ROI information (t0) (referred to as the initial search coordinates). The movement amount calculation unit 1561 detects the estimated region and calculates the movement amount (t0, t0−k) by performing a spiral search centered on the initial search coordinates (x0, y0). Here, the spiral search is a technique that searches for the estimated region by moving the search coordinates outward in a spiral order, as shown in FIG. 14, starting from coordinates where the probability that the foreground image region (t0) exists is high (here, the initial search coordinates). When the movement amount calculation unit 1561 finds a movement amount whose similarity exceeds a predetermined value, it may terminate the spiral search at that point, which reduces the amount of computation.
Taking each coordinate selected in spiral order (a selected coordinate) as the centroid, the movement amount calculation unit 1561 calculates the similarity R_SAD using the following equation (9), and determines the region with the smallest value to be the estimated region.
R_{SAD} = \sum_{j=0}^{N-1}\sum_{i=0}^{M-1} \left| I(i+dx,\, j+dy \mid t0) - T(i,j \mid t0-k) \right|    ... (9)
Here, M × N (W1 × L1 in the example of FIG. 2) represents the size of the template, (i, j) represents the coordinates of a pixel within the template, and T(i,j|t0−k) represents the pixel value at the coordinates (i, j). Further, (dx, dy) is the value (offset value) obtained by subtracting the centroid of the ROI region indicated by the ROI information (t0−k) from the selected coordinates, and I(i+dx, j+dy|t0) represents the pixel value at the coordinates (i+dx, j+dy) in the processed image (t0). Equation (9) computes the absolute differences as a Manhattan distance (L1 distance, L1 norm) and takes their sum over i and j.
When the color space is RGB, I(x,y|t0) is expressed by the following equation (10) using the values of the color components in RGB space, r(x,y|t0), g(x,y|t0), and b(x,y|t0).
[Equation (10): I(x,y|t0) expressed in terms of the color component values r(x,y|t0), g(x,y|t0), and b(x,y|t0).]
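A hedged sketch of the SAD template matching with an early-terminating spiral search follows; the spiral ordering and the choice I = r + g + b are assumptions, since equation (10) is not reproduced above.

```python
import numpy as np

def spiral_offsets(radius):
    """Offsets (dx, dy) ordered by increasing distance from the search center,
    approximating the spiral order of FIG. 14."""
    offs = [(dx, dy) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1)]
    return sorted(offs, key=lambda o: o[0] ** 2 + o[1] ** 2)

def match_template_spiral(frame, template, center, radius=16, good_enough=1000):
    """Find the offset (dx, dy) around `center` minimizing the SAD of equation (9).

    frame, template : HxWx3 arrays; I(x, y) is taken as r + g + b (assumption)
    center          : initial search coordinates (x0, y0), e.g. the ROI centroid
    good_enough     : illustrative early-termination threshold on the SAD
    """
    th, tw = template.shape[:2]
    fi = frame.sum(axis=2).astype(np.int64)
    ti = template.sum(axis=2).astype(np.int64)
    best_sad, best_off = None, (0, 0)
    for dx, dy in spiral_offsets(radius):
        x, y = int(center[0]) + dx, int(center[1]) + dy
        if x < 0 or y < 0:
            continue
        patch = fi[y:y + th, x:x + tw]
        if patch.shape != ti.shape:              # template would leave the frame
            continue
        sad = int(np.abs(patch - ti).sum())      # equation (9)
        if best_sad is None or sad < best_sad:
            best_sad, best_off = sad, (dx, dy)
            if sad <= good_enough:               # optional early termination
                break
    return best_off, best_sad
```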
The movement amount calculation unit 1561 outputs the calculated movement amounts (t0, t0−k) to the foreground region probability map generation unit 1562.
Using the K movement amounts (t0, t0−k) (k = 1, 2, ..., K), the foreground region information (t0−k), and the foreground region information (t0) at time t0 acquired from the foreground region extraction unit 155, the foreground region probability map generation unit 1562 calculates the probability P(x,y|t0) that each coordinate (x, y) in the processed image (t0) is included in the foreground image region. Specifically, the foreground region probability map generation unit 1562 calculates the probability P(x,y|t0) using the following equation (11).
P(x,y\mid t0) = \sum_{k=0}^{K} w_{k}\, M(x - dx_{k},\, y - dy_{k} \mid t0-k)    ... (11)
Here, w_k represents a weighting coefficient, and the sum of w_k from k = 0 to K is 1. dx_k and dy_k represent the x and y components of the movement amount (t0, t0−k), respectively. M(x,y|t0−k) takes the value "1" when the pixel at the coordinates (x, y) of the processed image (t0−k) belongs to the foreground image region (t0−k), and "0" when it does not (i.e. when it belongs to the background image region). The weighting coefficient w_k may be set according to the time distance from t0, for example as shown in equation (12); that is, a smaller weighting coefficient is set for the foreground region information of times farther from time t0.
[Equation (12): an example definition of the weighting coefficients w_k that decrease with the time distance from t0.]
The set of probabilities P(x,y|t0) for all coordinates (x, y) of the processed image (t0) is referred to as the foreground region probability map P(t0). The foreground region probability map P(t0) is expressed by the following equation (13).
P(t0) = \{\, P(x,y\mid t0) \mid 0 \le x < W,\ 0 \le y < H \,\}    ... (13)
Here, W represents the number of pixels of the processed image (t0) in the horizontal direction, and H represents the number of pixels in the vertical direction.
For the calculated foreground region probability map P(t0), the foreground region probability map generation unit 1562 calculates M(x,y|t0) (the foreground region information), which indicates whether the pixel at the coordinates (x, y) of the processed image (t0) belongs to the foreground image region, using the following equation (14).
M(x,y\mid t0) = \begin{cases} 1 & \text{if } P(x,y\mid t0) > Th(x,y\mid t0) \\ 0 & \text{if } P(x,y\mid t0) \le Th(x,y\mid t0) \end{cases}    ... (14)
The foreground region probability map generation unit 1562 sets M(x,y|t0) to "1" (foreground image region) when P(x,y|t0) is larger than the threshold Th(x,y|t0), and sets M(x,y|t0) to "0" (background image region) when P(x,y|t0) is equal to or smaller than the threshold Th(x,y|t0). Here, the threshold Th(x,y|t0) takes a value from 0 to 1 and is expressed, for example, by the following equation (15).
[Equation (15): an example definition of the threshold Th(x,y|t0) in terms of N_F and N_F0.]
Here, N_F is the number of frames (the number of images) used to generate the foreground region probability map (N_F = K in this embodiment), and N_F0 is a predetermined value satisfying 1 ≤ N_F0 < N_F − 1.
The foreground region probability map generation unit 1562 outputs the calculated foreground region information M(x,y|t0) to the boundary line correction unit 1564.
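A minimal sketch of the probability-map accumulation and thresholding follows; the motion-compensated shift of each past mask, the uniform weights w_k, and the constant threshold are assumptions, since equations (11), (12), and (15) are only summarized above.

```python
import numpy as np

def foreground_probability_map(masks, moves, threshold=0.5):
    """Accumulate past foreground masks into P(t0) and threshold it into M(t0).

    masks     : list of HxW binary masks M(.|t0-k) for k = 0..K (k = 0 is time t0)
    moves     : list of (dx_k, dy_k) movement amounts; moves[0] is (0, 0)
    threshold : stands in for Th(x,y|t0); a constant value is assumed here
    """
    h, w = masks[0].shape
    prob = np.zeros((h, w), dtype=np.float64)
    weights = np.full(len(masks), 1.0 / len(masks))   # assumed uniform w_k, sums to 1
    for wk, mask, (dx, dy) in zip(weights, masks, moves):
        shifted = np.zeros_like(prob)
        ys, xs = np.nonzero(mask)
        xs2, ys2 = xs + int(dx), ys + int(dy)          # move the mask to its position at t0
        keep = (xs2 >= 0) & (xs2 < w) & (ys2 >= 0) & (ys2 < h)
        shifted[ys2[keep], xs2[keep]] = 1.0
        prob += wk * shifted                           # cf. equation (11)
    return prob, (prob > threshold).astype(np.uint8)   # cf. equation (14)
```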
The boundary line correction unit 1564 performs contour correction processing along the contour of the foreground image region indicated by the foreground region information M(x,y|t0) input from the foreground region probability map generation unit 1562. Specifically, a smoothed contour (a smooth outline) is obtained by performing the opening and closing operations of morphological image processing.
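For illustration, the opening and closing could be realized with OpenCV's morphology operators; the 5 × 5 elliptical structuring element is an arbitrary choice, not one specified by the embodiment.

```python
import cv2
import numpy as np

def smooth_mask_contour(mask):
    """Morphological opening then closing of the binary foreground mask M(t0),
    which removes small protrusions and fills small holes along the contour."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = np.asarray(mask, dtype=np.uint8)
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    return closed
```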
<Operation of the Foreground Region Correction Unit 156>
The details of the operation of the foreground region correction processing are described below.
FIG. 15 is a flowchart showing an example of the operation of the foreground region correction processing according to the present embodiment.
(Step S207-01) The movement amount calculation unit 1561 reads the information from time t0 to time t0−K (the video information (t0−k), video information (t0), foreground region information (t0−k), foreground region information (t0), ROI information (t0−k), and ROI information (t0)) from the buffer unit 158. The foreground region probability map generation unit 1562 reads the foreground region information (t0−k) from time t0 to time t0−K from the buffer unit 158. Thereafter, the process proceeds to step S207-02.
(Step S207-02) The movement amount calculation unit 1561 calculates the movement amount (t0, t0−k) of the foreground image region (t0−k) based on the information read in step S207-01. Thereafter, the process proceeds to step S207-03.
(Step S207-03) The movement amount calculation unit 1561 determines whether the movement amounts (t0, t0−k) from time t0−1 to time t0−K have all been calculated (i.e. whether there is an unprocessed buffer). When it determines that the movement amounts (t0, t0−k) from time t0−1 to time t0−K have been calculated (Yes), the process proceeds to step S207-04. When it determines that there is a time t0−k for which the movement amount (t0, t0−k) has not yet been calculated (No), the value of k is changed and the process returns to step S207-02.
(Step S207-04) The foreground region probability map generation unit 1562 calculates the foreground region probability map P(t0) using the movement amounts (t0, t0−k) (k = 1, 2, ..., K) calculated in step S207-02 and the foreground region information (t0−k) read in step S207-01. Thereafter, the process proceeds to step S207-05.
(Step S207-05) For the foreground region probability map P(t0) calculated in step S207-04, the foreground region probability map generation unit 1562 calculates the foreground region information M(x,y|t0) using equation (14). In other words, the foreground region probability map generation unit 1562 extracts the foreground image region as the region where the foreground region information M(x,y|t0) = 1. Thereafter, the process proceeds to step S208-01.
(Step S208-01) The boundary line correction unit 1564 performs contour correction processing along the contour of the foreground image region indicated by the foreground region information M(x,y|t0). Thereafter, the operation ends.
Returning to FIG. 4, the mask information generation unit 157 generates a mask representing the corrected foreground image region (t) indicated by the information input from the foreground region correction unit 156. The mask information generation unit 157 may instead generate a mask representing the background region, i.e. the region other than the foreground image region. The mask information generation unit 157 outputs the generated mask information (t).
Returning to FIG. 4, the operation of the buffer unit 158 will be described in detail.
After the foreground region extraction processing for the video at time (t0) is completed, the buffer unit 158 acts as storage and update means: it discards the various data at a time (t) that satisfies the following Condition A (the video information (t), the information indicating the foreground image region (t), the depth information (t), the ROI information (t), and so on) and replaces them with the various data at time (t0) (the video information (t0), the information indicating the foreground image region (t0), the depth information (t0), the ROI information (t0), and so on).
<Condition A>
(1) The various data at the time t whose time distance from time (t0) is the largest (regardless of whether it is a past or a future time)
(2) The various data at the time t whose shape feature parameters (for example, moment invariants) are the least similar to those at time (t0)
FIG. 16 is a flowchart illustrating an example of the operation of the buffer unit 158 according to the present embodiment.
(Step S158-01) The buffer unit 158 searches for an empty buffer for storing the various information at time (t0). Thereafter, the process proceeds to step S158-02.
(Step S158-02) The buffer unit 158 determines whether an empty buffer was found as a result of the search in step S158-01. If it determines that there is an empty buffer (Yes), the process proceeds to step S158-05. If it determines that there is no empty buffer (No), the process proceeds to step S158-03.
(Step S158-03) The buffer unit 158 selects the buffer in which the information satisfying Condition A is stored (referred to as the target buffer). Thereafter, the process proceeds to step S158-04.
(Step S158-04) The buffer unit 158 discards the various data stored in the target buffer selected in step S158-03, thereby emptying the target buffer (clearing its storage area). Thereafter, the process proceeds to step S158-05.
(Step S158-05) The buffer unit 158 stores the various data at time (t0) in the target buffer and ends the buffer update control.
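As a rough illustration of the update control of steps S158-01 to S158-05, the following Python sketch manages a fixed number of buffered times and discards the entry selected by criterion (1) of Condition A; the class name, the capacity parameter, and the dictionary layout are assumptions made for this sketch, and criterion (2) (shape-feature dissimilarity) is indicated only as a comment.

class FrameBuffer:
    # entries: time -> dict of that time's data (video, foreground region, depth, ROI, ...)
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}

    def store(self, t0, data):
        if t0 not in self.entries and len(self.entries) >= self.capacity:   # S158-01 / S158-02
            # S158-03: choose the target buffer by criterion (1) of Condition A
            # (the buffered time farthest from t0); criterion (2), a shape-feature
            # dissimilarity such as moment invariants, could be used here instead.
            victim = max(self.entries, key=lambda t: abs(t0 - t))
            del self.entries[victim]                                        # S158-04
        self.entries[t0] = data                                             # S158-05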
<Operation of the Object Extraction Unit 15>
The operation performed by the object extraction unit 15 will now be described.
FIG. 17 is a flowchart illustrating an example of the operation of the object extraction unit 15 according to the present embodiment.
(Step S201) The object extraction unit 15 reads the various data (the video information (t), the depth information (t), and the ROI information (t)). Specifically, the video information (t) is input to the filter unit 151a and the buffer unit 158, the depth information (t) is input to the filter unit 151b, and the ROI information (t) is input to the distribution model estimation unit 152 and the buffer unit 158. Thereafter, the process proceeds to step S202.
(Step S202) The object extraction unit 15 determines whether there is an extraction target image by determining whether the extraction target flag included in the ROI information (t) indicates presence or absence. If it determines that there is an extraction target image (Yes), the process proceeds to step S203. If it determines that there is no extraction target image (No), the process proceeds to step S210.
(Step S203) The filter unit 151a removes noise from the video information (t) input in step S201 and performs smoothing processing. The filter unit 151b removes noise from the depth information (t) input in step S201 and performs smoothing processing. Thereafter, the process proceeds to step S204.
(Step S204) The distribution model estimation unit 152 estimates the ROI depth distribution information (t) within the ROI based on the depth information (t) smoothed in step S203 and the ROI information (t) input in step S201. Thereafter, the process proceeds to step S205.
(Step S205) The clustering unit 153 divides the processed image (t) into superpixels by performing clustering on the video information (t) smoothed in step S203. The clustering unit 153 labels each superpixel to generate the region information (t). Thereafter, the process proceeds to step S206.
(Step S206) The feature amount calculation unit 154 calculates a feature amount for each region (label) based on the region information (t) generated in step S205, the video information (t) smoothed in step S203, the smoothed depth information (t), and the ROI information (t). Thereafter, the process proceeds to step S207.
(Step S207) The foreground region extraction unit 155 extracts the foreground image region (t) to be extracted from the video information (t) based on the ROI information (t) input in step S201, the ROI depth distribution information (t) estimated in step S204, and the feature amount information (t) calculated in step S206 (foreground region extraction processing). The foreground region extraction unit 155 generates the foreground region information (t) indicating the extracted foreground image region (t). Thereafter, the process proceeds to step S208.
(Step S208) For the foreground region (t) extracted in step S207, the foreground region correction unit 156 corrects the foreground image region (t0) at time t0 based on the foreground image regions (t0-k) at the times t0-k (k = 1, 2, ..., K) close to that time (foreground region correction processing).
(Step S209) The mask information generation unit 157 generates mask information indicating a mask that represents the foreground image region (t0) corrected in step S208. Thereafter, the process proceeds to step S210.
(Step S210) The mask information generation unit 157 stores the mask information generated in step S209 in the mask information storage unit 16.
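For orientation, the following Python sketch arranges the calls corresponding to steps S201 to S210 in order. The helper callables passed in through units are hypothetical stand-ins for the filter units 151a and 151b, the distribution model estimation unit 152, the clustering unit 153, the feature amount calculation unit 154, the foreground region extraction unit 155, the foreground region correction unit 156, and the mask information generation unit 157; only the control flow is taken from the flowchart of FIG. 17.

def extract_object(video_t, depth_t, roi_t, units):
    # units: a dict of callables standing in for the processing units of FIG. 4
    if not roi_t.get("has_target", False):                          # S202: no extraction target
        return None
    video_s = units["smooth_video"](video_t)                        # S203, filter unit 151a
    depth_s = units["smooth_depth"](depth_t)                        # S203, filter unit 151b
    depth_model = units["estimate_depth_model"](depth_s, roi_t)     # S204
    regions = units["cluster_superpixels"](video_s)                 # S205
    features = units["compute_features"](regions, video_s, depth_s, roi_t)            # S206
    foreground = units["extract_foreground"](video_s, roi_t, depth_model, features)   # S207
    foreground = units["correct_foreground"](foreground)            # S208
    return units["make_mask"](foreground)                           # S209 / S210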
FIG. 18 is a schematic diagram illustrating an example of the depth information (t) according to the present embodiment. In this figure, the images D1, D2, and D3 show the depth information (t1-2), the depth information (t1-1), and the depth information (t1), respectively. In the images D1, D2, and D3, portions of the same color have the same depth. In FIG. 18, a lighter (paler) image portion indicates a smaller depth (located closer to the viewer) than a darker image portion.
The depth information (t) in FIG. 18 was obtained from video shot with a stereo camera by determining, through stereo matching, the amount of parallax displacement with respect to the right-eye camera, using the left-eye camera as the reference. In the depth information D1 to D3, the left part of each image enclosed by the chain line (the parts labeled U1 to U3) is an indeterminate region where the parallax displacement cannot be obtained, because the video seen from the left-eye camera differs there from the video seen from the right-eye camera.
FIG. 19 is a schematic diagram illustrating an example of the foreground region probability map P(t0) according to the present embodiment. In this figure, the images P1, P2, and P3 show the foreground region probability map P(t1-2), the foreground region probability map P(t1-1), and the foreground region probability map P(t1), respectively. Each foreground region probability map P(t) was calculated based on the corresponding depth information (t) of FIG. 18. In this figure, a lighter (paler) image portion indicates a higher probability of having a small depth (being located closer to the viewer) than a darker image portion.
FIG. 20 is an explanatory diagram of an example of the foreground image region (t) according to the present embodiment. In this figure, the images M1a, M2a, and M3a are the foreground image region (t1-2), the foreground image region (t1-1), and the foreground image region (t1) after correction by the foreground region correction unit 156 according to the present embodiment. The images M1b, M2b, and M3b are the foreground image region (t1-2), the foreground image region (t1-1), and the foreground image region (t1) obtained by the prior art, respectively.
In FIG. 20, the prior-art images M1b, M2b, and M3b contain missing portions and erroneously extracted portions at the locations labeled E1 to E6. When the images M1b, M2b, and M3b are played back in time order, such missing or erroneously extracted portions of the foreground image region occur in each image, so that flicker arises from the discontinuity of the extracted shape (contour).
In contrast, in the images M1a, M2a, and M3a according to the present embodiment, the shape of the foreground image region is smoothed (stabilized), and the occurrence of missing and erroneously extracted portions of the foreground image region is reduced compared with the images M1b, M2b, and M3b. As a result, in the present embodiment, flicker due to the discontinuity of the extracted shape is suppressed even when the images M1a, M2a, and M3a are played back in time order.
As described above, according to the present embodiment, the foreground region correction unit 156 corrects the foreground image region (t0) indicated by the foreground region information (t0) at time t0, using the foreground region information (t0-k) and the video information (t0-k) at time t0-k. The video processing device 1 can thereby suppress flicker caused by the discontinuity of the extracted shape and reliably extract the image of the target object.
Also, according to the present embodiment, the movement amount calculation unit 1561 calculates the amount by which the foreground image region (t0-k) has moved between time t0-k and time t0, based on the video information (t0) and the ROI information (t0) at time t0 and on the video information (t0-k), the foreground region information (t0-k), and the ROI information at time t0-k. The foreground region probability map generation unit 1562 calculates, based on the movement amount calculated by the movement amount calculation unit 1561 and the foreground image region (t0-k), the foreground region probability map P(t0), which gives the probability that each coordinate in the video at time t0 belongs to the foreground image region. The boundary region correction unit 1564 extracts the foreground region information (t0) at time t0 based on the foreground region probability map P(t0) calculated by the foreground region probability map generation unit 1562, and corrects the foreground image region (t0) indicated by the extracted foreground region information (t0). The video processing device 1 can thereby suppress flicker caused by the discontinuity of the extracted shape and reliably extract the image of the target object.
Also, according to the present embodiment, by using the bilateral filter of Equation (1), the filter unit 151a can separate each image of the video information (t) into a skeleton component in which the edge components are preserved and a texture component containing noise and fine patterns.
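As a minimal sketch of such a skeleton/texture decomposition, the following Python code uses OpenCV's bilateral filter as a stand-in for the filter of Equation (1); the parameter values (9, 75, 75) are arbitrary examples and not values taken from the embodiment.

import cv2
import numpy as np

def decompose(image_bgr):
    # Edge-preserving smoothing yields the skeleton component.
    skeleton = cv2.bilateralFilter(image_bgr, 9, 75, 75)
    # Residual component: noise and fine patterns (computed with a signed type
    # so that negative residuals are not clipped).
    texture = image_bgr.astype(np.int16) - skeleton.astype(np.int16)
    return skeleton, texture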
Also, according to the present embodiment, the filter unit 151b removes noise from the depth information (t) and smooths it. In the video processing device 1, this improves the estimation accuracy of the mixture model of the depth distribution in the distribution model estimation unit 152. As a result, in the video processing device 1, the foreground region extraction unit 155 can accurately determine the depth threshold used for integrating the basic foreground region with each region in the ROI.
Also, according to the present embodiment, in the video processing device 1, the clustering unit 153 performs clustering on the skeleton-component image obtained after the edge-preserving smoothing filter. The video processing device 1 can thereby obtain a group of superpixels that is stable (robust) against noise and texture. Here, a superpixel is a meaningful region having a reasonably large area.
Also, according to the present embodiment, the video processing device 1 obtains the depth distribution model within the ROI as a mixture model using the ROI information, and can therefore acquire the depth distribution model of the foreground region with higher accuracy. As a result, the foreground region extraction unit 155 can accurately determine the depth threshold used for integrating the basic foreground region with each region in the ROI.
Also, according to the present embodiment, the correction by the foreground region correction unit 156 corrects missing portions of the extraction target image region, or erroneously extracted portions of the extraction target image region, at time (t0), and can thereby suppress flicker caused by the discontinuity of the extracted image shape (contour) in the time direction.
Also, in the present embodiment, the video processing device 1 removes noise from the video information (t) and performs smoothing processing. The video processing device 1 can thereby suppress the occurrence of superpixels of small area caused by noise and texture components.
Also, in the present embodiment, the movement amount calculation unit 1561 uses a motion search such as a spiral search, which reduces the amount of computation required to obtain the movement amount (t0, t0-k). When calculating the similarity, the movement amount calculation unit 1561 may also use only the white portion (foreground region) of the mask Mk (see FIG. 13). Compared with template matching that includes the background region, the video processing device 1 can thereby prevent search errors in the movement amount and omit unnecessary computation. Although the present embodiment describes the example of obtaining the movement amount of the foreground region from a past time (t-k) (k = 1 to K) to time (t), the movement amount from a future time (t+k) (k = 1 to K) to time (t) may be obtained instead.
In the detection processing of the user-specified ROI information (ts) in the above embodiment, the case of using a rectangular (quadrangular) selection tool has been described. However, the present invention is not limited to this, and other selection tools may be used; for example, the selection tools shown in FIGS. 21 and 22 may be used.
FIG. 21 is a schematic diagram illustrating another example of the detection processing of the user-specified ROI information (ts) according to the present embodiment. FIG. 21 shows the case of using an ellipse (circle) selection tool, where the user has enclosed the target image to be extracted with the ellipse selection tool.
In this figure, the position information of the frame labeled r2 (the circumscribed circle of the object O; the circumscribed circle may also be an ellipse) is the user-specified ROI information (ts). Specifically, in the case of FIG. 21, the user-specified ROI information (ts) is recorded, for example, as the data in Table 2 below.
[Table 2]
In Table 2, the user-specified ROI information (ts) is represented by the time ts (a frame number may be used instead), a presence/absence flag (extraction target flag) indicating whether an extraction target image exists within the circumscribed circle, the center position (x0, y0) of the circumscribed circle (the coordinates of the point P2 in FIG. 21), the minor-axis direction of the circumscribed circle (the direction of the vector labeled D21 in FIG. 21) and the length of the minor axis (the length labeled W2 in FIG. 21), and the major-axis direction of the circumscribed circle (the direction of the vector labeled D22 in FIG. 21) and the length of the major axis (the length labeled L2 in FIG. 21).
FIG. 22 is a schematic diagram illustrating another example of the detection processing of the user-specified ROI information (ts) according to the present embodiment. FIG. 22 shows the case of using a freehand selection tool, where the user has enclosed the target image to be extracted. In this figure, the position information of the frame labeled r3 (the circumscribed shape of the object O) is the user-specified ROI information (ts). Specifically, in the case of FIG. 22, the user-specified ROI information (ts) is recorded, for example, as the data in Table 3 below.
[Table 3]
In Table 3, the user-specified ROI information (ts) is represented by the time ts (a frame number may be used instead), a presence/absence flag (extraction target flag) indicating whether an extraction target image exists within the circumscribed shape, the start position (x0, y0) of the circumscribed shape (the position of the point at which the user started to input the circumscribed shape; the coordinates of the point P3 in FIG. 22), and a chain code representing the points on the edge of the circumscribed shape, traced clockwise (or counterclockwise) from the start position. A chain code represents a line by numerically encoding the position of a point B adjacent to a certain point A, then numerically encoding the position of a point C (a point other than the point A) adjacent to the adjacent point B, and so on, and concatenating these numerical values.
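For illustration, the following Python sketch encodes and decodes an 8-direction chain code of this kind. The particular numbering of the eight directions (Freeman codes, with code 0 meaning one step to the right and y increasing downward) is an assumption; the embodiment does not fix a specific convention.

# code -> (dx, dy), with y increasing downward as in image coordinates
STEPS = {0: (1, 0), 1: (1, -1), 2: (0, -1), 3: (-1, -1),
         4: (-1, 0), 5: (-1, 1), 6: (0, 1), 7: (1, 1)}
INVERSE = {v: k for k, v in STEPS.items()}

def encode_chain(points):
    # points: successive (x, y) contour points, each 8-adjacent to the previous one
    return [INVERSE[(x1 - x0, y1 - y0)] for (x0, y0), (x1, y1) in zip(points, points[1:])]

def decode_chain(start, codes):
    # Reconstruct the contour points from the start position and the chain code.
    x, y = start
    points = [(x, y)]
    for c in codes:
        dx, dy = STEPS[c]
        x, y = x + dx, y + dy
        points.append((x, y))
    return points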
In the above embodiment, for each processed image, the ROI information (t) obtained from the ROI acquisition unit 13 may be used to superimpose the shape of the ROI enclosing the image region to be extracted on the processed image (t), thereby presenting to the user that the image region to be extracted has been selected. The frame number and time information of the currently displayed processed image (t) may also be presented to the user.
In the above embodiment, the case has been described in which the foreground region correction unit 156 uses the foreground image region (t0-k) at the earlier time t0-k for the foreground image region (t0) at a certain time t0. However, the present invention is not limited to this. For example, only the foreground image region (t0+k) at a time t0+k later than the time t0 may be used, or only the foreground image regions (t0±k) at the times t0±k before and after the time t0 may be used. Furthermore, k may be 1.
In the above embodiment, the depth information (t) need not be one value per pixel of the video information (t); it may be one value per group of adjacent pixels. In other words, the resolution represented by the depth information (t) may differ from the resolution of the video information (t).
The depth information (t) is, for example, information calculated by stereo matching, in which a subject is imaged by a plurality of closely spaced imaging devices and the depth is calculated by detecting displacements, such as the position of the subject, among the plurality of captured pieces of video information. However, the depth information (t) is not limited to information calculated by a passive stereo method such as stereo matching; it may be information acquired by an active three-dimensional measuring device (range finder) that uses light, such as the TOF (Time-Of-Flight) method.
In the above embodiment, the case where the video display unit 14 is a touch-panel display has been described. However, the present invention is not limited to this; other input means may be used, and the video processing device 1 may include an input unit (for example, a pointing device such as a mouse) separate from the video display unit 14.
In the above embodiment, the ROI acquisition unit 13 may extract the ROI information (t) using, for example, any of the following methods (1) to (5). A minimal detection sketch follows this list.
(1) Harris corner detector
(2) FAST corner detection
(3) SUSAN (Smallest Univalue Segment Assimilating Nucleus) corner detector
(4) SURF (Speeded-Up Robust Features)
(5) SIFT (Scale-Invariant Feature Transform)
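As a minimal sketch of feature-point detection restricted to an ROI, the following Python code uses OpenCV's FAST detector (method (2) above); the ROI layout (x0, y0, width, height) and the function name are assumptions, and any of the other detectors listed could be substituted where available.

import cv2

def detect_roi_keypoints(gray_image, roi):
    # roi: (x0, y0, width, height) of the region of interest
    x, y, w, h = roi
    detector = cv2.FastFeatureDetector_create()
    keypoints = detector.detect(gray_image[y:y + h, x:x + w], None)
    # return full-image coordinates of the detected corners
    return [(kp.pt[0] + x, kp.pt[1] + y) for kp in keypoints]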
In the above embodiment, the case where the ROI acquisition unit 13 extracts the ROI information (t) using feature points has been described. However, the present invention is not limited to this; for example, the ROI information (t) may be extracted by a particle filter or by Mean-shift based on the distribution of color information in the region specified by the user. The ROI acquisition unit 13 may also extract the ROI information (t) using a known motion search.
In the above embodiment, the case where the filter units 151a and 151b use bilateral filters has been described. However, the present invention is not limited to this; the filter units 151a and 151b may use other filters, for example, a TV (Total Variation) filter, a k-nearest neighbor averaging filter, a median filter, or a low-pass filter applied only to flat portions with small edge strength.
In the above embodiment, the filter units 151a and 151b may also apply the edge-preserving smoothing filter recursively.
In the above embodiment, the video processing device 1 may also apply the edge-preserving smoothing filter processing to the video information (t) and the depth information (t) before they are input to the object extraction unit 15.
In the above embodiment, the case where the distribution model estimation unit 152 uses a Gaussian distribution as the distribution model of the mixture model has been described. However, the present invention is not limited to this; for example, a member of the exponential family (a Laplace distribution, a beta distribution, a Bernoulli distribution, and so on) may be used. The distribution model estimation unit 152 may set the number of classes Kc used for the mixture model to a predetermined value, or may determine the value as in the following example.
The distribution model estimation unit 152 sets the number of classes Kc to a predetermined number Kc' and performs clustering by the K-means method. Then, when there are classes Ci and Cj whose inter-class distance is at or below a predetermined threshold, the distribution model estimation unit 152 merges the class Ci and the class Cj into a new class Ck'. The distribution model estimation unit 152 determines the number of classes Kc (≤ Kc') by repeating this processing until the number of classes converges to a constant value (a sketch of this procedure is given below).
The method used by the distribution model estimation unit 152 to estimate the depth distribution model is not limited to a parametric estimation method such as a mixture model, and may be a nonparametric estimation method such as the Mean-shift method.
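For illustration only, the following Python sketch follows the class-number determination described above: a simple one-dimensional K-means over depth samples, followed by merging of classes whose centers are closer than a threshold, repeated until the class count stops changing. The initial class number, the merge threshold, and the iteration count are example values, not values from the embodiment.

import numpy as np

def estimate_class_count(depth_values, kc_init=5, merge_threshold=0.1, iters=20):
    data = np.asarray(depth_values, dtype=float).reshape(-1, 1)
    # simple 1-D K-means over the depth samples (Kc' = kc_init)
    centers = np.linspace(data.min(), data.max(), kc_init)
    for _ in range(iters):
        labels = np.argmin(np.abs(data - centers), axis=1)
        centers = np.array([data[labels == k].mean() if np.any(labels == k) else centers[k]
                            for k in range(len(centers))])
    # merge classes whose centers are within the threshold, repeating until stable
    while True:
        centers = np.sort(centers)
        merged = []
        for c in centers:
            if merged and abs(c - merged[-1]) <= merge_threshold:
                merged[-1] = (merged[-1] + c) / 2.0   # merge Ci and Cj into a new class
            else:
                merged.append(c)
        if len(merged) == len(centers):
            return len(merged)                        # converged class count Kc
        centers = np.array(merged)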
In the above embodiment, the case where the clustering unit 153 performs clustering in a feature space has been described. However, the present invention is not limited to this; clustering in the image space may be performed. Clustering in the image space is a method that performs region division in the original image space, without mapping to a feature space, based on the similarity between pixels or between the pixel groups (regions) that make up regions. For example, the clustering unit 153 may perform clustering in the image space using one of the following methods.
(a) Pixel merging method: for example, the clustering unit 153 represents the connectivity between pixels as a weighted undirected graph and merges regions according to the strengths of the edges representing the connectivity so that the vertices form a minimum spanning tree.
(b) Region growing method
(c) Split-and-merge method
(d) A method combining any of (a), (b), and (c)
After the clustering processing in the image space is completed, the clustering unit 153 performs labeling and generates the region information (t) representing the labeling result.
In the above embodiment, the movement amount calculation unit 1561 calculates the similarity R_SAD (Equation (9)) (SAD: Sum of Absolute Differences) and determines the region with the smallest value as the estimated region. However, the present invention is not limited to this; for example, the estimated region may be determined using the other methods shown in (1) to (3) below.
(1) SSD (Sum of Squared Differences)
The movement amount calculation unit 1561 calculates the difference between corresponding pixel values of the two images as a Euclidean distance (L2 distance, L2 norm) and determines the region with the smallest sum R_SSD (Equation (16) below, where I(i, j) and T(i, j) denote the corresponding pixel values of the candidate region and of the template) as the estimated region.
R_SSD = Σ_i Σ_j ( I(i, j) - T(i, j) )^2   ... (16)
Here, Equation (16) indicates that the differences are evaluated with the Euclidean distance (L2 distance, L2 norm) and summed over i and j.
(2) NCC (Normalized Cross-Correlation)
This is also called the normalized cross-correlation. The movement amount calculation unit 1561 determines the region for which the value R_NCC of the following Equation (17) is closest to 1 as the estimated region.
R_NCC = Σ_i Σ_j I(i, j) T(i, j) / sqrt( Σ_i Σ_j I(i, j)^2 · Σ_i Σ_j T(i, j)^2 )   ... (17)
(3) CCC (Cross-Correlation Coefficient)
This is also called the cross-correlation coefficient. The movement amount calculation unit 1561 determines the region for which the value R_CCC of the following Equation (18) is closest to 1 as the estimated region. In Equation (18), I with an overbar and T with an overbar denote the mean vectors of the pixel values in the regions they refer to.
R_CCC = Σ_i Σ_j ( I(i, j) - Ī )( T(i, j) - T̄ ) / sqrt( Σ_i Σ_j ( I(i, j) - Ī )^2 · Σ_i Σ_j ( T(i, j) - T̄ )^2 )   ... (18)
The amount of computation in the movement amount calculation unit 1561 increases in the order of Equations (9), (16), (17), and (18). Instead of the spiral search described above, the movement amount calculation unit 1561 may calculate the movement amount using a hierarchical search method (also called a multi-resolution method or a coarse-to-fine search).
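For reference, the following Python sketch gives straightforward implementations of the matching scores discussed above, corresponding to Equations (9), (16), (17), and (18) as reconstructed here. The optional mask argument of the SAD score restricts the computation to foreground pixels, as described for the mask Mk; the patch and the template are assumed to have the same shape, and degenerate (all-zero) inputs are not handled.

import numpy as np

def sad(patch, template, mask=None):                         # Equation (9)
    d = np.abs(patch.astype(float) - template.astype(float))
    return float(d[mask > 0].sum()) if mask is not None else float(d.sum())

def ssd(patch, template):                                     # Equation (16)
    d = patch.astype(float) - template.astype(float)
    return float((d * d).sum())

def ncc(patch, template):                                     # Equation (17)
    p, t = patch.astype(float).ravel(), template.astype(float).ravel()
    return float(p @ t / np.sqrt((p @ p) * (t @ t)))

def ccc(patch, template):                                     # Equation (18)
    p, t = patch.astype(float).ravel(), template.astype(float).ravel()
    p, t = p - p.mean(), t - t.mean()
    return float(p @ t / np.sqrt((p @ p) * (t @ t)))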
In the above embodiment, the clustering unit 153 may perform clustering only on the image within the ROI, based on the ROI information (t) obtained from the ROI acquisition unit 13. The clustering unit 153 can thereby reduce the amount of computation. Alternatively, the clustering unit 153 may perform clustering on a region wider than the target image region, based on the ROI information. The clustering unit 153 can thereby improve the accuracy of the clustering compared with clustering only the image within the ROI.
In the above embodiment, when judging region integration, the region integration unit 1534 may use the circumscribed rectangle in the region feature amount (t) to judge, for a region whose centroid lies within the ROI, whether part of the region crosses the boundary of the ROI. The video processing device 1 can thereby reduce erroneous extraction of the background region as the foreground region. Instead of using the feature amount of the basic foreground region, the region integration unit 1534 may also judge region integration using the adjacency relationships between regions. For example, the region integration unit 1534 may use the feature amount of a region already judged to be a foreground region to judge integration with a region adjacent to that region. The video processing device 1 can thereby extract the foreground region with higher accuracy.
A part of the video processing device 1 in the above-described embodiment may be realized by a computer. In that case, a program for realizing this control function may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into a computer system and executed. The "computer system" here is a computer system built into the video processing device 1 and includes an OS and hardware such as peripheral devices. The "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into the computer system. Furthermore, the "computer-readable recording medium" may also include a medium that dynamically holds the program for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and a medium that holds the program for a certain period of time, such as a volatile memory inside a computer system serving as a server or a client in that case. The program may be one for realizing a part of the functions described above, or one that realizes the functions described above in combination with a program already recorded in the computer system.
A part or all of the video processing device 1 in the above-described embodiment may be realized as an integrated circuit such as an LSI (Large Scale Integration). The functional blocks of the video processing device 1 may be made into individual processors, or some or all of them may be integrated into a single processor. The method of circuit integration is not limited to LSI and may be realized by a dedicated circuit or a general-purpose processor. Furthermore, if integrated-circuit technology that replaces LSI emerges with the progress of semiconductor technology, an integrated circuit based on that technology may be used.
An embodiment of the present invention has been described above in detail with reference to the drawings. However, the specific configuration is not limited to the one described above, and various design changes and the like can be made without departing from the gist of the present invention.
The present invention is suitable for use in a video processing device.
DESCRIPTION OF SYMBOLS: 1... video processing device, 10... video information acquisition unit, 11... depth information acquisition unit, 12... video information reproduction unit, 13... ROI acquisition unit, 14... video display unit, 15... object extraction unit, 16... mask information recording unit, 151a, 151b... filter unit, 152... distribution model estimation unit, 153... clustering unit, 154... feature amount calculation unit, 155... foreground region extraction unit, 156... foreground region correction unit, 157... mask information generation unit, 158... buffer unit, 1531... feature amount detection unit, 1532... seed generation unit, 1533... region growing unit, 1534... region integration unit, 1561... movement amount calculation unit, 1562... foreground region probability map generation unit, 1563... corrected foreground region determination unit, 1564... boundary region correction unit

Claims (7)

1.  A video processing device that extracts foreground region information indicating a foreground image from a video indicated by video information, the video processing device comprising:
a foreground region correction unit that corrects the foreground image indicated by the foreground region information at a first time by using the foreground region information and the video information at a second time.
2.  The video processing device according to claim 1, wherein the foreground region correction unit corrects the foreground image indicated by the foreground region information at the first time by using the foreground region information and the video information at a plurality of second times.
3.  The video processing device according to claim 1 or 2, wherein the video indicated by the video information includes an image of a predetermined object, and
the foreground region correction unit comprises:
a movement amount calculation unit that calculates, based on the video information and target image region information indicating the image of the object at the first time and on the video information, the foreground region information, and the target image region information at the second time, a movement amount by which the foreground image has moved between the second time and the first time;
a foreground region probability map generation unit that calculates, based on the movement amount calculated by the movement amount calculation unit and the foreground image, a probability that each portion of the video at the first time is the foreground image; and
a correction unit that extracts the foreground region information at the first time based on the probability calculated by the foreground region probability map generation unit and corrects the foreground image indicated by the extracted foreground region information.
4.  A video processing method in a video processing device that extracts foreground region information indicating a foreground image from a video indicated by video information, the video processing method comprising:
a foreground region correction step in which a foreground region correction unit corrects the foreground image indicated by the foreground region information at a first time by using the foreground region information and the video information at a second time.
5.  The video processing method according to claim 4, wherein the video indicated by the video information includes an image of a predetermined object, and
the foreground region correction step comprises:
a movement amount calculation step in which a movement amount calculation unit calculates, based on the video information and target image region information indicating the image of the object at the first time and on the video information, the foreground region information, and the target image region information at the second time, a movement amount by which the foreground image has moved between the second time and the first time;
a foreground region probability map generation step in which a foreground region probability map generation unit calculates, based on the movement amount calculated in the movement amount calculation step and the foreground image, a probability that each portion of the video at the first time is the foreground image; and
a correction step in which a correction unit extracts the foreground region information at the first time based on the probability calculated in the foreground region probability map generation step and corrects the foreground image indicated by the extracted foreground region information.
6.  A video processing program for causing a computer of a video processing device that extracts foreground region information indicating a foreground image from a video indicated by video information to execute:
a foreground region correction procedure of correcting the foreground image indicated by the foreground region information at a first time by using the foreground region information and the video information at a second time.
7.  A video processing program, wherein the video indicated by the video information includes an image of a predetermined object, the video processing program causing the computer to execute, as the foreground region correction procedure:
a movement amount calculation procedure of calculating, based on the video information and target image region information indicating the image of the object at the first time and on the video information, the foreground region information, and the target image region information at the second time, a movement amount by which the foreground image has moved between the second time and the first time;
a foreground region probability map generation procedure of calculating, based on the movement amount calculated in the movement amount calculation procedure and the foreground image, a probability that each portion of the video at the first time is the foreground image; and
a correction procedure of extracting the foreground region information at the first time based on the probability calculated in the foreground region probability map generation procedure and correcting the foreground image indicated by the extracted foreground region information.
PCT/JP2011/073639 2010-10-14 2011-10-14 Video processing device, method of video processing and video processing program WO2012050185A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010231928A JP5036084B2 (en) 2010-10-14 2010-10-14 Video processing apparatus, video processing method, and program
JP2010-231928 2010-10-14

Publications (1)

Publication Number Publication Date
WO2012050185A1 true WO2012050185A1 (en) 2012-04-19

Family

ID=45938406

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/073639 WO2012050185A1 (en) 2010-10-14 2011-10-14 Video processing device, method of video processing and video processing program

Country Status (2)

Country Link
JP (1) JP5036084B2 (en)
WO (1) WO2012050185A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709328A (en) * 2020-05-29 2020-09-25 北京百度网讯科技有限公司 Vehicle tracking method and device and electronic equipment
US20200402256A1 (en) * 2017-12-28 2020-12-24 Sony Corporation Control device and control method, program, and mobile object
US11087169B2 (en) * 2018-01-12 2021-08-10 Canon Kabushiki Kaisha Image processing apparatus that identifies object and method therefor

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2013248207A1 (en) * 2012-11-15 2014-05-29 Thomson Licensing Method for superpixel life cycle management
KR101896301B1 (en) * 2013-01-03 2018-09-07 삼성전자주식회사 Apparatus and method for processing depth image
JP6174894B2 (en) * 2013-04-17 2017-08-02 キヤノン株式会社 Image processing apparatus and image processing method
JP6341650B2 (en) * 2013-11-20 2018-06-13 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP6445775B2 (en) * 2014-04-01 2018-12-26 キヤノン株式会社 Image processing apparatus and image processing method
JP6546385B2 (en) * 2014-10-02 2019-07-17 キヤノン株式会社 IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM
JP6403207B2 (en) * 2015-01-28 2018-10-10 Kddi株式会社 Information terminal equipment
JP6655513B2 (en) * 2016-09-21 2020-02-26 株式会社日立製作所 Attitude estimation system, attitude estimation device, and range image camera
JP2020167441A (en) * 2019-03-28 2020-10-08 ソニーセミコンダクタソリューションズ株式会社 Solid-state image pickup device and electronic apparatus
CN117377985A (en) * 2021-05-24 2024-01-09 京瓷株式会社 Teaching data generation device, teaching data generation method, and image processing device
WO2023047643A1 (en) * 2021-09-21 2023-03-30 ソニーグループ株式会社 Information processing apparatus, image processing method, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11112871A (en) * 1997-09-30 1999-04-23 Sony Corp Image extraction device, image extraction method, image encoder, image encoding method, image decoder, image decoding method, image recorder, image recording method, image reproducing device and image producing method
JP2001076161A (en) * 1999-09-02 2001-03-23 Canon Inc Method and device for image processing and storage medium
JP2008523454A (en) * 2004-12-15 2008-07-03 ミツビシ・エレクトリック・リサーチ・ラボラトリーズ・インコーポレイテッド How to model background and foreground regions
JP2009526495A (en) * 2006-02-07 2009-07-16 クゥアルコム・インコーポレイテッド Region of interest image object segmentation between modes

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002024834A (en) * 2000-07-11 2002-01-25 Canon Inc Image processing device and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11112871A (en) * 1997-09-30 1999-04-23 Sony Corp Image extraction device, image extraction method, image encoder, image encoding method, image decoder, image decoding method, image recorder, image recording method, image reproducing device and image producing method
JP2001076161A (en) * 1999-09-02 2001-03-23 Canon Inc Method and device for image processing and storage medium
JP2008523454A (en) * 2004-12-15 2008-07-03 ミツビシ・エレクトリック・リサーチ・ラボラトリーズ・インコーポレイテッド How to model background and foreground regions
JP2009526495A (en) * 2006-02-07 2009-07-16 クゥアルコム・インコーポレイテッド Region of interest image object segmentation between modes

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200402256A1 (en) * 2017-12-28 2020-12-24 Sony Corporation Control device and control method, program, and mobile object
US11822341B2 (en) * 2017-12-28 2023-11-21 Sony Corporation Control device, control method, and mobile object to estimate the mobile object's self-position
US11087169B2 (en) * 2018-01-12 2021-08-10 Canon Kabushiki Kaisha Image processing apparatus that identifies object and method therefor
CN111709328A (en) * 2020-05-29 2020-09-25 北京百度网讯科技有限公司 Vehicle tracking method and device and electronic equipment
CN111709328B (en) * 2020-05-29 2023-08-04 北京百度网讯科技有限公司 Vehicle tracking method and device and electronic equipment

Also Published As

Publication number Publication date
JP5036084B2 (en) 2012-09-26
JP2012085233A (en) 2012-04-26

Similar Documents

Publication Publication Date Title
JP5036084B2 (en) Video processing apparatus, video processing method, and program
Crabb et al. Real-time foreground segmentation via range and color imaging
KR102121707B1 (en) Object digitization
US9396569B2 (en) Digital image manipulation
Gallup et al. Piecewise planar and non-planar stereo for urban scene reconstruction
CN109636732B (en) Hole repairing method of depth image and image processing device
KR101670282B1 (en) Video matting based on foreground-background constraint propagation
US9542735B2 (en) Method and device to compose an image by eliminating one or more moving objects
US10249029B2 (en) Reconstruction of missing regions of images
CN111563908B (en) Image processing method and related device
Wang et al. Simultaneous matting and compositing
Le et al. Object removal from complex videos using a few annotations
JP5656768B2 (en) Image feature extraction device and program thereof
CN108537868A (en) Information processing equipment and information processing method
Recky et al. Façade segmentation in a multi-view scenario
JP6272071B2 (en) Image processing apparatus, image processing method, and program
JP2011517226A (en) System and method for enhancing the sharpness of an object in a digital picture
AU2014277855A1 (en) Method, system and apparatus for processing an image
Luo et al. Depth-aided inpainting for disocclusion restoration of multi-view images using depth-image-based rendering
CN114600160A (en) Method of generating three-dimensional (3D) model
WO2022056875A1 (en) Method and apparatus for segmenting nameplate image, and computer-readable storage medium
Engels et al. Automatic occlusion removal from façades for 3D urban reconstruction
Finger et al. Video Matting from Depth Maps
Xiang et al. A modified joint trilateral filter based depth map refinement method
JP2013120504A (en) Object extraction device, object extraction method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11832615

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11832615

Country of ref document: EP

Kind code of ref document: A1