WO2005004060A1 - Contour extraction device, contour extraction method, and contour extraction program - Google Patents
Contour extraction device, contour extraction method, and contour extraction program
- Publication number
- WO2005004060A1 (PCT/JP2004/009325, JP2004009325W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- contour
- nodes
- distance
- node
- target area
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/755—Deformable models or variational models, e.g. snakes or active contours
Definitions
- the present invention relates to an apparatus, method and program for extracting an outline of an object from an image obtained by imaging the object by a camera.
- the snake method is a method of extracting the contour of an object by transforming a closed curve so as to minimize a previously defined energy function. Specifically, after the outline of the object is set, the outline of the object is repeatedly contracted and deformed so that the energy function represented by the following equation (1) becomes equal to or less than a predetermined value.
- Equation (1):

  E_snake = ∫ { E_int(p(s)) + E_image(p(s)) + E_con(p(s)) } ds   …(1)

  where p(s) is a node forming the contour, E_int(p(s)) is the "internal energy" representing the smoothness of the contour, E_image(p(s)) is the "image energy" representing the brightness gradient of the image, and E_con(p(s)) is the "external energy" representing a force applied to the contour from outside.
- in the conventional snake method, however, there was a problem that the contour of an object could not be extracted accurately when there were a plurality of objects, or unless the approximate contour of the object was given in advance.
- there was also a problem that the contour of the target object and the contour of another object were extracted as one contour.
- in such a case, the contour V is split into a contour V1 and a contour V2 by setting a connection line L1 connecting node P_a and node P_(b+1) and a connection line L2 connecting node P_b and node P_(a+1).
- Non-Patent Document 1 discloses the following technology.
- Patent Document 1: Japanese Patent Application Laid-Open No. 8-329254 (pages 5-6, FIG. 4)
- Patent Document 2: Japanese Patent Application Laid-Open No. 9-270014 (page 48, FIG. 2)
- Non-Patent Document 1: Sakaguchi, Mino, Ikeda, "Study on setting of SNAKE parameters", Technical Report of IEICE, PRU90-21, pp. 43-49
- an object of the present invention is to provide a contour extraction device, a contour extraction method and a contour extraction program capable of performing necessary contour segmentation processing at an earlier timing while reducing the calculation cost.
- Another object of the present invention is to provide a contour extraction device, a contour extraction method and a contour extraction program capable of flexibly performing a plurality of contour extraction processes regardless of the distance to the object.
- a contour extraction device is a device for extracting the contour of an object from an image obtained by imaging the object with a camera, and is characterized in that, for a plurality of nodes which form the contour and move so as to minimize a predefined energy function, when the distance between two different nodes becomes equal to or less than a threshold set according to the distance to the object, a new connection line is set at the two nodes and the contour is split.
- with this apparatus, after the contour is formed, for the plurality of nodes which move so as to minimize the predefined energy function, when the distance between two different nodes becomes equal to or less than the threshold set according to the distance to the object, a new connection line is set at the two nodes to split the contour.
- the contour extraction device is a device for extracting the contour of an object from an image obtained by imaging the object with a camera, and comprises:
- a node arrangement unit for arranging a plurality of nodes on the periphery of a region including the object in the image;
- a contour deformation unit for deforming the contour, which is formed by connecting the nodes in a predetermined order, by moving the nodes so that a predefined energy function is minimized;
- an inter-node distance measurement unit for measuring the inter-node distance for all combinations of nodes excluding adjacent nodes; and
- a connection line setting unit that, when two nodes whose inter-node distance is equal to or less than a first threshold are detected, splits the contour by setting a new connection line from one of the nodes to the node adjacent to the other node on its front side or rear side.
- with this apparatus, the node arrangement unit arranges a plurality of nodes on the periphery of the region including the object, and the contour deformation unit deforms the contour formed by connecting the nodes in a predetermined order.
- the inter-node distance measurement unit then measures the inter-node distance for all combinations of nodes excluding adjacent nodes. If the connection line setting unit detects two nodes whose inter-node distance is equal to or less than the first threshold, it splits the contour by setting a new connection line from one of the nodes to the node adjacent to the other node on its front side or rear side.
- preferably, the first threshold is set larger as the distance to the object becomes shorter (claim 3).
- the apparatus may further comprise target area setting means for setting an area (target area) including the pixels of the object, and the node arrangement unit arranges the plurality of nodes on the periphery of the target area set by the target area setting means.
- with this apparatus, the target area setting means sets an area including the pixels of the object based on distance information, motion information, and edge information of the object generated from the image. The node arrangement unit then arranges the plurality of nodes on the periphery of the target area set by the target area setting means.
- the target area setting means takes, as a target distance, a distance at which the number of pixels with detected motion exceeds a second threshold, obtains edge information of the pixels imaged within an area having a predetermined front-to-rear width around that target distance, and sets the target area centered on the pixel column in which the accumulated number of edge-detected pixels is maximum (claim 5).
- the target area setting means sets a rectangular area having a width of 50 cm to the left and right and a height of 2 m as the target area.
- the target area setting means repeats the setting of a new target area until it is determined that a predetermined number of contours has been extracted or that no further target area can be set.
- when setting a new target area, the target area setting means sets it from the area excluding the areas of the already extracted contours and the areas judged to contain no object.
- the target area setting means sets the target area also using color information obtained from the camera.
- a contour extraction method is a method for extracting the contour of an object from an image obtained by imaging the object with a camera, and includes:
- a node arrangement step of arranging a plurality of nodes on the periphery of a region including the object in the image;
- a contour deformation step of deforming the contour, which is formed by connecting the nodes in a predetermined order, by moving the nodes so that a predefined energy function is minimized;
- an inter-node distance measurement step of measuring the inter-node distance for all combinations of the nodes excluding adjacent nodes; and
- a connection line setting step of, when two nodes whose inter-node distance is equal to or less than a first threshold are detected, splitting the contour by setting a new connection line from one of the nodes to the node adjacent to the other node on its front side or rear side.
- in this method, a plurality of nodes are arranged on the periphery of the region including the object in the node arrangement step, and the contour is deformed in the contour deformation step by moving the nodes so that the predefined energy function is minimized. Then, in the inter-node distance measurement step, the inter-node distance is measured for all combinations of the nodes excluding adjacent nodes.
- in the connection line setting step, when two nodes whose inter-node distance is equal to or less than the first threshold are detected based on the measurement result of the inter-node distance measurement step, the contour is split by setting a new connection line from one of the nodes to the node adjacent to the other node on its front side or rear side.
- the contour extraction program according to claim 11 causes a computer, in order to extract the contour of an object from an image of the object captured by a camera, to function as:
- node arrangement means for arranging a plurality of nodes on the periphery of a region including the object in the image;
- contour deformation means for deforming the contour, which is formed by connecting the nodes in a predetermined order, by moving the nodes so that a predefined energy function is minimized;
- inter-node distance measurement means for measuring the inter-node distance for all combinations of nodes excluding adjacent nodes; and
- connection line setting means for, when two nodes whose inter-node distance is equal to or less than a first threshold are detected, splitting the contour by setting a new connection line from one of the nodes to the node adjacent to the other node on its front side or rear side.
- this program causes the computer to function as the node arrangement means, the contour deformation means, the inter-node distance measurement means, and the connection line setting means. The node arrangement means arranges a plurality of nodes on the periphery of the region including the object, and the contour deformation means deforms the contour formed by connecting the nodes in a predetermined order by moving the nodes so that the predefined energy function is minimized.
- the inter-node distance measurement means then measures the inter-node distance for all combinations of nodes excluding adjacent nodes, and the connection line setting means splits the contour based on the measurement result of the inter-node distance measurement means.
- according to the present invention, it is possible to perform the necessary contour splitting processing at an earlier timing while reducing the calculation cost. Further, according to the present invention, it is possible to perform a plurality of contour extraction processes flexibly regardless of the distance to the object.
- FIG. 1 is a block diagram showing the overall configuration of the contour extraction system A. Here, it is assumed that the contour of a person (hereinafter referred to as the "target person") is extracted.
- the contour extraction system A comprises two cameras (1a, 1b) that capture a target person (not shown), a captured image analysis device 2 that analyzes the images (captured images) captured by the cameras 1 and generates various information, and a contour extraction device 3 that extracts the contour of the target person based on the various information generated by the captured image analysis device 2.
- the camera 1, the captured image analysis device 2, and the contour extraction device 3 will be described in order.
- the cameras 1 (1a, 1b) are color CCD cameras, and the right camera 1a and the left camera 1b are arranged side by side, separated by a distance B.
- the right camera 1a is used as the reference camera.
- images (captured images) captured by the cameras 1a and 1b at predetermined timings (every frame) are stored frame by frame in a frame grabber (not shown) and then input to the captured image analysis device 2 in synchronization.
- a calibration device performs calibration processing and correction processing on the images, and the corrected images are input to the captured image analysis device 2.
- the cameras 1 need not be color CCD cameras; for example, cameras that simply obtain black-and-white gradation values of 0-255 may be used.
- the captured image analysis device 2 is a device that analyzes the images (captured images) input from the cameras 1a and 1b and generates "distance information", "motion information", and "edge information".
- the captured image analysis device 2 includes a distance information generation unit 21 that generates “distance information”, a motion information generation unit 22 that generates “motion information”, and an edge information generation unit 23 that generates “edge information”. (See Figure 1).
- the distance information generation unit 21 detects the distance from the camera 1 to the object appearing in each pixel, based on the parallax between the two captured images captured by the cameras 1a and 1b at the same time. Specifically, the parallax is determined by the block correlation method from the first captured image captured by the camera 1a, which is the reference camera, and the second captured image captured by the camera 1b. In this embodiment, the parallax is obtained as a scalar value of 0-32 for each pixel. Note that the parallax value is not limited to the range 0-32 and may take values in other ranges. That is, the parallax (scalar value) can be set appropriately according to the computing capability of the captured image analysis device 2, the positional relationship of the cameras 1, and the like.
- the block correlation method compares a block of a specific size (for example, 8 x 3 pixels) between the first captured image and the second captured image, and extracts the pixels (areas) in the two images that capture the same part of the scene. By using this method, it is possible to detect by how many pixels the corresponding pixels (areas) in the two images are offset from each other within the image frames.
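- As a concrete illustration, the block correlation (block matching) step can be sketched as follows. This is a minimal sketch assuming rectified grayscale images, a sum-of-absolute-differences cost, and an 8 x 3 block; the function name and parameters are illustrative, not the patented implementation.

```python
import numpy as np

def block_matching_disparity(ref_img, other_img, block_w=8, block_h=3, max_disp=32):
    """Brute-force block matching: for every block in the reference image,
    search the other image over disparities 0..max_disp and keep the best match."""
    h, w = ref_img.shape
    disparity = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h - block_h):
        for x in range(max_disp, w - block_w):
            ref = ref_img[y:y + block_h, x:x + block_w].astype(np.int32)
            best_d, best_cost = 0, np.inf
            for d in range(max_disp + 1):
                cand = other_img[y:y + block_h, x - d:x - d + block_w].astype(np.int32)
                cost = np.abs(ref - cand).sum()        # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d                   # scalar value in 0-32, as in the text
    return disparity
```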
- from this parallax, the distance from the camera 1 to the object imaged in each pixel is determined by triangulation.
- as the distance information, the parallax values in the range 0-32 are associated, for each pixel, with (x, y, z) position information in real space.
- when the parallax values 0-32 are converted into 256 density values (0-255), an image whose density changes according to the distance is obtained.
- a pixel with density value "0" means that an object at infinity from the camera 1 is imaged, and a pixel with density value "255" means that an object at a position of about 80 cm from the camera 1 is imaged.
- the value "80 cm" is determined according to the focal length of the camera 1, the pixel size, the distance between the two cameras 1a and 1b, and the like.
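- The relationship between disparity, metric distance and the 0-255 density value can be sketched as below. The triangulation formula Z = f·B/d is standard stereo geometry; the focal length, baseline, and the linear 0-32 to 0-255 rescaling are assumptions consistent with the description, not values taken from the patent.

```python
def disparity_to_distance(disparity_px, focal_length_px, baseline_m):
    """Triangulation: distance is inversely proportional to disparity."""
    if disparity_px == 0:
        return float("inf")                    # zero disparity = object at infinity
    return focal_length_px * baseline_m / disparity_px

def disparity_to_density(disparity, max_disp=32):
    """Rescale the 0-32 disparity range onto 0-255 grey levels (nearer = brighter)."""
    return int(round(disparity * 255.0 / max_disp))
```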
- the distance information generated for each pixel is stored in a storage means (not shown) provided in the distance information generation unit 21 and then input to the contour extraction device 3 as necessary.
- FIG. 2 (a) is a distance image P1 represented in gray scale according to the distance information of each pixel generated by the distance information generation unit 21.
- in FIG. 2(a), imaging was performed in an environment where there was no object at all other than the target person C.
- the pixels in the background portion are represented by gray level 0, that is, black.
- the “target person” is a person whose contour is to be extracted.
- the motion information generation unit 22 detects motion in the image based on the difference between the "captured image (t)" at "time t" and the "captured image (t+Δt)" at "time t+Δt", both captured in time series by the camera 1a, which is the reference camera. Specifically, the difference in luminance value of each pixel is taken between the "captured image (t)" and the "captured image (t+Δt)", and the pixels whose difference value is larger than a predetermined threshold T1 are extracted.
- the threshold T1 is determined appropriately in accordance with the complexity of the imaged environment, the number of objects in the image, the complexity of their movement, and the like. In the present embodiment, the threshold T1 is finally set to an appropriate value by adjusting it through trial and error so that only the motion of the target person C is extracted.
- Motion information is obtained by converting the extracted difference value into a scalar value in the range 0-255 for each pixel.
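- A minimal frame-differencing sketch of this motion information is shown below; the threshold T1 and the rescaling onto 0-255 follow the description, while the normalisation by the maximum difference is an assumption.

```python
import numpy as np

def motion_information(frame_t, frame_t_dt, threshold_t1):
    """Per-pixel luminance difference between two frames, thresholded by T1
    and mapped onto scalar values 0-255."""
    diff = np.abs(frame_t.astype(np.int32) - frame_t_dt.astype(np.int32))
    diff[diff <= threshold_t1] = 0                 # keep only significant changes
    if diff.max() > 0:
        diff = diff * 255.0 / diff.max()           # rescale retained differences to 0-255
    return diff.astype(np.uint8)
```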
- Fig. 2 (b) is a difference image P2 represented by the luminance value according to the scalar value.
- in FIG. 2(b), the movement of the left arm of the target person C is detected particularly strongly. Note that other methods can also be used to detect the movement of a moving object.
- when shooting while moving the camera 1, one of the two images may be converted based on the amount of change in the camera parameters so that the motion of the background is canceled before the motion information is detected.
- the edge information generation unit 23 extracts edges existing in the captured image based on the gray level information or color information of each pixel in the image (captured image) captured by the camera 1a, which is the reference camera.
- FIG. 2(c) is an edge image P3 represented by luminance values corresponding to the magnitude of the edges.
- specifically, a Sobel operator is applied to each pixel, and line segments having a predetermined difference from adjacent line segments are detected as edges (horizontal edges or vertical edges) in units of rows or columns.
- the Sobel operator is a coefficient matrix having weighting coefficients for the pixels in the vicinity of a certain pixel.
- edge detection is performed here using a Sobel operator, but the edges may also be detected using other methods.
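- The Sobel-based edge extraction can be sketched as follows; the 3x3 kernels are the standard Sobel coefficient matrices, and the magnitude threshold is an assumed tuning value rather than one given in the patent.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.int32)

def edge_information(gray, edge_threshold=50):
    """Convolve each pixel neighbourhood with the Sobel kernels and mark pixels
    whose gradient magnitude exceeds the threshold as edge pixels."""
    h, w = gray.shape
    magnitude = np.zeros((h, w), dtype=np.float64)
    g = gray.astype(np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = g[y - 1:y + 2, x - 1:x + 2]
            gx = (patch * SOBEL_X).sum()
            gy = (patch * SOBEL_Y).sum()
            magnitude[y, x] = np.hypot(gx, gy)
    edges = (magnitude > edge_threshold).astype(np.uint8)   # 1 where an edge was detected
    return edges, magnitude
```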
- the “distance information” and the “motion information” generated by the captured image analysis device 2 are input to the target distance detection unit 31 A of the contour extraction device 3.
- the "edge information" is also input to the target area setting unit 31B of the contour extraction device 3.
(Contour Extraction Device 3)
- the contour extraction device 3 is a device that extracts the contour of the target person C based on the “distance information”, the “motion information”, and the “edge information” generated by the captured image analysis device 2.
- the contour extraction device 3 comprises target area setting means 31 for setting, based on the "distance information", "motion information", and "edge information", a region (target area) where the target person C is estimated to exist and where contour extraction should be performed, and contour extraction means 32 for extracting the contour within the set target area.
- the target area setting means 31 comprises a target distance detection unit 31A that detects the distance from the camera 1 to the target person C (target distance), and a target area setting unit 31B that sets the "target area" corresponding to the target distance detected by the target distance detection unit 31A.
- the target distance detection unit 31A detects the distance "D1" from the camera 1 to the target person C based on the distance information generated by the distance information generation unit 21 and the motion information generated by the motion information generation unit 22. Specifically, the number of pixels in which motion is detected is accumulated for each distance represented by a scalar value of 0-255. Then, if the accumulated value is larger than a predetermined threshold T2, the pixels whose motion is detected at that distance are regarded as pixels reflecting the movement of the target person.
- the threshold T2 corresponds to the "second threshold” in [claims].
- the threshold T2 is set appropriately so that the target person C can be detected accurately, in accordance with the number of target persons, the characteristics of their movement, their existence range, and the like.
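- The target distance detection can be sketched as below: moving pixels are counted per distance value (0-255) and a distance whose count exceeds T2 is taken as the target distance D1. Picking the distance with the largest count when several exceed T2 is an assumption; the text only states that counts above T2 indicate the target person.

```python
import numpy as np

def detect_target_distance(distance_img, motion_img, threshold_t2):
    """distance_img: per-pixel distance as density values 0-255 (uint8);
    motion_img: per-pixel motion values, non-zero where motion was detected."""
    moving = motion_img > 0
    counts = np.bincount(distance_img[moving], minlength=256)   # moving pixels per distance
    candidates = np.where(counts > threshold_t2)[0]
    if candidates.size == 0:
        return None                                             # no target detected
    return int(candidates[np.argmax(counts[candidates])])       # target distance D1
```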
- the target distance D1 detected by the target distance detection unit 31A is input to the target area setting unit 31B.
- the target area setting unit 31B extracts the edge information of the pixels that image objects located within a range of ±α in front of and behind the target distance D1 detected by the target distance detection unit 31A (see FIG. 2(c)).
- here, α is set to 50 cm in consideration of the width of the human body and a margin.
- next, the target area setting unit 31B sets a rectangular target area A in which the contour extraction processing is performed. Since the target distance D1 (depth) has already been set, setting the target area A means that the contour extraction processing described later is, as a result, performed on the image information corresponding to a rectangular parallelepiped space region. Specifically, for the edge information of only the pixel portion imaging the target person C, the number of edge-detected pixels is accumulated for each column (vertical line) of the image frame, and the position of the pixel column whose accumulated value (histogram H) is maximum is specified as the center line of the target person C.
- Fig. 4 (a) is an image diagram showing the center line identified.
- the target area setting unit 31B sets the target area A based on the identified center line.
- here, the target area A is set as a rectangle whose lower side is the floor surface at the target distance D1 from the camera 1 and whose height is 2 m, but the target area A may be set by other methods as long as the object whose contour is to be extracted is included.
- the floor surface position can be recognized and the target area A can be set correctly by referring to camera parameters such as the tilt angle and the height of the camera 1. As a result, the target person C is reliably included in the target area A.
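- The target area setting described above can be sketched as follows: edge pixels near the target distance are accumulated per image column, the column with the maximum count becomes the centre line, and a rectangle of ±50 cm width and 2 m height around it is converted into pixel coordinates. The helper metres_to_pixels() is hypothetical and would be derived from the camera parameters and the target distance D1; placing the lower side at the image bottom is a simplification of the floor-surface handling described above.

```python
import numpy as np

def set_target_area(edge_img, distance_img, d1_density, alpha_density,
                    metres_to_pixels, img_height):
    """Return the target area A as (left, top, right, bottom) in pixels."""
    near_target = np.abs(distance_img.astype(np.int32) - d1_density) <= alpha_density
    edge_near = np.where(near_target, edge_img, 0)     # keep only edges near distance D1
    histogram = edge_near.sum(axis=0)                  # edge count per column (histogram H)
    centre_col = int(np.argmax(histogram))             # centre line of the target person
    half_width = int(metres_to_pixels(0.5))            # 50 cm to each side
    height = int(metres_to_pixels(2.0))                # 2 m high
    left = max(centre_col - half_width, 0)
    right = centre_col + half_width
    bottom = img_height                                # simplified floor position
    top = max(bottom - height, 0)
    return left, top, right, bottom
```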
- FIG. 4 (b) is a diagram showing a state in which the target area A is set.
- the edge pixels and the histogram may actually be displayed on the image in order to set the center line and the target area A, but creating images such as FIG. 4(a) and FIG. 4(b) is not a requirement of the present invention.
- the contour extraction means 32 comprises a node arrangement unit 32A for arranging nodes at equal intervals on the periphery of the target area A set by the target area setting unit 31B, a contour deformation unit 32B for deforming the contour, an inter-node distance measurement unit 32C for measuring the inter-node distance of the nodes constituting the contour, and a connection line setting unit 32D for setting the connection lines that split the contour based on the measurement results of the inter-node distance measurement unit 32C.
- this contour extraction means 32 extracts the contour of the target person C from within the target area A in FIG. 4(b). First, the node arrangement unit 32A arranges nodes Pi at equal intervals on the periphery of the target area A (see FIG. 5(a)).
- the contour V is obtained by connecting each node Pi in the order of arrangement.
- the positional information of these nodes Pi is input to the contour transformation unit 32B.
- the number n of nodes is appropriately determined according to the processing capability of the contour extraction device 3, the complexity of the shape of the object to be extracted, the speed of movement, and the like. In the present embodiment, the number of nodes n is 100.
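- The initial node arrangement can be sketched as follows: n nodes (n = 100 in this embodiment) are placed at equal intervals along the perimeter of the rectangular target area A, in an order such that connecting them in sequence gives the initial contour V. The coordinate convention (x, y) and the starting corner are assumptions.

```python
import numpy as np

def arrange_nodes(left, top, right, bottom, n=100):
    """Place n nodes at equal arc-length intervals on the rectangle perimeter."""
    width, height = right - left, bottom - top
    perimeter = 2 * (width + height)
    nodes = []
    for i in range(n):
        s = perimeter * i / n                      # arc length travelled along the perimeter
        if s < width:                              # top edge, left to right
            nodes.append((left + s, top))
        elif s < width + height:                   # right edge, top to bottom
            nodes.append((right, top + (s - width)))
        elif s < 2 * width + height:               # bottom edge, right to left
            nodes.append((right - (s - width - height), bottom))
        else:                                      # left edge, bottom to top
            nodes.append((left, bottom - (s - 2 * width - height)))
    return np.array(nodes, dtype=np.float64)       # contour V = nodes connected in this order
```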
- the contour deforming unit 32B deforms the contour V by moving the node Pi so as to minimize the previously defined energy function (see FIG. 5 (b)).
- as the "predefined energy function", for example, the energy function represented by Equation (1) in the [Prior Art] section can be used.
- each term of Equation (1) is as described above. One node consists of one pixel.
- moving each node so as to minimize the above energy function means moving each node to a position where the energy calculated from the energy function becomes smaller.
- for example, the internal energy acts so that the line connecting three consecutive nodes becomes as close to a straight line as possible.
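- A greedy deformation step consistent with this description can be sketched as follows: each node is moved to the neighbouring pixel that lowers its energy, where the internal term favours three consecutive nodes lying on a straight line and the image term favours strong brightness gradients. The specific weights, the 3x3 search window, and the exact energy terms are assumptions; the text only requires that the nodes move so that Equation (1) decreases.

```python
import numpy as np

def node_energy(prev_pt, pt, next_pt, grad_mag, w_int=1.0, w_img=1.0):
    """Energy of one node: smoothness (distance from the midpoint of its
    neighbours) minus the image gradient magnitude at the node."""
    mid = (prev_pt + next_pt) / 2.0
    e_int = np.sum((pt - mid) ** 2)                          # straight-line preference
    y = int(np.clip(np.rint(pt[1]), 0, grad_mag.shape[0] - 1))
    x = int(np.clip(np.rint(pt[0]), 0, grad_mag.shape[1] - 1))
    e_img = -grad_mag[y, x]                                  # strong edges lower the energy
    return w_int * e_int + w_img * e_img

def deform_contour(nodes, grad_mag, iterations=1):
    """One or more greedy passes: each node moves to the best of its 3x3 neighbours."""
    n = len(nodes)
    for _ in range(iterations):
        for i in range(n):
            prev_pt, next_pt = nodes[(i - 1) % n], nodes[(i + 1) % n]
            best_pt = nodes[i].copy()
            best_e = node_energy(prev_pt, best_pt, next_pt, grad_mag)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    cand = nodes[i] + np.array([dx, dy], dtype=np.float64)
                    e = node_energy(prev_pt, cand, next_pt, grad_mag)
                    if e < best_e:
                        best_e, best_pt = e, cand
            nodes[i] = best_pt
    return nodes
```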
- the inter-node distance measurement unit 32C measures (calculates) the inter-node distance D2 for all combinations of nodes excluding adjacent nodes, for each node Pi constituting the contour V deformed by the contour deformation unit 32B.
- the measurement result of the inter-node distance measurement unit 32C is input to the connection line setting unit 32D.
- the connection line setting unit 32D first determines, based on the measurement result input from the inter-node distance measurement unit 32C, whether there is a "combination of nodes Pi" whose inter-node distance D2 is equal to or less than a predetermined distance D3. The distance D3 corresponds to the "first threshold" in the claims. Here, the distance D3 is set smaller as the distance from the camera 1 increases. This enables accurate contour extraction regardless of whether the object is near or far. In the present invention, since the distance to the imaged object is calculated for each pixel, this determination processing using a threshold corresponding to the distance is possible.
- the distance D3 is appropriately set in accordance with the contour extraction target so that it is smaller than the minimum inter-node distance D2 (for example, the distance between the wrists) that occurs when the contour of a single target object is detected. This prevents two contours from being extracted for one object.
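- A distance-dependent threshold of this kind can be sketched as a simple function; the inverse-proportional form and the reference values below are illustrative assumptions consistent with the idea that the threshold is larger for nearer (larger-appearing) objects.

```python
def split_threshold_d3(object_distance_m, ref_distance_m=2.0, ref_threshold_px=20.0):
    """Pixel threshold D3 shrinks as the object gets farther from the camera."""
    return ref_threshold_px * ref_distance_m / max(object_distance_m, 0.1)
```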
- here, the contour V is split by setting a connection line L1 connecting node P_a and node P_(b+1) and a connection line L2 connecting node P_b and node P_(a+1); that is, each of the two close nodes ("self" P_a and "other" P_b) is connected to the node on the rear side of the other node in the node connection order, but the connection may instead be made to the node on the front side.
- in this way, the contour extraction means 32 arranges the nodes with the node arrangement unit 32A, deforms the contour with the contour deformation unit 32B, and then measures, with the inter-node distance measurement unit 32C, the inter-node distance D2 for all combinations of nodes Pi excluding adjacent nodes. The connection line setting unit 32D then detects combinations of nodes whose inter-node distance D2 is equal to or less than the distance D3, based on the measurement result of the inter-node distance measurement unit 32C.
- then, the contour is split by setting a connection line L1 connecting node P_a and node P_(b+1) and a connection line L2 connecting node P_b and node P_(a+1), so that the contour V is divided into a contour V1 and a contour V2 (see FIG. 7).
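- The distance check and the split can be sketched as follows. The pairing loop skips adjacent nodes, and the split produces the two closed contours V1 (P_a, P_(b+1), ..., P_(a-1)) and V2 (P_(a+1), ..., P_b) via the new connection lines L1 and L2; representing a contour as an ordered numpy array of node coordinates is an assumption.

```python
import numpy as np

def find_close_pair(nodes, d3):
    """Return the first non-adjacent pair (a, b) whose distance is <= D3, else None."""
    n = len(nodes)
    for a in range(n):
        for b in range(a + 2, n):
            if a == 0 and b == n - 1:
                continue                                  # skip the adjacent wrap-around pair
            if np.linalg.norm(nodes[a] - nodes[b]) <= d3:
                return a, b
    return None

def split_contour(nodes, a, b):
    """Split the closed contour at the close pair (a, b), a < b, using the new
    connection lines L1 (P_a - P_(b+1)) and L2 (P_b - P_(a+1))."""
    n = len(nodes)
    v1 = [nodes[a]] + [nodes[i % n] for i in range(b + 1, a + n)]   # P_a, P_(b+1), ..., P_(a-1)
    v2 = [nodes[i] for i in range(a + 1, b + 1)]                    # P_(a+1), ..., P_b
    return np.array(v1), np.array(v2)
```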
- FIG. 8 is a flowchart for explaining the “captured image analysis step” and the “target area setting step” in the operation of the contour extraction system A.
- FIG. 9 is a first flow chart for explaining “contour extraction step” in the operation of the contour extraction system A.
- FIG. 10 is a second flow chart for explaining “contour extraction step” in the operation of the contour extraction system A.
- first, in step S1, a captured image is input to the captured image analysis device 2. Subsequently, in step S2, the distance information generation unit 21 generates "distance information" from the captured image input in step S1. Next, in step S3, the motion information generation unit 22 generates "motion information" from the captured image input in step S1. Then, in step S4, the edge information generation unit 23 generates "edge information" from the captured image input in step S1.
- in step S5, the target distance detection unit 31A detects the "target distance D1", which is the distance from the camera 1 to the target person C, from the "distance information" generated in step S2 and the "motion information" generated in step S3.
- in step S6, the target area setting unit 31B sets the "target area A", which is the region in which the contour extraction processing is performed, based on the "target distance D1" detected in step S5.
- the processes of step S2, step S3 and step S4 may be performed in parallel.
- next, in step S7, the node arrangement unit 32A arranges the nodes Pi on the periphery of the target area A, and the contour V is obtained by connecting each node Pi in the order of arrangement (see FIG. 5(a)).
- in step S8, the contour V is deformed by moving each node Pi arranged in step S7 so as to minimize the predefined energy function (see FIG. 5(b)).
- in step S9, the inter-node distance D2 is measured for all combinations of the nodes excluding adjacent nodes (see FIG. 6(a), (b), (c)).
- in step S10, based on the measurement result in step S9, it is determined whether there is a "combination of nodes Pi" whose inter-node distance D2 is equal to or less than the predetermined distance D3. If it is determined in step S10 that there is a combination of nodes Pi with inter-node distance D2 ≤ distance D3, the process proceeds to step S11; if it is determined that there is no such combination, the process proceeds to step S12.
- in step S11, for the combination of nodes (node P_a, node P_b) with inter-node distance D2 ≤ distance D3 detected in step S10, the contour is split by setting a connection line L1 connecting node P_a and node P_(b+1) and a connection line L2 connecting node P_b and node P_(a+1) (see FIG. 7).
- next, in step S12, it is determined whether the number of repetitions of the processing up to this point is greater than a predetermined threshold Th1. If it is determined in step S12 that "the number of repetitions is greater than the predetermined threshold Th1", the process proceeds to the next step S13. Conversely, if it is determined in step S12 that "the number of repetitions is not greater than the predetermined threshold Th1", the process returns to step S8. Next, in step S13, it is determined whether the actual height calculated based on the extracted contour is greater than a predetermined threshold Th2.
- the threshold Th1 may be set according to the shape of the object, the speed of its movement, the processing capability of the contour extraction device 3, and the like, so that the deformation is repeated until the contour of the object is properly extracted. If it is determined in step S13 that "the height of the contour is greater than the predetermined threshold Th2", the process proceeds to step S15. Conversely, if it is determined in step S13 that "the height of the contour is not greater than the predetermined threshold Th2", the process proceeds to step S14.
- if the threshold Th2 is set too high, the contour of the object to be extracted cannot be extracted. Conversely, if the threshold Th2 is set too low, even the contours of unnecessary objects other than the target object are extracted. Therefore, the threshold Th2 used in this processing is set appropriately based on the height of the object whose contour is to be extracted.
- in step S14, the image information within the extracted contour is updated.
- here, updating the image information means setting the distance information in the image information to zero.
- in step S15, it is determined whether the number of already extracted contours (the number of objects) has reached a predetermined threshold Th3. If it is determined in step S15 that "the number of already extracted contours has reached the threshold Th3" (for example, five human contours have been extracted), the process ends. Conversely, if it is determined in step S15 that "the number of already extracted contours has not reached the threshold Th3", the process returns to step S6.
- the threshold Th3 may be determined according to the number of objects to be extracted simultaneously, but if it is too large the computational load increases, which may hinder high-speed and accurate contour extraction. Therefore, the threshold Th3 needs to be set appropriately according to the computing capability of the contour extraction device 3, the purpose of the contour extraction, and the like.
- the processes of steps S1 to S15 are performed on the image captured at time t; after they are completed, the same processes are performed on the image captured at time t+1.
- in step S14, the image information (distance information) of the region determined not to be the object is canceled. The process of step S6 performed after "NO" is determined in step S15 is then carried out on the remaining image area, excluding the area canceled in step S14 and the areas of the already extracted contours. Therefore, in the present invention, a plurality of contour extractions can be executed efficiently.
- if it is determined that no object subject to contour extraction clearly exists, for example when no motion at all is detected in step S3 or when the target distance D1 cannot be set in step S5, all processing ends.
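- The overall loop over steps S6-S15 can be summarised in the following sketch. All helper functions are hypothetical stand-ins for the units described above, and the loop structure (deform for up to Th1 iterations, discard contours whose height does not exceed Th2, stop after Th3 contours or when no further target area can be set) is a simplified reading of the flowcharts.

```python
def extract_all_contours(frame_data, th1, th2, th3):
    """Repeated contour extraction for one captured frame (steps S6-S15, simplified)."""
    contours = []
    while len(contours) < th3:                              # step S15: stop at Th3 contours
        area = set_next_target_area(frame_data)             # step S6 (hypothetical helper)
        if area is None:                                     # no further target area can be set
            break
        contour = arrange_and_connect_nodes(area)            # step S7 (hypothetical helper)
        for _ in range(th1):                                  # steps S8-S12: deform up to Th1 times
            contour = deform_and_split(contour, frame_data)   # hypothetical helper
        if contour_height(contour, frame_data) > th2:         # step S13
            contours.append(contour)
        else:
            cancel_image_information(frame_data, contour)     # step S14: zero the distance info
    return contours
```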
- with the conventional method, the calculation cost was about 200 msec/frame, whereas in the present invention it was about 100 msec/frame.
- in Non-Patent Document 1 it was difficult to give the fixed threshold for contour division general applicability, whereas in the present invention the distance information is associated with each pixel and the threshold distance D3 is varied according to the distance to the object, so the contour can be accurately extracted as necessary regardless of whether the object is far or near.
- each component of the captured image analysis device 2 and the contour extraction device 3 included in the contour extraction system A may be configured by hardware or may be configured by a program.
- the same effects as a hardware configuration can be obtained by operating the CPU, memory, and the like of a computer according to the program.
- the camera 1 may be a fixed camera or may have a variable imaging direction.
- the camera 1 may be mounted on a fixed body or may be mounted on a mobile body.
- the camera 1, the captured image analysis device 2, and the contour extraction device 3 may be provided separately or may be integrated.
- information communication between the devices may be performed by wire or wirelessly.
- by extracting skin color areas using the color information acquired by the camera 1, it is possible to identify human pixel areas and perform contour extraction only on those specific areas. Furthermore, if the target person is isolated by combining this with a method that determines a substantially elliptical skin-color area to be a face, the contour extraction of a person can be performed even more efficiently and reliably.
- the contour information extracted continuously over time can be used for many applications, such as indoor surveillance cameras, traffic measurement, and autonomous bipedal robots.
- according to the present invention, a plurality of contours can be extracted simultaneously while reducing the computational load; for example, if the present invention is applied to an indoor surveillance camera and the information other than the regions where active contours are extracted is canceled, it also becomes possible to store only the images of the objects of interest for a long time while reducing the amount of information.
- in addition, the administrator can grasp the presence or absence of objects in the image, which is also useful for adjusting the thresholds.
- FIG. 1 is a block diagram showing an entire configuration of a contour extraction system A.
- FIG. 2 (a) is a distance image P1; (b) is a difference image P2; (c) is an edge image P3.
- FIG. 3 is a diagram for describing setting of a target distance.
- FIG. 4 is a diagram for explaining setting of a target area A.
- FIG. 5 (a) is a diagram for explaining the arrangement of the nodes, and (b) is a diagram for explaining the deformation of the contour.
- FIG. 6 is a diagram for explaining measurement of the distance between nodes.
- FIG. 7 is a diagram for explaining setting of connection lines.
- FIG. 8 is a flowchart for explaining the "captured image analysis step" and the "target area setting step" in the operation of the contour extraction system A.
- FIG. 9 is a first flowchart for explaining the "contour extraction step" in the operation of the contour extraction system A.
- FIG. 10 is a second flow chart for explaining “contour extraction step” in the operation of the contour extraction system A.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/563,064 US7418139B2 (en) | 2003-07-01 | 2004-07-01 | Contour extraction apparatus, contour extraction method, and contour extraction program |
EP04746794.9A EP1640917B1 (en) | 2003-07-01 | 2004-07-01 | Contour extracting device, contour extracting method, and contour extracting program |
JP2005511351A JP4523915B2 (ja) | 2003-07-01 | 2004-07-01 | 輪郭抽出装置、輪郭抽出方法及び輪郭抽出プログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-270070 | 2003-07-01 | ||
JP2003270070 | 2003-07-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005004060A1 true WO2005004060A1 (ja) | 2005-01-13 |
Family
ID=33562603
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/009325 WO2005004060A1 (ja) | 2003-07-01 | 2004-07-01 | 輪郭抽出装置、輪郭抽出方法及び輪郭抽出プログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US7418139B2 (ja) |
EP (1) | EP1640917B1 (ja) |
JP (1) | JP4523915B2 (ja) |
WO (1) | WO2005004060A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006309455A (ja) * | 2005-04-27 | 2006-11-09 | Tokai Rika Co Ltd | 特徴点検出装置及び距離測定装置 |
JP2015095691A (ja) * | 2013-11-08 | 2015-05-18 | 株式会社リコー | 情報処理装置、情報処理方法およびプログラム |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7672516B2 (en) * | 2005-03-21 | 2010-03-02 | Siemens Medical Solutions Usa, Inc. | Statistical priors for combinatorial optimization: efficient solutions via graph cuts |
JP4516516B2 (ja) * | 2005-12-07 | 2010-08-04 | 本田技研工業株式会社 | 人物検出装置、人物検出方法及び人物検出プログラム |
JP2008243184A (ja) * | 2007-02-26 | 2008-10-09 | Fujifilm Corp | 濃淡画像の輪郭補正処理方法及びその装置 |
US20110123117A1 (en) * | 2009-11-23 | 2011-05-26 | Johnson Brian D | Searching and Extracting Digital Images From Digital Video Files |
JP5720488B2 (ja) * | 2011-08-16 | 2015-05-20 | リコーイメージング株式会社 | 撮像装置および距離情報取得方法 |
US8970693B1 (en) * | 2011-12-15 | 2015-03-03 | Rawles Llc | Surface modeling with structured light |
KR101373603B1 (ko) * | 2012-05-04 | 2014-03-12 | 전자부품연구원 | 홀 발생 억제를 위한 3d워핑 방법 및 이를 적용한 영상 처리 장치 |
CN113781505B (zh) * | 2021-11-08 | 2022-11-18 | 深圳市瑞图生物技术有限公司 | 染色体分割方法、染色体分析仪及存储介质 |
CN114554294A (zh) * | 2022-03-04 | 2022-05-27 | 天比高零售管理(深圳)有限公司 | 一种直播内容的过滤和提示方法 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002092622A (ja) * | 2000-09-14 | 2002-03-29 | Honda Motor Co Ltd | 輪郭抽出装置、輪郭抽出方法、及び輪郭抽出プログラムを記録した記録媒体 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5590261A (en) * | 1993-05-07 | 1996-12-31 | Massachusetts Institute Of Technology | Finite-element method for image alignment and morphing |
US5487116A (en) * | 1993-05-25 | 1996-01-23 | Matsushita Electric Industrial Co., Ltd. | Vehicle recognition apparatus |
JPH08329254A (ja) | 1995-03-24 | 1996-12-13 | Matsushita Electric Ind Co Ltd | 輪郭抽出装置 |
JP3750184B2 (ja) | 1996-04-03 | 2006-03-01 | 松下電器産業株式会社 | 移動物体の抽出装置及び抽出方法 |
JP3678378B2 (ja) * | 1996-09-20 | 2005-08-03 | 富士写真フイルム株式会社 | 異常陰影候補の検出方法および装置 |
JPH10336439A (ja) * | 1997-05-28 | 1998-12-18 | Minolta Co Ltd | 画像読取り装置 |
JPH1156828A (ja) * | 1997-08-27 | 1999-03-02 | Fuji Photo Film Co Ltd | 異常陰影候補検出方法および装置 |
US6031935A (en) * | 1998-02-12 | 2000-02-29 | Kimmel; Zebadiah M. | Method and apparatus for segmenting images using constant-time deformable contours |
JP2001266158A (ja) * | 2000-01-11 | 2001-09-28 | Canon Inc | 画像処理装置、画像処理システム、画像処理方法、及び記憶媒体 |
US20030076319A1 (en) * | 2001-10-10 | 2003-04-24 | Masaki Hiraga | Method and apparatus for encoding and decoding an object |
JP3996015B2 (ja) * | 2002-08-09 | 2007-10-24 | 本田技研工業株式会社 | 姿勢認識装置及び自律ロボット |
-
2004
- 2004-07-01 WO PCT/JP2004/009325 patent/WO2005004060A1/ja active Application Filing
- 2004-07-01 EP EP04746794.9A patent/EP1640917B1/en not_active Expired - Lifetime
- 2004-07-01 JP JP2005511351A patent/JP4523915B2/ja not_active Expired - Fee Related
- 2004-07-01 US US10/563,064 patent/US7418139B2/en not_active Expired - Lifetime
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002092622A (ja) * | 2000-09-14 | 2002-03-29 | Honda Motor Co Ltd | 輪郭抽出装置、輪郭抽出方法、及び輪郭抽出プログラムを記録した記録媒体 |
Non-Patent Citations (2)
Title |
---|
SAKAGUCHI Y. ET AL.: "SNAKE parameter no settei ni tsuite no kento", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS GIJUTSU KENKYU HOKOKU, vol. 90, no. 74, 7 June 1990 (1990-06-07), pages 43 - 49, XP002984776 * |
See also references of EP1640917A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006309455A (ja) * | 2005-04-27 | 2006-11-09 | Tokai Rika Co Ltd | 特徴点検出装置及び距離測定装置 |
JP2015095691A (ja) * | 2013-11-08 | 2015-05-18 | 株式会社リコー | 情報処理装置、情報処理方法およびプログラム |
Also Published As
Publication number | Publication date |
---|---|
EP1640917A1 (en) | 2006-03-29 |
US20070086656A1 (en) | 2007-04-19 |
EP1640917A4 (en) | 2009-09-16 |
US7418139B2 (en) | 2008-08-26 |
EP1640917B1 (en) | 2017-06-14 |
JP4523915B2 (ja) | 2010-08-11 |
JPWO2005004060A1 (ja) | 2006-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11087169B2 (en) | Image processing apparatus that identifies object and method therefor | |
KR20180087994A (ko) | 스테레오 매칭 방법 및 영상 처리 장치 | |
EP2426642A1 (en) | Method, device and system for motion detection | |
US20210343026A1 (en) | Information processing apparatus, control method, and program | |
US9449389B2 (en) | Image processing device, image processing method, and program | |
CN105335955A (zh) | 对象检测方法和对象检测装置 | |
KR100651034B1 (ko) | 대상 물체 검출 시스템 및 그 방법 | |
KR20140045854A (ko) | 단일객체에 대한 기울기를 추정하는 영상을 감시하는 장치 및 방법 | |
WO2008020598A1 (fr) | Dispositif et procédé de détection d'un nombre d'objets | |
US11727637B2 (en) | Method for generating 3D skeleton using joint-based calibration acquired from multi-view camera | |
WO2005004060A1 (ja) | 輪郭抽出装置、輪郭抽出方法及び輪郭抽出プログラム | |
EP2372652B1 (en) | Method for estimating a plane in a range image and range image camera | |
US8243124B2 (en) | Face detection apparatus and distance measurement method using the same | |
KR20140074201A (ko) | 추적 장치 | |
KR20100104272A (ko) | 행동인식 시스템 및 방법 | |
KR20200113743A (ko) | 인체 자세 추정 및 보정을 하는 방법 및 장치 | |
JP2010039617A (ja) | 対象物追跡装置及びプログラム | |
CN102163335B (zh) | 一种无需像机间特征点匹配的多像机网络结构参数自标定方法 | |
KR100994722B1 (ko) | 카메라 핸드오프를 이용한 다중 카메라상의 연속적인 물체추적 방법 | |
KR100792172B1 (ko) | 강건한 대응점을 이용한 기본행렬 추정 장치 및 그 방법 | |
JP2006041939A (ja) | 監視装置及び監視プログラム | |
JP4584405B2 (ja) | 3次元物体検出装置と3次元物体検出方法及び記録媒体 | |
JP7354767B2 (ja) | 物体追跡装置および物体追跡方法 | |
KR101804157B1 (ko) | 개선된 sgm 기반한 시차 맵 생성 방법 | |
JPH07120416B2 (ja) | 高速視覚認識装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005511351 Country of ref document: JP |
|
REEP | Request for entry into the european phase |
Ref document number: 2004746794 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2004746794 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2004746794 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007086656 Country of ref document: US Ref document number: 10563064 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 10563064 Country of ref document: US |