WO2012169119A1 - Object detection frame display device and object detection frame display method - Google Patents
Object detection frame display device and object detection frame display method
- Publication number
- WO2012169119A1 (PCT/JP2012/003148)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- object detection
- detection frame
- frame
- display
- size
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/203—Drawing of straight lines or curves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Definitions
- the present invention relates to an object detection frame display device and an object detection frame display method, and more particularly to a technique for displaying an object detection frame such as a face detection frame on a display in an imaging apparatus such as a digital camera.
- In an imaging apparatus such as a digital camera, an area such as a person or a face is detected from a captured image, and this area is surrounded by a frame (hereinafter referred to as an object detection frame) and displayed on a display, as shown in FIG. 1 (see, for example, Patent Document 1).
- By displaying the object detection frame, the user can instantly determine where a target such as a person or a face (hereinafter also referred to as a detection target object) is located in the subject image, and can smoothly perform operations such as placing the target at the center of the captured image. Further, in an imaging apparatus that performs automatic focus control (AF) or automatic exposure control (AE) on the target surrounded by an object detection frame, the user can recognize the area on which focus and exposure are based from the object detection frame.
- Patent Document 2 describes a technique for detecting a face in a captured image.
- In this technique, an index value (score) indicating the similarity between a face sample image obtained in advance by learning and the captured image is calculated, and an image area whose index value is equal to or greater than a threshold is detected as a face image candidate area.
- In general, a plurality of candidate areas, that is, a candidate area group, is detected around the same face image. Whether these candidate areas correspond to the same face image is decided by a further threshold determination on the candidate area group, and the candidate areas judged to belong to the same face are then integrated.
- In such processing, object detection candidate frames are formed around the target object by raster-scanning the input image with an object detector. Then, a final integrated frame is formed by integrating neighboring object detection candidate frames, and this final integrated frame is displayed. Specifically, grouping is performed using the score of each detection candidate frame and the like, and the grouped nearby detection candidate frames are integrated and displayed. As a result, an object detection frame (the final integrated frame) surrounding the target object is displayed.
- However, when a plurality of detection target objects are close to each other, the final integrated frame may fail to separate, and a single final integrated frame is formed and displayed between the plurality of detection target objects. In this case, the final integrated frame cannot contain the detection target objects, and the appearance deteriorates.
- FIG. 1 shows a specific example.
- FIGS. 1A, 1B, 1C, and 1D show time-series images obtained by imaging substantially the same position, in the order FIG. 1A → FIG. 1B → FIG. 1C → FIG. 1D.
- the object detection frame display device detects two persons in the captured image.
- In the figure, a rectangular frame indicated by a thin line is an object detection candidate frame, and a rectangular frame indicated by a thick line is a final integrated frame. What is actually displayed is the captured image and the final integrated frame superimposed on it; the object detection candidate frames may or may not be displayed.
- FIGS. 1A and 1D show cases where the final integrated frames have been successfully separated. In these successful cases, each final integrated frame is displayed so as to contain its detection target person.
- FIGS. 1B and 1C show cases where separation of the final integrated frame has failed, and a final integrated frame is displayed between the two persons. In these failure cases, the final integrated frame cannot contain the persons to be detected. Therefore, as can be seen from FIGS. 1B and 1C, the final integrated frame looks bad in relation to the detection target objects.
- One way to solve this problem is to devise the integration algorithm used when forming the final integrated frame. However, doing so complicates the algorithm, which increases the processing amount and the complexity of the configuration.
- The present invention has been made in view of the above points, and an object of the present invention is to provide an object detection frame display device and an object detection frame display method capable of displaying an object detection frame that is easy for the user to see, with a relatively small amount of processing.
- One aspect of the object detection frame display device obtains first object detection frames indicating a region of a detection target object from an input image, and obtains a second object detection frame by integrating the first object detection frames that are presumed to be object detection frames related to the same detection target object.
- One aspect of the object detection frame display method likewise obtains first object detection frames indicating a region of a detection target object from an input image, and obtains a second object detection frame by integrating the first object detection frames that are presumed to be object detection frames related to the same detection target object.
- According to the present invention, an object detection frame that is easy for the user to view can be displayed with a relatively small amount of processing.
- FIG. 2 is a block diagram illustrating a configuration of an object detection frame display device according to the first embodiment.
- A diagram used for describing the third object detection frame (inclusive frame); a diagram showing processing by the multiple object presence estimation unit and the display frame forming unit
- A diagram illustrating the object detection frame formation processing according to Embodiment 1; a diagram showing a display example of the object detection frame according to Embodiment 1
- FIG. 3 is a block diagram illustrating a configuration of an object detection frame display device according to a second embodiment.
- FIG. 9 is a block diagram illustrating a configuration of an object detection frame display device according to a third embodiment.
- FIG. 10 is a diagram for explaining the object detection frame formation processing by the display frame forming unit according to the third embodiment, in particular the processing when the determined number of object detection frames and the number of object detection frame candidate positions do not match; a diagram illustrating the object detection frame formation processing according to Embodiment 3
- FIG. 2 shows a configuration of the object detection frame display device according to Embodiment 1 of the present invention.
- the object detection frame display device 100 is provided in, for example, a digital camera, an in-vehicle navigation device, a surveillance camera system, or the like.
- the object detection frame display device 100 inputs an image to the image input unit 101.
- the input image is, for example, an image captured by a digital camera, an in-vehicle navigation device, a surveillance camera system, or the like.
- the image input unit 101 sends the input image to the display unit 110 and the object detection frame calculation unit 102.
- The object detection frame calculation unit 102 obtains first object detection frames (object detection candidate frames) indicating a region of the detection target object by performing pattern recognition processing on the input image, and further obtains a second object detection frame by integrating the first object detection frames that are presumed to be object detection frames related to the same detection target object.
- the object detection frame calculation unit 102 obtains the second object detection frame by grouping the first object detection frames into clusters.
- the first object detection frame is a frame indicated by a thin line in FIG.
- the second object detection frame is a frame indicated by a thick line in FIG.
- the object detection frame calculation unit 102 obtains the first object detection frame and the second object detection frame by adopting a process as described in Patent Document 2, for example.
- the first object detection frame is a rectangular frame that surrounds the partial image region whose index value indicating similarity to the detection target object is equal to or greater than the first threshold value.
- the first object detection frame is a so-called object detection candidate frame, and a plurality of first object detection frames are actually obtained around the detection target object.
- Next, the object detection frame calculation unit 102 takes each region surrounded by a first object detection frame (each candidate region) in turn as an attention candidate region, and when, among the other candidate regions, there is a neighboring candidate region whose coordinate distance from the attention candidate region is equal to or less than a predetermined distance, sets the attention candidate region and the neighboring candidate region as one candidate group.
- Then, for each candidate group, the object detection frame calculation unit 102 calculates a comprehensive index value that reflects the magnitudes of the plurality of index values calculated for the candidate areas constituting the candidate group.
- When the comprehensive index value is equal to or greater than a second threshold value, the object detection frame calculation unit 102 regards the image inside a predetermined area of the input image containing that candidate group as a detection target object image, and forms a second object detection frame surrounding this image.
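The grouping-and-integration step described above can be sketched as follows. This is an illustrative sketch, not the patent's actual implementation: frames are assumed to be (x, y, w, h) rectangles paired with similarity scores, the grouping uses a simple center-distance rule, and the comprehensive index value is taken to be the sum of the member scores.

```python
# Illustrative sketch: forming a second object detection frame from
# first object detection frames (candidate frames). All names and
# thresholds are assumptions, not taken from the patent.

def center(frame):
    x, y, w, h = frame
    return (x + w / 2.0, y + h / 2.0)

def group_candidates(candidates, max_dist):
    """Group candidate frames whose centers lie within max_dist of the
    first member of an existing group (simple single-pass clustering)."""
    groups = []
    for frame, score in candidates:
        cx, cy = center(frame)
        for g in groups:
            gx, gy = center(g[0][0])
            if abs(cx - gx) <= max_dist and abs(cy - gy) <= max_dist:
                g.append((frame, score))
                break
        else:
            groups.append([(frame, score)])
    return groups

def integrate(group, score_threshold):
    """Form a second object detection frame when the comprehensive index
    value (here: the sum of member scores) clears the second threshold;
    the integrated frame is the average of the member frames."""
    total = sum(s for _, s in group)
    if total < score_threshold:
        return None
    n = len(group)
    x = sum(f[0] for f, _ in group) / n
    y = sum(f[1] for f, _ in group) / n
    w = sum(f[2] for f, _ in group) / n
    h = sum(f[3] for f, _ in group) / n
    return (x, y, w, h)
```

For example, two overlapping candidates around one face would fall into one group and yield a single averaged second frame, while a distant candidate forms its own group.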
- Note that the processing performed by the object detection frame calculation unit 102 is not limited to the above-described processing; it suffices to detect an image region having high similarity to a detection target object image (for example, a human image, a face image, or a vehicle image). That is, the present invention is not limited to a particular method of obtaining the first object detection frames and the second object detection frame.
- The object detection frame calculation unit 102 sends the position information of the first object detection frames and the position information of the second object detection frame to the inclusive frame calculation unit 103. In addition, the object detection frame calculation unit 102 sends the position information of the second object detection frame to the multiple object presence estimation unit 104.
- Note that the position information of an object detection frame includes information on the size of its rectangle. That is, the position information of an object detection frame is information that can indicate the position of the entire object detection frame. The same applies to the position information of the object detection frames described below.
- The inclusive frame calculation unit 103 sets, as the third object detection frame 13, an inclusive frame that contains the first object detection frames 11 that are the basis of each second object detection frame 12.
- The third object detection frame (inclusive frame) 13 may be any frame that, as the name suggests, contains the first object detection frames 11.
- the third object detection frame 13 is, for example, a minimum rectangle that includes the plurality of first object detection frames 11.
- the third object detection frame 13 is, for example, a union of a plurality of first object detection frames 11.
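The minimum-rectangle variant of the inclusive frame can be sketched as below; the (x, y, w, h) frame representation is an assumption for illustration.

```python
# Illustrative sketch of the inclusive frame (third object detection
# frame): the smallest axis-aligned rectangle containing all first
# object detection frames underlying one second object detection frame.

def inclusive_frame(first_frames):
    """Return the minimum (x, y, w, h) rectangle that contains every
    (x, y, w, h) frame in first_frames."""
    left   = min(x for x, y, w, h in first_frames)
    top    = min(y for x, y, w, h in first_frames)
    right  = max(x + w for x, y, w, h in first_frames)
    bottom = max(y + h for x, y, w, h in first_frames)
    return (left, top, right - left, bottom - top)
```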
- The inclusive frame calculation unit 103 sends the obtained position information of the third object detection frame 13 to the multiple object presence estimation unit 104.
- The multiple object presence estimation unit 104 receives the position information of the second object detection frame 12 and the position information of the third object detection frame 13, and examines the relationship between the size of the second object detection frame 12 and the size of the third object detection frame 13. In this way, the multiple object presence estimation unit 104 estimates whether a plurality of detection target objects exist in the vicinity of the second object detection frame 12. It then sends estimation result information indicating whether or not a plurality of objects exist, the position information of the second object detection frame 12, and the position information of the third object detection frame 13 to the display frame forming unit 105.
- the display frame forming unit 105 forms an object detection frame to be displayed (hereinafter referred to as a display object detection frame).
- When the display frame forming unit 105 receives, from the multiple object presence estimation unit 104, estimation result information indicating that a plurality of detection target objects do not exist in the vicinity of the second object detection frame 12, it outputs the second object detection frame 12 as the display object detection frame.
- On the other hand, when the display frame forming unit 105 receives estimation result information indicating that a plurality of detection target objects exist in the vicinity of the second object detection frame 12, it forms and outputs, as the display object detection frame, a frame obtained by enlarging the second object detection frame 12.
- FIG. 4 shows a state of processing by the multiple object existence estimation unit 104 and the display frame formation unit 105.
- a fine dotted line in the figure indicates the second object detection frame 12
- a rough dotted line indicates the third object detection frame 13
- a solid line indicates the display object detection frame 14.
- FIG. 4A shows an example of the second object detection frame 12 and the third object detection frame 13 input to the multiple object presence estimation unit 104. In the figure, four examples are shown.
- FIG. 4B shows a state of the display object detection frame 14 formed by the display frame forming unit 105.
- Here, the vertical and horizontal lengths of the third object detection frame 13 are A_H and A_W, respectively, and the vertical and horizontal lengths of the second object detection frame 12 are B_H and B_W, respectively.
- The multiple object presence estimation unit 104 estimates that a plurality of detection target objects exist in the vicinity of the second object detection frame 12 when the difference in size between the third object detection frame 13 and the second object detection frame 12 (the difference in vertical length and/or in horizontal length) is larger than a threshold value.
- In that case, the display frame forming unit 105 forms a display object detection frame 14 that is centered on the center position of the second object detection frame 12 and has a vertical length of (A_H + B_H) / 2 and a horizontal length of (A_W + B_W) / 2.
- However, the size of the display object detection frame 14 is not limited to this; it may be any size that is greater than or equal to the size of the second object detection frame 12 and less than or equal to the size of the third object detection frame 13.
- In FIG. 4, the example at the left end shows a case where the multiple object presence estimation unit 104 estimates that multiple objects do not exist in the vicinity of the second object detection frame 12.
- In this case, the difference in size between the second object detection frame 12 and the third object detection frame 13 is equal to or smaller than the threshold value, so the display frame forming unit 105 outputs the second object detection frame 12 as the display object detection frame 14, as shown in the leftmost example of FIG. 4B.
- In FIG. 4, the three examples other than the left end show cases where the multiple object presence estimation unit 104 estimates that multiple objects exist in the vicinity of the second object detection frame 12.
- In these cases, the difference in size between the second object detection frame 12 and the third object detection frame 13 is larger than the threshold: in the second example from the left, the horizontal length difference is greater than the threshold; in the third example from the left, the vertical length difference is greater than the threshold; and in the fourth example from the left, both the horizontal and the vertical length differences are greater than the threshold.
- The display frame forming unit 105 therefore forms a display object detection frame 14 between the second object detection frame 12 and the third object detection frame 13, as shown in the examples other than the left end of FIG. 4B. More specifically, the display object detection frame 14 is larger than the second object detection frame 12 and smaller than the third object detection frame 13.
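The estimation and display-frame formation described for FIG. 4 can be sketched as follows. This is an illustrative sketch under assumed conventions: frames are (cx, cy, w, h) tuples centered on (cx, cy), and a single per-axis threshold `th` stands in for the patent's threshold value.

```python
# Illustrative sketch of the multiple object presence estimation and
# the display frame formation using A_H/A_W (third frame) and
# B_H/B_W (second frame). Names and the threshold are assumptions.

def multiple_objects_nearby(third, second, th):
    """Estimate that several objects lie near the second frame when the
    third (inclusive) frame exceeds it by more than th in either axis."""
    a_w, a_h = third[2], third[3]
    b_w, b_h = second[2], second[3]
    return (a_w - b_w) > th or (a_h - b_h) > th

def form_display_frame(third, second, th):
    """Output the second frame as-is, or an enlarged frame centered on
    the second frame with averaged lengths (A + B) / 2 per axis."""
    if not multiple_objects_nearby(third, second, th):
        return second
    cx, cy = second[0], second[1]
    a_w, a_h = third[2], third[3]
    b_w, b_h = second[2], second[3]
    return (cx, cy, (a_w + b_w) / 2.0, (a_h + b_h) / 2.0)
```

With a second frame of 40x40 and an inclusive frame of 80x60, the width difference exceeds a threshold of 10, so an enlarged 60x50 display frame centered on the second frame is produced.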
- the display unit 110 superimposes and displays the display object detection frame 14 input from the display frame forming unit 105 on the captured image input from the image input unit 101.
- FIG. 5 is a flowchart showing a processing procedure of the object detection frame display device 100.
- the object detection frame display device 100 inputs an image to the image input unit 101 in step ST1.
- the object detection frame calculation unit 102 calculates the first object detection frame (object detection candidate frame) 11.
- the object detection frame calculation unit 102 calculates the second object detection frame 12 by integrating the first object detection frame 11.
- The inclusive frame calculation unit 103 calculates a third object detection frame (inclusive frame) 13.
- The multiple object presence estimation unit 104 estimates, from the relationship between the size of the second object detection frame (integrated frame) 12 and the size of the third object detection frame (inclusive frame) 13, whether a plurality of detection target objects exist in the vicinity of the second object detection frame 12.
- When the object detection frame display device 100 obtains an estimation result that a plurality of detection target objects exist in the vicinity of the second object detection frame 12 (step ST5; YES), it moves to step ST6, where the display frame forming unit 105 forms a display object detection frame 14 having a shape obtained by enlarging the second object detection frame 12, and in step ST7 the display object detection frame 14 is displayed on the display unit 110 together with the captured image.
- When the object detection frame display device 100 obtains an estimation result that a plurality of detection target objects do not exist in the vicinity of the second object detection frame 12 (step ST5; NO), it moves directly to step ST7, and the second object detection frame 12 is displayed on the display unit 110 together with the captured image.
- FIG. 6 shows the relationship between a detection target object (a person in the example in the figure) and each object detection frame in an easy-to-understand manner.
- the upper diagram shows the relationship among the detection target object, the second object detection frame (integrated frame) 12, and the third object detection frame (internal frame) 13.
- the lower diagram shows the relationship between the detection target object and the finally displayed display object detection frame 14.
- FIG. 6A shows an ideal state in which each of the second object detection frames 12 accurately surrounds each person.
- In this case, the second object detection frame 12 is displayed as it is as the display object detection frame 14.
- FIG. 6B shows a state where there is a person protruding from the second object detection frame 12 because the second object detection frame 12 is inaccurate.
- In this case, a display object detection frame 14 formed by enlarging the second object detection frame 12 is displayed.
- As a result, a person who would protrude from the frame if the second object detection frame 12 were displayed as it is can be surrounded by the display object detection frame 14.
- Note that the inaccuracy of the second object detection frame 12 can be determined from the fact that the size of the second object detection frame 12 is equal to or smaller than a threshold relative to the size of the third object detection frame 13.
- the second object detection frame 12-1 on the left is accurate, but the second object detection frame 12-2 on the right is inaccurate.
- Therefore, the second object detection frame 12-1 on the left is displayed as it is as the display object detection frame 14-1, while the second object detection frame 12-2 on the right is enlarged and displayed as the display object detection frame 14-2.
- As a result, a person who would protrude from the frame if the right second object detection frame 12-2 were displayed as it is can be surrounded by the display object detection frame 14-2.
- The fact that the right second object detection frame 12-2 is inaccurate can be determined from the fact that its size is equal to or smaller than a threshold relative to the size of the right third object detection frame 13-2.
- FIG. 7 shows an example of an image displayed by the object detection frame display device of the present embodiment.
- FIGS. 7A, 7B, 7C, and 7D show time-series images obtained by imaging substantially the same position, in the order FIG. 7A → FIG. 7B → FIG. 7C → FIG. 7D.
- the object detection frame display device 100 detects two persons in the captured image.
- In the figure, a frame indicated by a thin line is the first object detection frame 11, and a rectangular frame indicated by a thick line is the object detection frame 14 that is finally displayed in the present embodiment.
- In FIG. 7, which is a display example of the present embodiment, separation of the second object detection frame 12 succeeds in the time-series images of FIGS. 7A and 7D, as in FIG. 1, which is a conventional display example. Therefore, as in FIGS. 1A and 1D, the second object detection frame (described as the final integrated frame in the description of FIG. 1) is displayed as the object detection frame 14 as it is.
- On the other hand, in the remaining time-series images, separation of the second object detection frame 12 has failed (see FIGS. 1B and 1C), so a display object detection frame 14 obtained by enlarging the second object detection frame 12 is displayed. Since the display object detection frame 14 contains the two persons as detection target objects without their protruding, the displayed object detection frame has a better appearance and is easier to see than the second object detection frame (final integrated frame) 12 displayed in FIGS. 1B and 1C.
- As described above, according to the present embodiment, the object detection frame calculation unit 102 obtains first object detection frames 11 indicating the area of a detection target object and obtains a second object detection frame 12 by integrating the first object detection frames 11 that are presumed to be object detection frames related to the same detection target object, and the display frame forming unit 105 forms the object detection frame 14 to be displayed based on the size relationship between the second object detection frame 12 and the third object detection frame 13.
- As a result, even when separation of the second object detection frame 12 fails, the display frame forming unit 105 can form an object detection frame 14 in which the second object detection frame 12 is enlarged. Therefore, it is possible to display an object detection frame 14 that has a good appearance and is easy to see.
- FIG. 8 in which the same reference numerals are assigned to the parts corresponding to FIG. 2 shows the configuration of the object detection frame display device 200 of the present embodiment.
- An object detection frame display device 200 in FIG. 8 includes a display frame integration unit 201 in addition to the configuration of the object detection frame display device 100 in FIG.
- the display frame integration unit 201 inputs the position information of the object detection frame formed by the display frame forming unit 105.
- the display frame integration unit 201 inputs position information of the second object detection frame (including the enlarged second object detection frame) from the display frame forming unit 105.
- The display frame integration unit 201 detects second object detection frames that satisfy the condition that their mutual distance is equal to or smaller than a first threshold value and the ratio of their sizes is equal to or smaller than a second threshold value, integrates the detected second object detection frames to form a display object detection frame containing the plurality of second object detection frames that satisfy this condition, and outputs this to the display unit 110.
- the display frame integration unit 201 outputs the second object detection frame that does not satisfy the above condition to the display unit 110 without integration.
- The condition that "the ratio of the sizes is equal to or less than the threshold" is added to the integration conditions so that, for example, the detection frame of a person at the front of the screen and the detection frame of a person far at the back of the screen are not integrated.
- the display frame integration unit 201 may integrate, for example, the second object detection frames in which some areas overlap each other. This corresponds to the case where the distance threshold is zero.
- the threshold value is not limited to this, and may be set as appropriate.
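The pairwise integration condition above can be sketched as follows; the center-distance metric, the area-based size ratio, and all names are illustrative assumptions rather than the patent's specification.

```python
# Illustrative sketch of the Embodiment 2 integration: two display
# frames are merged into a containing frame when their mutual distance
# is at or below one threshold and their size ratio is at or below
# another. Frames are (x, y, w, h) tuples; conventions are assumed.
import math

def frame_distance(f1, f2):
    """Euclidean distance between the frame centers."""
    c1 = (f1[0] + f1[2] / 2.0, f1[1] + f1[3] / 2.0)
    c2 = (f2[0] + f2[2] / 2.0, f2[1] + f2[3] / 2.0)
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

def size_ratio(f1, f2):
    """Ratio of the larger frame area to the smaller one (>= 1)."""
    a1, a2 = f1[2] * f1[3], f2[2] * f2[3]
    return max(a1, a2) / min(a1, a2)

def integrate_pair(f1, f2, dist_th, ratio_th):
    """Merge the two frames into a single containing frame when both
    conditions hold; otherwise return them unchanged."""
    if frame_distance(f1, f2) <= dist_th and size_ratio(f1, f2) <= ratio_th:
        left = min(f1[0], f2[0])
        top = min(f1[1], f2[1])
        right = max(f1[0] + f1[2], f2[0] + f2[2])
        bottom = max(f1[1] + f1[3], f2[1] + f2[3])
        return [(left, top, right - left, bottom - top)]
    return [f1, f2]
```

The size-ratio condition keeps a large near-person frame from absorbing a small far-person frame even when their centers happen to be close.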
- FIG. 9 shows the state of the integration process performed by the display frame integration unit 201.
- As illustrated in FIG. 9B, when there are second object detection frames 12 whose mutual distance is equal to or smaller than the threshold, the display frame integration unit 201 integrates these second object detection frames 12 to form an object detection frame 15 that contains them.
- the formed object detection frame 15 is displayed on the display unit 110.
- For convenience, FIG. 9B also shows frames other than the object detection frame 15 formed by the display frame integration unit 201; however, in the case of FIG. 9B, the only object detection frame displayed on the display unit 110 is the object detection frame 15.
- FIG. 10 is a flowchart showing a processing procedure of the object detection frame display device 200. In FIG. 10, the same processing steps as those in FIG. 5 are denoted by the same reference numerals as those in FIG. 5. Hereinafter, only the procedure different from that in FIG. 5 will be described.
- The object detection frame display device 200 proceeds to step ST10 after the display frame forming unit 105 forms, in step ST6, the display object detection frame 14 having a shape obtained by enlarging the second object detection frame 12.
- In step ST10, the display frame integration unit 201 performs the above-described distance determination for each of the second object detection frames 12 (including the enlarged display object detection frames 14), thereby determining whether there are object detection frames to be integrated.
- For the second object detection frames 12 and 14 whose distance is greater than the threshold, the display frame integration unit 201 obtains a negative result in step ST10 (step ST10; NO), and outputs the second object detection frames 12 and 14 directly to the display unit 110 without integrating them. Accordingly, the second object detection frames 12 and 14 are displayed as they are in step ST7.
- For the second object detection frames 12 and 14 whose distance is equal to or less than the threshold, the display frame integration unit 201 obtains an affirmative result in step ST10 (step ST10; YES) and proceeds to step ST11.
- In step ST11, the display frame integration unit 201 integrates the second object detection frames 12 and 14 whose distance is equal to or smaller than the threshold, thereby forming an object detection frame 15 that contains them, and outputs the integrated object detection frame 15 to the display unit 110. Thus, the integrated object detection frame 15 is displayed in step ST7.
- FIG. 11 shows the object detection frames displayed in this embodiment in an easy-to-understand manner. Comparing it with FIG. 6 described in Embodiment 1 makes the features of the object detection frames displayed in the present embodiment clear; therefore, the differences from FIG. 6 are described below.
- Since the distance between these object detection frames 12 is equal to or less than the threshold, the display frame integration unit 201 integrates them to form an object detection frame 15, which is displayed as shown in the lower part.
- The display frame forming unit 105 enlarges the second object detection frame 12 to form an object detection frame 14, which is displayed as shown in the lower part without being integrated.
- The display frame forming unit 105 enlarges the second object detection frames 12 to form object detection frames 14, and the plurality of object detection frames 14 are integrated and displayed as an object detection frame 15, as shown in the lower part.
- FIG. 12 shows an example of an image displayed by the object detection frame display device 200 of the present embodiment.
- FIG. 12A, FIG. 12B, FIG. 12C, and FIG. 12D show time-series images obtained by imaging substantially the same position, in the order FIG. 12A → FIG. 12B → FIG. 12C → FIG. 12D.
- A frame indicated by a thin line in the drawing is the first object detection frame 11, and a rectangular frame indicated by a thick line is the object detection frame 15 that is finally displayed in the present embodiment.
- FIG. 12, which is a display example of the present embodiment, can be compared with FIG. 7, which is a display example of Embodiment 1.
- In the time-series images of FIGS. 12B and 12C, there is no object detection frame whose distance from the object detection frame 14 is equal to or less than the threshold, so the object detection frame 14 is displayed as the object detection frame 15 without being integrated.
- As described above, according to the present embodiment, the display frame integration unit 201 that integrates adjacent second object detection frames 12 and 14 is provided. Since this suppresses the number of object detection frames 15 displayed in the time-series images, object detection frames 15 that are easier to view can be displayed.
- When the configuration of Embodiment 1 is adopted, object detection frames from which the detected object does not protrude excessively can be formed, but in the time-series images the number of object detection frames in the same object region may change frequently, for example alternating between two and one. According to the configuration of the present embodiment, this can be prevented, and fluctuation in the number of object detection frames for the same detected object can be suppressed in the time-series images.
- Object detection frames whose sizes are similar (that is, whose size ratio is equal to or less than a threshold) and which overlap (that is, whose mutual distance is equal to or less than a threshold) cause flicker. Such object detection frames are eliminated by integration, so that flicker can be suppressed.
- FIG. 13, in which parts corresponding to those in FIG. 2 are assigned the same reference numerals, shows the configuration of the object detection frame display device 300 of the present embodiment.
- The object detection frame display device 300 in FIG. 13 differs from the object detection frame display device 100 in FIG. 2 in that it has a display frame forming unit 301 in place of the display frame forming unit 105.
- When the display frame forming unit 301 receives, from the multiple object presence estimation unit 104, estimation result information indicating that a plurality of detection target objects do not exist in the vicinity of the second object detection frame 12, it outputs the second object detection frame 12 as the display object detection frame. On the other hand, when the display frame forming unit 301 receives estimation result information indicating that a plurality of detection target objects exist in the vicinity of the second object detection frame 12, it forms a plurality of object detection frames inside the third object detection frame 13 as the display object detection frames and displays them.
- Specifically, the display frame forming unit 301 determines the number of display object detection frames to be formed inside the third object detection frame 13 based on the ratio of the size of the second object detection frame 12 to the size of the third object detection frame 13.
- In FIG. 14, a fine dotted line indicates the second object detection frame 12, a rough dotted line indicates the third object detection frame 13, and a solid line indicates the display object detection frame 16.
- The number of display object detection frames 16 to be formed is determined by threshold judgment on the area ratio between the third object detection frame 13 and the second object detection frame 12.
- Here, as shown in FIG. 14A, let the vertical and horizontal lengths of the third object detection frame 13 be A_H and A_W, respectively, and let the vertical and horizontal lengths of the second object detection frame 12 be B_H and B_W, respectively.
- Then, the area ratio is R = (A_W × A_H) / (B_W × B_H).
- the number of display object detection frames 16 is determined by comparing this area ratio with a predetermined threshold value.
- For example, threshold values TH1, TH2, TH3, and TH4 satisfying TH1 > TH2 > TH3 > TH4 are set.
- The number of object detection frames 16 may then be determined as, for example, one when TH1 < R, two when TH1 ≥ R > TH2, three when TH2 ≥ R > TH3, and four when TH3 ≥ R > TH4.
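As a hedged sketch, the area-ratio computation and the threshold comparison described above can be written as follows; the concrete threshold values are placeholders, and returning four frames when R is at or below TH4 is an assumption, since the text does not specify that case.

```python
def area_ratio(a_w, a_h, b_w, b_h):
    # R = (A_W x A_H) / (B_W x B_H), as defined in the text.
    return (a_w * a_h) / (b_w * b_h)

def frame_count(r, th=(4.0, 3.0, 2.0, 1.0)):
    # Number of display object detection frames 16 per the stated rule:
    # 1 if TH1 < R, 2 if TH1 >= R > TH2, 3 if TH2 >= R > TH3,
    # 4 if TH3 >= R > TH4. The threshold values here are placeholders,
    # and returning 4 when R <= TH4 is an assumed fallback.
    th1, th2, th3, th4 = th
    if r > th1:
        return 1
    if r > th2:
        return 2
    if r > th3:
        return 3
    return 4
```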
- FIG. 14B shows an example in which the number of display object detection frames 16 is two.
- Regarding the size of each object detection frame 16, its vertical and horizontal lengths are B_H and B_W, respectively. That is, each object detection frame 16 has the same size as the second object detection frame 12. In other words, each object detection frame 16 is a copy of the second object detection frame 12.
- However, the determined number of object detection frames 16 may not match the number of candidate positions for the object detection frames 16. Specifically, there is no problem when the detected objects (persons) are close to each other in only the horizontal direction or only the vertical direction, but the numbers may not match when the detected objects are close in both the vertical and horizontal directions.
- The reason and a countermeasure will be described with reference to FIG. 15.
- FIG. 15A shows a case where the determined number of object detection frames 16 and the number of positions of the object detection frames 16 match, and there is no problem in such a case.
- In the case of FIG. 15B, the question arises of whether the number of object detection frames 16 should be three or four (in practice, three is preferable).
- In this case, positions obtained by dividing A_W and A_H into X+1 and Y+1 equal parts, respectively, are set as candidate points for the center positions of the object detection frames 16 to be finally displayed. If the number of candidate points matches the number of object detection frames determined from the area ratio, object detection frames 16 centered on the candidate points are formed and displayed as they are.
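The candidate-point construction can be sketched as below, taking X = A_W / B_W and Y = A_H / B_H as given elsewhere in the specification; the (left, top, width, height) frame representation and the rounding of X and Y down to integers are assumptions for illustration.

```python
def candidate_centers(third_frame, b_w, b_h):
    # Candidate center positions: the points dividing A_W into X+1 and
    # A_H into Y+1 equal parts, where X = A_W / B_W and Y = A_H / B_H
    # (rounded down here as an assumption).
    left, top, a_w, a_h = third_frame
    x = int(a_w // b_w)
    y = int(a_h // b_h)
    xs = [left + i * a_w / (x + 1) for i in range(1, x + 1)]
    ys = [top + j * a_h / (y + 1) for j in range(1, y + 1)]
    return [(cx, cy) for cy in ys for cx in xs]
```

For the FIG. 14B case (X = 2, Y = 1) this yields two candidate points, matching the two displayed frames.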
- If the numbers do not match, the overlapping area between the region of the object detection frame 16 centered on each candidate point and the region of the first object detection frames 11 from which the third object detection frame 13 was obtained is computed, and candidate points are adopted in descending order of overlapping area.
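A minimal sketch of this overlap-based selection, under the same assumed frame representation: each candidate point hosts a B_W × B_H box, and the overlap with the first object detection frames is approximated as the sum of per-frame overlaps (an assumption; the text refers to the region of the first frames, which may require a union if those frames overlap one another).

```python
def rect_overlap(a, b):
    # Overlapping area of two (left, top, width, height) rectangles.
    w = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    h = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def select_candidates(candidates, first_frames, b_w, b_h, count):
    # Keep the `count` candidate centers whose B_W x B_H boxes overlap
    # the first object detection frames the most (descending order).
    def score(center):
        cx, cy = center
        box = (cx - b_w / 2.0, cy - b_h / 2.0, b_w, b_h)
        return sum(rect_overlap(box, f) for f in first_frames)
    return sorted(candidates, key=score, reverse=True)[:count]
```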
- For example, suppose that the region of the first object detection frames 11 from which the third object detection frame 13 was obtained is the shaded region shown in FIG. 15C. The object detection frame 16-1 formed around the candidate point K1 has only a small overlap with the shaded region in FIG. 15C, so the object detection frame 16-1 may be excluded from the object detection frames to be finally displayed.
- In this way, the number of frames to be finally displayed can be made to match the number of object detection frames determined from the area ratio, with only the appropriate candidate points remaining among the plurality of candidate points, so that the object detection frames 16-2, 16-3, and 16-4 can be formed (see FIG. 15B).
- FIG. 16 illustrates, in an easy-to-understand manner, the object detection frames 16 displayed in the present embodiment. Comparing it with FIG. 6 described in Embodiment 1 clarifies the features of the object detection frames 16 displayed in the present embodiment, so the differences from FIG. 6 are described below.
- Since the size of the second object detection frame 12 in relation to the size of the third object detection frame 13 is equal to or greater than the threshold, the second object detection frame 12 is displayed as it is as the display object detection frame 16, as shown in the lower part.
- Since the size of the second object detection frame 12 in relation to the size of the third object detection frame 13 is less than the threshold, a plurality of object detection frames 16 are formed inside the third object detection frame 13, as shown in the lower part.
- The size of the left second object detection frame 12-1 in relation to the third object detection frame 13-1 is equal to or greater than the threshold, whereas the size of the right second object detection frame 12-2 in relation to the third object detection frame 13-2 is less than the threshold. Therefore, as shown in the lower part, the left second object detection frame 12-1 is displayed as it is as a display object detection frame 16, while for the right second object detection frame 12-2, a plurality of object detection frames 16 are formed inside the third object detection frame 13-2 and displayed.
- As described above, according to the present embodiment, the display frame forming unit 301 forms a plurality of object detection frames 16 inside the third object detection frame 13 when the size of the second object detection frame 12 in relation to the size of the third object detection frame 13 is less than the threshold. Further, the number of display object detection frames 16 formed inside the third object detection frame 13 is determined based on the ratio of the size of the second object detection frame 12 to the size of the third object detection frame 13.
- Since this suppresses fluctuation of the displayed object detection frames 16 in the time-series images, object detection frames 16 that are easier to view can be displayed.
- The components other than the image input unit 101 and the display unit 110 in the object detection frame display devices 100, 200, and 300 of the above embodiments can be configured by a computer, such as a personal computer, including a memory and a CPU.
- The function of each component can then be realized by the CPU reading and executing a computer program stored in the memory.
- The present invention is suitable when image recognition processing is performed on a captured image obtained by, for example, a digital camera or an in-vehicle camera.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
FIG. 2 shows the configuration of the object detection frame display device according to Embodiment 1 of the present invention. The object detection frame display device 100 is provided in, for example, a digital camera, an in-vehicle navigation device, or a surveillance camera system.
FIG. 8, in which parts corresponding to those in FIG. 2 are assigned the same reference numerals, shows the configuration of the object detection frame display device 200 of the present embodiment. The object detection frame display device 200 of FIG. 8 has a display frame integration unit 201 in addition to the configuration of the object detection frame display device 100 of FIG. 2.
FIG. 13, in which parts corresponding to those in FIG. 2 are assigned the same reference numerals, shows the configuration of the object detection frame display device 300 of the present embodiment. Compared with the object detection frame display device 100 of FIG. 2, the object detection frame display device 300 of FIG. 13 differs in that the display frame forming unit 105 is replaced by the display frame forming unit 301.
The number of display object detection frames 16 to be formed is determined by threshold judgment on the area ratio between the third object detection frame 13 and the second object detection frame 12. Here, as shown in FIG. 14A, let the vertical and horizontal lengths of the third object detection frame 13 be A_H and A_W, respectively, and those of the second object detection frame 12 be B_H and B_W, respectively. Then the area ratio is R = (A_W × A_H) / (B_W × B_H). The number of display object detection frames 16 is determined by comparing this area ratio with predetermined threshold values. For example, threshold values TH1, TH2, TH3, and TH4 satisfying TH1 > TH2 > TH3 > TH4 are set. The number of object detection frames 16 may then be determined as one when TH1 < R, two when TH1 ≥ R > TH2, three when TH2 ≥ R > TH3, and four when TH3 ≥ R > TH4. FIG. 14B shows an example in which the number of display object detection frames 16 is two.
The vertical and horizontal lengths of each object detection frame 16 are B_H and B_W, respectively. That is, each object detection frame 16 has the same size as the second object detection frame 12; in other words, each object detection frame 16 is a copy of the second object detection frame 12.
The position of each object detection frame 16 is determined by setting X = A_W / B_W and Y = A_H / B_H and taking as center positions the points that divide the horizontal length A_W and the vertical length A_H of the third object detection frame 13 into X+1 and Y+1 equal parts, respectively. The example of FIG. 14B corresponds to X = 2 and Y = 1, in which object detection frames 16 centered on the positions dividing A_W into 2+1 = 3 equal parts and A_H into 1+1 = 2 equal parts are formed and displayed.
12 Second object detection frame
13 Third object detection frame
14, 15, 16 Display object detection frame
100, 200, 300 Object detection frame display device
102 Object detection frame calculation unit
103 Enclosing frame calculation unit
104 Multiple object presence estimation unit
105, 301 Display frame forming unit
110 Display unit
201 Display frame integration unit
Claims (8)
- An object detection frame display device comprising: an object detection frame calculation unit that obtains, from an input image, first object detection frames each indicating a region of a detection target object, and further obtains a second object detection frame by integrating first object detection frames inferred to be object detection frames for the same detection target; an enclosing frame calculation unit that obtains, for each second object detection frame, a third object detection frame that encloses the first object detection frames on which that second object detection frame is based; a display frame forming unit that forms an object detection frame to be displayed based on the relationship of the size of the second object detection frame to the size of the third object detection frame; and a display unit that displays the object detection frame formed by the display frame forming unit.
- The object detection frame display device according to claim 1, wherein the display frame forming unit forms an object detection frame obtained by enlarging the second object detection frame when the size of the second object detection frame is less than a threshold in relation to the size of the third object detection frame.
- The object detection frame display device according to claim 1, wherein the size of the object detection frame displayed on the display unit is equal to or larger than the size of the second object detection frame and equal to or smaller than the size of the third object detection frame.
- The object detection frame display device according to claim 1, further comprising an object detection frame integration unit that detects second object detection frames satisfying the conditions that their mutual distance is equal to or less than a first threshold and that the ratio of their sizes is equal to or less than a second threshold, and integrates the detected second object detection frames to form a display object detection frame enclosing the plurality of second object detection frames satisfying the conditions, wherein the display unit displays the display object detection frames formed by the display frame forming unit and the object detection frame integration unit.
- The object detection frame display device according to claim 4, wherein the second object detection frames integrated by the object detection frame integration unit are a plurality of second object detection frames whose regions partially overlap one another.
- The object detection frame display device according to claim 1, wherein the display frame forming unit forms a plurality of object detection frames inside the third object detection frame when the size of the second object detection frame is less than a threshold in relation to the size of the third object detection frame.
- The object detection frame display device according to claim 1, wherein the display frame forming unit determines the number of display object detection frames to be formed inside the third object detection frame based on the ratio of the size of the second object detection frame to the size of the third object detection frame.
- An object detection frame display method comprising: an object detection frame calculation step of obtaining, from an input image, first object detection frames each indicating a region of a detection target object, and further obtaining a second object detection frame by integrating first object detection frames inferred to be object detection frames for the same detection target; an enclosing frame calculation step of obtaining, for each second object detection frame, a third object detection frame that encloses the first object detection frames on which that second object detection frame is based; and a display frame forming step of forming an object detection frame to be displayed based on the relationship of the size of the second object detection frame to the size of the third object detection frame.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/123,610 US9165390B2 (en) | 2011-06-10 | 2012-05-15 | Object detection frame display device and object detection frame display method |
CN201280028590.8A CN103597514B (zh) | 2011-06-10 | 2012-05-15 | 物体检测框显示装置和物体检测框显示方法 |
JP2013519359A JP5923746B2 (ja) | 2011-06-10 | 2012-05-15 | 物体検出枠表示装置及び物体検出枠表示方法 |
EP12797288.3A EP2696326A4 (en) | 2011-06-10 | 2012-05-15 | DEVICE AND METHOD FOR DISPLAYING OBJECT DETECTION FRAME |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-130200 | 2011-06-10 | ||
JP2011130200 | 2011-06-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012169119A1 true WO2012169119A1 (ja) | 2012-12-13 |
Family
ID=47295712
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/003148 WO2012169119A1 (ja) | 2011-06-10 | 2012-05-15 | 物体検出枠表示装置及び物体検出枠表示方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9165390B2 (ja) |
EP (1) | EP2696326A4 (ja) |
JP (1) | JP5923746B2 (ja) |
CN (1) | CN103597514B (ja) |
WO (1) | WO2012169119A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014187451A (ja) * | 2013-03-22 | 2014-10-02 | Dainippon Printing Co Ltd | 画像回転装置、画像回転方法、およびプログラム |
CN104885122A (zh) * | 2012-12-25 | 2015-09-02 | 本田技研工业株式会社 | 车辆周围监测装置 |
WO2020067100A1 (ja) * | 2018-09-26 | 2020-04-02 | 富士フイルム株式会社 | 医用画像処理装置、プロセッサ装置、医用画像処理方法、及びプログラム |
JP7491260B2 (ja) | 2021-04-26 | 2024-05-28 | トヨタ自動車株式会社 | 人検出装置、人検出方法及び人検出用コンピュータプログラム |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9400929B2 (en) | 2012-03-05 | 2016-07-26 | Panasonic Intellectual Property Management Co., Ltd. | Object detection device and method for detecting an object by performing a raster scan on a scan window |
JP5842110B2 (ja) * | 2013-10-10 | 2016-01-13 | パナソニックIpマネジメント株式会社 | 表示制御装置、表示制御プログラム、および記録媒体 |
US20170055844A1 (en) * | 2015-08-27 | 2017-03-02 | Canon Kabushiki Kaisha | Apparatus and method for acquiring object information |
US10878225B2 (en) * | 2016-12-21 | 2020-12-29 | Panasonic Intellectual Property Management Co., Ltd. | Comparison device and comparison method |
JP6907774B2 (ja) * | 2017-07-14 | 2021-07-21 | オムロン株式会社 | 物体検出装置、物体検出方法、およびプログラム |
WO2019114954A1 (en) | 2017-12-13 | 2019-06-20 | Telefonaktiebolaget Lm Ericsson (Publ) | Indicating objects within frames of a video segment |
JP6977624B2 (ja) * | 2018-03-07 | 2021-12-08 | オムロン株式会社 | 物体検出装置、物体検出方法、およびプログラム |
JP7171212B2 (ja) * | 2018-04-02 | 2022-11-15 | キヤノン株式会社 | 情報処理装置、画像表示方法、コンピュータプログラム、及び記憶媒体 |
CN109948497B (zh) * | 2019-03-12 | 2022-01-28 | 北京旷视科技有限公司 | 一种物体检测方法、装置及电子设备 |
JP7200965B2 (ja) * | 2020-03-25 | 2023-01-10 | カシオ計算機株式会社 | 画像処理装置、画像処理方法及びプログラム |
CN113592874B (zh) * | 2020-04-30 | 2024-06-14 | 杭州海康威视数字技术股份有限公司 | 图像显示方法、装置和计算机设备 |
CN115601793B (zh) * | 2022-12-14 | 2023-04-07 | 北京健康有益科技有限公司 | 一种人体骨骼点检测方法、装置、电子设备和存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0335399A (ja) * | 1989-06-30 | 1991-02-15 | Toshiba Corp | 変化領域統合装置 |
JP2007306463A (ja) * | 2006-05-15 | 2007-11-22 | Fujifilm Corp | トリミング支援方法および装置ならびにプログラム |
JP2008252713A (ja) * | 2007-03-30 | 2008-10-16 | Nikon Corp | 撮像装置 |
JP2009060291A (ja) * | 2007-08-30 | 2009-03-19 | Canon Inc | 画像処理装置、画像処理方法及びプログラム |
JP2010039968A (ja) * | 2008-08-08 | 2010-02-18 | Hitachi Ltd | オブジェクト検出装置及び検出方法 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4130641B2 (ja) | 2004-03-31 | 2008-08-06 | 富士フイルム株式会社 | ディジタル・スチル・カメラおよびその制御方法 |
JP4712563B2 (ja) * | 2006-01-16 | 2011-06-29 | 富士フイルム株式会社 | 顔検出方法および装置並びにプログラム |
JP2007274017A (ja) * | 2006-03-30 | 2007-10-18 | Fujifilm Corp | 自動トリミング方法および装置ならびにプログラム |
JP4264663B2 (ja) | 2006-11-21 | 2009-05-20 | ソニー株式会社 | 撮影装置、画像処理装置、および、これらにおける画像処理方法ならびに当該方法をコンピュータに実行させるプログラム |
JP4725802B2 (ja) * | 2006-12-27 | 2011-07-13 | 富士フイルム株式会社 | 撮影装置、合焦方法および合焦プログラム |
JP4639208B2 (ja) | 2007-03-16 | 2011-02-23 | 富士フイルム株式会社 | 画像選択装置、画像選択方法、撮像装置及びプログラム |
FR2915816A1 (fr) * | 2007-09-26 | 2008-11-07 | Thomson Licensing Sas | Procede d'acquisition d'une image a l'aide d'un appareil dont la focale est reglable et appareil d'acquisition d'image associe au procede |
JP4904243B2 (ja) * | 2007-10-17 | 2012-03-28 | 富士フイルム株式会社 | 撮像装置及び撮像制御方法 |
US8406515B2 (en) * | 2009-06-24 | 2013-03-26 | Hewlett-Packard Development Company, L.P. | Method for automatically cropping digital images |
JP5473551B2 (ja) * | 2009-11-17 | 2014-04-16 | 富士フイルム株式会社 | オートフォーカスシステム |
JP5427577B2 (ja) | 2009-12-04 | 2014-02-26 | パナソニック株式会社 | 表示制御装置及び表示画像形成方法 |
US9025836B2 (en) * | 2011-10-28 | 2015-05-05 | Intellectual Ventures Fund 83 Llc | Image recomposition from face detection and facial features |
- 2012
- 2012-05-15 WO PCT/JP2012/003148 patent/WO2012169119A1/ja active Application Filing
- 2012-05-15 JP JP2013519359A patent/JP5923746B2/ja not_active Expired - Fee Related
- 2012-05-15 EP EP12797288.3A patent/EP2696326A4/en not_active Ceased
- 2012-05-15 US US14/123,610 patent/US9165390B2/en not_active Expired - Fee Related
- 2012-05-15 CN CN201280028590.8A patent/CN103597514B/zh not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP2696326A4 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104885122A (zh) * | 2012-12-25 | 2015-09-02 | 本田技研工业株式会社 | 车辆周围监测装置 |
EP2940656A4 (en) * | 2012-12-25 | 2016-10-12 | Honda Motor Co Ltd | DEVICE FOR MONITORING VEHICLE PERIPHERY |
CN104885122B (zh) * | 2012-12-25 | 2017-06-23 | 本田技研工业株式会社 | 车辆周围监测装置 |
US9776564B2 (en) | 2012-12-25 | 2017-10-03 | Honda Motor Co., Ltd. | Vehicle periphery monitoring device |
JP2014187451A (ja) * | 2013-03-22 | 2014-10-02 | Dainippon Printing Co Ltd | 画像回転装置、画像回転方法、およびプログラム |
WO2020067100A1 (ja) * | 2018-09-26 | 2020-04-02 | 富士フイルム株式会社 | 医用画像処理装置、プロセッサ装置、医用画像処理方法、及びプログラム |
JPWO2020067100A1 (ja) * | 2018-09-26 | 2021-09-30 | 富士フイルム株式会社 | 医用画像処理装置、プロセッサ装置、医用画像処理方法、及びプログラム |
JP7137629B2 (ja) | 2018-09-26 | 2022-09-14 | 富士フイルム株式会社 | 医用画像処理装置、プロセッサ装置、医用画像処理装置の作動方法、及びプログラム |
JP7491260B2 (ja) | 2021-04-26 | 2024-05-28 | トヨタ自動車株式会社 | 人検出装置、人検出方法及び人検出用コンピュータプログラム |
Also Published As
Publication number | Publication date |
---|---|
US9165390B2 (en) | 2015-10-20 |
EP2696326A1 (en) | 2014-02-12 |
JPWO2012169119A1 (ja) | 2015-02-23 |
EP2696326A4 (en) | 2015-12-02 |
CN103597514A (zh) | 2014-02-19 |
CN103597514B (zh) | 2016-04-06 |
US20140104313A1 (en) | 2014-04-17 |
JP5923746B2 (ja) | 2016-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5923746B2 (ja) | 物体検出枠表示装置及び物体検出枠表示方法 | |
US9600732B2 (en) | Image display apparatus and image display method | |
WO2016147644A1 (en) | Image processing apparatus, image processing system, method for image processing, and computer program | |
JP5964108B2 (ja) | 物体検出装置 | |
US8995714B2 (en) | Information creation device for estimating object position and information creation method and program for estimating object position | |
US10127702B2 (en) | Image processing device and image processing method | |
JP2011170684A (ja) | 対象物追跡装置、対象物追跡方法、および対象物追跡プログラム | |
US20170104915A1 (en) | Display control apparatus, display control method, and storage medium | |
JP5457606B2 (ja) | 画像処理方法及び装置 | |
US20140362215A1 (en) | Optimum camera setting device and optimum camera setting method | |
JP5756709B2 (ja) | 身長推定装置、身長推定方法、及び身長推定プログラム | |
JP6377970B2 (ja) | 視差画像生成装置及び視差画像生成方法 | |
JP2013215549A (ja) | 画像処理装置、画像処理プログラムおよび画像処理方法 | |
JP5147760B2 (ja) | 画像監視装置 | |
JP5369873B2 (ja) | 判定プログラムおよびキャリブレーション装置 | |
JP2017215666A (ja) | 制御装置、制御方法およびプログラム | |
JP2007219603A (ja) | 人物追跡装置、人物追跡方法および人物追跡プログラム | |
JP2007257132A (ja) | 動き検出方法および動き検出装置 | |
JP2011097217A (ja) | 動き補正装置およびその方法 | |
JP5448952B2 (ja) | 同一人判定装置、同一人判定方法および同一人判定プログラム | |
JP2011071925A (ja) | 移動体追尾装置および方法 | |
US10019635B2 (en) | Vehicle-mounted recognition device | |
JP5587068B2 (ja) | 運転支援装置及び方法 | |
JP2016021712A (ja) | 画像処理装置、及び、運転支援システム | |
JPWO2018155269A1 (ja) | 画像処理装置および方法、並びにプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201280028590.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12797288 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013519359 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2012797288 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012797288 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14123610 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |