WO2011155152A1 - Vehicle periphery monitoring device - Google Patents
Vehicle periphery monitoring device
- Publication number
- WO2011155152A1 (PCT/JP2011/003007)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- head
- image
- edge
- vehicle
- target
- Prior art date
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
Definitions
- The present invention relates to a vehicle periphery monitoring device, and more particularly to detecting the head region of an object such as a pedestrian from an image obtained by an imaging unit mounted on the vehicle.
- Patent Document 1 discloses a periphery monitoring device that monitors the periphery of a vehicle.
- In that device, a part (for example, the head) of an object (for example, a pedestrian) to be monitored is identified from the position of a horizontal edge obtained by applying an edge filter to an image obtained by an imaging unit mounted on the vehicle.
- The distance to the object is calculated from the parallax of the object in the left and right images acquired by a pair of left and right infrared cameras (a stereo camera), and based on this distance information an area (mask area) containing the object to be monitored is set on the image.
- The present invention reduces or eliminates this problem of the prior art: its object is to detect the head region of a target object on the image with high accuracy, without using distance information between the vehicle and the object to be monitored (for example, a pedestrian), and thereby improve the detection accuracy of the object.
- the present invention provides a vehicle periphery monitoring device.
- The periphery monitoring device comprises: imaging means mounted on a vehicle that acquires an image of the vehicle's surroundings; multi-value conversion means that converts the grayscale image acquired by the imaging means into a multi-valued image; horizontal edge detection means that applies a first edge extraction mask to a region of the multi-valued image containing a candidate object and detects a plurality of horizontal edges in the image within the mask; head upper-end detection means that detects, in the image within the first edge extraction mask, the position of the horizontal edge with the maximum edge strength among the plurality of horizontal edges as the position of the upper end of the head of the object; vertical edge detection means that applies a second edge extraction mask extending downward from the position of the upper end of the head and detects a plurality of vertical edges in the image within that mask; and head lower-end detection means that detects the position of the lower end of the head of the object based on changes in the positions of the plurality of vertical edges in the image within the second edge extraction mask.
- This makes it possible to accurately detect the head region of the object on the image and to improve the object identification accuracy.
- In one embodiment, the imaging means consists of a single infrared camera, and the device further comprises distance calculation means that calculates the distance between the vehicle and the object to be monitored based on the size of the predetermined target object on the multi-valued image, or on the change of that size over time.
- Thus, the head region of the object on the image can be detected with high accuracy, and the distance between the vehicle and the monitored object in real space can be calculated.
- FIG. 5 is a diagram for explaining application of a horizontal (first) edge extraction mask according to an embodiment of the present invention.
- FIG. 6 is a diagram for explaining application of a vertical (second) edge extraction mask according to an embodiment of the present invention.
- FIG. 7 is a diagram for explaining calculation of the height, width, and center position of the head.
- FIG. 1 is a block diagram showing a configuration of a vehicle periphery monitoring device according to an embodiment of the present invention.
- The periphery monitoring device is mounted on the vehicle and includes an image processing unit 12 that detects objects around the vehicle based on image data captured by an infrared camera 10, a speaker 14 that issues an audible alarm based on the detection result of the image processing unit 12, and a display device 16 that displays the image obtained by the infrared camera 10 and makes the driver aware of objects around the vehicle.
- the corresponding function provided in the navigation device may be used as the speaker 14 and the display device 16.
- the number of infrared cameras 10 is not limited to one, and two or more infrared cameras may be provided.
- Instead of the infrared camera 10, a camera (a CCD camera or the like) that detects another wavelength band (visible light or the like) may be used.
- The image processing unit 12 in FIG. 1 has the functions indicated by blocks 121 to 129. That is, the image processing unit 12 functions as: multi-value conversion means 121 that converts the grayscale image acquired by the infrared camera 10 into a multi-valued image; region extraction means 122 that extracts a region containing a candidate object from the multi-valued image; horizontal edge detection means 123 that applies a first edge extraction mask to the region and detects a plurality of horizontal edges in the image within the mask; head upper-end detection means 124 that detects, among the plurality of horizontal edges in the image within the first edge extraction mask, the position of the horizontal edge with the maximum edge strength as the position of the upper end of the head of the object; vertical edge detection means 125 that detects a plurality of vertical edges in the image within a second edge extraction mask applied so as to extend downward from the position of the upper end of the head of the object in the multi-valued image; and head lower-end detection means 126 that detects the position of the lower end of the head of the object based on changes in the positions of the plurality of vertical edges in the image within the second edge extraction mask.
- The image processing unit 12 further functions as: head region identification means 127 that identifies the head region of the object in the multi-valued image based on the interval between the position of the upper end of the head and the position of the lower end of the head; object determination means 128 that determines whether the object is a predetermined object to be monitored based on the multi-valued image including at least the head region of the object; and distance calculation means 129 that calculates the distance between the vehicle and the monitored object in real space based on the size of the predetermined monitored object on the image or the change of that size over time.
- The image processing unit 12 also has a function to receive and process detection signals from a vehicle speed sensor that detects the speed of the host vehicle, a brake sensor, a yaw rate sensor (which detects the rate of change of the rotation angle in the turning direction), and the like.
- each block is realized by a computer (CPU) included in the image processing unit 12.
- the configuration of the image processing unit 12 may be incorporated in the navigation device.
- The image processing unit 12 includes, for example, an A/D conversion circuit that converts input analog signals into digital signals, an image memory that stores the digitized image signal, a central processing unit (CPU) that performs various arithmetic processing, RAM used by the CPU to store data during computation, ROM that stores the programs executed by the CPU and the data they use (including tables and maps), and an output circuit that outputs drive signals for the speaker 14, display signals for the display device 16, and the like. The output signal of the infrared camera 10 is converted into a digital signal and input to the CPU.
- FIG. 2 is a diagram for explaining the mounting position of the infrared camera 10 shown in FIG. 1 according to one embodiment of the present invention.
- the infrared camera 10 is disposed on the front bumper portion of the vehicle 20 and at the center in the vehicle width direction.
- the infrared camera 10 has a characteristic that the output signal level increases (the luminance increases) as the temperature of the object increases.
- Reference numeral 16 a in FIG. 2 shows an example in which a head-up display (hereinafter referred to as “HUD”) is used as the display device 16.
- The HUD 16a is provided so that its screen is displayed at a position on the front windshield of the vehicle 20 that does not obstruct the driver's forward view.
- FIG. 3 is a processing flow executed by the image processing unit 12 according to an embodiment of the present invention. This processing flow is executed at predetermined time intervals by the CPU of the image processing unit 12 calling a processing program stored in the memory.
- In the following, the case where a black-and-white image is obtained by binarizing the acquired grayscale image is described as an example.
- multi-value quantization of three or more values may be performed. In that case, the number of threshold values to be set increases, but a multi-valued image can be obtained by performing basically the same processing as in the case of binarization.
- In step S10, the analog signal of the infrared image, which is the output signal for each frame captured by the infrared camera 10, is input, and the grayscale image obtained by A/D converting the analog signal is stored in memory.
- In step S11, the obtained grayscale image is binarized (pixels with luminance at or above a threshold are set to “1” (white), and pixels below the threshold are set to “0” (black)).
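As an illustration only, a minimal Python sketch of this binarization step follows (NumPy assumed; the threshold value and the synthetic frame are hypothetical, not taken from the patent):

```python
import numpy as np

def binarize(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Return a binary image: pixels >= threshold become 1 (white), others 0 (black)."""
    return (gray >= threshold).astype(np.uint8)

# Example: a stand-in for an 8-bit grayscale frame from the infrared camera
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
binary = binarize(frame, threshold=128)
```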
- In step S12, the “1” (white) pixels of the binarized image are converted into run-length data for each scanning line in the X direction (horizontal direction); lines with portions overlapping in the Y direction are regarded as one object, and the circumscribed rectangle of each such object is labeled as a target object candidate area.
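A hedged sketch of this candidate-area extraction follows; it uses SciPy's connected-component labeling as a stand-in for the run-length merge described above, since 8-connected labeling subsumes merging lines that overlap in the Y direction (the function and variable names are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def candidate_regions(binary: np.ndarray):
    """Label connected white regions and return the circumscribed rectangle of each."""
    # 8-connected components: white runs overlapping in Y end up in one label
    labels, n = ndimage.label(binary, structure=np.ones((3, 3), dtype=int))
    slices = ndimage.find_objects(labels)
    # Each (rows, cols) slice pair is the bounding box of one candidate object
    return [(s[0].start, s[1].start, s[0].stop, s[1].stop) for s in slices]
```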
- FIG. 4 is a diagram showing a routine (processing flow) for extracting the head of the object.
- FIG. 5 is a diagram for explaining application of the edge extraction mask.
- FIG. 5(a) shows a binarized image including the target object candidate region 22. The region 22 contains an image that looks like a human body (a pedestrian), divided into two parts: a head 23 and a torso 24.
- An edge extraction mask, indicated by reference numeral 25, is applied to the image. Specifically, an edge filter with a noise-removing property, such as a Sobel or Prewitt filter, is applied to the pixels within the edge extraction mask 25.
- In step S132, the horizontal edges 26 in the image within the edge extraction mask 25 are detected.
- The horizontal edges are detected by a conventional method, for example based on whether the output value of the edge filter exceeds a predetermined threshold.
- In step S133, the pixel position with the highest edge strength among the horizontal edges 26 is detected as the position of the upper end of the head.
- In the example of FIG. 5, the position of the point PT in (b) is the position of the upper end of the head 23.
- In this way, the position of the upper end of the head of the object can be specified from the horizontal edges (their edge strengths) detected within the edge extraction mask.
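The head upper-end detection can be sketched as follows. A simple row-difference filter stands in for the Sobel/Prewitt filter named above, and the mask coordinates are hypothetical:

```python
import numpy as np

def head_top_row(gray: np.ndarray, mask_box) -> int:
    """Detect the row of the strongest horizontal edge inside the mask.

    A horizontal edge is a strong luminance change in the vertical direction,
    so we sum the absolute vertical gradient per row and take the peak row.
    """
    top, left, bottom, right = mask_box        # hypothetical mask coordinates
    roi = gray[top:bottom, left:right].astype(np.int32)
    grad_y = np.abs(np.diff(roi, axis=0))      # simple vertical-gradient edge filter
    strength = grad_y.sum(axis=1)              # edge strength per row
    return top + int(np.argmax(strength))      # image row of the head upper end PT
```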
- an edge extraction mask for detecting vertical edges is applied to the image of the target object candidate area.
- FIG. 6 is a diagram for explaining application of the edge extraction mask.
- FIG. 6(a) shows a binarized image including the target object candidate area 22, as in FIG. 5.
- The edge extraction mask 28 is applied as a mask extending downward from the position PT of the upper end of the head 23 detected in step S133.
- A filter that extracts luminance differences of a predetermined gradation or greater is applied to the pixels within the edge extraction mask 28, and noise components are removed as necessary.
- In step S135, the vertical edges 29 in the image within the edge extraction mask 28 are detected.
- The vertical edges are detected by a conventional method, for example based on whether the output value of the edge filter exceeds a predetermined threshold.
- In step S136, when the change in the positions of the pixels in which the vertical edges 29 are detected matches a predetermined pattern, that pixel position is detected as the position of the lower end of the head 23.
- In the example of FIG. 6, that pixel position is detected as the position PB of the lower end of the head 23.
- one area (square) indicated by reference numeral 30 represents one pixel.
- The pattern in (b) is merely an example; any pattern that can extract the position of the lower end of the head 23 can be adopted as the predetermined pattern.
- In this way, the position of the lower end of the head (the shoulder) of the object can be specified from the positional change (pattern) of the pixels containing the vertical edges detected in the edge extraction mask.
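A possible sketch of the head lower-end detection follows. The "predetermined pattern" is not spelled out at this point, so the sketch uses one plausible choice, a sudden widening of the vertical-edge span at the shoulders; the thresholds are guesses:

```python
import numpy as np

def head_bottom_row(gray: np.ndarray, pt_row: int, col_range, max_rows: int = 40) -> int:
    """Scan downward from the head top PT and detect the shoulder line.

    For each row we count columns containing a vertical edge (a strong
    horizontal luminance change); a sudden widening of that count is taken
    as the pattern marking the head lower end PB.
    """
    left, right = col_range
    roi = gray[pt_row:pt_row + max_rows, left:right].astype(np.int32)
    grad_x = np.abs(np.diff(roi, axis=1))      # horizontal gradient -> vertical edges
    edge_cols = (grad_x > 30).sum(axis=1)      # edge-column count per row; 30 is a guess
    widening = np.diff(edge_cols)
    jumps = np.where(widening > 5)[0]          # widening threshold is also a guess
    return pt_row + (int(jumps[0]) + 1 if len(jumps) else max_rows)
```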
- In step S137, the height, width, and center position of the head 23 are calculated.
- FIG. 7 is a diagram for explaining calculation of the height, width, and center position of the head 23.
- First, a mask area 30 is set in the area specified by the previously detected upper end PT and lower end PB of the head 23.
- the height of the head 23 is calculated as the height of the mask region 30, that is, the interval h between the upper end PT and the lower end PB of the head 23.
- Next, the vertical edges 29 are extracted within the set mask area 30; edge pixels with no continuity are removed as noise. The number of consecutive edge pixels is then counted for each X line, and the head width is calculated by taking the pixel positions where this count exceeds a predetermined threshold as the two ends of the head. More specifically, the mask area 30 is divided into left and right halves, each half is searched from the outside inward, and the first pixel position in each half where the total of the edge points satisfies a predetermined condition is taken as an end of the head. In the example of FIG. 7, the positions Xa and Xb in (b) are the positions of the two ends of the head 23, and the head width W is calculated from the interval between them. The center position of the head 23 is calculated as the center pixel position of the calculated head width W; in the example of FIG. 7, the position PC is the center position.
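The width-and-center search described above might look like this in outline (the edge map, the threshold, and the variable names are assumptions):

```python
import numpy as np

def head_width_and_center(vertical_edges: np.ndarray, min_count: int = 3):
    """Given a boolean vertical-edge map of the head mask area, find the head ends.

    Column totals of edge pixels are searched from the outside inward in the
    left and right halves; the first column meeting the threshold is taken as
    a head end, mirroring the search described above.
    """
    totals = vertical_edges.sum(axis=0)        # edge-pixel total per X position
    mid = len(totals) // 2
    xa = next((x for x in range(mid) if totals[x] >= min_count), 0)
    xb = next((x for x in range(len(totals) - 1, mid - 1, -1)
               if totals[x] >= min_count), len(totals) - 1)
    width = xb - xa                            # head width W
    center = (xa + xb) // 2                    # head center position PC
    return width, center
```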
- In this way, the size of the head region of the object (its upper end, lower end, height, and width) can be accurately detected without using distance information between the vehicle and the object (for example, a pedestrian).
- In step S14, the region of the object is specified.
- Specifically, a mask region extending downward from the position PT of the upper end of the head detected in step S13 is set.
- Within the mask region, pixel values are scanned sequentially downward from the position PB of the lower end of the head, and from left to right.
- The scanned position at which the boundary between the object candidate (head 23 and torso 24) and the road surface is detected is set as the lower end PF of the object.
- the upper end of the object is the position PT of the upper end of the head.
- An area between the positions PT and PF is specified as an object area.
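One simple reading of this boundary scan, sketched in Python (the stopping condition is an assumption; the patent does not fix it here):

```python
import numpy as np

def object_bottom_row(binary: np.ndarray, pb_row: int, col_range) -> int:
    """Scan down from the head lower end PB and find the object/road boundary.

    Here the boundary is taken as the first row with no white (object) pixels
    inside the mask -- one plausible interpretation of the scan above.
    """
    left, right = col_range
    for row in range(pb_row, binary.shape[0]):
        if binary[row, left:right].sum() == 0:  # no object pixels on this row
            return row                          # lower end PF of the object
    return binary.shape[0] - 1
```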
- In step S15, the type of the object is determined; for example, whether an object candidate corresponds to a specific object such as a pedestrian.
- The specific determination method is conventional: for example, when pedestrians are the target, it is determined whether the candidate has pedestrian characteristics (head, legs, etc.); alternatively, using well-known pattern matching, the similarity to a pre-stored pattern representing a pedestrian is calculated, and whether the candidate is a pedestrian is determined from that similarity.
- In step S16, the size of the object is estimated.
- In step S17, the distance between the vehicle and the object is calculated.
- For example, the distance Z to a pedestrian candidate on the image is calculated by the following equation (1).
- Equation (1) assumes that the average height of a pedestrian is about 170 cm:
- Z = HT × F / H (1)
- Here HT is the assumed real-space height of the pedestrian, F is the focal length of the camera, and H is the height of the pedestrian candidate on the image.
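A worked sketch of equation (1) under the stated 170 cm assumption (the focal length in pixels and the candidate height are made-up example values):

```python
def distance_to_pedestrian(h_pixels: float, focal_px: float,
                           real_height_m: float = 1.7) -> float:
    """Pinhole-model distance from equation (1): Z = HT * F / H.

    HT is the assumed real-space pedestrian height (~170 cm), F the focal
    length in pixels, and H the candidate's height on the image in pixels.
    """
    return real_height_m * focal_px / h_pixels

# Example: an 85-pixel-tall candidate with a 500-pixel focal length
# gives Z = 1.7 * 500 / 85 = 10 m.
print(distance_to_pedestrian(85, 500))  # -> 10.0
```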
- In step S18, a high-temperature moving object such as a pedestrian is detected as a target from the grayscale and binarized images obtained frame by frame over time, and its movement vector (speed and direction) is calculated. Further, in step S18, it is determined whether there is a possibility of contact between the vehicle and the pedestrian, based on the brake operation amount, vehicle speed, and yaw rate output by the brake sensor, vehicle speed sensor, and yaw rate sensor, and on the distance Z to the object calculated in step S17. If it is determined that there is a possibility of contact, the driver is notified in step S19: a grayscale image of the pedestrian is displayed on the display device 16 (HUD 16a), and an alarm is issued through the speaker 14 to alert the driver and prompt a contact-avoidance maneuver.
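As a rough illustration of the contact determination, a time-to-contact style check is sketched below; the patent's actual decision also uses the brake operation amount and yaw rate, which this sketch omits for brevity:

```python
def contact_possible(distance_m: float, speed_mps: float, margin_s: float = 2.0) -> bool:
    """Warn if the object would be reached within `margin_s` seconds
    at the current vehicle speed (crude time-to-contact check)."""
    return speed_mps > 0 and distance_m / speed_mps < margin_s
```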
- In another embodiment, information on the head region of the object is also calculated using a conventional luminance profile on the grayscale image, and this is compared with the head region identified from the binarized image according to the above-described embodiment of the present invention to determine the reliability of the identified head region.
- The distance between the vehicle and the object is then calculated using the detected edges.
Description
Z=HT×F/H (1)
12 image processing unit
14 speaker
16 display device
16a HUD
20 vehicle
Claims (4)
- A vehicle periphery monitoring device comprising:
imaging means mounted on a vehicle for acquiring an image of the surroundings of the vehicle;
multi-value conversion means for converting the grayscale image acquired by the imaging means into a multi-valued image;
horizontal edge detection means for applying a first edge extraction mask to a region of the multi-valued image containing a candidate object and detecting a plurality of horizontal edges in the image within the mask;
head upper-end detection means for detecting, in the image within the first edge extraction mask, the position of the horizontal edge having the maximum edge strength among the plurality of horizontal edges as the position of the upper end of the head of the object;
vertical edge detection means for applying, in the multi-valued image, a second edge extraction mask extending downward from the position of the upper end of the head of the object and detecting a plurality of vertical edges in the image within the second edge extraction mask;
head lower-end detection means for detecting the position of the lower end of the head of the object in the image within the second edge extraction mask, based on changes in the positions of the plurality of vertical edges;
head region identification means for identifying the head region of the object in the multi-valued image based on the interval between the position of the upper end of the head and the position of the lower end of the head; and
object determination means for determining whether the object is a predetermined object to be monitored, based on the multi-valued image including at least the head region of the object. - The periphery monitoring device according to claim 1, wherein the imaging means consists of a single infrared camera, and further comprising distance calculation means for calculating the distance in real space between the vehicle and the monitored object based on the size of the predetermined monitored object on the multi-valued image or on the change of that size over time.
- The periphery monitoring device according to claim 2, further comprising contact determination means for determining the possibility of contact between the vehicle and the predetermined object based on the distance calculated by the distance calculation means and at least one selected from the brake operation amount, vehicle speed, and yaw rate of the vehicle.
- The periphery monitoring device according to any one of claims 1 to 3, further comprising:
head region information calculation means for calculating information on the head region of the object using a luminance profile on the grayscale image; and
reliability determination means for comparing the calculated head region information with the information on the head region of the object identified by the head region identification means, to determine the reliability of the identified head region information.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP11792112.2A EP2579229B1 (en) | 2010-06-07 | 2011-05-30 | Apparatus and method for monitoring surroundings of a vehicle |
CN2011800261010A CN102906801A (zh) | 2010-06-07 | 2011-05-30 | Vehicle surroundings monitoring device |
US13/700,289 US9030560B2 (en) | 2010-06-07 | 2011-05-30 | Apparatus for monitoring surroundings of a vehicle |
JP2012519232A JP5642785B2 (ja) | 2010-06-07 | 2011-05-30 | Vehicle periphery monitoring device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010129841 | 2010-06-07 | ||
JP2010-129841 | 2010-06-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011155152A1 (ja) | 2011-12-15 |
Family
ID=45097771
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/003007 WO2011155152A1 (ja) | Vehicle periphery monitoring device | 2010-06-07 | 2011-05-30 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9030560B2 (ja) |
EP (1) | EP2579229B1 (ja) |
JP (1) | JP5642785B2 (ja) |
CN (1) | CN102906801A (ja) |
WO (1) | WO2011155152A1 (ja) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6163453B2 (ja) | 2014-05-19 | 2017-07-12 | Honda Motor Co., Ltd. | Object detection device, driving assistance device, object detection method, and object detection program |
JP6483360B2 (ja) | 2014-06-30 | 2019-03-13 | Honda Motor Co., Ltd. | Object recognition device |
KR20180069147A (ko) * | 2016-12-14 | 2018-06-25 | Mando Hella Electronics Co., Ltd. | Pedestrian warning device for vehicle |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08313632A (ja) * | 1995-05-19 | 1996-11-29 | Omron Corp | Alarm generating apparatus and method, and vehicle equipped with the alarm generating apparatus |
JP2000030197A (ja) * | 1998-07-09 | 2000-01-28 | Nissan Motor Co Ltd | Warm body detecting device |
JP2003216937A (ja) * | 2002-01-18 | 2003-07-31 | Honda Motor Co Ltd | Night vision system |
JP2004303219A (ja) * | 2003-03-20 | 2004-10-28 | Honda Motor Co Ltd | Vehicle periphery monitoring device |
JP2006151300A (ja) * | 2004-11-30 | 2006-06-15 | Honda Motor Co Ltd | Vehicle periphery monitoring device |
JP2006185434A (ja) * | 2004-11-30 | 2006-07-13 | Honda Motor Co Ltd | Vehicle periphery monitoring device |
JP2009048558A (ja) * | 2007-08-22 | 2009-03-05 | Tokai Rika Co Ltd | Image processing type object detection device |
JP2009301242A (ja) * | 2008-06-11 | 2009-12-24 | Nippon Telegr & Teleph Corp <Ntt> | Head candidate extraction method, head candidate extraction device, head candidate extraction program, and recording medium recording the program |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4085959B2 (ja) * | 2003-11-14 | 2008-05-14 | Konica Minolta Holdings Inc. | Object detection device, object detection method, and recording medium |
WO2005086079A1 (en) * | 2004-03-02 | 2005-09-15 | Sarnoff Corporation | Method and apparatus for differentiating pedestrians, vehicles, and other objects |
US7403639B2 (en) * | 2004-11-30 | 2008-07-22 | Honda Motor Co., Ltd. | Vehicle surroundings monitoring apparatus |
EP2227406B1 (en) | 2007-11-12 | 2015-03-18 | Autoliv Development AB | A vehicle safety system |
JP2009211311A (ja) * | 2008-03-03 | 2009-09-17 | Canon Inc | Image processing apparatus and method |
JP4644273B2 (ja) * | 2008-07-15 | 2011-03-02 | Honda Motor Co., Ltd. | Vehicle periphery monitoring device |
JP4482599B2 (ja) | 2008-10-24 | 2010-06-16 | Honda Motor Co., Ltd. | Vehicle periphery monitoring device |
CN102640182B (zh) * | 2009-11-25 | 2014-10-15 | Honda Motor Co., Ltd. | Monitored-object distance measuring device and vehicle equipped with the same |
-
2011
- 2011-05-30 US US13/700,289 patent/US9030560B2/en active Active
- 2011-05-30 CN CN2011800261010A patent/CN102906801A/zh active Pending
- 2011-05-30 WO PCT/JP2011/003007 patent/WO2011155152A1/ja active Application Filing
- 2011-05-30 JP JP2012519232A patent/JP5642785B2/ja active Active
- 2011-05-30 EP EP11792112.2A patent/EP2579229B1/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN102906801A (zh) | 2013-01-30 |
US9030560B2 (en) | 2015-05-12 |
EP2579229B1 (en) | 2016-06-29 |
EP2579229A4 (en) | 2013-12-25 |
EP2579229A1 (en) | 2013-04-10 |
JP5642785B2 (ja) | 2014-12-17 |
JPWO2011155152A1 (ja) | 2013-08-01 |
US20130070098A1 (en) | 2013-03-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8077204B2 (en) | Vehicle periphery monitoring device, vehicle, vehicle periphery monitoring program, and vehicle periphery monitoring method | |
JP4173901B2 (ja) | Vehicle periphery monitoring device | |
WO2010047054A1 (ja) | Vehicle periphery monitoring device | |
JP4173902B2 (ja) | Vehicle periphery monitoring device | |
KR102021152B1 (ko) | Far-infrared camera based night-time pedestrian recognition method | |
JP4528283B2 (ja) | Vehicle periphery monitoring device | |
JP2007334751A (ja) | Vehicle periphery monitoring device | |
WO2014002534A1 (ja) | Object recognition device | |
JP5760090B2 (ja) | Living body recognition device | |
WO2010007718A1 (ja) | Vehicle periphery monitoring device | |
JP2006318059A (ja) | Image processing apparatus, image processing method, and image processing program | |
JP5642785B2 (ja) | Vehicle periphery monitoring device | |
JP4813304B2 (ja) | Vehicle periphery monitoring device | |
JP4887540B2 (ja) | Vehicle periphery monitoring device, vehicle, vehicle periphery monitoring program, and vehicle periphery monitoring method | |
JP4765113B2 (ja) | Vehicle periphery monitoring device, vehicle, vehicle periphery monitoring program, and vehicle periphery monitoring method | |
JP2008028478A (ja) | Obstacle detection system and obstacle detection method | |
JP2006317193A (ja) | Image processing apparatus, image processing method, and image processing program | |
JP2008040724A (ja) | Image processing apparatus and image processing method | |
JP4937243B2 (ja) | Vehicle periphery monitoring device | |
KR101440293B1 (ko) | Apparatus and method for detecting crosswalks | |
JP2006314060A (ja) | Image processing apparatus and noise detection method | |
JP4887539B2 (ja) | Object type determination device | |
JP2008052568A (ja) | Obstacle detection system and obstacle detection method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180026101.0 Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11792112 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2012519232 Country of ref document: JP |
WWE | Wipo information: entry into national phase |
Ref document number: 2011792112 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 13700289 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |