KR101747761B1 - Obstacle detecting device and moving object provided therewith - Google Patents

Info

Publication number: KR101747761B1
Application number: KR1020160022324A
Authority: KR (South Korea)
Prior art keywords: obstacle, information, unit, person
Original language: Korean (ko)
Other versions: KR20160108153A
Inventor: 아키히로 야마자키 (Akihiro Yamazaki)
Original assignee: 야마하하쓰도키 가부시키가이샤 (Yamaha Hatsudoki Kabushiki Kaisha)
Application filed by 야마하하쓰도키 가부시키가이샤
Publication of application: KR20160108153A
Grant and publication of patent: KR101747761B1

Classifications

    • B60R21/0134 Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents, including means for detecting imminent contact with an obstacle, e.g. using radar systems
    • B60Q5/00 Arrangement or adaptation of acoustic signal devices
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles, using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles, using a radar
    • G08B21/18 Status alarms
    • B60Q2300/45 Indexing codes relating to other road users or special conditions, e.g. pedestrians, road signs or potential dangers
    • B60R2300/107 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, using stereoscopic cameras
    • B60R2300/301 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Acoustics & Sound (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)

Abstract

An obstacle detecting device (56) comprises a distance information detecting section (21) for detecting distance information to an object in front of the vehicle (1); a parallax image generating section (23) for generating a parallax image of the area in front of the vehicle (1); a travel-path position information storage section (25) for storing travel-path position information that specifies the travel path (Tr) on the parallax image in accordance with the position of the vehicle (1); a travel-path parallax image extracting unit (66) for extracting the parallax image of the travel path (Tr) from the estimated position and the travel-path position information; and an obstacle detecting unit (31) for detecting an obstacle on the travel path. The travel-path position information is specified by the distance information and coordinate information in the horizontal direction of the vehicle (1).

Description

OBSTACLE DETECTING DEVICE AND MOVING OBJECT PROVIDED THEREWITH

The present invention relates to an obstacle detection device and a mobile body having the obstacle detection device.

Golf carts that run on predetermined courses within a golf course may have autonomous driving capability. Autonomous travel is performed by detecting the magnetic field generated by an electromagnetic induction wire embedded in the travel path. Work vehicles and persons crossing the travel path exist as obstacles to such a golf cart, and these obstacles must be detected when the cart drives autonomously without a driver. There are obstacle detecting devices that detect an obstacle in front of the golf cart by means of an obstacle sensor, using ultrasonic waves or the like, provided on the front surface of the golf cart.

Conventional obstacle detecting devices often detect an object that is not on the travel path as an obstacle and consequently decelerate the golf cart unnecessarily. Therefore, in the technique of Patent Document 1, it is judged whether or not the detected obstacle exists on the travel path in the image. As a result, stop control triggered by an obstacle that is not on the travel path can be suppressed.

The technology of Patent Document 1

In Patent Document 1, the coordinate position data of the travel path in the moving-object coordinate system is obtained from the coordinate position of the moving object in the world coordinate system and the coordinate position data of each point along the travel path in the world coordinate system. The travel road surface in the image is thereby specified, and it is determined whether or not a detected obstacle is on that road surface.

Japanese Patent Application Laid-Open No. 10-141954

However, since the travel road surface in the image is specified using three-dimensional data, the CPU of the obstacle detecting apparatus bears a heavy computation load and requires considerable calculation time. While the computation time increases, the moving object keeps travelling, so the distance to the obstacle shrinks and the possibility of contact increases. In addition, for a vehicle that passes close by trees, as on a golf course, a large error in the estimated current position can cause the surrounding trees to be judged as obstacles on the travel path, stopping the vehicle. The current position must therefore be estimated with high accuracy. Patent Document 1 uses three-dimensional position coordinates for the current position of the moving object, but estimating the current position in three dimensions with high accuracy requires an expensive GPS receiver, sensors, and the like, which add to the cost of the vehicle.

SUMMARY OF THE INVENTION

The present invention has been made in view of such circumstances, and an object of the present invention is to provide an obstacle detecting device that determines whether an obstacle is on the travel path with a reduced amount of computation, and a moving object having the obstacle detecting device.

In order to achieve the above object, the present invention has the following configuration.

That is, the first invention of the present invention is an obstacle detecting device mounted on a moving body travelling on a predetermined travel path, comprising: a distance information detecting section for detecting distance information from the obstacle detecting device to an object ahead of the moving body; a three-dimensional information generating unit for generating three-dimensional information of the area ahead of the moving body based on the distance information; a storage unit for storing travel-path position information that specifies the travel path in the three-dimensional information in accordance with the position of the moving body from a predetermined position; a position estimating unit for estimating the position of the moving body on the travel path; a travel-path three-dimensional information extracting unit for extracting the three-dimensional information of the travel path from the three-dimensional information based on the estimated position and the travel-path position information; and an obstacle detecting unit for detecting an obstacle based on the three-dimensional information of the travel path. The travel-path position information is specified by the distance information and coordinate information in the horizontal direction of the moving body.

According to the present invention, the distance information detecting unit detects distance information from the obstacle detecting device to an object in front of the moving body. The three-dimensional information generating unit generates three-dimensional information of the area in front of the moving body based on the detected distance information. The storage unit stores travel-path position information that specifies the travel path in the three-dimensional information in accordance with the position of the moving body from a predetermined position. The position estimating unit estimates the position of the moving body on the travel path. The travel-path three-dimensional information extracting unit extracts the three-dimensional information of the travel path from the estimated position and the travel-path position information. The obstacle detecting unit detects the obstacle based on the three-dimensional information of the travel path.

Since the travel-path position information used for extracting the three-dimensional information of the travel path is two-dimensional information consisting of the distance information and coordinate information in the horizontal direction of the moving body, the amount of information to be handled, and hence the computation load, can be reduced. This shortens the calculation time, so that whether or not an obstacle is on the travel path can be judged while sufficient distance from the obstacle remains, improving the ability to avoid contact with the obstacle.
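As an illustration of why the two-dimensional travel-path information keeps the computation cheap, the following sketch masks a parallax (disparity) image down to the travel path using only per-distance lateral bounds. All names, the camera model, and the geometry here are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def extract_path_region(disparity, path_bounds, focal_px, baseline_m):
    """Mask a parallax (disparity) image down to the travel path.

    `path_bounds` maps a distance interval ahead of the vehicle (metres)
    to the path's lateral extent (left_m, right_m) at that distance --
    i.e. two-dimensional position information (distance + horizontal
    coordinate) of the kind described above.
    """
    h, w = disparity.shape
    cx = w / 2.0                                   # assume principal point at image centre
    valid = disparity > 0
    z = np.zeros_like(disparity, dtype=float)
    z[valid] = focal_px * baseline_m / disparity[valid]    # depth from parallax
    u = np.tile(np.arange(w, dtype=float), (h, 1))
    x = np.zeros_like(z)
    x[valid] = (u[valid] - cx) * z[valid] / focal_px       # lateral position in metres
    mask = np.zeros((h, w), dtype=bool)
    for (z_near, z_far), (left_m, right_m) in path_bounds.items():
        mask |= valid & (z >= z_near) & (z < z_far) & (x >= left_m) & (x <= right_m)
    return mask
```

The per-pixel work is a handful of comparisons against two scalars per distance bin, rather than a full three-dimensional road-surface fit.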

It is also preferable to provide a person judging section that judges whether or not an obstacle on the travel path is a person, or a person judging section that judges whether or not an obstacle determined to be on the travel path is a person. By narrowing person detection to obstacles on the travel path, the processing time for person detection can be shortened. In addition, since a person away from the travel path is not detected, the moving body can run smoothly.

It is also preferable to provide an extended-area three-dimensional information extracting unit for extracting three-dimensional information of a travel-path extended area, which is a predetermined area adjacent to the travel path specified by the travel-path position information; the obstacle detecting unit then detects obstacles there, and the person judging unit judges whether or not an obstacle detected in the travel-path extended area is a person. This improves the detection rate for a person near either end of the travel path in the width direction, who is otherwise likely to be confused with the surrounding trees.

Alternatively, an extended-area three-dimensional information extracting unit for extracting three-dimensional information of a travel-path extended area, which is a predetermined area adjacent to the travel path specified by the travel-path position information, and a person detecting unit for detecting a person in that area may be provided. Performing person detection within a predetermined region slightly wider than the specified travel path shortens the processing time of person detection. In addition, since a person in the vicinity of the travel path can be detected, a person who is about to enter the travel path can be detected, improving the ability to avoid contact with a person. Furthermore, since the obstacle detecting unit and the person detecting unit are provided separately, different treatment is possible for an obstacle on the travel path, where the possibility of contact is high, and for a person in the region near both edges in the width direction.

A second invention of the present invention is an obstacle detecting device mounted on a moving body travelling on a predetermined travel path, comprising: a distance information detecting section for detecting distance information from the obstacle detecting device to an object ahead of the moving body; a three-dimensional information generating unit for generating three-dimensional information of the area ahead of the moving body based on the distance information; a storage unit for storing travel-path position information that specifies the travel path in the three-dimensional information in accordance with the position of the moving body from a predetermined position; a position estimating unit for estimating the position of the moving body on the travel path; a travel-path specifying unit for specifying the travel path in the three-dimensional information from the estimated position and the travel-path position information; an obstacle detecting unit for detecting an obstacle based on the three-dimensional information; and a judging unit for judging whether or not the detected obstacle is on the specified travel path. The travel-path position information is specified by the distance information and coordinate information in the horizontal direction of the moving body.

According to the present invention, the distance information detecting unit detects distance information from the obstacle detecting device to an object in front of the moving body. The three-dimensional information generating unit generates three-dimensional information of the area in front of the moving body based on the detected distance information. The storage unit stores travel-path position information that specifies the travel path in the three-dimensional information in accordance with the position of the moving body from a predetermined position. The position estimating unit estimates the position of the moving body on the travel path. The travel-path specifying unit specifies the travel path in the three-dimensional information from the estimated position and the travel-path position information. The obstacle detecting unit detects obstacles based on the three-dimensional information. The judging unit judges whether or not a detected obstacle is on the specified travel path.

Since the travel-path position information that specifies the travel path in the three-dimensional information is two-dimensional information consisting of the distance information and coordinate information in the horizontal direction of the moving body, the amount of information to be handled, and hence the computation load, can be reduced. This shortens the calculation time, so that whether or not an obstacle is on the travel path can be judged while sufficient distance from the obstacle remains, improving the ability to avoid contact with the obstacle.

The apparatus may further include an area specifying unit that specifies a travel-path extended area, which is a predetermined area adjacent to the specified travel path; the judging unit then determines whether or not the detected obstacle is on the specified extended area, and the person judging unit judges whether or not an obstacle determined to be on the extended area is a person. This improves the detection rate for a person near either end of the travel path in the width direction, who is otherwise likely to be confused with the surrounding trees.

It is also possible to provide an area specifying unit that specifies a travel-path extended area, which is a predetermined area adjacent to the specified travel path, and a person detecting unit that detects a person in the specified extended area. Performing person detection within a predetermined region slightly wider than the specified travel path shortens the processing time of person detection. In addition, since a person in the vicinity of the travel path can be detected, a person who is about to enter the travel path can be detected, improving the ability to avoid contact with a person. Furthermore, since the obstacle detecting unit and the person detecting unit are provided separately, different treatment is possible for an obstacle on the travel path, where the possibility of contact is high, and for a person in the region near both edges in the width direction.

Preferably, the position estimating unit estimates the position of the moving body on the travel path from the moving distance from a predetermined position. Since the moving body runs along a predetermined travel path, its position on the travel path can be specified by the moving distance from the predetermined position. Because the position estimating unit estimates the position from the moving distance, the position of the moving body can be estimated using only the vehicle coordinate system, without a world coordinate system. Furthermore, since the moving distance is a one-dimensional quantity, the amount of data to be handled is significantly reduced.
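A one-dimensional position estimate of this kind could be sketched as a simple odometer that accumulates wheel revolutions and re-anchors at embedded markers. Class and method names are illustrative assumptions, not taken from the patent.

```python
class PathPositionEstimator:
    """Estimate the vehicle's place on a known travel path from the
    distance travelled since a fixed reference point (the start position
    or an embedded marker) -- a single scalar instead of 3-D coordinates."""

    def __init__(self, wheel_circumference_m: float):
        self.wheel_circumference_m = wheel_circumference_m
        self.distance_m = 0.0

    def on_wheel_revolutions(self, revolutions: float) -> None:
        # Accumulate travelled distance from the wheel speed sensor.
        self.distance_m += revolutions * self.wheel_circumference_m

    def on_fixed_point(self, known_distance_m: float) -> None:
        # Re-anchor at a marker embedded in the path to cancel drift.
        self.distance_m = known_distance_m

    def position(self) -> float:
        return self.distance_m
```

Re-anchoring at the embedded fixed points keeps the dead-reckoned estimate from drifting over a long course.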

Further, the distance information detecting unit may have a stereo camera, and the three-dimensional information may be a parallax image. With a stereo camera as the distance information detecting unit, a parallax image can be used as the three-dimensional information, which makes it possible to determine the travel path in the parallax image and to detect obstacles appropriately. A one-dimensional scanning laser radar tends to detect the road surface itself as an obstacle wherever the road surface undulates; a stereo camera is less affected by such undulation, so obstacles can be detected reliably.

The distance information detecting unit may instead have a radar. With a radar, the three-dimensional information can also be acquired appropriately. In particular, distance can be measured with high precision at longer range than with a stereo camera, so determining the travel path and detecting obstacles in the three-dimensional information can be performed appropriately.

Further, the moving body according to the present invention is a moving body provided with the above obstacle detecting device. Since the obstacle detecting device is provided, whether or not an obstacle exists on the travel path can be determined with a reduced amount of computation.

Preferably, the moving body further comprises an alarm for issuing a warning when an obstacle is detected on the travel path. Issuing a warning to pedestrians, animals, and drivers of other moving bodies on the travel path improves the ability to avoid contact with these obstacles. A warning can also be issued to the driver of the moving body itself.

Preferably, the moving body further includes a speed control unit that decelerates or stops the moving body when an obstacle is detected on the travel path. This improves the ability to avoid contact with the obstacle even when the obstacle on the travel path is neither a person nor an animal, or when the moving body runs autonomously without a driver.

Preferably, the moving body further includes a speed control unit that decelerates or stops the moving body in response to an obstacle detected on the travel path, and an alarm that issues a warning to a person detected in the travel-path extended area. When an obstacle, including a person, is on the travel path, the moving body detects it and decelerates or stops. When a person is in the extended area, the alarm issues a warning upon detection. This warning can keep a person in the extended area from entering the travel path, so unnecessary deceleration or stopping of the moving body is suppressed.
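The two-tier reaction described above can be sketched as a small decision function: brake only for obstacles on the travel path itself, and merely sound the alarm for a person in the adjacent extended area. The function and action names are hypothetical, chosen for illustration.

```python
def react(obstacle_on_path: bool, person_in_extended_area: bool) -> list:
    """Decide the vehicle's reaction per detection cycle (illustrative).

    Obstacles on the travel path trigger deceleration or a stop; a person
    in the extended area only triggers the alarm, giving them a chance to
    step back before any braking becomes necessary."""
    actions = []
    if obstacle_on_path:
        actions.append("decelerate_or_stop")
    if person_in_extended_area:
        actions.append("sound_alarm")
    return actions
```

Separating the two reactions is what lets the cart warn a bystander without braking unnecessarily.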

(Effects of the Invention)

According to the present invention, it is possible to provide an obstacle detecting device that detects an obstacle on the travel path with a reduced amount of computation, and a moving body having the obstacle detecting device.

Fig. 1 is a front view of a vehicle according to an embodiment.
Fig. 2 is a block diagram showing the configuration of a vehicle according to the first embodiment.
Fig. 3 is a block diagram showing the configuration of an obstacle detecting device according to the first embodiment.
Fig. 4 is an explanatory view showing the relationship between the distance information and the orientation of the stereo camera according to the first embodiment.
Fig. 5 is an explanatory view showing an example of a travel path on which the vehicle travels according to the embodiment.
Fig. 6 is an explanatory diagram of the travel path in the parallax image according to the first embodiment.
Fig. 7 is an explanatory diagram showing an obstacle on the travel path in the parallax image according to the first embodiment.
Fig. 8 is a flowchart showing the flow of obstacle detection according to the first embodiment.
Fig. 9 is a block diagram showing the configuration of a vehicle according to the second embodiment.
Fig. 10 is a block diagram showing the configuration of a vehicle according to the third embodiment.
Fig. 11 is an explanatory diagram showing the extended area in the parallax image according to the third embodiment.
Fig. 12 is a block diagram showing the configuration of a vehicle according to the fourth embodiment.
Fig. 13 is an explanatory diagram showing an obstacle on the travel path in the parallax image according to the fourth embodiment.
Fig. 14 is an explanatory diagram showing the extended area in the parallax image according to the fourth embodiment.
Fig. 15 is a flowchart showing the flow of obstacle detection according to the fourth embodiment.

(Embodiment 1)

Hereinafter, a first embodiment of the present invention will be described with reference to the drawings. A golf cart that runs autonomously is taken as an example of the moving body of the present invention. The moving body of the present invention is not limited to a golf cart and also includes, for example, an unmanned transport vehicle travelling in a factory or an orchard. Nor is the moving body limited to a four-wheeled vehicle; it may be a three-wheeled vehicle or a monorail type. In the following description, "forward", "backward", "left", and "right" are defined with respect to the forward travel direction of the vehicle.

1. Schematic composition of vehicle

Reference is made to Fig. 1, a front view of the vehicle 1 according to the embodiment. The vehicle 1 is a golf cart that autonomously travels within a golf course. The vehicle 1 can run autonomously, guided by the electromagnetic waves emitted from a guide wire embedded in the travel path. A stereo camera 3 is provided at the center of the front surface of the vehicle 1.

The vehicle 1 also has a right front wheel 5 and a left front wheel 6, which are steered by rotation of the handle 4. A reading section 7 for reading fixed points Fp (Fig. 5) embedded in advance in the travel path Tr is provided in the lower portion of the vehicle 1. In addition to the start position So, a plurality of fixed points Fp are embedded in the travel path Tr. Each fixed point Fp consists of a combination of magnetic poles of a plurality of magnets. The reading section 7 is a magnetic sensor that reads such magnetic field information from the fixed points Fp. The magnetic field information includes position information indicating the start position So or a specific position, and speed control information such as stopping or changing the vehicle speed. The fixed points Fp may be RF tags used for radio frequency identification (RFID) instead of magnets.

The right front wheel 5 is provided with a rotation angle sensor 8 for detecting its rotation angle and a wheel speed sensor 9 for detecting its rotational speed. A touch panel display 12 for displaying various information is provided in the vicinity of the handle 4. The rotation angle sensor 8 detects the turning angle of the wheel and is, for example, a rotary encoder. The rotation angle sensor 8 and the wheel speed sensor 9 may be provided on the left front wheel 6 or a rear wheel instead of the right front wheel 5.

Reference is now made to Fig. 2, a functional block diagram showing the configuration of the vehicle 1. The vehicle 1 is provided with an obstacle detecting device 14 for detecting an obstacle on the travel path, an autonomous travel control section 15 for controlling autonomous travel of the vehicle 1 along the guide wire, a vehicle speed control section 16 for controlling acceleration and deceleration of the vehicle, and an alarm 17 for issuing a warning ahead of the vehicle and to the occupants when an obstacle is detected. Since the autonomous travel control section 15 uses a conventional autonomous travel control method, its description is omitted here. The alarm 17 is installed on the front surface of the vehicle 1 and can audibly notify the surroundings that the vehicle 1 is approaching.

2. Configuration of obstacle detection device

Next, the configuration of the obstacle detecting device 14 provided in the vehicle 1 will be described with reference to Fig. 3, a block diagram showing the configuration of the obstacle detecting device 14.

The obstacle detecting device 14 includes a distance information detecting section 21 for detecting distance information to an object ahead of the vehicle 1; a parallax image generating section 23 for generating a parallax image of the area in front of the vehicle 1; a travel-path position information storage section 25 for storing travel-path position information that specifies the travel path on the parallax image in accordance with the position of the vehicle 1 from the start position So; a position estimating section 27 for estimating the position of the vehicle 1 on the travel path; a travel-path specifying section 29 for specifying the travel path in the parallax image from the estimated position and the travel-path position information; an obstacle detecting section 31 for detecting an obstacle based on the parallax image; a judging section 33 for judging whether or not the detected obstacle is on the specified travel path; a distance calculating section 34 for calculating distance information from the parallax information; and a person judging section 35 for judging whether or not the detected obstacle is a person.

The distance information detecting unit 21 includes the stereo camera 3 for photographing images in front of the vehicle 1 and buffers 41 for temporarily storing the images photographed by the stereo camera 3.

The stereo camera 3 is composed of two image sensors: a left image sensor 3a and a right image sensor 3b. The left image sensor 3a and the right image sensor 3b are general visible-light sensors such as CCD or CMOS sensors. They are installed in the vehicle 1 under a predetermined geometric condition; in the first embodiment, they are mounted a fixed distance apart in the horizontal direction, that is, in a parallel-stereo arrangement.

The left image sensor 3a and the right image sensor 3b are arranged so that the rows of their photographed images coincide with each other, that is, so that their epipolar lines coincide. The image photographed by the left image sensor 3a is referred to as the left image, and the image photographed by the right image sensor 3b as the right image. In the first embodiment, the left image is used as the reference image and the right image as the comparison image. The number of image sensors provided in the stereo camera 3 is not limited to two and may be three or more, and the right image may instead be used as the reference image.

The coordinate system of the image sensors 3a and 3b will be described with reference to Fig. 4. The X-axis is taken in the horizontal (left-right) direction of the stereo camera 3, the Y-axis in the vertical (up-down) direction, and the Z-axis in the optical-axis (front-rear) direction. The X-axis and Y-axis are the coordinate axes of the left and right images. The parallax between the left and right images serves as distance information from the obstacle detecting device 14 to the object ahead of the vehicle 1.

Returning to Fig. 3, the buffer 41 temporarily stores the images sent from the stereo camera 3, that is, the image from each of the image sensors 3a and 3b. The buffer 41 is implemented by a memory, a flash memory, a hard disk (HDD), or the like. In Fig. 3 a buffer 41 is provided for each of the image sensors 3a and 3b, but a single buffer may be used. Further, it is preferable that each stored image be one in which lens distortion, variation of focal length, and the like have been corrected using correction parameters. With such correction, the texture of a plane perpendicular to the optical axis of the image sensor is projected onto the image plane with its shape preserved.

The parallax image generating section 23 generates a parallax image from the input images, that is, on the basis of the left image and the right image stored in the buffer 41. The parallax image can be generated by stereo matching such as SAD (sum of absolute differences); the stereo matching method may instead be area correlation or the Census transform. Parallax here means the amount of displacement of a pixel between a plurality of images; in Embodiment 1, it is the horizontal shift of a pixel in the right image with respect to the left image. The X-coordinate and Y-coordinate on the parallax image are the same as those on the reference image. The generated parallax image is output to the obstacle detecting unit 31 and to the travel-path specifying unit 29.
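As a rough illustration of the SAD stereo matching mentioned above, the following is a minimal sketch, not this device's implementation; the window size, search range, and brute-force loops are illustrative assumptions:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=2):
    """Disparity map by SAD block matching.

    For each pixel of the reference (left) image, slide a
    (2*win+1) x (2*win+1) window along the same row of the right
    image (the epipolar line) and keep the horizontal shift with
    the smallest sum of absolute differences.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win, w - win):
            ref = left[y-win:y+win+1, x-win:x+win+1].astype(np.int32)
            best, best_d = None, 0
            for d in range(0, min(max_disp, x - win) + 1):
                cand = right[y-win:y+win+1, x-d-win:x-d+win+1].astype(np.int32)
                cost = np.abs(ref - cand).sum()  # SAD cost
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Real implementations vectorize this and add sub-pixel refinement; the sketch only shows the cost function and search.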

The travel-path position information storage section 25 stores position information, namely parallax values and X-coordinate values, that specifies the travel path Tr on the parallax image according to the distance from the start position So. As shown in Fig. 5, the parallax values and X-coordinate values of the travel path Tr on the parallax images at the respective points S1 to Sn along the travel path Tr, measured from the start point So, are obtained in advance. In Fig. 5, some of the points are omitted from the drawing.

This will be described with reference to Fig. 6. Fig. 6 shows, for example, the parallax image P1 at the point Sa located a distance La from the start position So in Fig. 5. The X-coordinate range of the travel path Tr in the parallax image P1 is stored in advance in the travel-path position information storage unit 25 in association with each parallax value. That is, as the travel-path position information of the point Sa, the X-coordinate range X1 to X10 is stored for the parallax value da, the range X2 to X9 for the parallax value db, the range X3 to X8 for the parallax value dc, the range X4 to X7 for the parallax value dd, and the range X5 to X6 for the parallax value de. The parallax values da to dg satisfy the relationship da > db > dc > dd > de > df > dg.

As described above, the travel-path position information storage unit 25 stores, for each of the points S1 to Sn, the X-coordinate range of the travel path Tr in association with each parallax value in the parallax image at that point. The storage unit 25 is implemented by a memory, a flash memory, a hard disk (HDD), or the like.
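The stored association between distance from So, parallax value, and X-coordinate range could be held in a table of the following shape; all keys and values below are purely illustrative, not the actual survey data of the patent:

```python
# Hypothetical travel-path position table: for each surveyed point,
# keyed by its distance from the start position So in meters, the
# X-coordinate range of the travel path Tr per parallax value.
# The five entries mirror the da > db > dc > dd > de ordering of
# Fig. 6; every number is illustrative.
PATH_TABLE = {
    25.0: {40: (1, 10), 32: (2, 9), 26: (3, 8), 21: (4, 7), 17: (5, 6)},
}

def path_x_range(distance, parallax, table=PATH_TABLE):
    """Return the stored (x_min, x_max) of the travel path for the
    surveyed point nearest to `distance`, or None if no range is
    stored for that parallax value."""
    nearest = min(table, key=lambda s: abs(s - distance))
    return table[nearest].get(parallax)
```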

Returning to Fig. 3, the position estimating unit 27 includes a reading unit 7, a rotation angle sensor 8, and a position calculating unit 43. The detected values of the reading unit 7 and the rotation angle sensor 8 are output to the position calculating unit 43.

The position calculating section 43 calculates the current position of the vehicle 1 from the start position So by a simplified odometry method based on the input detected values. That is, the position information of the start position So or of each vertex Fp is input from the reading unit 7 to the position calculating section 43, and the movement distance from the start position So or the vertex Fp is calculated by counting the detection values of the rotation angle sensor 8. The calculated position information of the vehicle 1 is output to the travel-path position information storage unit 25, and the travel-path position information corresponding to that vehicle position is read out. The read travel-path position information is output to the travel-path specifying unit 29.
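A minimal sketch of the simplified odometry described above, assuming the rotation angle sensor 8 behaves like a wheel encoder emitting pulses; the pulse count, pulses per revolution, and wheel diameter are illustrative assumptions:

```python
import math

def moved_distance(pulse_count, pulses_per_rev, wheel_diameter_m):
    """Distance travelled since the last reference point (start
    position So or a vertex Fp), from wheel-encoder pulse counts.
    Straight-line approximation along the path."""
    return pulse_count / pulses_per_rev * math.pi * wheel_diameter_m

def current_position(last_ref_position_m, pulse_count,
                     pulses_per_rev=360, wheel_diameter_m=0.5):
    """One-dimensional position of the vehicle along the travel path:
    position of the last reference point plus the distance moved."""
    return last_ref_position_m + moved_distance(
        pulse_count, pulses_per_rev, wheel_diameter_m)
```

Because the path is predetermined, this single scalar suffices to localize the vehicle; the vertices Fp reset the reference point and cancel accumulated error.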

The vehicle 1 travels on the predetermined travel path Tr, and therefore its position on the travel path Tr can be specified by the movement distance from the start position So or from a vertex Fp, each of which is a predetermined position. Since the position estimating unit 27 estimates the position of the vehicle 1 from this movement distance, the position of the vehicle 1 can be estimated with a one-dimensional coordinate system along the travel path. Further, since the movement distance is a one-dimensional quantity, the amount of data handled in specifying the vehicle position can be significantly reduced. The position estimating unit 27 can also correct the accumulated error of the calculated position by reading the magnetic-field information of a vertex Fp buried at a position other than the start position So. Thus, the current position of the vehicle 1 can be estimated with high accuracy.

The travel-path specifying unit 29 specifies the travel path Tr in the parallax image generated by the parallax image generating unit 23 on the basis of the input travel-path position information. The travel path is specified by the parallax values corresponding to the previously stored X-coordinate values. The parallax image P2 shown in Fig. 7 was taken at the same point as the parallax image P1 shown in Fig. 6. In Fig. 7, the X-coordinate range of the parallax value de in the travel-path position information is X5 to X6, but because of the obstacle Ob4 there is no region with parallax value de at X5. In this case, the X-coordinate range X5 to X6 is nevertheless specified as the travel path for the parallax value de. The specified travel-path region is output to the determination section 33.

The obstacle detecting section 31 detects, as an obstacle, a region in the parallax image whose parallax value differs from that of the travel path and in which a region of the same parallax value extends over a predetermined number of pixels or more in the Y direction. In Figs. 6 and 7, the regions Ob1, Ob2, Ob3, and Ob4 are detected as obstacles. The detected obstacle regions Ob1 to Ob4 are output to the determining section 33. Instead of requiring exactly the same parallax value within an obstacle region, the region of an obstacle having a plurality of parallax values may be specified by grouping parallax values within a predetermined range around a reference parallax value.
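The detection rule described above (a region whose parallax differs from the travel path and whose constant-parallax run spans a minimum number of pixels in the Y direction) can be sketched column-wise as follows; the thresholds and the simple vertical-run grouping are illustrative assumptions:

```python
import numpy as np

def detect_obstacles(disp, min_height_px=3, tol=0):
    """Toy stand-in for the obstacle detecting section 31: scan each
    column of the disparity image and report vertical runs of
    (near-)equal, non-zero disparity at least `min_height_px` pixels
    tall.  Returns tuples (column, y_top, y_bottom, disparity)."""
    h, w = disp.shape
    obstacles = []
    for x in range(w):
        y = 0
        while y < h:
            d0 = disp[y, x]
            y1 = y
            # extend the run while disparity stays within tolerance
            while y1 < h and abs(int(disp[y1, x]) - int(d0)) <= tol:
                y1 += 1
            if d0 > 0 and (y1 - y) >= min_height_px:
                obstacles.append((x, y, y1 - 1, int(d0)))
            y = y1
    return obstacles
```

A production version would also merge adjacent columns into 2-D regions, as the patent's grouping remark suggests.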

The judging unit 33 judges whether the detected obstacle exists on the travel path Tr. It judges whether the range of X-coordinate values at the lower end of the input obstacle region is included in the X-coordinate range of the travel path at the corresponding parallax value. As a result, for example, only the obstacle Ob4 among the obstacles Ob1 to Ob4 shown in Fig. 7 is determined to be on the travel path Tr.
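The inclusion test performed by the judging unit can be sketched as follows, reading "included in the range" as containment of the obstacle's lower-end X range (one possible reading); the names and values are illustrative:

```python
def on_travel_path(obstacle_x_range, obstacle_parallax, path_ranges):
    """Judge whether an obstacle is on the travel path: the X range
    of the obstacle's lower end must fall inside the stored
    travel-path X range for the obstacle's parallax value.
    `path_ranges` maps parallax -> (x_min, x_max)."""
    rng = path_ranges.get(obstacle_parallax)
    if rng is None:
        return False  # no travel-path entry at this parallax
    ox_min, ox_max = obstacle_x_range
    return rng[0] <= ox_min and ox_max <= rng[1]
```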

Further, the judgment section 33 may first judge whether the rectangular area surrounding an obstacle region overlaps the travel-path region, and then perform the detailed judgment only for obstacles whose rectangular area overlaps it, that is, a two-stage judgment. This two-stage judgment makes it possible to judge at higher speed whether an obstacle exists on the travel path. In addition, when the Y-coordinate value (height) of the lower end of the rectangle is larger than a predetermined height, the region can be judged not to be an obstacle, being, for example, part of a tree such as leaves or a branch hanging overhead, which improves the precision of the obstacle judgment.

The distance calculating unit 34 calculates the distance in real space to the obstacle on the basis of the parallax information of the obstacle region when it is determined that the obstacle exists on the travel path Tr. The distance is calculated from the parallax by the conventional parallel-stereo method. The calculated distance is output to the vehicle speed control unit 16. The vehicle speed control unit 16 decelerates or stops the vehicle 1 according to the distance to the obstacle; the closer the vehicle 1 is to the obstacle, the larger the deceleration. This improves the avoidance of contact between the vehicle 1 and the obstacle.
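The conventional parallel-stereo relation underlying this calculation is Z = f * B / d, where f is the focal length in pixels, B the baseline between the two image sensors, and d the parallax. A one-line sketch with illustrative camera parameters:

```python
def distance_from_parallax(parallax_px, focal_px, baseline_m):
    """Parallel-stereo range equation Z = f * B / d.  The focal
    length (in pixels) and baseline used in any call are illustrative
    assumptions, not this device's calibration."""
    if parallax_px <= 0:
        raise ValueError("parallax must be positive for a finite range")
    return focal_px * baseline_m / parallax_px
```

For example, with an 800 px focal length and a 0.25 m baseline, a parallax of 20 px corresponds to a range of 10 m.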

The person judgment section 35 judges whether the region on the reference image corresponding to the obstacle region on the travel path in the parallax image is a person. The judgment uses a combination of a local image feature (the HOG feature) and a statistical learning method. Alternatively, the person judgment may be performed by matching against a template prepared in advance. If the obstacle region in the reference image is determined to be a person, the occupant is warned on the touch panel display 12 that a person is in front of the vehicle 1, and an alarm is sounded from the alarm 17 to alert the person in front of the vehicle 1. If the obstacle region is determined not to be a person, the occupant is warned via the touch panel display 12 that an obstacle exists on the travel path ahead.
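The HOG feature mentioned above is built from cell-wise gradient-orientation histograms. The following sketch computes one such histogram with NumPy; a real person detector would tile cells over the detection window, block-normalize, and feed the resulting vector to a trained classifier such as a linear SVM, all of which is omitted here:

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Unsigned gradient-orientation histogram for one HOG cell.
    `patch` is a small 2-D grayscale array; the result is an
    L1-normalized histogram of gradient magnitude over `n_bins`
    orientation bins spanning 0-180 degrees."""
    patch = patch.astype(np.float64)
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]   # central differences
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(n_bins)
    bin_idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    np.add.at(hist, bin_idx.ravel(), mag.ravel())  # magnitude-weighted votes
    s = hist.sum()
    return hist / s if s > 0 else hist
```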

The parallax image generating unit 23, the travel-path specifying unit 29, the obstacle detecting unit 31, the judging unit 33, the distance calculating unit 34, the person judging unit 35, and the position calculating unit 43 are implemented by, for example, a microprocessor or a field-programmable gate array (FPGA).

Next, the operation of obstacle detection in the first embodiment will be described with reference to Fig. 8. Fig. 8 is a flowchart showing the procedure of obstacle detection.

When the vehicle 1 starts to move from the start point So of the travel path Tr, the position estimation unit 27 estimates the current position of the vehicle 1 on the travel path Tr (step S01). The stereo camera 3 photographs the area in front of the vehicle 1 at predetermined time intervals; by photographing with the plurality of image sensors, parallax information (distance information) is acquired. The parallax image generating section 23 then generates a parallax image, that is, three-dimensional information, from the photographed images (step S02). Next, the travel-path specifying section 29 specifies the travel path Tr in the parallax image on the basis of the travel-path position information corresponding to the current position of the vehicle 1 (step S03). Further, the obstacle detecting section 31 detects obstacles in the generated parallax image (step S04). Subsequently, the judging section 33 judges whether a detected obstacle is on the travel path Tr (step S05).

If it is judged that there is no obstacle on the travel path Tr (No in step S06), the obstacle detection operation ends and the processing restarts from step S01. If it is determined in step S06 that there is an obstacle (Yes in step S06), the person determination unit 35 determines whether the obstacle is a person (step S07). If the obstacle is not a person (No in step S08), the vehicle speed control unit 16 decelerates or stops the vehicle 1 based on the distance to the obstacle (step S10). If the obstacle is determined to be a person (Yes in step S08), an alarm is sounded from the alarm 17 of the vehicle 1 to warn the person that the vehicle 1 is approaching (step S09). The vehicle speed control section 16 then performs deceleration or stop control of the vehicle 1, including brake braking, according to the distance to the person (step S10).
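The flow of steps S01 to S10 can be summarized as one processing cycle; each stage is injected as a callable so the sketch stays self-contained and makes no assumption about the actual units' interfaces:

```python
def obstacle_cycle(estimate_position, sense_3d, specify_path,
                   detect_obstacles, on_path, is_person,
                   warn, speed_control):
    """One cycle of the Fig. 8 flow; returns the first on-path
    obstacle acted upon, or None if the path was clear."""
    pos = estimate_position()              # S01: position on the path
    scene = sense_3d()                     # S02: 3-D (parallax) data
    path = specify_path(pos)               # S03: travel path in image
    for ob in detect_obstacles(scene):     # S04: candidate obstacles
        if not on_path(ob, path):          # S05/S06: on-path check
            continue
        if is_person(ob):                  # S07/S08: person judgment
            warn(ob)                       # S09: sound the alarm
        speed_control(ob)                  # S10: decelerate or stop
        return ob
    return None
```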

As described above, according to the first embodiment, the position information specifying the travel path Tr in the three-dimensional information is specified using only a two-dimensional amount of information, namely the parallax as distance information and the X-coordinate value as coordinate information in the horizontal direction of the vehicle 1. The amount of information to be handled can therefore be reduced, lowering the calculation load. This shortens the calculation time, allows the judgment of whether an obstacle is on the travel path to be made while sufficient distance to the obstacle still remains, and improves the avoidance of contact with the obstacle.

In addition, by limiting the obstacle determination to the travel path Tr, a three-dimensional object beside the travel path, such as a tree, is not judged as an obstacle, so the vehicle 1 is prevented from stopping erroneously and can run smoothly. At the same time, the vehicle 1 can be reliably decelerated and stopped for an obstacle on the travel path Tr that it is likely to contact.

Further, since the distance information detecting unit 21 has the stereo camera 3, a parallax image can be used as the three-dimensional information. This makes it possible to appropriately specify the travel path Tr in the parallax image and detect obstacles. When a one-dimensional-scan laser radar is used, undulations of the road surface easily cause the road surface itself to be detected as an obstacle; by using the stereo camera 3, such false detections are more easily avoided.

Further, when the precision of the current position estimation is to be improved, the interval at which the accumulated error is corrected may be shortened, for example by increasing the number of vertices Fp buried in the travel path. In this way there is no need to mount an expensive GPS receiver or sensor on each vehicle, so the current position estimation can be performed at low cost and with high precision.

(Embodiment 2)

Next, an obstacle detecting device according to the second embodiment will be described with reference to Fig. 9. Fig. 9 is a block diagram showing the configuration of the obstacle detecting device according to the second embodiment. In the second embodiment, parts denoted by the same reference numerals as in the first embodiment are the same as in the first embodiment, and their description is omitted. The structure of the vehicle and of the obstacle detecting device other than that described below is the same as in the first embodiment.

A feature of the obstacle detecting device 14' of the second embodiment is that a distance measuring device 21' is used in place of the stereo camera 3 of the distance information detecting unit 21 of the first embodiment. In the first embodiment, distance information is detected by obtaining parallax information with the two image sensors of the stereo camera 3. In the second embodiment, the distance from the obstacle detecting device 14' to the object ahead of the vehicle 1 is detected directly by the distance measuring device 21'. Therefore, in the second embodiment, measured distance values are used instead of the parallax values of the first embodiment.

The obstacle detecting device 14' according to the second embodiment has a configuration in which the distance measuring device 21' is provided instead of the stereo camera 3. The distance measuring device 21' is, for example, a distance-measuring radar or a laser radar (LIDAR). Distance information in polar coordinates is obtained by the radar scan; therefore, in the second embodiment, the scan angle θ is used instead of the X-coordinate value of the first embodiment. A distance image generating section 23' is provided in place of the parallax image generating section 23 of the first embodiment. The distance image generating section 23' generates, from the detection result of the distance measuring device 21', a distance image whose horizontal axis is the scan angle θ, whose vertical axis is the Y-axis, and whose pixel values are distance information. The Y-coordinate preferably has at least two values; that is, it is preferable to obtain distance information at different heights with the distance measuring device 21'. The travel-path position information storage unit 25' of the second embodiment stores, for each of the points S1 to Sn, the distance value r in association with the θ range of the travel path Tr in the distance image at that point. For example, the θ range indicating the travel path Tr is stored for every 1 m of the distance value r. The travel-path specifying unit 29 of the second embodiment specifies the travel path Tr in the distance image generated by the distance image generating section 23' on the basis of this stored data; the travel path is specified by the distance values corresponding to the previously stored scan angles θ. The obstacle detecting unit 31 of the second embodiment first discriminates the group of measurement points detected below a predetermined Y-coordinate on the distance image as the ground and excludes it.
The predetermined Y-coordinate value changes according to the r value, and the two values are stored in association in the travel-path position information storage unit 25'. Among the remaining measurement points, points whose mutual distance on the θ-r plane is small are grouped together and detected as obstacles; the grouping distance is preset. If distance information is obtained at only one Y-coordinate value, the ground discrimination may be omitted and obstacles detected by grouping on the θ-r plane alone. Subsequently, the determination section 33 of the second embodiment determines whether a detected obstacle exists on the travel path Tr: it judges whether the θ range at the lower end of the input obstacle region and its r value are included in the θ and r ranges of the specified travel path. As in the first embodiment, the judging section 33 may first judge whether the rectangular area surrounding the obstacle region overlaps the travel-path region and then perform the detailed judgment only for obstacles whose rectangular area overlaps it. In the second embodiment, the distance calculating unit 34 of the first embodiment is omitted, because the distance in real space to the obstacle is measured directly by the distance measuring device 21'. The person determining section 35 of the second embodiment determines whether the region on the reference image corresponding to the obstacle region on the travel path in the distance image is a person.
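The grouping of measurement points on the θ-r plane can be sketched as single-linkage clustering with a preset distance threshold; the threshold, the (θ, r) tuple format, and the Euclidean metric on the raw θ-r values are illustrative assumptions:

```python
def group_scan_points(points, eps=0.5):
    """Group radar measurement points that are close on the θ-r
    plane (single-linkage clustering with threshold `eps`); each
    resulting group is a candidate obstacle.  `points` is a list
    of (theta, r) tuples."""
    groups = []
    for p in points:
        merged = None
        for g in groups:
            if any((p[0] - q[0])**2 + (p[1] - q[1])**2 <= eps**2
                   for q in g):
                if merged is None:
                    g.append(p)        # join the first close group
                    merged = g
                else:
                    merged.extend(g)   # p bridges two groups: fuse them
                    g.clear()
        groups = [g for g in groups if g]
        if merged is None:
            groups.append([p])         # p starts a new group
    return groups
```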

As described above, according to the second embodiment, the distance information r for each scan angle θ can be appropriately obtained by using a distance-measuring radar as the distance information detecting unit. Compared with the stereo camera 3, distances can be measured with high precision, particularly at long range, and the travel path Tr and obstacles can be appropriately judged in the three-dimensional information.

(Embodiment 3)

Next, the obstacle detecting device according to the third embodiment will be described with reference to Fig. 10. Fig. 10 is a block diagram showing the configuration of the obstacle detecting device in the third embodiment. In the third embodiment, parts denoted by the same reference numerals as in the first embodiment are the same as in the first embodiment, and their description is omitted. The structure of the vehicle and of the obstacle detecting device other than that described below is the same as in the first embodiment.

In the first embodiment, a person is detected by judging whether the obstacle determined to be on the travel path Tr is human. In contrast, the obstacle detecting device 54 of the third embodiment is characterized in that a person is detected by judging whether an obstacle existing in a range wider than, and including, the travel path Tr is human. In this case, it is preferable to detect persons within a real-distance range of, for example, about 1 m in the width direction from the specified travel-path region. As a result, a person who has only part of the body over the travel-path region can be detected, so avoiding action can be taken early.

The obstacle detecting device 54 of the third embodiment adds an extended-area specifying part 64 to the structure of the obstacle detecting device 14 of the first embodiment. In the first embodiment, the judgment section 33 judged whether there is an obstacle on the travel path specified by the specifying section 29. In the third embodiment, as shown in Fig. 11, the extended region Tr', a region expanded outward in the width direction from the travel path Tr by a predetermined extent, is specified by the extended-area specifying unit 64, and obstacles and persons are determined within this extended region Tr'. The predetermined extent is, for example, a real distance of about 1 m in the width direction. That is, the region in which the presence of an obstacle is determined is extended outward beyond the specified travel-path width shown in Fig. 6 by the number of pixels corresponding to 1 m at the parallax value concerned.
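How many pixels correspond to a 1 m widthwise extension follows from the parallel-stereo relations x = f * X / Z and Z = f * B / d, which give dx = X * d / B, independent of the focal length; a sketch with an illustrative baseline:

```python
def lateral_meters_to_pixels(meters, parallax_px, baseline_m):
    """Image pixels corresponding to a lateral real-world distance
    `meters` at the depth implied by a parallax value.  From
    x = f*X/Z and Z = f*B/d it follows that dx = X * d / B."""
    return meters * parallax_px / baseline_m
```

For instance, with a 0.25 m baseline, extending the path by 1 m at a parallax of 20 px widens the region by 80 px; nearer rows (larger parallax) are widened by more pixels, as Fig. 11's wedge-shaped extension suggests.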

In addition to the function of the determining section 33 of the first embodiment, the determining section 33' of the third embodiment determines whether the obstacle detected by the obstacle detecting section 31 is in the extended region Tr'. Likewise, in addition to the function of the person judging section 35 of the first embodiment, the person judging section 35' of the third embodiment determines whether an obstacle judged to be in the extended region Tr' is a person.

With the above arrangement, the determination section 33' and the person judgment section 35' can each use different processing frequencies for the obstacle judgment and the person judgment on the travel path Tr and in the extended region Tr'. That is, the person detection near the travel path, whose processing time is long and whose detection result is not used for speed control, can be run at a reduced frequency. This secures the frame rate of the obstacle detection processing on the travel path Tr. As a result, erroneous detection of an obstacle can be avoided and the vehicle 1 can be stopped stably. It is preferable to use the detection results of a plurality of frames rather than only one frame, and it is therefore preferable that the frame rate of the obstacle detection processing on the travel path Tr be high.

Thus, the vehicle 1 can be reliably decelerated and stopped for an obstacle on the travel path Tr that it is likely to contact. A person near the widthwise end of the travel path is easily confused with stationary objects such as surrounding trees, and it is difficult to judge the possibility of contact for a person only part of whose body overhangs the travel path. By detecting such a person as well, a warning can prompt the person to move away from the travel path, achieving both a reduction in contact and the avoidance of unpleasant erroneous stops.

(Embodiment 4)

Next, the obstacle detecting device according to the fourth embodiment will be described with reference to Fig. 12. Fig. 12 is a block diagram showing the configuration of the obstacle detecting device in the fourth embodiment. In the fourth embodiment, parts denoted by the same reference numerals as in the first embodiment are the same as in the first embodiment, and their description is omitted. The structure of the vehicle and of the obstacle detecting device other than that described below is the same as in the first embodiment.

In Embodiment 1, obstacles are detected from the entire generated parallax image, and it is then determined whether each detected obstacle is on the travel path. In contrast, in Embodiment 4, the parallax image of the travel path is first extracted from the generated parallax image, and obstacles are detected from the extracted travel-path parallax image. As a result, the area in which obstacles are detected becomes smaller, so the calculation load can be further reduced.

The travel-path parallax image extracting section 66 of the fourth embodiment extracts, from the parallax image generated by the parallax image generating section 23, only the travel-path image, using the travel-path position information stored in the position information storage section 25. For example, suppose the parallax image generating unit 23 generates the parallax image shown in Fig. 7. The travel-path parallax image extracting unit 66 uses the travel-path position information to extract the parallax image of the travel-path Tr region together with the parallax regions that have the same parallax value as the travel path and extend in the Y direction from it. Fig. 13 shows the extracted travel-path parallax image P3.

The region Ob4 has the parallax value dc. The region Ob4 is extracted as a parallax region extending in the Y direction from the travel path Tr within the X-coordinate range X3 to X8 having the parallax value dc on the travel path Tr. Similarly, the region Ob1' is extracted as a parallax region extending in the Y direction from the travel path Tr within the X-coordinate range X5 to X6 having the parallax value de on the travel path Tr. Since only the parallax image on the travel path Tr is extracted, not only is the travel path specified, but obstacles other than those on the travel path, such as Ob2 and Ob3 in the parallax image of Fig. 7 of the first embodiment, are excluded.

The extracted travel-path parallax image is sent to the obstacle detection unit 31'. The obstacle detecting unit 31' detects obstacles within the travel-path parallax image P3 as in the first embodiment. As a result, the region Ob4 is detected as an obstacle; the region Ob1' is not detected as an obstacle, since the minimum of its Y-coordinate values is distant from the travel path Tr. The person judgment section 35 judges whether the detected obstacle Ob4 is a person. Further, the obstacle detecting device 56 may include an extended-area extracting unit 64'. As shown in Fig. 14, the extended-area extracting unit 64' extracts parallax information for the extended region Tr', which is extended by a predetermined range outward in the width direction from the extracted travel path Tr. Fig. 14 shows the relationship between the travel path Tr extracted when the parallax image generating section 23 generates the parallax image P1 shown in Fig. 6 and the parallax image P4 of the extended region Tr'. The obstacle detecting unit 31' also detects obstacles within the extracted extended region Tr'. When an obstacle is detected in the extended region Tr', the person judgment part 35 judges whether the detected obstacle is a person. As a result, the same effect as providing the extended-area specifying section 64 in the third embodiment can be obtained.

Next, the operation of obstacle detection in the fourth embodiment will be described with reference to Fig. 15. Fig. 15 is a flowchart showing the procedure for detecting an obstacle. Since steps S01 and S02 are the same as in the first embodiment, their description is omitted.

The travel-path parallax image extracting unit 66 extracts the travel-path parallax image from the parallax image on the basis of the travel-path position information corresponding to the current position of the vehicle 1 (step S03). Subsequently, the obstacle detecting unit 31' detects obstacles in the extracted travel-path parallax image (step S04). If no obstacle is detected in the travel-path parallax image (No in step S06), the obstacle detection operation ends and the processing restarts from step S01. If an obstacle is detected on the travel path Tr (Yes in step S06), the person determining section 35 determines whether the obstacle is a person (step S07). The processing from step S07 onward is the same as in the first embodiment, and its description is omitted.

As described above, according to the fourth embodiment, the calculation load can be reduced because stationary objects off the travel path are not detected as obstacles. For example, when detecting a person standing in front of a wall beside the travel path, the wall and the person are ordinarily detected as a single obstacle, making it difficult to recognize the person as a person. According to the fourth embodiment, however, since the wall outside the travel path is removed from the parallax image, only the person inside can be detected as an obstacle, improving the detection accuracy of the obstacle. In addition, since the processing of steps S03 to S10 can also be performed for the extended region, persons in the vicinity of the travel path can be detected and warned. When a radar is used instead of the stereo camera, it is preferable to mask the data outside the travel-path range.

The present invention is not limited to the above embodiments and may be modified as follows.

(1) In the third embodiment, a person is detected by judging whether an obstacle determined to be in the extended region is a person. The present invention is not limited to this; a person detecting section that detects persons within a range wider than, and including, the travel path Tr may be provided to operate in parallel with the obstacle detection. As a result, a person who has only part of the body over the travel-path region can be detected, so avoiding action can be taken early. Moreover, providing a person detection unit separate from the obstacle detecting unit 31 allows different processes to run in parallel: reliable contact avoidance for obstacles on the travel path, which have a high possibility of contact, and a different treatment for persons near both widthwise ends of the travel path, who are easily confused with surrounding trees.

(2) The obstacle detecting device may be constituted by combining the respective configurations of the first to fourth embodiments.

(3) In the above embodiments, the obstacle detecting device 14 is provided in the vehicle 1, but the invention is not limited thereto. It may also be employed, for example, in a vision system for an autonomous robot or in a support system for visually impaired persons.

1: vehicle 3: stereo camera
3a: left image sensor 3b: right image sensor
14, 14', 54, 56: obstacle detecting device
21, 21': distance measuring instrument
23: parallax image generating unit 23': distance image generating unit
25, 25': travel path position information storage unit
27: position estimating unit 29: travel path specifying unit
31, 31': obstacle detecting unit
33, 33': judgment unit 35, 35': person judgment unit
64: extension area extracting unit 66: travel path parallax image extracting unit

Claims (17)

An obstacle detection device mounted on a moving body traveling on a predetermined travel path, comprising:
a distance information detecting unit for detecting distance information from the obstacle detection device to an object ahead of the moving body;
a three-dimensional information generating unit for generating three-dimensional information of the area ahead of the moving body on the basis of the distance information;
a storage unit in which travel path position information is stored according to the position of the moving body from a predetermined position;
a position estimating unit for estimating the position of the moving body on the travel path;
a travel path three-dimensional information extracting unit for extracting the three-dimensional information of the travel path from the three-dimensional information, based on the travel path position information; and
an obstacle detecting unit for detecting an obstacle based on the travel path three-dimensional information,
wherein the travel path position information is specified by the distance information and coordinate information in the horizontal direction of the moving body.
The obstacle detection device according to claim 1,
further comprising a person judging unit which judges whether or not the obstacle is a person.
3. The obstacle detection device according to claim 2,
further comprising an extension area three-dimensional information extracting unit for extracting three-dimensional information in an extension area, which is a predetermined area adjacent to the travel path, based on the travel path position information,
wherein the obstacle detecting unit also detects an obstacle in the extension area,
and the person judging unit judges whether or not an obstacle detected in the extension area is a person.
The obstacle detection device according to claim 1,
further comprising: an extension area three-dimensional information extracting unit for extracting three-dimensional information in an extension area, which is a predetermined area adjacent to the travel path, based on the travel path position information;
and a person detecting unit for detecting a person based on the three-dimensional information in the extension area.
An obstacle detection device mounted on a moving body traveling on a predetermined travel path, comprising:
a distance information detecting unit for detecting distance information from the obstacle detection device to an object ahead of the moving body;
a three-dimensional information generating unit for generating three-dimensional information of the area ahead of the moving body on the basis of the distance information;
a storage unit for storing travel path position information for specifying the travel path in the three-dimensional information according to the position of the moving body from a predetermined position;
a position estimating unit for estimating the position of the moving body on the travel path;
a travel path specifying unit for specifying the travel path in the three-dimensional information from the estimated position and the travel path position information;
an obstacle detecting unit that detects an obstacle based on the three-dimensional information; and
a determination unit that determines whether or not the detected obstacle is on the specified travel path,
wherein the travel path position information is specified by the distance information and coordinate information in the horizontal direction of the moving body.
6. The obstacle detection device according to claim 5,
further comprising a person judging unit for judging whether or not an obstacle determined to be on the travel path is a person.
The obstacle detection device according to claim 6,
further comprising an area specifying unit for specifying an extension area, which is a predetermined area adjacent to the specified travel path,
wherein the determination unit determines whether or not the detected obstacle is on the specified travel path or on the extension area,
and the person judging unit judges whether or not an obstacle determined to be on the extension area is a person.
6. The obstacle detection device according to claim 5,
further comprising: an area specifying unit for specifying an extension area, which is a predetermined area adjacent to the specified travel path;
and a person detecting unit for detecting a person in the specified extension area.
The obstacle detection device according to claim 1,
wherein the position estimating unit estimates the position of the moving body on the travel path from the moving distance from a predetermined position.
10. The obstacle detection device according to any one of claims 1 to 9,
wherein the distance information detecting unit has a stereo camera,
and the three-dimensional information is a parallax image.
10. The obstacle detection device according to any one of claims 1 to 9,
wherein the distance information detecting unit has a radar.
A moving body comprising the obstacle detection device according to any one of claims 1 to 9.
13. The moving body according to claim 12,
further comprising an alarm for issuing a warning when an obstacle is detected on the travel path.
13. The moving body according to claim 12,
further comprising a speed control unit for decelerating or stopping the moving body when an obstacle is detected on the travel path.
A moving body comprising the obstacle detecting device according to any one of claims 3, 4, 7, and 8, comprising:
a speed control unit for decelerating or stopping the moving body with respect to an obstacle detected on the travel path;
and an alarm for issuing a warning with respect to a person detected in the extension area.
9. The obstacle detection device according to any one of claims 2 to 8,
wherein the position estimating unit estimates the position of the moving body on the travel path from the moving distance from a predetermined position.
14. The moving body according to claim 13,
further comprising a speed control unit for decelerating or stopping the moving body when an obstacle is detected on the travel path.
KR1020160022324A 2015-03-06 2016-02-25 Obstacle detecting device and moving object provided therewith KR101747761B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015044865A JP5947938B1 (en) 2015-03-06 2015-03-06 Obstacle detection device and moving body equipped with the same
JPJP-P-2015-044865 2015-03-06

Publications (2)

Publication Number Publication Date
KR20160108153A KR20160108153A (en) 2016-09-19
KR101747761B1 true KR101747761B1 (en) 2017-06-15

Family

ID=56329532

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160022324A KR101747761B1 (en) 2015-03-06 2016-02-25 Obstacle detecting device and moving object provided therewith

Country Status (3)

Country Link
JP (1) JP5947938B1 (en)
KR (1) KR101747761B1 (en)
TW (1) TWI598854B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018055371A (en) * 2016-09-28 2018-04-05 アイシン精機株式会社 Travelling environment prediction device
KR101894731B1 (en) * 2016-11-28 2018-09-04 동국대학교 산학협력단 System and method for vehicle collision avoidance
JP7062407B2 (en) * 2017-11-02 2022-05-06 株式会社東芝 Obstacle detection device
JP6886929B2 (en) * 2018-01-09 2021-06-16 日立建機株式会社 Transport vehicle
WO2020026294A1 (en) * 2018-07-30 2020-02-06 学校法人 千葉工業大学 Map generation system and mobile object
KR102676437B1 (en) 2019-03-15 2024-06-19 야마하하쓰도키 가부시키가이샤 Vehicle running on preset route
CN114072319B (en) * 2019-07-17 2024-05-17 村田机械株式会社 Traveling vehicle and traveling vehicle system
JP7227112B2 (en) * 2019-09-27 2023-02-21 日立Astemo株式会社 OBJECT DETECTION DEVICE, TRIP CONTROL SYSTEM, AND TRIP CONTROL METHOD
JP2022068498A (en) 2020-10-22 2022-05-10 ヤマハ発動機株式会社 Tracking system and method of on-water object, and ship including tracking system of on-water object

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003285705A (en) 2002-01-28 2003-10-07 Matsushita Electric Works Ltd Obstacle detection alarming system on vehicle
KR101395089B1 (en) 2010-10-01 2014-05-16 안동대학교 산학협력단 System and method for detecting obstacle applying to vehicle
KR101510050B1 (en) 2014-04-15 2015-04-07 현대자동차주식회사 Vehicle cruise control apparatus and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05151345A (en) * 1991-11-29 1993-06-18 Mazda Motor Corp Image processor
JPH10141954A (en) 1996-11-06 1998-05-29 Komatsu Ltd Device for detecting obstruction on track for moving body
JP5827508B2 (en) * 2011-07-13 2015-12-02 ヤマハ発動機株式会社 Obstacle detection device for vehicle and vehicle using the same
JP5996421B2 (en) * 2012-12-26 2016-09-21 ヤマハ発動機株式会社 Obstacle detection device and vehicle using the same
JP6114572B2 (en) * 2013-02-26 2017-04-12 ヤマハ発動機株式会社 Object area estimation method, object area estimation apparatus, object detection apparatus including the object area, and vehicle.

Also Published As

Publication number Publication date
KR20160108153A (en) 2016-09-19
TWI598854B (en) 2017-09-11
JP5947938B1 (en) 2016-07-06
JP2016164735A (en) 2016-09-08
TW201633267A (en) 2016-09-16

Similar Documents

Publication Publication Date Title
KR101747761B1 (en) Obstacle detecting device and moving object provided therewith
KR101887335B1 (en) Magnetic position estimating apparatus and magnetic position estimating method
US20160292905A1 (en) Generating 3-dimensional maps of a scene using passive and active measurements
GB2313971A (en) Obstacle tracking by moving vehicle
JP5439876B2 (en) Image processing apparatus and method, and program
JP2017096777A (en) Stereo camera system
CN112130158B (en) Object distance measuring device and method
JP4055701B2 (en) Autonomous mobile vehicle
JP7552101B2 (en) Industrial Vehicles
JP4940706B2 (en) Object detection device
JP6247569B2 (en) Distance estimating device and vehicle equipped with the same
JP7179687B2 (en) Obstacle detector
JP6204782B2 (en) Off-road dump truck
JP2019206318A (en) Monitoring device
JP5996421B2 (en) Obstacle detection device and vehicle using the same
JP6690904B2 (en) Self-driving vehicle
JP7308591B2 (en) moving body
KR20180000965A (en) System and method for Autonomous Emergency Braking
US11884303B2 (en) Apparatus and method for determining lane change of surrounding objects
JP2016014610A (en) Camera system, distance measurement method, and program
JP6114572B2 (en) Object area estimation method, object area estimation apparatus, object detection apparatus including the object area, and vehicle.
CN111487956B (en) Robot obstacle avoidance method and robot
JP2011177334A (en) Step detecting device and electric-powered vehicle equipped with the same
US20240116500A1 (en) Information processing apparatus, movable apparatus, information processing method, and storage medium
EP4318165A1 (en) Industrial truck autonomous or assisted driving using a plurality of cameras

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
GRNT Written decision to grant