JP2003111075A - Obstacle detecting method and apparatus - Google Patents

Obstacle detecting method and apparatus

Info

Publication number
JP2003111075A
JP2003111075A (application number JP2001296667A)
Authority
JP
Japan
Prior art keywords
obstacle
unit
area
signal conversion
binarized signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2001296667A
Other languages
Japanese (ja)
Inventor
Masahiro Sugimoto
Hitoshi Takamizawa
Original Assignee
Masahiro Sugimoto
Hitoshi Takamizawa
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Masahiro Sugimoto and Hitoshi Takamizawa
Priority to JP2001296667A
Publication of JP2003111075A
Application status: Pending

Abstract

(57) [Summary]
[PROBLEM] To provide an obstacle detection device capable of easily and accurately detecting the presence or absence of obstacles around a moving body.
[SOLUTION] The device comprises: a binarized signal conversion unit 22 that converts two frames of image data A and B into binarized signals; a tilt correction processing unit 23 that corrects the tilt of each binarized frame; a figure extraction unit 24 that extracts figures from the tilt-corrected image data; an obstacle candidate extraction unit 26 that calculates the area and the barycentric coordinates of each extracted figure; obstacle list storage units 27A and 27B that store the label value, barycentric coordinates, and area of each extracted obstacle-candidate figure; a combination extraction unit 28 that extracts from the lists in 27A and 27B combinations of figures whose barycentric coordinates lie within a predetermined distance of each other; and a determination unit 29 that outputs an alarm signal to the alarm device 3 when the rate of change of area within an extracted combination is equal to or greater than a fixed value.

Description

Description: BACKGROUND OF THE INVENTION

1. Field of the Invention
The present invention relates to a method and an apparatus for detecting obstacles around a moving body such as a ship, an aircraft, or an automobile during operation and presenting that information to the operator, so that the operator can recognize the obstacles and drive the moving body safely.

2. Description of the Related Art
Japanese Patent Application No. Hei 7-34515 provides an example of an obstacle detecting device that detects an obstacle ahead of a vehicle in the traveling direction and makes the driver aware of it. [0003] That device comprises a visible-light or infrared camera for photographing the scene around the moving body, imaging means for forming an object image of the surrounding scene from the camera's detection signal, extraction means for extracting an obstacle region, and means for measuring the relative distance between the obstacle and the moving body; from the camera's view of the surroundings it creates an obstacle distribution image and detects the presence of an obstacle by extracting the dangerous regions. That is, this prior art compares two images taken at a fixed time interval, for example an image taken immediately after the vehicle stops and an image taken immediately before it departs, finds any image region that exists immediately before departure but not immediately after stopping, and recognizes that region as an obstacle.
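The stop/start comparison in this prior art is, at its core, frame differencing. A minimal sketch of that idea (not code from the patent; the function name and threshold are illustrative), using NumPy:

```python
import numpy as np

def changed_regions(img_stop, img_start, threshold=30):
    """Mark pixels that differ between the frame taken just after
    stopping and the frame taken just before departure.

    img_stop, img_start: 2-D uint8 grayscale arrays of equal shape.
    Returns a boolean mask of changed pixels; in the prior art,
    such regions are treated as newly appeared obstacles."""
    diff = np.abs(img_stop.astype(np.int16) - img_start.astype(np.int16))
    return diff > threshold

# A region present only in the second frame shows up in the mask.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1:3, 1:3] = 200          # object appears before departure
mask = changed_regions(a, b)
```

Whatever survives the threshold is treated as a newly appeared object; the paragraphs that follow explain why this simple scheme breaks down once the platform itself moves.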
[0004] In the prior art of the above configuration, both of the two images are taken while the vehicle is stopped, so the positional relationship between the vehicle-mounted camera and the surrounding objects (scenery and the like) does not change, and most of the image content is common to the stop-time and departure-time frames; a changed portion can therefore be recognized directly. [0005] However, if such a technique is applied to a running vehicle or ship, the vehicle or ship moves during the interval between the two exposures, so the two images differ greatly, and an extracted changed region is not necessarily an obstacle. In the case of a ship in particular, the hull rolls with the waves, so the image is usually tilted or shifted up, down, left, and right, and comparing the two images is very difficult. [0006] Furthermore, when two images are compared, minute objects that do not hinder operation, such as waves and water droplets for a ship or pebbles and leaves for a vehicle, also appear in the images. Such minute objects ought to be removed as noise; yet if, as in the prior art, the two images are compared and the amount of change (the area on the screen) is simply measured, this noise is falsely detected as an obstacle and the detection accuracy suffers. [0007] SUMMARY OF THE INVENTION: The present invention was proposed to solve the above problems of the prior art. One object of the present invention is to provide an obstacle detection device that eliminates errors caused by image tilt and blur and thereby improves the accuracy of obstacle detection. Another object is to provide an obstacle detection device that eliminates the influence of noise such as minute objects and thereby improves the accuracy of determination.
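The embodiment described below counters this noise by removing high-frequency components before binarization (units 21 and 22, steps S101 and S102). A rough sketch of that two-step front end, with a box blur standing in for the high-frequency removal; the kernel size and threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

def remove_high_freq(img, k=3):
    """Crude low-pass filter: k-by-k box average (high-frequency removal)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float32)
    pad = np.pad(img.astype(np.float32), k // 2, mode="edge")
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def binarize(img, threshold=128):
    """Binarized signal conversion: 1 where bright, 0 elsewhere."""
    return (img >= threshold).astype(np.uint8)

# A lone bright pixel (wave splash) is averaged away, while a solid
# bright region survives binarization.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[1, 1] = 255            # isolated speck
frame[4:7, 4:7] = 255        # real object
binary = binarize(remove_high_freq(frame))
```

The isolated speck is averaged below the threshold and vanishes, while the core of the solid region survives.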
[0008] To achieve the above objects, according to the first aspect of the present invention, a binarized signal conversion process is applied to two images taken a fixed time apart; a tilt correction process is applied to each of the two binarized images to create corrected images; figures having at least a certain area are extracted from each corrected image; pairs of extracted figures considered to represent the same object are created; the rate of change of area is calculated for each pair; and an alarm is issued when the rate of change of area is equal to or greater than a fixed value. [0009] The fifth aspect of the present invention expresses the first aspect from the viewpoint of an apparatus, comprising: a binarized signal conversion processing unit that applies a binarized signal conversion process to two images taken a fixed time apart; a tilt correction processing unit that applies a tilt correction process to each of the two processed images to create corrected images; an obstacle extraction unit that extracts figures having at least a certain area from each corrected image; an obstacle determination unit that creates pairs of figures considered to display the same object, calculates the rate of change of area of each pair, and determines whether the rate of change is equal to or greater than a fixed value; and an alarm device that issues an alarm on command from the obstacle determination unit. [0010] According to the first or fifth aspect, correcting the tilt of the two images allows them to be compared on a common basis, so the same obstacle can be recognized even when its angle and position differ between the two images, as with images taken a fixed time apart from a rolling ship or other moving body. The areas of the obstacle in the two images are then compared, and if the rate of change is equal to or greater than a fixed value the obstacle is judged to be approaching and an alarm can be issued. [0011] According to the second aspect of the present invention, the tilt correction process sets detection lines at both ends of the binarized image, assigns labels to the image data touching both detection lines, creates every combination connecting the labels on the two detection lines, sets the line segment for each combination, extracts the line segment on which the largest number of pixel data lies, sets that segment as the reference line, and corrects the tilt by rotating the entire image data to align with the reference line. [0012] With this configuration, the line segment carrying the most contour pixels across the contour-extracted image, for example the horizon, is extracted and used as the reference line, so the tilts of the two images can be matched against the line segment assumed to change least between them. [0013] According to the third aspect of the present invention, the tilt correction process, after matching the tilts of the two images, shifts the two images so that their reference lines coincide. [0014] With this configuration, even when matching the tilts alone leaves the reference lines of the two images out of coincidence, shifting the images until the reference lines coincide removes the vertical and horizontal offset, so the obstacles captured in the two images can be compared easily. [0015] According to the fourth aspect of the present invention, the contour extraction processing includes a process for removing high-frequency components from the captured image and a binarized signal conversion process. The sixth aspect expresses the fourth aspect from the viewpoint of an apparatus: the binarized signal conversion processing unit comprises a unit for removing high-frequency components from the captured image data and a binarized signal conversion unit. According to the fourth or sixth aspect, noise components that are negligible as obstacles, such as wave splash, can be removed from the captured image, which both speeds up obstacle recognition and raises its accuracy.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
An embodiment of the obstacle detecting device according to the present invention will now be described with reference to the drawings. [1. Configuration of First Embodiment] The embodiment is described in detail with reference to FIGS. 1 to 7. However, the dimensions, materials, shapes, relative positions, and the like of the components described in this embodiment are, unless otherwise specified, merely illustrative examples and do not limit the scope of the present invention. FIG. 1 is a plan view showing an embodiment in which the present invention is applied as an obstacle detecting device for a ship; FIG. 2 is a block diagram showing the configuration of the obstacle detecting device; FIGS. 3 to 5 are flowcharts showing the operation of the device; and FIGS. 6 and 7 show the state of the image data as the flowcharts progress. In FIG. 1, reference numeral 100 denotes the hull of a small fishing boat, and
101 denotes the obstacle detection device of the present invention. That is, the obstacle detection device 101 is attached to a predetermined portion of the hull 100, and a solid-state CCD camera 1 provided on the device captures the scene ahead in the traveling direction. The position, direction, and number of obstacle detection devices 101 attached to the hull 100 are not particularly limited, but usually a single device 101 is mounted facing the traveling direction. In this case, it is preferable to fit a wide-angle lens on the front of the solid-state CCD camera 1 to widen the light-receiving angle. FIG. 2 is a block diagram of the obstacle detection device 101. Reference numeral 2 denotes an image processing unit to which the image from the solid-state CCD camera 1 is input; it is formed of a ROM or the like with a built-in program and determines, by the procedure described later, whether an obstacle exists in the photographed space ahead of the hull 100. Reference numeral 3 denotes an alarm device which, when the image processing unit 2 detects an obstacle in the forward photographed space, receives the detection signal and issues an alarm to the outside: it sounds a buzzer or warning tone from a speaker and lights a warning lamp such as an LED or rotating light.

The image processing unit 2 is specifically configured as follows. It comprises image storage units (for example, image RAM) 20A and 20B that store two frames of image data A and B captured by the solid-state CCD camera 1 at a fixed time interval; a high-frequency removal unit 21 that removes high-frequency components from the image data read from the image storage units 20A and 20B; a binarized signal conversion unit 22 that converts each high-frequency-filtered frame into a binarized signal; a tilt correction processing unit 23 that corrects the tilt of each binarized frame so that the tilts match; a figure extraction unit 24 that extracts figures from the two tilt-corrected frames; and extracted figure storage units 25A and 25B that store the figures extracted from the two frames. It further comprises an obstacle candidate extraction unit 26 that reads each figure from the extracted figure storage units 25A and 25B, calculates its area, extracts figures of at least a certain area, assigns each extracted figure a label value, and calculates its barycentric coordinates; and obstacle list storage units 27A and 27B that store the label value, barycentric coordinates, and area of each extracted obstacle-candidate figure. Finally, it comprises a combination extraction unit 28 that extracts from the obstacle list storage units 27A and 27B combinations of figures whose barycentric coordinates lie within a predetermined distance of each other and judges such figures to be the same obstacle, and a determination unit 29 that calculates the rate of change of area of each extracted combination and, when that rate is equal to or greater than a predetermined value, judges the figure to be an obstacle requiring warning and outputs an alarm signal to the alarm device 3.

[2. Obstacle Detection Processing] Next, the operation of the obstacle detection device of this embodiment is described with reference to the flowcharts of FIGS. 3 and 4 and the explanatory diagram of FIG. 6. In this embodiment, after the ship has sailed for a fixed period, images of the forward direction are captured by the solid-state CCD camera before and after that period, and an obstacle list is created from each image. By comparing the areas of matching figures registered in the two obstacle lists, it is determined whether each obstacle is approaching or receding from the ship. First, the process of extracting obstacle-candidate figures from the two images and compiling them into obstacle lists is described with reference to the flowchart of FIG. 3. The two frames of image data FA and FB, captured by the solid-state CCD camera 1 a fixed time apart, are input to the image processing unit 2 (S100) and temporarily stored in the image storage units 20A and 20B. The high-frequency removal unit 21 removes high-frequency components from the stored image data (S101), and the binarized signal conversion unit 22 converts the data into binarized signals (S102). Noise such as wave splash and dust is thereby removed, improving the determination accuracy. Next, the tilt correction processing unit 23 performs tilt correction on the basis of a universal natural target within the camera's field of view, such as the sea horizon or the land horizon, whose size and direction can be assumed not to change within the specified time (S103). The figure extraction unit 24 then extracts the characteristic individual figures from the corrected image data (S104) and stores them in the extracted figure storage units 25A and 25B; in this extraction, each region bounded by line segments and the frame of the screen is recognized as one figure. Next, the obstacle candidate extraction unit 26 assigns a unique label to each figure stored in the extracted figure storage units 25A and 25B (S105) and calculates the area of each labeled figure; for figures of at least a certain area, the barycentric position is calculated and the figure is registered in the obstacle list as an obstacle candidate. That is, the label counter is initialized (S106), the first figure is taken from the work area, and its area is calculated (S107). It is then determined whether the calculated area falls within the specified range for a possible obstacle (S108): a figure below a certain area cannot be an obstacle to the ship's passage, so it is excluded from comparison, the label counter is incremented by one (S111), and the next figure is read from the extracted figure storage units 25A and 25B and its area calculated (S107). If the calculated area is equal to or greater than the specified amount (Y in S108), an obstacle candidate is judged to exist, its barycentric coordinates are calculated (S109), and the label value, barycentric coordinates, and area are registered in the obstacle lists of the obstacle list storage units 27A and 27B. After a figure is thus registered as an obstacle candidate, the label counter is incremented and the same processing is repeated for the next figure (S111) until the areas of all figures have been checked; when the label count matches the total number of labels (Y in S112), the obstacle list is complete. In this way the obstacle candidate extraction unit 26 creates two obstacle lists, one before and one after the period of navigation, and registers them in the obstacle list storage units 27A and 27B, respectively.

After the two obstacle lists are created, the combination extraction unit 28 compares the lists and extracts the combinations whose barycentric distance is equal to or less than a specified value. That is, the pairs of label numbers indicating the same obstacle are extracted (S200): in this example the six pairs (FIG. 6-1) and (FIG. 6-7), (FIG. 6-2) and (FIG. 6-8), (FIG. 6-3) and (FIG. 6-9), (FIG. 6-4) and (FIG. 6-10), (FIG. 6-5) and (FIG. 6-11), and (FIG. 6-6) and (FIG. 6-12). Next, the determination unit 29 obtains the rate of change of area for each pair (S201) and initializes the pair counter (S202); it then inspects the change in area between the two labels of each pair to determine whether the rate of change is equal to or greater than the specified value. If it is (Y in S203), an alarm is output to the alarm device 3 (S204). If the rate of change is below the specified value (N in S203), or after the alarm has been output, the pair counter is incremented by one and the rate of change of area is checked for the next pair, repeating until the pair count reaches the total number of pairs (S206). After the area change rate has been inspected for all pairs, the device waits a specified time until the next obstacle detection time (S207) and returns to the image processing of FIG. 3, repeating the above sequence between the newly captured image and the image already captured at that time. Thus, if the solid-state CCD camera captures images FA, FB, FC, ... at fixed intervals, obstacles can be detected continuously: first by comparing the first image FA with the second image FB to detect the presence or absence of an obstacle, then the second image FB with the third image FC, and so on.

[3. Tilt Correction Processing] The details of the tilt correction processing in the tilt correction processing unit 23 (step S103 in FIG. 3) are now described with reference to the flowchart of FIG. 5 and the image example of FIG. 7. Depending on the situation and application, a shoreline, the horizon, the centerline of a road, a road shoulder, a telephone pole, or the like can serve as the reference instead of the sea horizon. First, the image data binarized by the binarized signal conversion unit 22 is subjected to contour extraction processing (S300). A general method is employed, such as taking the exclusive OR between the original image and the image data obtained by shifting the original's addresses by one bit vertically and horizontally. Next, to adapt to the reference line to be detected, detection lines L1 and L2 are set at the left and right of the image data. Labels are assigned to the image data touching the left and right detection lines (S302), every pair (combination) connecting a label on the left to a label on the right is created (S303), the line segments L3, L4, L5, L6, L7, and L8 joining the pairs are set, and the number of pixel data lying on each segment is counted and held (S304). In the illustrated example, the image data touching the right detection line L2 is a single point, the right end of the tilted horizon, whereas the image data touching the left detection line L1 comprises six points counted from the top of the screen, starting from the left end of the tilted horizon and including the sail and hull of a yacht. Six line segments, L3 to L8, are therefore set as the segments connecting the left-right label pairs. As for the pixel data on each segment: the pixel data constituting the horizon lie on segment L3, which is drawn to coincide with the horizon, while segment L4, drawn so as to cross the sail of the yacht, carries only the pixel data of the portions it crosses. The number of pixel data lying on each segment is counted in the same way and stored in a storage area. In this case, to ease detection despite image distortion caused by camera lens distortion or detection element defects, it is effective to thicken the contour data beforehand, using a general method such as ORing the original image with the image data obtained by shifting the original's addresses by an appropriate number of bits vertically and horizontally. Next, the pixel data count of each segment held in the storage area is checked against the minimum count usable as a reference line, and the largest is extracted.
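The shift-and-XOR contour extraction and the shift-and-OR thickening described above can be sketched directly on a binary array (a simplified reading of the text; the actual shift pattern in the device may differ):

```python
import numpy as np

def shift(img, dy, dx):
    """Shift a binary image by (dy, dx), filling vacated cells with zeros."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):min(h + dy, h), max(dx, 0):min(w + dx, w)] = \
        img[max(-dy, 0):min(h - dy, h), max(-dx, 0):min(w - dx, w)]
    return out

def contours(img):
    """XOR the image with its one-pixel vertical and horizontal shifts:
    interior pixels cancel, leaving only the outline."""
    return (img ^ shift(img, 1, 0)) | (img ^ shift(img, 0, 1))

def thicken(img, k=1):
    """OR the image with shifted copies to fatten thin contour data."""
    out = img.copy()
    for dy in (-k, 0, k):
        for dx in (-k, 0, k):
            out |= shift(img, dy, dx)
    return out

# A solid 3x3 square yields an offset outline; a lone contour pixel
# thickens into a 3x3 blob.
square = np.zeros((8, 8), dtype=np.uint8)
square[2:5, 2:5] = 1
c = contours(square)
dot = np.zeros((8, 8), dtype=np.uint8)
dot[4, 4] = 1
fat = thicken(dot)
```

On the solid square, the XOR terms cancel in the interior and leave only the offset outline.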
That is, to check the pixel data counts of all line segments in turn, the pair counter is first initialized (S305) so that counting starts from the first segment, and the reference candidate list, which holds only segments with a sufficient number of pixel data, is also initialized (S306). The pixel data count of the first segment is then taken from the storage area and checked against the specified value. If the count is below the specified value, the pair counter is incremented by one (S310), the count for the next segment is taken out, and the check is repeated. If the count is equal to or greater than the specified value (Y in S307), it is compared with the count of the segment already registered in the reference candidate list; if it is larger (Y in S308), the segment is registered in the reference candidate list (S309).
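Steps S304 to S309 amount to scoring each candidate segment by how many contour pixels lie on it and keeping the best scorer. A sketch under that reading (segment sampling by linear interpolation; function names and thresholds are illustrative):

```python
import numpy as np

def pixels_on_segment(binary, p0, p1, samples=200):
    """Count distinct set pixels of `binary` lying along the segment
    from p0 = (y0, x0) to p1 = (y1, x1)."""
    ys = np.linspace(p0[0], p1[0], samples).round().astype(int)
    xs = np.linspace(p0[1], p1[1], samples).round().astype(int)
    hits = {(y, x) for y, x in zip(ys, xs) if binary[y, x]}
    return len(hits)

def best_reference_line(binary, left_labels, right_labels, min_count=5):
    """Try every (left, right) label pair and return the segment carrying
    the most set pixels, or None if no pair reaches min_count."""
    best, best_count = None, min_count - 1
    for p0 in left_labels:
        for p1 in right_labels:
            c = pixels_on_segment(binary, p0, p1)
            if c > best_count:
                best, best_count = (p0, p1), c
    return best

# Tilted "horizon": a rasterized line from (6, 0) to (2, 9) on a 10x10 image.
img = np.zeros((10, 10), dtype=np.uint8)
for x in range(10):
    img[6 - round(4 * x / 9), x] = 1
ref = best_reference_line(img, [(6, 0), (0, 0)], [(2, 9)])
```

The segment along the tilted horizon collects all ten of its pixels, while the segment from the spurious left label passes through empty water and scores near zero.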
If the count is smaller than that in the reference candidate list (N in S308), the pair counter is incremented by one (S310), and the pixel data count of the next segment is taken out and compared with that of the registered segment. When the check has been completed for all segments and the pair count matches the total number of pairs (Y in S311), the segment held in the reference candidate list, the one with the largest pixel data count, is set as the horizontal reference line. In the illustrated example, the horizon along segment L3 is set as the reference line. The tilt is corrected by rotating the entire image data so that this reference line L3 becomes horizontal, that is, has a tilt of zero (S312). Further, to correct the offset in the vertical direction, the entire image data is shifted up or down until the reference line L3 coincides with a predetermined reference position of the apparatus (line L9 in FIG. 7) (S313). The above description concerned an example at sea; on land, for a vehicle for example, the centerline of a road, a road shoulder, a telephone pole, or the like can be used as the reference line, in which case no vertical shift is required. In the air, as for an aircraft, the processing is the same as for the illustrated ship. [4. Other Embodiments] (1) In the tilt correction processing above, a horizontal reference line is found by placing detection lines at the left and right of the contour-extracted image. When a vertical reference line with little tilt, such as a telephone pole or a wall, is preferable, similar processing can be performed by placing the detection lines at the top and bottom of the image. (2) In the tilt correction processing, besides rotating the image so that the reference line is horizontal or vertical, when a straight feature such as the edge of a road is used as the reference line, the reference lines of the two images can be compared and either image rotated so that the two reference lines coincide. Since the present invention is configured as described above, the scene around the moving body is detected (photographed) by detecting means such as a solid-state camera; the image processing unit receives the image from the camera, extracts a natural target such as the horizon, a shoreline, a centerline, or a telephone pole, and corrects the tilt and blur of the obtained image on the basis of that natural target.
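Given the endpoints of the detected reference line, the rotation angle of S312 and the vertical shift of S313 follow from elementary geometry. A coordinate-only sketch (the actual device rotates and shifts the whole bitmap; names are illustrative):

```python
import math

def tilt_correction(p_left, p_right, reference_row):
    """Return (angle_deg, dy): the rotation that makes the line through
    p_left = (y, x) and p_right horizontal (cf. S312), and the vertical
    shift moving it onto reference_row, i.e. line L9 (cf. S313).
    Image convention: y grows downward."""
    (y0, x0), (y1, x1) = p_left, p_right
    tilt = math.degrees(math.atan2(y1 - y0, x1 - x0))  # tilt of the line
    mid_y = (y0 + y1) / 2          # height of the line's midpoint
    return -tilt, reference_row - mid_y  # rotate back, then shift down/up

# Horizon detected from (60, 0) to (20, 90); device reference row is 50.
angle, dy = tilt_correction((60, 0), (20, 90), 50)
```

Applying the returned rotation about the segment midpoint levels the horizon; the shift then drops it onto the device's reference row L9.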
The obstacle distribution images of one unit time earlier and of the current time are then compared, and if an area has grown beyond the standard it is determined that an obstacle is present; a warning is issued by the alarm device or the like, making the operator aware of it. The presence or absence of an obstacle around the moving body can thus be detected by extremely simple and low-cost means. Moreover, in the above embodiment, by using a solid-state CCD camera and a storage area such as RAM to hold the image content, the image processing unit 2 that determines the presence or absence of an obstacle is given a function for removing small objects and other noise. This makes it possible to remove sudden changes due to waves at sea and air currents in the air and, on the ground, noise caused by changes in the attitude of the moving body from road-surface vibration, thereby improving determination accuracy.
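Taken together, the comparison performed by the combination extraction unit 28 and the determination unit 29 reduces to a few lines over two lists of (label, barycentre, area) records. A sketch with illustrative thresholds (the patent fixes no specific values):

```python
import math

def detect(list_a, list_b, max_dist=10.0, min_growth=1.5):
    """Pair figures across the two obstacle lists by barycentre distance
    (unit 28), then flag pairs whose area grew by at least min_growth
    (unit 29). Each entry: (label, (cy, cx), area)."""
    alarms = []
    for la, ca, aa in list_a:
        for lb, cb, ab in list_b:
            if math.dist(ca, cb) <= max_dist:      # judged the same obstacle
                if ab / aa >= min_growth:          # area grew: approaching
                    alarms.append((la, lb, ab / aa))
    return alarms

before = [(1, (40, 30), 100), (2, (10, 80), 50)]
after  = [(7, (42, 31), 250), (8, (11, 79), 52)]
alarms = detect(before, after)
```

Only the pair whose area grew by the required factor raises an alarm; the nearly unchanged pair is matched but stays silent.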

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a plan view showing an example of mounting an obstacle detection device according to the present invention on a ship. FIG. 2 is a block diagram showing an embodiment of the obstacle detection device according to the present invention. FIG. 3 is a flowchart showing the obstacle list creation process in the embodiment of FIG. 2. FIG. 4 is a flowchart showing the obstacle detection process in the embodiment of FIG. 2. FIG. 5 is a flowchart showing the tilt correction process in the embodiment of FIG. 2. FIG. 6 is a view showing an example screen of the obstacle detection process in the embodiment of FIG. 2. FIG. 7 is a view showing an example screen of the tilt correction process in the embodiment of FIG. 2.
[Description of Signs] 1: solid-state camera; 2: image processing unit; 3: alarm device; 20A, 20B: image storage units; 21: high-frequency removal unit; 22: binarized signal conversion unit; 23: tilt correction processing unit; 24: figure extraction unit; 25A, 25B: extracted figure storage units; 26: obstacle candidate extraction unit; 27A, 27B: obstacle list storage units; 28: combination extraction unit; 29: determination unit

(51) Int. Cl.7: B63B 49/00; B64D 43/00; G06T 1/00 330; G06T 7/20; G08B 21/00; G08G 3/02
F terms (reference): 5B057 AA16 BA02 BA29 CA02 CA08 CA12 CA16 CB02 CB08 CB12 CB16 CC01 CD03 CE02 CE12 DA02 DA15 DB02 DB05 DB09 DC04 DC16 DC32 EA43 FA06 FA59 FA70 HA03

Claims (1)

1. An obstacle detection method comprising: performing binarized-signal conversion processing on two images captured at a fixed time interval; creating corrected images by performing tilt-correction processing on each of the two binarized images; extracting figures of at least a predetermined area from each corrected image; forming, from the figures extracted from the corrected images, pairs of figures considered to represent the same object; calculating the rate of change of area for each pair of figures; and issuing an alarm when the rate of change of area is equal to or greater than a predetermined value.

2. The obstacle detection method according to claim 1, wherein the tilt-correction processing comprises: setting detection lines at both ends of a binarized image; assigning labels to image data in contact with the two detection lines; creating all combinations connecting a label on one detection line with a label on the other; setting a line segment across each combination; selecting the line segment on which the largest number of pixel data lie and setting it as a reference line; and rotating the entire image data in accordance with the reference line, thereby correcting the tilt.

3. The obstacle detection method according to claim 2, wherein the tilt-correction processing further shifts the two images so that their reference lines coincide after the tilts of the two images have been matched.

4. The obstacle detection method according to claim 1, wherein the binarized-signal conversion processing includes removal of high-frequency components from the captured image and contour-extraction processing.

5. An obstacle detection apparatus comprising: a binarized-signal conversion processing unit that performs binarized-signal conversion processing on two images captured at a fixed time interval; a tilt-correction processing unit that creates corrected images by performing tilt-correction processing on the two images processed by the binarized-signal conversion processing unit; an obstacle extraction unit that extracts figures of at least a predetermined area from each corrected image; an obstacle determination unit that forms, from the figures extracted from the corrected images, pairs of figures considered to represent the same object, calculates the rate of change of area for each pair, and determines whether the rate of change of area is equal to or greater than a predetermined value; and an alarm device that issues an alarm in response to a command from the obstacle determination unit.

6. The obstacle detection apparatus according to claim 5, wherein the binarized-signal conversion processing unit includes a high-frequency-component removal unit for the captured image data and a binarized-signal conversion unit.
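The claimed pipeline can be illustrated with a minimal sketch. All names and parameter values below (`label_figures`, `detect_obstacles`, `min_area`, `growth_threshold`) are illustrative assumptions, not taken from the patent, and the tilt-correction and same-object-pairing steps are simplified: figures are paired by nearest centroid, standing in for the claims' "pairs of figures considered to represent the same object".

```python
# Illustrative sketch (not the patented implementation): binarize-and-label two
# frames taken a fixed interval apart, extract figures of at least a minimum
# area with their centroids, pair figures by nearest centroid, and flag any
# pair whose area grew by at least a threshold factor.

def label_figures(image, min_area=3):
    """4-neighbour flood-fill labeling of a binary image (list of 0/1 rows).
    Returns (area, centroid) for each figure with at least min_area pixels."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    figures = []
    for y in range(h):
        for x in range(w):
            if image[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and image[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    gy = sum(p[0] for p in pixels) / len(pixels)
                    gx = sum(p[1] for p in pixels) / len(pixels)
                    figures.append((len(pixels), (gy, gx)))
    return figures

def detect_obstacles(frame_a, frame_b, growth_threshold=1.5):
    """Pair each figure in frame_a with the nearest-centroid figure in frame_b
    and return the area-change rates of pairs at or above growth_threshold."""
    figs_a = label_figures(frame_a)
    figs_b = label_figures(frame_b)
    alarms = []
    for area_a, (ya, xa) in figs_a:
        if not figs_b:
            break
        area_b, _ = min(
            figs_b, key=lambda f: (f[1][0] - ya) ** 2 + (f[1][1] - xa) ** 2)
        rate = area_b / area_a
        if rate >= growth_threshold:
            alarms.append(rate)   # approaching object: alarm condition met
    return alarms

# Example: a small blob (area 4) in the first frame grows to area 16 in the
# second frame, as an approaching object would.
frame1 = [[0] * 8 for _ in range(8)]
frame2 = [[0] * 8 for _ in range(8)]
for y in range(3, 5):
    for x in range(3, 5):
        frame1[y][x] = 1   # 2x2 figure
for y in range(2, 6):
    for x in range(2, 6):
        frame2[y][x] = 1   # 4x4 figure
print(detect_obstacles(frame1, frame2))  # → [4.0]
```

In this sketch the alarm fires because the paired figure's area quadrupled between the two frames; an unchanged scene (identical frames) yields an area-change rate of 1.0 and no alarm.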
JP2001296667A 2001-09-27 2001-09-27 Obstacle detecting method and apparatus Pending JP2003111075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2001296667A JP2003111075A (en) 2001-09-27 2001-09-27 Obstacle detecting method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2001296667A JP2003111075A (en) 2001-09-27 2001-09-27 Obstacle detecting method and apparatus

Publications (1)

Publication Number Publication Date
JP2003111075A true JP2003111075A (en) 2003-04-11

Family

ID=19117860

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2001296667A Pending JP2003111075A (en) 2001-09-27 2001-09-27 Obstacle detecting method and apparatus

Country Status (1)

Country Link
JP (1) JP2003111075A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006217447A (en) * 2005-02-07 2006-08-17 Yazaki Corp Vehicle display apparatus
JP2006268677A (en) * 2005-03-25 2006-10-05 Secom Co Ltd Sensing device
JP4656977B2 (en) * 2005-03-25 2011-03-23 セコム株式会社 Sensing device
JP2007233469A (en) * 2006-02-27 2007-09-13 Toshiba Corp Object-detecting device and method therefor
JP4575315B2 (en) * 2006-02-27 2010-11-04 株式会社東芝 Object detection apparatus and method
JP2010182171A (en) * 2009-02-06 2010-08-19 Nissan Motor Co Ltd Vehicular approaching object detection device
JP2010187413A (en) * 2010-05-21 2010-08-26 Yazaki Corp Vehicle display apparatus
JPWO2014038026A1 (en) * 2012-09-05 2016-08-08 パイオニア株式会社 Image processing apparatus and image processing method

Similar Documents

Publication Publication Date Title
DE10292327B4 (en) Vehicle environment image processing apparatus and recording medium
JP4600357B2 (en) Positioning device
JP4820712B2 (en) Road marking recognition system
DE112005000929B4 (en) Automatic imaging method and device
US7970529B2 (en) Vehicle and lane recognizing device
EP1179803B1 (en) Method and apparatus for object recognition
JP4425495B2 (en) Outside monitoring device
JP4321821B2 (en) Image recognition apparatus and image recognition method
JP3169483B2 (en) Road environment recognition device
US7218207B2 (en) Method for detecting position of lane marker, apparatus for detection position of lane marker and alarm apparatus for lane deviation
JP2006053756A (en) Object detector
JP3945467B2 (en) Vehicle retraction support apparatus and method
JP4557288B2 (en) Image recognition device, image recognition method, position specifying device using the same, vehicle control device, and navigation device
JP3780848B2 (en) Vehicle traveling path recognition device
DE60108351T2 (en) Device and method for detecting lane markings
Alon et al. Off-road path following using region classification and geometric projection constraints
JP4940168B2 (en) Parking space recognition device
US20040096084A1 (en) Image processing system using rotatable surveillance camera
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
US7436982B2 (en) Vehicle surroundings monitoring apparatus
EP2282295A1 (en) Object recognizing device and object recognizing method
US7209031B2 (en) Obstacle detecting apparatus and method
US8005266B2 (en) Vehicle surroundings monitoring apparatus
DE602004012962T2 (en) Real-time obstacle detection with a calibrated camera and known ego motion
US7376247B2 (en) Target detection system using radar and image processing