WO2007138898A1 - Recognition system, recognition method, and recognition program
- Publication number
- WO2007138898A1 (PCT/JP2007/060315; JP2007060315W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voting
- point
- space
- parameter space
- distance
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/48—Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
Definitions
- Recognition system, recognition method, and recognition program
- the present invention relates to a recognition system, a recognition method, and a recognition program, and more particularly to a recognition system that is robust against noise and can suppress over-detection, in which a partial area other than the specific pattern to be recognized is erroneously detected as the specific pattern.
- this conventional recognition system consists of an imaging unit 31, an edge detection unit 32, a Hough transform processing unit 33, a Hough space creation unit 34, and a linear component extraction unit 35.
- the conventional recognition system having such a configuration operates as follows.
- an image to be recognized is input by the imaging unit 31. Further, the edge detection unit 32 detects the edges of the image by differentiating the image sent from the imaging unit 31. Further, the Hough transform processing unit 33 applies the Hough transform to the detected edge point sequences of the image. Further, the Hough space creation unit 34 creates a histogram (hereinafter referred to as a Hough space) according to the function of the Hough transform.
- the linear component extraction unit 35 detects the peak components of the frequencies in the Hough space, determines that the edge point sequences in the image corresponding to the Hough curves passing through each detected peak point form straight lines, and extracts the linear components from the image.
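- the voting step underlying this scheme can be sketched as follows; a minimal (θ, ρ) accumulator in Python, where the bin counts and the ρ range are illustrative assumptions rather than values from this description:

```python
import math

def hough_vote(edge_points, n_theta=180, n_rho=200, rho_max=100.0):
    """Vote each edge point onto its sinusoidal locus rho = x*cos(t) + y*sin(t)
    in the discretized (theta, rho) Hough space (a histogram of votes)."""
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            # map rho in [-rho_max, rho_max] to a bin index
            r = int(round((rho + rho_max) * (n_rho - 1) / (2 * rho_max)))
            if 0 <= r < n_rho:
                acc[t][r] += 1
    return acc

# three collinear points (on the line y = x) pile up in a single cell
acc = hough_vote([(1, 1), (2, 2), (3, 3)])
peak = max(max(row) for row in acc)
```

The peak of the accumulator then corresponds to the line shared by the collinear points.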
- the linear component extraction unit 35 detects the maximum peak point from the Hough space created by the Hough space creation unit 34 and extracts the linear component corresponding to the maximum peak point. Then, the range in which the maximum peak point affects the frequency distribution of the histogram, and its contribution amount, are obtained.
- the linear component extraction unit 35 corrects the Hough space by reducing, within that range and according to the calculated contribution amount, the influence of the maximum peak point on the frequencies of the other peak points.
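- this conventional sequential scheme (take the largest peak, correct the Hough space around it, then search again) can be sketched as follows; the fixed-radius zeroing used here is a crude stand-in for the contribution-based correction described above:

```python
def extract_lines_sequentially(acc, n_lines, suppress_radius=2):
    """Greedy peak extraction: find the global maximum of the Hough space,
    suppress its neighborhood, then look for the next-largest peak."""
    acc = [row[:] for row in acc]  # work on a copy
    peaks = []
    for _ in range(n_lines):
        v, t, r = max((v, t, r)
                      for t, row in enumerate(acc)
                      for r, v in enumerate(row))
        peaks.append((t, r, v))
        # remove the peak's influence (crudely, by zeroing a fixed window)
        for tt in range(max(0, t - suppress_radius), min(len(acc), t + suppress_radius + 1)):
            for rr in range(max(0, r - suppress_radius), min(len(acc[0]), r + suppress_radius + 1)):
                acc[tt][rr] = 0
    return peaks

grid = [[0] * 10 for _ in range(10)]
grid[2][2], grid[2][3], grid[8][8] = 9, 7, 5  # one true peak with a false satellite
found = extract_lines_sequentially(grid, 2)
```

The false satellite next to the strongest peak is suppressed before the second search, so the second extracted peak is the genuinely distinct one.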
- the linear component extraction unit 35 then detects the next largest peak point using the partially corrected Hough space and extracts the straight-line component corresponding to that peak point; these processes are performed sequentially. Disclosure of the invention:
- the first problem is that, in general, when a specific pattern such as a straight line is detected by the Hough transform, false specific patterns appear around the true specific pattern.
- the second problem is that a large amount of processing is required for dealing with the phenomenon that a false specific pattern appears around a true specific pattern as in the prior art.
- An object of the present invention is to provide a recognition system that can detect a specific pattern robustly and easily.
- the recognition system includes feature point detection means (120), Hough transform means (130), and specific pattern output means (140).
- in the Hough transform means (130), the Hough space is designed so that the magnitude relationship of the distances between points in the Hough space is the same as the magnitude relationship of the distances between specific patterns, which represent the differences between the predetermined specific patterns, and a specific pattern is detected using this Hough space.
- the first effect is to suppress the phenomenon in which a false specific pattern appears around the true specific pattern, which is generally a problem when the Hough transform is used.
- this is because the Hough space is designed so that the magnitude relationship of the distances between points in the Hough space is equivalent to the magnitude relationship of the distances between specific patterns, which represent the differences between the predetermined patterns; specific patterns that are more similar in this sense are therefore expressed as closer points in the Hough space.
- the second effect is that the suppression of the appearance of the false specific pattern can be easily realized.
- FIG. 1 is a block diagram showing the configuration of the prior art.
- FIG. 2 is a block diagram showing the configuration of the recognition system according to the first embodiment of the present invention.
- FIG. 3 is a flowchart showing the operation of the recognition system according to the first embodiment.
- FIG. 4 is a diagram showing a road image as an example of target data.
- FIG. 5 is a block diagram showing a specific configuration of the recognition system according to the first embodiment.
- FIG. 6 is a diagram for explaining the Sobel filter.
- FIG. 7 is a diagram for explaining the distance between straight lines.
- Fig. 8 is a diagram for explaining the voting method for the Hough space in consideration of the quantization error.
- FIG. 9 is a diagram illustrating a specific example of the operation of the recognition system according to the first embodiment.
- FIG. 10 is a block diagram showing the configuration of the recognition system according to the second embodiment of the present invention.
- FIG. 11 is a flowchart showing the operation of the recognition system according to the second embodiment.
- FIG. 12 is a block diagram showing a specific configuration of the recognition system according to the second embodiment.
- FIG. 13 is a diagram showing a specific example of the operation of the recognition system according to the second embodiment. Best Mode for Carrying Out the Invention:
- the first embodiment of the present invention is composed of a computer (central processing unit; processor; data processing unit) 100 operating under program control and a target data input device 110.
- the computer (central processing unit; processor; data processing unit) 100 includes feature point detection means 120, Hough transform means 130, and specific pattern output means 140.
- the Hough transform means 130 includes Hough space voting means 131, Hough space smoothing means 132, and Hough space peak detection means 133.
- the target data input device 110 inputs data that is a detection target of a desired specific pattern.
- the feature point detecting means 120 detects a point estimated as a point on the specific pattern from the target data as a feature point.
- the Hough space voting means 131 votes, within a space having the parameters representing the specific pattern as its axes (hereinafter referred to as the Hough space), a weight corresponding to each feature point for every point on the trajectory corresponding to that feature point.
- the Hough space smoothing means 132 determines, for each point in the Hough space, a smoothed voting value at that point using the voting values of the point and its neighboring points. However, in some cases, this Hough space smoothing means 132 may be omitted.
- the Hough space peak detection means 133 detects one or more points giving peaks of the voting values in the Hough space.
- the specific pattern output means 140 outputs, for each point giving a peak of the voting values detected by the Hough space peak detection means 133, the specific pattern corresponding to that point's parameters.
- a distance between specific patterns, representing the difference between specific patterns within a predetermined region of interest in the target data, is defined, and a parameter expression of the specific pattern is adopted so that the magnitude relationship of the distances between specific patterns and the magnitude relationship of the distances between the corresponding points in the Hough space are the same.
- a mapping h into the parameter space is designed so that the equivalence relation expressed by the following Equation (1) is established, the mapped values are used as the parameters expressing the specific pattern, and the space having these parameters as axes is taken to be the Hough space.
- when the Hough space voting means 131 votes for each point on the locus in the Hough space corresponding to a feature point, it is also possible to vote for the neighborhood range of each point on the trajectory, determined by the noise component considered to be included in the position of the feature point.
- data to be subjected to the specific pattern detection process is input by the target data input device 110 (step A1 in FIG. 3).
- the feature point detection means 120 detects, as a feature point, a point estimated from the target data to be a point on the specific pattern (step A2).
- the Hough space voting means 131 votes a weight corresponding to each feature point for the respective points on the trajectory corresponding to that feature point in the Hough space (step A3).
- the Hough space smoothing means 132 determines, for each point in the Hough space, a smoothed voting value at that point using the voting values of the point and its neighboring points (step A4).
- step A4 may be omitted in some cases.
- the Hough space peak detection means 133 detects one or more points giving peaks of the voting values in the Hough space (step A5).
- the specific pattern output means 140 outputs, for each point giving a peak of the voting values detected by the Hough space peak detection means 133, the specific pattern for the corresponding parameters (step A6).
- voting may be performed for the neighborhood range of each point on the trajectory determined by a noise component considered to be included in the position of the feature point.
- the distance between specific patterns, representing the difference between specific patterns within the predetermined region of interest in the target data, is defined, and the Hough space is configured by the above-described parameter expression of the specific pattern such that the magnitude relationship of the distances between specific patterns and the magnitude relationship of the distances between the corresponding points in the Hough space are the same, or approximately the same. For this reason, by appropriately defining the distance between specific patterns, specific patterns that are close in the region of interest of the target data correspond to points that are close in the Hough space; therefore, by performing smoothing over an appropriate range at each point in the Hough space, the appearance of false specific patterns near a true specific pattern, which is the conventional problem, can easily be suppressed.
- the Hough space voting means 1 3 1 votes for the neighborhood range of each point on the locus determined by the noise component considered to be included in the position of the feature point. Therefore, the specific pattern robust to the noise component included in the feature point can be detected.
- an image input from an input device such as a camera is used as the target data, and a plurality of points (hereinafter referred to as edge points) where the pixel value changes sharply as the feature points.
- the straight line formed by these edge points is detected as the specific pattern.
- a specific example will be described by taking up a road image 310 obtained by photographing a road from a camera mounted on a vehicle or the like, as shown in FIG. 4. By detecting straight lines from the road image, a white line 312 or the like indicating a driving-lane boundary drawn on the road surface 311 is recognized.
- the recognition system includes an image input device 410 such as a camera as the target data input device 110, and edge point detection means 420 as the feature point detection means 120.
- the edge point detection means 420 includes 3×3 Sobel filter means 421, edge strength calculation means 422, edge strength threshold processing means 423, and edge point output means 424.
- the edge point detection means 420 is not limited to the above-described configuration, and may be any means that detects points with a sharp change in pixel value.
- the Hough transform means 430 includes Hough space voting means 431, Hough space smoothing means 432, and Hough space peak detection means 433.
- the road image 310 is input as the target data by the image input device 410.
- the edge point detection means 420 detects a plurality of the edge points.
- the Hough transform means 430 processes, in the same manner as the Hough transform means 130, a Hough space defined as a two-dimensional plane whose horizontal and vertical axes are the straight-line parameters described later.
- the straight line detection means 440 outputs the straight line represented by the straight-line parameter expression corresponding to each point in the Hough space that gives a peak of the voting values detected by the Hough transform means 430.
- the 3×3 Sobel filter means 421 takes, for each point of the road image 310, the sum of the products of the pixel values in its 3×3 neighborhood and the coefficients of the x-direction gradient kernel and the y-direction gradient kernel shown in FIG. 6. These sums of products are called the Sobel x component and the Sobel y component, respectively.
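- as an illustration of this sum of products, a sketch using the standard 3×3 Sobel kernels (assumed here to match the kernels of FIG. 6):

```python
# standard 3x3 Sobel gradient kernels (assumed to match FIG. 6)
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

def sobel_components(img, x, y):
    """Sobel x and y components at (x, y): the sums of products of the
    3x3 neighborhood pixel values with the kernel coefficients."""
    gx = gy = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            p = img[y + dy][x + dx]
            gx += p * SOBEL_X[dy + 1][dx + 1]
            gy += p * SOBEL_Y[dy + 1][dx + 1]
    return gx, gy

# a vertical brightness step produces a pure x response
step = [[0, 0, 10],
        [0, 0, 10],
        [0, 0, 10]]
gx, gy = sobel_components(step, 1, 1)
```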
- the edge strength calculation means 422 calculates, for each point of the road image 310, the square root of the sum of squares of the Sobel x component and the Sobel y component, or the sum of their absolute values, as the edge strength of that point. Further, the edge strength threshold processing means 423 determines whether or not the edge strength of each point of the road image 310 is equal to or greater than a predetermined threshold, for example 100. Further, the edge point output means 424 outputs, as an edge point, each point whose edge strength is judged by the edge strength threshold processing means 423 to be equal to or greater than the threshold.
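- these two steps, edge strength and thresholding, can be sketched as follows; the threshold of 100 follows the example above:

```python
import math

def edge_strength(gx, gy, use_abs=False):
    """Edge strength: square root of the sum of squares of the Sobel
    components, or optionally the sum of their absolute values."""
    if use_abs:
        return abs(gx) + abs(gy)
    return math.sqrt(gx * gx + gy * gy)

def is_edge_point(gx, gy, threshold=100):
    """Keep the point as an edge point if its strength is at least the
    predetermined threshold (100 in the example above)."""
    return edge_strength(gx, gy) >= threshold

strong = is_edge_point(60, 80)   # strength 100, so kept
weak = is_edge_point(30, 40)     # strength 50, so rejected
```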
- the Hough space voting means 431 votes, within a space having the parameters representing the straight line as its axes (hereinafter referred to as the Hough space), a weight corresponding to each edge point detected by the edge point detection means 420 for every point on the corresponding trajectory.
- the weight according to the edge point is not limited to this.
- for example, a constant; the angle formed between the gradient of the pixel value at the edge point, calculated from the Sobel x component and the Sobel y component described above, and the slope of the straight line corresponding to the point on the trajectory; a value calculated from the pixel values of the edge point or its vicinity; or a value calculated from these values and the edge strength may also be used.
- the Hough space smoothing means 432 determines, for each point in the Hough space, a smoothed voting value at that point using the voting values of the point and its neighboring points. Here, for example, the smoothed voting value is determined as the average value of the points in the 3×3 neighborhood of the point. However, the method for determining the neighborhood range and the smoothed voting value is not limited to this. In some cases, this Hough space smoothing means 432 may be omitted.
- the Hough space peak detection means 433 detects, as a peak, a point whose voting value is the maximum within a neighborhood range, such as its 3×3 neighborhood, and is equal to or greater than a predetermined threshold in the Hough space.
- the criterion for detecting a peak is not limited to this; for example, a criterion suited to the problem setting may be adopted, such as taking only the point having the maximum voting value in the entire Hough space as the peak.
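- the 3×3 smoothing and local-maximum peak detection can be sketched as follows; clipping the neighborhood at the borders of the Hough space is an implementation choice, not something specified above:

```python
def smooth_3x3(acc):
    """Replace each vote with the mean over its 3x3 neighborhood
    (clipped at the borders of the Hough space)."""
    h, w = len(acc), len(acc[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nb = [acc[ii][jj]
                  for ii in range(max(0, i - 1), min(h, i + 2))
                  for jj in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(nb) / len(nb)
    return out

def detect_peaks(acc, threshold):
    """Points that reach the threshold and are maximal in their 3x3 neighborhood."""
    h, w = len(acc), len(acc[0])
    peaks = []
    for i in range(h):
        for j in range(w):
            nb = [acc[ii][jj]
                  for ii in range(max(0, i - 1), min(h, i + 2))
                  for jj in range(max(0, j - 1), min(w, j + 2))]
            if acc[i][j] >= threshold and acc[i][j] == max(nb):
                peaks.append((i, j))
    return peaks

raw = [[0, 0, 0],
       [0, 5, 0],
       [0, 0, 4]]
peaks = detect_peaks(raw, threshold=3)  # the 4 beside the 5 is not a local maximum
```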
- L represents the y coordinate of a horizontal line in the road image 310 as shown in FIG.
- the attention area is set as a lower area of the road image 310, for example, an area below the central horizontal line as shown in FIG.
- This attention area A311 is an area expected to correspond to the road surface in the road image 310.
- Equation (3) is transformed into Equation (4).
- the Hough space voting means 431 votes a weight corresponding to each edge point for the respective points on the trajectory corresponding to the edge point detected by the edge point detection means 420 in the Hough space.
- since the Hough space voting means 431 operates on a computer, it is necessary to discretize the Hough space. If the straight-line parameter expression is the above Equation (6), the parameters in Equation (6) are as shown in FIG. 8.
- the accuracy of this parameter is considered to be equal to the positional accuracy of each point of the road image in the x direction.
- the discretization interval is set to 1 in this embodiment.
- the discretization interval of the other parameter may likewise be determined from the error range of the position of each point of the road image, taking this factor into account.
- the same interval as the parameter may be used in order to maintain the relationship of Equation (1).
- the discretization interval of the Hough space can be automatically determined from the error range of the target data.
- the description of the Hough space voting means 431 will be supplemented.
- the Hough space voting means 431 votes a weight corresponding to each edge point for the respective points on the trajectory corresponding to the edge point detected by the edge point detection means 420 in the Hough space.
- this voting can be performed not on each point on the trajectory but on a neighborhood range of each point on the trajectory determined by a noise component considered to be included in the position of the edge point.
- here, at the time of voting, only the noise component resulting from coordinate discretization is considered as the noise component included in the edge point position.
- in this case, the true position of an edge point detected at a certain pixel position of the road image can be considered to lie within the range of Equation (7).
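- this range voting can be sketched as follows, assuming the true position lies within ±0.5 pixel of the detected coordinates, so that the worst-case shift of ρ at angle θ is 0.5(|cos θ| + |sin θ|); the sizes are illustrative assumptions:

```python
import math

def vote_with_quantization_range(acc, x, y, rho_max, half_pixel=0.5):
    """Vote a band of rho bins for each theta, covering every rho the edge
    point could produce if its true position lies within +/- half_pixel of
    the detected pixel (coordinate-discretization noise only)."""
    n_theta, n_rho = len(acc), len(acc[0])
    scale = (n_rho - 1) / (2.0 * rho_max)
    for t in range(n_theta):
        theta = math.pi * t / n_theta
        c, s = math.cos(theta), math.sin(theta)
        rho = x * c + y * s
        spread = half_pixel * (abs(c) + abs(s))  # worst-case rho shift
        lo = int(math.floor((rho - spread + rho_max) * scale))
        hi = int(math.ceil((rho + spread + rho_max) * scale))
        for r in range(max(0, lo), min(n_rho, hi + 1)):
            acc[t][r] += 1

acc = [[0] * 50 for _ in range(18)]
vote_with_quantization_range(acc, x=3, y=4, rho_max=10.0)
```

Each θ row receives a short band of votes rather than a single cell, which is what makes the detection robust to the discretization noise.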
- the road image is input by the image input device 410 (step B1 in FIG. 9).
- the Sobel filter means 421 calculates the Sobel x component and the Sobel y component for each point of the road image (step B2).
- the above-described edge strength calculation means 422 calculates the edge strength for each point of the road image (step B3).
- the edge strength threshold processing means 423 performs threshold processing on the edge strength for each point of the road image (step B4).
- the edge point output means 424 outputs each point whose edge strength is equal to or greater than the threshold as an edge point (step B5).
- the Hough space voting means 431 votes on the locus in the Hough space for each edge point (step B6). Further, the Hough space smoothing means 432 smooths the voting value of each point in the Hough space (step B7). Further, the Hough space peak detection means 433 detects the peak points in the Hough space (step B8). Finally, the straight line output means 440 outputs the straight lines corresponding to the respective Hough space peak points (step B9).
- the target data is an image.
- the target data is not limited to this.
- a distance image in which the pixel value indicates the distance to the corresponding real object, a three-dimensional image in which a pixel value is given at each position in a three-dimensional space, images obtained at different times, or a time-series image obtained by arranging three-dimensional images in the time-series direction may be used; it is only necessary that the data associate a coordinate position with a data value at each position.
- in the above, the distance between straight lines is defined as the sum of squares of the differences of the x coordinates on each horizontal line of the attention area A311, but this is not limited to horizontal lines; it may be the sum of squares of the distances between the positions on each line of any set of mutually parallel lines, such as vertical lines.
- the distance between straight lines is defined as the sum of squares of the distance between positions on each line of a set of lines parallel to each other, but this is not limited to the sum of squares.
- it may be any quantity defined as a distance between the two vectors whose components are the positions of the two straight lines on each line of the set of mutually parallel lines, such as the sum of absolute values or the maximum absolute value.
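- for instance, writing each line as x = a·y + b (an illustrative parameterization, not the one from Equation (8)), the sum-of-squares distance over a set of horizontal scan lines can be sketched as:

```python
def line_distance_sq(line_a, line_b, y_lines):
    """Distance between two straight lines x = a*y + b, defined as the sum
    of squared differences of their x coordinates on each horizontal line y."""
    (a1, b1), (a2, b2) = line_a, line_b
    return sum(((a1 * y + b1) - (a2 * y + b2)) ** 2 for y in y_lines)

# two vertical lines 3 pixels apart, sampled on four scan lines
d = line_distance_sq((0.0, 0.0), (0.0, 3.0), y_lines=[0, 1, 2, 3])
```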
- Equation (8) may be used as the straight-line parameter expression.
- the straight-line parameter expression according to Equation (9) is obtained by modifying the straight-line parameter expression according to Equation (8) so as to remove the influence of the noise component due to discretization at the time of the Hough transform. Therefore, in the Hough transform using the Hough space based on the straight-line parameter expression of Equation (9), there is no need to vote for the set of trajectories 706 shown in FIG. 8 and Equation (7) above; it suffices to vote for the single trajectory 705. In this case, the amount of processing can be reduced compared to voting for the set of trajectories 706.
- the recognition system includes a computer (central processing unit; processor; data processing unit) 900 that operates under program control, and a target data input device 910.
- the computer (central processing unit; processor; data processing unit) 900 includes feature point detection means 920, Hough transform means 930, and specific pattern output means 940.
- the Hough transform means 930 includes Hough space voting means 931, Hough space smoothing means 932, and Hough space peak detection means 933.
- the target data input device 910 inputs data that is a detection target of a desired specific pattern.
- the feature point detection means 920 detects, as a feature point, a point estimated from the target data to be a point on the specific pattern.
- the Hough space voting means 931 votes, for each point on the trajectory corresponding to each feature point detected by the feature point detection means 920, a weight corresponding to that feature point in the Hough space expressing the specific pattern.
- the Hough space smoothing means 932 determines, for each point in the Hough space, a smoothed voting value at that point using the voting values of the point and its neighboring points.
- the Hough space peak detection means 933 detects one or more points giving peaks of the voting values in the Hough space.
- the specific pattern output means 940 outputs, for each point giving a peak of the voting values detected by the Hough space peak detection means 933, the specific pattern corresponding to that point's parameters.
- the Hough space smoothing means 932 considers, for each point of the Hough space, the set of points m whose distance from that point is at most r as the neighborhood of the point, and determines a smoothed voting value over this neighborhood or an approximation of this neighborhood.
- here, r is a predetermined constant or a number determined according to some criterion for each point in the Hough space.
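- this neighborhood rule can be sketched as follows; the distance function and the value of r are illustrative assumptions, and with the Chebyshev distance the rule reduces to the square neighborhoods used earlier:

```python
def smooth_by_pattern_distance(acc, dist, r):
    """For each Hough-space point p, average the votes over the set
    { m : dist(p, m) <= r }, i.e. the points whose corresponding
    specific patterns differ little from p's."""
    h, w = len(acc), len(acc[0])
    pts = [(i, j) for i in range(h) for j in range(w)]
    out = [[0.0] * w for _ in range(h)]
    for p in pts:
        nb = [q for q in pts if dist(p, q) <= r]
        out[p[0]][p[1]] = sum(acc[i][j] for i, j in nb) / len(nb)
    return out

def chebyshev(p, q):
    # illustrative distance; any pattern distance satisfying the text may be used
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

votes = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
smoothed = smooth_by_pattern_distance(votes, chebyshev, r=1)
```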
- when the Hough space voting means 931 votes for each point on the locus in the Hough space corresponding to a feature point, it is also possible to vote for the neighborhood range of each point on the trajectory, determined by the noise component considered to be included in the position of the feature point.
- the target data input device 910 inputs data to be subjected to the specific pattern detection process (step C1 in FIG. 11).
- the feature point detection means 920 detects, as a feature point, a point estimated from the target data to be a point on the specific pattern (step C2).
- the Hough space voting means 931 votes a weight corresponding to each feature point for the respective points on the trajectory corresponding to that feature point in the Hough space (step C3).
- the Hough space smoothing means 932 determines, for each point in the Hough space, a smoothed voting value at that point using the voting values of the point and its neighboring points (step C4).
- the Hough space peak detection means 933 detects one or more points giving peaks of the voting values in the Hough space (step C5).
- the specific pattern output means 940 outputs, for each point giving a peak of the voting values detected by the Hough space peak detection means 933, the specific pattern for the corresponding parameters (step C6). In the voting in step C3, voting may be performed for the neighborhood range of each point on the locus determined by the noise component considered to be included in the position of the feature point.
- the distance between specific patterns, representing the difference between specific patterns within the predetermined region of interest in the target data, is defined, and the region in which the distance between specific patterns is small is treated as the neighborhood; since the smoothing of the Hough transform is performed over this neighborhood range, by appropriately defining the distance between specific patterns, the appearance of false specific patterns near a true specific pattern, which is the conventional problem, can easily be suppressed.
- the Hough space voting means 931 votes for the neighborhood range of each point on the locus determined by the noise component considered to be included in the position of the feature point. Therefore, a specific pattern robust to the noise component included in the feature points can be detected.
- This recognition system uses an image input from an input device such as a camera as the target data, and detects a plurality of points (hereinafter referred to as edge points) where the pixel value changes sharply as the feature points.
- a straight line formed by the edge points is detected as the specific pattern.
- a road image 310 obtained by photographing a road from a camera mounted on a vehicle, as shown in FIG. 4, is taken up. By detecting a straight line from this road image, the white line 312 indicating the traveling-lane boundary drawn on the road surface 311 is recognized.
- the recognition system includes an image input device 1110 such as a camera as the target data input device 910, edge point detection means 1120 as the feature point detection means 920, Hough transform means 1130, and straight line output means 1140 as the specific pattern output means 940.
- the edge point detection means 1120 includes 3×3 Sobel filter means 1121, edge strength calculation means 1122, edge strength threshold processing means 1123, and edge point output means 1124.
- the edge point detection means 1120 is not limited to the above-described configuration, and may be any means that detects a point with a sharp change in pixel value.
- the Hough transform means 1130 includes Hough space voting means 1131, Hough space smoothing means 1132, and Hough space peak detection means 1133.
- the Hough space, defined as a two-dimensional plane with the straight-line parameters as the horizontal and vertical axes, is processed in the same manner as by the Hough transform means 930.
- the straight line detection means 1140 outputs the straight line represented by the straight-line parameter expression corresponding to each point in the Hough space that gives a peak of the voting values detected by the Hough transform means 1130.
- the 3×3 Sobel filter means 1121 calculates a Sobel x component and a Sobel y component for each point of the road image 310. Further, the edge strength calculation means 1122 calculates, for each point of the road image 310, the square root of the sum of squares of the Sobel x component and the Sobel y component, or the sum of their absolute values, as the edge strength of that point.
- the edge strength threshold processing means 1123 determines, for each point of the road image 310, whether the edge strength is equal to or greater than a predetermined threshold, for example 100.
- the edge point output means 1124 outputs, as an edge point, each point whose edge strength is judged by the edge strength threshold processing means 1123 to be equal to or greater than the threshold.
- the Hough space voting means 1131 votes a weight corresponding to each edge point for every point on the trajectory corresponding to the edge point detected by the edge point detection means 1120 in the Hough space.
- the edge strength is voted as a weight corresponding to the edge point.
- the weight according to the edge point is not limited to this.
- for example, a constant; the angle formed between the gradient of the pixel value at the edge point, calculated from the Sobel x component and the Sobel y component, and the slope of the straight line corresponding to the point on the trajectory; a value calculated from the pixel values of the edge point or its vicinity; or a value calculated from these values and the edge strength may also be used.
- the Hough space smoothing means 1132 determines, for each point in the Hough space, a smoothed voting value at that point using the voting values of the point and its neighboring points, for example as the average value of the voting values at the neighboring points.
- the smoothed voting value is not limited to this; various values calculated from the positional relationship and voting values of the neighboring points may be used, such as the sum of the neighboring points each multiplied by an appropriate weight, or the maximum voting value in the neighborhood.
- the Hough space peak detection means 1133 detects, as a peak, a point whose voting value is the maximum within a neighborhood range, such as its 3×3 neighborhood, and is equal to or greater than a predetermined threshold in the Hough space.
- the criterion for detecting the peak is not limited to this.
- a criterion suited to the problem setting may be used, such as taking only the point having the maximum voting value in the entire Hough space as the peak.
- the Hough space smoothing means 1132 will be described.
- L represents the y coordinate of a horizontal line in the road image 310 as shown in FIG.
- the attention area is set as a lower area of the road image 310, for example, an area below the central horizontal line as shown in FIG.
- This attention area A31 1 is an area expected to correspond to a road surface in the road image 310.
- Equation (10) can be transformed to Equation (12) below from Equation (4).
- The Hough space smoothing means 1132 can use, for each point of the Hough space, the average value of the voting values in its neighboring region as the smoothed voting value.
- The smoothed voting value is not limited to this: various values calculated from the positional relationship and voting values of the set of points may be used, such as a weighted sum over the points of the set or the maximum voting value in the neighborhood.
- The Hough space voting means 1131 will be supplemented.
- The Hough space voting means 1131 votes the weight corresponding to the edge point at each point on the trajectory, in the Hough space, corresponding to the edge point detected by the edge point detection means 1120.
- This voting can be performed not only on each point on the trajectory itself but over a neighborhood range of each point on the trajectory, determined by the noise component considered to be included in the position of the edge point.
- Suppose that only the noise resulting from coordinate discretization is considered as the noise component included in the edge point position at the time of voting. In this case, the true position of an edge point detected at a certain pixel position in the road image is considered to lie in the range of the following Equation (13).
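As an illustration of voting over such a neighborhood range: assuming, for the sketch only, that Equation (13) expresses a ±0.5-pixel uncertainty in each coordinate (the patent's exact equation is not reproduced here), the interval of rho values consistent with the detected pixel can be voted instead of a single bin. All names below are hypothetical.

```python
import math

def rho_interval(x, y, theta, half=0.5):
    """Range of rho = x*cos(theta) + y*sin(theta) when the true edge
    position lies anywhere in [x-half, x+half] x [y-half, y+half]."""
    c, s = math.cos(theta), math.sin(theta)
    centre = x * c + y * s
    spread = half * (abs(c) + abs(s))
    return centre - spread, centre + spread

def vote_with_discretization(acc, x, y, w, rho_max=400.0):
    """Vote weight w into every rho bin the uncertainty interval
    overlaps, for each discretized theta."""
    n_theta, n_rho = len(acc), len(acc[0])
    bin_w = 2 * rho_max / n_rho
    for ti in range(n_theta):
        theta = math.pi * ti / n_theta
        lo, hi = rho_interval(x, y, theta)
        r0 = int((lo + rho_max) / bin_w)
        r1 = int((hi + rho_max) / bin_w)
        for ri in range(max(0, r0), min(n_rho, r1 + 1)):
            acc[ti][ri] += w
```

Voting the whole interval makes the peak robust when the true line's rho falls near a bin boundary.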
- The road image is input by the image input device 1110 (step D1 in FIG. 13).
- The Sobel filter means 1121 calculates the Sobel x component and Sobel y component for each point of the road image (step D2).
- The edge strength calculating means 1122 calculates the edge strength for each point of the road image (step D3).
- The edge strength threshold processing means 1123 thresholds the edge strength for each point of the road image (step D4).
- The edge point output means 1124 outputs each point whose edge strength is equal to or higher than the threshold value as an edge point (step D5).
- The Hough space voting means 1131 votes in the Hough space for each edge point (step D6).
- The Hough space smoothing means 1132 smooths the voting values in the Hough space (step D7).
- The Hough space peak detecting means 1133 detects the peak points in the Hough space (step D8).
- The straight line output means 1140 outputs the straight lines corresponding to the respective Hough space peak points (step D9).
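The edge-detection steps (Sobel components, edge strength, thresholding, edge point output) can be sketched as one pure-Python function. The 3×3 kernels are the standard Sobel formulation; the function name and interface are assumptions, not the patent's.

```python
def sobel_edge_points(img, threshold):
    """img: 2D list of pixel values. Returns [(x, y, strength)] for
    interior points whose gradient magnitude is >= threshold."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # Sobel x kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # Sobel y kernel
    pts = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            s = (gx * gx + gy * gy) ** 0.5   # edge strength (step D3)
            if s >= threshold:               # thresholding (step D4)
                pts.append((x, y, s))        # edge point output (step D5)
    return pts
```

The returned (x, y, strength) triples are exactly what the weighted Hough voting consumes.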
- In the above description, the target data is an image. However, the target data is not limited to this.
- For example, it may be a distance image in which each pixel value indicates the distance to the real object corresponding to that position in three-dimensional space.
- The distance between the straight lines is defined as the sum of squares of the differences of the x coordinate on each horizontal line of the attention area A311. However, this is not limited to horizontal lines; any set of mutually parallel lines, such as vertical lines, may be used, taking the sum of squares of the distances between the line positions on each line of the set.
- Furthermore, the measure is not limited to the sum of squares. When the set of line positions on each line of the parallel set is regarded as one vector, any quantity defined as a distance between the vectors corresponding to the two straight lines may be used, such as the sum of absolute values or the maximum absolute value of the differences.
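The horizontal-line formulation of the inter-line distance can be sketched as follows (the callable-based interface is an illustration, not the patent's representation of a line):

```python
def line_distance_sq(line1, line2, y_range):
    """Sum of squared differences of the x coordinates of two lines,
    sampled on each horizontal line y of the attention area.

    line1, line2: callables mapping y -> x (e.g. x = a*y + b).
    """
    return sum((line1(y) - line2(y)) ** 2 for y in y_range)
```

For example, two parallel lines x = 0.5y + 10 and x = 0.5y + 12 sampled on 100 horizontal lines differ by 2 on each line, giving a distance of 400.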
- the present invention can be applied to uses such as detecting a specific pattern in target data such as an image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/092,025 US8208757B2 (en) | 2006-05-25 | 2007-05-15 | Recognition system, recognition method, and recognition program |
JP2008517843A JP4577532B2 (ja) | 2006-05-25 | 2007-05-15 | 認識システム、認識方法および認識プログラム |
EP07743749A EP2026281A1 (en) | 2006-05-25 | 2007-05-15 | Recognizing system, recognizing metho and recognizing program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-145274 | 2006-05-25 | ||
JP2006145274 | 2006-05-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007138898A1 (ja) | 2007-12-06 |
Family
ID=38778418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/060315 WO2007138898A1 (ja) | 2006-05-25 | 2007-05-15 | 認識システム、認識方法および認識プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US8208757B2 (ja) |
EP (1) | EP2026281A1 (ja) |
JP (1) | JP4577532B2 (ja) |
CN (1) | CN101356547A (ja) |
WO (1) | WO2007138898A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013239165A (ja) * | 2012-05-15 | 2013-11-28 | Palo Alto Research Center Inc | 近視野のカメラの障害物の検出 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101567086B (zh) * | 2009-06-03 | 2014-01-08 | 北京中星微电子有限公司 | 一种车道线检测方法及其设备 |
WO2011065399A1 (ja) * | 2009-11-25 | 2011-06-03 | 日本電気株式会社 | 走路認識装置、車両、走路認識方法及び走路認識プログラム |
US8675089B2 (en) * | 2009-12-25 | 2014-03-18 | Samsung Electronics Co., Ltd. | Apparatus and method for assisting composition of photographic image |
JP2013003686A (ja) * | 2011-06-13 | 2013-01-07 | Sony Corp | 認識装置および方法、プログラム、並びに記録媒体 |
KR101279712B1 (ko) * | 2011-09-09 | 2013-06-27 | 연세대학교 산학협력단 | 실시간 차선 검출 장치 및 방법과 이에 관한 기록매체 |
US9189702B2 (en) * | 2012-12-31 | 2015-11-17 | Cognex Corporation | Imaging system for determining multi-view alignment |
CN103955925B (zh) * | 2014-04-22 | 2017-03-29 | 湖南大学 | 基于分块固定最小采样的改进概率霍夫变换曲线检测方法 |
EP3227855A4 (en) * | 2014-12-04 | 2018-06-20 | Le Henaff, Guy | System and method for interacting with information posted in the media |
US9852287B1 (en) | 2016-10-04 | 2017-12-26 | International Business Machines Corporation | Cognitive password pattern checker to enforce stronger, unrepeatable passwords |
JP6345224B1 (ja) * | 2016-12-19 | 2018-06-20 | 株式会社Pfu | 画像処理装置、矩形検出方法及びコンピュータプログラム |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06266828A (ja) * | 1993-03-12 | 1994-09-22 | Fuji Heavy Ind Ltd | 車輌用車外監視装置 |
JPH10208056A (ja) * | 1997-01-16 | 1998-08-07 | Honda Motor Co Ltd | 直線検出方法 |
JP2003271975A (ja) * | 2002-03-15 | 2003-09-26 | Sony Corp | 平面抽出方法、その装置、そのプログラム、その記録媒体及び平面抽出装置搭載型ロボット装置 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06314339A (ja) | 1993-04-27 | 1994-11-08 | Honda Motor Co Ltd | 画像の直線成分抽出装置 |
2007
- 2007-05-15 US US12/092,025 patent/US8208757B2/en active Active
- 2007-05-15 WO PCT/JP2007/060315 patent/WO2007138898A1/ja active Application Filing
- 2007-05-15 EP EP07743749A patent/EP2026281A1/en not_active Withdrawn
- 2007-05-15 CN CNA2007800012755A patent/CN101356547A/zh active Pending
- 2007-05-15 JP JP2008517843A patent/JP4577532B2/ja active Active
Also Published As
Publication number | Publication date |
---|---|
EP2026281A1 (en) | 2009-02-18 |
CN101356547A (zh) | 2009-01-28 |
US8208757B2 (en) | 2012-06-26 |
JP4577532B2 (ja) | 2010-11-10 |
US20090252420A1 (en) | 2009-10-08 |
JPWO2007138898A1 (ja) | 2009-10-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 200780001275.5; Country of ref document: CN |
| DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07743749; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2008517843; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 2007743749; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 12092025; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |