JP2010020476A - Object detection device and object detection method - Google Patents

Object detection device and object detection method

Info

Publication number
JP2010020476A
JP2010020476A
Authority
JP
Japan
Prior art keywords
feature
pixel
region
feature region
image
Prior art date
Legal status
Pending
Application number
JP2008179346A
Other languages
Japanese (ja)
Inventor
Yosuke Matsuno
洋介 松野
Original Assignee
Nissan Motor Co Ltd
日産自動車株式会社
Priority date
Filing date
Publication date
Application filed by Nissan Motor Co Ltd, 日産自動車株式会社 filed Critical Nissan Motor Co Ltd
Priority to JP2008179346A priority Critical patent/JP2010020476A/en
Publication of JP2010020476A publication Critical patent/JP2010020476A/en
Pending legal-status Critical Current

Abstract

PROBLEM TO BE SOLVED: To provide an object detection device that accurately detects a three-dimensional object.

SOLUTION: The object detection device has an on-vehicle camera 10, a movement information calculation part 20 that calculates movement information of each pixel based on information of a captured image of an object, and a detection part 30 that detects the object based on the movement information. Based on the movement information, the detection part 30 extracts a first feature region in which pixels having common movement information are continuous in the vertical direction and a second feature region in which pixels having a common moving speed are continuous along the direction of a prescribed angle, and, when the first feature region is positioned between the second feature regions, determines that the object corresponding to the region including the first feature region and the second feature regions is a three-dimensional object.

COPYRIGHT: (C)2010, JPO&INPIT

Description

  The present invention relates to an object detection apparatus and an object detection method for detecting an object present around a vehicle.

A three-dimensional object recognition device is known in which a distance image, indicating the distance from the captured image of a stereo camera to the imaging target, is converted into an overhead coordinate system, and a three-dimensional object is determined by pattern matching on the pattern of the area in which no distance information exists in the overhead coordinate system (see Patent Document 1).

JP-A-2005-346381

However, when a three-dimensional object is determined using the gaps in the distance information obtained from the stereo camera, the pattern of a three-dimensional object that is flat and low in height approximates the pattern of an area having no distance information. There is therefore a problem that a three-dimensional object that is flat and low in height, such as a sidewalk, may be erroneously detected as a flat object.

  The problem to be solved by the present invention is to reduce erroneous detection of a three-dimensional object having a low height as a flat object.

In the present invention, the above-described problem is solved by determining that a region including a first feature region and second feature regions corresponds to a three-dimensional object when the first feature region, in which pixels having common movement information are continuous in the vertical direction, exists between the second feature regions, in which pixels having a common moving speed are continuous along the direction of a predetermined angle.

According to the present invention, the three-dimensional object is detected based on the positional relationship between the first feature region, in which pixels having common movement information are continuous in the vertical direction, and the second feature regions, in which pixels having a common moving speed are continuous along the direction of a predetermined angle. Therefore, it is possible to reduce erroneous detection of a three-dimensional object having a low height, such as a curb or a sidewalk of a road, as a flat object.

  The object detection apparatus according to the present embodiment is an apparatus that detects an object existing around a vehicle while distinguishing the attribute of the object such as whether the object is a three-dimensional object or a flat object.

<< First Embodiment >>
Hereinafter, a first embodiment will be described based on the drawings.

FIG. 1 is a diagram illustrating an example of a block configuration of an in-vehicle device 1000 including the object detection device 100. As shown in FIG. 1, the in-vehicle device 1000 of this embodiment includes the object detection device 100, a vehicle controller 200 that provides vehicle information to the object detection device 100, various sensors 210 that detect the vehicle information, an output device 300 that outputs information based on the determination result of the object detection device 100, and a driving support device 400 that performs driving support based on the determination result of the object detection device 100. The object detection device 100, the vehicle controller 200, the output device 300, and the driving support device 400 are configured by combining operation circuits such as a CPU, MPU, DSP, and FPGA. These components are connected by a CAN (Controller Area Network) or other in-vehicle LAN.

In addition, the object detection apparatus 100 according to the present embodiment includes a camera 10 as one aspect of an imaging unit, a movement information calculation unit 20, and a detection unit 30. Although the specific configuration is not particularly limited, the movement information calculation unit 20 and the detection unit 30 can be realized, for example, by a microcomputer and a memory on which programs are configured and operated, or by an ASIC or FPGA in which each process is incorporated as firmware.

Hereinafter, each component of the object detection apparatus 100 will be described.

First, the camera 10 as an example of an imaging unit will be described. The camera 10 has an image sensor such as a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS). The camera 10 of the present embodiment captures images of objects (including three-dimensional objects and planar objects on the road surface) existing around the vehicle (in front of, behind, or beside the vehicle, etc.) at a predetermined cycle, and the images captured for each frame are sequentially output to the image memory 11. The image memory 11 stores the images captured by the camera 10 in an accessible state.

FIG. 2 shows an installation example of the camera 10. FIG. 2A is a side view of a vehicle equipped with the camera, and FIG. 2B is a view of the vehicle equipped with the camera as seen from above. As shown in FIG. 2, in this embodiment one camera 10 is installed in the vehicle; that is, the surroundings of the vehicle are imaged by the monocular camera 10. In the present embodiment, the camera 10 is installed in the upper part of the vehicle interior, facing the front of the vehicle. The optical axis LS of the camera 10 is adjusted to point in the Z direction of the vehicle traveling direction (the driver's forward direction), the horizontal axis X of the imaging surface is adjusted to be parallel to the road surface, and the vertical axis Y of the imaging surface is adjusted to be perpendicular to the road surface.

  FIG. 3 is an example of an image captured in front of the vehicle using the camera 10 of the present embodiment. An image captured by the camera 10 is represented by an xy coordinate system with the vertex at the upper left of the image as the origin. An axis extending rightward from the origin is taken as the x-axis, and an axis extending downward from the origin is taken as the y-axis. The captured image shown in FIG. 3 includes a pole as a three-dimensional object (three-dimensional) and a white line (including lines other than white) on the road as a flat object (planar object).

Next, the movement information calculation unit 20 will be described. The movement information calculation unit 20 includes a feature extraction unit 21 and a calculation unit 22, and calculates movement information of the pixels corresponding to an object in the image based on the image information of the object (pole, white line, etc.) captured by the camera 10.

The feature extraction unit 21 extracts, from each image data (frame) captured by the camera 10, feature portions including the extension of an object and characteristic parts of the object, in order to observe the movement of the imaged object on the image. The feature extraction unit 21 of the present embodiment reads an image captured by the camera 10 from the image memory 11, binarizes the read captured image using a predetermined threshold, and extracts the edges of objects present in the image. The feature portions are extracted based on these edge components.


  FIG. 4A shows an example of vertical edges extracted. Next, thinning processing is performed on each extracted edge to narrow the edge width, and the center of the edge is accurately set (see FIG. 4B). Further, the edge is expanded in the horizontal direction so that the edge width of the thinned edge becomes a constant width, for example, a width corresponding to three pixels (see FIG. 4C). By this operation, the extracted edges are normalized, and an edge image having a uniform width for each edge is obtained.

  The calculation unit 22 calculates the speed of the pixel of the feature portion obtained from the edge extracted by the feature extraction unit 21. The obtained moving speed and moving direction of the characteristic part are stored in association with the imaging timing identifier or the frame identifier. This pixel movement information includes “pixel movement speed” and “pixel movement direction” along with pixel identification information. Note that if there are a plurality of feature portions in one image data, the speed is calculated for all the feature portions.

  Hereinafter, a specific movement speed calculation method performed by the movement information calculation unit 20 of the present embodiment will be described.

The calculation unit 22 according to the present embodiment counts up the count value of the pixel at the position where the edge corresponding to the contour of the object is detected, based on the information of the image of the object captured by the camera 10, and calculates the moving speed and moving direction of the edge based on the slope of the count values.

The movement information calculation unit 20 of the present embodiment updates, by a predetermined method, the counter value of the pixel counter of the pixel corresponding to an edge included in each image data, for image data with different imaging timings. Here, the pixel counter is a counter set for each pixel: when the pixel corresponds to an edge, the counter value of the pixel counter is incremented by +1, and when the pixel does not correspond to an edge, the counter value of the pixel counter is set to 0 (initialized). This counter value update process is performed for each frame captured repeatedly by the camera 10 at a predetermined cycle. When this operation is performed, the counter value of the corresponding pixel counter becomes large for a pixel that corresponds to an edge for a long time, while the counter value becomes small for a pixel that corresponds to an edge only for a short time.

  This change in the counter value of the pixel counter represents the moving direction and moving amount of the edge. For this reason, the moving direction and moving speed of the edge on the captured image are calculated based on the counter value. Since the coordinate system of the image represents the azimuth, the moving direction and moving speed of the edge and the feature corresponding to the edge can be obtained.

Furthermore, a movement information calculation method performed by the calculation unit 22 will be described with reference to FIG. 4. FIG. 4 is a diagram for concretely explaining the movement information calculation process, that is, the process of obtaining an edge image in which the extracted edges are normalized and of calculating the moving direction and the moving speed from the edge counter values (dwell times).

  First, the feature extraction unit 21 performs binarization processing on the edge image. The binarization process is a process in which a pixel at a position where an edge is detected is set to 1 and a pixel at a position where no edge is detected is set to 0. FIG. 4A shows an example of the binarized image of the extracted vertical edge.

  Next, as shown in FIG. 4B, thinning processing is performed on the generated binary image. The thinning process is a process of reducing the edge width of the detected edge until a predetermined pixel width is reached. That is, the edge width is narrowed by performing thinning processing on each extracted edge. In this example, as shown in FIG. 4B, the edge width of the edge is thinned until the predetermined pixel width becomes one pixel. In this way, the edge is thinned to a predetermined pixel width, thereby setting the center position as the center of the edge. Note that, in this example, an example in which one pixel is thinned is shown, but the number of pixels to be thinned is not particularly limited.

  Next, an expansion process is performed to expand the edge width of the thinned edge. The expansion process is performed so that the edge width is constant from the center position set by thinning toward the edge movement direction, and the edge width is also changed from the center position to the direction opposite to the edge movement direction. It is a process of expanding. In this example, the edge is expanded in the horizontal direction so that the edge width of the thinned edge becomes a width corresponding to three pixels. By this process, the extracted edges are normalized, and an edge image having a uniform width is obtained. Specifically, as shown in FIG. 4C, one pixel is expanded from the edge center position x0 to the edge movement direction (the positive direction of the x-axis), and at the same time opposite to the edge movement direction from the edge center position x0. The edge width is expanded to 3 pixels by expanding one pixel in the direction (negative direction of the x axis).

By performing the thinning process and the expansion process in this way, the edge width of the extracted edge image is standardized to a predetermined width in the edge moving direction.
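The following is a minimal sketch of this normalization, assuming the binarized edge image is held as a NumPy array and the standardized width is three pixels; the function name and array layout are illustrative and not taken from the patent.

```python
import numpy as np

def normalize_edges(edge_binary, half_width=1):
    """Thin each horizontal run of edge pixels to its center column, then
    expand it again to a fixed width of (2 * half_width + 1) pixels."""
    h, w = edge_binary.shape
    normalized = np.zeros_like(edge_binary)
    for y in range(h):
        x = 0
        while x < w:
            if edge_binary[y, x]:
                run_start = x
                while x < w and edge_binary[y, x]:
                    x += 1
                center = (run_start + x - 1) // 2      # thinning: keep the run center
                lo = max(center - half_width, 0)       # expansion: fixed edge width
                hi = min(center + half_width, w - 1)
                normalized[y, lo:hi + 1] = 1
            else:
                x += 1
    return normalized

# A 1-row image with a 5-pixel-wide edge becomes a 3-pixel-wide edge around its center.
row = np.array([[0, 1, 1, 1, 1, 1, 0, 0]], dtype=np.uint8)
print(normalize_edges(row))   # [[0 0 1 1 1 0 0 0]]
```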

Next, a count-up process performed by the calculation unit 22 to calculate movement information will be described. The count-up process mentioned here is a process of counting up the value of the memory address corresponding to the position of a pixel where an edge is detected, and of initializing the value of the memory address corresponding to the position of a pixel where no edge is detected.

  Hereinafter, the edge count-up processing by the calculation unit 22 will be described with reference to FIGS. For convenience of explanation, here, a case where the edge moves in the positive direction of the x-axis will be described as an example. Even when the edge moves in the negative x-axis direction, the y-axis direction, or two-dimensionally, the basic processing method is common.

  As shown in FIG. 4C, the edge has a center position of the edge at a position x0 in a certain frame. Then, it is expanded from the center position to the position x0 + 1 of one pixel in the edge moving direction, and similarly expanded from the center position to the position x0-1 of one pixel in the direction opposite to the edge moving direction.

  The count value of the memory address corresponding to the position where such an edge is detected, “x0-1”, “x0”, “x0 + 1” is incremented by “+1”. On the other hand, the count value of the memory address corresponding to the position where the edge is not detected is reset.

  For example, in FIG. 4D, edges are detected at positions “x0-1”, “x0”, and “x0 + 1” at time t. Therefore, the count value of the memory address corresponding to each position is incremented by “1”. As a result, the count value at the position “x0 + 1” is “1”, the count value at the position “x0” is “3”, and the count value at the position “x0-1” is “5”.

  Next, as shown in FIG. 4E, since the edge does not move even at time t + 1, the edge is detected at each of the positions “x0-1”, “x0”, and “x0 + 1”. . Therefore, the count values at the positions “x0-1”, “x0”, and “x0 + 1” are further incremented by one. As a result, the count value at position “x0 + 1” is 2, the count value at position “x0” is 4, and the count value at position “x0-1” is 6.

  Further, as shown in FIG. 4F, at time t + 2, the edge is shifted by one pixel in the positive direction of the x-axis, and the edge is detected at positions “x0”, “x0 + 1”, and “x0 + 2”. Therefore, the count value of the memory address corresponding to the positions “x0”, “x0 + 1”, and “x0 + 2” where the edge is detected is counted up. On the other hand, the count value at the position “x0-1” where no edge is detected is reset to “zero”. As a result, the count value at position “x0 + 2” is 1, the count value at position “x0 + 1” is 3, and the count value at position “x0” is 5, as shown in FIG. Further, the count value at the position “x0-1” where no edge is detected is reset to “0”.

  Thus, the calculation unit 22 counts up the count value of the memory address corresponding to the position where the edge is detected, and resets the count value of the memory address corresponding to the position where the edge is not detected.
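A rough sketch of this count-up rule follows, reproducing the one-dimensional example of FIGS. 4(d) to 4(f); the array and function names are assumptions for illustration.

```python
import numpy as np

def update_pixel_counters(counters, edge_frame):
    """Increment the counter where an edge is detected in the current frame,
    and reset it to zero where no edge is detected."""
    return np.where(edge_frame > 0, counters + 1, 0)

# Reproducing the example of FIGS. 4(d)-(f) along one image row:
counters = np.array([4, 2, 0, 0])            # counts at x0-1, x0, x0+1, x0+2 before time t
frame_t  = np.array([1, 1, 1, 0])            # edge occupies x0-1 .. x0+1 at t and t+1
counters = update_pixel_counters(counters, frame_t)    # -> [5 3 1 0]
counters = update_pixel_counters(counters, frame_t)    # -> [6 4 2 0]
frame_t2 = np.array([0, 1, 1, 1])            # at t+2 the edge has shifted one pixel
counters = update_pixel_counters(counters, frame_t2)   # -> [0 5 3 1]
print(counters)
```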

In the description based on FIG. 4, the count value is detected at three positions: the center position “x0” of the edge, the position “x0+1” one pixel from the center position in the moving direction of the edge, and the position “x0-1” one pixel from the center position in the direction opposite to the moving direction. However, as long as the slope of the count value described later can be obtained, the arrangement and number of points at which the count value is detected are not limited. That is, as long as the count value can be detected at two or more locations along the edge moving direction, the count value may be detected at any number of positions.

  Further, when the object approaches the vehicle at a constant angle, the edge is detected a plurality of times at the same position between successive frames. For example, in the example of FIG. 4, the edge is detected twice at the position x0 in the continuous frame at time t and frame at time t + 1. Therefore, when the count value of the memory address corresponding to the position where the edge is detected is counted up, the count value correlates with the time (number of frames, dwell time) at which the edge is detected at that position.

  Next, an edge moving speed, moving direction, and position calculation method in this embodiment will be described. In this embodiment, the inclination of the count value is calculated, and the moving speed, moving direction, and position of the edge are calculated based on this inclination.

  For example, in the case of FIG. 4E, the count values at the positions “x0-1”, “x0”, and “x0 + 1” are “6”, “4”, and “2”, respectively. When the count value “2” of “x0 + 1” is subtracted from the count value “6” of the position “x0-1”, the slope H of the count value can be calculated as H = (6-2) / 2 = 2.

  This means H = {(time from the edge moving to the position x0-1 to the present) − (time after the edge moves to the position x0 + 1)} / (2 pixels). That is, by calculating the inclination H, the time (number of frames) required for the edge to pass through one pixel at the position x0 is calculated.

  Therefore, the slope H of the count value corresponds to how many frames it takes for the edge to move by one pixel, and the edge moving speed 1 / H is calculated based on the slope H of the count value. In FIG. 4E, since 2 frames are required to move 1 pixel, the edge moving speed is calculated as 1/2 (pixel / frame).

Next, a method for determining the edge moving direction based on the magnitude of the count value will be described. Since the edge moves into positions where no edge existed before, the count value at a position where the edge is newly detected is 1, which is the smallest value among the counted positions. Therefore, the count values in the direction in which the edge moves are small, and the count values in the direction opposite to the direction in which the edge moves are large. By using this tendency, the moving direction of the edge can be determined.

  From the above, it is possible to count up the count value of the memory address corresponding to the position where the edge is detected, and to calculate the moving speed and moving direction of the edge based on the slope of the counted up count value.
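Continuing the same one-dimensional example, a minimal sketch of deriving the slope, the moving speed, and the moving direction from the counter values is shown below; the function and parameter names are illustrative.

```python
def edge_motion_from_counts(count_behind, count_ahead, span_pixels=2):
    """Estimate edge motion from counter values sampled on either side of the
    edge center; 'behind' means opposite to the moving direction."""
    slope_h = (count_behind - count_ahead) / span_pixels    # frames needed per pixel
    speed = 1.0 / slope_h if slope_h != 0 else 0.0          # pixels per frame
    # The count is smaller on the side the edge is moving toward.
    direction = +1 if count_ahead < count_behind else -1
    return slope_h, speed, direction

# FIG. 4(e): counts are 6 at x0-1, 4 at x0, 2 at x0+1.
print(edge_motion_from_counts(count_behind=6, count_ahead=2))
# -> (2.0, 0.5, 1): the edge needs 2 frames per pixel, i.e. moves at 1/2 pixel/frame
# in the positive x direction.
```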

In addition, the calculation unit 22 classifies the movement information of the edges present on the captured image into predetermined class values, and generates a moving image that represents the features of the movement information. FIG. 5 shows an example of the moving image. As shown in FIG. 5, in the moving image of the present embodiment, an edge pixel from which movement information is detected is indicated by a circle, and a pixel with a higher moving speed is indicated by a larger circle, thereby expressing the speed information of the pixel. In addition, the moving direction of a pixel is expressed by a solid black mark for a pixel that moves to the right and by a white (hollow) mark for a pixel that moves to the left. As described above, the moving image can express movement information including speed information and a moving direction.
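As one possible way to render such a moving image, the following sketch uses matplotlib to draw per-pixel movement information as circles whose size encodes speed and whose fill encodes direction; the sample pixel data are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-pixel movement information: (x, y, speed, direction),
# where direction +1 means rightward motion and -1 means leftward motion.
pixels = np.array([[100.0, 120.0, 2.0, +1],
                   [102.0, 140.0, 4.0, +1],
                   [220.0, 130.0, 1.0, -1],
                   [222.0, 150.0, 3.0, -1]])

x, y, speed, direction = pixels.T
sizes = 30 * speed                                          # larger circle = faster pixel
faces = ["black" if d > 0 else "white" for d in direction]  # solid = right, hollow = left

fig, ax = plt.subplots()
ax.scatter(x, y, s=sizes, facecolors=faces, edgecolors="black")
ax.invert_yaxis()   # image coordinates: y grows downward from the upper-left origin
ax.set_title("Moving image: circle size = speed, fill = moving direction")
plt.show()
```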

  By the way, when the vehicle travels purely straight, the moving speed of the plane object is not detected. However, in a normal driving environment, vehicle behavior other than straight travel occurs. In the present embodiment, the pseudo speed is calculated based on the behavior of the camera 10 that occurs in association with the vehicle behavior that occurs during normal travel.

FIG. 6 is a diagram for explaining the movement, on the image, of planar objects (the extension of a curb, the extension of a sidewalk, a white line) when a pitching behavior occurs in the vehicle. Due to the pitching behavior of the vehicle, the curb extension and white line at the position a shown in FIG. 6 move on the image to the position b in the next frame. When the curb extension or white line moves due to pitching, its movement in the image is in the direction of the solid arrow Q, but the calculation unit 22 of the present embodiment calculates movement information in the horizontal direction of the broken-line arrow q.

Incidentally, since the speed of the upper surface of a flat object or of a three-dimensional object with a low height is observed through the movement of the camera 10, the moving speed on a line segment that appears obliquely on the image is the same at any position on the image (regardless of the distance from the camera).

Next, the detection unit 30 will be described. The detection unit 30 determines whether the imaged object is a three-dimensional object based on the movement information of each pixel, and outputs the determination result. The detection unit 30 includes a first feature region extraction function 31, a second feature region extraction function 32, and a determination function 33.

Hereinafter, each component of the detection unit 30 will be described.

  First, the first feature region extraction function 31 will be described. The first feature region extraction function 31 extracts a first feature region in which pixels having common movement information are continuous in the vertical direction.

  The first feature region extraction function 31 extracts a region in which pixels having common movement information are continuous in the vertical direction based on the movement information of each pixel of the feature portion calculated by the movement information calculation unit 20. The feature that “pixels having common movement information in the vertical direction are continuous” is a three-dimensional feature on the image.

Although not particularly limited, the movement information calculated based on the slope of the count values, obtained by counting up the count value of the pixel at the position where a feature portion is detected in the image, can be used in extracting the first feature region, which is a feature of a three-dimensional object.

An example of the first feature region extraction method will be described with reference to FIG. 7. As shown in FIG. 7, first, the moving image (see FIG. 5) is searched in the vertical direction (y-axis direction). However, when extracting a pixel row having vertically continuous movement information, the upper end pixel and the lower end pixel of the pixel row are not subjected to the processing for determining the commonality of the movement information. When the moving image is searched in the vertical direction and a pixel A having movement information is found, a pixel B adjacent to the pixel A is examined. When the pixel B has movement information and the pixel C adjacent to the pixel B also has movement information, and the difference in velocity direction between the pixel B and the pixel C (E3-E2) is within the threshold Re and the difference in speed between the pixel B and the pixel C (V3-V2) is within the threshold Tv, it is determined that the pixels are vertically continuous. Next, the presence or absence of movement information is similarly determined for the pixel D adjacent to the pixel C, and it is determined whether the difference in velocity direction between the pixels B and D (E4-E2) is within the threshold Re and whether the difference in speed (V4-V2) is within the threshold Tv. Thereafter, the process is repeated until one of the conditions is no longer satisfied.

As shown in FIG. 7, when the condition is no longer satisfied at a pixel N, for example when the pixel N has no moving speed, the number of pixels from the pixel A to the pixel N-1 is counted. When the pixel N has a moving speed but the difference (Vn−Vn−1) from the moving speed of the pixel N-1 exceeds the threshold Tv, the number of pixels from the pixel A to the pixel N is counted. Furthermore, when the velocity direction of the pixel N and the velocity direction of the pixel N-1 differ by a predetermined value or more, the number of pixels from the pixel A to the pixel N is counted. In this way, the number of pixels whose movement information satisfies the predetermined condition is counted. If the count is equal to or greater than the threshold TH, the pixel row is extracted as a first feature region having common movement information in the vertical direction. Since the first feature region, which consists of a vertically continuous component, is considered to show the feature of a three-dimensional object in the moving image, the object corresponding to the pixels in the first feature region may be part or all of a three-dimensional object.
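A simplified sketch of this vertical search over a single image column is shown below; each pixel carries a (speed, direction) pair or None, and the names tv, re_thr, and th_len stand in for the thresholds Tv, Re, and TH. The treatment of the upper and lower end pixels and the exact stop conditions are simplified, so this is only an illustration of the idea.

```python
def extract_first_feature_regions(column, tv, re_thr, th_len):
    """Scan one image column top-to-bottom and return (start_y, end_y) runs of
    pixels whose movement information is vertically continuous and common.
    'column' is a list of (speed, direction) tuples or None per pixel."""
    regions = []
    y, n = 0, len(column)
    while y < n:
        if column[y] is None:
            y += 1
            continue
        start = y
        while (y + 1 < n and column[y + 1] is not None
               and abs(column[y + 1][0] - column[y][0]) <= tv        # common speed
               and abs(column[y + 1][1] - column[y][1]) <= re_thr):  # common direction
            y += 1
        if (y - start + 1) >= th_len:           # long enough vertical run
            regions.append((start, y))
        y += 1
    return regions

# Example: a 10-pixel column with a 6-pixel vertical run of common movement.
col = [None, (2.0, 0.1), (2.1, 0.1), (2.0, 0.0), (2.1, 0.1), (2.0, 0.1), (2.1, 0.0),
       None, (5.0, 0.2), None]
print(extract_first_feature_regions(col, tv=0.5, re_thr=0.5, th_len=4))  # [(1, 6)]
```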

  Next, the second feature region extraction function 32 will be described. The second feature region extraction function 32 extracts a second feature region in which pixels having a common moving speed are continued along a direction of a predetermined angle.

Although the extraction method of the second feature region is not particularly limited, in the present embodiment a method that uses the upper end pixels and the lower end pixels of the first feature regions will be described. Of course, the second feature region extraction function 32 can also extract pixels having a common moving speed in their movement information, refer to the distribution of those pixels, and extract a region composed of pixels connected along a predetermined angle as the second feature region.

The second feature region extraction function 32 of the present embodiment extracts the upper end pixel and the lower end pixel of each pixel row with common movement information that is continuous in the vertical direction, as extracted by the first feature region extraction function 31, and extracts a region including the extracted upper end pixels and a region including the extracted lower end pixels as second feature regions, respectively.

  First, the second feature region extraction function 32 extracts the upper end pixel and the lower end pixel of the first feature region.

The second feature region extraction function 32 extracts the upper end pixel (pixel A in FIG. 7) and the lower end pixel (pixel N in FIG. 7) that the first feature region extraction function 31 excluded from the determination of the commonality of the movement information. When the pixel N has no movement information, the upper end pixel (pixel A in FIG. 7) and the lower end pixel (pixel N-1 in FIG. 7) are extracted.

Then, for each such pixel, an end pixel table is recorded in which an attribute indicating whether the pixel is an upper end pixel or a lower end pixel, the X coordinate of the pixel, the Y coordinate of the pixel, and the speed value of the pixel are associated with one another. An example of this end pixel table is shown in FIG. 8.

The second feature region extraction function 32 further extracts, from the extracted end points, end points having the same moving speed (within a predetermined value at which they can be determined to be substantially the same), and extracts a second feature region including them. The second feature region including the end points having the same moving speed is a linear region. Specifically, referring to the end pixel table shown in FIG. 8 and paying attention to the moving speed included in the end pixel table, first, with the speed Va of the first pixel A as a reference, only the end points whose speed falls within the range (Va ± Tvg), defined by a certain threshold Tvg, are selected.

If the number of end pixels having a moving speed in the range (Va ± Tvg) is smaller than the threshold Tn, those end points are deleted from the end pixel table. On the other hand, when the number of end pixels having a moving speed in the range (Va ± Tvg) is equal to or greater than the threshold Tn, the end pixels are sorted in ascending order based on the X coordinate values in the end pixel table, and the result is transferred to a separate sort table. An example of the sort table is shown in FIG.

In the sort table, the end point P1 having the smallest X coordinate has the X coordinate X1 and the Y coordinate Y1. Based on FIG. 10, the conditions under which end pixels are grouped into a common second feature region will be described. When X1 is less than or equal to half the horizontal pixel size Wn of the image, that is, when (X1 < Wn/2) is satisfied and P1 is in the left half of the image, a point P2 is searched for that satisfies X2 > X1 and Y2 < Y1, minimizes X2 − X1 + Y1 − Y2 (that is, the closest point), and has the same end-point attribute as P1 (an upper end point if P1 is an upper end point, or a lower end point if P1 is a lower end point). If P2 satisfies all the above conditions, P1 and P2 are determined to be points on the same line segment (points that continue along the direction of a predetermined angle), and a common group ID is assigned.

The same processing is performed for the right half of the image. This is because pixels corresponding to an object on the left side of the vehicle appear on the left side of the image, and pixels corresponding to an object on the right side of the vehicle appear on the right side of the image. That is, when (X1 > Wn/2) and P1 is in the right half of the image, a point P2 is searched for that satisfies X2 > X1 and Y2 > Y1, minimizes X2 − X1 + Y2 − Y1, and has the same end-point attribute as P1 (an upper end point if P1 is an upper end point, or a lower end point if P1 is a lower end point). When P2 satisfies all the conditions, P1 and P2 are determined to be points on the same line segment (points connected along the direction of a predetermined angle), and a common group ID is assigned.

Similarly, the same processing is performed for P3, P4, and so on, each time using the newly found point (for example P2) as the reference, and the pixels to which a common group ID has been assigned at the time processing is completed for all end pixels are treated as a group belonging to a common second feature region and are moved to the second feature region table. FIG. 11 is a diagram illustrating an example of the second feature region table.
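The following is a much simplified sketch of this grouping step: it keeps only the core idea of selecting end pixels whose speeds lie within Va ± Tvg of a reference pixel and collecting them into a group, and omits the left-half/right-half geometric chaining conditions described above. Field and threshold names are illustrative.

```python
def group_end_pixels(end_pixels, tvg, tn):
    """end_pixels: list of dicts with keys 'x', 'y', 'speed', 'attr' ('top'/'bottom').
    Returns a list of groups, each a list of end pixels assumed to lie on the
    same line segment (a second feature region candidate)."""
    groups = []
    remaining = list(end_pixels)
    while remaining:
        ref = remaining[0]
        band = [p for p in remaining
                if abs(p['speed'] - ref['speed']) <= tvg and p['attr'] == ref['attr']]
        if len(band) < tn:                       # too few members: discard the reference
            remaining.remove(ref)
            continue
        band.sort(key=lambda p: p['x'])          # chain from the smallest X outward
        groups.append(band)
        for p in band:
            remaining.remove(p)
    return groups

ends = [{'x': 60, 'y': 300, 'speed': 2.0, 'attr': 'bottom'},
        {'x': 140, 'y': 255, 'speed': 2.1, 'attr': 'bottom'},
        {'x': 220, 'y': 210, 'speed': 1.9, 'attr': 'bottom'},
        {'x': 100, 'y': 180, 'speed': 5.0, 'attr': 'top'}]
print(len(group_end_pixels(ends, tvg=0.5, tn=2)))   # -> 1 group of three bottom ends
```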

The above processing is then repeated for the end pixels remaining in the sort table. When it is determined that no end pixel to which a common group ID can be assigned remains in the sort table, the information on those end pixels is deleted from the end pixel table and the sort table.

Subsequently, in the second feature region table shown as an example in FIG. 11, a set of end pixels to which a common group ID is attached is a set of end pixels whose moving speeds lie on the same line segment (continuous along the direction of a predetermined angle). For this reason, a line segment is fitted to the end pixels with the common group ID by the least squares method, and this line segment is extracted as the second feature region.

FIG. 12 is a diagram illustrating the process of obtaining a second feature region from a set of end pixels with a common group ID. Note that the method of obtaining the second feature region from the set of end pixels to which a common group ID is assigned is not limited to the least squares method. Further, when obtaining the second feature region, the end pixels having a large Y coordinate value (close to the camera) among the end pixels assigned the common group ID may be given a larger weight, and the second feature region may be calculated with emphasis on the positions of the heavily weighted end pixels. As a result, the second feature region can be obtained based on the highly accurate information near the camera 10.
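A minimal sketch of fitting a second feature region to a group of end pixels with the least squares method, including the optional weighting of end pixels with larger Y coordinates mentioned above; the function name and the sample coordinates are illustrative.

```python
import numpy as np

def fit_second_feature_region(end_pixels, weight_by_y=False):
    """Fit a line (slope, intercept) through end pixels sharing a group ID.
    end_pixels: array of (x, y); optionally weight pixels with larger y
    (closer to the camera) more heavily."""
    xs = end_pixels[:, 0].astype(float)
    ys = end_pixels[:, 1].astype(float)
    w = ys if weight_by_y else np.ones_like(ys)
    slope, intercept = np.polyfit(xs, ys, deg=1, w=w)
    return slope, intercept

group = np.array([[50, 300], [120, 260], [200, 215], [260, 180]])
print(fit_second_feature_region(group))   # slope ~ -0.57, intercept ~ 329
```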

The first feature region extraction function 31 sends information (including pixel coordinates and movement information) regarding the extracted first feature region to the determination function 33. The second feature region extraction function 32 sends information (including pixel coordinates and movement information) regarding the extracted second feature region to the determination function 33.

  Next, the determination function 33 will be described.

When the first feature region is located between the second feature regions, the determination function 33 of the present embodiment determines that the object corresponding to the region including the first feature region and the second feature regions is a three-dimensional object.

  The determination function 33 of the present embodiment is based on the positional relationship between a first feature region in which pixels having common movement information are continuous in the vertical direction and a second feature region in which pixels having common movement speed are continuous along a predetermined angle direction. Based on this, a three-dimensional object is detected.

  Here, a method for detecting a three-dimensional object based on the positional relationship between the first feature region and the second feature region will be described.

  When the camera 10 mounted on the vehicle shown in FIG. 13 images the front of the vehicle, a captured image including a curb and a sidewalk on the left side of the road on which the vehicle travels can be obtained. An example of the captured image is shown in FIG. As shown in FIG. 14, a curb or a sidewalk is included on the left side of the image surrounded by a round frame. An object with a flat upper surface and a low height, such as a curb or a sidewalk, may be erroneously detected as a flat object.

  The object detection apparatus 100 according to the present embodiment detects a three-dimensional object based on the feature of pixel movement information corresponding to a straight line that forms the outline of the object and the positional relationship between the straight lines that form the outline of the object.

FIG. 15A is a model of a three-dimensional object, such as the curb shown in FIG. 14, extending along the traveling path. As shown in FIG. 15B, the three-dimensional object of FIG. 15A has a linear component L that continues along the direction of a predetermined angle and a linear component K that continues along the vertical direction.

  Since the straight line extending along the traveling path of the vehicle is extracted based on movement information generated by the behavior of the camera 10, the moving speed and / or moving direction is the same even if the location of the image is different. Therefore, it is possible to extract the linear component L constituting the three-dimensional object extending along the traveling path of the vehicle using this feature.

However, since straight lines extending from near to far as viewed from the camera 10 (straight lines extending along the traveling path of the vehicle) also include planar objects such as white lines on the road surface, whether an object is a three-dimensional object cannot be determined simply by extracting a linear component.

In the present embodiment, when a linear component K in which pixels having common movement information continue in the vertical direction exists between linear components L that continue along the direction of a predetermined angle, the object containing the linear components L and the linear component K is detected as a three-dimensional object.

The feature seen in the linear component K, that “pixels having common movement information continue in the vertical direction”, is peculiar to a three-dimensional object. For this reason, when a linear component K in which pixels with common movement information are continuous in the vertical direction exists between the two linear components L, it can be assumed that the linear components L and the linear component K together constitute the same three-dimensional object.

  Further, the determination function 33 of the present embodiment further includes a positional relationship comparison function 331 and a speed comparison function 332, and accurately detects a three-dimensional object.

First, the positional relationship comparison function 331 will be described. The positional relationship comparison function 331 compares the slopes of the two second feature regions extracted by the second feature region extraction function 32. When the difference between the slopes of the two second feature regions is within a predetermined threshold, so that the two regions can be determined to have substantially the same slope, and the first feature region is located between the second feature regions having substantially the same slope, it is determined that the object corresponding to the region including the first feature region and the second feature regions is a three-dimensional object.

Specifically, a method for determining a pair of second feature regions constituting the same three-dimensional object, for the second feature regions extracted by the second feature region extraction function 32, will be described with reference to FIG. 16.

  The positional relationship comparison function 331 obtains the inclinations of all the second feature areas extracted by the second feature area extraction function 32. Next, a plurality of second feature areas composed only of the lower end pixels of the first feature area extracted by the first feature area extraction function 31 are obtained. The coordinates of the end points of each second feature region are compared, and the second feature region including the end point having the largest Y coordinate value (located on the lower side of the image) is set as the reference region. In the example shown in FIG. 16, the second feature region having A1 and A2 as end points is the reference region.

The positional relationship comparison function 331 determines, as the pair of the reference second feature region (A1-A2), the second feature region whose slope has the smallest difference from that of the reference region, has the same sign of slope, and is composed only of upper end pixels of first feature regions. If a second feature region that satisfies the above conditions cannot be extracted, the positional relationship comparison function 331 excludes the reference second feature region (A1-A2) from the three-dimensional object candidates.

  On the other hand, when the second feature area that satisfies the above conditions can be extracted, the positional relationship comparison function 331 compares the positions of the pixels constituting each second feature area.

An example will be described with reference to FIG. 16 in which the positional relationship comparison function 331 extracts a second feature region (B1-B2) whose slope has the smallest difference from that of the second feature region (A1-A2), whose slope has the same sign, and which is composed only of upper end pixels of first feature regions.

As shown in FIG. 16, when the second feature region (B1-B2) paired with the second feature region (A1-A2) has been extracted, the coordinates of the pixels constituting the second feature region (A1-A2) are, in order of increasing X coordinate, A1 (Xa1, Ya1) and A2 (Xa2, Ya2), and the coordinates of the pixels constituting the second feature region (B1-B2) are, in order of increasing X coordinate, B1 (Xb1, Yb1) and B2 (Xb2, Yb2).

First, when the second feature region (A1-A2) and the second feature region (B1-B2) do not intersect, that is, when (Yb1 > Ya1) and (Yb2 > Ya2) are satisfied, the positional relationship comparison function 331 determines the second feature region (A1-A2) and the second feature region (B1-B2) as a pair of second feature regions.

Independently of this condition, or in addition to this condition, when the distance between the centers of gravity of the line segments of the second feature region (A1-A2) and the second feature region (B1-B2) is smaller than a certain threshold Tm, that is, when {(Xa1 + Xa2)/2 − (Xb1 + Xb2)/2}² + {(Ya1 + Ya2)/2 − (Yb1 + Yb2)/2}² < Tm is satisfied, the positional relationship comparison function 331 determines the second feature region (A1-A2) and the second feature region (B1-B2) as a pair of second feature regions.
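A condensed sketch of this pairing step follows, assuming each second feature region is represented by its two end points and a precomputed slope; it applies only the same-sign slope selection and the centroid-distance condition above, and the data and names are illustrative.

```python
def pair_second_feature_regions(bottom_region, top_candidates, tm):
    """bottom_region / candidates: dicts with 'p1', 'p2' ((x, y) tuples) and 'slope'.
    Returns the top-edge region paired with the bottom-edge reference, or None."""
    a1, a2 = bottom_region['p1'], bottom_region['p2']
    best, best_diff = None, float('inf')
    for cand in top_candidates:
        diff = abs(cand['slope'] - bottom_region['slope'])
        same_sign = cand['slope'] * bottom_region['slope'] > 0
        if same_sign and diff < best_diff:
            best, best_diff = cand, diff
    if best is None:
        return None
    b1, b2 = best['p1'], best['p2']
    # Centroid-distance condition from the text:
    # {(Xa1+Xa2)/2 - (Xb1+Xb2)/2}^2 + {(Ya1+Ya2)/2 - (Yb1+Yb2)/2}^2 < Tm
    dx = (a1[0] + a2[0]) / 2 - (b1[0] + b2[0]) / 2
    dy = (a1[1] + a2[1]) / 2 - (b1[1] + b2[1]) / 2
    return best if dx * dx + dy * dy < tm else None

ref = {'p1': (50, 320), 'p2': (260, 200), 'slope': -0.57}
cands = [{'p1': (60, 290), 'p2': (255, 185), 'slope': -0.54},
         {'p1': (30, 100), 'p2': (300, 260), 'slope': 0.59}]
print(pair_second_feature_regions(ref, cands, tm=5000) is cands[0])   # True
```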

Then, when the first feature region is located between the paired second feature regions, the determination function 33 determines that the object corresponding to the region including the first feature region and the second feature regions is a three-dimensional object.

Next, the speed comparison function 332 will be described.

The speed comparison function 332 compares the movement speeds of a plurality of first feature areas located between the second feature areas. Then, it is determined whether or not the moving speed of the pixels belonging to the first feature area located on the lower side of the image is relatively faster than the moving speed of the pixels belonging to the first feature area located on the upper side of the image.

The speed comparison function 332 compares the moving speeds of the first feature regions existing between the paired second feature regions determined by the positional relationship comparison function 331, and determines whether they have the feature that the moving speed is relatively fast on the near side of the camera 10, that is, in the region with a large Y coordinate (the lower region of the image), and relatively slow on the side away from the camera 10, that is, in the region with a small Y coordinate (the upper region of the image).

FIG. 17 shows a second feature region (A1-A2) composed of the lower end points of first feature regions and a second feature region (B1-B2) composed of their upper end points. Of A1 and B1, the coordinates of A1, which has the larger X coordinate, are defined as (X1, Y11); of A2 and B2, the coordinates of B2, which has the smaller X coordinate, are defined as (X2, Y21). The coordinates of the point whose X coordinate is X1 on the second feature region B, which includes B1 having the smaller X coordinate of A1 and B1, are defined as (X1, Y12). Similarly, the coordinates of the point whose X coordinate is X2 on the second feature region A, which includes A2 having the larger X coordinate of A2 and B2, are defined as (X2, Y22). It is then searched whether the pixels located between (X1, Y11) and (X1, Y12) have movement information, and if movement information exists, the average value Vx1 of the moving speed is obtained.

On the other hand, when there is no speed information between (X1, Y11) and (X1, Y12), the Y coordinate of the second feature region A at the X coordinate (X1 + 1) is set to (Yn + 1), the Y coordinate of the second feature region B at the X coordinate (X1 + 1) is set to (Ym + 1), the pixels in the region sandwiched between the second feature regions A and B are searched sequentially in the vertical direction, and the average value of the moving speed is calculated.

Similarly, it is searched whether there is pixel movement information in the Y direction between (X2, Y21) and (X2, Y22), and if there is movement information, the average value Vx2 of the moving speed is obtained. On the other hand, when there is no movement information, the Y coordinate of the second feature region A at the X coordinate (X2 − 1) is set to (Yn2 + 1), the Y coordinate of the second feature region B at the X coordinate (X2 − 1) is set to (Ym2 + 1), the pixels in the region sandwiched between the second feature regions A and B are searched sequentially in the vertical direction, and the average value of the moving speed is calculated.

When Vx1 and Vx2 cannot both be calculated by the speed comparison function 332, the determination function 33 determines that the pair of second feature regions A and B is not a three-dimensional object.

The determination function 33 also determines that the pair of the second feature areas A and B is not a three-dimensional object even when the speed comparison function 332 determines that the difference between Vx1 and Vx2 is less than the threshold value Ts.

Furthermore, when the absolute value of the difference (Xf − Xn) between the X coordinate Xn at which Vx1 is calculated and the X coordinate Xf at which Vx2 is calculated is smaller than the threshold Txd, the determination function 33 determines that the pair of second feature regions A and B is not a three-dimensional object.

When Vx1 and Vx2 satisfy (Vx1 − Vx2) > Ts and |Xf − Xn| > Txd, it is determined that the pixels included in the second feature regions A and B and the pixels included in the first feature regions existing between them correspond to the same three-dimensional object.

That is, in the determination function 33, when the speed comparison function 332 compares the moving speeds of the plurality of first feature regions located between the second feature regions and determines that the moving speed of the pixels belonging to the first feature region located on the lower side of the image is relatively faster than the moving speed of the pixels belonging to the first feature region located on the upper side of the image, the object corresponding to the region including the first feature regions and the second feature regions is determined to be a three-dimensional object.
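Finally, a minimal sketch of the decision expressed by these conditions, assuming the average speeds Vx1 (near side) and Vx2 (far side) and their sample X coordinates Xn and Xf have already been computed; the threshold names follow the text, the sample values are invented.

```python
def is_three_dimensional(vx1, vx2, xn, xf, ts, txd):
    """Return True when the near-side average speed exceeds the far-side speed by
    more than Ts and the two sample columns are separated by more than Txd pixels."""
    if vx1 is None or vx2 is None:          # either average could not be computed
        return False
    return (vx1 - vx2) > ts and abs(xf - xn) > txd

# Near the camera a curb edge moves faster on the image than it does far away:
print(is_three_dimensional(vx1=4.0, vx2=1.5, xn=420, xf=180, ts=1.0, txd=50))  # True
# A flat marking shows no such speed gradient between the two regions:
print(is_three_dimensional(vx1=2.0, vx2=1.8, xn=420, xf=180, ts=1.0, txd=50))  # False
```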

  Next, the procedure of the object detection process of the object detection device 100 of this embodiment will be described based on the flowchart shown in FIG.

  When an ignition switch (not shown) is turned on and the in-vehicle device 1000 is activated, this processing program is executed.

  First, the camera 10 captures the surroundings of the vehicle at a predetermined cycle, and the captured images are output to the image memory 11 or the movement information calculation unit 20 (S101). The image memory 11 records a captured image as necessary, and accepts reading of the movement information calculation unit 20.

  The feature extraction unit 21 of the movement information calculation unit 20 performs edge extraction processing on the image, extracts the outline of the object existing in the captured image as an edge image, and normalizes the edge image (S102).

  The calculating unit 22 calculates the movement information of the edge, and creates a moving image (see FIG. 5) in which the calculated movement information is represented by a predetermined gradation (S103).

  In step S104, the first feature region extraction function 31 extracts a first feature region in which pixels having common movement information are continuous in the vertical direction based on the calculated moving image (S104). The feature that pixels having common movement information are continuous in the vertical direction is a feature of an image corresponding to a three-dimensional object.

Subsequently, in step S105, the first feature region extraction function 31 confirms whether the extraction processing has been performed for all pixels. If the extraction processing is completed, the process proceeds to step S106; if the extraction processing has not been completed for all pixels, the process returns to S104 (S105).

In step S106, the second feature region extraction function 32 performs processing of extracting the upper end points and the lower end points from the vertically continuous components of the first feature regions extracted in S104 and storing them in the end pixel table (S106).

In step S107, the end pixel table storing the upper end points and lower end points of the first feature regions (vertical continuous components) extracted in step S106 is referred to, and grouping processing is performed on the upper end points and lower end points belonging to the same line segment (S107).

  In step S108, it is confirmed whether or not the grouping process has been completed for all end points. If completed, the process proceeds to step S109, and if not completed, the process proceeds to step S107 (S108).

  In the subsequent step S109, the second feature region extraction function 32 performs a process of obtaining a line segment for each group with respect to the end points grouped in S107 and extracting the second feature region.

In step S110, the second feature region extraction function 32 checks whether the second feature region has been extracted for all the grouped end points. If the second feature regions have been extracted for all the grouped end points, the process proceeds to step S111; if the extraction processing has not been completed, the process returns to step S109 (S110).

In step S111, the second feature region extraction function 32 determines, from among the second feature regions extracted in S109, candidates for the second feature regions that pass through or near the upper end pixel or the lower end pixel of a first feature region having the feature of a three-dimensional object (S111). The second feature region extraction function 32 extracts second feature regions that sandwich the first feature region from the upper end side and the lower end side. That is, the first feature region is located between the second feature regions.

Subsequently, in step S112, the positional relationship comparison function 331 of the determination function 33 checks whether all the pairs of second feature regions passing through the upper and lower ends of a three-dimensional object have been extracted. If all pairs have been extracted, the process proceeds to step S113; if not, the process returns to step S111 (S112).

In step S113, when a pair of second feature regions sandwiching the first feature region extracted in S111 from the upper end side and the lower end side has been extracted, the speed comparison function 332 of the determination function 33 acquires the moving speed of the first feature regions existing in the region between the second feature regions (S113).

  In step S114, the speed comparison function 332 determines whether the moving speeds of all the first feature areas existing between the second feature areas have been acquired. If all the moving speeds have been acquired, the process proceeds to step S115, and if not, the process proceeds to step S113 (S114).

Further, in step S115, the speed comparison function 332 compares the moving speeds of the plurality of first feature regions located between the second feature regions extracted in S113. When the speed comparison function 332 derives the comparison result that the moving speed of the pixels belonging to the first feature region located on the lower side of the image is relatively faster than the moving speed of the pixels belonging to the first feature region located on the upper side of the image, the determination function 33 determines that the object corresponding to the region including the first feature regions and the second feature regions is a three-dimensional object (S115).

That is, the determination function 33 uses the feature that the moving speed of the first feature region is relatively high at positions where the Y coordinate of the image is large and relatively low at positions where the Y coordinate is small. If this condition is satisfied, it is determined that the first feature region and the second feature regions are pixels corresponding to the same three-dimensional object.

  Further, in step S116, it is determined whether or not the ignition switch of the vehicle is turned off. If the ignition switch is turned off, the process proceeds to S117 and the object detection process is terminated. On the other hand, if the ignition switch is ON, the process returns to S101 for processing.

  Since the object detection apparatus 100 of the present embodiment is configured and operates as described above, the following effects can be obtained.

In the object detection apparatus 100 according to the present embodiment, when a first feature region in which pixels having common movement information are continuous in the vertical direction is located between second feature regions in which pixels having a common moving speed are arranged along the direction of a predetermined angle, the object corresponding to the region including the first feature region and the second feature regions is determined to be a three-dimensional object. This makes it possible to prevent a three-dimensional object that is flat and low in height from being erroneously detected as a flat object.

In other words, according to the present invention, attention is paid not only to the feature that "pixels having common movement information continue in the vertical direction", which is observed in the moving speed information of a three-dimensional object, but also to the positional relationship in which the first feature region having this feature exists between linear regions in which pixels having a common moving speed continue along the direction of a predetermined angle, that is, to the fact that the first feature region and the second feature regions correspond to the contour of the three-dimensional object. The three-dimensional object can therefore be accurately determined. In particular, it is possible to accurately detect a three-dimensional object that is flat and low in height and is easily misdetected as a flat object.

In addition, the second feature region, in which pixels with a common moving speed are continuous along the direction of a predetermined angle, corresponds to the upper surface of a planar object or of a low three-dimensional object extracted on the basis of the moving speed observed with the behavior of the vehicle. By using the characteristic that the region includes such a second feature region together with a first feature region carrying the feature of a three-dimensional object, and that the first feature region exists between the second feature regions, the three-dimensional object can be detected accurately.

In particular, the second feature region in which pixels with a common moving speed are continuous along the direction of a predetermined angle is a feature that also corresponds to a white line on the road. However, because the determination uses the positional relationship with the first feature region carrying the feature of a three-dimensional object, a curbstone on the road or a stepped sidewalk can be detected accurately as a three-dimensional object without being erroneously detected as a flat object.

Furthermore, in the present embodiment, the moving speeds of the plurality of first feature regions located between the second feature regions are compared, and when the moving speed of the pixels belonging to a first feature region located lower in the image is relatively faster than the moving speed of the pixels belonging to a first feature region located higher in the image, the object corresponding to the region including the first feature regions and the second feature regions is determined to be a three-dimensional object, so the object can be detected accurately. Within the same object, the property that pixels located lower in the image move relatively faster than pixels located higher in the image is a characteristic of a three-dimensional object. In the present embodiment, a region including pixels that show this characteristic observed in the moving speed of a three-dimensional object is determined to be a three-dimensional object, so the three-dimensional object can be detected accurately.

In addition, when the second feature regions are extracted, the upper-end pixels and the lower-end pixels of the first feature regions (vertical runs of pixels with common movement information) are extracted, and a region including the upper-end pixels and a region including the lower-end pixels are each extracted as a second feature region. In this way, a region in which the first feature regions exist between the extracted second feature regions can be obtained. As a result, a region including first feature regions and second feature regions in the specific positional relationship in which the first feature regions exist between the second feature regions can be extracted accurately, and the three-dimensional object can be extracted accurately.
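
Purely as an illustration of this grouping (the representation of the first feature regions and the use of a least-squares line fit are assumptions, not taken from the disclosure), the upper-end and lower-end pixels could be turned into the two candidate second feature regions as follows.

    import numpy as np

    def second_feature_lines(first_regions):
        # first_regions: list of (x, y_top, y_bottom) giving, for each vertical
        # run of pixels with common movement information, its column and the
        # image rows of its upper-end and lower-end pixels.
        # Returns (slope, intercept) of the line through the upper-end pixels
        # and of the line through the lower-end pixels, or None if too few runs.
        if len(first_regions) < 2:
            return None
        xs = np.array([x for x, _, _ in first_regions], dtype=float)
        y_top = np.array([yt for _, yt, _ in first_regions], dtype=float)
        y_bot = np.array([yb for _, _, yb in first_regions], dtype=float)
        upper = np.polyfit(xs, y_top, 1)  # candidate upper second feature region
        lower = np.polyfit(xs, y_bot, 1)  # candidate lower second feature region
        return tuple(upper), tuple(lower)

By construction, the first feature regions then lie between the two fitted lines, which is exactly the positional relationship used in the determination.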

Further, in the present embodiment, in the determination of the three-dimensional object, when the first feature region is located between second feature regions having the same inclination, the object corresponding to the region including the first feature region and the second feature regions is determined to be a three-dimensional object. Thus, when the characteristic positional relationship exists in which a first feature region lies between two second feature regions that are parallel to each other, these feature points are determined to correspond to a three-dimensional object, so the three-dimensional object can be detected accurately.

  In particular, the positional relationship in which a first feature region exists between two or more second feature regions that are parallel to each other is highly likely to correspond to a curb of the road or the contour of a sidewalk, so a three-dimensional object with a low height can be detected accurately without being erroneously detected as a flat object.

  Then, in calculating the movement information of the pixels corresponding to the object, the movement information calculation unit 20 counts up the count value of the pixel at the position where a feature portion corresponding to the object is detected, based on the information of the images of the object captured by the camera 10, and calculates the movement information of the feature portion based on the slope of the count values, so the movement information of the pixels can be obtained by a simple method. In addition, such image processing makes it easy to derive the three-dimensional feature point called the first feature region, in which pixels with common movement information are continuous in the vertical direction.
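
  As a rough sketch of the count-up and slope calculation (the array layout, the window size, and the helper names are assumptions for illustration only), the counter of a pixel is incremented while an edge stays on it and reset otherwise, and the spatial slope of the resulting count ramp gives the dwell time per pixel, whose inverse is the speed in pixels per frame.

    import numpy as np

    def update_counts(counts, edge_mask):
        # Increment the count of every pixel where an edge is currently detected
        # and reset it elsewhere, so the count grows while an edge dwells on
        # a pixel.
        return np.where(edge_mask, counts + 1, 0)

    def speed_from_counts(counts, row, x_center, window=3):
        # Fit a line to the count values around x_center on one image row.
        # The slope of the count ramp is the dwell time per pixel (frames per
        # pixel); its inverse is the horizontal speed in pixels per frame.
        xs = np.arange(x_center - window, x_center + window + 1)
        ys = counts[row, xs].astype(float)
        slope, _ = np.polyfit(xs, ys, 1)
        if abs(slope) < 1e-6:
            return 0.0
        # Only the magnitude is returned; the sign of the slope encodes the
        # movement direction under the chosen scanning convention.
        return 1.0 / abs(slope)

  A fast edge leaves only a shallow count ramp (short dwell time per pixel), while a slow edge builds a steep ramp, which is why the inverse of the slope serves as a speed estimate.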

  Since the object detection device 100 for a vehicle according to the present embodiment determines the attribute of the object based on the movement information acquired from the images of the camera 10, it can determine the attribute of the object without depending on distance information. In addition, since a configuration for obtaining distance information, such as a stereo camera, is not required, the object detection apparatus 100 can be simplified.

  In addition, since the vehicle object detection device 100 according to the present embodiment determines the attribute of the object based on the movement information acquired from the images of the monocular camera 10, erroneous distance measurement caused by mismatching in image processing does not occur, and the attribute of the object can be determined with high accuracy. That is, in a method that determines the attribute of an object based on distance information, for example a method based on a stereo image, incorrect distance information may be given due to mismatching, and if the distance information contains an error, an erroneous determination occurs in which a two-dimensional object is mistakenly judged to be a three-dimensional object. The object detection device 100 of the present embodiment determines the attribute of the object without depending on distance information, so it can reduce erroneous detection based on incorrect distance information and perform image processing that yields a highly accurate determination.

<< Second Embodiment >>
The second embodiment will be described below based on the drawings. FIG. 19 shows a block diagram of this embodiment.

  The present embodiment is characterized by the method of extracting the second feature region. The second feature region extraction function 32 of the present embodiment extracts, as a second feature region (straight-line component), a region in which pixels corresponding to the contour of an object in the image are continuous along the direction of a predetermined angle on the image. That is, in the second embodiment, the second feature region in which pixels are continuous along the direction of a predetermined angle in the image is extracted not on the basis of the moving speed but on the basis of the image feature amount extracted by the feature extraction unit 21.

  Although the method is not particularly limited, straight-line components are extracted from the edge information of the object extracted by the feature extraction unit 21 using, for example, a Hough transform, and each extracted straight-line component is used as a line segment of a second feature region (a candidate line segment of a three-dimensional object).
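
  For illustration only, such an extraction could be written with OpenCV's probabilistic Hough transform as below; the threshold, minimum length, and gap values are assumptions, not values taken from the disclosure.

    import cv2
    import numpy as np

    def extract_line_segments(edge_image):
        # edge_image: binary (0/255) edge image such as the one produced by the
        # feature extraction step. Each returned (x1, y1, x2, y2) segment is a
        # candidate line segment of a second feature region.
        lines = cv2.HoughLinesP(edge_image, 1, np.pi / 180, threshold=50,
                                minLineLength=40, maxLineGap=10)
        if lines is None:
            return []
        return [tuple(int(v) for v in seg[0]) for seg in lines]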

  Unlike the first embodiment, no information is attached indicating whether a second feature region is a set of upper-end pixels or a set of lower-end pixels of the first feature regions, so an object cannot be detected on the basis of the upper-end and lower-end pixels of the first feature regions. That is, the second feature region passing through the upper-end pixels of a first feature region and the second feature region passing through its lower-end pixels cannot, by themselves, be determined to be a pair corresponding to one object.

  For this reason, the positional relationship comparison function 331 of the present embodiment extracts a pair of second feature regions that sandwich a first feature region by using a condition based on the inclination of the second feature regions, a condition on the distance between the centers of gravity of the second feature regions (line segments), and the condition that the second feature regions do not intersect each other. The method for extracting the pair of second feature regions is not particularly limited, and the method described in the first embodiment can be used.
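
  A rough sketch of such a pairing test is given below; the three checks mirror the conditions listed above (similar inclination, nearby centers of gravity, no mutual intersection), while the thresholds and helper names are illustrative assumptions.

    import numpy as np

    def is_candidate_pair(seg_a, seg_b, max_slope_diff=0.1, max_centroid_dist=80.0):
        # seg_a, seg_b: line segments given as (x1, y1, x2, y2).
        def slope(s):
            x1, y1, x2, y2 = s
            return (y2 - y1) / (x2 - x1 + 1e-9)

        def centroid(s):
            x1, y1, x2, y2 = s
            return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

        def intersects(p, q):
            # Standard counter-clockwise test for proper segment intersection
            # (collinear corner cases are ignored in this sketch).
            def ccw(a, b, c):
                return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
            a, b = (p[0], p[1]), (p[2], p[3])
            c, d = (q[0], q[1]), (q[2], q[3])
            return ccw(a, c, d) != ccw(b, c, d) and ccw(a, b, c) != ccw(a, b, d)

        if abs(slope(seg_a) - slope(seg_b)) > max_slope_diff:
            return False
        if np.linalg.norm(centroid(seg_a) - centroid(seg_b)) > max_centroid_dist:
            return False
        return not intersects(seg_a, seg_b)

  Each segment pair that passes the test is then treated as a candidate upper and lower contour of one three-dimensional object, and the speed comparison of the first embodiment is applied to the first feature regions between them.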

  Next, the control procedure of the object detection apparatus 100 ′ of the second embodiment will be described based on the flowchart of FIG. 20.

This processing is executed as a program that is activated when an ignition switch (not shown) is turned on. The processes from S101 to S103 shown in FIG. 20 are common to the control procedure of the first embodiment shown in FIG. 18.

In S204 following S103, the second feature region extraction function 32 extracts straight-line components, using the Hough transform, from the edge image that was extracted in step S102 and temporarily stored in the image memory 11, and extracts the range along each straight-line component where edges exist as a second feature region.

In step S205, it is determined whether or not the second feature region extraction processing has been completed for all the straight-line components. When the extraction processing has been completed, the process proceeds to step S206; when it has not been completed, the process returns to S204.

Subsequently, in step S206, the positional relationship comparison function 331 extracts, from the extracted second feature regions, pairs of second feature regions corresponding to the upper end and the lower end of a three-dimensional object.

Further, in step S207, it is determined whether or not the pair extraction processing has been completed for all the second feature regions. If the processing for all the second feature regions has been completed, the process proceeds to step S208; if not, the process returns to step S206.

In the subsequent step S208, the speed comparison function 332 acquires the moving speeds of the first feature regions located between the pair of line segments of the second feature regions corresponding to the upper end and the lower end of the three-dimensional object extracted in S206. The temporarily stored movement image is referred to, and the moving speeds existing between the paired second feature regions are acquired.
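
As a hypothetical illustration of this step (the representation of the first feature regions as points with speeds and of the second feature regions as lines in slope-intercept form is assumed), the speeds between a pair of line segments could be collected as follows.

    def regions_between_lines(first_regions, upper_line, lower_line):
        # first_regions: list of (x, y, speed) for the first feature regions.
        # upper_line, lower_line: (slope, intercept) of the paired second
        # feature regions; image Y grows downward, so the upper contour has
        # the smaller Y value at a given x.
        selected = []
        for x, y, speed in first_regions:
            y_upper = upper_line[0] * x + upper_line[1]
            y_lower = lower_line[0] * x + lower_line[1]
            if y_upper <= y <= y_lower:
                selected.append((x, y, speed))
        return selected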

In step S209, it is confirmed whether or not the moving speeds existing between the line segments have been acquired for the pairs of second feature regions at the upper and lower ends of all the three-dimensional objects. If they have been acquired, the process proceeds to step S210; if not, the process returns to step S208.

The processing of steps S210 to S212 is the same as that of S110 to S112 shown in FIG. 18.

According to the object detecting apparatus 100 ′ configured and operated as described above, the following functions and effects can be obtained in addition to the functions and effects in the first embodiment.

When extracting the second feature region, the object detection device 100 ′ of the present embodiment can extract, from the edge image obtained by the feature extraction unit 21 and without using movement information, a second feature region in which pixels are continuous along the direction of a predetermined angle. Therefore, the processing speed can be increased when this extraction is operated in combination with logic that detects a white line based on another feature region.

  In addition, by using both methods, namely obtaining the second feature region based on movement information as in the first embodiment and obtaining the second feature region based on the feature amount of the edge image as in the second embodiment, a second feature region that is highly robust to the environment can be extracted, and as a result the object can be detected accurately.

  The embodiment described above is provided to facilitate understanding of the present invention and is not intended to limit the present invention. Therefore, each element disclosed in the above embodiment is intended to include all design changes and equivalents belonging to the technical scope of the present invention.

  In this specification, as one aspect of the object detection apparatus 100, the camera 10 is described as an example of the imaging means, the movement information calculation unit 20 as an example of the movement information calculation means, and the detection unit 30 as an example of the detection means.

  The detection unit 30 described in this specification includes the first feature region extraction function 31 as an example of the first feature region extraction unit, the second feature region extraction function 32 as an example of the second feature region extraction unit, and the determination function 33 as an example of the determination unit.

  Furthermore, although the embodiments of the present invention have been described in detail with reference to the drawings, the embodiments are merely examples of the present invention, and the present invention is not limited only to the configurations of the embodiments. Therefore, it goes without saying that the present invention includes design changes such as the following without departing from the scope of the present invention.

  For example, the block configuration is not limited to the one shown in the above embodiment, and other configurations having equivalent functions may be used.

  The camera mounting position is not limited to the position described in the embodiment; it is sufficient that the optical axis of the camera faces the forward direction (Z direction) of the host vehicle and that the horizontal axis and the vertical axis of the imaging surface are set to be substantially parallel and substantially perpendicular to the road surface, respectively.

In addition, in normalizing the detected edge width, the edge width is not limited to three pixels, and an arbitrary number of pixels can be set. In this case, since the pixel at the center of the edge is used in the subsequent processing, the number of pixels of the edge width is desirably an odd number.

Also, the number of strip regions set by dividing the xy plane is not limited to that shown in the above embodiment, and the plane can be divided into an arbitrary number of regions of arbitrary width.

Moreover, although the above embodiment describes an example in which the object detection apparatus 100 is mounted on a host vehicle traveling on a road, it may be mounted on another moving body.

Brief description of the drawings:
A diagram showing an example of the block configuration of the vehicle-mounted apparatus 1000 including the object detection apparatus 100 of the present embodiment.
(A) and (B) are diagrams showing mounting examples of the camera 10.
An example of an image ahead of the vehicle captured by the camera 10.
(a) to (f) are diagrams for explaining the calculation process of the moving speed.
A diagram showing an example of the movement image.
A diagram for explaining the motion, on the image, of a planar object (white line) when pitching behavior occurs in the vehicle.
A diagram for explaining an example of the method of calculating the first feature region.
A diagram showing an example of the end pixel table.
A diagram showing an example of the sort table.
A diagram for explaining the conditions of the end pixels grouped into a common second feature region.
A diagram showing an example of the second feature region table.
A diagram for explaining the process of obtaining a second feature region from the set of end pixels given a common group ID.
A diagram for explaining the imaging region when a three-dimensional object such as a curbstone exists on the traveling path.
A diagram showing an example of a captured image including a curb and a sidewalk on the left side of the road on which the vehicle travels.
FIG. 15A shows a model of a three-dimensional object extending along the traveling path, and FIG. 15B is a diagram for explaining straight-line components extracted from the contour of the three-dimensional object.
A diagram for explaining the method of judging, from the extracted second feature regions, pairs of second feature regions that constitute the same three-dimensional object.
A diagram for explaining the method of comparing the moving speeds of a plurality of first feature regions located between second feature regions.
A flowchart showing the control procedure of the object detection apparatus of the present embodiment.
A diagram showing an example of the block configuration of the vehicle-mounted apparatus 1000 including the object detection apparatus of the second embodiment.
A flowchart showing the control procedure of the object detection apparatus of the second embodiment.

Explanation of symbols

1000 ... Vehicle-mounted apparatus
100 ... Object detection apparatus
10 ... Camera
11 ... Image memory
20 ... Movement information calculation unit
21 ... Feature extraction unit
22 ... Calculation unit
30 ... Detection unit
31 ... First feature region extraction function
32 ... Second feature region extraction function
33 ... Determination function
331 ... Positional relationship comparison function
332 ... Speed comparison function
200 ... Vehicle controller
300 ... Output device
400 ... Driving support device

Claims (7)

  1. An object detection device comprising:
    imaging means mounted on a vehicle;
    movement information calculation means for calculating movement information including a movement direction and/or a movement speed of pixels corresponding to an object in an image, based on information of the image captured by the imaging means; and
    detection means for detecting the object based on the movement information,
    wherein the detection means includes:
    a first feature region extraction unit that extracts, based on the movement information, a first feature region in which pixels with common movement information are continuous in a vertical direction;
    a second feature region extraction unit that extracts, based on the movement information, a second feature region in which pixels with a common moving speed are continuous along a direction of a predetermined angle; and
    a determination unit that determines that the object corresponding to a region including the first feature region and the second feature regions is a three-dimensional object when the first feature region is located between the second feature regions.
  2. The object detection device according to claim 1,
    wherein the determination unit compares the moving speeds of a plurality of first feature regions located between the second feature regions, and determines that the object corresponding to the region including the first feature regions and the second feature regions is a three-dimensional object when the moving speed of pixels belonging to a first feature region located lower in the image is relatively faster than the moving speed of pixels belonging to a first feature region located higher in the image.
  3. The object detection device according to claim 1 or 2,
    wherein the second feature region extraction unit extracts the upper-end pixels and the lower-end pixels of the pixels that are continuous in the vertical direction with common movement information, extracted by the first feature region extraction unit, and extracts a region including the extracted upper-end pixels and a region including the extracted lower-end pixels as second feature regions, respectively.
  4. The vehicle image processing apparatus according to claim 1 or 2,
    wherein the second feature region extraction unit extracts, as the second feature region, a region in which pixels corresponding to the contour of an object in the image are continuous along the direction of a predetermined angle on the image.
  5. The object detection device according to any one of claims 1 to 4,
    wherein the determination unit determines that the object corresponding to the region including the first feature region and the second feature regions is a three-dimensional object when the first feature region is located between second feature regions having the same inclination.
  6. The vehicle image processing device according to any one of claims 1 to 5,
    wherein the movement information calculation unit extracts a feature portion corresponding to the object based on information of each image of the object captured at a predetermined period by the imaging means, counts up the count value of the pixel corresponding to the position of the extracted feature portion, and calculates the movement information of the pixel corresponding to the feature portion based on the slope of the count values.
  7. An object detection method comprising:
    calculating, based on information of a captured image of an area ahead of a vehicle, movement information including a movement direction and/or a movement speed of pixels corresponding to an object in the image; and
    determining that the object corresponding to a region including a first feature region and second feature regions is a three-dimensional object when the first feature region, which is obtained based on the movement information and in which pixels with common movement information are continuous in a vertical direction, is located between second feature regions, which are obtained based on the movement information and in which pixels with a common moving speed are continuous along a direction of a predetermined angle.
JP2008179346A 2008-07-09 2008-07-09 Object detection device and object detection method Pending JP2010020476A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008179346A JP2010020476A (en) 2008-07-09 2008-07-09 Object detection device and object detection method

Publications (1)

Publication Number Publication Date
JP2010020476A true JP2010020476A (en) 2010-01-28

Family

ID=41705312

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008179346A Pending JP2010020476A (en) 2008-07-09 2008-07-09 Object detection device and object detection method

Country Status (1)

Country Link
JP (1) JP2010020476A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011248638A (en) * 2010-05-27 2011-12-08 Nissan Motor Co Ltd Road environment information acquiring apparatus and method of the same
WO2012145819A1 (en) * 2011-04-25 2012-11-01 Magna International Inc. Image processing method for detecting objects using relative motion
US10043082B2 (en) 2011-04-25 2018-08-07 Magna Electronics Inc. Image processing method for detecting objects using relative motion
US10452931B2 (en) 2011-04-25 2019-10-22 Magna Electronics Inc. Processing method for distinguishing a three dimensional object from a two dimensional object using a vehicular system
US9547795B2 (en) 2011-04-25 2017-01-17 Magna Electronics Inc. Image processing method for detecting objects using relative motion
WO2013190719A1 (en) * 2012-06-19 2013-12-27 トヨタ自動車株式会社 Roadside object detection device
WO2014033955A1 (en) * 2012-09-03 2014-03-06 トヨタ自動車株式会社 Speed calculating device and speed calculating method, and collision determination device
JPWO2014033955A1 (en) * 2012-09-03 2016-08-08 トヨタ自動車株式会社 Speed calculation device, speed calculation method, and collision determination device
CN104620297B (en) * 2012-09-03 2017-03-22 丰田自动车株式会社 Speed calculating device and speed calculating method, and collision determination device
CN104620297A (en) * 2012-09-03 2015-05-13 丰田自动车株式会社 Speed calculating device and speed calculating method, and collision determination device
WO2019031137A1 (en) * 2017-08-07 2019-02-14 日立オートモティブシステムズ株式会社 Roadside object detection device, roadside object detection method, and roadside object detection system
