WO2017003257A1 - Device and method for recognizing a road surface state - Google Patents

Device and method for recognizing a road surface state

Info

Publication number
WO2017003257A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
polarized
road surface
candidate
polarized image
Prior art date
Application number
PCT/KR2016/007122
Other languages
English (en)
Korean (ko)
Inventor
이승래
송원석
Original Assignee
이승래
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 이승래
Publication of WO2017003257A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image

Definitions

  • The present invention relates to an apparatus and method for recognizing a road surface state. More particularly, it relates to an apparatus and method that correct polarized images photographed at different positions so that they correspond to images photographed at the same position, and that recognize the road surface state using the polarization characteristics of the corrected polarized images.
  • An apparatus and method for recognizing a road surface state are disclosed.
  • Most road surface state recognition methods recognize the road surface state from images photographed by a camera at a fixed position rather than by a camera installed in a moving vehicle. A vehicle, however, is stationary only in some cases and is usually traveling at some speed. Moreover, a driver usually wants information about the changing road surface condition while driving, and there is little need to inform the driver of the road surface condition while the vehicle is stopped.
  • An object of the present invention is to provide an apparatus and method for recognizing a road surface state using polarized images photographed from a moving vehicle.
  • Another object of the present invention is to provide a road surface state recognition apparatus and method that can determine the road surface state more accurately by correcting the photographing positions of polarized images photographed at different positions through one polarized image photographing unit.
  • Another object of the present invention is to provide a road surface state recognition apparatus and method that can determine the road surface state more accurately by correcting the photographing positions of polarized images photographed through two polarized image photographing units spaced apart from each other.
  • The road surface state recognition apparatus includes a position correction unit that corrects a first polarized image and a second polarized image of the road surface so that they correspond to polarized images photographed at the same position, and a road surface state determination unit that determines the state of the road surface from the first and second polarized images corrected by the position correction unit.
  • One of the first polarized image and the second polarized image is a vertically polarized image, and the other is a horizontally polarized image.
  • The road surface state recognition apparatus may further comprise a polarized image photographing unit for capturing the first polarized image and the second polarized image, and the first and second polarized images are images photographed at different positions using this polarized image photographing unit.
  • The polarized image photographing unit captures the first polarized image and the second polarized image using at least one of a physically rotatable polarizer and a polarizer that changes the polarization direction electrically.
  • The apparatus may further comprise a first polarized image photographing unit for capturing the first polarized image and a second polarized image photographing unit, disposed so as to be spaced apart from the first polarized image photographing unit, for capturing the second polarized image.
  • The position correction unit may define one of the first polarized image and the second polarized image as a reference image and the other as a candidate image, generate, for each region of the reference image, a motion vector connecting that region to the region of candidate pixels representing the same position in the candidate image, and correct the photographing position of at least one of the reference image and the candidate image based on the motion vector.
  • The position correction unit calculates a cost function between a region of the reference image and regions of the candidate image, and generates a motion vector connecting the region of the reference image to the region of the candidate image that minimizes the value of the cost function.
  • The road surface state determination unit may determine the state of the road surface while excluding regions of the reference image for which the value of the cost function is greater than or equal to a threshold.
  • The cost function is determined based on a matching cost function, which is a function indicating the degree of similarity between the region of the reference image and a candidate region of the candidate image, and a spatial constraint function, which is a function indicating the degree of similarity between a candidate vector of the motion vector and the motion vectors around the candidate vector.
  • The cost function may further be determined based on a velocity constraint function, which is a function indicating the degree of similarity between a reference (default) motion vector, determined in consideration of the speed at which the road surface state recognition apparatus moves and the characteristics of the polarized image photographing unit that photographs the first and second polarized images, and a candidate vector of the motion vector.
  • The road surface state determination unit determines the road surface state while excluding regions of the reference image corresponding to motion vectors, among the motion vectors generated by the position correction unit, whose difference from the reference motion vector is greater than or equal to a threshold.
  • The position correction unit sets a search range centered on the reference motion vector, and the candidate region of the candidate image is a region within the search range of the candidate image.
  • The position correction unit may define one of the first polarized image and the second polarized image as a reference image and the other as a candidate image, generate, for each region of the reference image, a disparity vector connecting that region to the region of candidate pixels representing the same position in the candidate image, and correct the photographing position of one of the reference image and the candidate image based on the disparity vector.
  • The position correction unit calculates a cost function between a region of the reference image and regions of the candidate image, and generates a disparity vector connecting the region of the reference image to the region of the candidate image that minimizes the value of the cost function.
  • The cost function is determined based on a matching cost function, which is a function indicating the degree of similarity between the region of the reference image and a candidate region of the candidate image, and a spatial constraint function, which is a function indicating the degree of similarity between a candidate vector of the disparity vector and the disparity vectors around the candidate vector.
  • The cost function may further be determined based on a disparity constraint function, which is a function indicating the degree of similarity between a reference (default) disparity vector, determined in consideration of the distance between the first polarized image photographing unit capturing the first polarized image and the second polarized image photographing unit capturing the second polarized image and the characteristics of the first and second polarized image photographing units, and a candidate vector of the disparity vector.
  • A road surface state recognition method for solving the problems described above includes correcting the positions of a first polarized image and a second polarized image of the road surface so that they correspond to polarized images photographed at the same position, and determining the state of the road surface based on the corrected first and second polarized images.
  • The correcting of the positions includes defining one of the first polarized image and the second polarized image as a reference image and the other as a candidate image, generating, for each region of the reference image, a motion vector or disparity vector connecting that region to the region of candidate pixels representing the same position in the candidate image, and correcting the photographing position of one of the reference image and the candidate image based on the motion vector or disparity vector.
  • The computer-readable medium stores a set of instructions for correcting the positions of a first polarized image and a second polarized image of the road surface so that they correspond to polarized images photographed at the same position, and for determining the state of the road surface through the position-corrected first and second polarized images.
  • the road surface state may be determined using a polarized image captured by a driving vehicle.
  • the present invention can more accurately determine the road surface state by correcting the photographing positions of the polarized images photographed at different positions through one polarizing image photographing unit.
  • the present invention can more accurately determine the road surface state by correcting the photographing positions of the polarized images photographed through two polarized image photographing units spaced apart from each other.
  • FIG. 1 is a schematic block diagram of a road surface state recognition apparatus according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a road surface state recognition method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a road surface state recognition method according to another embodiment of the present invention.
  • FIG. 4 is a schematic view for explaining a road surface state recognition method according to another embodiment of the present invention.
  • FIGS. 5A to 6B are schematic diagrams of polarized images for explaining a road surface state recognition method according to another embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating a road surface state recognition method according to another embodiment of the present invention.
  • FIG. 8 is a schematic view for explaining a road surface state recognition method according to another embodiment of the present invention.
  • FIGS. 9A to 10B are schematic diagrams of polarized images for explaining a road surface state recognition method according to another embodiment of the present invention.
  • FIGS. 11 and 12 are schematic block diagrams of a road surface state recognition apparatus according to various embodiments of the present disclosure.
  • Combinations of the blocks in the accompanying block diagrams and of the steps in the flowcharts may be implemented by algorithms or computer program instructions realized in firmware, software, or hardware.
  • These algorithms or computer program instructions may be loaded into a processor of a general purpose computer, special purpose computer, or other programmable digital signal processing device, so that the instructions executed by the processor of the computer or other programmable data processing equipment create means for performing the functions described in each block of the block diagrams or each step of the flowcharts.
  • These algorithms or computer program instructions may also be stored in a computer-usable or computer-readable memory that can direct a computer or other programmable data processing equipment to function in a particular manner, such that the instructions stored in the computer-usable or computer-readable memory produce an article of manufacture containing instruction means for performing the functions described in each block of the block diagrams or each step of the flowcharts.
  • The computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps is performed on the computer or other programmable data processing equipment to produce a computer-implemented process; the instructions executed on the computer or other programmable data processing equipment then provide steps for performing the functions described in each block of the block diagrams and each step of the flowcharts.
  • In addition, each block or step may represent a part of a module, segment, or code that includes one or more executable instructions for executing the specified logical function(s).
  • In addition, the functions noted in the blocks or steps may occur out of order. For example, two blocks or steps shown in succession may in fact be executed substantially concurrently, or may sometimes be performed in the reverse order, depending on the functionality involved.
  • The features of the various embodiments of the present invention may be combined with each other, in part or in whole, and may be technically interlinked and operated in various ways, as can be understood by those skilled in the art; the embodiments may be implemented independently of each other or carried out together in combination.
  • the road surface state recognition apparatus 100 may include a position corrector 110 and a road surface state determiner 120.
  • The road surface state recognition apparatus 100 corrects a first polarized image and a second polarized image photographed at different positions and determines the state of the road surface using the corrected first and second polarized images.
  • The road surface state recognition apparatus 100 may be implemented in various types of electronic devices. For example, it may be implemented in the form of a black box or navigation device attached to a vehicle, or may be embedded in the vehicle. It may also be implemented as a separate electronic device, independent of a black box or navigation device, attached to the vehicle.
  • the functions of the position correcting unit 110 and the road surface state determining unit 120 will be described in more detail with reference to FIG. 2.
  • First, the position correction unit 110 corrects the positions of the first polarized image and the second polarized image of the road surface so that they correspond to polarized images photographed at the same position (S100).
  • the position corrector 110 receives a first polarized image and a second polarized image of the road surface.
  • The first polarized image and the second polarized image are images captured by a polarized image photographing unit separate from the road surface state recognition apparatus 100, and are polarized images that include at least the road surface.
  • In one embodiment, the first and second polarized images may be photographed by one polarized image photographing unit separate from the road surface state recognition apparatus 100; in this case they are images photographed by that single unit at different positions.
  • In another embodiment, the first and second polarized images may be photographed by two polarized image photographing units spaced apart from each other; since the two units are spaced apart, the first and second polarized images are polarized images photographed at different positions.
  • One of the first polarized image and the second polarized image is a vertically polarized image, and the other is a horizontally polarized image. That is, since a vertically polarized image and a horizontally polarized image photographed at the same position are required to determine the road surface state, the position correction unit 110 receives a vertically polarized image and a horizontally polarized image.
  • However, the first polarized image and the second polarized image are polarized images photographed at different positions. The first polarized image may be the vertically polarized image and the second polarized image the horizontally polarized image, or, conversely, the first polarized image may be the horizontally polarized image and the second polarized image the vertically polarized image.
  • The position correction unit 110 corrects the positions of the first and second polarized images, photographed at different positions, so that they correspond to polarized images photographed at the same position. That is, assuming that the first polarized image was photographed at a first position and the second polarized image at a second position, the position correction unit 110 may either correct the first polarized image into a polarized image as if photographed at the second position, or correct the second polarized image into a polarized image as if photographed at the first position.
  • In one embodiment, the position correction unit 110 may correct the first and second polarized images using a motion prediction (motion estimation) method. That is, the position correction unit 110 receives a pair of vertically and horizontally polarized images having different photographing positions and, using the motion prediction method, generates a pair of polarized images as if photographed at the same position.
  • In another embodiment, the position correction unit 110 may correct the first and second polarized images using a parallax prediction (disparity estimation) method. That is, the position correction unit 110 receives a pair of vertically and horizontally polarized images having different photographing positions and, using the parallax prediction method, generates a pair of polarized images as if photographed at the same position.
  • the road surface state determination unit 120 determines the state of the road surface through the corrected first polarized image and the second polarized image (S200).
  • The road surface state determination unit 120 receives the first and second polarized images corrected by the position correction unit 110. As a result of the position correction, the first and second polarized images have been corrected to images photographed at the same position, so the road surface state determination unit 120 receives a vertically polarized image and a horizontally polarized image photographed at the same position.
  • The road surface state determination unit 120 classifies the road surface state using the polarization characteristics and texture characteristics of the corrected first and second polarized images.
  • When light is incident on a material with high reflectivity, such as a water surface, the reflectance varies according to the vibration direction of the light, so that the reflectance of one polarization component is high while that of the other is low. In particular, when the angle of incidence is less than the Brewster angle, the reflectance of the horizontal component is close to zero, resulting in a large difference in the amount of reflected light between the vertically polarized image and the horizontally polarized image. On the other hand, for a material with low reflectance, such as a dry road, the difference in reflectance between the vertically polarized image and the horizontally polarized image is not large. Based on this, the road surface state can be determined according to whether the polarization coefficient is high or low.
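  • One common way to express such a polarization coefficient, used here only as an illustrative assumption rather than the patent's own formula, is the normalized difference of the two polarized intensities at each pixel $x$:

$$\rho(x) = \frac{I_{\perp}(x) - I_{\parallel}(x)}{I_{\perp}(x) + I_{\parallel}(x)}$$

where $I_{\perp}$ and $I_{\parallel}$ denote the vertically and horizontally polarized intensities. Highly reflective (wet or icy) areas then yield a large $|\rho(x)|$, while dry asphalt yields values close to zero.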
  • In addition, the horizontally polarized image is passed through a wavelet packet transform to obtain wavelet coefficients.
  • the road surface state can then be determined by classifying the texture of the road surface using the wavelet coefficients.
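  • As an illustration of this texture step, the following is a minimal sketch assuming the PyWavelets library; the wavelet family, decomposition level, and subband statistic are assumptions rather than values given in the patent.

```python
import numpy as np
import pywt  # PyWavelets

def texture_features(horiz_polarized, wavelet="db2", level=2):
    """Decompose the horizontally polarized image with a 2-D wavelet packet
    transform and summarize each subband by its mean absolute coefficient."""
    wp = pywt.WaveletPacket2D(data=horiz_polarized.astype(float),
                              wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.mean(np.abs(n.data)) for n in nodes])

# The resulting feature vector can then be fed to any texture classifier
# to help distinguish, for example, dry, wet, snowy, or icy surfaces.
```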
  • The road surface state determination unit 120 may determine the state of the road surface by combining the method of recognizing the road surface state using the polarization characteristic and the method using the texture characteristic, as described above. In addition, a validity test may be performed so that the final road surface recognition result is output only for valid areas.
  • When one polarized image photographing unit or two polarized image photographing units are used in a moving vehicle, the vertically polarized image and the horizontally polarized image are photographed at different positions; the road surface state recognition apparatus 100 solves this problem.
  • That is, the position correction unit 110 of the road surface state recognition apparatus 100 uses the motion prediction method or the parallax prediction method to correct the position of at least one of the vertically polarized image and the horizontally polarized image, so that the vertically and horizontally polarized images photographed at different positions are corrected as if photographed at the same position.
  • Accordingly, the road surface state recognition apparatus 100 and method according to an embodiment of the present invention can accurately determine the road surface state by using a vertically polarized image and a horizontally polarized image that were photographed at different positions but corrected as if photographed at the same position.
  • FIG. 3 is a flowchart illustrating a road surface state recognition method according to another embodiment of the present invention.
  • FIG. 4 is a schematic view for explaining a road surface state recognition method according to another embodiment of the present invention.
  • FIGS. 5A and 5B are schematic diagrams of polarized images for explaining a road surface state recognition method according to another embodiment of the present invention.
  • Specifically, FIG. 3 is a flowchart for explaining a road surface state recognition method using a motion prediction method, FIG. 4 is a schematic diagram for explaining the polarized image photographing positions of a vehicle 400 equipped with the road surface state recognition apparatus 100, and FIGS. 5A and 5B are schematic diagrams of the first polarized image and the second polarized image for explaining the road surface state recognition method using the motion prediction method.
  • the position corrector 110 receives a first polarized image and a second polarized image of the road surface.
  • One of the first polarization image and the second polarization image is a vertical polarization image, and the other is a horizontal polarization image.
  • The first polarized image is a polarized image photographed by the polarized image photographing unit 430 attached to the vehicle 400, and the second polarized image is also a polarized image photographed by the polarized image photographing unit 430 of the vehicle 400.
  • Between the two images, the vehicle 400 moves during the inter-frame time interval from the first position L1 to the second position L2 or L2'. Therefore, the position of the star-shaped object in the first polarized image and its position in the second polarized image may be different.
  • Next, the position correction unit 110 corrects the positions of the first and second polarized images of the road surface so that they correspond to polarized images photographed at the same position (S100).
  • a process of performing position correction by the position correction unit 110 using the motion prediction method will be described with reference to FIGS. 5A and 5B.
  • the position corrector 110 defines one of the first polarized image and the second polarized image as a reference image and the other as a candidate image (S110).
  • That is, the position correction unit 110 calculates which region of the candidate image corresponds to the same position as each region of the polarized image defined as the reference image. For example, the position correction unit 110 may calculate which pixel of the second polarized image corresponds to the same position as the pixel P1 at the vertex of the star-shaped object in the first polarized image, and may determine that the pixel P1 corresponds to the pixel P2 of the second polarized image.
  • the region of the polarization image may be defined as one pixel in the polarization image or may be a block that is a group of pixels.
  • Hereinafter, it is assumed that the first polarized image is the reference image and the second polarized image is the candidate image, but the present invention is not limited thereto. It is further assumed that a region of the polarized image is a single pixel, but the present invention is not limited thereto either.
  • Next, the position correction unit 110 generates, for each region of the reference image, a motion vector mv connecting that region to the region of candidate pixels representing the same position in the candidate image (S120).
  • Specifically, the position correction unit 110 calculates a cost function between the region of the reference image and regions of the candidate image, and generates a motion vector connecting the region of the reference image to the region of the candidate image that minimizes the value of the cost function. That is, the position correction unit 110 calculates the cost function between a target pixel of the reference image and candidate pixels of the candidate image, and connects the candidate pixel with the minimum cost to the target pixel of the reference image by a motion vector. The motion vector mv may be calculated by the following Equation 1.
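  • From the definitions above and below, a plausible reconstruction of Equation 1 (notation assumed) is a minimization of the cost function over the search range:

$$\mathrm{mv}(x) = \arg\min_{v \in SR}\ \mathrm{cost}(x, v) \qquad \text{(1, reconstructed)}$$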
  • Here, mv(x) is the motion vector at the position x of the target pixel of the reference image, and is the vector that minimizes the cost function among the candidate vectors v of the motion vector.
  • the candidate vector v of the motion vector is a motion vector connecting the target pixel of the reference image and the pixels in the search range SR of the candidate image.
  • the search range SR refers to a range in which candidate groups of motion vectors for comparing a cost function are distributed.
  • For example, the search range SR used to generate the motion vector for the pixel P1 of the first polarized image may be defined around the position in the second polarized image corresponding to the pixel P1, and may have a quadrangular shape as shown in FIG. 5B or a circular shape, but is not limited thereto.
  • The cost function may be determined based on a matching cost function, which is a function indicating the degree of similarity between the region of the reference image and a candidate region of the candidate image, and a spatial constraint function, which is a function indicating the degree of similarity between a candidate vector of the motion vector and the motion vectors around the candidate vector.
  • the matching cost function is a function indicating how similar the image of the position x of the target pixel of the reference image and the image of the position x + v of the candidate image are. That is, the matching cost function is a function indicating how similar the target pixel of the reference image and the pixel in the search range SR of the candidate image are.
  • As the matching cost function, for example, the sum of absolute differences (SAD) may be used. Equation 3 shows an embodiment using SAD as the matching cost function.
  • Here, I denotes the first polarized image, and k denotes the index of the neighboring pixels.
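  • Writing $I_1$ for the reference (first polarized) image, $I_2$ for the candidate (second polarized) image, and letting $k$ range over a small window of neighboring offsets, a plausible SAD-based reconstruction of Equation 3 (notation assumed) is:

$$\mathrm{matching\_cost}(x, v) = \sum_{k} \left| I_1(x + k) - I_2(x + v + k) \right| \qquad \text{(3, reconstructed)}$$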
  • the spatial constraint function indicates how similar the candidate vector v of the motion vector to the position x of the target pixel of the reference image is similar to the motion vectors around the candidate vector v.
  • In a general image, most of the motion vector field is smooth, so the larger the difference between the candidate vector v of the motion vector for the position x of the target pixel of the reference image and the motion vectors around it, the higher the cost. Equation 4 shows one embodiment.
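  • A plausible reconstruction of Equation 4, penalizing deviation from the motion vectors already estimated at neighboring positions $N(x)$ (assumed form), is:

$$\mathrm{spatial\_constraint}(x, v) = \sum_{y \in N(x)} \left| v - \mathrm{mv}(y) \right| \qquad \text{(4, reconstructed)}$$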
  • As described above, the cost function may be determined based on the matching cost function and the spatial constraint function, and the spatial constraint function may be multiplied by a weight λ.
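  • Under these definitions, the combined cost function presumably takes the weighted form (a reconstruction under the stated assumptions):

$$\mathrm{cost}(x, v) = \mathrm{matching\_cost}(x, v) + \lambda \cdot \mathrm{spatial\_constraint}(x, v)$$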
  • the position correction unit 110 corrects the photographing position of one of the reference image and the candidate image based on the motion vector (S130).
  • the position corrector 110 may correct at least one photographing position of the reference image and the candidate image based on the generated motion vector.
  • In one embodiment, the position correction unit 110 corrects the photographing position of the reference image by shifting the pixels of the reference image by their motion vectors (adding each motion vector to the pixel position), so that the reference image and the candidate image are corrected as if photographed at the same position.
  • In another embodiment, the position correction unit 110 corrects the photographing position of the candidate image by shifting the pixels of the candidate image by their motion vectors in the opposite direction (subtracting each motion vector from the pixel position), so that the reference image and the candidate image are corrected as if photographed at the same position.
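  • The following is a minimal block-matching sketch of steps S110 to S130 under the assumptions above; the function names, block size, search range, and weight lam are illustrative and not taken from the patent.

```python
import numpy as np

# Minimal block-matching sketch (assumed implementation, not the patent's code).
# `ref` is the reference (first polarized) image and `cand` is the candidate
# (second polarized) image, both 2-D grayscale arrays.

def block_sad(ref, cand, y, x, vy, vx, block):
    """Matching cost: SAD between a reference block at (y, x) and the candidate
    block displaced by the candidate vector (vy, vx)."""
    h, w = ref.shape
    ty, tx = y + vy, x + vx
    if ty < 0 or tx < 0 or ty + block > h or tx + block > w:
        return np.inf  # displaced block falls outside the candidate image
    a = ref[y:y + block, x:x + block].astype(np.float64)
    b = cand[ty:ty + block, tx:tx + block].astype(np.float64)
    return np.abs(a - b).sum()

def estimate_motion(ref, cand, block=8, search=8, lam=0.5):
    """Per-block motion vectors minimizing SAD plus a simple spatial constraint
    against the already-estimated left neighbor."""
    h, w = ref.shape
    field = np.zeros((h // block, w // block, 2), dtype=np.int64)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            left = field[by, bx - 1] if bx > 0 else np.zeros(2, dtype=np.int64)
            best_cost, best_v = np.inf, (0, 0)
            for vy in range(-search, search + 1):      # search range SR
                for vx in range(-search, search + 1):
                    cost = block_sad(ref, cand, y, x, vy, vx, block)
                    cost += lam * (abs(vy - left[0]) + abs(vx - left[1]))
                    if cost < best_cost:
                        best_cost, best_v = cost, (vy, vx)
            field[by, bx] = best_v
    return field

def correct_reference(ref, field, block=8):
    """Shift each reference block by its motion vector so that the corrected
    reference image aligns with the candidate image."""
    out = np.zeros_like(ref)
    h, w = ref.shape
    for by in range(field.shape[0]):
        for bx in range(field.shape[1]):
            vy, vx = field[by, bx]
            y, x = by * block, bx * block
            ty, tx = y + vy, x + vx
            if 0 <= ty and ty + block <= h and 0 <= tx and tx + block <= w:
                out[ty:ty + block, tx:tx + block] = ref[y:y + block, x:x + block]
    return out
```

A caller would pass the first polarized image as ref and the second as cand, and could then feed correct_reference(ref, estimate_motion(ref, cand)) together with cand to the road surface classifier.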
  • the road surface state determination unit 120 determines the state of the road surface through the corrected first polarized image and the second polarized image (S200).
  • the determination of the state of the road surface by the road surface state determination unit 120 is the same as that described with reference to FIGS. 1 and 2.
  • In the road surface state recognition method described above, when the first polarized image and the second polarized image are photographed at different positions using one polarized image photographing unit 430 in the moving vehicle 400, the first and second polarized images may be corrected, using the motion prediction method, as if photographed at the same position. Accordingly, the road surface state can be determined more accurately using the vertically polarized image and the horizontally polarized image corrected as if photographed at the same position.
  • Meanwhile, the cost function may further be determined based on a velocity constraint function, which is a function indicating the degree of similarity between a reference (default) motion vector, determined in consideration of the speed at which the road surface state recognition apparatus 100 moves and the characteristics of the polarized image photographing unit 430 that photographs the first and second polarized images, and a candidate vector of the motion vector. That is, the cost function may be determined based on the matching cost function, the spatial constraint function, and the velocity constraint function.
  • The speed at which the road surface state recognition apparatus 100 moves is the same as the speed of the moving vehicle 400 and may be obtained through the speedometer or GPS of the vehicle 400. Therefore, the difference in photographing position between the polarized images can basically be obtained from the moving distance of the polarized image photographing unit 430, which is the speed of the vehicle 400 multiplied by the time interval between frames.
  • However, although the vehicle 400 may move straight from the first position L1 to the second position L2, it often does not move straight and instead moves from the first position L1 to the second position L2'.
  • In addition, the road surface is often not exactly horizontal.
  • Therefore, by including in the cost function the difference between the motion vector generated between the target pixel of the reference image and the candidate pixel of the candidate image and the reference motion vector obtained from the speedometer of the vehicle 400, the motion vectors of all pixels can be derived so that they do not differ significantly from the reference motion vector obtained from the vehicle speed.
  • To this end, the position correction unit 110 receives the speed V of the vehicle 400 and obtains a reference (default) motion vector v0(x) for the position x of the reference image.
  • The reference motion vector is basically proportional to the speed of the vehicle 400, but since the distance between the polarized image photographing unit 430 and the subject differs for each pixel, the proportionality constant α(x) must be defined differently according to the pixel position. Since α(x) depends on the installation conditions and the specifications of the polarized image photographing unit 430, it may be set through actual measurement.
  • the reference motion vector is shown in Equation 5 below.
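  • From the description above, a plausible reconstruction of Equation 5 (with the inter-frame time interval absorbed into the per-pixel constant, an assumption) is:

$$v_{0}(x) = \alpha(x)\, V \qquad \text{(5, reconstructed)}$$

where $V$ is the speed of the vehicle 400 and $\alpha(x)$ is the proportionality constant measured for pixel position $x$.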
  • The reference motion vector is identical to the actual motion vector in an ideal driving environment (the vehicle 400 moving straight at a constant speed with no up-and-down movement and no curves), but in a real driving environment it cannot be used as-is; it is therefore applied as a soft constraint through a speed constraint function.
  • One implementation of the speed constraint function is shown in Equation 6.
  • When the speed constraint function shown in Equation 6 is applied, the overall cost function is as follows.
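  • Plausible reconstructions of Equation 6 and of the resulting overall cost function (weights $\lambda_1$ and $\lambda_2$ assumed) are:

$$\mathrm{speed\_constraint}(x, v) = \left| v - v_{0}(x) \right| \qquad \text{(6, reconstructed)}$$

$$\mathrm{cost}(x, v) = \mathrm{matching\_cost}(x, v) + \lambda_{1}\,\mathrm{spatial\_constraint}(x, v) + \lambda_{2}\,\mathrm{speed\_constraint}(x, v)$$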
  • Accordingly, the photographing positions of the reference image and the candidate image may be corrected more accurately. That is, when correcting the polarized images photographed by the polarized image photographing unit 430 installed in the moving vehicle 400 so that they appear photographed at the same position, additionally considering the speed constraint function related to the speed of the vehicle 400 allows the photographing positions of the reference image and the candidate image to be corrected more accurately, and thus the road surface state to be identified more accurately.
  • Meanwhile, the road surface state determination unit 120 may determine the state of the road surface while excluding regions of the reference image whose cost function value is greater than or equal to a threshold. For a region where an incorrect motion vector has been calculated because the photographing position correction failed, including it in the road surface recognition often leads to an incorrect result. Therefore, if the position correction is judged to have failed for a region, that region can be excluded from the road surface recognition.
  • The cost function can be used as one way to determine whether the position correction has failed. In general, the smaller the value of the cost function of a motion vector, the higher the probability that the motion vector is correct. Therefore, when the value of the cost function for a particular region of the reference image is greater than or equal to the threshold, that region may be excluded from the road surface recognition, so that the road surface state may be determined more accurately.
  • In addition, the road surface state determination unit 120 may determine the state of the road surface while excluding regions of the reference image corresponding to motion vectors, among those generated by the position correction unit 110, whose difference from the reference motion vector is greater than or equal to a threshold.
  • The road surface recognition algorithm using the polarization characteristic presupposes that the vertically polarized image and the horizontally polarized image are photographed at exactly the same position, because the same object must appear at the same pixel in both polarized images. However, even when the two polarized images are photographed at exactly the same position or are perfectly corrected, if their photographing times differ, objects may move between the two images and the above premise may no longer hold. In that case, the road surface state determination unit 120 may incorrectly interpret the luminance difference caused by the movement of an object as a polarization effect and may recognize a non-wet area as wet.
  • Therefore, motion vectors are calculated from the pair of polarized images having different photographing times, and pixels whose motion vectors differ significantly from those of the surrounding regions are regarded as moving objects and excluded from the road surface recognition. The background region uncovered by a moving object is also excluded from the road surface recognition, because a normal motion vector cannot be obtained there.
  • That is, the road surface state determination unit 120 may determine the road surface state while excluding regions of the reference image corresponding to motion vectors whose difference from the reference motion vector, calculated on the assumption that the road surface is a perfect plane, is greater than or equal to a threshold. Thus, the road surface state can be determined more accurately.
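  • The two exclusion criteria above (a cost threshold and a maximum deviation from the reference motion vector) can be combined into a per-region validity mask; the sketch below is an assumed illustration with hypothetical array names and thresholds.

```python
import numpy as np

def validity_mask(cost_map, mv_field, ref_mv_field, cost_thr, mv_thr):
    """Keep only regions whose position correction looks trustworthy."""
    cost_ok = cost_map < cost_thr                                # correction likely succeeded
    deviation = np.linalg.norm(mv_field - ref_mv_field, axis=-1)
    motion_ok = deviation < mv_thr                               # not a moving object or outlier
    return cost_ok & motion_ok                                   # only these regions feed the classifier
```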
  • In addition, the position correction unit 110 may set the search range centered on the reference motion vector, and the candidate region of the candidate image may be defined as a region within that search range of the candidate image.
  • FIGS. 6A and 6B are schematic diagrams of polarized images for describing a road surface state recognition method according to another embodiment of the present invention.
  • the search range refers to a range in which candidate groups of motion vectors for comparing cost functions are distributed. Therefore, when the magnitude of the actual motion vector is larger than the search range, motion prediction fails.
  • For example, as illustrated in FIG. 6A, when the search range is set to the first search range SR1 and the star-shaped object of the first polarized image has moved outside the first search range SR1 in the second polarized image, the pixel P2 of the second polarized image corresponding to the pixel P1 at the vertex of the star-shaped object in the first polarized image lies outside the first search range SR1. Therefore, the motion vector corresponding to the pixel P1 of the first polarized image cannot be generated accurately.
  • In this case, the search range should be widened to the second search range SR2; however, increasing the size of the search range has the disadvantage of greatly increasing the amount of computation.
  • In general, the search range in the second polarized image is set around the zero vector, because the motion vector with the highest probability of occurrence in a general image is the zero vector.
  • Here, however, the motion vector with the highest probability of occurrence is the reference motion vector rather than the zero vector. Accordingly, as shown in FIG. 6B, when the search range SR' is set around the point P' obtained using the reference motion vector v0, a large motion vector can be found even though the search range SR' itself is not large.
  • Accordingly, the motion vector can be calculated more accurately, and the amount of computation required to calculate it can also be reduced.
  • FIG. 7 is a flowchart illustrating a road surface state recognition method according to another embodiment of the present invention.
  • FIG. 8 is a schematic view for explaining a road surface state recognition method according to another embodiment of the present invention.
  • FIGS. 9A and 9B are schematic diagrams of polarized images for explaining a road surface state recognition method according to another embodiment of the present invention.
  • Specifically, FIG. 7 is a flowchart for explaining a road surface state recognition method using a parallax prediction method, FIG. 8 is a schematic diagram for explaining the polarized image photographing positions of a vehicle 800 equipped with the road surface state recognition apparatus 100, and FIGS. 9A and 9B are schematic diagrams of the first polarized image and the second polarized image for explaining the road surface state recognition method using the parallax prediction method.
  • the position corrector 110 receives a first polarized image and a second polarized image of the road surface.
  • One of the first polarization image and the second polarization image is a vertical polarization image, and the other is a horizontal polarization image.
  • The first polarized image is a polarized image photographed by the first polarized image photographing unit 830 of the vehicle 800, and the second polarized image is a polarized image photographed by the second polarized image photographing unit 840 of the vehicle 800.
  • Here, the first polarized image photographing unit 830 and the second polarized image photographing unit 840 are synchronized. Referring to FIG. 8, since the polarized images are photographed by the first polarized image photographing unit 830 and the second polarized image photographing unit 840, which are spaced apart from each other on the vehicle 800, the position of the star-shaped object in the first polarized image and its position in the second polarized image are different.
  • Next, the position correction unit 110 corrects the positions of the first and second polarized images of the road surface so that they correspond to polarized images photographed at the same position (S100).
  • a process of performing position correction by the position correction unit 110 using the parallax prediction method will be described with reference to FIGS. 9A and 9B.
  • the position corrector 110 defines one of the first polarized image and the second polarized image as a reference image and the other as a candidate image (S110). Since step S110 is the same as step S110 described with reference to FIG. 3, redundant description is omitted.
  • Next, the position correction unit 110 generates, for each region of the reference image, a parallax vector D connecting that region to the region of candidate pixels representing the same position in the candidate image (S121).
  • Specifically, the position correction unit 110 calculates a cost function between the region of the reference image and regions of the candidate image, and generates a parallax vector connecting the region of the reference image to the region of the candidate image that minimizes the value of the cost function. That is, the position correction unit 110 calculates the cost function between a target pixel of the reference image and candidate pixels of the candidate image, and connects the candidate pixel with the minimum cost to the target pixel of the reference image by a parallax vector.
  • the parallax vector D may be calculated through the following equation (8).
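  • By analogy with Equation 1, a plausible reconstruction of Equation 8 (notation assumed) is:

$$D(x) = \arg\min_{d \in SR}\ \mathrm{cost}(x, d) \qquad \text{(8, reconstructed)}$$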
  • Here, D(x) is the parallax vector at the position x of the target pixel of the reference image, and is the vector that minimizes the cost function among the candidate vectors d of the parallax vector.
  • the candidate vector d of the disparity vector is a disparity vector connecting the target pixel of the reference image and the pixels in the search range SR of the candidate image.
  • the cost function (cost) is largely composed of two parts as shown in equation (9).
  • That is, the cost may be determined based on a matching cost function, which indicates the degree of similarity between the region of the reference image and a candidate region of the candidate image, and a spatial constraint function, which indicates the degree of similarity between a candidate vector of the parallax vector and the parallax vectors around the candidate vector.
  • the matching cost function is a function indicating how similar the image of the position x of the target pixel of the reference image and the image of the position x + d of the candidate image are. That is, the matching cost function is a function indicating how similar the target pixel of the reference image and the pixel in the search range SR of the candidate image are.
  • As the matching cost function, SAD or the like may be used, as described above. Equation 10 shows an embodiment using SAD as the matching cost function.
  • Here, I denotes the first polarized image, and k denotes the index of the neighboring pixels.
  • The spatial constraint function indicates how similar the candidate vector d of the parallax vector for the position x of the target pixel of the reference image is to the parallax vectors around the candidate vector d. In a general image, most of the parallax vector field is smooth, so the larger the difference between the candidate vector d and the parallax vectors around it, the higher the cost. Equation 11 shows one embodiment.
  • As described above, the cost function may be determined based on the matching cost function and the spatial constraint function, and the spatial constraint function may be multiplied by a weight λ.
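  • Plausible reconstructions of Equations 9 to 11, mirroring the matching cost and spatial constraint of the motion-prediction case with the candidate disparity $d$ in place of the candidate motion vector $v$ (notation assumed), are:

$$\mathrm{cost}(x, d) = \mathrm{matching\_cost}(x, d) + \lambda \cdot \mathrm{spatial\_constraint}(x, d) \qquad \text{(9, reconstructed)}$$

$$\mathrm{matching\_cost}(x, d) = \sum_{k} \left| I_1(x + k) - I_2(x + d + k) \right| \qquad \text{(10, reconstructed)}$$

$$\mathrm{spatial\_constraint}(x, d) = \sum_{y \in N(x)} \left| d - D(y) \right| \qquad \text{(11, reconstructed)}$$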
  • the position correction unit 110 corrects the photographing position of one of the reference image and the candidate image based on the parallax vector (S131).
  • the position corrector 110 may correct at least one photographing position of the reference image and the candidate image based on the generated parallax vector.
  • In one embodiment, the position correction unit 110 corrects the photographing position of the reference image by shifting the pixels of the reference image by their parallax vectors (adding each parallax vector to the pixel position), so that the reference image and the candidate image are corrected as if photographed at the same position.
  • In another embodiment, the position correction unit 110 corrects the photographing position of the candidate image by shifting the pixels of the candidate image by their parallax vectors in the opposite direction (subtracting each parallax vector from the pixel position), so that the reference image and the candidate image are corrected as if photographed at the same position.
  • the road surface state determination unit 120 determines the state of the road surface through the corrected first polarized image and the second polarized image (S200).
  • the determination of the state of the road surface by the road surface state determination unit 120 is the same as that described with reference to FIGS. 1 and 2.
  • In the road surface state recognition method described above, the first polarized image and the second polarized image photographed by the first polarized image photographing unit 830 and the second polarized image photographing unit 840, which are installed apart from each other on the vehicle 800, may be corrected, using the parallax prediction method, as if photographed at the same position. Accordingly, the road surface state can be determined more accurately using the vertically polarized image and the horizontally polarized image corrected as if photographed at the same position.
  • Meanwhile, the cost function may further be determined based on a disparity constraint function, which indicates the degree of similarity between a reference (default) parallax vector, determined in consideration of the distance between the first polarized image photographing unit 830 and the second polarized image photographing unit 840 and the characteristics of the two photographing units, and a candidate vector of the parallax vector. That is, the cost function may be determined based on the matching cost function, the spatial constraint function, and the parallax constraint function.
  • Since the distance between the first polarized image photographing unit 830 and the second polarized image photographing unit 840 installed on the vehicle 800 is already known, the reference parallax between a pixel of the reference image and the corresponding pixel of the candidate image can be calculated from the installation conditions of the two units, assuming that the road is a perfect plane. Under ideal conditions, the parallax vector between a pixel of the reference image and the corresponding pixel of the candidate image coincides with this reference parallax vector. In reality, however, the road is not a perfect plane and the vehicle 800 does not always move straight, so the reference parallax vector alone cannot be used in all cases.
  • Therefore, by adding to the cost function the difference between the parallax vector predicted between the target pixel of the reference image and the candidate pixel of the candidate image and the reference parallax vector, the predicted parallax vector can be prevented from deviating significantly from the reference parallax vector.
  • An embodiment of the parallax constraint function using the reference parallax vector d0 is shown in Equation 12.
  • When the parallax constraint function shown in Equation 12 is applied, the improved overall cost function is as follows.
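  • Plausible reconstructions of Equation 12 and of the improved overall cost function (weights $\lambda_1$ and $\lambda_2$ assumed) are:

$$\mathrm{disparity\_constraint}(x, d) = \left| d - d_{0}(x) \right| \qquad \text{(12, reconstructed)}$$

$$\mathrm{cost}(x, d) = \mathrm{matching\_cost}(x, d) + \lambda_{1}\,\mathrm{spatial\_constraint}(x, d) + \lambda_{2}\,\mathrm{disparity\_constraint}(x, d)$$

where $d_{0}(x)$ is the reference parallax vector computed from the known spacing and installation geometry of the two photographing units under the flat-road assumption.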
  • Accordingly, the photographing positions of the reference image and the candidate image may be corrected more accurately. That is, when correcting the polarized images photographed by the first polarized image photographing unit 830 and the second polarized image photographing unit 840, which are installed at different positions on the vehicle 800, so that they appear photographed at the same position, additionally considering the parallax constraint function related to the installation of the two photographing units allows the photographing positions of the reference image and the candidate image to be corrected more accurately, and thus the road surface state to be identified more accurately.
  • Meanwhile, the road surface state determination unit 120 may determine the state of the road surface while excluding regions of the reference image whose cost function value is greater than or equal to a threshold. For a region where an incorrect parallax vector has been calculated because the photographing position correction failed, including it in the road surface recognition often leads to an incorrect result. Therefore, if the position correction is judged to have failed for a region, that region can be excluded from the road surface recognition.
  • The cost function can be used as one way to determine whether the position correction has failed. In general, the smaller the value of the cost function of a parallax vector, the higher the probability that the parallax vector is correct. Therefore, when the value of the cost function for a particular region of the reference image is greater than or equal to the threshold, that region may be excluded from the road surface recognition, so that the road surface state may be determined more accurately.
  • In addition, the road surface state determination unit 120 may determine the state of the road surface while excluding regions of the reference image corresponding to parallax vectors, among those generated by the position correction unit 110, whose difference from the reference parallax vector is greater than or equal to a threshold.
  • The road surface recognition algorithm using the polarization characteristic presupposes that the vertically polarized image and the horizontally polarized image are photographed at exactly the same position, because the same object must appear at the same pixel in both polarized images. However, even if the two polarized images are photographed at exactly the same position or are perfectly corrected, a problem may arise if an object protrudes from the road surface.
  • Since a region of the reference image corresponding to a parallax vector whose difference from the reference parallax vector, calculated on the assumption that the road surface is a perfect plane, is greater than or equal to a threshold is highly likely to be an object protruding from the road surface, the road surface state determination unit 120 may determine the road surface state while excluding that region. Thus, the road surface state can be determined more accurately.
  • In addition, the position correction unit 110 may set the search range centered on the reference parallax vector, and the candidate region of the candidate image may be defined as a region within that search range of the candidate image.
  • FIGS. 10A and 10B are schematic diagrams of polarized images for describing a road surface state recognition method according to another embodiment of the present invention.
  • The search range means a range in which the candidate group of parallax vectors to be compared through the cost function is distributed; therefore, parallax prediction fails when the magnitude of the actual parallax vector is larger than the search range. For example, as illustrated in FIG. 10A, when the search range is set to the first search range SR1 and the star-shaped object of the first polarized image lies outside the first search range SR1 in the second polarized image, the pixel P2 of the second polarized image corresponding to the pixel P1 at the vertex of the star-shaped object in the first polarized image is outside the first search range SR1. Therefore, the parallax vector corresponding to the pixel P1 of the first polarized image cannot be generated accurately.
  • In this case, the search range should be widened to the second search range SR2; however, increasing the size of the search range has the disadvantage of greatly increasing the amount of computation.
  • By setting the center of the search range at the reference parallax vector, however, a large parallax vector can be handled without increasing the amount of computation. That is, as shown in FIG. 10B, when the search range SR' is set around the point P' obtained using the reference parallax vector d0, a large parallax vector can be found even though the search range SR' itself is not large.
  • the parallax vector can be calculated more accurately, and the amount of computation required to calculate the parallax vector can also be reduced.
  • FIGS. 11 and 12 are schematic block diagrams of a road surface state recognition apparatus according to various embodiments of the present disclosure.
  • The road surface state recognition apparatus 1100 includes one polarized image photographing unit 1130, a position correction unit 110, and a road surface state determination unit 120.
  • The road surface state recognition apparatus 1100 illustrated in FIG. 11 is substantially the same as the road surface state recognition apparatus 100 illustrated in FIG. 1, except that the polarized image photographing unit 1130 is added; redundant description is therefore omitted.
  • The road surface state recognition apparatus 1100 may include one polarization image photographing unit 1130 for capturing the first polarized image and the second polarized image.
  • The polarization image photographing unit 1130 may photograph the first polarized image at the first position and the second polarized image at the second position as the road surface state recognition apparatus 1100 moves. That is, the first polarized image and the second polarized image are polarized images photographed at different positions using the single polarization image photographing unit 1130.
  • To this end, the polarization image photographing unit 1130 needs to photograph polarized images of different polarization directions at the first position and at the second position.
  • For example, the polarization image photographing unit 1130 may include a polarizer.
  • In this case, the polarization image photographing unit 1130 may photograph a vertically polarized image at the first position using a physically rotatable polarizer and rotate the polarizer at the second position to photograph a horizontally polarized image.
  • Alternatively, the polarization image photographing unit 1130 may include a polarizer whose polarization direction is changed electrically.
  • In this case, the polarization image photographing unit 1130 may photograph a vertically polarized image at the first position and change the polarization direction of the polarizer at the second position to photograph a horizontally polarized image.
  • However, the present invention is not limited to the methods described above, and the polarization image photographing unit 1130 may photograph the vertically polarized image and the horizontally polarized image in various ways. Accordingly, even when no polarization image photographing unit is embedded in or attached to the vehicle, the road surface state recognition apparatus 1100 can acquire the polarized images through the polarization image photographing unit 1130 included in the road surface state recognition apparatus 1100.
  • The road surface state recognition apparatus 1200 includes a first polarized image photographing unit 1230, a second polarized image photographing unit 1240, a position correction unit 110, and a road surface state determination unit 120.
  • Compared with the road surface state recognition apparatus 100 illustrated in FIG. 1, only the first polarized image photographing unit 1230 and the second polarized image photographing unit 1240 are added; the other components are substantially the same, so duplicate description is omitted.
  • The road surface state recognition apparatus 1200 may include a first polarized image photographing unit 1230 for capturing a first polarized image and a second polarized image photographing unit 1240 for capturing a second polarized image.
  • The first polarized image photographing unit 1230 may capture a first polarized image, which is one of a vertically polarized image and a horizontally polarized image, and the second polarized image photographing unit 1240, spaced apart from the first polarized image photographing unit 1230, may capture a second polarized image, which is the other of the vertically polarized image and the horizontally polarized image.
  • Accordingly, even when no polarized image photographing unit is embedded in or attached to the vehicle, the road surface state recognition apparatus 1200 can acquire the polarized images through the first polarized image photographing unit 1230 and the second polarized image photographing unit 1240 included in the road surface state recognition apparatus 1200.
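
Finally, a purely structural sketch of how the two units described above could be chained: the position correction unit 110 aligns the second polarized image to the first, and the road surface state determination unit 120 evaluates only the pixels kept by the flat-road mask. Here `correct_position` and `classify_road_state` are hypothetical placeholders (the patent does not specify a particular warping routine or classifier), and `road_surface_mask` is the helper sketched earlier.

```python
def recognize_road_surface(first_polarized, second_polarized, reference_disparity,
                           correct_position, classify_road_state, threshold=2.0):
    """Chain position correction (unit 110) and state determination (unit 120).

    correct_position    -- assumed callable returning the second polarized image warped
                           to the first image's viewpoint plus the per-pixel disparity
                           it found (e.g. via block matching as sketched above)
    classify_road_state -- hypothetical placeholder for the decision logic of unit 120
    """
    corrected_second, disparity = correct_position(first_polarized, second_polarized)
    # Keep only pixels consistent with the perfect-plane road model.
    mask = road_surface_mask(disparity, reference_disparity, threshold)
    return classify_road_state(first_polarized[mask], corrected_second[mask])
```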

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a device and a method for recognizing a road surface state. The device for recognizing a road surface state comprises: a position correction unit for correcting a first polarized image and a second polarized image of a road surface so that they correspond to polarized images photographed at the same position; and a road surface state determination unit for determining the road surface state by means of the first polarized image and the second polarized image corrected by the position correction unit.
PCT/KR2016/007122 2015-07-02 2016-07-01 Dispositif et procédé permettant de reconnaître un état de surface de route WO2017003257A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2015-0094849 2015-07-02
KR1020150094849A KR101766239B1 (ko) 2015-07-02 2015-07-02 도로 표면 상태 인식 장치 및 방법

Publications (1)

Publication Number Publication Date
WO2017003257A1 true WO2017003257A1 (fr) 2017-01-05

Family

ID=57608875

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/007122 WO2017003257A1 (fr) 2015-07-02 2016-07-01 Dispositif et procédé permettant de reconnaître un état de surface de route

Country Status (2)

Country Link
KR (1) KR101766239B1 (fr)
WO (1) WO2017003257A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102467099B1 (ko) 2020-07-28 2022-11-14 세종대학교산학협력단 적외선 영상의 편광 각도를 이용한 도로 영역 검출 방법 및 그 장치

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004279279A (ja) * 2003-03-17 2004-10-07 Nagoya Electric Works Co Ltd 路面状態検出装置、路面状態検出方法および路面状態検出プログラム
KR101165595B1 (ko) * 2004-03-31 2012-08-09 조아킴 피에들러 탈착가능한 마그네트 홀더
JP2006058122A (ja) * 2004-08-19 2006-03-02 Nagoya Electric Works Co Ltd 路面状態判別方法およびその装置
KR20110061741A (ko) * 2009-12-02 2011-06-10 주식회사 래도 노면 상태 판별 장치 및 노면 상태 판별 방법
KR20120085932A (ko) * 2009-12-25 2012-08-01 가부시키가이샤 리코 촬상 장치, 차량용 촬상 시스템, 노면 외관 인식 방법 및 물체 식별 장치

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991713A (zh) * 2019-12-13 2021-06-18 百度在线网络技术(北京)有限公司 数据处理方法、装置、设备及存储介质
CN112991713B (zh) * 2019-12-13 2022-11-22 百度在线网络技术(北京)有限公司 数据处理方法、装置、设备及存储介质

Also Published As

Publication number Publication date
KR20170004466A (ko) 2017-01-11
KR101766239B1 (ko) 2017-08-08

Similar Documents

Publication Publication Date Title
WO2017008224A1 (fr) Procédé de détection de distance à un objet mobile, dispositif et aéronef
WO2014058248A1 (fr) Appareil de contrôle d'images pour estimer la pente d'un singleton, et procédé à cet effet
WO2011013862A1 (fr) Procédé de commande pour la localisation et la navigation de robot mobile et robot mobile utilisant un tel procédé
WO2019172725A1 (fr) Procédé et appareil pour effectuer une estimation de profondeur d'objet
WO2011052826A1 (fr) Procédé de création et d'actualisation d'une carte pour la reconnaissance d'une position d'un robot mobile
WO2021091021A1 (fr) Système de détection d'incendie
WO2015194868A1 (fr) Dispositif de commande d'entraînement d'un robot mobile sur lequel sont montées des caméras grand-angle, et son procédé
WO2015194867A1 (fr) Dispositif de reconnaissance de position de robot mobile utilisant le suivi direct, et son procédé
WO2015194864A1 (fr) Dispositif de mise à jour de carte de robot mobile et procédé associé
WO2015093828A1 (fr) Caméra stéréo et véhicule comportant celle-ci
WO2015152691A2 (fr) Appareil et procédé de génération d'une image autour d'un véhicule
WO2012064106A2 (fr) Procédé et appareil de stabilisation de vidéo par compensation de direction de visée de caméra
WO2015194865A1 (fr) Dispositif et procede pour la reconnaissance d'emplacement de robot mobile au moyen d'appariement par correlation a base de recherche
WO2012005387A1 (fr) Procédé et système de suivi d'un objet mobile dans une zone étendue à l'aide de multiples caméras et d'un algorithme de poursuite d'objet
WO2013125768A1 (fr) Appareil et procédé pour détecter automatiquement un objet et des informations de profondeur d'image photographiée par un dispositif de capture d'image ayant une ouverture de filtre à couleurs multiples
WO2018097627A1 (fr) Procédé, dispositif, système d'analyse d'image et programme qui utilisent des informations de conduite de véhicule, et support d'informations
WO2019142997A1 (fr) Appareil et procédé pour compenser un changement d'image provoqué par un mouvement de stabilisation d'image optique (sio)
WO2016167499A1 (fr) Appareil de photographie et procédé permettant de commander un appareil de photographie
WO2015147371A1 (fr) Dispositif et procédé de correction de position de véhicule, système de correction de position de véhicule à l'aide de celui-ci, et véhicule capable de fonctionnement automatique
WO2020101420A1 (fr) Procédé et appareil de mesurer des caractéristiques optiques d'un dispositif de réalité augmentée
WO2015099463A1 (fr) Dispositif d'assistance à la conduite de véhicule et véhicule le comportant
WO2017003257A1 (fr) Dispositif et procédé permettant de reconnaître un état de surface de route
WO2023008791A1 (fr) Procédé d'acquisition de distance à au moins un objet situé dans une direction quelconque d'un objet mobile par réalisation d'une détection de proximité, et dispositif de traitement d'image l'utilisant
WO2020189909A2 (fr) Système et procédé de mise en oeuvre d'une solution de gestion d'installation routière basée sur un système multi-capteurs 3d-vr
WO2016086380A1 (fr) Procédé et dispositif de détection d'objet, dispositif mobile de commande à distance et véhicule de vol

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16818290

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16818290

Country of ref document: EP

Kind code of ref document: A1