WO2017020528A1 - Lane line recognition modeling method, apparatus, storage medium and device, and recognition method, apparatus, storage medium and device
- Publication number
- WO2017020528A1 (PCT/CN2015/100175)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- lane line
- model
- identified
- region
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2137—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps
- G06F18/21375—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps involving differential geometry, e.g. embedding of pattern manifold
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4084—Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Definitions
- Embodiments of the present disclosure relate to the field of location-based service technologies, and in particular, to a lane line identification modeling method, apparatus, storage medium and device, and identification method, apparatus, storage medium, and device.
- existing lane line detection basically follows this process: edge detection is performed on the original image, the result of the edge detection is binarized, and a Hough transform, randomized Hough transform, or ransac algorithm is applied to the binarized result to extract the lane lines. Finally, the extracted lane lines are refined.
- it is desirable that the recognition accuracy of the lane line be high.
- However, the detection accuracy of the existing detection method is not high.
- embodiments of the present disclosure provide a lane line identification modeling method, apparatus, storage medium and device, and an identification method, apparatus, storage medium and device, to improve the accuracy of lane line detection.
- an embodiment of the present disclosure provides a lane line recognition modeling method, the method comprising:
- a lane line recognition model based on convolutional neural networks is trained.
- an embodiment of the present disclosure further provides a lane line identification modeling apparatus, where the apparatus includes:
- an identification module configured to identify an image area of the lane line from the image based on two-dimensional filtering;
- a training module is configured to train the lane line recognition model based on the convolutional neural network by using the model training data.
- an embodiment of the present disclosure further provides a method for identifying a lane line, where the method includes:
- model reconstruction is performed to identify lane lines in the input image.
- an embodiment of the present disclosure further provides an identification device for a lane line, where the device includes:
- a region identification module configured to identify an image region of the lane line from the image based on the two-dimensional filtering
- a probability calculation module configured to input an image of an image region in which a lane line has been identified to a lane line recognition model based on a convolutional neural network, to obtain an output probability of the model
- a model reconstruction module is configured to perform model reconstruction based on the output probability to identify a lane line in the input image.
- embodiments of the present disclosure provide one or more storage media containing computer-executable instructions which, when executed by a computer processor, perform a lane line recognition modeling method, the method comprising:
- a lane line recognition model based on convolutional neural networks is trained.
- an embodiment of the present disclosure provides an apparatus, including:
- one or more processors; a memory;
- one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing the following operations:
- a lane line recognition model based on convolutional neural networks is trained.
- an embodiment of the present disclosure provides one or more storage media including computer executable instructions for performing a lane line identification method when executed by a computer processor, the method comprising:
- model reconstruction is performed to identify lane lines in the input image.
- an apparatus including:
- one or more processors; a memory;
- one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing the following operations:
- model reconstruction is performed to identify lane lines in the input image.
- the image of the image region in which the lane line has been identified is input to the lane line recognition model based on the convolutional neural network.
- the lane line recognition model based on the convolutional neural network.
- FIG. 1 is a flowchart of a lane line recognition modeling method provided by a first embodiment of the present disclosure
- FIG. 2 is a flowchart of a lane line recognition modeling method according to a second embodiment of the present disclosure
- FIG. 3 is a flowchart of the identification step in a lane line recognition modeling method according to a third embodiment of the present disclosure
- FIG. 4 is a flowchart of construction steps in a lane line identification modeling method according to a fourth embodiment of the present disclosure
- FIG. 5 is a schematic diagram of a region of interest provided by a fourth embodiment of the present disclosure.
- FIG. 6 is a flowchart of a method for identifying a lane line according to a fifth embodiment of the present disclosure
- FIG. 7A is a diagram of a recognition result of lane line recognition in a plurality of occlusion scenarios provided by a fifth embodiment of the present disclosure.
- FIG. 7B is a diagram showing a recognition result of lane line recognition in a shadow scene provided by a fifth embodiment of the present disclosure.
- FIG. 7C is a diagram showing a recognition result of lane line recognition in an illumination change scenario according to a fifth embodiment of the present disclosure.
- FIG. 7D is a diagram showing a recognition result of lane line recognition in a ground marker interference scene according to a fifth embodiment of the present disclosure.
- FIG. 8 is a flowchart of a method for identifying a lane line according to a sixth embodiment of the present disclosure.
- FIG. 9 is a structural diagram of a lane line identification modeling device according to a seventh embodiment of the present disclosure.
- FIG. 10 is a structural diagram of an identification device for a lane line according to an eighth embodiment of the present disclosure.
- FIG. 11 is a schematic diagram of a hardware structure of an apparatus for performing a lane line identification modeling method according to a tenth embodiment of the present disclosure
- FIG. 12 is a schematic diagram showing the hardware structure of an apparatus for performing a lane line identification method according to a twelfth embodiment of the present disclosure.
- the lane line identification modeling method is performed by the lane line recognition modeling device.
- the lane line identification modeling device can be integrated in a computing device such as a personal computer, a workstation, or a server.
- the lane line identification modeling method includes:
- the image is actually acquired on the roadway and contains image data of the lane line.
- existing lane line recognition methods mostly have problems of poor adaptability and low recognition accuracy.
- specifically, once the image acquisition environment changes, for example, when the lane lines in the image are largely occluded by other objects or when a large number of shaded areas appear in the image, false alarms or misjudgments will occur in the recognition of the lane lines in the image.
- this embodiment provides a training method for a lane line recognition model, that is, a lane line recognition modeling method.
- a convolutional neural network for accurately identifying lane lines in an image can be trained.
- the convolutional neural network can adapt to scene changes of the image and has a wider range of adaptation.
- the image area of the lane line may be enhanced by filtering the image, and the image area of the lane line is then acquired according to the enhancement. More specifically, a hat-like filter kernel for filtering the image is constructed; the image region of the lane line is enhanced by filtering the image with the hat-like filter kernel; the connected domains corresponding to the lane line are acquired according to the enhanced image region; and finally the boundaries of the connected domains are fitted with straight lines, thereby completing the recognition of the image area of the lane line.
- S12 Construct model training data by using the identified image region.
- model training data for training the lane line recognition model is constructed based on the image area of the lane line.
- the image area of the lane line may be outwardly widened, and the widened image area may be used as the area of interest.
- the region of interest is training data for training the lane line recognition model.
- the lane line recognition model is a lane line recognition model based on a convolutional neural network.
- the convolutional neural network includes a number of convolutional layers and subsampling layers; the number of convolutional layers is the same as the number of subsampling layers.
- the convolutional neural network also includes a number of fully connected layers. After acquiring an image input to it, the convolutional neural network can give the probability that the image belongs to a real lane line, that is, the value of the output probability of the lane line recognition model.
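The structure just described (paired convolutional and subsampling layers followed by fully connected layers, producing an output probability) can be sketched with a minimal self-contained forward pass. This is not the patent's actual network: the layer counts, sizes, and weights are not specified in the text, so the kernel and weights below are random stand-ins that merely mirror the described structure.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def subsample(img, factor=2):
    """Mean-pooling subsampling layer paired with the convolutional layer."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
patch = rng.random((16, 16))          # stand-in for a region-of-interest image
kernel = rng.standard_normal((3, 3))  # one convolution kernel (random, not trained)

# conv -> ReLU -> subsample, then a fully connected layer with sigmoid output.
feat = subsample(np.maximum(conv2d_valid(patch, kernel), 0.0))
w = rng.standard_normal(feat.size)
prob = sigmoid(feat.ravel() @ w)      # output probability of the model

assert 0.0 < prob < 1.0
```

In a trained model the kernel and weights would be learned from the model training data; here they only demonstrate how the network maps an input image region to a single probability value.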
- the image region of the lane line is identified from the image based on the two-dimensional filtering, the model training data is constructed by using the identified image region, and the lane line recognition model based on the convolutional neural network is trained by using the model training data.
- the comprehensive consideration of various abnormal situations that may occur in the image area of the lane line in the image is realized, and the detection accuracy of detecting the lane line is improved.
- the present embodiment further provides a technical solution for the lane line identification modeling method based on the above embodiments of the present disclosure.
- before the image region of the lane line is identified from the image based on two-dimensional filtering, the method further includes: performing inverse projection transformation on the original image to adjust the optical axis direction of the original image to be perpendicular to the ground.
- the method for identifying and identifying a lane line includes:
- S21 Perform inverse projection transformation on the original image to adjust the optical axis direction of the original image to be perpendicular to the ground.
- the camera used to acquire the image normally acquires it with the optical axis in a direction substantially parallel to the road surface.
- the inverse projection transformation, also referred to as inverse perspective mapping, is used to map pixel points in the two-dimensional image acquired by the camera into three-dimensional space. More specifically, assume that the camera's pitch angle, yaw angle, and roll angle are α, β, and γ respectively, the focal lengths of the camera in the vertical and horizontal directions are f_u and f_v respectively, the abscissa and ordinate of the camera's optical center are c_u and c_v respectively, and the normalization parameter is s; the inverse projection transformation is then performed according to formula (1):
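Formula (1) itself is not reproduced in this text, so the following is only a hedged sketch of a standard inverse perspective mapping under a pinhole camera model. The rotation order (Rz·Ry·Rx), the pairing of f_u/f_v with the image axes, the ground plane z_w = 0, and the extra cam_height parameter are all assumptions, not details taken from the patent.

```python
import numpy as np

def rotation(alpha, beta, gamma):
    """Rotation from pitch alpha, yaw beta, and roll gamma (Rz @ Ry @ Rx assumed)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def image_to_ground(u, v, f_u, f_v, c_u, c_v, R, cam_height):
    """Back-project pixel (u, v) onto the ground plane z_w = 0 (pinhole model)."""
    ray_cam = np.array([(u - c_u) / f_u, (v - c_v) / f_v, 1.0])
    ray_w = R @ ray_cam
    s = cam_height / -ray_w[2]                          # normalization parameter s
    return np.array([0.0, 0.0, cam_height]) + s * ray_w  # (x_w, y_w, z_w)

R = rotation(np.deg2rad(-95.0), 0.0, 0.0)  # camera pitched down toward the road
p = image_to_ground(u=640.0, v=360.0, f_u=800.0, f_v=800.0,
                    c_u=640.0, c_v=360.0, R=R, cam_height=1.5)
assert abs(p[2]) < 1e-9  # the mapped point lies on the ground plane
```

The net effect matches the stated purpose of the transformation: pixels are mapped into three-dimensional road coordinates, which is equivalent to viewing the road with an optical axis perpendicular to the ground.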
- S23 Construct model training data by using the identified image region.
- the original image is inversely projected and transformed so that the optical axis direction of the original image is adjusted to be perpendicular to the ground; in this way, the images are unified in optical axis direction before being input to the convolutional neural network, which improves the accurate recognition rate of the lane lines in the image.
- This embodiment is based on the above-described embodiment of the present disclosure, and further provides a technical solution for the identification step in the lane line identification modeling method.
- the image area of the lane line is identified from the image based on two-dimensional filtering: the image is filtered by using hat-like filter kernels having different width parameters and height parameters, and the image in which the edges are most obvious is selected as the filtered result image; the filtered result image is binarized to form at least one connected domain; and the boundary of each connected domain is fitted with a straight line by using a modified ransac algorithm.
- the image area of the lane line is identified from the image, including:
- I(x, y) is the gray value of the filtered pixel
- I(u, v) is the gray value of the pixel before filtering
- w is the width parameter of the filtering process
- h is the height parameter of the filtering process.
- the parameter w is equal to the width of the lane line itself
- the parameter h is equal to the height of the lane line itself.
- the image is separately filtered by using a set of hat-like filter kernels having different width parameters and height parameters, and the image with the most obvious enhancement effect is then selected from the filtering results and used as the filtered result image.
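The kernel-bank selection just described can be sketched as follows. The zero-mean hat shape (positive center of width w flanked by negative lobes, repeated over h rows) and the max-minus-min contrast criterion for "most obvious enhancement" are plausible assumptions, not the patent's exact construction.

```python
import numpy as np

def hat_kernel(w, h):
    """Hat-like kernel: positive center of width w flanked by negative side
    lobes, repeated over h rows; zero-mean so flat regions give zero response."""
    row = np.concatenate([-np.ones(w), 2.0 * np.ones(w), -np.ones(w)])
    k = np.tile(row, (h, 1))
    return k / k.size

def filter_image(img, kernel):
    """Naive 'same' correlation with zero padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh - 1 - kh // 2), (kw // 2, kw - 1 - kw // 2)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic road image: dark background with one bright 4-pixel-wide stripe.
img = np.zeros((32, 32))
img[:, 14:18] = 1.0

# Try several (w, h) kernels and keep the response with the strongest contrast.
responses = {(w, h): filter_image(img, hat_kernel(w, h))
             for w in (2, 4, 8) for h in (3, 5)}
best = max(responses, key=lambda k: responses[k].max() - responses[k].min())
assert best[0] == 4  # the kernel whose width matches the stripe responds most strongly
```

This illustrates why the width parameter w is chosen to match the width of the lane line itself: the matched kernel yields the sharpest enhancement of the lane line region.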
- the region corresponding to the lane line in the filtered result image has a more significant difference from other regions of the image. At this time, if the filtered result image is binarized, the result of the binarization is more reliable.
- the binarization of the filtered result image is specifically performed as follows: pixels whose gray value is higher than a preset gray threshold are taken as pixels inside a connected domain, and pixels whose gray value is lower than or equal to the preset gray threshold are taken as pixels outside the connected domains.
- at least one connected domain is formed in the filtered result image.
- the connected domain identifies the approximate location area of the lane line in the image.
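A dependency-free sketch of the thresholding and connected-domain step described above. The threshold value and the use of 4-connectivity are illustrative assumptions; the patent does not fix either.

```python
import numpy as np

def binarize(img, threshold):
    """Pixels strictly above the preset gray threshold belong to connected domains."""
    return img > threshold

def label_regions(mask):
    """4-connected component labeling with an explicit stack (no dependencies)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1
        stack = [(sy, sx)]
        labels[sy, sx] = current
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    stack.append((ny, nx))
    return labels, current

# Two bright stripes on a dark background form two connected domains.
img = np.zeros((10, 20))
img[:, 3:5] = 0.9
img[:, 12:14] = 0.8
labels, n = label_regions(binarize(img, 0.5))
assert n == 2
```

Each labeled domain marks the approximate location of one candidate lane line, ready for the boundary line fitting that follows.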
- this embodiment uses the improved ransac algorithm to fit the boundary of the connected domain in a straight line.
- the Ransac (random sample consensus) algorithm estimates the parameters of a mathematical model from a set of sample data containing abnormal data, thereby obtaining the valid sample data.
- the existing ransac algorithm does not consider the response intensity of the sample points used to fit the line when performing straight line fitting. In other words, in the existing ransac algorithm, all sample points have the same status.
- the ransac algorithm provided in this embodiment takes the response intensity of different sample points as the weighting parameter of the sample point, weights each sample point, and performs line fitting according to the weighted value.
- a plurality of sample points may be selected on the boundary of the connected domain, and the gray value of each sample point is used as its weighting parameter when calculating the weighted number of inliers covered by the current model.
- in this way, the fitted straight line is obtained by the improved ransac algorithm provided by the present embodiment.
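The weighted inlier counting can be sketched as below. The iteration count, inlier tolerance, and synthetic data are illustrative assumptions; the patent only states that each sample point's response intensity (gray value) serves as its weight instead of a flat count of one.

```python
import numpy as np

def weighted_ransac_line(points, weights, n_iters=200, tol=1.5, seed=0):
    """RANSAC line fit in which each inlier contributes its weight (here, its
    gray value) to the model score rather than a flat count of one."""
    rng = np.random.default_rng(seed)
    best_score, best_pair = -1.0, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        dx, dy = q - p
        norm = np.hypot(dx, dy)
        if norm == 0.0:
            continue
        # Perpendicular distance of every sample point to the candidate line.
        dist = np.abs(dy * (points[:, 0] - p[0]) - dx * (points[:, 1] - p[1])) / norm
        score = weights[dist < tol].sum()  # weighted inlier count
        if score > best_score:
            best_score, best_pair = score, (p, q)
    return best_pair, best_score

rng = np.random.default_rng(1)
xs = np.arange(50.0)
boundary = np.stack([xs, 2.0 * xs + 3.0 + rng.normal(0.0, 0.3, 50)], axis=1)
clutter = rng.uniform(0.0, 100.0, size=(10, 2))  # spurious low-intensity points
points = np.vstack([boundary, clutter])
weights = np.concatenate([np.full(50, 0.9), np.full(10, 0.2)])  # gray values

(p, q), _ = weighted_ransac_line(points, weights)
slope = (q[1] - p[1]) / (q[0] - p[0])
assert abs(slope - 2.0) < 0.25  # the fitted line recovers the true boundary
```

Because high-gray-value boundary points outweigh the low-intensity clutter, the model that aligns with the true boundary scores highest even when outliers are numerous.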
- the image is filtered by using hat-like filter kernels with different width parameters and height parameters, the image with the most obvious edges is selected as the filtered result image, and the filtered result image is binarized.
- constructing the model training data includes: widening the connected domain to form a region of interest on the image; and using the image containing the region of interest as the model training data.
- constructing model training data by using the identified image regions includes:
- the boundary is widened: specifically, it may be extended by a predetermined number of pixels in the width direction and then by a predetermined number of pixels in the height direction. In this way, the widened region of interest is formed.
- Fig. 5 shows an example of the region of interest. Referring to Fig. 5, in this example, the area enclosed by the solid line 51 is the area of interest.
- the background image information is used as the context of the lane line, thereby contributing to improving the recognition accuracy of the trained lane line recognition model.
- the connected domain is widened to form a region of interest on the image, and an image including the region of interest is used as the model training data, thereby realizing the construction of the model training data so that the constructed training data can be used to train the lane line recognition model.
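The widening step above can be sketched as simple bounding-box padding. The pad sizes and the clipping to image bounds are illustrative assumptions; the patent only specifies a predetermined number of pixels in each direction.

```python
def widen_region(bbox, img_shape, pad_w=10, pad_h=10):
    """Widen a connected domain's bounding box (x0, y0, x1, y1) by a preset
    number of pixels in the width and height directions, clipped to the image."""
    x0, y0, x1, y1 = bbox
    h, w = img_shape
    return (max(x0 - pad_w, 0), max(y0 - pad_h, 0),
            min(x1 + pad_w, w), min(y1 + pad_h, h))

# A tall, narrow lane line domain grows into a region of interest that also
# carries some surrounding background as context.
roi = widen_region((30, 5, 38, 60), img_shape=(64, 64))
assert roi == (20, 0, 48, 64)
```

Including the widened margin is what gives the training sample its background context, which the text credits with improving the recognition accuracy of the trained model.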
- This embodiment provides a technical solution for the method for identifying a lane line.
- the difference from the lane line recognition modeling method introduced in the above embodiments of the present disclosure is that the modeling method is used to build the lane line recognition model, whereas the lane line identification method provided by this embodiment uses the lane line recognition model established in the above embodiments to identify lane lines in an image.
- the lane line identification method includes:
- the image area of the lane line is identified from the image in the manner described in the third embodiment of the present disclosure: the image is filtered with the hat-like filter kernels, the filtered image is binarized, and finally the improved ransac algorithm is used to fit the boundaries of the connected domains obtained by binarization, thereby realizing the identification of the image area of the lane line.
- an image of the image area of the lane line is input to the convolutional neural network; after acquiring the input image, the convolutional neural network processes it and outputs, for each identified image area of a lane line in the image, the probability that the area belongs to the image area of a real lane line
- model reconstruction is performed according to a depth search technique to identify lane lines in an input image.
- the possible lane lines are divided into k groups, and the length weight of each lane line in each group, as well as the angle difference weight and the distance difference weight between the groups, are calculated.
- the length weight of a lane line is given by equation (3):
- H and l_i represent the height and width of the lane line, respectively.
- θ_i represents the angle of the i-th lane line
- θ_j represents the angle of the j-th lane line
- δ_angle represents the angle difference threshold
- l_max represents the maximum distance threshold and l_min represents the minimum distance threshold.
- the group of lane lines that maximizes the value of the objective function shown in equation (6) is regarded as the true lane lines.
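Since equations (3) and (6) are not reproduced in this text, the sketch below uses illustrative stand-in forms: a length weight that rewards lines spanning more of the image height, and a group objective that penalizes pairwise angle differences above δ_angle and spacings outside [l_min, l_max]. These exact forms, and the (angle, length, offset) line representation, are assumptions for demonstration only.

```python
import numpy as np

def length_weight(height, length):
    """Illustrative stand-in for equation (3): longer lines relative to the
    image height score higher, capped at 1."""
    return min(length / height, 1.0)

def group_score(lines, height, angle_thresh=np.deg2rad(10), l_min=50.0, l_max=500.0):
    """Illustrative stand-in for the objective of equation (6). Each line is
    (angle, length, x_offset); length weights reward long lines, and pairwise
    penalties discourage inconsistent angles or implausible spacing."""
    score = sum(length_weight(height, l) for _, l, _ in lines)
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            a_i, _, x_i = lines[i]
            a_j, _, x_j = lines[j]
            if abs(a_i - a_j) > angle_thresh:
                score -= 1.0  # angle difference penalty
            if not (l_min <= abs(x_i - x_j) <= l_max):
                score -= 1.0  # spacing penalty
    return score

H = 400.0
# Two candidate groups: long, parallel, well-spaced lines vs. a crossing, cramped pair.
good = [(np.deg2rad(88), 380.0, 100.0), (np.deg2rad(90), 390.0, 280.0)]
bad = [(np.deg2rad(60), 150.0, 100.0), (np.deg2rad(90), 390.0, 120.0)]
best = max([good, bad], key=lambda g: group_score(g, H))
assert best is good  # the consistent group is selected as the true lane lines
```

The search over candidate groups mirrors the reconstruction step: whichever group maximizes the objective is reported as the set of real lane lines.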
- Figure 7 shows sample images taken in several special scenes.
- Figure 7A shows a sample image of a large number of occlusion scenes.
- Figure 7B shows a sample image in a scene with shadows.
- Fig. 7C shows a sample image in a lighting change scene.
- Figure 7D shows a sample image of a ground marker interference scene.
- the image region of the lane line is identified from the image based on two-dimensional filtering, the image of the image region in which the lane line has been identified is input to the lane line recognition model based on the convolutional neural network to obtain the output probability of the model, and model reconstruction is performed based on the output probability to identify the lane lines in the input image; this is able to adapt to different changes of the input image and improves the recognition accuracy of the lane line.
- the present embodiment further provides a technical solution of the lane line identification method based on the fifth embodiment of the present disclosure.
- before the image area of the lane line is identified from the image based on two-dimensional filtering, the method further includes: performing inverse projection transformation on the original image to adjust the optical axis direction of the original image to be perpendicular to the ground.
- the method for identifying the lane line includes:
- in the present embodiment, inverse projection transformation is performed on the original image before the image region of the lane line is recognized from the image based on two-dimensional filtering, so as to adjust the optical axis direction of the original image to be perpendicular to the ground; thus the images in which lane lines need to be recognized are unified in optical axis direction before being input to the convolutional neural network, which improves the accurate recognition rate of the lane lines in the image.
- the lane line identification modeling device includes an identification module 92, a construction module 93, and a training module 94.
- the identification module 92 is configured to identify an image region of a lane line from the image based on the two-dimensional filtering.
- the constructing module 93 is configured to construct model training data by using the identified image regions.
- the training module 94 is configured to train a lane line recognition model based on a convolutional neural network by using the model training data.
- the lane line identification modeling device further includes: a transform module 91.
- the transforming module 91 is configured to perform inverse projection transformation on the original image before identifying the image region of the lane line from the background image based on the two-dimensional filtering to adjust the optical axis direction of the original image to be perpendicular to the ground.
- transform module 91 is specifically configured to: perform inverse projection transformation on the original image according to the following formula:
- α, β and γ are the camera's pitch angle, yaw angle and roll angle, respectively
- f_u and f_v are the focal lengths of the camera in the vertical and horizontal directions, respectively
- c_u and c_v are respectively the abscissa and ordinate of the camera's optical center
- x_w, y_w and z_w respectively represent the three-dimensional coordinates of the coordinate point in the transformed three-dimensional space
- the identification module 92 includes: a filtering unit, a binarization unit, and a fitting unit.
- the filtering unit is configured to filter the image by using hat-like filter kernels having different width parameters and height parameters, and to select the image in which the edges are most obvious as the filtered result image.
- the binarization unit is configured to binarize the filtered result image to form at least one connected domain.
- the fitting unit is configured to perform a straight line fitting of the boundary of the connected domain using a modified ransac algorithm.
- the constructing module 93 includes: a widening unit and a data acquiring unit.
- the widening unit is configured to widen the connected domain to form a region of interest on an image.
- the data acquisition unit is configured to use an image including the region of interest as the model training data.
- the identification device of the lane line includes an area identification module 102, a probability calculation module 103, and a model reconstruction module 104.
- the region identification module 102 is configured to identify an image region of a lane line from the image based on the two-dimensional filtering.
- the probability calculation module 103 is configured to input an image of an image region in which a lane line has been identified to a lane line recognition model based on a convolutional neural network, to obtain an output probability of the model.
- the model reconstruction module 104 is configured to perform model reconstruction based on the output probability to identify lane lines in the input image.
- the identification device of the lane line further includes: a reverse projection transformation module 101.
- the inverse projection transformation module 101 is configured to perform inverse projection transformation on the original image before the image region of the lane line is identified from the image based on the two-dimensional filtering, so as to adjust the optical axis direction of the original image to be perpendicular to the ground. .
- model reconstruction module 104 is specifically configured to:
- Embodiments of the present disclosure provide a storage medium including computer executable instructions for performing a recognition modeling method of a lane line when executed by a computer processor, the method comprising:
- a lane line recognition model based on convolutional neural networks is trained.
- when the foregoing storage medium performs the method, before the image area of the lane line is identified from the image based on two-dimensional filtering, the method further includes:
- the original image is subjected to inverse projection transformation to adjust the optical axis direction of the original image to be perpendicular to the ground.
- when the storage medium performs the method, performing inverse projection transformation on the original image to adjust the optical axis direction of the original image to be perpendicular to the ground includes:
- α, β, γ are the camera's pitch angle, yaw angle, and roll angle, respectively
- f_u and f_v are the focal lengths of the camera in the horizontal and vertical directions, respectively
- c_u and c_v are the abscissa and ordinate of the camera's optical center
- x_w, y_w, and z_w respectively denote the three-dimensional coordinates of a point in the transformed three-dimensional space
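The transformation formula these parameters belong to does not survive the extraction. As orientation only, the sketch below composes these quantities in the standard pinhole fashion, projecting a world point through the camera rotation and intrinsics; the rotation order and sign conventions are assumptions, not necessarily the patent's exact matrix:

```python
import numpy as np

def rotation(alpha, beta, gamma):
    # pitch (alpha), yaw (beta), roll (gamma) as elementary rotations
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(xw, yw, zw, alpha, beta, gamma, fu, fv, cu, cv):
    # world point (x_w, y_w, z_w) -> pixel (u, v) via rotation and
    # the intrinsic matrix built from f_u, f_v, c_u, c_v
    K = np.array([[fu, 0, cu], [0, fv, cv], [0, 0, 1.0]])
    p = K @ rotation(alpha, beta, gamma) @ np.array([xw, yw, zw], float)
    return p[0] / p[2], p[1] / p[2]
```

Inverting this mapping for ground-plane points (z_w fixed) yields the inverse projection ("bird's-eye") image whose optical axis is perpendicular to the ground.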
- identifying an image area of the lane line from the image includes:
- the background image is filtered using hat-like filter kernels with different width and height parameters, and the image whose edges are most salient is selected as the filtering result image;
- straight-line fitting of the boundaries of the connected domains is performed using an improved RANSAC algorithm.
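The exact kernel shape is not given here; a common reading of "hat-like" is a positive center band flanked by negative side bands, so that a bright lane marking of roughly the band's width produces a strong response. A hedged NumPy sketch of filtering with several width/height parameters and keeping the strongest-response image (the kernel profile and selection criterion are assumptions):

```python
import numpy as np

def hat_kernel(width, height):
    # positive center band of `width` pixels flanked by negative side
    # bands of the same width, replicated over `height` rows
    row = np.concatenate([-np.ones(width), 2 * np.ones(width), -np.ones(width)])
    return np.tile(row, (height, 1)) / (height * width)

def filter_response(img, kernel):
    # valid 2D cross-correlation; returns the filtered image
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def best_filtered(img, widths, heights):
    # try kernels with different width/height parameters and keep the
    # result image with the strongest peak response (most salient edges)
    results = [filter_response(img, hat_kernel(w, h))
               for w in widths for h in heights]
    return max(results, key=lambda r: r.max())
```

The kernel whose width matches the marking produces the largest response, which is why sweeping the width/height parameters adapts the filter to lane lines of different apparent sizes.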
- constructing the model training data using the identified image region includes:
- An image containing the region of interest is used as the model training data.
- FIG. 11 is a schematic diagram of a hardware structure of an apparatus for performing a lane line identification modeling method according to a tenth embodiment of the present disclosure.
- the device includes:
- one or more processors 1110 (one processor 1110 is taken as an example in FIG. 11);
- a memory 1120; and one or more modules.
- the device may further include: an input device 1130 and an output device 1140.
- the processor 1110, the memory 1120, the input device 1130, and the output device 1140 in the device may be connected by a bus or other means; connection via a bus is taken as an example in FIG. 11.
- the memory 1120 is a computer readable storage medium, and can be used to store a software program, a computer executable program, and a module, such as a program instruction/module corresponding to the lane line identification modeling method in the embodiment of the present disclosure (for example, FIG. 9
- the processor 1110 executes various functional applications and data processing of the device by running the software programs, instructions, and modules stored in the memory 1120, that is, implements the lane line identification modeling method in the above method embodiment.
- the memory 1120 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function; the storage data area may store data created according to usage of the terminal device, and the like.
- memory 1120 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
- the memory 1120 may further include memory remotely located relative to the processor 1110, which may be connected to the terminal device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
- Input device 1130 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal.
- the output device 1140 may include a display device such as a display screen.
- the one or more modules are stored in the memory 1120, and when executed by the one or more processors 1110, perform the following operations:
- a lane line recognition model based on convolutional neural networks is trained.
- the method further includes:
- the original image is subjected to inverse projection transformation to adjust the optical axis direction of the original image to be perpendicular to the ground.
- performing inverse projection transformation on the original image to adjust the optical axis direction of the original image to be perpendicular to the ground includes:
- α, β, γ are the camera's pitch angle, yaw angle, and roll angle, respectively
- f_u and f_v are the focal lengths of the camera in the horizontal and vertical directions, respectively
- c_u and c_v are the abscissa and ordinate of the camera's optical center
- x_w, y_w, and z_w respectively denote the three-dimensional coordinates of a point in the transformed three-dimensional space
- identifying the image area of the lane line from the image includes:
- the background image is filtered using hat-like filter kernels with different width and height parameters, and the image whose edges are most salient is selected as the filtering result image;
- straight-line fitting of the boundaries of the connected domains is performed using an improved RANSAC algorithm.
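The "improved" RANSAC variant is not reproduced in this extract. For orientation, a plain textbook RANSAC line fit is sketched below (sample two points, fit a line, count inliers, keep the best model); any improvements the patent adds on top of this baseline are not represented:

```python
import random

def fit_line(p, q):
    # line through two points as (slope, intercept); assumes p.x != q.x
    m = (q[1] - p[1]) / (q[0] - p[0])
    return m, p[1] - m * p[0]

def ransac_line(points, iters=200, tol=0.5, seed=0):
    # classic RANSAC: repeatedly fit a line to a random 2-point sample
    # and keep the model supported by the most inliers
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        if p[0] == q[0]:
            continue  # vertical sample; skip degenerate fit
        m, b = fit_line(p, q)
        inliers = sum(1 for x, y in points if abs(y - (m * x + b)) <= tol)
        if inliers > best_inliers:
            best, best_inliers = (m, b), inliers
    return best, best_inliers
```

Fitting the connected-domain boundary this way is robust to the abnormal boundary points that a least-squares fit would be dragged toward.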
- constructing the model training data by using the identified image region includes:
- An image containing the region of interest is used as the model training data.
- Embodiments of the present disclosure provide a storage medium including computer executable instructions for performing a lane line identification method when executed by a computer processor, the method comprising:
- model reconstruction is performed to identify lane lines in the input image.
- when the foregoing storage medium performs the method, before identifying the image region of the lane line from the image based on two-dimensional filtering, the method further includes:
- the original image is subjected to inverse projection transformation to adjust the optical axis direction of the original image to be perpendicular to the ground.
- performing model reconstruction based on the output probability to identify lane lines in the input image includes:
- FIG. 12 is a schematic diagram showing the hardware structure of an apparatus for performing a lane line identification method according to a twelfth embodiment of the present disclosure.
- the device includes:
- one or more processors 1210 (one processor 1210 is taken as an example in FIG. 12);
- a memory 1220; and one or more modules.
- the device may also include an input device 1230 and an output device 1240.
- the processor 1210, the memory 1220, the input device 1230, and the output device 1240 in the device may be connected by a bus or other means; connection via a bus is taken as an example in FIG. 12.
- the memory 1220 is a computer readable storage medium, and can be used to store a software program, a computer executable program, and a module, such as a program instruction/module corresponding to the lane line identification method in the embodiment of the present disclosure (for example, as shown in FIG.
- the processor 1210 executes various functional applications and data processing of the device by running the software programs, instructions, and modules stored in the memory 1220, that is, implements the lane line identification method in the above method embodiment.
- the memory 1220 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application required for at least one function; the storage data area may store data created according to usage of the terminal device, and the like.
- memory 1220 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
- memory 1220 can further include memory remotely located relative to processor 1210, which can be connected to the terminal device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
- Input device 1230 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal.
- the output device 1240 may include a display device such as a display screen.
- the one or more modules are stored in the memory 1220, and when executed by the one or more processors 1210, perform the following operations:
- model reconstruction is performed to identify lane lines in the input image.
- the method further includes:
- the original image is subjected to inverse projection transformation to adjust the optical axis direction of the original image to be perpendicular to the ground.
- performing model reconstruction based on the output probability to identify lane lines in the input image includes:
- The foregoing may be stored in a storage medium such as a ROM (Read-Only Memory), a RAM (Random Access Memory), a FLASH memory, a hard disk, or an optical disk, including a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present disclosure.
- The units and modules included above are divided only according to functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented; the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the scope of protection of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Medical Informatics (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (20)
- A lane line identification modeling method, comprising: identifying an image region of a lane line from an image based on two-dimensional filtering; constructing model training data using the identified image region; and training a convolutional neural network-based lane line recognition model using the model training data.
- The method according to claim 1, wherein before identifying the image region of the lane line from the background image based on two-dimensional filtering, the method further comprises: performing inverse projection transformation on the original image to adjust the optical axis direction of the original image to be perpendicular to the ground.
- The method according to any one of claims 1 to 3, wherein identifying the image region of the lane line from the image based on two-dimensional filtering comprises: filtering the background image using hat-like filter kernels with different width and height parameters, and selecting the image whose edges are most salient as the filtering result image; binarizing the filtering result image to form at least one connected domain; and performing straight-line fitting of the boundaries of the connected domains using an improved RANSAC algorithm.
- The method according to any one of claims 1 to 3, wherein constructing the model training data using the identified image region comprises: widening the connected domains to form regions of interest on the image; and using images containing the regions of interest as the model training data.
- A lane line identification modeling apparatus, comprising: an identification module configured to identify an image region of a lane line from an image based on two-dimensional filtering; a construction module configured to construct model training data using the identified image region; and a training module configured to train a convolutional neural network-based lane line recognition model using the model training data.
- The apparatus according to claim 6, further comprising: a transformation module configured to perform inverse projection transformation on the original image before the image region of the lane line is identified from the background image based on two-dimensional filtering, so as to adjust the optical axis direction of the original image to be perpendicular to the ground.
- The apparatus according to any one of claims 6 to 8, wherein the identification module comprises: a filtering unit configured to filter the background image using hat-like filter kernels with different width and height parameters, and select the image whose edges are most salient as the filtering result image; a binarization unit configured to binarize the filtering result image to form at least one connected domain; and a fitting unit configured to perform straight-line fitting of the boundaries of the connected domains using an improved RANSAC algorithm.
- The apparatus according to any one of claims 6 to 8, wherein the construction module comprises: a widening unit configured to widen the connected domains to form regions of interest on the image; and a data acquisition unit configured to use images containing the regions of interest as the model training data.
- A lane line identification method, comprising: identifying an image region of a lane line from an image based on two-dimensional filtering; inputting an image of the image region in which the lane line has been identified to a convolutional neural network-based lane line recognition model to obtain an output probability of the model; and performing model reconstruction based on the output probability to identify lane lines in the input image.
- The method according to claim 11, wherein before identifying the image region of the lane line from the image based on two-dimensional filtering, the method further comprises: performing inverse projection transformation on the original image to adjust the optical axis direction of the original image to be perpendicular to the ground.
- A lane line identification apparatus, comprising: a region identification module configured to identify an image region of a lane line from an image based on two-dimensional filtering; a probability calculation module configured to input an image of the image region in which the lane line has been identified to a convolutional neural network-based lane line recognition model to obtain an output probability of the model; and a model reconstruction module configured to perform model reconstruction based on the output probability to identify lane lines in the input image.
- The apparatus according to claim 14, further comprising: an inverse projection transformation module configured to perform inverse projection transformation on the original image before the image region of the lane line is identified from the image based on two-dimensional filtering, so as to adjust the optical axis direction of the original image to be perpendicular to the ground.
- A storage medium containing computer-executable instructions which, when executed by a computer processor, perform a lane line identification modeling method, the method comprising: identifying an image region of a lane line from an image based on two-dimensional filtering; constructing model training data using the identified image region; and training a convolutional neural network-based lane line recognition model using the model training data.
- A device, comprising: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the following operations: identifying an image region of a lane line from an image based on two-dimensional filtering; constructing model training data using the identified image region; and training a convolutional neural network-based lane line recognition model using the model training data.
- A storage medium containing computer-executable instructions which, when executed by a computer processor, perform a lane line identification method, the method comprising: identifying an image region of a lane line from an image based on two-dimensional filtering; inputting an image of the image region in which the lane line has been identified to a convolutional neural network-based lane line recognition model to obtain an output probability of the model; and performing model reconstruction based on the output probability to identify lane lines in the input image.
- A device, comprising: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the following operations: identifying an image region of a lane line from an image based on two-dimensional filtering; inputting an image of the image region in which the lane line has been identified to a convolutional neural network-based lane line recognition model to obtain an output probability of the model; and performing model reconstruction based on the output probability to identify lane lines in the input image.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020187005239A KR102143108B1 (ko) | 2015-08-03 | 2015-12-31 | 차선 인식 모델링 방법, 장치, 저장 매체 및 기기, 및 인식 방법, 장치, 저장 매체 및 기기 |
US15/750,127 US10699134B2 (en) | 2015-08-03 | 2015-12-31 | Method, apparatus, storage medium and device for modeling lane line identification, and method, apparatus, storage medium and device for identifying lane line |
JP2018505645A JP6739517B2 (ja) | 2015-08-03 | 2015-12-31 | 車線認識のモデリング方法、装置、記憶媒体及び機器、並びに車線の認識方法、装置、記憶媒体及び機器 |
EP15900291.4A EP3321842B1 (en) | 2015-08-03 | 2015-12-31 | Lane line recognition modeling method, apparatus, storage medium, and device, recognition method and apparatus, storage medium, and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510482990.1A CN105046235B (zh) | 2015-08-03 | 2015-08-03 | 车道线的识别建模方法和装置、识别方法和装置 |
CN201510482990.1 | 2015-08-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017020528A1 true WO2017020528A1 (zh) | 2017-02-09 |
Family
ID=54452764
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/100175 WO2017020528A1 (zh) | 2015-08-03 | 2015-12-31 | 车道线的识别建模方法、装置、存储介质和设备及识别方法、装置、存储介质和设备 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10699134B2 (zh) |
EP (1) | EP3321842B1 (zh) |
JP (1) | JP6739517B2 (zh) |
KR (1) | KR102143108B1 (zh) |
CN (1) | CN105046235B (zh) |
WO (1) | WO2017020528A1 (zh) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3441909A1 (en) * | 2017-08-09 | 2019-02-13 | Samsung Electronics Co., Ltd. | Lane detection method and apparatus |
EP3506156A1 (en) * | 2017-12-29 | 2019-07-03 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for detecting lane line, and medium |
CN110413942A (zh) * | 2019-06-04 | 2019-11-05 | 联创汽车电子有限公司 | 车道线方程筛选方法及其筛选模块 |
JP2020038624A (ja) * | 2018-09-04 | 2020-03-12 | バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド | 車線処理方法及び装置 |
CN111192216A (zh) * | 2019-12-31 | 2020-05-22 | 武汉中海庭数据技术有限公司 | 一种车道线平滑处理方法及系统 |
CN112434585A (zh) * | 2020-11-14 | 2021-03-02 | 武汉中海庭数据技术有限公司 | 一种车道线的虚实识别方法、系统、电子设备及存储介质 |
US11068724B2 (en) * | 2018-10-11 | 2021-07-20 | Baidu Usa Llc | Deep learning continuous lane lines detection system for autonomous vehicles |
CN116580373A (zh) * | 2023-07-11 | 2023-08-11 | 广汽埃安新能源汽车股份有限公司 | 一种车道线优化方法、装置、电子设备和存储介质 |
CN117114141A (zh) * | 2023-10-20 | 2023-11-24 | 安徽蔚来智驾科技有限公司 | 模型训练的方法、评估方法、计算机设备及存储介质 |
Families Citing this family (63)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105046235B (zh) | 2015-08-03 | 2018-09-07 | 百度在线网络技术(北京)有限公司 | 车道线的识别建模方法和装置、识别方法和装置 |
CN105426861B (zh) * | 2015-12-02 | 2019-05-21 | 百度在线网络技术(北京)有限公司 | 车道线确定方法及装置 |
US9494438B1 (en) * | 2015-12-15 | 2016-11-15 | Honda Motor Co., Ltd. | System and method for verifying map data for a vehicle |
CN105654064A (zh) * | 2016-01-25 | 2016-06-08 | 北京中科慧眼科技有限公司 | 车道线检测方法和装置及高级驾驶辅助系统 |
CN107220580B (zh) * | 2016-03-22 | 2022-08-09 | 敦泰电子有限公司 | 基于投票决策和最小二乘法的图像识别随机取样一致算法 |
WO2018035815A1 (zh) * | 2016-08-25 | 2018-03-01 | 深圳市锐明技术股份有限公司 | 一种成对车道线的检测方法和装置 |
CN106529505A (zh) * | 2016-12-05 | 2017-03-22 | 惠州华阳通用电子有限公司 | 一种基于图像视觉的车道线检测方法 |
CN108241829A (zh) * | 2016-12-23 | 2018-07-03 | 乐视汽车(北京)有限公司 | 车辆行驶图像识别方法 |
CN108268813B (zh) * | 2016-12-30 | 2021-05-07 | 北京文安智能技术股份有限公司 | 一种车道偏离预警方法、装置及电子设备 |
CN106845424B (zh) * | 2017-01-24 | 2020-05-05 | 南京大学 | 基于深度卷积网络的路面遗留物检测方法 |
CN108509826B (zh) * | 2017-02-27 | 2022-03-01 | 千寻位置网络有限公司 | 一种遥感影像的道路识别方法及其系统 |
CN107092862A (zh) * | 2017-03-16 | 2017-08-25 | 浙江零跑科技有限公司 | 一种基于卷积神经网络的车道边缘检测方法 |
CN109426800B (zh) * | 2017-08-22 | 2021-08-13 | 北京图森未来科技有限公司 | 一种车道线检测方法和装置 |
CN109426824B (zh) * | 2017-08-28 | 2021-05-25 | 阿里巴巴(中国)有限公司 | 道路交通标线的识别方法和装置 |
DE102017216802A1 (de) * | 2017-09-22 | 2019-03-28 | Continental Teves Ag & Co. Ohg | Verfahren und vorrichtung zum erkennen von fahrspuren, fahrerassistenzsystem und fahrzeug |
CN109726615A (zh) * | 2017-10-30 | 2019-05-07 | 北京京东尚科信息技术有限公司 | 一种道路边界的识别方法和装置 |
CN108090456B (zh) * | 2017-12-27 | 2020-06-19 | 北京初速度科技有限公司 | 识别车道线模型的训练方法、车道线识别方法及装置 |
CN108491811A (zh) * | 2018-03-28 | 2018-09-04 | 中山大学 | 一种基于fpga的超低功耗实时车道线检测的方法 |
CN110348273B (zh) * | 2018-04-04 | 2022-05-24 | 北京四维图新科技股份有限公司 | 神经网络模型训练方法、系统及车道线识别方法、系统 |
CN108615358A (zh) * | 2018-05-02 | 2018-10-02 | 安徽大学 | 一种道路拥堵检测方法及装置 |
CN108694386B (zh) * | 2018-05-15 | 2021-08-10 | 华南理工大学 | 一种基于并联卷积神经网络的车道线检测方法 |
CN108960183B (zh) * | 2018-07-19 | 2020-06-02 | 北京航空航天大学 | 一种基于多传感器融合的弯道目标识别系统及方法 |
CN110795961B (zh) * | 2018-08-01 | 2023-07-18 | 新疆万兴信息科技有限公司 | 一种车道线检测方法、装置、电子设备及介质 |
CN110796606B (zh) * | 2018-08-01 | 2023-07-07 | 新疆万兴信息科技有限公司 | 一种确定IPM matrix参数的方法、装置、电子设备及介质 |
DE102018214697A1 (de) * | 2018-08-30 | 2020-03-05 | Continental Automotive Gmbh | Fahrbahnkartierungsvorrichtung |
CN109446886B (zh) * | 2018-09-07 | 2020-08-25 | 百度在线网络技术(北京)有限公司 | 基于无人车的障碍物检测方法、装置、设备以及存储介质 |
CN109389046B (zh) * | 2018-09-11 | 2022-03-29 | 昆山星际舟智能科技有限公司 | 用于自动驾驶的全天候物体识别与车道线检测方法 |
US10311338B1 (en) * | 2018-09-15 | 2019-06-04 | StradVision, Inc. | Learning method, learning device for detecting lanes on the basis of CNN and testing method, testing device using the same |
DE102018216413A1 (de) * | 2018-09-26 | 2020-03-26 | Robert Bosch Gmbh | Vorrichtung und Verfahren zur automatischen Bildverbesserung bei Fahrzeugen |
CN109389650B (zh) * | 2018-09-30 | 2021-01-12 | 京东方科技集团股份有限公司 | 一种车载相机的标定方法、装置、车辆和存储介质 |
KR102483649B1 (ko) | 2018-10-16 | 2023-01-02 | 삼성전자주식회사 | 차량 위치 결정 방법 및 차량 위치 결정 장치 |
CN109345547B (zh) * | 2018-10-19 | 2021-08-24 | 天津天地伟业投资管理有限公司 | 基于深度学习多任务网络的交通车道线检测方法及装置 |
CN109376674B (zh) * | 2018-10-31 | 2024-09-06 | 北京小米移动软件有限公司 | 人脸检测方法、装置及存储介质 |
CN109598268B (zh) * | 2018-11-23 | 2021-08-17 | 安徽大学 | 一种基于单流深度网络的rgb-d显著目标检测方法 |
CN111368605B (zh) * | 2018-12-26 | 2023-08-25 | 易图通科技(北京)有限公司 | 车道线提取方法及装置 |
US11023745B2 (en) * | 2018-12-27 | 2021-06-01 | Beijing Didi Infinity Technology And Development Co., Ltd. | System for automated lane marking |
WO2020139355A1 (en) * | 2018-12-27 | 2020-07-02 | Didi Research America, Llc | System for automated lane marking |
US11087173B2 (en) | 2018-12-27 | 2021-08-10 | Beijing Didi Infinity Technology And Development Co., Ltd. | Using image pre-processing to generate a machine learning model |
WO2020139357A1 (en) * | 2018-12-27 | 2020-07-02 | Didi Research America, Llc | Using image pre-processing to generate a machine learning model |
US10990815B2 (en) | 2018-12-27 | 2021-04-27 | Beijing Didi Infinity Technology And Development Co., Ltd. | Image pre-processing in a lane marking determination system |
CN109685850B (zh) * | 2018-12-29 | 2024-05-28 | 百度在线网络技术(北京)有限公司 | 一种横向定位方法及车载设备 |
CN109740554A (zh) * | 2019-01-09 | 2019-05-10 | 宽凳(北京)科技有限公司 | 一种道路边缘线识别方法及系统 |
US10346693B1 (en) * | 2019-01-22 | 2019-07-09 | StradVision, Inc. | Method and device for attention-based lane detection without post-processing by using lane mask and testing method and testing device using the same |
CN109816050A (zh) * | 2019-02-23 | 2019-05-28 | 深圳市商汤科技有限公司 | 物体位姿估计方法及装置 |
CN110163109B (zh) * | 2019-04-23 | 2021-09-17 | 浙江大华技术股份有限公司 | 一种车道线标注方法及装置 |
CN110569730B (zh) * | 2019-08-06 | 2022-11-15 | 福建农林大学 | 一种基于U-net神经网络模型的路面裂缝自动识别方法 |
CN110525436A (zh) * | 2019-08-27 | 2019-12-03 | 中国第一汽车股份有限公司 | 车辆换道控制方法、装置、车辆和存储介质 |
CN112446230B (zh) * | 2019-08-27 | 2024-04-09 | 中车株洲电力机车研究所有限公司 | 车道线图像的识别方法及装置 |
CN112686080A (zh) * | 2019-10-17 | 2021-04-20 | 北京京东乾石科技有限公司 | 进行车道线检测的方法和装置 |
CN112926354A (zh) * | 2019-12-05 | 2021-06-08 | 北京超星未来科技有限公司 | 一种基于深度学习的车道线检测方法及装置 |
CN111126209B (zh) * | 2019-12-09 | 2024-06-14 | 博泰车联网科技(上海)股份有限公司 | 车道线检测方法及相关设备 |
CN111191619B (zh) * | 2020-01-02 | 2023-09-05 | 北京百度网讯科技有限公司 | 车道线虚线段的检测方法、装置、设备和可读存储介质 |
US11798187B2 (en) * | 2020-02-12 | 2023-10-24 | Motive Technologies, Inc. | Lane detection and distance estimation using single-view geometry |
CN111310737B (zh) * | 2020-03-26 | 2023-10-13 | 山东极视角科技股份有限公司 | 一种车道线检测方法及装置 |
CN112639765B (zh) * | 2020-04-18 | 2022-02-11 | 华为技术有限公司 | 车道线识别异常事件确定方法、车道线识别装置及系统 |
CN111539401B (zh) * | 2020-07-13 | 2020-10-23 | 平安国际智慧城市科技股份有限公司 | 基于人工智能的车道线检测方法、装置、终端及存储介质 |
CN112902987B (zh) * | 2021-02-02 | 2022-07-15 | 北京三快在线科技有限公司 | 一种位姿修正的方法及装置 |
CN113011293B (zh) * | 2021-03-05 | 2022-09-30 | 郑州天迈科技股份有限公司 | 一种行道线参数实时提取方法 |
CN113537002B (zh) * | 2021-07-02 | 2023-01-24 | 安阳工学院 | 一种基于双模神经网络模型的驾驶环境评估方法及装置 |
CN113298050B (zh) * | 2021-07-21 | 2021-11-19 | 智道网联科技(北京)有限公司 | 车道线识别模型训练方法、装置及车道线识别方法、装置 |
CN113344929B (zh) * | 2021-08-09 | 2021-11-05 | 深圳智检慧通科技有限公司 | 一种焊点视觉检测识别方法、可读存储介质及设备 |
CN114758310B (zh) * | 2022-06-13 | 2022-10-28 | 山东博昂信息科技有限公司 | 一种基于高速监控相机的车道线检测方法、系统及装置 |
CN115240435A (zh) * | 2022-09-21 | 2022-10-25 | 广州市德赛西威智慧交通技术有限公司 | 一种基于ai技术的车辆违章行驶检测方法、装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104766058A (zh) * | 2015-03-31 | 2015-07-08 | 百度在线网络技术(北京)有限公司 | 一种获取车道线的方法和装置 |
CN104809449A (zh) * | 2015-05-14 | 2015-07-29 | 重庆大学 | 适用于高速公路视频监控系统的车道虚线分界线自动检测方法 |
CN105046235A (zh) * | 2015-08-03 | 2015-11-11 | 百度在线网络技术(北京)有限公司 | 车道线的识别建模方法和装置、识别方法和装置 |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08202877A (ja) | 1995-01-31 | 1996-08-09 | Toyota Motor Corp | 画像認識装置 |
GB9804112D0 (en) * | 1998-02-27 | 1998-04-22 | Lucas Ind Plc | Road profile prediction |
US7375728B2 (en) * | 2001-10-01 | 2008-05-20 | University Of Minnesota | Virtual mirror |
US7774113B2 (en) * | 2002-04-10 | 2010-08-10 | Trw Limited | Cameras to determine vehicle heading |
AU2003295318A1 (en) * | 2002-06-14 | 2004-04-19 | Honda Giken Kogyo Kabushiki Kaisha | Pedestrian detection and tracking with night vision |
JP4328692B2 (ja) * | 2004-08-11 | 2009-09-09 | 国立大学法人東京工業大学 | 物体検出装置 |
US7831098B2 (en) * | 2006-11-07 | 2010-11-09 | Recognition Robotics | System and method for visual searching of objects using lines |
US8098889B2 (en) * | 2007-01-18 | 2012-01-17 | Siemens Corporation | System and method for vehicle detection and tracking |
JP5014237B2 (ja) * | 2008-04-23 | 2012-08-29 | 本田技研工業株式会社 | レーンマーカ認識装置、車両、及びレーンマーカ認識用プログラム |
US8751154B2 (en) * | 2008-04-24 | 2014-06-10 | GM Global Technology Operations LLC | Enhanced clear path detection in the presence of traffic infrastructure indicator |
US8855917B2 (en) * | 2008-10-16 | 2014-10-07 | Csr Technology Inc. | System and method for use of a vehicle back-up camera as a dead-reckoning sensor |
CN102201167B (zh) * | 2010-04-07 | 2013-03-06 | 宫宁生 | 基于视频的汽车车道自动识别方法 |
US9547795B2 (en) * | 2011-04-25 | 2017-01-17 | Magna Electronics Inc. | Image processing method for detecting objects using relative motion |
EP2574511B1 (en) * | 2011-09-30 | 2016-03-16 | Honda Research Institute Europe GmbH | Analyzing road surfaces |
KR101295077B1 (ko) | 2011-12-28 | 2013-08-08 | 전자부품연구원 | 다양한 도로 상황에서의 차선 검출 및 추적 장치 |
US9576214B1 (en) * | 2012-01-23 | 2017-02-21 | Hrl Laboratories, Llc | Robust object recognition from moving platforms by combining form and motion detection with bio-inspired classification |
US9053372B2 (en) * | 2012-06-28 | 2015-06-09 | Honda Motor Co., Ltd. | Road marking detection and recognition |
JP6056319B2 (ja) * | 2012-09-21 | 2017-01-11 | 富士通株式会社 | 画像処理装置、画像処理方法および画像処理プログラム |
US9349058B2 (en) * | 2012-10-31 | 2016-05-24 | Tk Holdings, Inc. | Vehicular path sensing system and method |
US20150276400A1 (en) * | 2013-03-13 | 2015-10-01 | Electronic Scripting Products, Inc. | Reduced homography for ascertaining conditioned motion of an optical apparatus |
US9821813B2 (en) * | 2014-11-13 | 2017-11-21 | Nec Corporation | Continuous occlusion models for road scene understanding |
CN104598892B (zh) * | 2015-01-30 | 2018-05-04 | 广东威创视讯科技股份有限公司 | 一种危险驾驶行为预警方法及系统 |
CN104657727B (zh) * | 2015-03-18 | 2018-01-02 | 厦门麦克玛视电子信息技术有限公司 | 一种车道线的检测方法 |
KR102267562B1 (ko) * | 2015-04-16 | 2021-06-22 | 한국전자통신연구원 | 무인자동주차 기능 지원을 위한 장애물 및 주차구획 인식 장치 및 그 방법 |
US10062010B2 (en) * | 2015-06-26 | 2018-08-28 | Intel Corporation | System for building a map and subsequent localization |
-
2015
- 2015-08-03 CN CN201510482990.1A patent/CN105046235B/zh active Active
- 2015-12-31 KR KR1020187005239A patent/KR102143108B1/ko active IP Right Grant
- 2015-12-31 EP EP15900291.4A patent/EP3321842B1/en active Active
- 2015-12-31 US US15/750,127 patent/US10699134B2/en active Active
- 2015-12-31 WO PCT/CN2015/100175 patent/WO2017020528A1/zh active Application Filing
- 2015-12-31 JP JP2018505645A patent/JP6739517B2/ja active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104766058A (zh) * | 2015-03-31 | 2015-07-08 | 百度在线网络技术(北京)有限公司 | 一种获取车道线的方法和装置 |
CN104809449A (zh) * | 2015-05-14 | 2015-07-29 | 重庆大学 | 适用于高速公路视频监控系统的车道虚线分界线自动检测方法 |
CN105046235A (zh) * | 2015-08-03 | 2015-11-11 | 百度在线网络技术(北京)有限公司 | 车道线的识别建模方法和装置、识别方法和装置 |
Non-Patent Citations (2)
Title |
---|
See also references of EP3321842A4 * |
YANG, SHENGLAN;: "Design and Research of Vehicle Auxiliary Navigation System Based on Augmented Reality", CHINA MASTER'S THESES FULL-TEXT DATABASE, 17 September 2014 (2014-09-17), XP009504549 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10650529B2 (en) | 2017-08-09 | 2020-05-12 | Samsung Electronics Co., Ltd. | Lane detection method and apparatus |
EP3441909A1 (en) * | 2017-08-09 | 2019-02-13 | Samsung Electronics Co., Ltd. | Lane detection method and apparatus |
EP3506156A1 (en) * | 2017-12-29 | 2019-07-03 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for detecting lane line, and medium |
US10846543B2 (en) | 2017-12-29 | 2020-11-24 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for detecting lane line, and medium |
JP2020038624A (ja) * | 2018-09-04 | 2020-03-12 | バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド | 車線処理方法及び装置 |
US11068724B2 (en) * | 2018-10-11 | 2021-07-20 | Baidu Usa Llc | Deep learning continuous lane lines detection system for autonomous vehicles |
CN110413942A (zh) * | 2019-06-04 | 2019-11-05 | 联创汽车电子有限公司 | 车道线方程筛选方法及其筛选模块 |
CN110413942B (zh) * | 2019-06-04 | 2023-08-08 | 上海汽车工业(集团)总公司 | 车道线方程筛选方法及其筛选模块 |
CN111192216A (zh) * | 2019-12-31 | 2020-05-22 | 武汉中海庭数据技术有限公司 | 一种车道线平滑处理方法及系统 |
CN112434585A (zh) * | 2020-11-14 | 2021-03-02 | 武汉中海庭数据技术有限公司 | 一种车道线的虚实识别方法、系统、电子设备及存储介质 |
CN116580373A (zh) * | 2023-07-11 | 2023-08-11 | 广汽埃安新能源汽车股份有限公司 | 一种车道线优化方法、装置、电子设备和存储介质 |
CN116580373B (zh) * | 2023-07-11 | 2023-09-26 | 广汽埃安新能源汽车股份有限公司 | 一种车道线优化方法、装置、电子设备和存储介质 |
CN117114141A (zh) * | 2023-10-20 | 2023-11-24 | 安徽蔚来智驾科技有限公司 | 模型训练的方法、评估方法、计算机设备及存储介质 |
CN117114141B (zh) * | 2023-10-20 | 2024-02-27 | 安徽蔚来智驾科技有限公司 | 模型训练的方法、评估方法、计算机设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
EP3321842A1 (en) | 2018-05-16 |
CN105046235A (zh) | 2015-11-11 |
EP3321842A4 (en) | 2018-08-15 |
CN105046235B (zh) | 2018-09-07 |
JP2018523875A (ja) | 2018-08-23 |
EP3321842B1 (en) | 2020-04-29 |
US20180225527A1 (en) | 2018-08-09 |
KR102143108B1 (ko) | 2020-08-10 |
US10699134B2 (en) | 2020-06-30 |
JP6739517B2 (ja) | 2020-08-12 |
KR20180034529A (ko) | 2018-04-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017020528A1 (zh) | 车道线的识别建模方法、装置、存储介质和设备及识别方法、装置、存储介质和设备 | |
US10607362B2 (en) | Remote determination of containers in geographical region | |
US10192323B2 (en) | Remote determination of containers in geographical region | |
JP6926335B2 (ja) | 深層学習における回転可変物体検出 | |
WO2017041396A1 (zh) | 一种车道线数据的处理方法、装置、存储介质及设备 | |
WO2019218824A1 (zh) | 一种移动轨迹获取方法及其设备、存储介质、终端 | |
Zhang et al. | Semi-automatic road tracking by template matching and distance transformation in urban areas | |
CN110781885A (zh) | 基于图像处理的文本检测方法、装置、介质及电子设备 | |
AU2016315938A1 (en) | Systems and methods for analyzing remote sensing imagery | |
US20150104070A1 (en) | Detecting and identifying parking lots in remotely-sensed images | |
Ge et al. | Vehicle detection and tracking based on video image processing in intelligent transportation system | |
US20220004740A1 (en) | Apparatus and Method For Three-Dimensional Object Recognition | |
EP3553700A2 (en) | Remote determination of containers in geographical region | |
Xu et al. | Building height calculation for an urban area based on street view images and deep learning | |
CN114332633A (zh) | 雷达图像目标检测识别方法、设备和存储介质 | |
Li et al. | Automatic Road Extraction from High-Resolution Remote Sensing Image Based on Bat Model and Mutual Information Matching. | |
CN116823884A (zh) | 多目标跟踪方法、系统、计算机设备及存储介质 | |
Kumar | Solar potential analysis of rooftops using satellite imagery | |
CN115565072A (zh) | 一种道路垃圾识别和定位方法、装置、电子设备及介质 | |
CN104236518B (zh) | 一种基于光学成像与模式识别的天线主波束指向探测方法 | |
Widyaningrum et al. | Tailored features for semantic segmentation with a DGCNN using free training samples of a colored airborne point cloud | |
CN102938156B (zh) | 一种基于积分图像的面状注记配置方法 | |
CN117576369A (zh) | 火灾检测方法、装置以及存储介质 | |
CN117218616A (zh) | 车道线检测方法、装置、计算机设备和存储介质 | |
CN118154856A (zh) | 一种目标检测方法、装置及相关设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15900291 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2018505645 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15750127 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20187005239 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015900291 Country of ref document: EP |