CN112699825A - Lane line identification method and device - Google Patents

Lane line identification method and device Download PDF

Info

Publication number
CN112699825A
CN112699825A (application CN202110009714.9A)
Authority
CN
China
Prior art keywords
image
contour
lane line
bird
roi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110009714.9A
Other languages
Chinese (zh)
Inventor
李欢 (Li Huan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Pateo Network Technology Service Co Ltd
Original Assignee
Shanghai Pateo Network Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Pateo Network Technology Service Co Ltd filed Critical Shanghai Pateo Network Technology Service Co Ltd
Priority to CN202110009714.9A
Publication of CN112699825A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a lane line identification method and device. The lane line identification method comprises the following steps: acquiring a road image in the direction of travel of a vehicle; preprocessing the road image to obtain a bird's-eye view image corresponding to the road image; detecting contour lines extending in the road direction in the bird's-eye view image based on the shape features of lane lines, to form a contour line set; and merging contour lines in the contour line set based on the similarity of their slopes, to determine the lane lines in the bird's-eye view image.

Description

Lane line identification method and device
Technical Field
The invention relates to the field of driver assistance, and in particular to a lane line identification method and a lane line identification device.
Background
With rising living standards and continued social progress, vehicles have become an indispensable means of travel. To improve the driving experience, a vehicle may be equipped with a driving recorder, an in-vehicle navigation device, and other equipment. When a vehicle travels on a road, its wheels may press on a lane line, and driving on the line for an extended time violates traffic rules. If the lane lines on the road are identified, the driver can be alerted to line-pressing behavior, based on the identified lane line positions, whenever the vehicle drives on a line.
Current lane line identification technology falls mainly into two categories. First, the deep learning methods used in autonomous driving place performance demands on the computing system that are too high for typical in-vehicle navigation hardware to meet. Second, Hough line detection based on traditional algorithms demands far less of the computing system than deep learning, but it generally detects only straight lines and requires additional processing to handle curves.
The invention provides a lane line recognition method and device that identify both straight and curved lane lines, improve the real-time performance of lane line recognition, and reduce its computational cost.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
According to an aspect of the present invention, there is provided a lane line identification method including: acquiring a road image in the direction of travel of a vehicle; preprocessing the road image to obtain a bird's-eye view image corresponding to the road image; detecting contour lines extending in the road direction in the bird's-eye view image based on the shape features of lane lines, to form a contour line set; and merging contour lines in the contour line set based on the similarity of their slopes, to determine the lane lines in the bird's-eye view image.
According to another aspect of the present invention, there is provided a lane line identification apparatus comprising a memory, a processor and a computer program stored on the memory, the processor being adapted to implement the steps of the lane line identification method described above when executing the computer program stored on the memory.
According to yet another aspect of the present invention, there is provided a computer storage medium having stored thereon a computer program that, when executed, performs the steps of the lane line identification method described above.
Drawings
The above features and advantages of the present disclosure will be better understood upon reading the detailed description of embodiments of the disclosure in conjunction with the following drawings.
Fig. 1 is a schematic flow chart of a lane line identification method in an embodiment according to an aspect of the present invention;
FIG. 2 is a partial flow diagram of a lane marking identification method in one embodiment according to one aspect of the present invention;
FIG. 3 is a partial flow diagram of a lane marking identification method in one embodiment according to one aspect of the present invention;
FIG. 4 is a partial flow diagram of a lane marking identification method in one embodiment according to one aspect of the present invention;
FIG. 5 is a partial flow diagram of a lane marking identification method in one embodiment according to one aspect of the present invention;
FIG. 6 is a partial flow diagram of a lane marking identification method in one embodiment according to one aspect of the present invention;
fig. 7 is a block diagram of a lane line recognition apparatus according to another aspect of the present invention.
Detailed Description
The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application. Various modifications, as well as various uses in different applications, will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to a wide range of embodiments. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the practice of the invention may not necessarily be limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
The reader's attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All the features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
Note that where used, the designations left, right, front, back, top, bottom, positive, negative, clockwise, and counterclockwise are used for convenience only and do not imply any particular fixed orientation. In fact, they are used to reflect the relative position and/or orientation between the various parts of the object. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
It is noted that, where used, "further," "preferably," "still further," and "more preferably" each introduce an alternative embodiment built on the preceding embodiment; the content following such a term combines with the preceding embodiment to form a complete alternative embodiment. Several "further," "preferably," "still further," or "more preferably" passages following the same embodiment may be combined in any manner to form additional embodiments.
The invention is described in detail below with reference to the figures and specific embodiments. It is noted that the aspects described below in connection with the figures and the specific embodiments are only exemplary and should not be construed as imposing any limitation on the scope of the present invention.
According to an aspect of the present invention, there is provided a lane line identification method for identifying a lane line in a vehicle advancing direction.
In one embodiment, as shown in FIG. 1, the lane line identification method 100 includes steps S110 to S140.
Step S110 is: an image of a road in a forward direction of a vehicle is acquired.
A vehicle is typically equipped with an imaging device, such as a driving recorder, an onboard camera, or another camera. Images captured in real time can be acquired from an imaging device arranged on the vehicle. For example, when the vehicle is driving normally, an image captured by a device facing the front of the vehicle can be acquired as the road image in the forward direction; when the vehicle is reversing, an image captured by a device facing the rear of the vehicle can be acquired as the road image in the forward direction.
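By way of illustration only, the sketch below shows how such a frame might be grabbed with the OpenCV library; the device index, the error handling, and the variable names are assumptions for the example rather than part of the method.

```python
# Minimal sketch: grab one frame from an on-vehicle camera (device index
# 0 is a placeholder for, e.g., a forward-facing driving recorder).
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("camera not available")
ok, road_image = cap.read()      # BGR frame in the vehicle's forward direction
cap.release()
if not ok:
    raise RuntimeError("failed to read a frame")
```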
Step S120 is: preprocessing the road image to obtain a bird's-eye view image corresponding to the road image.
Image pre-processing is a preliminary step of image processing and may generally include conventional steps such as denoising and/or image format conversion.
Preferably, in one embodiment, as shown in FIG. 2, step S120 may include steps S210-S230.
Wherein, step S210 includes: denoising the road image to obtain a denoised road image.
Since real-world digital images are often corrupted during digitization and transmission by noise from the imaging equipment and the external environment, the road images actually acquired are generally noisy. Image denoising is a conventional technique in the art: for example, a neighborhood-averaging mean filter, a geometric mean filter, a harmonic mean filter, an inverse harmonic mean filter, an adaptive filter, or another existing or future filter may be used to denoise the road image; the details are not repeated here.
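As a hedged sketch, any of the filters above could be substituted; the OpenCV example below picks a Gaussian blur, with the kernel size and the placeholder file path as assumptions to be tuned per camera.

```python
# Sketch: denoise the acquired road image. "road.jpg" is a placeholder
# for a frame captured as in the previous sketch.
import cv2

road_image = cv2.imread("road.jpg")
denoised = cv2.GaussianBlur(road_image, (5, 5), 0)  # smooths sensor/environment noise
# denoised = cv2.medianBlur(road_image, 5)          # alternative for impulse noise
```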
Step S220 includes: converting the denoised road image into a grayscale map.
A grayscale map, also called a gray-level map, divides the range between white and black logarithmically into a number of levels called gray levels, commonly 256. An image represented by gray levels is a grayscale image.
The road image can be converted into a grayscale map using a floating-point algorithm, an integer method, a bit-shift method, an averaging method, a green-channel-only method, or another existing or future method.
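Continuing the sketch, cv2.cvtColor applies the standard weighted average Gray = 0.299R + 0.587G + 0.114B (the floating-point method above); the commented variant is one reading of the green-channel-only method.

```python
# Sketch: grayscale conversion of the denoised image from the previous sketch.
import cv2

gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)  # Gray = 0.299 R + 0.587 G + 0.114 B
# gray = denoised[:, :, 1].copy()                  # green-channel-only variant
```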
Step S230 includes: performing inverse perspective transformation on the grayscale map to convert it into a bird's-eye view image.
A perspective transformation rotates the projection plane (perspective plane) about its trace line (the perspective axis) by a certain angle according to the law of perspective rotation, keeping the perspective center, image point, and target point collinear while leaving the projective geometry on the projection plane unchanged. The inverse perspective transform is the inverse of this mapping. Specifically, the coordinate correspondence between the world coordinate system and the image coordinate system can be obtained from the conversion relationships among the camera's coordinate systems during imaging, and this correspondence is used to transform the acquired road image into a bird's-eye view image.
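One common realization, sketched below under stated assumptions, computes a homography from four point correspondences and warps the image; the source trapezoid and destination rectangle are placeholder coordinates, whereas the method derives the correspondence from camera calibration.

```python
# Sketch: inverse perspective mapping of the grayscale image to a
# bird's-eye view. All coordinates are illustrative placeholders.
import cv2
import numpy as np

src = np.float32([[420, 400], [860, 400], [1180, 700], [100, 700]])  # road trapezoid in the image
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])       # rectangle in the bird's-eye view

M = cv2.getPerspectiveTransform(src, dst)             # road image -> bird's-eye projection matrix
birdseye = cv2.warpPerspective(gray, M, (1280, 720))  # bird's-eye view image
```

The optional scaling described in the next paragraph would amount to a cv2.resize of the grayscale image before the warp.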
Optionally, in another embodiment, step S120 may further include a step of scaling the grayscale map. Understandably, the larger the image, the greater the computational load, so the grayscale map can be scaled down to reduce computation; the scaling factor can be chosen according to the accuracy requirement of lane line identification. Step S230 is then replaced with: performing inverse perspective transformation on the scaled grayscale map to convert it into a bird's-eye view image.
Preferably, by the principles of projective geometry, parallel straight lines in real space appear under perspective distortion to intersect at a point at infinity, and the projection of that intersection onto the imaging plane is called the vanishing point. When a group of real-world parallel lines is parallel to the imaging plane, the vanishing point lies infinitely far from the imaging plane; when the group is not parallel to the imaging plane, the vanishing point lies at a finite distance and may even fall within the image area.
Based on these properties, an actual lane line and any other straight lines parallel to it point toward the same vanishing point, so the region of the bird's-eye view image where real-space parallel lines appear can be delimited from the position of that vanishing point.
Then, in a more preferred embodiment, step S120 may further include a step of delimiting the region where the lane lines are located in the bird's-eye view image, i.e., the region of interest (ROI) of the bird's-eye view image.
Specifically, as shown in fig. 3, the step of delimiting the ROI of the bird's-eye view image may be refined into steps S310 to S320.
Wherein, step S310 is: determining an ROI in the road image and a projection matrix from the road image to the bird's-eye view image based on the internal parameters of the imaging device that captured the road image.
The internal parameters of the imaging device may include focal length, optical center, mounting height, pitch angle, yaw angle, captured image size, and so on. These parameters are used to determine the region where the lane lines lie in the road image, i.e., the ROI in the road image. The projection matrix from the road image to the bird's-eye view image is determined using the imaging device's coordinates in the world coordinate system, its position relative to the image coordinate system, and the corresponding coordinates in the bird's-eye view image.
Step S320 is: calculating, based on the projection matrix, the region in the bird's-eye view image that corresponds to the ROI in the road image.
It can be understood that the ROI of the bird's-eye view image is the region corresponding to the ROI of the road image. The ROI in the road image can be represented by a quadrilateral, so the ROI in the bird's-eye view image can be determined by finding the points corresponding to the quadrilateral's four end points. That is, the projection matrix from the road image to the bird's-eye view image maps the four end points of the road-image ROI into the bird's-eye view, and the quadrilateral enclosed by the four mapped end points is the ROI of the bird's-eye view image.
Once the ROI of the bird's-eye view image is determined, the subsequent lane line identification steps can be performed within that region only, which clearly reduces the computational load further.
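As a sketch of this mapping, the projection matrix M from the earlier example can transform the ROI's corner points directly; the road-image corners below are placeholders where the method would use values derived from the imaging device's internal parameters.

```python
# Sketch: map the road-image ROI quadrilateral into the bird's-eye view
# and crop to it so later steps run on the ROI only.
import cv2
import numpy as np

roi_corners = np.float32([[[400, 420], [880, 420], [1200, 700], [80, 700]]])
bev_corners = cv2.perspectiveTransform(roi_corners, M)   # corresponding end points
x, y, w, h = cv2.boundingRect(bev_corners.reshape(-1, 2).astype(np.int32))
roi = birdseye[y:y + h, x:x + w]
```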
Further, step S130 is: detecting all contour lines extending in the road direction in the bird's-eye view image based on the shape features of the lane lines, to form a contour line set.
It will be appreciated that lane lines generally extend along the direction of travel of the road: on a straight road they are parallel to the road edges, and at turns they show corresponding turning features while generally remaining parallel. Therefore, the contours of lane lines can be detected by combining the road's direction of extension with the shape features of lane lines extending along it.
In one embodiment, as shown in FIG. 4, step S130 may include steps S410-S450.
Wherein, step S410 is: calculating local thresholds for the ROI and performing image segmentation on the ROI of the bird's-eye view image with those thresholds to obtain a first ROI.
It will be appreciated that a local threshold can highlight the pixel characteristics of a local region. Specifically, a Gaussian filtering method, a local mean algorithm, a median algorithm, or another method for computing local image thresholds can be used to calculate local thresholds within the ROI of the bird's-eye view image.
Preferably, a top-hat operation may be performed on the ROI to obtain edge information within it; the ROI may then be divided into several local regions using that edge information, and the local threshold of each region calculated, so that the pixel attributes of each region are determined separately.
The top-hat operation subtracts the result of a morphological opening from the original image, leaving the regions that are brighter than their surroundings. Because a lane line changes color abruptly relative to the road surface, the top-hat operation highlights the edge information between lane line and road surface. Each local region obtained by dividing along this edge information may then be purely a lane line region or purely a road surface region. Lane lines are generally marked clearly for drivers and pedestrians, for example in high-brightness white or yellow, so after the local thresholds of the ROI are calculated, the local threshold of each region can represent that region's brightness information.
Once the local thresholds are determined, the ROI of the bird's-eye view image can be segmented with them to obtain the first ROI. Specifically, pixels whose brightness reaches the local threshold in each local region are set to gray level 255, and pixels below it are set to 0.
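A hedged sketch of this segmentation follows: cv2.adaptiveThreshold stands in for the per-region local thresholds described above, and the structuring-element size, block size, and offset are tuning assumptions.

```python
# Sketch: top-hat to keep regions brighter than their surroundings,
# then a local (adaptive) threshold to produce the first ROI.
import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
tophat = cv2.morphologyEx(roi, cv2.MORPH_TOPHAT, kernel)

first_roi = cv2.adaptiveThreshold(tophat, 255,
                                  cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 25, -5)  # 255 where the local threshold is reached
```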
Step S420 is: calculating an overall threshold for the ROI and performing image segmentation on the ROI with it to obtain a second ROI.
Specifically, the overall threshold of the ROI in the bird's-eye view image may be calculated using Otsu's algorithm, a maximum-entropy algorithm, or another method for computing a global image threshold.
After the overall threshold is calculated, the ROI of the bird's-eye view image can be segmented with it to obtain a second ROI. Specifically, pixels in the entire ROI whose brightness reaches the overall threshold are set to gray level 255, and the remaining pixels to 0.
Step S430 is: performing an AND operation on the first ROI and the second ROI to obtain an ROI image corresponding to the bird's-eye view image.
Preferably, the AND-combined ROI image filters out most interference from non-lane-line features.
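The sketch below pairs Otsu's global threshold with the pixel-wise AND; variable names continue the earlier sketches.

```python
# Sketch: global (Otsu) segmentation for the second ROI, then AND the
# two segmentations so only pixels passing both survive.
import cv2

_, second_roi = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
roi_image = cv2.bitwise_and(first_roi, second_roi)  # ROI image for contour search
```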
Step S440 is: contour finding is performed on the ROI image to extract a contour extending in the road direction.
Specifically, contours conforming to the shape features of lane lines can first be found in the ROI image; a contour that conforms to those shape features and whose angle with the Y axis of the bird's-eye view image does not exceed a preset threshold is then determined to be a contour extending along the road direction.
It will be appreciated that lane lines are generally thick solid or dashed lines, i.e., elongated, so their contours can be found from this shape characteristic. For example, a rectangular contour with a width of no more than 5 pixels and a length of no less than 10 pixels may be taken as conforming to the lane line shape. The specific width and length thresholds can be set according to road characteristics and are not limited to this example.
Further, since a vehicle must travel along the lane lines, the angle between a lane line and the vehicle's route generally shows no large deviation. The imaging device capturing the road image is mounted on the vehicle, and its imaging direction (generally parallel to the Y axis) follows the vehicle's route, so the contours can be screened further by their angle relative to the Y axis of the bird's-eye view image. For example, a contour that conforms to the lane line shape features and forms an angle of no more than 60° with the Y axis may be determined to be a contour extending along the road direction, i.e., a candidate lane line contour. The specific angle threshold can likewise be set according to road characteristics and is not limited to this example.
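A sketch of this screening follows, using cv2.minAreaRect for the width, length, and angle tests; the 5-pixel, 10-pixel, and 60° limits are the examples from the text, and the angle normalization is an assumption since minAreaRect's angle convention differs across OpenCV versions.

```python
# Sketch: keep elongated, roughly Y-aligned contours as lane line candidates.
import cv2

contours, _ = cv2.findContours(roi_image, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x signature
candidates = []
for c in contours:
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)
    if min(w, h) <= 5 and max(w, h) >= 10:                # lane-like shape feature
        tilt = angle if h >= w else 90 - angle            # approx. angle from the Y axis
        if abs(tilt) <= 60:
            candidates.append(c)
```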
Further, step S450 is: fitting each contour extending along the road direction to obtain a corresponding contour line, all of which form the contour line set.
Specifically, a fitting method can be chosen according to the required fitting accuracy. Since lane line geometry is simple, involving at most gentle curvature, quadratic fitting can be used to fit the contour lines.
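As a sketch of the fitting, each candidate contour can be fitted with a second-order polynomial, taking x as a function of y since lane lines run roughly along the Y axis of the bird's-eye view.

```python
# Sketch: quadratic fit of each candidate contour; contour_set collects
# (coefficients, points) pairs for the merging step.
import numpy as np

contour_set = []
for c in candidates:
    pts = c.reshape(-1, 2).astype(np.float64)
    coeffs = np.polyfit(pts[:, 1], pts[:, 0], 2)  # x = a*y**2 + b*y + c
    contour_set.append((coeffs, pts))
```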
Step S140 is: merging the contour lines in the contour line set based on the similarity of their slopes to determine the lane lines in the bird's-eye view image.
It is understood that a common lane line may consist of two parallel dashed and/or solid lines, so several contour lines in the set may belong to the same lane line; the contour lines therefore need to be merged to obtain the actual lane lines.
Specifically, in one embodiment, as shown in FIG. 5, step S140 may include steps S510-S540.
Wherein, step S510 is: fitting each contour line in the contour line set pairwise with each of the remaining contour lines to obtain a third contour line corresponding to the two fitted contour lines.
Theoretically, if two contour lines belong to the same lane line, they can be merged into a single contour line whose line parameters agree with those of both originals. Whether two contour lines belong to the same lane line can therefore be judged from the error between the merged contour line's parameters and the parameters of the two contour lines.
Specifically, since a straight line is determined by two points, a contour line can be characterized by its two end points. As shown in fig. 6, step S510 may include steps S511 to S512.
Wherein, step S511 is: selecting the two end points of each of the two contour lines being fitted.
Step S512 is: fitting a contour line through the four end points of the two contour lines, as the third contour line corresponding to them.
Each contour line has two end points, so the two contour lines provide four end points; a third contour line passing through these four points can be fitted by quadratic fitting.
Step S520 is: judging that the two fitted contour lines are contour lines of the same lane line, in response to the errors between the slope of the third contour line and the slopes of the two fitted contour lines both being smaller than a preset threshold.
In actual judgment, allowing for errors introduced by data processing and fitting, the line parameters may be permitted to deviate within a certain range, and an error threshold can be set accordingly. Line parameters can generally be characterized by slope. Therefore, when the errors between the slope of the third contour line and the slopes of the two fitted contour lines are both below the preset threshold, the two fitted contour lines can be judged to be contours of the same lane line.
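A minimal sketch of this merge test follows; taking a contour's first and last points as its end points, evaluating slopes at a common y value, and the error threshold eps are all illustrative assumptions.

```python
# Sketch: fit a third quadratic through the four end points of two
# contour lines and compare slopes (dx/dy = 2*a*y + b) at a common y.
import numpy as np

def same_lane_line(pts1, pts2, eps=0.1):
    ends = np.vstack([pts1[[0, -1]], pts2[[0, -1]]])   # four end points
    a3, b3, _ = np.polyfit(ends[:, 1], ends[:, 0], 2)  # third contour line
    y_mid = ends[:, 1].mean()
    slope3 = 2 * a3 * y_mid + b3
    for pts in (pts1, pts2):
        a, b, _ = np.polyfit(pts[:, 1], pts[:, 0], 2)
        if abs(2 * a * y_mid + b - slope3) >= eps:     # slope error too large
            return False
    return True
```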
Step S530 is: merging the contour lines belonging to the same lane line to obtain a plurality of final contour lines.
It can be understood that whenever two contour lines can be fitted into one, the two fitted contour lines are deleted from the contour line set and the fitted third contour line is merged into it, forming a new contour line set. Steps S510 to S530 are then repeated on the new set until no mergeable contour lines remain. The non-mergeable contour lines that remain constitute the final contour lines.
Step S540 is: determining the longest final contour line, two mutually parallel final contour lines, or the final contour line merged from the most contour lines as the lane line.
Since a lane line may be a long solid line, the longest final contour line may be a lane line. Since lane markings are generally formed by two mutually parallel lines, two mutually parallel final contour lines may form a lane line. And since a lane line may be dashed (i.e., a series of long segments), the final contour line merged from the most contour lines may be a lane line.
It is understood that the determination criteria of other lane lines may be added based on other characteristics of the lane lines, and the above examples are not intended to be limiting.
Furthermore, once the lane lines in the road image are identified, a line-pressing alarm can be raised based on the vehicle's position, reducing line-pressing violations.
It is understood that the line pressing alarm can be performed in the form of voice, icon, text, light or any combination thereof.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood by one skilled in the art.
According to another aspect of the invention, a lane line identification apparatus is also provided.
In one embodiment, as shown in fig. 7, the lane line identification apparatus 700 includes a memory 710 and a processor 720.
The memory 710 is used for storing computer programs.
The processor 720 is connected to the memory 710 for executing the computer program on the memory 710, and the steps of the lane line identification method described in any of the above embodiments are implemented when the processor 720 executes the computer program.
According to a further aspect of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed, carries out the steps of the lane line identification method described in any of the embodiments above.
Those of skill in the art would understand that information, signals, and data may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits (bits), symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. It is to be understood that the scope of the invention is to be defined by the appended claims and not by the specific constructions and components of the embodiments illustrated above. Those skilled in the art can make various changes and modifications to the embodiments within the spirit and scope of the present invention, and these changes and modifications also fall within the scope of the present invention.

Claims (13)

1. A lane line identification method, comprising:
acquiring a road image in the advancing direction of a vehicle;
preprocessing the road image to obtain a bird's-eye view image corresponding to the road image;
detecting a contour line extending in the road direction in the bird's-eye view image based on shape features of a lane line to form a contour line set; and
and merging the contour lines in the contour line set based on the similarity of their slopes, to determine the lane lines in the bird's-eye view image.
2. The lane line identification method according to claim 1, wherein the preprocessing the road image to obtain a bird's-eye view image corresponding to the road image comprises:
denoising the road image to obtain a denoised road image;
converting the denoised road image into a grayscale map; and
performing inverse perspective transformation on the grayscale map to convert it into a bird's-eye view image.
3. The lane line identification method of claim 2, further comprising:
scaling the grayscale map to reduce the amount of computation; and
the performing an inverse perspective transformation on the grayscale image to convert the grayscale image into a bird's-eye view image includes:
performing inverse perspective transformation on the scaled grayscale map to convert it into a bird's-eye view image.
4. The lane line identification method of claim 2, further comprising:
determining an ROI (region of interest) in the road image and a projection matrix from the road image to the bird's-eye view image, based on internal parameters of the imaging device of the road image; and
calculating, based on the projection matrix, the region in the bird's-eye view image corresponding to the ROI in the road image, wherein that corresponding region is the ROI of the bird's-eye view image.
5. The lane line identification method according to claim 4, wherein the detecting, based on the shape features of the lane line, of contour lines extending in the road direction in the bird's-eye view image to form a contour line set comprises:
calculating a local threshold of the ROI area and carrying out image segmentation on the ROI area by using the local threshold to obtain a first ROI area;
calculating an overall threshold value of the ROI area and carrying out image segmentation on the ROI area by using the overall threshold value to obtain a second ROI area;
performing an AND operation on the first ROI area and the second ROI area to obtain an ROI image corresponding to the bird's-eye view image;
performing contour search on the ROI image to extract a contour extending along the road direction; and
and fitting each contour extending along the road direction to obtain a corresponding contour line, all of which form the contour line set.
6. The lane line identification method according to claim 5, wherein the calculating a local threshold value of the bird's-eye view image and performing image segmentation on the ROI region using the local threshold value to obtain a first ROI region comprises:
performing a top-hat operation on the ROI of the bird's-eye view image to acquire edge information in the ROI;
calculating the local threshold based on the edge information; and
and performing image segmentation on the ROI using the local threshold to obtain the first ROI.
7. The lane line identification method of claim 5, wherein the performing contour search on the ROI image to extract a contour extending in a road direction comprises:
finding out the outline which is in line with the shape characteristics of the lane line in the ROI image based on the shape characteristics of the lane line; and
determining a contour that conforms to the shape features of the lane line and whose angle with the Y axis of the bird's-eye view image does not exceed a preset threshold as a contour extending along the road direction.
8. The lane line recognition method of claim 7, wherein the finding, based on the shape features of the lane line, of contours conforming to those features in the ROI image comprises:
finding rectangular contours with a width of no more than 5 pixel values and a length of no less than 10 pixel values as contours conforming to the shape features of the lane line.
9. The method of claim 1, wherein the merging the contour lines in the set of contour lines to determine the lane line based on the degree of similarity in slope between the contour lines comprises:
fitting each contour line in the contour line set pairwise with each of the remaining contour lines to obtain a third contour line corresponding to the two fitted contour lines;
judging that the two fitted contour lines are contour lines of the same lane line in response to the errors between the slope of the third contour line and the slopes of the two fitted contour lines both being smaller than a preset threshold;
merging the contour lines belonging to the same lane line to obtain a plurality of final contour lines; and
determining the longest final contour line, two mutually parallel final contour lines, or the final contour line merged from the most contour lines as the lane line.
10. The lane line identification method of claim 9, wherein fitting each contour line in the set of contour lines with other remaining contour lines two by two to obtain a third contour line corresponding to the two fitted contour lines comprises:
selecting the two end points of each of the two fitted contour lines; and
fitting a contour line through the four end points of the two fitted contour lines as the third contour line corresponding to the two fitted contour lines.
11. The lane line identification method of claim 1, further comprising:
issuing a line-pressing alarm based on the lane line in the bird's-eye view image and the position of the vehicle.
12. Lane line identification device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor is adapted to carry out the steps of the lane line identification method according to any of claims 1 to 11 when executing the computer program stored on the memory.
13. A computer storage medium having a computer program stored thereon, wherein the computer program when executed implements the steps of the lane line identification method of any of claims 1-11.
CN202110009714.9A 2021-01-05 2021-01-05 Lane line identification method and device Pending CN112699825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110009714.9A CN112699825A (en) 2021-01-05 2021-01-05 Lane line identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110009714.9A CN112699825A (en) 2021-01-05 2021-01-05 Lane line identification method and device

Publications (1)

Publication Number Publication Date
CN112699825A 2021-04-23

Family

ID=75514805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110009714.9A Pending CN112699825A (en) 2021-01-05 2021-01-05 Lane line identification method and device

Country Status (1)

Country Link
CN (1) CN112699825A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009026164A (en) * 2007-07-23 2009-02-05 Alpine Electronics Inc Lane recognition apparatus and navigation apparatus
US20100284569A1 (en) * 2008-01-11 2010-11-11 Kazuyuki Sakurai Lane recognition system, lane recognition method, and lane recognition program
JP2011170596A (en) * 2010-02-18 2011-09-01 Renesas Electronics Corp System, method and program for recognizing compartment line
CN103116748A (en) * 2013-03-11 2013-05-22 清华大学 Method and system for identifying illegal driving behavior based on road signs
CN105740796A (en) * 2016-01-27 2016-07-06 大连楼兰科技股份有限公司 Grey level histogram based post-perspective transformation lane line image binarization method
CN106778593A (en) * 2016-12-11 2017-05-31 北京联合大学 A kind of track level localization method based on the fusion of many surface marks
CN107679520A (en) * 2017-10-30 2018-02-09 湖南大学 A kind of lane line visible detection method suitable for complex condition
CN109785291A (en) * 2018-12-20 2019-05-21 南京莱斯电子设备有限公司 A kind of lane line self-adapting detecting method
CN110929655A (en) * 2019-11-27 2020-03-27 厦门金龙联合汽车工业有限公司 Lane line identification method in driving process, terminal device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117100243A (en) * 2023-10-23 2023-11-24 中国科学院自动化研究所 Magnetic particle imaging system, method and equipment based on system matrix pixel compression
CN117100243B (en) * 2023-10-23 2024-02-20 中国科学院自动化研究所 Magnetic particle imaging system, method and equipment based on system matrix pixel compression


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination